diff --git a/04_documentation/01258B.pdf b/04_documentation/01258B.pdf
new file mode 100644
index 0000000..fcc3849
Binary files /dev/null and b/04_documentation/01258B.pdf differ
diff --git a/04_documentation/ausound/sound-au.com/5px.gif b/04_documentation/ausound/sound-au.com/5px.gif
new file mode 100644
index 0000000..50b18bf
Binary files /dev/null and b/04_documentation/ausound/sound-au.com/5px.gif differ
diff --git a/04_documentation/ausound/sound-au.com/60px.gif b/04_documentation/ausound/sound-au.com/60px.gif
new file mode 100644
index 0000000..503600d
Binary files /dev/null and b/04_documentation/ausound/sound-au.com/60px.gif differ
diff --git a/04_documentation/ausound/sound-au.com/7053705.pdf b/04_documentation/ausound/sound-au.com/7053705.pdf
new file mode 100644
index 0000000..0c8184c
Binary files /dev/null and b/04_documentation/ausound/sound-au.com/7053705.pdf differ
diff --git a/04_documentation/ausound/sound-au.com/DIY_RTA_spreadsheet.xls b/04_documentation/ausound/sound-au.com/DIY_RTA_spreadsheet.xls
new file mode 100644
index 0000000..56d33a0
Binary files /dev/null and b/04_documentation/ausound/sound-au.com/DIY_RTA_spreadsheet.xls differ
diff --git a/04_documentation/ausound/sound-au.com/a.gif b/04_documentation/ausound/sound-au.com/a.gif
new file mode 100644
index 0000000..f5d3336
Binary files /dev/null and b/04_documentation/ausound/sound-au.com/a.gif differ
diff --git a/04_documentation/ausound/sound-au.com/a1.gif b/04_documentation/ausound/sound-au.com/a1.gif
new file mode 100644
index 0000000..096d23c
Binary files /dev/null and b/04_documentation/ausound/sound-au.com/a1.gif differ
diff --git a/04_documentation/ausound/sound-au.com/ab-f1-1.gif b/04_documentation/ausound/sound-au.com/ab-f1-1.gif
new file mode 100644
index 0000000..50bfe7e
Binary files /dev/null and b/04_documentation/ausound/sound-au.com/ab-f1-1.gif differ
diff --git a/04_documentation/ausound/sound-au.com/ab-f1-2.gif b/04_documentation/ausound/sound-au.com/ab-f1-2.gif
new file mode 100644
index 0000000..60cd719
Binary files /dev/null and b/04_documentation/ausound/sound-au.com/ab-f1-2.gif differ
diff --git a/04_documentation/ausound/sound-au.com/ab-f1-3.gif b/04_documentation/ausound/sound-au.com/ab-f1-3.gif
new file mode 100644
index 0000000..93aaf19
Binary files /dev/null and b/04_documentation/ausound/sound-au.com/ab-f1-3.gif differ
diff --git a/04_documentation/ausound/sound-au.com/ab-f1-4.gif b/04_documentation/ausound/sound-au.com/ab-f1-4.gif
new file mode 100644
index 0000000..f70821d
Binary files /dev/null and b/04_documentation/ausound/sound-au.com/ab-f1-4.gif differ
diff --git a/04_documentation/ausound/sound-au.com/ab-f1-5.gif b/04_documentation/ausound/sound-au.com/ab-f1-5.gif
new file mode 100644
index 0000000..3d9e3dc
Binary files /dev/null and b/04_documentation/ausound/sound-au.com/ab-f1-5.gif differ
diff --git a/04_documentation/ausound/sound-au.com/ab-f2-1.gif b/04_documentation/ausound/sound-au.com/ab-f2-1.gif
new file mode 100644
index 0000000..a440e67
Binary files /dev/null and b/04_documentation/ausound/sound-au.com/ab-f2-1.gif differ
diff --git a/04_documentation/ausound/sound-au.com/ab-f2-2.gif b/04_documentation/ausound/sound-au.com/ab-f2-2.gif
new file mode 100644
index 0000000..9b2647f
Binary files /dev/null and b/04_documentation/ausound/sound-au.com/ab-f2-2.gif differ
diff --git a/04_documentation/ausound/sound-au.com/ab-f2-3.gif b/04_documentation/ausound/sound-au.com/ab-f2-3.gif
new file mode 100644
index 0000000..9373da2
Binary files /dev/null and b/04_documentation/ausound/sound-au.com/ab-f2-3.gif differ
diff --git a/04_documentation/ausound/sound-au.com/ab-f2-4.gif b/04_documentation/ausound/sound-au.com/ab-f2-4.gif
new file mode 100644
index 0000000..42ea5ae
Binary files /dev/null and b/04_documentation/ausound/sound-au.com/ab-f2-4.gif differ
diff --git a/04_documentation/ausound/sound-au.com/ab-f2-5.gif b/04_documentation/ausound/sound-au.com/ab-f2-5.gif
new file mode 100644
index 0000000..29fcec9
Binary files /dev/null and b/04_documentation/ausound/sound-au.com/ab-f2-5.gif differ
diff --git a/04_documentation/ausound/sound-au.com/ab-f2-6.gif b/04_documentation/ausound/sound-au.com/ab-f2-6.gif
new file mode 100644
index 0000000..022aa37
Binary files /dev/null and b/04_documentation/ausound/sound-au.com/ab-f2-6.gif differ
diff --git a/04_documentation/ausound/sound-au.com/ab-f2-7.gif b/04_documentation/ausound/sound-au.com/ab-f2-7.gif
new file mode 100644
index 0000000..1e80bac
Binary files /dev/null and b/04_documentation/ausound/sound-au.com/ab-f2-7.gif differ
diff --git a/04_documentation/ausound/sound-au.com/ab-f3-1.gif b/04_documentation/ausound/sound-au.com/ab-f3-1.gif
new file mode 100644
index 0000000..c62104f
Binary files /dev/null and b/04_documentation/ausound/sound-au.com/ab-f3-1.gif differ
diff --git a/04_documentation/ausound/sound-au.com/ab-f3-2.gif b/04_documentation/ausound/sound-au.com/ab-f3-2.gif
new file mode 100644
index 0000000..8310156
Binary files /dev/null and b/04_documentation/ausound/sound-au.com/ab-f3-2.gif differ
diff --git a/04_documentation/ausound/sound-au.com/ab-f3-3.gif b/04_documentation/ausound/sound-au.com/ab-f3-3.gif
new file mode 100644
index 0000000..5b5b2e5
Binary files /dev/null and b/04_documentation/ausound/sound-au.com/ab-f3-3.gif differ
diff --git a/04_documentation/ausound/sound-au.com/ab-f3-4.gif b/04_documentation/ausound/sound-au.com/ab-f3-4.gif
new file mode 100644
index 0000000..1892c09
Binary files /dev/null and b/04_documentation/ausound/sound-au.com/ab-f3-4.gif differ
diff --git a/04_documentation/ausound/sound-au.com/ab-f3-5.gif b/04_documentation/ausound/sound-au.com/ab-f3-5.gif
new file mode 100644
index 0000000..32f43b5
Binary files /dev/null and b/04_documentation/ausound/sound-au.com/ab-f3-5.gif differ
diff --git a/04_documentation/ausound/sound-au.com/ab-f4-1.gif b/04_documentation/ausound/sound-au.com/ab-f4-1.gif
new file mode 100644
index 0000000..223051a
Binary files /dev/null and b/04_documentation/ausound/sound-au.com/ab-f4-1.gif differ
diff --git a/04_documentation/ausound/sound-au.com/ab-f5-1.gif b/04_documentation/ausound/sound-au.com/ab-f5-1.gif
new file mode 100644
index 0000000..5656a11
Binary files /dev/null and b/04_documentation/ausound/sound-au.com/ab-f5-1.gif differ
diff --git a/04_documentation/ausound/sound-au.com/ab-f5-2.gif b/04_documentation/ausound/sound-au.com/ab-f5-2.gif
new file mode 100644
index 0000000..bd8f833
Binary files /dev/null and b/04_documentation/ausound/sound-au.com/ab-f5-2.gif differ
diff --git a/04_documentation/ausound/sound-au.com/ab-f5-3.gif b/04_documentation/ausound/sound-au.com/ab-f5-3.gif
new file mode 100644
index 0000000..edf55ae
Binary files /dev/null and b/04_documentation/ausound/sound-au.com/ab-f5-3.gif differ
diff --git a/04_documentation/ausound/sound-au.com/ab-f5-4.gif b/04_documentation/ausound/sound-au.com/ab-f5-4.gif
new file mode 100644
index 0000000..0f255ae
Binary files /dev/null and b/04_documentation/ausound/sound-au.com/ab-f5-4.gif differ
diff --git a/04_documentation/ausound/sound-au.com/ab-f5-5.gif b/04_documentation/ausound/sound-au.com/ab-f5-5.gif
new file mode 100644
index 0000000..fce5937
Binary files /dev/null and b/04_documentation/ausound/sound-au.com/ab-f5-5.gif differ
diff --git a/04_documentation/ausound/sound-au.com/ab-f5-6.gif b/04_documentation/ausound/sound-au.com/ab-f5-6.gif
new file mode 100644
index 0000000..02ba1d7
Binary files /dev/null and b/04_documentation/ausound/sound-au.com/ab-f5-6.gif differ
diff --git a/04_documentation/ausound/sound-au.com/ab-f5-7.gif b/04_documentation/ausound/sound-au.com/ab-f5-7.gif
new file mode 100644
index 0000000..69530a0
Binary files /dev/null and b/04_documentation/ausound/sound-au.com/ab-f5-7.gif differ
diff --git a/04_documentation/ausound/sound-au.com/about.htm b/04_documentation/ausound/sound-au.com/about.htm
new file mode 100644
index 0000000..6cd7dd3
--- /dev/null
+++ b/04_documentation/ausound/sound-au.com/about.htm
@@ -0,0 +1,92 @@
Elliott Sound Products | About The Audio Pages
This site was initially created some time in late 1998, and has progressed from a single page (a somewhat shorter version of the bi-amping article) to what you see today. I have gradually built up the content, and the overall site 'map' has changed several times as I have tried to incorporate all the new stuff in a reasonably sensible manner.
As the site continues to grow, you will see more changes, but I will always keep the user interface as simple as possible to maximise loading speed. This is one reason you won't see fancy mapped graphics, frames, flash animations or other frills that might make the site look really cool, but at the expense of download times. Likewise, I never have pop-ups that ask you to register before you can view the site contents, nor pop-up requests/demands to disable your ad blocker. While I'd prefer that you disable it, that's not (and never will be) a requirement.
The overall philosophy of the site has never changed - keep to the facts, and stay away from the constant efforts of the subjectivist camp to ever 'improve' on what they have - almost always with expensive 'tweaks' whose performance cannot be measured, or can only be heard by people with 'finely tuned ears'. Music is to listen to. Recordings are rarely perfect, and the concept of reproduction ever matching a live performance is a myth. Listen to the music, not the equipment.
Unfortunately, many people respond more readily to rhetoric and 'herd opinion' than to facts and logic, and there are forces (hi-fi reviewers, the market in general, and the political apparatus) that see it as their business to take advantage of this tendency rather than to rectify it. My philosophy is exactly the opposite - I suggest that 'herd opinion' be eschewed, and I always try to provide information based on verifiable engineering principles.
Good equipment is always something to strive for, since your enjoyment is greater when it sounds good. I love to experiment, and many of the designs are experimental - in some cases just to prove a point (the DoZ is a perfect example). Sometimes these experiments backfire (the DoZ is a perfect example!), and I get a whole bunch of e-mail telling me how great it sounds.

How much of the great sound is purely the result of the reader having built it himself/herself? I honestly have no idea, but it doesn't matter. If people can get double the enjoyment from building and then listening to equipment, then so much the better. In the long run it is all about enjoyment; of music, of making something, and of life.

May you all enjoy building my projects as much as I enjoy bringing them to you.
I have been asked many times about the way I create the circuit diagrams (or schematics, if you insist), and over the time the pages have been running this has changed. I currently use either Protel or (mainly) SIMetrix to draw the diagrams, although I have used other methods in the past. These are simply captured and pasted into the XP version of Paintbrush (which runs fine on later versions of Windows, somewhat surprisingly) for touch-ups, and the final image is then exported as a GIF file. This method is a little time consuming, but I have found that the images are very clear, and I get consistent results. All schematics on the ESP site have unique features that allow me to recognise them even after they have been stolen and re-published elsewhere.
The content of all the articles and projects is entirely my own unless otherwise stated. This extends to the philosophy of the site itself, which is mine and mine alone. This (of course) does not mean that others will not have similar ideas (many do), nor that I automatically disagree with those who hold a slightly different view on the same subject. I have been corrected many, many times - for anything from spelling mistakes to errors in diagrams (I have even managed to get a few electrolytics backwards - oops!), and various people have assisted with additional information on a number of occasions.

I do not (knowingly) steal the ideas, drawings or other content of others, and any information from others is reproduced with permission, with full credit given to the original author. Contributions are encouraged, as I am determined to make the best audio web site around, and I cannot do it alone.

There is a very small number of images on these pages that seem to be in the public domain, and I have used some of these where appropriate. If any reader out there sees their image on my pages and is offended that I purloined it, let me know and I will remove it.
Sometimes you see an image that is just too wonderful to ignore - the picture * here falls into that category. It was sent to me by a friend, and I am sorry to say that I know not where it came from. I just loved it on sight!

I do not use (or condone the use of) spam (the web kind or the canned variety), so you will never get bulk e-mail or cans of 'meat-like substance' from me for any reason, and that image is appropriate in its own silly way. I just wish I knew where it came from so I could thank its creator. Whoever you are - my thanks and apologies for 'borrowing' this image.

Cheers,
Created 09 Aug 2000
Elliott Sound Products | Project X
I have finally been able to add 'Project X', thanks to Phil Allison. Somehow, 'X' just seemed appropriate. Have fun.
This is probably going to be a very controversial device. Its purpose is to prove people wrong and that is very confronting. If you don't wish to have your cherished beliefs about amplifiers and audio generally challenged then do not build or use this unit.
Many of you will know about the ABX system for doing audio comparisons. No doubt it is a very fine piece of design, but out of reach for the average person. Some years ago I felt that a much simpler device would at least allow me to do comparisons on power amplifiers while the music played, in a similar way to ABX. This device was the outcome. After using it for a few seconds my attitude to audio listening tests changed forever. If you are game for a challenge, then try it yourself. It will cost you under $50 to build. It may be the best or worst fifty bucks you ever spent, depending on your attitude.

The idea is that the listener - you - sits in the 'hot seat' and concentrates on familiar music on your favourite speakers in your own lounge room, with the ability to swap power amps over without moving more than one finger. If there is even a small change in timbre or definition it should be instantly audible. The stereo image and any other factor can be checked precisely, since you can sit totally still during the switchover. Well, at least that was what was expected to happen.
The design is very simple, and I lashed mine up in a couple of hours and put it to use immediately. A couple of good quality relays, some hefty terminals and banana plugs, and a long wire finishing in a hand held push button are the ingredients. The circuit is as shown and is self-explanatory. What it does, however, is staggering.

Providing the two amps to be compared are of high quality (why would you be interested in anything else?) and of course fault free, their gains are carefully matched, and they have similar bandwidth, the device permits instant and seamless switching of the amplifier outputs to the speakers at the push of the button in the listener's hand.

Because the relays switch in a couple of milliseconds there is no audible interruption. This surprised me at first, so I tried my sine wave generator set to 100Hz and pushed the switch. About half the time I could hear a faint 'tick' sound, but this was not audible on musical programme.
Figure 1 - The A-B Switch Box Schematic
A description of the circuit is hardly needed (but I'll give a brief one anyway). The DC voltage must be matched to the relays, and 12V is suggested as the most practical. The relays should be high quality, high current types, and DPDT relays can be used with the contacts paralleled for lowest resistance. Gold flashed contacts are desirable, but standard silver contacts should be quite adequate unless there is severe atmospheric pollution in your area.
The speaker grounds for the two amps were connected together at the box. Use short, thick leads for connection to the amps. It is possible this might cause an earth loop hum with some combinations.
Warning! Never attempt to use this switching device with amplifiers that have a BTL (bridge-tied load) output arrangement. If there is a warning that neither speaker terminal may be earthed, then this describes your amplifier. If you connect it to the switching unit you will almost certainly cause severe damage to the amplifier. Both amplifiers under test must be conventional amps that have the -ve speaker terminal connected to earth (ground).
The switch (marked 'Push-On / Push-Off') is the remote switch, and needs a lead that is long enough to reach the listening position. There is no real limit to the length, even with light duty figure-8 'speaker' lead, but lengths in excess of 10 metres or so may cause some voltage loss. The LED is optional, and may be omitted. If fitted, be very careful that the LED cannot be seen from the hot seat, as it may dim slightly when the relays are energised, thus giving a visual clue - this you don't need!
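To get a feel for how much voltage a long remote lead actually loses, here is a rough calculation. The lead resistance and relay coil current are my own illustrative assumptions (light figure-8 lead of around 0.08Ω per metre per conductor, and two 12V relay coils drawing about 30mA each), not figures from the article:

```python
# Rough estimate of the voltage lost in a long remote-switch lead.
# Assumed figures: ~0.08 ohm/metre per conductor, ~60 mA total coil current.

def lead_drop(length_m, ohms_per_m=0.08, coil_current_a=0.06):
    """Voltage dropped across both conductors of the remote lead."""
    loop_resistance = 2 * length_m * ohms_per_m  # current flows out and back
    return loop_resistance * coil_current_a

for metres in (5, 10, 20):
    print(f"{metres} m lead: {lead_drop(metres):.2f} V lost from 12 V")
```

Even at 20 metres the loss under these assumptions is under a quarter of a volt, which is why the lead length is not critical until the wire gets very long or very thin.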
Notes:
When comparing amps of different power ratings, stay within the capacity of the smaller unit. Valve (vacuum tube) and transistor amps may be compared as long as the valve model has a damping factor greater than 25. Valve amps with a low damping factor (output impedance of 2Ω or more) will sound different (note: different does not mean 'better'!). Any test involving a valve amp is at your own risk! Be careful, as many valve amps don't like an open-circuit output. Amps with subsonic filters may be distinguishable from those without on some material.
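The damping-factor figures above are easy to check: damping factor is simply the load (speaker) impedance divided by the amplifier's output impedance. A minimal sketch, assuming a nominal 8Ω speaker:

```python
def damping_factor(speaker_ohms, output_ohms):
    """Damping factor as seen by the speaker."""
    return speaker_ohms / output_ohms

# A valve amp with a 2 ohm output impedance into an 8 ohm speaker:
assert damping_factor(8, 2) == 4   # well below the suggested minimum of 25

# Output impedance needed for a damping factor of 25 with an 8 ohm speaker:
max_zout = 8 / 25
print(f"Zout must be below {max_zout:.2f} ohm")
```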
Matching the gain may be difficult if the amps do not have level controls. Solder a 10kΩ multiturn trimpot to the back of each RCA plug on the amp with more level, and use it to set the gain.
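As a sketch of how much trimming such a pot provides: wired as a simple voltage divider, and ignoring source and load impedance (a deliberate simplification), the attenuation depends only on the wiper position. The numbers below are illustrative:

```python
import math

def divider_attenuation_db(wiper_fraction):
    """Attenuation of a pot wired as a simple divider, ignoring
    source and load impedance (a simplification)."""
    return -20 * math.log10(wiper_fraction)

# To pull the louder amp down by exactly 1 dB, the wiper sits at about 89%:
fraction = 10 ** (-1 / 20)
print(round(fraction, 4))                            # 0.8913
print(round(divider_attenuation_db(fraction), 3))    # 1.0 dB
```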
You must use a push-on, push-off hand held switch. Never use one that needs to be held down to make the relays change over. The switch must be one that does not change 'feel' from one position to the other (some feel slightly different depending on the latching mechanism).

After the initial surprise (shock) wears off, try the old 'stop and restart A-B test' method again. See what happens!

This is a contributed article, and the author (Phil Allison) has made this information available for the sake of audio.

I do suggest that if you want to test this technique, Phil's instructions must be followed to the letter - even the smallest variation in level can invalidate any A-B test. Comparisons between valve and transistorised amplifiers are likely to show differences, only partly due to the higher output impedance of a valve amp, and extra care is needed to balance the levels with this combination.

It is also important that there are no visual cues that might alert you that one amp or the other is in operation. To be safe, place a screen of some sort between you and the amplifiers and switch box. As Phil has stated, if amplifiers are of different power ratings, make sure that neither amp clips (distorts) at any point during the tests. Clipping is immediately audible, and is not a valid test of an amplifier's sound quality (since it has none when clipping!).

You might find it hard to get a push-on/ push-off switch that has no difference in feel between states. If this is the case, you can use the circuit shown in Project 166. This allows a momentary switch to be used, and there will be absolutely no difference in feel either way. If you decide to include an indicator LED (which is a good idea so you can verify the switching action), it must have a switch in series so it can be turned off for testing. Before you start, you might want to get someone else to press the button a (random) number of times to make sure that there's no in-built bias. This is recommended regardless of the switch used.

If you don't want to be confronted by this switch box, build one anyway. It will allow you to make comparisons between amplifiers that are otherwise impossible to do with accuracy or repeatability. You might find that there are audible differences, or you might not. Either way, it gives you the ability to know (rather than assume or imagine) whether one amplifier is different from another.
This tester can also be used to change speakers for comparisons. These are the most inaccurate of all electronic components, and there are some interesting traps that can affect the result, leading to a very wrong conclusion. I have discussed this elsewhere, but most people will be unaware that some things are decidedly counter-intuitive. If one speaker (A) has a notch (aka 'suck-out') at some frequency and you listen to it for 30 seconds or so, then switch over to another speaker (B) that has (comparatively) flat response, speaker B will sound wrong!

It will seem to the listener that speaker B has a peak at the frequency where the notch was situated in speaker A. This can be verified either with hardware (a notch filter that you can switch in and out of circuit), or you can use Audacity or similar to insert a notch. This is human hearing (the ear-brain interface) at work, and it's surprisingly easy to be fooled by the way our auditory systems work. This effect is always active, and (perhaps surprisingly) it doesn't matter much if the test is blind or sighted. The test should be blind to prevent visual cues that can lead to other issues, but it will still work even when you know which is which. Avoiding the experimenter-expectancy effect is still important though.
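If you want to try this experiment digitally rather than with hardware, a notch is easy to synthesise in software. The sketch below uses a standard RBJ-cookbook notch biquad (my own illustration - the article suggests Audacity, which does the same job interactively); the centre frequency, Q and sample rate are arbitrary choices:

```python
import cmath
import math

def notch_biquad(f0, q, fs):
    """RBJ audio-EQ-cookbook notch biquad; returns normalised (b, a)."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cw = math.cos(w0)
    a0 = 1 + alpha
    b = [1 / a0, -2 * cw / a0, 1 / a0]
    a = [1, -2 * cw / a0, (1 - alpha) / a0]
    return b, a

def magnitude(b, a, f, fs):
    """Magnitude response of the biquad at frequency f."""
    z = cmath.exp(-2j * math.pi * f / fs)   # z^-1 on the unit circle
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return abs(num / den)

b, a = notch_biquad(f0=1000, q=4, fs=48000)
print(magnitude(b, a, 1000, 48000))   # essentially zero at the notch centre
print(magnitude(b, a, 100, 48000))    # close to unity well away from it
```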
There's a discussion of this issue at Harbeth, with four videos; the first two discuss it in some depth. Never underestimate the apparent 'problems' you might hear that are caused by our hearing mechanism. Despite the (fallacious) claims that only listening can reveal the 'truly best' audio, it becomes very obvious that test equipment is your friend. Measurements have no inbuilt prejudices - provided the measurement is set up properly, of course.

If comparing speakers, the amplifier goes to the 'speaker' terminals, and the speakers are wired to the 'amp 1' and 'amp 2' terminals. Unless the speaker systems have identical sensitivity (dB/W/m) they will sound different, with the louder speaker almost invariably sounding better (even if it's not).
+ +![]() | + + + + + + + |
Elliott Sound Products | Project ABX
This project describes the construction of test equipment for double-blind or ABX testing of source components - preamplifiers, tuners, DACs etc. or even, if that is your particular vice, interconnects. It builds on the work done by Phil Allison described in Project X. I recommend that you read that project description before you commence this one. If you do not like what you read there, then you might as well stop reading at this point.
Double-blind and ABX tests do not allow the listener to know which component they are listening to, and furthermore don't allow the test controller to know either. This guards against visual cues to the audience (including body language).

There is information on the principles behind ABX testing elsewhere on the Net; therefore, I intend to give only the briefest description here.
An ABX test allows the listener to select either A or B as many times as they like, and ultimately decide which of these is X, where X is randomly selected by the equipment to be either A or B, and the responses are logged for correlation when the test is complete. For example, a 1000Hz tone is assigned as A, and a 1500Hz tone as B. Random selection determines which of these is X. Listen to A then X, listen to B then X, and decide if A or B sounds like X (this would be a rather easy test for anyone to get right, of course - but if B were to be 1001Hz it may be more of a challenge).
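The selection-and-scoring logic of an ABX run is simple enough to sketch in a few lines of Python (an illustration of the procedure, not part of the project):

```python
import random

rng = random.Random(42)   # fixed seed so the demonstration is repeatable

def run_abx(n_trials, listener):
    """X is drawn at random each trial; 'listener' maps the hidden
    stimulus to a guess of 'A' or 'B'. Returns the number of matches."""
    hidden = [rng.choice("AB") for _ in range(n_trials)]
    return sum(listener(x) == x for x in hidden)

# A perfect listener (can always identify X) scores every trial:
assert run_abx(20, lambda x: x) == 20

# A listener who cannot hear any difference just guesses, and
# scores about half in the long run:
score = run_abx(10_000, lambda x: rng.choice("AB"))
print(score / 10_000)   # close to 0.5
```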
True ABX testing is normally not easy, and uses a microcontroller or a PC to interface to the switching module. This was not an option, as the project is meant to be simple and inexpensive. The simple remote control (part 2) requires an operator to control switching between A and B, and the active channel is known to that operator. However, if you proceed to build the double-blind remote (part 3), then the unit is a true ABX comparator - you select the next test in the sequence with the 12-position rotary switch and then have the opportunity to decide if X is input A or input B. This is 'double-blind' because the sequence is not known until the test has been completed.

With the basic unit, two pieces of equipment (for example, two preamplifiers) are under comparison (A and B). They are fed from the same source. The audience hears first A and then B. Thereafter, the test controller selects either A or B as X and repeats the test for up to (about) twenty iterations, changing from A to B at his or her whim, and the audience is required to write down which they think it is for each iteration. Thereafter, the results are analysed to find out whether the audience was able to accurately identify which piece of equipment was in use at each iteration of the test.

There is no requirement that A and B are selected the same number of times during any single test. In fact, a test controller once ran a whole test with A. None of the members of the audience selected 'A' for each iteration. I leave you to consider what that result says about people's confidence in their ability to identify different components.

The project is in three parts, of which you must build at least two. I recommend that you build only parts 1 and 2 to start with. You might find that your experience of this piece of test equipment is so infuriating that you will regret the time spent on building part 3 if you proceed with it immediately.
It is vitally important that the output level of the devices under test is equal. A difference of 1dB is normally sufficient to bias the test, normally in favour of the louder channel. Therefore, you will need a test-tone CD for calibration purposes if using CD as your source, or an LP with test tones if using vinyl. If comparing tuners, you will probably find that tuning in the white noise between stations is the only way you can calibrate the two pieces of equipment. You will also need a multimeter. Unless you are using an expensive true RMS digital multimeter, you should not use a test tone above 500Hz. With an analogue meter you can use a higher pitch, but I do not recommend that you go higher than 1kHz. 1dB represents a voltage between 891.25mV and 1.122V, assuming a reference level of 1V (RMS). You should aim for better than ±50mV variation for a 1V signal. This equates to ±25mV for a 500mV signal.
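The voltage figures quoted can be verified directly from the definition of the decibel (a voltage ratio of 10^(dB/20)):

```python
import math

def db_to_ratio(db):
    """Convert a level difference in dB to a voltage ratio."""
    return 10 ** (db / 20)

# A 1 dB spread around a 1 V RMS reference:
low, high = db_to_ratio(-1), db_to_ratio(1)
print(f"{low:.5f} V to {high:.5f} V")       # 0.89125 V to 1.12202 V

# The suggested +/-50 mV window on a 1 V signal corresponds to roughly:
print(f"{20 * math.log10(1.05):.2f} dB")    # about 0.42 dB
```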
The circuit diagram should be fairly easy to understand. There is nothing difficult about it, and no electronics are involved in the audio path.

Connections 1 through 4 go to either of the remotes, and should be wired to a four-pin connector. The relays K1-3 can be low current types. The battery must match the voltage of the relays; I used 6V DPDT relays. K3 and K2 are used to select input A or B respectively. K1 mutes the output for calibration purposes. You could replace K1 with a switch if you wish, but you then have to run more wires inside the box. The voltmeter connects externally to the two pins marked "Calibrate". Pots are used on both inputs solely for consistency - if a pot is used on only one of the inputs, then some people may use this as "proof" that the device modifies one channel and not the other, and this can be used as an excuse as to why the results failed to correlate "correctly".

The circuit should be built on a piece of Veroboard or similar and mounted in a shielded metal case. SW1 and D4 are mounted on the outside of the case, as are the connectors for attaching the voltmeter, the 4-pin connector for connection to the remote, and the RCA connectors for the A and B inputs and the X output.

If you propose to use only the remote described in part 2, you do not need to take any precautions to ensure that K2 and K3 are inaudible at the listening position. However, testing using the part 3 controller requires that the relays be inaudible at the listening position. To do this you will have to mechanically decouple the circuit board from the case, and also use some acoustic damping material in the case. See note 2 below.
WARNING: I do not recommend that you omit the muting part of the circuit. When you come to use this device, you will first choose the music you wish to hear for the test and establish a realistic listening level. Then you mute the output and use the 0dB level test tone to adjust the levels of both channels to the same voltage. You will probably find that the steady-state voltage is greater than 2V RMS. Unless you have monster loudspeakers, a tone at this level, fed through a power amplifier with 30dB of gain (which is common), will probably destroy your speakers, and will do nothing for your hearing.
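The arithmetic behind that warning is worth seeing. A 2V RMS tone through a 30dB (roughly ×31.6) power amplifier gives about 63V RMS, which into a nominal 8Ω load (my assumption for the example) is around 500W of continuous sine power:

```python
def db_to_gain(db):
    """Voltage gain corresponding to a gain in dB."""
    return 10 ** (db / 20)

def output_power(vin_rms, gain_db, load_ohms):
    """Continuous power delivered to the load for a given input level."""
    vout = vin_rms * db_to_gain(gain_db)
    return vout ** 2 / load_ohms

# A 2 V RMS test tone into a 30 dB power amp driving 8 ohms:
p = output_power(2.0, 30, 8)
print(round(p), "W")   # about 500 W - hence the muting relay
```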
I have found that different pieces of music require different volume settings to establish a realistic listening level. An AB test where you are wincing at the volume or straining to hear the music is of no use. The calibration mute circuit has been included to speed up the recalibration between tests using different pieces of music.

That said, if you wish to go through the process of either (a) turning off your power amplifier(s) or (b) disconnecting your loudspeakers each time you need to recalibrate, you can omit this part of the circuit. However, you have been warned. (And one day you will forget to reconnect the speakers and/or turn on the amp(s) before proceeding with the test.)

VR1 (and optionally VR2) is included for trimming the voltages during calibration. Obviously, the volume should be set using the preamps' volume controls (if those are what you are comparing). However, if you are testing preamps, both of which have stepped volume controls, you might find it difficult to match the voltages at your realistic listening level. That is what VR1 is for. If you don't expect to be doing this, VR1 can be omitted. For A-B testing of other equipment, such as tuners, VR1 will be required.
This is a simple selector which can be built in a small plastic box, and is connected to the controller using four-core cable. Heavy duty cable is not required. The cable should be long enough that the operator can sit further away from the source equipment than a person in the normal listening position. SW1 is a 3-position rotary selector switch used to select A or B or null (centre position). The channel selected is indicated by the relevant LED. SW2 is a DPDT toggle switch which flips the channel from A to B or vice versa. This can be used for simple A-B testing. I have found that there is an audible break when a toggle switch is used for SW2. You might find no break is audible if you use a rotary selector.

To do an AB test with this remote you will need a (patient) assistant. The procedure (using two preamps as the devices under evaluation as an example) is as follows:
a) Use Y-splitters to connect the source equipment to each of the preamps. Connect the output of one preamp to input A and the other to input B on the controller box. Connect output X to your crossover or power amplifier.

b) Select one of the channels and play some music, adjusting the volume on that channel to the desired realistic listening level.

c) On the controller box, mute the output using the switch provided. Connect a voltmeter to the calibration terminals. Insert the test tone CD, play a 0dB calibration tone and note the voltage. Use SW1 on the remote to change to the other channel and adjust the volume so that the same voltage is displayed. If stepped volume controls are in use, you might need to trim using VR1 on the controller. (Set VR1 to the maximum before calibrating and use it to attenuate the voltage.) Once the voltage is adjusted, remove the test-tone CD, replace the music CD, and turn off the muting switch. Do not touch the volume controls again for the duration of the test.

d) Assume your normal listening position. You will need a pencil and paper to record your choices.

e) The person conducting the test (the operator) should sit out of view of the listener with the remote to hand. The listener must not be able to see the LEDs on the remote. Decide which one of you controls the source and how many iterations of the test will take place.

f) The operator selects first channel A, and the listener familiarises him/herself with the particular characteristics of the preamp connected to that channel. Channel B is then selected and familiarisation with that channel takes place.

g) The operator now selects either channel, the music is played, and the listener writes down whether A or B is his choice for X. The listener does not divulge his choice to the operator. The test is then continued for the number of iterations agreed upon, with the operator selecting either A or B. I recommend that, to avoid any unintentional bias, the sequence of changes be determined in advance of the test by some random operation (tossing a coin or throwing a die, for example). The operator should carry out this random operation prior to conducting the test.

h) The last iteration of the test having been concluded, you may either check the results or return to step (b) with another piece of music.
The listener's choices are then compared with the operator's notes of the actual channel in use during each iteration. It should be immediately apparent whether there is any correlation between the listener's choices and the actual selections.
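The random sequence recommended in step (g) need not come from a coin or die; a short script works just as well, and the printout can double as the operator's notes. A minimal sketch (the function name and the 10-iteration count are my own choices, not part of the project):

```python
import random

def ab_sequence(iterations):
    """Generate a random A/B selection for each iteration of the test."""
    return [random.choice("AB") for _ in range(iterations)]

# The operator prepares this list before the test and keeps it hidden
# from the listener until the results are correlated.
sequence = ab_sequence(10)
print(sequence)
```

The listener's written choices are then checked off against this list once the test is complete.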
If you do not have a patient assistant, or patience has run out, you might wish to proceed to ...
This remote uses cascaded SPDT slide switches to set up an A-B sequence which can be used blind for testing purposes and then read back to check the results. It should be apparent from the schematic how this works. The separate poles of SW1 and SW2 must each connect to lines 3 (A) and 4 (B). SW3 selects between SW1 and SW2. For each position on the rotary selector SW4, three SPDT slide switches are required. I have built this with a twelve-position selector using 36 slide switches.
In the above, 'n' is the number of positions on the rotary switch. If a 12-position switch is used for SW4, then 'n' is 12, and 'n-1' is 11.
The schematic, for clarity, shows the switches SW1, SW2 and SW3 connected together in a regular sequence. However, when building this remote, the switches are hooked up in an irregular fashion so that it is not possible from the outside of the box to identify what function any particular switch has, nor how they are wired in relation to one another. The separate poles of each switch (SW1 and SW2), however, must each connect to the A and B buses. If you do not do this you will introduce a bias in favour of one of the channels. The photo shows my own unit. Properly constructed, it should look like a mess of wires, as does mine. The orange and blue wires are the bus lines, the green wires the connections between the centre poles of the SW1s and SW2s, and the brown wires go to the rotary selector.
If you decide to proceed with this part of the project, I must warn that this remote is tedious to construct and you must proceed with great care, because trouble-shooting an incorrect hook-up or a dry joint can be difficult. You should first wire between the centre poles of the SW1s and SW2s to the SW3s, then run the bus wires to the outside poles of the SW1s and SW2s, and finally connect from the centre pole of the SW3s to the rotary selector. Solid-core or magnet wire is recommended. The remote can be constructed in a plastic case.
So how do we use this abomination?
i) Connect it to the control box with the same four-core connector that you made up for part 2.

j) Before proceeding with the test, set up the test sequence by moving half of the total number of slide switches on the box. SW5 must be open at this time; no LEDs illuminated.

k) SW6 selects A or B for calibration as per steps (b) and (c). SW5 is closed (LEDs in circuit) during this step.

l) Open SW5 so that no LEDs are illuminated.

m) Assume your normal listening position and use SW6 to select A and B as in step (f).

n) Move SW6 to the X position. Is it A or B? Write down your choice. Proceed through all the other positions on SW4, marking down your choice each time.

o) When all positions have been sampled, close SW5, rotate the selector SW4 and observe which channel was in use for each iteration of the test. Correlate your results.

p) If you wish to repeat the test, proceed again from step (j).
Notes
I hope that this little project will amuse you.

Steven Hill, August 2002
Steven has done something I had thought would only really be possible using a microcontroller or a PC. The random switching is of such complexity that it would be virtually impossible for anyone to know and remember the combinations created by each switch. Especially if assembled according to the instructions - the switches are not only capable of giving an excellent randomisation of their own accord, but if they are wired in a random fashion as you build the unit, the likelihood of being able to remember the combinations is extremely low.
Will it be worth the effort? Only you can answer that, and it depends on how serious you are about being able to tell the difference between pieces of equipment. Just like Phil's original Project "X", your first reaction may be that the unit is not working - the LED indicator that Steven included in the design to allow you to correlate the results will certainly prove that A and B are being selected, and if you are really unsure, you can always switch off one of the units under test.
All in all, this is an ambitious project, but one that every hi-fi reviewer should make (or have made) - I expect that if this were done, a great many of the glowing reviews we currently see would diminish. They may even vanish altogether.
Needless to say, the tester can also be used to verify that the expensive capacitors you bought really don't make any difference, or that all well-constructed interconnects sound the same. This is all very confronting, but it is necessary if we are to get hi-fi back on track, and eliminate the snake oil.
Elliott Sound Products - ESP's Advertising
ESP has subscribed to Google advertising for some time now. This might come as a surprise to many people, but unfortunately the economics of running such a large website mean that alternative sources of revenue are the only way that I can continue to keep the site operating. In case you were wondering, the sale of PCBs is not a 'goldmine'. I keep prices as reasonable as possible, and the small additional income from the ads is essential for survival of the site.

I know that many people use ad-blocking add-ons with their browsers, and I ask that you place an exception for the ESP site. I have very deliberately avoided placing ads in the middle of pages, and they are restricted to small sections at the top and bottom of each page, identical to what you see here. They may not be the greatest revenue raisers (as Google tells me regularly), but they are as unobtrusive as possible.

Unlike many other sites, you will never be blocked from accessing the material here, even if you do use an ad blocker. As noted above, I prefer that you disable it for the ESP site, but it's not (and will never be) a requirement.

You will never see pop-up ads that open a new window, nor will you ever be asked if you really want to leave a page (or the site). Nor do I request that you 'register' to see the information - everything is freely available to everyone, with no restrictions other than to the secure section. That is restricted to purchasers only, and that's been the case for nearly 20 years.

The primary goal of the site is to provide unbiased information to readers, and everything published is (or is believed to be) accurate and free of snake-oil or other contaminants.

Should you see adverts that you think are inappropriate, please let me know, and if I agree they will be blocked at the source.
Amplifier Basics - How Amps Work (Intro)
The term 'amplifier' is somewhat 'all-encompassing', and is often thought (by many users in particular) to mean a power amplifier for driving loudspeakers. This is not the case (well, it is, but it is not the only case), and this article will attempt to explain some of the basics of amplification - what it means and how it is achieved. This article is not intended for the designer (although designers are more than welcome to read it if they wish), and is not meant to cover all possibilities. It is a primer, and gives fairly basic explanations (although some will no doubt dispute this) of each of the major points.
I will explain the basic amplifying elements, namely valves (vacuum tubes), bipolar transistors and FETs, all of which work towards the same end, but do it differently. This article is based on the principles of audio amplification - radio frequency (RF) amplifiers are designed differently because of the special requirements when working with high frequencies.

Not to be left out, the opamp is also featured, because although it is not a single 'component' in the strict sense, it is now accepted as a building block in its own right.

This article is not intended for the complete novice (although they, too, are more than welcome), but for the intermediate electronics or audio enthusiast, who will have the most to gain from the explanations given.

Before we continue, I must explain some of the terms that are used. Without knowledge of these, you will be unable to follow the discussion that follows.
Electrical Units

| Name | Measurement of | aka | Symbol |
|---|---|---|---|
| Volt | electrical 'pressure' | voltage | V, U, E (EMF) |
| Ampere | the flow of electrons | current | A, I |
| Watt | power | | W, P |
| Ohm | resistance to current flow | | Ω, R |
| Ohm | impedance, reactance | | Ω, Z, X |
| Farad | capacitance | | F, C |
| Henry | inductance | | H, L |
| Hertz | frequency | | Hz |
Note: 'aka' means 'Also Known As'. Although the Greek letter omega (Ω) is the symbol for Ohms, I often use the word Ohm or the letter 'R' to denote Ohms. Any resistance of greater than 1,000 Ohms will be shown as (for example) 1k5, meaning 1,500 Ohms, or 1M for 1,000,000 Ohms. The second symbol shown in the table is that commonly used in a formula.
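The '1k5' style of notation described above is easy to decode by machine if you ever need to, e.g. when cataloguing parts. A minimal sketch (the function name, and the use of 'R' for plain Ohms as in '4R7', are my own assumptions, not something defined in this article):

```python
def parse_resistance(code):
    """Convert notation like '1k5', '150k', '1M' or '4R7' to Ohms.
    The multiplier letter doubles as the decimal point."""
    multipliers = {'R': 1, 'k': 1e3, 'M': 1e6}
    for letter, mult in multipliers.items():
        if letter in code:
            whole, _, frac = code.partition(letter)
            value = float(whole or 0) + (float('0.' + frac) if frac else 0)
            return value * mult
    return float(code)  # plain number, already in Ohms

print(parse_resistance('1k5'))  # 1500.0
print(parse_resistance('1M'))   # 1000000.0
```

The same scheme extends to 'G' and 'T' if you ever need them, though as noted below they are rare in audio.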
When it comes to Volts and Amperes (Amps), we have alternating current and direct current (AC and DC respectively). The power from a wall outlet is AC, as is the output from a CD or tape machine. The mains from the wall outlet is at a high voltage and is capable of high current, and is used to power the amplifying circuits. The signal from your audio source is at a low voltage and can supply only a small current, and must be amplified so that it can drive a loudspeaker.
Impedance
A derived unit of resistance, capacitance and inductance in combination is called impedance, although it is not a requirement that all three be included. Impedance is also measured in Ohms, but is a complex figure, and often fails completely to give you any useful information. The impedance of a speaker is a case in point. Although the brochure may state that a speaker has an impedance of 8Ω, in reality it will vary depending on frequency, the type of enclosure, and even nearby walls or furnishings.
Units
In all areas of electronics, there are smaller and larger amounts of many things that would be very inconvenient to have to write in full. For example, a capacitor might have a value of 0.000001F or a resistor a value of 150,000Ω. Because of this, there are conventional units that are applied to make our lives easier (well, once we are used to using them, anyway). It is much easier to say 1uF or 150k (the same as above, but using standard units). These units are described below.
Conventional Metric Units

| Symbol | Name | Multiplication |
|---|---|---|
| p | pico | 1 × 10⁻¹² |
| n | nano | 1 × 10⁻⁹ |
| μ | micro | 1 × 10⁻⁶ |
| m | milli | 1 × 10⁻³ |
| k | kilo | 1 × 10³ |
| M | Mega | 1 × 10⁶ |
| G | Giga | 1 × 10⁹ |
| T | Tera | 1 × 10¹² |
Although commonly written as the letter 'u', the symbol for micro is actually the Greek letter mu (μ) as shown. In audio, Giga and Tera are not commonly found (not at all so far - except for specifying the input impedance of some opamps!). There are also others (such as femto - 1 × 10⁻¹⁵) that are extremely rare and were not included. Of the standard electrical units, only the Farad is so large that the de facto standard is the microfarad (µF). Most of the others are reasonably sensible in their basic form.
It is important to understand that the symbol for microfarad is µF (or uF), not mF - that's a millifarad, and is 1,000 µF.
The term 'amplify' basically means to make stronger. The strength of a signal (in terms of voltage) is referred to as amplitude, but there is no equivalent for current (curritude?, nah, sounds silly). This in itself is confusing, because although 'amplitude' refers to voltage, it contains the word 'amp', as in ampere. Maybe we should introduce 'voltitude' - No? Just live with it.

To understand how any amplifier works, you need to understand the two major types of amplification, and a third 'derived' type:

- voltage amplification
- current amplification
- power amplification (the derived type, being the product of the other two)
In the case of a voltage amplifier, a small input voltage will be increased, so that for example a 10mV (0.01V) input signal might be amplified so that the output is 1 Volt. This represents a 'gain' of 100 - the output voltage is 100 times as great as the input voltage. This is called the voltage gain of the amplifier.

In the case of a current amplifier, an input current of 10mA (0.01A) might be amplified to give an output of 1A. Again, this is a gain of 100, and is the current gain of the amplifier.

If we now combine the two amplifiers, then calculate the input power and the output power, we will measure the power gain:
    P = V × I   (where I = current; note that the symbol changes in a formula)
The input and output power can now be calculated:
    Pin = 0.01 × 0.01   (0.01V and 0.01A, or 10mV and 10mA)
    Pin = 100µW
    Pout = 1 × 1   (1V and 1A)
    Pout = 1W
The power gain is therefore 10,000, which is the voltage gain multiplied by the current gain. Somewhat surprisingly perhaps, we are not interested in power gain with audio amplifiers. There are good reasons for this, as shall be explained in the remainder of this page. Having said this, in reality all amplifiers are power amplifiers, since a voltage cannot exist without power unless the impedance is infinite or zero. This is never achieved, so some power is always present. It is convenient to classify amplifiers as above, and no harm is done by the small error of terminology.
Note that a voltage or current gain of 100 is 40dB, and a power gain of 10,000 is also 40dB.
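The worked example above is easy to verify numerically; a quick sketch (the variable names are mine):

```python
import math

v_in, i_in = 0.01, 0.01    # 10mV, 10mA input
v_out, i_out = 1.0, 1.0    # 1V, 1A output

voltage_gain = v_out / v_in             # 100
current_gain = i_out / i_in             # 100
power_gain = voltage_gain * current_gain  # 10,000, same as Pout / Pin

# Both print 40.0, confirming the dB note above
print(20 * math.log10(voltage_gain))
print(10 * math.log10(power_gain))
```

Note the different multipliers (20 for voltage or current, 10 for power), which are covered in the dB section below.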
Input Impedance
Amplifiers will be quoted as having a specific input impedance. This only tells us the load it will place on preceding equipment, such as a preamplifier. It is neither practical nor useful to match the impedance of a preamp to a power amp, or a power amp to a speaker. This will be discussed in more detail later in this article.
The load is that resistance or impedance placed on the output of an amplifier. In the case of a power amplifier, the load is most commonly a loudspeaker. Any load will require that the source (the preceding amplifier) is capable of providing it with sufficient voltage and current to be able to perform its task. In the case of a speaker, the power amplifier must be capable of providing a voltage and current sufficient to cause the speaker cone(s) to move the distance required. This movement is converted to sound by the speaker.
Even though an amplifier might be able to make the voltage great enough to drive a speaker cone, it will be unable to do so if it cannot provide enough current. This has nothing to do with its output impedance. An amplifier can have a very low output impedance, but only be capable of a small current (an operational amplifier, or opamp, is a case in point). This is very important, and needs to be fully understood before you will be able to fully appreciate the complexity of the amplification process.
Output Impedance
The output impedance (Zout) of an amplifier is a measure of the impedance or resistance 'looking' back into the amplifier. It has nothing to do with the actual loading that may be placed at the output.
For example, an amplifier has an output impedance of 10Ω. This is verified by placing a load of 10Ω across the output, and the voltage can be seen to decrease to ½ that with no load. However, unless this amplifier is capable of substantial output current, we might have to make this measurement at a very low output voltage, or the amplifier will be unable to drive the load. If the output clips (distorts) the measurement is invalid.
Another amplifier might have an output impedance of 100Ω, but be capable of driving 10A into the load. Output impedance and current are completely separate, and must not be seen to be in any way equivalent. Both of these possibilities will be demonstrated later in this series.
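The half-voltage measurement described above is just a voltage divider, and the same arithmetic works for any known load, not only one equal to Zout. A sketch of the calculation (the function name is mine):

```python
def output_impedance(v_unloaded, v_loaded, r_load):
    """Estimate Zout from the drop in output voltage when a known
    load is connected: Zout = Rload * (Vunloaded / Vloaded - 1),
    rearranged from the simple voltage divider."""
    return r_load * (v_unloaded / v_loaded - 1)

# Voltage falls to half into a 10 Ohm load, so Zout is 10 Ohms
print(output_impedance(2.0, 1.0, 10.0))  # 10.0
```

Using a load much larger than the expected Zout keeps the voltage drop (and the demanded current) small, which helps with amplifiers that cannot drive heavy loads without clipping.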
It is very rare that you will ever be able to perform a direct measurement of output impedance. An opamp configured for a gain of 10 (20dB) will usually have such a low Zout that it's almost impossible to measure it directly, other than by using an input level of a few microvolts. Most power amps will be stressed badly by attempting to drive close to a short circuit, and will show their displeasure by blowing up or triggering their protection circuits (if fitted).
The output impedance is also independent of the power supply impedance. It is the supply impedance (along with the amplifier's current capability) that causes the maximum undistorted power to increase less than expected into lower impedance loads, so an amp may be able to deliver 50W into 8Ω but only 80W into 4Ω, rather than the theoretical 100W (continuous power - peak power can be higher for short transients). Failure to understand that all of these factors are independent from each other will lead to false conclusions. It's easy to fall into the traps, and some manufacturers make this worse by claiming that their 'XyZ-5000' 50W amplifier can deliver 100 amps to the load, but fail to tell buyers that no sensible (or even non-sensible) load can ever draw that much current.
The output impedance is (roughly) equal to the open-loop (zero feedback) output impedance, divided by the feedback ratio. An amplifier may have an open-loop Zout of 5Ω, with 46dB of feedback (a factor of 200). Closed-loop Zout is then 5 / 200, or 25mΩ. However, the feedback ratio is almost always frequency dependent, so unless the frequency is specified, the Zout figure may not be meaningful.
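The closed-loop Zout calculation above can be sketched as follows (the function name is mine, and this deliberately ignores the frequency dependence just noted):

```python
def closed_loop_zout(open_loop_zout, feedback_db):
    """Closed-loop output impedance is (roughly) the open-loop Zout
    divided by the feedback factor, here given in dB."""
    feedback_factor = 10 ** (feedback_db / 20)
    return open_loop_zout / feedback_factor

# 5 Ohms open loop with 46dB of feedback -> about 25 milliohms
print(closed_loop_zout(5.0, 46.0))
```

Note that 46dB is a factor of very slightly under 200, so the result is within a fraction of a percent of the 25mΩ quoted.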
Feedback
Feedback is a term that creates more and bloodier battles between audio enthusiasts than almost any other. Without it, we would not have the levels of performance we enjoy today, and many amplifier types would be unlistenable without it.
Feedback in its broadest sense means that a certain amount of the output signal is 'fed back' into the input. An amplifier - or an element of an amplifying device - is presented with the input signal, and compares it to a 'small scale replica' of the output signal. If there is any difference, the amp corrects this, and ideally ensures that the output is an exact replica of the input, but with a greater amplitude. Feedback may be as a voltage or current, and has a similar effect in either case.
In many designs, one part of the complete amplifier circuit (usually the input stage) acts as an error amplifier, and supplies exactly the right amount of signal (with correction as needed) to the rest of the amp to ensure that there is no difference between the input and output signals, other than amplitude. This is (of course) an ideal state, and is never achieved in practice. There will always be some difference, however slight. Note that any amplifier that suffers from crossover (aka notch) distortion cannot be made linear with feedback, because at zero output (where this distortion occurs) there is also (almost) zero gain. You can't have feedback unless there is some 'excess' gain!
Signal Inversion
When used as voltage amplifiers, all the standard active devices invert the signal. This means that if a positive-going signal goes in, it emerges as a larger - but now negative-going - signal. This does not actually matter for the most part, but it is convenient (and conventional) to try to make amplifiers non-inverting. To achieve this, two stages must be used (or a transformer) to make the phase of the amplified signal the same as the input signal.
The exact mechanism as to how and why this happens will be explained as we go along.
Design Phase
The design phase of an amplifier is not remarkably different, regardless of the type of components used in the design itself. There is a sequence that will be used most of the time to finalise the design, and this will (or should) follow a pattern.
For example, driver transistors supplying a peak current of 200mA, with a current gain of 50, demand a peak current of 200 / 50 = 4mA from the Class-A driver stage.

Since the Class-A driver must operate in Class-A (what a surprise), it will need to operate with a current of 1.5 to 5 times the expected maximum driver current, to ensure that it never turns off. The same applies with a MOSFET amp that will expect (for example) a maximum gate capacitance charge (or discharge) current of 4mA at the highest amplitudes and frequencies.
These are only guidelines (of course), and there are many cases where currents are greater (or smaller) than suggested. The end result is in the performance of the amp, and the textbook approach is not always going to give the expected result. Note that there are some essential simplifications in the above - it is an overview, and is only intended to give you the basic idea.
For the purposes of this article, there are three different types of amplifying devices, and each will be discussed in turn. Each has its strengths and weaknesses, but all have one common failing - they are not perfect.

A perfect amplifier or other device (known generally as 'ideal') will perform its task within certain set limits, without adding or subtracting anything from the original signal. No ideal amplifying device exists, and as a result, no ideal amplifier exists, since all must be built with real-life (non-ideal) devices.

The amplifying devices currently available are:

- valves (vacuum tubes)
- bipolar transistors
- field effect transistors (FETs)

There are also some derivatives of the above, such as Insulated Gate Bipolar Transistors (IGBT), and Metal Oxide Semiconductor Field Effect Transistors (MOSFET). Of these, the MOSFET is a popular choice among many designers due to some desirable characteristics, and these will be covered in their own section.

All of these devices are reliant on other non-amplifying ('support') components, commonly known as passive components. The passive devices are resistors, capacitors and inductors, and without these, we would be unable to build amplifiers at all.

All the devices we use for amplification have a variable current output, and it is only the way that they are used that allows us to create a voltage amplifier. Valves and FETs are voltage controlled devices, meaning that the output current is determined by a voltage, and no current is drawn from the signal source (in theory). Bipolar transistors are current controlled, so the output current is determined by the input current. This means that no voltage is required from the signal source, only current. Again, this is in theory, and it is not realisable in practice.

Only by using the support components can we convert the current output of any of these amplifying devices into a voltage. The component most commonly used for this purpose is a resistor.
All active devices have certain parameters in common (although they will have different naming conventions depending on the device). Essentially these are maximum ratings - voltage, current and power dissipation.
This is by no means all of the ratings; there are many more, and they vary from device to device. Some MOSFETs for example will have peak current ratings, which will be many times the continuous rating, but only for a very limited time. Bipolar transistors have a Safe Operating Area (SOA) graph, which indicates that in some circumstances you must not operate the device anywhere near its maximum power dissipation, or it will fail due to a phenomenon called second breakdown (described later).
With most semiconductors, in many cases it will not be possible to operate them at anywhere near the maximum power dissipation, because thermal resistance is such that the heat simply cannot be removed from the junction and into the heatsink fast enough. In these cases, it might be necessary to use multiple devices to achieve the performance that can (theoretically) be obtained from a single component. This is very common in audio amplifiers.
There are some things that you just can't get away from, and maths is one of them. (Sorry.) I will only include the essentials here, but will describe any others that are needed as we go. I am not about to give a lesson in algebra, but the best reason for ever doing the subject is to learn how to transpose electronics formulae! Transposition is up to you (unless I am forced to do it for a calculation here or there).
Ohm's Law
The first of these is Ohm's Law, which states that a voltage of 1V across a resistance of 1 Ohm will cause a current of 1 Amp to flow. The formula is ...
    R = V / I   (where R = resistance in Ohms, V = voltage in Volts, and I = current in Amps)
Like all such formulae, this can be transposed (oops, I said I wasn't going to do this, didn't I).
    V = R × I   (× means multiplied by), and
    I = V / R
Reactance
Then there is the impedance (reactance) of a capacitor, which varies inversely with frequency (as frequency is increased, the reactance falls, and vice versa).
    Xc = 1 / ( 2π × f × C )
where Xc is capacitive reactance in Ohms, π (pi) is 3.14159, f is frequency in Hz, and C is capacitance in Farads.
Inductive reactance is the reactance of an inductor, and is proportional to frequency.
    Xl = 2π × f × L
where Xl is inductive reactance in Ohms, and L is inductance in Henrys (others as above).
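Both reactance formulas, sketched as code (the function names are mine):

```python
import math

def capacitive_reactance(f, c):
    """Xc = 1 / (2*pi*f*C) - falls as frequency rises."""
    return 1 / (2 * math.pi * f * c)

def inductive_reactance(f, l):
    """Xl = 2*pi*f*L - rises with frequency."""
    return 2 * math.pi * f * l

# A 1uF capacitor at 1kHz has a reactance of roughly 159 Ohms
print(capacitive_reactance(1000, 1e-6))
```

Note the opposite behaviour with frequency, which is what makes RC and RL combinations useful as filters.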
+ +Frequency
There are many different calculations for this, depending on the combination of components. The -3dB frequency for resistance and capacitance (the most common in amplifier design) is determined by ...
    fo = 1 / ( 2π × R × C )   (where fo is the -3dB frequency)
When resistance and inductance are combined, the formula is
    fo = R / ( 2π × L )
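Both corner-frequency formulas as code (the function names and the 10k/1µF example are my own choices):

```python
import math

def rc_corner(r, c):
    """-3dB frequency of an RC combination: fo = 1 / (2*pi*R*C)."""
    return 1 / (2 * math.pi * r * c)

def rl_corner(r, l):
    """-3dB frequency of an RL combination: fo = R / (2*pi*L)."""
    return r / (2 * math.pi * l)

# 10k with 1uF gives a corner frequency of roughly 15.9Hz
print(rc_corner(10e3, 1e-6))
```

This is the calculation behind coupling capacitor and filter choices throughout the rest of the series.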
Power
Power is a measure of work, which can be either physical work (moving a speaker cone) or thermal work - heat. Power in any form where voltage, current and resistance are present can be calculated by a number of means:
    P = V × I
    P = V² / R
    P = I² × R
where P is power in watts, V is voltage in Volts, and I is current in Amps.
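All three forms must agree when the values are consistent; a quick check (the function names and the 20V/8Ω example are mine):

```python
def power_vi(v, i):
    """P = V * I"""
    return v * i

def power_vr(v, r):
    """P = V**2 / R"""
    return v ** 2 / r

def power_ir(i, r):
    """P = I**2 * R"""
    return i ** 2 * r

# 20V across 8 Ohms draws 2.5A; all three forms give 50W
print(power_vi(20.0, 2.5), power_vr(20.0, 8.0), power_ir(2.5, 8.0))
```

The second and third forms are simply the first with Ohm's Law substituted for the unknown quantity.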
Decibels (dB)
It has been known for a very long time that human ears cannot resolve very small differences in sound pressure. Originally, it was determined that the smallest variation that is audible is 1dB - 1 decibel, or 1/10 of 1 Bel. It seems fairly commonly accepted that the actual limit is about 0.5dB, but it is not uncommon to hear that some people can (or genuinely believe they can) resolve much smaller variations. I shall not be distracted by this!
    dB = 20 × log ( V1 / V2 )
    dB = 20 × log ( I1 / I2 )
    dB = 10 × log ( P1 / P2 )
As can be seen, dB calculations for voltage and current use 20 times the log (base 10) of the larger unit divided by the smaller unit. With power, a multiplication of 10 is used. Either way, a drop of 3dB represents half the power and vice versa.
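The dB formulas as code, including a check of the half-power rule just stated (the function names are mine):

```python
import math

def db_voltage(v1, v2):
    """dB = 20 * log(V1 / V2) - also used for current ratios."""
    return 20 * math.log10(v1 / v2)

def db_power(p1, p2):
    """dB = 10 * log(P1 / P2)"""
    return 10 * math.log10(p1 / p2)

# Doubling power is close to +3dB; doubling voltage is close to +6dB
print(db_power(2.0, 1.0))
print(db_voltage(2.0, 1.0))
```

The 20× multiplier for voltage and current exists because power is proportional to the square of either, so the two conventions give the same dB figure for the same change (as in the 40dB example earlier).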
There are many others, but these will be sufficient for now. I do not intend this to be a complete electronics course, so I will give you that which is needed to understand the remainder of the article - for the rest, there are lots of excellent books on electronics, and these will have every formula you ever wanted.
Amplifier Basics - How Amps Work (Part 1)
In the beginning the vacuum tube was the only way to amplify, and valves (or 'tubes') survive to this day, with a dedicated following of 'believers' who are convinced that the development of the transistor (or indeed, any semiconductor) was fundamentally a bad idea. This is not a discussion I intend to follow - I intend simply to state how these devices amplify a signal, and the factors that determine voltage and current gain. For more information about valve circuits, look at the material shown in the ESP Valve Pages.
The basic amplifying valve (there are many different types with higher complexity) has three elements. These are ...

- the cathode, which emits electrons
- the grid, which controls the electron stream
- the anode (or plate), which collects the electrons
When a positive voltage is applied to the anode with respect to the cathode, an electron stream is emitted from the cathode and flows to the anode, completing the circuit. The grid is a fine coil of wire, suspended between the other two elements. A negative voltage on the grid (with respect to the cathode) will repel some of the electron stream, causing the current to be reduced. If the voltage on the grid were to be varied, then the cathode to anode current must also vary, and an amplifier is born. Figure 1.1 shows the basic circuit of a valve voltage amplifier.
This circuit configuration is known as 'common cathode', because the cathode reference point (earth) is common to both input and output. By placing a resistor 'Rk' in the cathode circuit, a voltage is developed because of the current flow. If the grid is referenced to earth (ground), then the grid is negative with respect to the cathode. The voltages shown on the circuit are typical of a single element of a 12AX7 twin triode. Note that the two valve pins that are not connected are for the heater. This is used to heat the cathode so that it emits electrons more readily.
The cathode resistor will cause the circuit to reach a stable current, where any attempt at increasing the cathode current will cause a greater voltage across the resistor, making the grid effectively more negative and reducing the current. A point of equilibrium is quickly reached, where the circuit operates in a stable manner. This is known as cathode biasing, and is most common with signal level and low power amps.
By applying a varying voltage (the signal) to the grid, the current between cathode and anode will vary too. Since the anode load is a resistance, a varying voltage will be developed which will (hopefully) be greater than the voltage applied to the grid. The input voltage has been amplified.
Because the signal voltage on the grid is 'fighting' the attempt of the cathode resistor to maintain the current through the valve at a constant value, this is a form of feedback. It is also known as cathode degeneration. The name is of no consequence, because as local feedback, it will improve the linearity of the stage but reduce the gain. In reality, the improvement in linearity is only minor, and leaving out the capacitor can increase noise in sensitive circuits - especially hum that is induced into the cathode from the heater, which was nearly always operated from AC in the past, but DC heater supplies are now common.
Where the cathode is directly heated (the filament has the oxide coatings directly applied), DC operation is mandatory or hum would result. Directly heated cathodes will always emit electrons unevenly, because of the voltage gradient across the filament. The only common directly heated valves used in any number these days are rectifiers.
Note that for indirectly heated cathodes, the heating element is called the heater, but for directly heated cathodes it is more commonly referred to as the filament (as in the heated filament of a light bulb).
Because valves have a relatively low voltage gain, it is common to bypass the cathode resistor with a capacitor to defeat the local feedback and extract as much gain as possible, as is shown in Figure 1.1. The gain (or more correctly, the transfer characteristic) of a valve is sometimes measured in mA/V - which tells us how many milliamps of change in anode current will occur for a grid voltage change of 1 Volt. The same quantity is also called the 'conductance' (aka mutual conductance or transconductance), which is the reciprocal of resistance, and is expressed in Mhos (Ohms backwards - seriously!) or Siemens. A related but distinct figure is 'mu' (µ), the amplification factor, which is the product of the transconductance and the internal plate resistance.
One problem with valves has always been the number of different methods used to describe what is essentially the same thing. Depending entirely on which book you happen to be reading, you will see the effective gain quoted as mA/V, mutual conductance ('gm', in Mhos or more commonly µMhos), or the equally obscure term 'amplification factor', none of which has any direct relevance to the gain you can expect without further calculation.
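The different figures are in fact tied together: mu is simply the product of gm and the internal plate resistance (rp). A minimal Python sketch, using the typical 12AX7 figures quoted below (gm of about 1250µMhos, rp of 80k), shows the conversions:

```python
# Relationship between the ways of quoting valve 'gain'
# (values are the typical 12AX7 figures used in this article).

def gm_from_micromhos(umhos):
    """Convert transconductance in micromhos to mA/V (1 mA/V = 1000 umhos)."""
    return umhos / 1000.0

def amplification_factor(gm_ma_per_v, rp_ohms):
    """mu = gm * rp, with gm converted to A/V and rp in ohms."""
    return (gm_ma_per_v / 1000.0) * rp_ohms

gm = gm_from_micromhos(1250)         # 1.25 mA/V
mu = amplification_factor(gm, 80e3)  # 1.25 mA/V * 80k = 100
```

So a valve quoted as 1250µMhos, 1.25mA/V or mu of 100 (with an 80k plate resistance) is being described three ways at once.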
+ +The output impedance of the circuit of Figure 1.1 is about 44k - it's the value of the plate resistor in parallel with the internal plate resistance. Rg2 is the grid resistor for the following stage, and at 1M, loads the output and reduces gain.
+ + +There are four main characteristics that are quoted for any given valve. These are:
+ +One important thing to realise about valves is that everything changes. The characteristics vary widely with plate voltage, load resistance, bias current and just about everything else you can think of. Despite this, it is still possible to design a circuit using valves that will be repeatable from one unit to the next, provided the designer knows what s/he is doing.
A typical signal valve (such as the 12AX7 high-mu dual triode) has a plate resistance of 80k, an amplification factor (mu) of 100, and a gm (using the circuit of Figure 1.1) of about 1250µMhos, which can (by simple mathematics) be converted into a figure of 1.25mA/V, meaning that a change of 1V on the grid will cause a change of 1.25mA in the anode current. This does not actually mean what it says, since the valve might be quite incapable of sustaining an anode current change of 1.25mA under all circumstances. However, a change of 0.1V at the grid can cause a change in plate current of 0.125mA - the measurement is typically 'normalised' to make comparison easier.
Let us now have a look at how the valve amplifies the signal. The transfer curve in Figure 1.2 shows the input waveform applied to the grid, at any convenient frequency. As the signal becomes more positive, the valve draws more current, until at the peak of the waveform, the grid voltage has been made 0.1V more positive than it was before. Therefore, the anode current is 0.125mA greater than it was before. Using Ohm's Law, 0.125mA with a resistance of 100k means that the anode voltage should be 12.5 Volts lower than when in the idle (or quiescent) state.
This would seem to imply that the valve has a gain of (12.5 / 0.1) 125 - not a chance! The circuit of Figure 1.1 will have a typical voltage gain (Av) of much less than this. Why? Because the valve's internal plate resistance wasn't considered. This is effectively in parallel with the plate load resistor and external load (the grid resistor of the next stage). When these are taken into consideration, the gain can be calculated at around 55 - somewhat shy of the figure obtained before considering the complete circuit.
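The calculation can be sketched numerically - this is an estimate only, treating the valve as a transconductance driving the parallel combination of its plate resistance, the plate load resistor and the following stage's grid resistor, using the values quoted in the text (rp = 80k, Rp = 100k, Rg2 = 1M, gm = 1.25mA/V):

```python
# Estimated stage gain for Figure 1.1: the effective load is the parallel
# combination of the internal plate resistance, the plate load resistor
# and the next stage's grid resistor.

def parallel(*resistors):
    """Parallel combination of any number of resistances."""
    return 1.0 / sum(1.0 / r for r in resistors)

rp, Rp, Rg2 = 80e3, 100e3, 1e6
gm = 1.25e-3                    # A/V (1.25 mA/V)

load = parallel(rp, Rp, Rg2)    # about 42.5k
gain = gm * load                # about 53 - close to the ~55 quoted
```

Note that rp in parallel with Rp alone is about 44k, which is also the output impedance figure given for this circuit.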
+ +The only way to be certain what a valve will actually do is consult the manufacturer's data, and refer to the transfer curves for the mode of operation and cathode current you wish to use. Valve characteristics, supply voltage, plate current, plate voltage and the impedance of the next stage all have a profound effect on the performance of any valve.
+ +As can be seen from Figure 1.2, the transfer curve is not linear, which means that as the valve approaches cut-off (turned off completely) or saturation (turned on completely) the characteristics change, and distortion is introduced. A (very) rough estimation of maximum RMS output voltage to keep distortion below 1% is about 0.1 of the quiescent plate voltage, but often less. Thus, with a plate voltage of 125V at idle, the maximum output voltage will be 12.5V RMS. This assumes that the valve has been biased correctly in the first place. From the graph we can see that at high values of negative grid voltage the valve will cut off, while at low (or positive) grid voltage, the valve is turned on as hard as it can.
+ +A valve can be thought of as having an infinite input impedance (although this is never realised in practice). The input impedance is approximately equal to the value of the grid resistor for audio frequencies. The output current is therefore controlled by a voltage at the grid, so the valve might be considered a voltage controlled current source (or VCCS).
+ + +Figure 1.3 shows a valve current amplifier, commonly known as a cathode follower, or common plate (because the plate circuit is common to both input and output - for AC signals only). Although this circuit can provide a useful increase of current, and an equally useful decrease in output impedance, it has a voltage gain that is less than unity. Typically, this will be about 0.8 to 0.9, so for every volt of signal applied to the input, we only get about 850mV output.
+ +The cathode follower is typically used where a low impedance output is desired, since the output impedance of most valve circuits is rather high (equal to the value of the plate load resistor in parallel with the internal plate resistance). Simply attaching a low impedance load to a voltage amplifier stage will cause the output level to be dramatically reduced, so the current amplifier (cathode follower) is a useful stage. The output impedance of the circuit of Figure 1.3 can be expected to be about 1/10th the value of the cathode resistance Rk2 - but this is highly dependent on the valve itself and its operating current. The available current is very low, so the circuit will not be able to drive a load much less than Rk2, or 47k. Remember that output impedance and drive capability are not related.
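As a rough sketch, the idealised small-signal formulas for a cathode follower can be evaluated with the 12AX7 figures used earlier. These are textbook approximations, and real valves at modest operating currents give the more conservative gain figures quoted above:

```python
# Idealised small-signal estimates for a cathode follower - a sketch only.
# Real figures depend heavily on the valve and its operating current.

def follower_gain(gm, rk):
    """Av = gm*Rk / (1 + gm*Rk) - always less than unity."""
    return gm * rk / (1.0 + gm * rk)

def follower_zout(rp, mu, rk):
    """Zout is roughly rp/(mu+1), in parallel with the cathode resistor."""
    z_valve = rp / (mu + 1.0)
    return 1.0 / (1.0 / z_valve + 1.0 / rk)

av = follower_gain(1.25e-3, 47e3)    # ~0.98 for an idealised 12AX7
zo = follower_zout(80e3, 100, 47e3)  # under 1k ohms
```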
+ +Note that the grid must still be biased to an appropriate voltage negative with respect to the cathode. The bypassed cathode resistor is used as before, but the grid is connected to the bottom of this resistor, and not ground. If it were connected to ground, the circuit would be capable of only very small signal levels before it distorted.
+ + +Finally, we can combine a voltage amplifier stage and a current amplifier stage, and get a power amplifier. Cathode followers are unusual in valve power amplifiers, and it is far more common to use a plate-loaded 'push-pull' output stage, using a transformer in the plate circuit to match the high voltage and relatively high impedance of the output valves to the impedance of the speaker. In a few cases, output stages have been configured to use part of the transformer winding in the anode circuit, and some in the cathode circuit. This can improve linearity, but makes the output valves harder to drive.
+ +'Transformerless' valve output stages had a short period of popularity, but most required high impedance loudspeakers which were expensive and disappeared only a few years after they were introduced. The high voltage requirement and comparatively low current capabilities make valves unsuited to direct-coupling to 'normal' speaker impedances.
+ +Figure 1.4 shows a very basic valve power amplifier, using a triode in 'single-ended' mode. The output transformer converts the high voltage, high impedance plate circuit of the valve to a low voltage, low impedance signal for the loudspeaker. Because the primary of the output transformer must carry the full DC quiescent current of the valve (which will be a large, high current unit), it needs a very large core of laminated steel with an air gap to minimise saturation effects and distortion.
+ +Interestingly, these inefficient and high distortion amplifiers have made a comeback in recent years. However in the heyday of the valve, the inefficiency and high distortion of these circuits was such that they were replaced in nearly all installations by more efficient and lower distortion circuits, such as that shown in Figure 1.5.
+ +The valves shown for the output are called pentodes (from penta - five), having 5 electrodes instead of the three for a triode. The second grid (called the screen grid, or just screen) increases the gain of the valve dramatically, while the third grid, the suppressor, prevents what is called 'secondary emission' from the plate. The screen accelerates the electron flow so much that electrons bounce off the plate, or dislodge others. The addition of the screen gives the valve some nice characteristics, such as much higher gain, but also some nasty ones (lower linearity, more distortion), which the suppressor counteracts to some degree. The suppressor grid is almost always connected internally to the cathode. It is not uncommon for designers to connect pentodes as triodes, by connecting the screen and plate together.
+ +The first stage of the circuit is interesting, and is called a phase splitter. It is a combination of a voltage amplifier and a current amplifier, having equal values of resistance in each circuit (i.e. Rp = Rk2). Because all valves have the same 'polarity', they cannot be used like transistors or MOSFETs, but must be driven with their own signal of the correct polarity.
+ +The incoming signal is therefore sent 'as is' to one valve (from the cathode circuit), and is inverted for the other - hence the term push-pull. As one valve 'pulls' the anode current lower, the other simultaneously 'pushes' it higher. In a properly designed circuit, the two output valves will pass the signal between them with little disturbance. Any disturbance in this region is called crossover distortion, because it happens as the signal crosses over from one valve to the other.
+ +Notice something else quite different. The cathodes of the output valves are connected directly to earth, and the grids are supplied with a negative bias from a separate negative power supply. This is the most common method of biasing output valves in high power circuits, having a much greater efficiency than cathode biasing.
+ +For many large output valves, it is not even considered a good idea to use cathode biasing, because the amount of negative grid voltage required is too high. Voltages of up to -60V are not uncommon with high power pentodes or another common type, beam power tetrodes (I will not cover these in more detail, but there is much information to be found on the web). Using cathode bias for this sort of voltage and current is inefficient and reduces the output power dramatically.
+ + +The above is but a very small offering from the world of the vacuum tube. As I said in the introduction, this is not designed to be a complete electronics training course. The circuits presented are basic only, which is to say that they will all work, but are not optimised in any way.
+ +For further reading, the most highly recommended work is the rather old (but still considered the reference manual) "Radiotron Designer's Handbook", by F. Langford-Smith and originally published by Amalgamated Wireless Valve Co. Pty. Ltd. in Australia. My copy is dated 1957, but it has recently been republished (although I think it is quite expensive, unfortunately).
+ +Overall, the valve is still an almost mystical thing, but in all honesty, modern amplifiers using transistors or MOSFETs are so vastly superior in terms of fidelity, efficiency and reliability, that I really don't see what all the fuss is about. Having said this, I was using a valve preamplifier on my own system until recently.
+ +There is no doubt that valves do have some very nice characteristics, and for guitar amplifiers there are few guitarists who would argue otherwise. A 'soft' overload behaviour means that a valve amp does not sound as harsh as a transistor amp when it is overdriven - which is great for guitar, but a hi-fi should never be overdriven anyway, so the point is moot.
+ +The problems that befall valves are many, and include ...
+ +On the positive side, valve amplifiers have a 'warm' sound, partly because of the low order harmonic distortion introduced. A good valve amp will also have a very wide bandwidth, and will have an easy job driving loads that might cause some solid-state equipment to have severe heartburn (or just blow up on the spot).
+ +At low levels, valve equipment has vanishingly small distortion levels, and when all is said and done, there is something nice about little glass tubes, with little lights inside, making your music. For more on the topic of valves, see the Valve Index.
+ +Overall though, valves are an expensive, fragile and unreliable way to amplify anything these days. Well designed, modern 'solid state' equipment will easily surpass the best valve gear from any era, and even rather pedestrian circuits can easily beat the best valve designs for noise and distortion.
+ +![]() ![]() |
![]() | + + + + + + + |
Elliott Sound Products | +Amplifier Basics - How Amps Work (Part 2) |
![]() ![]() |
Since it was invented, the transistor (from 'transfer resistor') has come a long way. Early transistors were made from germanium, which was 'doped' with other materials to give the desired properties required for a semiconductor. In the beginnings of the transistor era, nearly all devices were PNP (Positive Negative Positive), and it was very difficult to make the opposite (NPN) polarity. The NPN transistors that were available at that time were low power and did not work as well as their PNP counterparts.
+ +When silicon was first used, the opposite was the case, and for quite some time the only really high power devices available were all of silicon NPN construction. More recently, it has become possible to make NPN and PNP transistors that are almost identical in performance. Germanium is rarely used any more, although some examples are still available.
+ +All transistors have three 'elements':
+ +A transistor can be represented as two diodes, with a junction in the middle. This is shown for both polarities in Figure 2.1. This is only an analogy, and connecting two discrete diodes in this manner will not produce a transistor, because the point where they meet must be a common junction on the same piece of silicon (or germanium) - hence (in part) the term Bipolar Junction Transistor. The 'bipolar' term means that transistors use 'charge carriers' of both polarities - positive and negative, or minority and majority.
+ +Since the base to collector junction is reverse biased in normal operation, there will be no current flow. It is the action of injecting current into the base that causes current flow in the collector circuit. I do not intend to explain the exact conduction mechanism, as it is somewhat outside the scope of this article.
+ +This is very convenient, because it gives us an easy way to check if a transistor is likely to be good or bad, simply by measuring the 'diodes'. Early PNP germanium devices would actually work equally well if the emitter and collector were reversed, but devices are now optimised to maximum performance, so this trick is not as successful (it does still work, but the device gain is much lower when the terminals are reversed).
+ +To make the transistor actually do something useful, it is necessary to bias it correctly. This is done (having selected a suitable collector resistance) simply by applying enough base current to ensure that the collector is at 1/2 the supply voltage. In the same way that the plate load resistor determines the output impedance of a valve amplifier, the collector resistor determines the output impedance of a transistor amplifier. Unlike a valve, the transistor is not said to have a 'collector resistance' as in the equivalent resistance between emitter and collector, because this is not relevant to the operation of a transistor.
+ +Figure 2.2 shows three methods of biasing a transistor, wired in 'common emitter' configuration ¹. Of these Figure 2.2a is the least usable, because there is no mechanism to ensure that the circuit will be repeatable with different devices or with temperature. Variations caused by temperature are (and always have been) a real problem, and it is necessary always to ensure that the circuit has some feedback mechanism for DC operating conditions to ensure stability. Different transistors (of the same type and even from the same manufacturing run) will have different gains, and this, too, must be compensated for.
For the three circuits below, assume that the gain of the transistor is exactly 100. This means that for 1mA of base current, 100mA of collector current will flow. The emitter current is the sum of the base and collector currents. To bias the transistor we need only meet this criterion (in theory), and everything will be well. With a supply voltage (Vcc) of 20V, we want to have 10V at the collector, to allow maximum voltage swing. This allows the output to swing towards +20V or 0V, although the signal will be badly distorted well before either extreme is reached.
+ +Figure 2.2A is unusable in practice, even though it appears to satisfy the criteria for correct operation. Figure 2.2B is a simple way to achieve (acceptably) stable bias, but has some drawbacks. Because the bias resistor (Rb) is supplied from the collector circuit, it will have some of the collector current flowing in it. This will introduce negative current feedback, which at DC stabilises the circuit, but with the AC signal makes the input impedance very low, as well as reducing gain for any finite value of source impedance. This is not necessarily a drawback, however, as the feedback also reduces the distortion components.
+ +This problem is overcome with the circuit in Figure 2.2C, with a bias divider providing a fixed voltage reference, and the emitter resistor (Re) providing stabilising feedback as we saw with the valve voltage amplifier. In the same way as with a valve, this also provides feedback, increasing linearity and reducing gain. With a transistor we get one additional effect - the input impedance is increased (more on this subject later). Again, to achieve maximum gain, it is common to place a capacitor in parallel with Re to defeat the feedback for AC signals, allowing maximum gain.
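The bias arithmetic for a divider-biased stage like Figure 2.2C can be sketched as follows. The resistor values here are illustrative assumptions, not the values from the figure:

```python
# Bias arithmetic for a divider-biased common-emitter stage (a sketch with
# assumed component values, aiming for a collector voltage near Vcc/2).

VCC = 20.0
VBE = 0.65            # typical silicon base-emitter drop
R1, R2 = 33e3, 6.8e3  # hypothetical divider from Vcc to base to ground
Re = 1e3              # emitter resistor
Rc = 3.3e3            # collector load

vb = VCC * R2 / (R1 + R2)  # base voltage set by the divider (~3.4V)
ve = vb - VBE              # emitter sits one diode drop below the base
ie = ve / Re               # emitter (and near enough collector) current, ~2.8mA
vc = VCC - ie * Rc         # quiescent collector voltage, ~10.9V
```

If the transistor's gain changes, `ie` barely moves (the divider holds the base voltage), which is exactly the stabilising feedback described above.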
+ +To bias a PNP device, we use exactly the same circuitry, but the supply polarity is reversed, so the collector (and base) will have a negative voltage with respect to the emitter.
+ +One of the major differences between valves and transistors is that once we have decided on a suitable biasing circuit (or specified a gain from the amplifier), we can make device substitutions with little or no change in performance, provided the transistors have similar basic parameters. Often the same circuit will work just as well with perhaps 10 or 20 different devices, all from different manufacturers.
+ + +I shall only discuss the basic characteristics of transistors (as with valves), and there is really only one variable parameter and two fixed parameters (which are the same for every silicon transistor) to deal with. With transistors, the parameters are not as interactive as with valves, and the circuit gain is not as reliant on the device gain as with valves. In the same way as with valves, there are small signal devices (low power), working all the way up to power devices, which can have collector current ratings of 50 to over 100A for some of the very large power transistors.
+ +As stated before, the gain of a transistor is dependent on collector current, but will normally be applicable over a fairly wide range. The gain normally falls at very low currents (compared to the device maximum), and again at high current (approaching the maximum rated collector current for a given device).
+ +The signal transfer curve is similar to that of a valve, and is shown in Figure 2.3. There is generally less distortion in the linear part of the curve, but because of the lower operating voltage, a transistor amp must work closer to the supply rail and earth, so distortion may be higher with simple circuits such as those in Figure 2.2 than with an equivalent valve amplifier.
+ +The major cause of distortion in small signal transistor amplifiers is the variation in the internal emitter resistance (re). Because transistors can tolerate a wider range of supply voltage and operating current than valves, it was common (when transistors were new and frightfully expensive) to try to extract as much voltage gain as possible from each device. This is no longer an issue, but the underlying problem is still there and it is necessary to take steps to prevent distortion. It's common to operate transistors at a constant current to minimise distortion. Very high gain circuits with global feedback are now the most common with transistor circuits, which renders the circuit immune from almost any variation of the device parameters, whether intrinsic (internally fixed) or manufacture dependent.
+ +The gain of a transistor stage is approximately equal to the collector resistance divided by the emitter resistance (including the internal resistance re). So for the circuit of Figure 2.2c, the gain will be 9.75 without the emitter bypass capacitor, or about 384 with it installed. The distortion will be much higher with the emitter bypass in place, and it is uncommon to see these circuits any more.
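The arithmetic can be sketched as below. The internal emitter resistance re is roughly 26mV divided by the emitter current (in mA) at room temperature, and the resistor values are assumptions chosen to land near the figures quoted:

```python
# Gain of a common-emitter stage with and without the emitter bypass
# capacitor. Component values are illustrative assumptions.

def internal_re(ie_ma):
    """Internal emitter resistance in ohms, roughly 26mV / Ie at room temp."""
    return 26.0 / ie_ma

Rc = 3.9e3
Re = 390.0
ie = 2.6                          # mA, assumed operating current
re = internal_re(ie)              # 10 ohms

gain_unbypassed = Rc / (Re + re)  # ~9.75
gain_bypassed = Rc / re           # ~390 - re alone now sets the gain
```

The bypassed gain is much higher, but it now depends on re - which varies with signal current, hence the higher distortion noted above.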
+ +The input impedance of a transistor voltage amplifier is low, and the output impedance is determined by the collector resistance (ignoring any feedback that may be applied from collector to base).
+ +The input impedance is essentially determined by the gain of the device, and the value of emitter resistance (including the internal resistance), and in theory (that word again) is approximately equal to the emitter resistance multiplied by gain. The circuit of Fig 2.2a will therefore have an input impedance in the order of 2600 Ohms, Fig 2.2b will be very low because of the feedback, and 2.2c (without the bypass capacitor) will have an input impedance of 100k - but as this is shunted by the bias resistors, the impedance will actually only be about 12k.
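As a sketch of the input impedance estimate: the transistor looks like hfe × (Re + re) from the base, shunted by the two bias resistors. The divider values below are assumptions (not those of the actual circuit, so the result differs from the ~12k quoted):

```python
# Approximate input impedance of a divider-biased common-emitter stage
# without the bypass capacitor. Divider values are assumed for illustration.

def parallel(*rs):
    return 1.0 / sum(1.0 / r for r in rs)

hfe = 100
Re = 1e3
re = 10.0
R1, R2 = 33e3, 6.8e3  # hypothetical bias divider

z_base = hfe * (Re + re)         # ~101k looking into the base alone
z_in = parallel(z_base, R1, R2)  # the divider dominates: ~5.3k here
```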
+ +A transistor can be thought of as a current controlled current source (CCCS).
The current amplifier is much more common in transistor circuits than with valves, and is called an emitter follower (or occasionally common-collector). Like the cathode follower, the emitter follower has a voltage gain of less than unity, but the shortfall is much smaller. Typically, the gain of an emitter follower circuit will be about 0.95 to 0.99, depending on the operating current. The use of feedback to lower the output impedance further is very common, and output impedances of less than 1 Ohm are quite possible.
+ +Figure 2.4 shows a standard configuration for an emitter follower current amplifier stage. It is common to bias the base to exactly 1/2 the supply voltage, using equal value resistors. I say 'a' standard because there are many different configurations that can be (and are) used, including direct coupling, which is very common with transistor circuits.
+ +One of the great attractions of transistors is their flexibility, which is considerably enhanced by having two polarities of device to work with. Because of this, circuits such as that shown in Figure 2.5 are common (or they were before the advent of opamps). Indeed, opamps themselves use the flexibility of transistors to the full, as can be seen if you have a look at the 'simplified equivalent circuit' often published as part of the specification sheet for many opamps.
+ + +The common base amplifier is something that you rarely see these days. It was also used in valve circuits and was sometimes called a 'grounded grid' amplifier. Input impedance is very low, and the circuit shown has an input impedance of around 50 ohms. It has high gain, and can be used at radio frequencies because there is almost no collector-base feedback (or plate-grid feedback) due to stray (or internal) capacitance. In early designs common base stages were sometimes used for low impedance microphone preamps, or for other low-Z applications. The input capacitor (Cin) needs to be large to pass audio frequencies, due to the very low input impedance. The base capacitor (Cb) connects the base to ground for all AC signals.
+ +As shown, the circuit will have a gain of around 70 times (35dB), but that depends on the source impedance (50 ohms is assumed). It's an interesting circuit overall, but cannot compete with an opamp 'virtual earth' stage, which has an input impedance of close to zero. The common base arrangement was also used in 'cascode' amplifiers, as were common grid valve circuits - indeed, that's where the circuit came from. Cascode designs were mainly used where high gain at radio frequencies was necessary, but have re-emerged in valve audio gear because they (allegedly) sound 'better' than other circuits.
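A rough sketch of the common base gain estimate: with the base grounded for AC, the gain is approximately Rc / (Rs + re), where Rs is the source impedance. The collector load and emitter current below are assumed values, chosen to match the 'around 70 times' figure:

```python
# Approximate gain of a common-base stage. With the base AC-grounded, the
# source impedance appears in series with the internal emitter resistance.
# Component values are illustrative assumptions.

Rc = 3.9e3  # assumed collector load
Rs = 50.0   # source impedance, as assumed in the text
re = 5.0    # internal emitter resistance at an assumed ~5mA

gain = Rc / (Rs + re)  # ~71, in line with the 'around 70 times' quoted
```

This also shows why the gain "depends on the source impedance" - Rs sits directly in the gain equation.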
+ + +The vast majority of circuit 'blocks' used today are combinations of stages. A combined voltage and current amplifier are very common, and these can be found in IC equivalent circuits, as well as many of the older designs that were in general use before opamps took over for the majority of circuitry.
+ +As can be seen, this amplifier uses an emitter follower for the output, is direct coupled within the circuit itself, uses both NPN and PNP devices, and has feedback to set a gain which is dependent only on the ratio of the two resistors Rfb1 and Rfb2. It is this sort of circuit that the opamp came from in the beginning, and there are still ICs (and small power amplifiers) that use similar circuitry internally. Regular readers may even recognise the basic circuit from the Projects Pages - essentially this is a discrete opamp, and will have a very high gain, which is brought back to something sensible by the feedback.
+ +The actual gain is almost entirely dependent on the resistor values (for gains less than about 50 or so), and may be calculated by
    Av = (Rfb1 + Rfb2) / Rfb2     where Av is voltage gain (amplification, voltage)
So to obtain a gain of 20, Rfb1 would be 22k, and Rfb2 1k2 - this is actually a gain of 19.33, representing an error of 0.3dB. This gain is so stable that a completely different set of transistors from a different manufacturer would make no difference to measured gain performance. Other factors, such as noise or distortion must vary with the quality of the active devices, but the changes will generally be very subtle, and may not be noticeable at all, depending on the similarity of the transistors.
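The arithmetic can be checked directly:

```python
# The feedback gain equation, checked with the 22k / 1k2 example, and the
# resulting error in dB relative to the target gain of 20.

import math

def feedback_gain(rfb1, rfb2):
    """Av = (Rfb1 + Rfb2) / Rfb2 for a series-feedback amplifier."""
    return (rfb1 + rfb2) / rfb2

av = feedback_gain(22e3, 1.2e3)        # 19.33
error_db = 20 * math.log10(20.0 / av)  # ~0.3 dB short of the target
```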
+ + +A transistor power amplifier uses (typically) another configuration for the input stage. This is called a 'long tailed pair', (LTP) and acts as both the input stage and error amplifier (Q1 and Q2). This circuit operates in current mode, so there is little output voltage to be seen from its output.
+ +The second stage (Q3) is a Class-A amplifier, and is responsible for a large proportion of the overall gain of the circuit. Notice the current sources that are typically used for the LTP and Class-A amp sections. These are commonly made using transistors and maintain a constant current regardless of the voltage at the collector. If the current were truly constant, this implies that the impedance is infinite (which means that the gain of the transistor stage is also infinite!), and although this is not the case in reality, it will still be remarkably high.
For more information on how current sources are constructed, see Section 5.1.
The output stage (Q4 and Q5) is typically a pair of complementary emitter followers, which must be correctly biased to ensure that as the signal passes from one transistor to the other, there is no discontinuity. This form of operation is known as Class-AB, since the amp operates in Class-A for very low level signals, then changes to Class-B at higher levels. Any discontinuity while passing the signal from one transistor to the other is the cause of crossover distortion, and for many years gave transistor amplifiers a bad name in the audio world. With proper biasing and properly applied feedback, crossover distortion can be reduced dramatically - although never eliminated completely - and amplifiers with distortion levels of well below 0.01% are common.
+ +The resistors at the emitters of the output transistors help to maintain a stable bias, and also introduce some local feedback to linearise the output stage. This is a simplified circuit, and in reality the output stage will usually consist of multiple transistors, commonly a driver transistor followed by the output transistor itself. This does not change the operation of the circuit, but simply gives the output stage more gain, so it does not load the Class-A driver too heavily (this will result in greatly increased distortion).
+ +Like the previous example, the gain is entirely dependent on the ratio of Rfb1 and Rfb2. As shown, the amp in Figure 2.6 is DC coupled, meaning that it will amplify any voltage from DC up to its maximum bandwidth. Not shown on this circuit are the various components needed to stabilise the circuit to prevent oscillation at high frequencies - often in the MHz range. Such oscillation is a disaster for the sound, and will quickly overheat and destroy the output transistors.
+ +There are also transistor amplifiers that operate in Class-A, which means that the output transistors conduct all the time, and are never turned off. This can produce distortion levels that are almost impossible to measure, but this is at the expense of efficiency, and Class-A amplifiers will get very hot while doing nothing. Unlike the more common Class-AB amplifier, they will actually get slightly cooler as they reproduce a signal, since some of the input power is then diverted to the loudspeaker.
+ + +Just as with valve amplifiers, I have only scratched the surface. Entire books are written on the subject, and range from basic texts used in technical schools, to very advanced tomes intended for university students. Since transistors are easy to work with (and safe), there is much to be gained by experimentation, and you will have the satisfaction of having designed and built a functioning amplifier.
+ +Transistors also have their fair share of problems, and there are some things that they are just not very good at. Some of the major failings include: + +
Again, there are many advantages as well. Transistor amplifiers are very reliable, and can be counted on to give many years of life without requiring even a basic service (most of the time, anyway).
They are also very quiet (generally much quieter than valve amps) and do not suffer from microphony, so room vibrations are not re-introduced into the music. Efficiency is much higher, with lower voltages and no heaters (it's a pity they don't look really nice, though).
+ +Output impedances of 0.01 Ohm are achievable, so loudspeaker damping can be very high. Because transistor amps are very mechanically rugged, they can be installed in speaker boxes, so speaker lead lengths can be very short.
Typical transistor amplifiers have much wider bandwidth than valve amps, because there is no output transformer. This is especially noticeable at the lowest frequencies - a transistor amp can reproduce 5Hz as easily as 500Hz.
+ +![]() ![]() |
![]() | + + + + + + + |
Elliott Sound Products | +Amplifier Basics - How Amps Work (Part 3) |
![]() ![]() |
Now on to FETs and MOSFETs. FET stands for "Field Effect Transistor", and MOSFET means "Metal Oxide Semiconductor Field Effect Transistor". This topic is something of a can of worms, not because of some deficiency in the devices, but because of the huge array of different types. The basic FET types are ...
+ +There are a couple of major sub-classes of MOSFET - lateral and vertical. Lateral MOSFETs are particularly suited to audio applications, as they are far more linear than their vertical brethren, although their gain is lower. Vertical MOSFETs (e.g. HEXFETs and their ilk) are ideally suited to switching applications, and this includes Pulse Width Modulated (PWM) amplifiers.
+ +Note: Further to the material here, I suggest you also read the article Designing With JFETs. It's much more recent than this article, and describes the use of JFETs in some additional detail. It also provides some info that will come in handy when you discover that your favourite JFET is no longer made, something that's depressingly common and it's getting worse.
The terms 'lateral' and 'vertical' refer to internal fabrication methods, so many others you may come across (such as HEXFET®) are essentially variations of the vertical process. This is still not all the possibilities, because there are additional sub-classes as well, particularly with switching MOSFETs. However, for the purpose of a general article on their characteristics and how they work, I will concentrate on the most commonly used versions. This narrows the field, and we are left with both polarities of junction FETs, and both polarities of enhancement mode MOSFETs. With these, we cover the major proportion of current designs, so even though I will be leaving out a lot, the stuff I leave out is not all that common (he says hopefully).
FETs are 'unipolar' devices, in that they use only one polarity of charge carrier, in contrast to bipolar transistors, which use both majority and minority charge carriers (electrons or 'holes', depending on the polarity). FETs are far more resistant to the effects of temperature, X-rays and cosmic radiation (any of these can cause the production of minority carriers in bipolar transistors).
+ +I shall concentrate only on three terminal FETs, and the terminals are ...
+ +There is no simple equivalent circuit for FETs (as there is for transistors), but this is of no consequence. The gate is the controlling element, and affects the electron flow not by amplifying a current (as in the transistor), but by the application of a voltage. The input impedance of junction FETs is very high at all usable frequencies, but MOSFETs are different. They have an almost infinite input resistance, but appreciable capacitance between the gate and the rest of the device. This can make MOSFETs hard to drive, because the capacitive loading makes most amplifier devices unhappy.
The junction FET is common in the inputs of high performance opamps, and offers extremely high input impedance. This is the case for discrete FETs as well, and a simple voltage amplifier using a junction FET and another using a power MOSFET are both shown in Figure 3.1. Both devices are N-Channel; note that the arrow points in a different direction for each. The arrows point in the opposite direction for a P-Channel device, and all polarities are reversed. Vdd is +20V.
+ +Junction FETs are depletion mode devices, and (like all depletion mode FETs and MOSFETs) can be biased in exactly the same way as a valve. Depletion mode means that without a negative bias signal on the controlling element (the gate), there will be current flow between the drain (equivalent to plate or collector) and source (equivalent to cathode or emitter).
+ +An enhancement mode device remains turned off until a threshold voltage is reached, after which the device conducts, passing more current as the voltage increases. Although there are MOSFETs made for low power operation, the majority (in audio, anyway) are power devices. These are almost exclusively enhancement mode, and can be capable of very high current.
+ +In Figure 3.1, the power MOSFET is an enhancement mode device, and the junction FET is depletion mode. These are the most commonly used in audio. Enhancement mode power MOSFETs are also used in switching power supplies, and are far better than bipolar transistors in this role. They are faster, so switching losses are not as great (therefore the MOSFETs run cooler), and they are more rugged, and able to withstand abuses that would kill a bipolar transistor almost instantly.
+ +This ruggedness (coupled with the freedom from second breakdown effects), means that MOSFETs are very popular as output devices for high power professional amplifiers. In this area, the MOSFET is second to none, and they are firmly entrenched as the device of choice for high power.
+ +This is not to say that this is the only place MOSFETs are used. There are many fine audiophile power amps (and even preamps) that use power MOSFETs, and there are many claims that they are sonically superior to bipolar transistors (again, a debate that I will not discuss here).
+ +Somewhat like valves, FETs and MOSFETs are very device dependent, and it is not normally possible to just substitute one device for a different type. Also like valves, the gain that can be expected from a voltage amplifier circuit is device dependent, and the manufacturer's data sheet (or testing) is the only way that one can be sure of obtaining the gain required in a given circuit.
+ + +The characteristics of FETs must be covered in two parts, since we are dealing with two quite different devices. The first will be the junction FET, and as with transistors, I shall only describe the N-Channel, but virtually identical P-Channel devices are available (although not as commonly used).
+ +Initially, so the transfer characteristics of the two devices can be seen side by side for comparison, Figure 3.2 shows a fairly typical device from each 'family'. The junction FET data is from a 2N5457, and the MOSFET is an IRFP240 (a vertical MOSFET - more suited to switching applications).
Rather than show the input and output signals superimposed on a graph, this time I show only the graph itself. These are excerpts from manufacturers' data, but with a small catch - Figure 3.2b has the drain current displayed on a logarithmic scale, so the linearity of the device cannot be seen properly. If this graph were redrawn with a linear scale, it would show that linearity is best at higher currents (on the graph shown it looks the other way around), and that the device becomes almost perfectly linear at drain currents above about 3A.
+ +Note that because the junction FET is depletion mode, drain current is highest at 0V gate-source voltage. The (most common) MOSFET on the other hand is enhancement mode, so at 0V gate-source, there is no current. Conduction starts at 4V, and by 6V the drain current is 10A (for example). This varies by MOSFET type, and they are available with low threshold (suitable for driving from 5V logic) or 'normal' threshold, requiring up to 10V or so for full conduction.
(Figure 3.2 - JFET and MOSFET transfer characteristics)
The term siemens (S) is now replacing the mho as the unit of transconductance in most literature: 1 S = 1 mho (1 µS = 1 µmho). For the above graphs, it may be worked out that the junction FET has a transconductance of 1,500 µS, and for the MOSFET it is approximately 9,000 µS (9,000 µmhos).
Like valves, FET data sheets provide gain information as gm (mutual conductance - in µMhos). The junction FET shown has a gm of (typically) 1,500 µMhos (in the graph shown it is actually closer to 1425 µMhos in the linear section), which translates to about 1.5 mA/V.
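As a sketch of how transconductance translates into stage gain: a simple common-source stage has a voltage gain of roughly gm × Rd, reduced if the source resistor is left unbypassed. This is not a circuit from the article - the drain and source resistor values below are assumptions for illustration, with gm taken from the JFET figure quoted above.

```python
# Sketch: voltage gain of a common-source JFET stage from its
# transconductance. The 10k drain load and 1k source resistor are
# illustrative assumptions; gm is the 1,500 uS figure quoted above.

def common_source_gain(gm_s, rd_ohms, rs_ohms=0.0):
    """Av = gm*Rd / (1 + gm*Rs); Rs is an unbypassed source resistor."""
    return gm_s * rd_ohms / (1.0 + gm_s * rs_ohms)

gm = 1500e-6                               # 1,500 uS, i.e. 1.5 mA/V
print(common_source_gain(gm, 10e3))        # bypassed source: gain of 15
print(common_source_gain(gm, 10e3, 1e3))   # 1k unbypassed: gain falls to 6
```

The same arithmetic shows why the MOSFET (approx. 9,000 µS) gives far more gain from the same drain load.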
+ +The most common of the quoted parameters for junction FETs are
+ +The process of amplification is almost identical to that of a valve, except that the voltages are lower. The device is biased in the same way (although fixed bias can also be used). This means that the gate must be reverse biased with respect to the source, with the gate having the opposite polarity of the source-drain voltage.
+ +FETs offer low noise, especially with high impedance inputs, and in this respect are the opposite of bipolar transistors, which are generally at their best with low source impedance.
+ +Junction FETs are predominantly low power, although there are some high power devices available. These are uncommon in audio applications.
+ +It's notable (and regrettable) that many manufacturers have 'rationalised' their range of JFETs. Many of the high performance devices we used to be able to use in (for example) very low noise circuits have disappeared, and you can almost see JFETs vanishing from supplier catalogues as you watch. While I have never believed that JFETs have some 'magical' property that makes them sound better than anything else, it would have been nice if the manufacturers hadn't just decided that we don't need these specialised devices any more. I only have a couple of designs that use FETs, and it's now difficult to find devices that are suitable.
+ + +Again, MOSFET data sheets also provide information similar to junction FETs, but there are more items of importance to the designer. The most useful of these are
+ +Enhancement mode MOSFETs pass virtually no current when there is no gate voltage present. To conduct, a voltage must be applied between source and gate (of the same polarity as the drain voltage). Once the threshold has been reached, the device will start to conduct between drain and source.
+ +At increasing gate voltages, the drain current increases until either a) the maximum permissible drain current or total dissipation limit is reached, or b) the drain voltage falls to its lowest possible value. In this instance, since the source-drain channel is now fully conducting, the value of RDS(on) determines the voltage.
+ +Typical power MOSFETs offer extremely low on resistance, with values of less than 0.2 Ohm being fairly typical. There are many devices with much lower values (<50mΩ), but this is only important in switching circuits. In an audio amp, the MOSFETs should never be turned completely on, since this means the amplifier is clipping.
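To get a feel for why RDS(on) matters in switching service, the conduction loss is simply I²R. A minimal sketch, with the current value an assumption for illustration:

```python
# Illustrative conduction-loss arithmetic for a switching MOSFET:
# P = I^2 * Rds(on). The 10A figure is an assumption for the example;
# the resistances are the 0.2 ohm and 50 mohm values quoted above.

def conduction_loss_w(i_amps, rds_on_ohms):
    return i_amps ** 2 * rds_on_ohms

print(conduction_loss_w(10, 0.2))    # 10A through 0.2 ohm -> 20 W
print(conduction_loss_w(10, 0.05))   # the same current at 50 mohm -> 5 W
```

In a linear audio stage the device never sits fully on, so this figure is mostly of interest for Class-D amplifiers and switchmode supplies.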
+ +Another area that must be addressed with MOSFETs is the voltage between gate and source. Because the gate is insulated from the channel by a (very) thin layer of metal oxide, it is susceptible to damage by static discharge or other excessive voltage. It is common to include a zener diode between source and gate to ensure that the maximum voltage cannot be exceeded. Voltage spikes in excess of the breakdown voltage of the insulating layer will cause instantaneous failure of the device.
+ + +Again, I have shown both a junction FET and a MOSFET in Figure 3.3, both common-drain or source-follower circuits. As can be seen, the junction FET is biased almost identically to a valve, but all voltages are much lower. The MOSFET requires a positive voltage, and this must be greater than the source voltage, by an amount that takes the characteristics of the MOSFET into consideration. For the device characteristics shown in Figure 3.2 this means that at a current of 100mA, the gate must be 4V higher than the source.
For the JFET source follower, the bypass capacitor (Cb) is not always used, in which case the output would normally be taken from the source. When Cb is included, the output level is the same at both ends of Rs1, and input impedance is much greater because Rg is bootstrapped. The increase in input impedance depends on the transconductance of the FET. For the JFET circuit shown (with Rg being 1MΩ), input impedance is about 5MΩ if Rs1 is not bypassed, rising to around 18MΩ with Cb included.
Cb needs to be large enough to ensure that the AC voltage across it remains small at the lowest frequency of interest. For example, if Rs1 is 1k, Cb must be at least 10µF (a -3dB frequency of 16Hz). A higher value is recommended to minimise low frequency distortion. For normal audio work, I'd use at least 33µF (still assuming 1k for Rs1).
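The capacitor sizing above follows from the standard -3dB formula, f = 1 / (2πRC). A quick check using the values in the text:

```python
import math

# Sketch of the bypass-capacitor sizing described above: the -3dB point
# of Cb against Rs1 should sit well below the lowest audio frequency.

def bypass_f3db(rs_ohms, cb_farads):
    """-3dB frequency of the source bypass: f = 1 / (2*pi*Rs1*Cb)."""
    return 1.0 / (2.0 * math.pi * rs_ohms * cb_farads)

print(round(bypass_f3db(1e3, 10e-6), 1))   # 1k with 10uF -> ~15.9 Hz
print(round(bypass_f3db(1e3, 33e-6), 1))   # 33uF (recommended) -> ~4.8 Hz
```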
+ +Included in the MOSFET version is a zener for protection of the gate insulation. A 10V zener is used, as this gives good protection and is still able to let the maximum possible MOSFET current flow. A 6V zener could have been used, and this would still allow current up to 10A, which is far more than can be achieved from this simple circuit.
In exactly the same way as a power valve can be used in single-ended Class-A, so too can a MOSFET. A simple circuit is shown in Figure 3.4 which will provide about 10W of audio. Using a constant current source as a load (as shown) gives better efficiency than a resistor, and improves linearity. The distortion from a circuit such as that shown will be roughly the same as that from a single ended triode valve circuit. Overall efficiency will be higher, since there is no cathode bias resistor needed, and no heaters as with a valve. Performance is not up to hi-fi expectations!
Although there are a few, all-MOSFET power amplifiers are uncommon. Most use a combination of bipolar transistors (for the input and gain stages), and MOSFETs for the output devices. This seems to be the most popular circuit arrangement, so I will concentrate on this. Figure 3.5 shows a fairly typical arrangement (in simplified form), and the operation of this is almost identical to that of an amplifier using bipolar transistors in the output. Note that emitter followers are needed to be able to provide the low impedance drive that MOSFETs need, although in some circuits they are not used. Instead, the Class-A driver stage (Q3) is operated at a higher than normal current to allow it to drive the MOSFETs properly.
+ +One problem with this arrangement is that the gate to source voltage represents a circuit loss, so the power supply voltage needs to be typically ±6V higher than the required peak output voltage to the load to turn on the MOSFETs fully. Although this is not a major problem, it does increase dissipation in the output stage, and the loss increases with lower impedance loads.
Some (especially very high power) amps get around this by using a low current (but higher voltage) secondary power supply for the drive circuit, and the main high current supply for the MOSFETs. In an amp using ±50V, 20 Amp main supplies, the secondary supply might be ±60V, but capable of perhaps 1A maximum.
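The headroom arithmetic above can be sketched as follows. The 6V gate-source overhead is the figure from the text; the output power and load impedance are illustrative assumptions.

```python
import math

# Sketch of the supply-headroom point above: a MOSFET output stage needs
# rails roughly Vgs (taken as 6V per the text) above the peak output
# voltage. The 100W / 8 ohm target is an assumption for the example.

def required_rail_v(power_w, load_ohms, vgs_overhead=6.0):
    v_peak = math.sqrt(2.0 * power_w * load_ohms)   # peak of a sine wave
    return v_peak + vgs_overhead

print(required_rail_v(100, 8))   # 100W into 8 ohms -> 40V peak + 6V = 46V rails
```

A separate higher-voltage drive supply, as described above, removes this overhead from the main high-current rails.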
+ +As with the bipolar amp (did you notice how similar they are?), I have not included components for stability. These are typically the same as for a standard bipolar transistor amp, but will usually include 'stopper' resistors in series with the gates of the MOSFETs, and sometimes additional capacitance to prevent parasitic oscillation - the need for these varies from one device type to the next.
Again, the surface is only barely scratched. The junction FET (aka JFET) is ideally suited to circuits where high impedances are expected, and will give the lowest noise. They are an invaluable electronic building block when used where they excel - providing extremely high input impedance.
+ +Like all devices so far, JFETs have their limitations ...
+ +There is generally an ideal (or close to ideal) amplifying device for every application, and when used properly, the JFET is extremely versatile and at its best when high impedances are needed. If you have a need to send an amplifier into space, then JFETs are preferred due to their greater 'radiation hardness'. However, parameter spread is high, so no two JFETs can ever be assumed to be the same, even from the same batch. Where operation is critical, JFETs must be matched or provided with an adjustable source resistance to allow the operating point to be established.
JFETs (in fact all FETs) are better behaved than bipolar transistors when heated, and problems of thermal runaway are not usually encountered with these devices.
Most of the 'better' JFETs for audio use have now disappeared from the market. The 2SK170 was revered in some quarters, and was the 'go to' device for very low noise in many different applications. The original and any replacements that were offered subsequently are now obsolete. You might be able to buy JFETs with '2SK170' printed on them, but what's inside is anyone's guess. One thing you can be fairly sure of - it almost certainly will not be a genuine 2SK170. The LSK170 made by Linear Systems is available, and is as good as the original.
Even many 'pedestrian' JFETs have all but vanished from suppliers' inventories, leaving you with limited choices. Some are available if you can handle SOT (small outline transistor, SMD) packages, but even there the range is nothing like it used to be. This situation continues to get worse with each passing year.
The MOSFET is one of the most powerful of the current range of amplifying devices, with extraordinary current handling capability. Ideally suited to very high power amplifiers, switchmode power supplies and Class-D amplifiers, where extremes of operating conditions are regularly encountered, the MOSFET has no equal. The possible exception is the Insulated Gate Bipolar Transistor (IGBT), which is a hybrid device, as the name implies. IGBTs are not covered in these articles.
+ +... And, as always, there are limitations ...
+ +To some extent, all the above can be forgiven when you really need the capabilities of a MOSFET. The freedom from second breakdown and the massive current capabilities of MOSFETs are unmatched by any other active device. With a properly designed drive circuit, MOSFETs are also very fast, capable of performance that is generally superior to that of bipolar transistors. This is not very helpful in audio, but is essential for switching circuits. Note that the 'freedom from second breakdown' is (or was) often cited by manufacturers, but there is a failure mechanism that's almost identical, and is invoked when a switching MOSFET is used in linear mode. Most manufacturers state that their MOSFETs are not intended for linear operation. If you decide to do so, then be prepared for unexplained failures.
+ +Coupled with a positive temperature coefficient that can stop thermal runaway in a linear circuit (when proper precautions are taken), the (lateral) MOSFET is almost indestructible, provided that you ensure the gate voltage is kept below the breakdown voltage. It's also essential to keep the drain voltage below the maximum specified.
+ +The positive temperature coefficient can be a help in audio circuits, although it can be a problem in switching power supplies, since the 'on' resistance also increases with temperature, and in a switch-mode power supply this can cause thermal runaway (exactly the reverse of bipolar transistors in this application).
Switching MOSFETs are by far the most common now, with many of the earlier 'lateral' MOSFETs now unavailable. Project 101 was designed to use lateral MOSFETs, and it simply won't work with switching MOSFETs (not least because the gate and source pins are reversed for the two types). Switching MOSFETs are not designed for linear operation, and have to be severely derated to prevent failure.
Previous (Part 2 - Bipolar Transistors) | Next (Part 4 - Opamps)
Elliott Sound Products | Amplifier Basics - How Amps Work (Part 4)
No discussion of amplifying devices would be complete without a discussion of opamps (aka op. amps). Although not a single device, the opamp is considered to be a building block, just like a valve or any transistor.
+ +The format I used for the other discussions is not appropriate for this topic, so will be changed to suit this most versatile of components. I shall not be covering esoteric or special purpose types, only the basic variety, as there are too many variations to cover.
+ +The operational amplifier was originally used for analogue computers, although at that time they were made using discrete components. Modern (good) opamps are so good, that it is difficult or impossible to achieve results even close with discrete transistors or FETs. However, there are still some instances where opamps are just not suitable, such as when high supply voltages are needed for large voltage swings.
+ +The majority of power amplifiers (whether bipolar or MOSFET) are in fact discrete opamps, with a +ve input and a -ve input. You tend not to see this, but have a look at Figure 3.5 again. The signal is applied to the +ve input at the base of Q1. The base of Q2 is the -ve input, and is used for the feedback signal, exactly the same as you will see in Figure 4.1a below.
+ +Unlike the other devices, opamps are primarily designed as voltage amplifiers, and their versatility comes from their input circuitry. Opamps have two inputs, designated as the non-inverting and inverting (or simply + and -).
+ +When wired into a conventional amplifier circuit, the opamp has one major goal in its little life ...
Make both inputs the same voltage

If some swine of a designer has made this impossible (very common with a lot of circuits), the opamp then takes another approach ...
Make the output the same polarity as the more positive input

The latter condition needs a small explanation. If the +ve input is more positive, then the output will swing to the positive supply rail (or as close as it can get). Should the -ve input be more positive, then the output will swing to the negative supply rail. The difference between the two inputs may be less than 1mV! Simple as that.
I call these "The First and Second Laws of Opamps". These two statements describe everything an opamp does, and just knowing this makes working out what most common circuits do a simple process. There is actually nothing especially complex about opamps, unless you look at the 'simplified' circuit diagram often included in data sheets. Don't do this, as it is too depressing. (By the way, the first statement is not strictly true of real-life devices, which will always have some error, however without very specialised equipment you will be unable to measure it.)
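The two 'laws' can be caricatured in a few lines of code - a toy model only, with the rail voltage and open-loop gain being illustrative assumptions, not the figures for any real device:

```python
# Toy model of the 'Second Law' above: with nothing constraining the
# inputs, the enormous open-loop gain slams the output to whichever rail
# matches the more positive input. +/-15V rails and a gain of one million
# are assumptions for illustration.

def opamp_open_loop(v_plus, v_minus, rail=15.0, gain=1e6):
    out = gain * (v_plus - v_minus)      # huge open-loop gain ...
    return max(-rail, min(rail, out))    # ... clipped at the supply rails

print(opamp_open_loop(0.001, 0.0))   # +ve input 1mV higher -> +15 (positive rail)
print(opamp_open_loop(0.0, 0.001))   # -ve input 1mV higher -> -15 (negative rail)
```

With feedback connected (as in Figure 4.1), the same mechanism instead drives the output to whatever voltage makes the two inputs match - the 'First Law'.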
Modern opamps (the good ones, anyway) are as close as anyone has ever got to the ideal amplifier. The bandwidth is very wide indeed, with very low distortion (0.00003% for one of the Burr-Brown devices), and low noise. Although it is quite possible to obtain an output impedance of far less than 10 Ohms, the current output is usually limited to about ±20mA or so. The supply voltage of most opamps is limited to a maximum of about ±18V, although there are some that will take more, and others less.
Depending on the opamp used, gains of 100 with a frequency response up to 100kHz are easily achieved, with noise levels being only very marginally worse than a dedicated discrete design using all the noise reducing tricks known. The circuits shown below have frequency response down to DC, with the upper frequency limit determined by device type and gain.
Figure 4.1 shows the two most common opamp amplifier circuits. The first (4.1a) is non-inverting, and is the better connection for minimum noise. The voltage fed back through Rfb1 will cause a voltage to be developed across Rfb2. The output will correct itself until these two voltages are equal at any instant in time. It does not matter if the signal is a sinewave, square wave, or music, the opamp will keep up (provided you stay within its capabilities). If the speed of the opamp is not significantly higher than the rate of change of the input (generally a factor of 10 is sufficient - i.e. the opamp needs to be 10 times faster than the highest frequency signal it is expected to amplify), the output will become distorted. At voltage gains of 10 or less, almost any opamp will be able to keep up with typical audio signals, but (and be warned) this is no guarantee that they will sound any good.
+ +Input impedance is equal to Rin, and voltage gain (Av) is calculated from ...
    Av = (Rfb1 + Rfb2) / Rfb2   or ...   Av = (Rfb1 / Rfb2) + 1
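A quick check that the two forms of the gain expression are identical (the resistor values here are arbitrary assumptions for illustration):

```python
# The two forms of the non-inverting gain expression above are the same
# thing. The 22k / 1k feedback pair is an illustrative assumption.

def av_form1(rfb1, rfb2):
    return (rfb1 + rfb2) / rfb2

def av_form2(rfb1, rfb2):
    return rfb1 / rfb2 + 1.0

print(av_form1(22e3, 1e3))   # 22k and 1k -> gain of 23
print(av_form2(22e3, 1e3))   # identical result from the second form
```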
The second circuit (4.1b) is an inverting amplifier, and is commonly used as a 'summing' amplifier - the output is the negative sum of the three (or more) inputs. It is also called a 'virtual earth' mixer, because the -ve input is a virtual earth (remember my 'First law of opamps'). If the +ve input is earthed (grounded), then the opamp must try to keep the -ve input at the same voltage - namely 0V. They are used in many diverse applications, and are common when a signal polarity must be inverted.
It does this by adjusting its output until the current flowing through Rfb is exactly the same (but of the opposite polarity) as the sum of the currents flowing in from each Rin - they must all sum to zero, as they are equal and opposite. This is done with amazing speed, and good opamps will continue to fulfil the First Law at frequencies of 100kHz or more (depending on gain). Lesser devices will start to have trouble, and the appearance of a measurable voltage at the -ve input is an indication that the opamp can no longer keep up with the signal.
+ +Input impedance is equal to RinX (where X is the number of the input), and voltage gain is calculated from ...
    Av = Rfb / RinX
Multiple inputs can all have different gains (and input impedances). There are two catches to this circuit. The first is that if the source does not have an output impedance significantly lower than Rin, then the gain will be lower than expected. The other, not always realised, is that if the circuit is configured for a gain of 1 (actually it is technically correct to refer to it as -1), Rin1, Rin2 etc. will all be equal to Rfb. If the circuit has 10 inputs, then from the opamp's perspective it has a gain of 10, and its frequency response and noise will reflect this.
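The summing behaviour and the multi-input gain point above can be sketched as follows. All resistor and signal values are illustrative assumptions:

```python
# Sketch of the virtual-earth mixer arithmetic: the output is the negative
# sum of each input scaled by Rfb/Rin, and the gain the opamp itself works
# at grows with the number of inputs. All values here are assumptions.

def mixer_output(v_inputs, r_inputs, rfb):
    return -sum(v * rfb / r for v, r in zip(v_inputs, r_inputs))

def noise_gain(r_inputs, rfb):
    """Gain seen from the opamp's perspective: 1 + sum(Rfb/Rin)."""
    return 1.0 + sum(rfb / r for r in r_inputs)

rfb = 10e3
rins = [10e3] * 10                     # ten unity-gain (gain of -1) inputs

print(round(mixer_output([0.1, -0.2, 0.3], rins[:3], rfb), 6))  # -> -0.2
print(noise_gain(rins, rfb))  # ~11, in line with the 'gain of 10' noted above
```

The noise-gain figure is why a many-input virtual-earth mixer is noisier and slower than the same opamp with a single input.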
+ +There are literally hundreds of different opamp circuit configurations. Feedback circuits with frequency dependent components (capacitors or inductors) make the opamp into a filter, or a phono equaliser, or almost anything else.
+ +For an in-depth look at opamp circuits, see the Designing With Opamps series.
+ + +Opamps even come in power versions, using a TO-220 (or other specialised) case, and are typically capable of around 25W to 50W or more into an 8 Ohm speaker load. These devices, while not necessarily considered to be to audiophile standards, are still very capable, and have been used by many domestic appliance manufacturers in such things as high-end TV sets and even 'high end' hi-fi equipment. Some of the more advanced devices are capable of output power up to 80W. It is very doubtful that even the most 'golden eared' reviewer would pick that an amplifier used a monolithic power amp (power opamp) in a double-blind test.
+ +They typically have distortion figures well below 0.1%, and can be used anywhere a small, convenient and cheap power amp is required. The circuit looks almost identical to that of a small signal opamp, except that a Zobel stabilisation network is used on the output to prevent oscillation. There are several circuits amongst the ESP projects, and PCBs are available for the most popular designs.
Previous (Part 3 - FETs) | Next (Part 5 - Building Blocks)
Elliott Sound Products | Amplifier Basics - How Amps Work (Part 5)
There are some circuits in the world of electronics that are just too useful. While some have been around for many years, others only became practical with the advent of the transistor. The circuits described are a mixture, some are very old, and others much newer.
+ +I shall not go into the history (this is an electronics tutorial, not a history lesson), but will show the various stages in their basic form for each type of circuit.
+ +On the topic of current sources, sinks and mirrors, click here to see the full article describing how they are most commonly used in audio circuits (and why).
+ +The constant current source (or sink) is one of the most versatile and widely used of the circuits shown in this section. The ideal current source provides a current into a load that is independent of the resistance (or impedance) of the load, from zero to infinity. As always, the ideal does not exist, but within the capabilities of the power supply voltage, it is quite simple to do, and surprisingly accurate.
+ +There is no real difference between the two circuits - one sources current (or sinks electrons) or vice versa. Sometimes it might help to consider the circuit 'upside down' to see that there is no real difference, only one of terminology.
As an example, if we wanted to supply a current fixed at 1A into any load impedance, we might use a circuit similar to that in Figure 5.1 - a basic transistor current source. As shown, this will supply 1A into any resistance from zero Ohms up to a little under 50 Ohms. The power supply is the limiting factor - to be able to supply the same current into 1M Ohm would need a 1,000,000V power supply, which is an unrealistic expectation.
+ +The current source or sink can be imagined as a device with infinite impedance - this must be the case if the current remains unchanged even as the load resistance is varied over a wide range. Naturally, the impedance of actual current sources is not infinite, but can easily reach values of many megohms, even in a simple circuit.
The operation of the circuit is simple. If the voltage across the emitter resistor of Q2 attempts to exceed 0.65V (the base turn-on voltage for a silicon transistor), then Q1 will turn on, and short out all base current to Q2 except for exactly that amount required to maintain the specified current of 1A (1A through 0.65 ohms develops 0.65V). If the collector current of Q2 falls, then the voltage across the emitter resistor also falls. This turns off Q1 until the current is again stable at the preset value. (This is only one way to make a current source - there are many others.)
+ +Thermal stability is not good. The emitter-base potential falls at 2mV / degree C, so as the temperature increases, the current will fall from the nominal 1A. At low temperatures, the opposite will occur. A precision voltage reference can be used, or an opamp can monitor the voltage across the resistor, resulting in a much more stable current. Fortunately, in most circuits, it is not that critical, so the circuit of Figure 5.1 is very common.
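The arithmetic behind Figure 5.1 and its thermal drift can be sketched as follows, using the 1A example and the 2mV/°C figure from the text:

```python
# Sketch of the Figure 5.1 current-source arithmetic: the emitter
# resistor sets the current (Re = Vbe / I), and the ~2mV/C fall in Vbe
# with temperature gives the drift described above. Vbe and tempco are
# the figures from the text; everything else follows from them.

V_BE = 0.65           # silicon base turn-on voltage, volts
TEMPCO = -0.002       # Vbe falls ~2mV per degree C

def emitter_resistor(i_amps):
    return V_BE / i_amps

def current_at_temp(i_nominal, delta_t_c):
    """Current drift as Vbe falls with temperature, Re held constant."""
    re = emitter_resistor(i_nominal)
    return (V_BE + TEMPCO * delta_t_c) / re

print(emitter_resistor(1.0))               # 1A needs a 0.65 ohm resistor
print(round(current_at_temp(1.0, 25), 3))  # 25C hotter -> current falls ~8%
```

As the text notes, a precision reference or an opamp servo around the emitter resistor removes most of this drift where it matters.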
+ +Junction FETs, being a depletion mode device, can be used as a current source very easily, as shown in Figure 5.2. Because JFETs are mainly low current devices, the useful range is from about 0.1mA up to 10mA or so. This is ideal for many of the circuits that need a current source. The actual current is dependent on the FET's characteristics, but is sufficiently stable for many non-critical applications. Based on the FET curve shown in Figure 3.2a, this current source will supply a current of about 0.4mA into a load from zero to 72k Ohms. The voltage is also lower, because of the lower voltage rating of most FETs.
+ + +The current mirror is one of the 'new' circuits, and works well with bipolar transistors. It is unusual to see this circuit implemented with valves or FETs, and I will not change this (i.e. I will show transistors only). Figure 5.3 shows a simple current mirror (this version is not very accurate, but is still extremely effective and commonly used).
+ +Any current injected into the collector/base circuit of Q1 (via Ri) will be 'mirrored' by Q2, which will draw the same current through its load resistor (within the capability of the transistor and power supply). Current mirrors are sometimes used as current sources (one less resistor), and are not as dependent on temperature, since both transistors will ideally be at the same temperature. It is not uncommon to use dual transistors (or thermal bonding) to ensure stability.
The long tailed (or differential) pair is an old circuit, and is used with valves, FETs and BJTs. It was originally designed in the valve era, and provides a means for the comparison of two voltages. The long tailed pair (LTP) is used as the input stage of most opamps, and many (if not most) modern power amplifiers.

As can be seen in Figure 5.4, the LTP can be made using valves (A), JFETs (B) or bipolar transistors (C). Valves and JFETs can be self biased as shown, but BJT circuits must have external bias resistors. A pair of MOSFETs could be used, but at the typical currents used (less than 5mA), the gain and linearity would be very poor. Although each circuit is shown using a resistor as the 'tail', in FET and bipolar circuits this is most commonly a current source (or sink if you prefer).

The use of a current source stabilises the overall current, so the device input current is not affected by supply voltage changes, or variations in the input bias voltages.
In each case, the circuit has an inverting and a non-inverting input, and an inverted and non-inverted output. Application of the same voltage and polarity to both inputs at once results in (theoretically) zero output - this is called the common mode signal, and is commonly quoted for opamps as the common mode rejection ratio.
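The common mode rejection ratio quoted for opamps is simply the differential gain divided by the common mode gain, normally expressed in dB. A minimal sketch, with illustrative gain figures that are not taken from any particular device:

```python
import math

# CMRR in dB: ratio of differential gain to common-mode gain.
# The gain figures used below are illustrative assumptions only.
def cmrr_db(a_diff, a_cm):
    """Common mode rejection ratio in decibels."""
    return 20 * math.log10(a_diff / a_cm)

# e.g. a differential gain of 100,000 and common-mode gain of 1
print(f"CMRR = {cmrr_db(100_000, 1.0):.0f} dB")  # 100 dB
```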
The valve and FET versions only require capacitive coupling, as they are self biasing as shown. The bipolar circuit cannot be self biased, and requires the biasing resistors Rb1 and Rb2 for each input.

The output of each version may be taken from either or both outputs, and may be capacitively or direct coupled. Direct coupling is very common with LTP circuits, especially in opamps and audio power amplifiers, where it is the rule, rather than the exception.

When used as the input stage of an amplifier, the LTP uses one input as the signal input, and the other is used for the application of feedback, in the same way as in an opamp.

It's worth noting that the performance of a long tailed pair is dependent on the gain of the active device. BJTs have far greater gain than valves or JFETs, and the performance of a BJT version is generally vastly superior to the valve or FET circuits. Of the three circuits shown, only the BJT will be able to provide close to identical outputs (but with one inverted of course). The other two certainly work, but the level difference between the outputs can be 20% or more.
Sometimes, it is desirable to have an extremely low input impedance, for example where the output impedance of the source is very low. One way to achieve this is to use the control element (grid, gate or base) as the reference, and apply the signal to the cathode, source or emitter (as appropriate). Figure 5.5 shows an example of grounded (or common) grid (A), gate (B) and emitter (C).

Apart from having an extremely low input impedance, this class of amplifier has an additional advantage. The normal capacitance from output to input is bypassed to earth, and no longer acts as a feedback path. Such circuits are therefore capable of a much better high frequency response than when used in the 'conventional' way, and are common in radio frequency circuits. The bypass path is direct for a valve or FET, and is via a capacitance for the bipolar circuit.

All inputs and outputs must be capacitively coupled, unless the preceding circuit is to be direct coupled (unusual) or the output is direct coupled to a follower (quite common).

Because the input impedance is so low, there are few applications in audio, except where this circuit is used in conjunction with a 'normal' amplifier stage. This forms a new circuit, called cascode.
This circuit was developed in the valve era, primarily to obtain better response at high radio frequencies. Valves have capacitance between the plate and grid, and this acts as a feedback path at high frequencies, causing a drop in gain as the frequency is increased. This is the so-called 'Miller' effect. Operating in cascode allows the circuit to have a high input impedance (via the normal grid input), and the grounded grid amplifier (the signal is applied to the cathode) means that there is no feedback from plate to grid, and the grid acts as a shield to prevent feedback to the cathode. The lower half of the stage contributes a relatively small amount of gain, and is not subject to the feedback effect since it is operating as a current amplifier (there is very little voltage swing on the plate, so there is little or no signal to feed back).
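The 'Miller' effect can be put in rough numbers: the feedback capacitance appears at the input multiplied by (1 + stage voltage gain), which is why cascoding, by holding the first device's voltage gain near unity, helps so much. The capacitance and gain values below are illustrative assumptions only:

```python
# Miller effect sketch: the plate-grid (or drain-gate) capacitance is seen
# from the input multiplied by (1 + stage voltage gain).  Cascoding pins
# the first device's voltage gain near unity, so the multiplication
# largely disappears.  The 2 pF and gain figures are assumed values.
def miller_input_pf(c_fb_pf, stage_gain):
    """Effective input capacitance due to the feedback capacitance."""
    return c_fb_pf * (1 + stage_gain)

print(miller_input_pf(2.0, 50))  # ordinary stage with a gain of 50
print(miller_input_pf(2.0, 1))   # cascoded lower device, gain near 1
```

For the ordinary stage the effective input capacitance is over 100pF; cascoded, it is only a few pF, which is where the improved high frequency response comes from.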
As can be seen, the grid of V2 is earthed via the capacitor for all signal frequencies, allowing V2 to operate as common grid. The capacitor (C bypass) is used to ensure that there is no gain lost due to cathode degeneration (local feedback). The cathode of V1 will also be bypassed in many cases, especially where low noise is a primary goal.
The same principles can be applied to FETs or BJTs, and have similar advantages. The capacitance between drain and gate (or collector and base) is isolated in the same way as with a valve circuit, with the signal being coupled to the source or emitter by the first device. Figure 5.7 shows a composite JFET / BJT cascode circuit, which will have better linearity than a conventional amplifier, and a much better high frequency response.
This type of circuit is not uncommon in high performance opamps, where very wide bandwidth and good linearity before feedback are required. Cascode circuits are also sometimes seen in solid state power amplifier circuits, where the designer is trying to obtain the maximum possible bandwidth from the amp.
The base of Q2 is grounded for all signal frequencies, so the stage operates as a common base circuit. Using a JFET as the input element means that the circuit has a high input impedance, while the BJT ensures maximum gain. To obtain even more gain, Rc might be replaced by a current source, in which case the gain from this single stage can exceed 1000 times, with wide bandwidth and excellent linearity.
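A rough sense of where a gain over 1000 comes from: the stage's voltage gain is approximately gm × R_load, with gm = Ic / Vt for the BJT. The 2mA operating current, 4k7 resistor and ~50k current-source impedance below are assumed values for illustration, not figures from the article:

```python
# Voltage gain of the cascode output: roughly gm * R_load.
# For a BJT, gm = Ic / Vt (Vt is about 26 mV at room temperature).
# Operating current and load values are illustrative assumptions.
def bjt_gm(ic_amps, vt=0.026):
    """Small-signal transconductance of a BJT at collector current Ic."""
    return ic_amps / vt

gm = bjt_gm(2e-3)
print(f"With a 4k7 collector resistor: gain about {gm * 4_700:.0f}")
print(f"With a ~50k current-source load: gain about {gm * 50_000:.0f}")
```

Replacing the resistor with a current source raises the effective load impedance by an order of magnitude, and the single-stage gain climbs well past 1000.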
Elliott Sound Products - Amplifier Basics - How Amps Work (Part 6)
Section 5 is the last of the technical pages in this series, and this page finalises the topic at this level - at least until such time as I find (or someone points out) a mistake or major omission that I will then have to fix, otherwise there will be no further updates.
The articles in this series describe the essential building blocks of nearly all circuits in common use today. There are others (of course) but they are most often combinations of the above - for example, a LTP (long-tailed pair) stage can be built using two cascode circuits, a current source and a current mirror. The resulting circuit looks complex, but is simply a combination of common circuits such as those shown.
Other circuits are modifications of the basic stages to exploit what might otherwise be seen as a deficiency - for example circuits that deliberately exploit the temperature dependency of a BJT can be used as high gain thermal sensors, or to stabilise the quiescent current in a power amplifier.
There are also some bizarre combinations possible. A valve and BJT operating in cascode would be interesting, and would no doubt have some desirable characteristics (and I have seen this particular combination used in a power amplifier). Likewise, a valve with a transistor current source instead of the load resistor has far better linearity and more gain than a simple resistor loaded version.

In many cases, ICs are available to accomplish the functions described. Opamps are an obvious one, but there are also IC current sources, transistor arrays (ideal for current mirror applications because of the excellent thermal tracking), plus quite a few others.

There are countless different IC power amplifiers, many of which have very high performance. There are several ESP projects that use 'power opamps' ... my terminology, because most are used just like any other opamp, but with higher voltages and the ability to drive loudspeaker loads. Complete ICs are even available for Class-D amplifiers, which combine just about every technique described in this series, but with even more circuit concepts. As you'd expect, these are also covered in separate articles.

None of the techniques described here is just for audio. The same (or very similar) circuitry is used in industrial control systems, radio frequency amplifiers and any number of diverse fields. While you could be forgiven for thinking that everything is now 'digital', that's not the case. Analogue circuitry will be around for a very long time yet, and will probably never go away. Even the most sophisticated digital process controller still has to interface with the 'real world', which is 100% analogue!

I hope that I have shed some light on the subject, and that you get some benefit from the information presented. Please be aware that this series is intended as a very basic introduction only, and (almost) every configuration discussed here is fully explained elsewhere on the ESP site. There are whole articles on designing with opamps, current sources, sinks and mirrors, and there's even a section dedicated to valves (vacuum tubes).
Elliott Sound Products - Amplifier Sound - What Are The Influences?
The sound of an amplifier is one of those ethereal things that seems to defy description. I will attempt to cover the influences I know about, and describe the effects as best I can. This is largely hypothesis on my part, since there are so many influences that, although present and audible, are almost impossible to quantify. Especially in combination, some of the effects will make one amp sound better, and another worse - I doubt that I will be able to even think of all the possibilities, but this article might help some of you a little - at least to decipher some of the possibilities.

I don't claim to have all the answers, and it is quite conceivable that I don't have any (although I do hope this is not the case). This entire topic is subject to considerable interpretation, and I will try very hard to be completely objective.

Reader input is encouraged, as I doubt that I will manage to get everything right first time, and there are some areas where I do not really know what the answers are. The only joy I can get from this is that I doubt that anyone else can do much better. If you can, let me know.

Unfortunately, it can be extremely difficult for the novice to figure out what on-line information is reliable, what is unmitigated drivel, and which material has a random mixture of both. There are some extraordinarily dubious claims made, and as an example I offer the following gem (reproduced verbatim) ...
"A modern high-quality audio system has excellent specifications and sounds almost perfect. Almost perfect, but not quite. There is one very important attribute missing in audio systems - the attribute we call 'presence'. This article discusses an alternative power amplifier design with sound that often lacks in conventional amplifiers. Even the best commercially available audio systems lack real presence - while the sound can be crystal clear, you would never mistake the recorded voices for real voices, or the recorded piano for a real piano. The human ear immediately knows the difference.

As listeners, even as audiophile listeners, we don't fuss about this lack of presence because we have come to accept that what we hear from a modern audio system is as good as it gets. Yet this just isn't true, and it doesn't have to be accepted.

The lack of presence occurs almost entirely as a result of distortions inherent in the fundamental design of all commercial power amplifiers. Have you noticed how much clearer headphones sound? It's due to the fact that they are driven by low-powered amplifiers."
This nonsense has just enough (semi) truth to appear plausible, but as it continues the claims become less coherent. A recorded sound is different from a live sound because there's a microphone and speakers between the source and your ears. It has nothing to do with the amplifier, and especially nothing to do with the amplifier's power. Headphones sound clearer (except when they don't) because of the headphone drivers and intimate coupling with our hearing mechanism. The amplifier power is utterly irrelevant, and the third paragraph is unmitigated drivel!
I could dissect the claims (which continue ad nauseam in the full text) in greater detail, but frankly it's not worth the electrons that would be used to transport the text. The article goes on to extol the 'virtues' of a rather odd amplifier topology that saw daylight for perhaps 30 seconds or so back in 1971, and never saw commercial production. It was published in Wireless World, but doesn't appear to have ever been re-published elsewhere. The amp used a single supply, so was capacitor coupled to the speaker, and while the basic design works well enough (or so it's claimed), almost no-one wants capacitively coupled speakers any more.
When people talk about the sound of an amplifier, there are many different terms used. For a typical (high quality) amplifier, the sound may be described as 'smeared', having 'air' or 'authoritative' bass. These terms - although describing a listener's experience - have no direct meaning in electrical terms. The term 'presence' referred to above is created in guitar amps (for example) by boosting the frequencies around 3kHz - it's not something found in power amplifiers.

Electrically, we can discuss distortion, phase shift, current capability, slew rate and a myriad of other known phenomena. I don't have any real idea as to how we can directly link these to the common terms used by reviewers and listeners.

Some writers have claimed that all amplifiers actually sound the same, and to some extent (comparing apples with apples) this is 'proven' in double-blind listening tests. I am a great believer in this technique, but there are some differences that cannot be readily explained. An amp that is deemed 'identical' to another in a test situation, may sound completely different in a normal listening environment. It is these differences that are the hardest to deal with, since we do not always measure some of the things that can have a big influence on the sound.
For example, it is rare that testing is done on an amplifier's clipping performance - how the amp recovers from a brief transient overload. I have stated elsewhere that a hi-fi amplifier should never clip in normal usage - nice try, but it IS going to happen, and is often more common than we might think. Use a good clipping indicator on the amp, and this can be eliminated, but at what cost? It might be necessary to reduce the volume (and SPL) to a level that is much lower than you are used to, to eliminate a problem that you were unaware existed.
Different amplifiers react in different ways to these momentary overloads, where their overall performance is otherwise almost identical. I have tested IC power amps, and was dismayed by the overload recovery waveform. My faithful old 60W design measures about the same as the IC in some areas, a little better in some, a little worse in others (as one would expect).

Were these two amps compared in a double blind test (avoiding clipping), it is probable that no-one would be able to tell the difference. Advance the level so that transients started clipping, and a fence post would be able to hear the difference between them. What terms would describe the sound? I have no idea. The sound might be 'smeared' due to the loss of detail during the recovery time of the IC amp. Imaging might suffer as well, since much of the signal that provides directional cues would be lost for periods of time.
A detailed description of the more important amplifier parameters (from a sound perspective) is given later in this article, but a brief description is warranted first. Items marked ¹ are problem areas, and the effect should be minimised wherever possible. The parameters that should normally be measured (although for those marked ² this is rare indeed) are as follows:
¹ Important parameter
² Rarely measured
Every amplifier design on the planet has the same set of constraints, and will exhibit all of the above problems to some degree. The only exception is a Class-A amplifier, which does not have crossover distortion, but is still limited by all other parameters.
The difficulty is determining just how much of any of the problem items is tolerable, and under what conditions. For example, there are many single ended triode valve designs which have very high distortion figures (comparatively speaking), high output impedance and low output current capability. There are many audio enthusiasts who claim that these sound superior to all other amplifiers, so does this mean that the parameters where they perform badly (or at least not as well as other amps) can be considered unimportant? Not at all!

If a conventional (i.e. not Class-A) solid state amplifier gave similar figures, it would be considered terrible, and would undoubtedly sound dreadful.
3.0 Distortion
Technically, distortion is any change that takes place to a signal as it travels from source to destination. If some of the signal 'goes missing', this is distortion just as much as when additional harmonics are generated.
We tend to classify distortion in different ways - the imperfect frequency response of an amplifier is not generally referred to as distortion, but it is. Instead, we talk about frequency response, phase shift, and various other parameters, but in reality they are all a form of distortion.
The bottom line is that amplifiers all suffer from some degree of distortion, but if two amplifiers were to be compared that had no distortion at all, they must (by definition) be identical in both measured and perceived sound.

Naturally, there is no such thing as a perfect amplifier, but there are quite a few that come perilously close, at least within the audible frequency range. What I shall attempt to do is look at the differences that do exist, and try to determine what effect these differences have on the perceived 'sonic quality' of different amplifiers. I will not be the first to try to unravel this mystery, and I doubt that I will be the last. I also doubt that I will succeed, in the sense that success in this particular area would only be achieved if everyone agreed that I was right - and of that there is not a chance! (However, one lives in hope.)

In this article I use the somewhat outdated term 'solid state' to differentiate between valve amps, and those built using bipolar transistors, MOSFETs or other non-vacuum tube devices.

I have also introduced a new (?) test method, which I have called a SIM (Sound Impairment Monitor), the general concept of which is described in the appendix to this article.
How can one amplifier's clipping distortion sound different from that of another? Most of the hi-fi fraternity will tend to think that clipping is undesirable in any form at any time. While this is undeniably true, many of the amps used in a typical high end setup will be found to be clipping during normal programme sessions. I'm not referring to gross overload - this is quite unmistakable and invariably sounds awful - regardless of the amplifier.
There are subtle differences between the way amplifiers clip that can make a great impact on the sound. Valve amps are the most respectable of all, having a 'soft' clipping characteristic which is comparatively unobtrusive. However, this comes at a cost. While distortion can be very low at low levels with low feedback valve amp designs, the distortion rises as level increases. The change from 'unclipped' to 'clipped' may be less abrupt, but the distortion just before clipping can be surprisingly high. Low feedback Class-A amplifiers are next, with slightly more 'edge', but otherwise are usually free from any really nasty additions to the overall sound.

Then there are the myriad of Class-AB discrete amps. Most of these (but by no means all) are reasonably well behaved, and while the clipping is 'hard' it does not have significant overhang - this is to say that once the output signal is lower than the supply voltage again it just carries on as normal. This is the ideal case - when any amp clips, it should add no more nastiness to the sound than is absolutely necessary. Clipping refers to the fact that when the instantaneous value of output signal attempts to exceed the amplifier's power supply voltage, it simply stops, because it cannot be greater than the supply. We know it must stop, but what is of interest is how it stops, and what the amplifier does in the brief period during and immediately after the clipping has occurred.
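Hard clipping, as described above, is just the output stopping at the supply rails. A minimal sketch of the idea, with an assumed ±20V rail and a 25V peak demand (values chosen for illustration only):

```python
import math

# Hard clipping: the output cannot exceed the supply rails, so it simply
# stops there.  The +/-20 V rail and 25 V peak demand are assumed values.
def clip(v, rail):
    """Limit an instantaneous voltage to the supply rails."""
    return max(-rail, min(rail, v))

wave = [25 * math.sin(2 * math.pi * t / 100) for t in range(100)]
clipped = [clip(v, 20.0) for v in wave]
print(max(clipped), min(clipped))  # never exceeds the +/-20 V rails
```

A well behaved amp follows the lower curve again the instant the demanded signal drops back below the rail; the 'overhang' case is the one where it does not.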
Figure 1 - Comparison of Basic Clipping Waveforms
In Figure 1, you can see the different clipping waveforms I am referring to, with 'A' being representative of typical push-pull valve amps, 'B' is the waveform from a conventional discrete Class-AB solid state amp, and 'C' shows the overhang that is typical of some IC power amps as well as quite a few discrete designs. This is a most insidious behaviour for an amp, because while the output is 'stuck' to the power rail, any signal that might have been present in the programme material is lost, and a 100Hz (or 120Hz) component is added if the clipping + 'stuck to rail' period lasts long enough. This comes from the power supply, and is only avoidable by using a regulated supply or batteries. Neither of these is cheap to implement, and they are rarely found in amplifier designs.
Although Figure 1 shows the signal as a sinewave for ease of identification, in a real music signal it will be a sharp transient that will clip, and if the amp behaves itself, this will be (or should be) more or less inaudible. Should it stick to the supply rail, the resulting description of the effect is unlikely to accurately describe the actual problem, but will instead describe what it has done to the sound - from that listener's perspective. A simple clipped transient should not be audible in isolation, but will have an overall effect on the sound quality. Again, the description of this is unlikely to indicate that the amp was clipping, and regrettably few amps have clipping indicators so most of the time we simply don't know it is happening.
To be able to visualise the real effect of clipping, we need to see a section of 'real' signal waveform, with the lowest and highest signal frequencies present at the same time. If the amp is clipped because of a bass transient (this is the most common), the period of the waveform is long. Even if the signal is clipped for only 5 milliseconds, this means that 5 complete cycles of any signal at 1000Hz are removed completely, or 50 complete cycles at 10kHz. This represents a significant loss of intended information, which is replaced by a series of harmonics of the clipped frequency (if clipping lasts for long enough), or more typically a series of harmonics that is not especially related to anything (musically speaking - all harmonics are related to something, but this is not necessarily musical!)
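The arithmetic for the information lost is straightforward - every cycle of any higher frequency present during the clip interval is removed:

```python
# Cycles of a given frequency removed while the amp is stuck in clipping:
# simply the clip duration multiplied by the frequency.
def cycles_lost(clip_seconds, freq_hz):
    """Complete cycles of freq_hz that fall within the clipped interval."""
    return clip_seconds * freq_hz

print(cycles_lost(0.005, 1_000))   # 5 ms clip: 5 cycles at 1 kHz gone
print(cycles_lost(0.005, 10_000))  # ... and 50 cycles at 10 kHz
```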
I think that no review of any amplifier should ever be performed without some method of indicating that the amp is clipping (or is subject to some other form of signal impairment), and this can be added to the reviewer's notes - along the lines of ...
"This amplifier was flawless when kept below clipping (or as long as the SIM (or other signal integrity monitoring facility) failed to show any noticeable impairment), but even the smallest amount of overload caused the amp to sound very hard. Transparency was completely lost, imaging was ruined, and it created listener fatigue very quickly."
Now, wouldn't that be cool? Instead of us being unaware (as was the reviewer in many cases) that the amp in review was being overdriven - however slightly - we now (all of us) have that missing piece of information that is not included at the moment. I have never seen a review of an amp where the output was monitored with an accurate clipping indicator to ensure that the reviewer was not listening to a distorted signal. I'm not saying that no-one does this, just none that I have read.
The next type of overload behaviour is dramatically worse, and I have seen this in various amps over the years. Most commonly associated with overload protection circuits, the sound is gross. I do not know the exact mechanism that allows this to happen, but it can be surmised that the protection system has 'hysteresis', a term that is more commonly associated with thermal controllers, steel transformer laminations and Schmitt trigger devices. Basically, a circuit with hysteresis will operate once a certain trigger point is reached, but will not reset until the input signal has fallen below a threshold that is lower than the trigger point. The typical waveform of an amplifier with this problem is shown in Figure 2, and I believe it IS a problem, and should be checked for as a normal part of the test process. This type of overload characteristic is not desirable in any way, shape or form.
Figure 2 - Hysteresis Overload Waveform
In this case, the additional harmonic components added to the original sound will be more prominent than with 'normal' clipping. As before, I cannot even begin to imagine how the sound might be described - all the more reason to ensure that testing includes informing the reader if the amp was clipping or not during the listening tests. The loss of signal with this type of distortion will generally be much greater than simple clipping, and the added harmonic content will be much more pronounced, especially in the upper frequencies.
Clipping Synopsis
Tests conducted as a part of any review would be far more revealing if the clipping waveform were shown as a matter of course. After some learning on our behalf, we would get to know what various members of the hi-fi press meant when they described the sound while the amp was clipping, versus not clipping, or what the amp sounded like when its overload protection circuits came into action.

To this end I have designed a new distortion indicator circuit, which not only indicates clipping, but will show when the amp is producing distortion of any kind beyond an acceptable level. One version has been published as a project, and I have chosen the acronym SIM (Signal/Sound Impairment Monitor) for this circuit.
The SIM will react to any form of signal modification, and this includes phase distortion and frequency response distortion. I do not believe that this approach has been used before in this way. It is not an uncommon method for distortion measurement, but has not been seen anywhere as a visual indicator for identifying problem areas that an amp may show in use. This circuit will also show when an amplifier's protection circuit has come into effect.

Although the detector has no idea what type of problem is indicated, it does indicate when the input and output signals no longer match each other - for whatever reason. Oscilloscope analysis would be very useful using this circuit, as with a little practice we would be able to identify many of the currently unknown effects of various amplifier aberrations. Any amp behaviour that results in the input and output signals being unequal (within a few millivolts at most) indicates that something is wrong.
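The principle behind such a detector can be sketched in a few lines: scale the output back down to input level, subtract, and flag any residual above a threshold. The gain of 20 and 2mV threshold below are assumptions for illustration, not figures from the published SIM design:

```python
# The comparison idea in miniature: scale the amplifier output back to
# input level, subtract, and flag any residual above a threshold.
# The gain (20) and 2 mV threshold are assumed illustrative values.
def impaired(v_in, v_out, gain, threshold=0.002):
    """True if input and scaled-down output no longer match."""
    return abs(v_in - v_out / gain) > threshold

print(impaired(1.000, 20.00, 20))  # signals match: no impairment flagged
print(impaired(1.000, 19.00, 20))  # 50 mV residual: impairment flagged
```

Note that the detector cannot say *why* the signals differ - clipping, crossover distortion, protection circuit action and frequency response errors all produce a residual - only *that* they differ.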
Class-A amplifiers have no crossover distortion at all, because they maintain conduction in the output device(s) for the entire waveform cycle and never turn off. Class-A is specifically excluded from this section for that reason.
For the rest, a similar question to the one before - how can one amplifier's crossover distortion sound different from another's? Surely if there is crossover distortion it will sound much the same? Not so at all. Again, valve amplifiers are much better in this area than solid state amps (at least in open loop conditions). When valves cross over from one output device to the next (standard push-pull circuit is assumed), the harmonic structure is comprised of mainly low order odd harmonics. There will be some 3rd harmonics, a smaller amount of 5th, and so on.

Solid state amps tend to create high order odd harmonics, so there will be the 3rd harmonic, only a tiny bit less of the 5th harmonic, and the harmonics will extend across the full audio bandwidth. Transistor and MOSFET amps have very high open loop gains, and use feedback to reduce distortion. In all cases, the crossover distortion is caused because the power output devices are non-linear. At the low currents at which the changeover occurs, these non-linearities are worse; as well, the devices usually have a lower gain at these currents.
This has two effects. The open loop gain of the amplifier is reduced because of the lower output device gain, so there is less negative feedback where it is most needed. Secondly, the feedback tries to compensate for the lower gain (and tries to eliminate the crossover distortion), but is limited by the overall speed of the internal circuitry of the amplifier. This results in sharp transitions in the crossover region, and any sharp transition means high order harmonics are produced (however small they might be).

One method to minimise this is to increase the quiescent (no signal) current in the output transistors. With a linear output stage in a well designed circuit, crossover distortion should be all but non-existent with any current above about 50 to 100mA (but note that if the quiescent current is increased too far, overall distortion may actually get worse). Figure 3 shows the crossover distortion (at the centre of the red trace) and the residue as seen on an oscilloscope (green trace, amplified by 10 for clarity) - this is the typical output from a distortion meter, with an amplifier that has noticeable crossover distortion. If measured properly, the distortion is highly visible, even though it may be barely audible. Note that the waveform below would not qualify for the last statement - this amount of crossover distortion would be very audible indeed.
Figure 3 - Crossover Distortion Waveform
If THD is quoted without reference to its harmonic content, then it is quite possible that two amplifiers may indicate identical distortion figures, but one will sound much worse than the other. Distortion at a level of 1W should always be quoted, and the waveform shown. Once the waveform can be seen, it is easy to determine whether it will sound acceptable or dreadful - before we even listen to the amp. Listening tests will confirm the measured results with great accuracy, although the descriptive terms used will vary, and may not indicate the real problem.
Crossover Distortion Synopsis
Although this is one area where modern amplifiers rarely perform badly, it is still important, and should be measured and described with more care than is usually the case. While few amplifiers will show up badly in this test now, crossover distortion was one of the main culprits that gave solid state a bad name when transistors were first used in amplifiers.
I do not believe that we can simply ignore crossover distortion on the basis that "everyone knows how to fix it, and it is not a problem any more". I would suggest that it is still a real problem, only the magnitude has been reduced - the problem is still alive and well. Will you be able to hear it with most good quality amps? Almost certainly not.
Distortion of the frequency response should not be an issue with modern amplifiers, but with some (such as single ended triode valve designs), it does pose some problems. The effect is that not all frequencies are amplified equally, and the first to go are the extremes at both ends of the spectrum. It is uncommon for solid state amps to have a frequency response at low powers that extends to anything less than the full bandwidth from 20Hz to 20kHz. This is not the case with some of the simple designs, and single ended triode (SET) Class-A - as well as inductance loaded solid state Class-A amps - will often have a less than ideal response.
I would expect that any amplifier today should be no more than 0.5dB down at 20Hz and 20kHz, referred to the mid-band frequency (usually taken as 1kHz, but actually about 905Hz). (My preferred test frequency is 440Hz (concert pitch A, below middle C), but none of this is of great consequence.) 0.5dB loss is acceptable in that it is basically inaudible, but most amps will do much better than this, with virtually no droop in the response from 10Hz to over 50kHz.
For reference, the octaves included for 'normal' sound are:

20, 40, 80, 160, 320, 640, 1,280, 2,560, 5,120, 10,240, 20,480 (all in Hertz)
To determine the halfway point between two frequencies one octave apart, we multiply the lower frequency by the square root of 2 (1.414). The halfway point between 640 and 1,280Hz is therefore 904.96Hz. You must be so pleased to have been provided with this piece of completely useless information! Just think yourselves lucky that I didn't tell you how to calculate the distance between the frets on a guitar.
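The octave arithmetic above can be sketched in a couple of lines (Python, purely for illustration - the function name is mine, not from the article):

```python
import math

def octave_midpoint(f_low):
    """Geometric halfway point between f_low and the octave above (2 * f_low)."""
    return f_low * math.sqrt(2)

# Midpoint of the 640 Hz - 1,280 Hz octave, as used in the text
print(round(octave_midpoint(640), 1))  # ~905.1 Hz (904.96 if 1.414 is used)
```

Using the full precision of √2 gives 905.1Hz rather than the 904.96Hz that the truncated 1.414 produces; the difference is of no consequence.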
Most amplifiers will manage well beyond the range necessary for accurate reproduction, at all power levels that music demands. So why are some amps described as having poor rendition of the high frequencies? They may be described as 'veiled' or something similar, but there is no measurement that can be applied to reveal this when an amplifier is tested. Interestingly, some of the simpler amplifiers (again, such as the single ended triode amps) have poorer response than most of the solid state designs, yet will regularly be described as having highs that 'sparkle', and are 'transparent'.
These terms are not immediately translatable, since they are subjective, and there is no known measurement that reveals this quality. We must try to determine what measurable effect might cause such a phenomenon. There are few real clues, since amplifiers that should not be classified as exceptional in this area are often described as such. Other amps may be similarly described, and these will not have the distortion of a single ended triode and will have a far better response.

We can (almost) rule out distortion as a factor in this equation, since amps with comparatively high distortion can be comparable to others with negligible distortion. Phase shift is also out of the question, since amps with a lot of phase shift can be favourably compared to others with virtually none. One major difference is that typical SET amplifiers have quite high levels of low order even harmonics. Although these will give the sound a unique character, I doubt that this is the sole reason for the perceived high frequency performance - I could also be wrong.

Phase distortion occurs in many amplifiers, and is worst in designs using an output transformer or inductor (sometimes called a choke). The effect is that some frequencies are effectively delayed by a small amount. This delay is usually less than that caused by moving one's head closer to the loudspeakers by a few millimetres. It is generally thought to be inaudible, and tests that I (and many others) have conducted seem to bear this out.
Frequency And Phase Distortion - Synopsis
There must be some mechanism that causes multiple reviewers to describe an amplifier as having a poor high frequency performance, such as (for example) a lack of transparency. There are few real clues that allow us to determine exactly what is happening to cause these reviewers to describe the sound of the amp in such terms, and one may be tempted to put it all down to imagination or 'experimenter expectancy'. This is likely to be a mistake, and regardless of what we might think about reviewers as a species, they do get to listen to many more amplifiers than most of us.
One of the few variables is a phenomenon called slew rate. This is discussed fully in the next section.
This has always been somewhat controversial, but no-one has ever been able to confirm satisfactorily that slew rate (within certain sensible limits) has any real effect on the sound. Figure 4 is a nomograph that shows the required slew rate for any given power output to allow full power at any frequency. To use it, determine the power and calculate the peak voltage, and place the edge of a ruler at that voltage level. Tilt the ruler until the edge also aligns with the maximum full power frequency on the top scale. The slew rate is indicated on the bottom scale.

For example, if the peak voltage is 50V (a 150W/8 ohm amp) and you expect full power to 20kHz, the required slew rate is about 6.3V/µs. Bear in mind that no amplifier is ever expected to provide full power at 20kHz, and if it did the tweeters would fail very quickly.
Figure 4 - Slew Rate Nomograph
Slew rate distortion is caused when a signal frequency and amplitude is such that the amplifier is unable to reproduce the signal as a sine wave. Instead, the input sine wave is 'converted' into a triangle wave by the amplifier. This is shown in Figure 5, and is indicative of this behaviour in any amplifier with a limited slew rate. The basic problem is caused by the 'dominant pole' filter included in most amplifiers to maintain stability and prevent high frequency oscillation. While very few amplifiers even come close to slew rate induced distortion (AKA Transient Intermodulation Distortion) with a normal signal, this is one of the very few possibilities left to explain why some amps seem to have a less than enthusiastic response from the reviewers' perspective.
If you don't like the nomograph, you can easily calculate the maximum slew rate of a sinewave. The formula is ...
SR = 2π × f × Vp

Where SR is the slew rate in V/s, f is the frequency in Hz, and Vp is the peak voltage of the sinewave (VRMS × 1.414)
For example, 20kHz at 28V RMS (100W / 8 ohms) requires a slew rate of ...
SR = 2π × 20,000 × 40
SR = 5,026,548 V/s = 5.03V/µs
We already know absolutely that no music source will ever provide a full power signal at 20kHz, but to allow it the amp needs a slew rate of 5V/µs (close enough). Should someone claim that you need 100V/µs or better, that their amp can do just that and you'll miss out on much of your music, then you know that the claims are fallacious. Having a higher slew rate than strictly necessary does no harm, provided that the design's stability hasn't been compromised to achieve the claimed figure. All design is the art of compromise, and some compromises can be a giant leap backwards if the designer concentrates on one issue and ignores others. I happen to think that stability is extremely important - no amp should oscillate when operated normally into any likely speaker load ... ever!
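The formula can be wrapped up so that it works directly from rated power. A minimal Python sketch (the function name and structure are my own):

```python
import math

def required_slew_rate(power_watts, load_ohms, f_hz):
    """Minimum slew rate (V/us) to deliver a full power sinewave at f_hz."""
    v_rms = math.sqrt(power_watts * load_ohms)   # P = V^2 / R
    v_peak = v_rms * math.sqrt(2)
    return 2 * math.pi * f_hz * v_peak / 1e6     # convert V/s to V/us

# 100 W into 8 ohms at 20 kHz, as in the worked example above
print(round(required_slew_rate(100, 8, 20e3), 2))  # ~5.03 V/us
```

This reproduces the 5.03V/µs figure from the worked example, and makes it easy to check any claimed slew rate requirement against a given power and bandwidth.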
Figure 5 - Slew Rate Limiting In An Amplifier
The red trace shows the amp operating normally, and the green trace shows what happens if the slew rate is deliberately reduced. Is this the answer, then? I wish it were, since we could all sleep soundly knowing exactly what caused one amp to sound the way it did, compared to another, which should have sounded almost identical.
A further test is to apply to the amplifier a low frequency squarewave at about half to 3/4 power, mixed with a low-level high frequency sinewave. At the transitions of the squarewave, the sinewave should simply move up and down - 'riding' the squarewave. If there is any misbehaviour in the amp, the sinewave may be seen to be compressed so its shape will change, or a few cycles may even go missing entirely. Either is unacceptable, and should not occur.

This is an extremely savage test, but most amplifiers should be able to cope with it quite well. Those that don't will modify the music signal in an unacceptable way in extreme cases (which this test simulates). Again, this is an uncommon test to perform, but may be quite revealing of differences between amps.
Frequency And Slew Rate Distortion - Synopsis
We need to delve deeper, and although there seems to be little (if any) useful evidence we can use to explain this particular problem, there is an answer, and it is therefore possible to measure the mechanism that causes the problem to exist.
The performance of a feedback amplifier is determined by two primary factors - the open loop gain, and the open loop frequency response.
If the amp has a poor open loop gain and high distortion, then sensible amounts of feedback will not be able to correct the deficiencies, because there is not sufficient gain reserve. By the time the performance is acceptable, it may mean that the amplifier has unity gain, and is now impossible to drive with any normal preamp.

Many amplifiers have a very high open loop gain, but may have a restricted frequency response. Let's assume an amp that has a gain of 100dB at 20Hz, and 40dB gain at 20kHz. If we want 30dB of overall gain (which is about standard), then there is 70dB of feedback at 20Hz, but only 10dB at 20kHz. As a very rough calculation, distortion and output impedance are reduced by the feedback ratio, so if open loop distortion were 3% (not an unreasonable figure), then at 20Hz, this is reduced to about 0.001%, but will be just under 1% at 20kHz.
Because these figures are so rarely quoted (and I must admit, I have not really measured all the characteristics of the 60W amp in Project 03 - open loop measurements are difficult to make accurately), we have no idea if amplifiers with poor open loop responses are responsible for so many of the failings we hear about. It is logical to assume that there must be some correlation, but we don't really know for sure.

Ideally, an amplifier should have wide bandwidth and low distortion before global feedback is applied, which will just make a good amp better. Or will it? I have read reviews where a very simple amp was deemed one of the best around (this was quite a few years ago), and I was astonished when I finally saw the circuit - it was almost identical to the 'El Cheapo' amplifier (see the projects pages for more info on this amp).

The only major difference between this amp and most of the others at the time was the comparatively low open loop gain, and a somewhat wider bandwidth than was typical at the time, because it does not need a Miller capacitor for stability. So the amp was better in one respect, worse in another.

In the end, it doesn't really matter what the open loop response is like, as long as closed loop (i.e. with feedback applied) performance does not degrade the sound. Again, we have the same quandary as before - unless we can monitor the difference between input and output at all levels and with a normal signal applied, we really don't know what is going on. The usual tests are useful, but cannot predict how an amp will sound. I have heard countless stories about amps that measure up extremely well, but sound 'hard and dry', and have no 'music' in them.

Unless these measurements are made (or at least some modified form of them), we will still be no further in understanding why so many people prefer one brand of amp over another (other than peer pressure or advertising hype).
One possibility is to measure the amp with a gain of 40dB. This is an easy enough modification to make for testing, and the performance is far easier to measure than if we attempt open loop testing. The difference between measured performance at 30dB gain (about 32) versus 40dB (100) would be an excellent indicator of the amp's performance, and it is not too hard to predict the approximate open loop response from the different measurements. To be able to do this requires that all measurements be very accurate.
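The suggestion above - inferring the open loop response from closed loop measurements at two different gains - is standard feedback algebra, which can be sketched as follows. This is my own illustration, not a procedure from the article, and the 'measured' gain figures are hypothetical:

```python
def open_loop_gain(measured_gain, feedback_fraction):
    """Invert A_cl = A / (1 + A * beta) to recover the open loop gain A."""
    return measured_gain / (1 - measured_gain * feedback_fraction)

# Hypothetical measurements on an amp whose true open loop gain is ~5,000:
# the closed loop gain falls slightly short of the ideal 1/beta in each case.
beta_30dB, beta_40dB = 1 / 32, 1 / 100
print(round(open_loop_gain(31.8, beta_30dB)))  # estimate from the 30 dB test
print(round(open_loop_gain(98.0, beta_40dB)))  # estimate from the 40 dB test
```

Both estimates land near the assumed 5,000, but note how sensitive the result is to the measured value - which is exactly why the text insists that all measurements be very accurate.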
Would these results have any correlation with the review results? We will never know unless someone tries it - working through the techniques discussed here thoroughly, with a number of different amps. It would be useful to ensure that the reviewer was unaware of the test results before listening, to guard against experimenter expectancy or sub-conscious prejudice.

It is very hard to do a synopsis of this topic, since I have too little data to work with. Only by adopting new ideas and test methods will we be able to determine if the 'golden-ear' brigade really does have golden ears, or that they actually hear much the same 'stuff' as the rest of us, but have a better vocabulary. That is not intended as a slur, just a comment that we have to find out if there is anything happening that we (the 'engineering' types) don't know about, or not. Unless we can get a match between measured and described performance, we get nowhere (which is to say that we stay where we are, on opposite sides of the fence).
Many is the claim that the ear is one of the most finely tuned and sensitive measuring instruments known. I am not going to dispute this - not for fear of offending anyone (I seem to have done that many times already), but because in some respects it is true. Having said that, I must also point out that although extremely sensitive, the ear (or to be more correct, the brain) is also easily fooled. We can imagine that we can hear things that absolutely do not exist, and can just as easily imagine that one amplifier sounds better than another, only to discover that the reverse is true under different circumstances. Listeners have even declared one amp to be clearly superior to another when the amp hasn't been changed at all.

Could it be the influence of speaker cables, or even loudspeakers themselves? This is quite possible, since when amps are reviewed it is generally with the reviewer's favourite speaker and lead combination. This might suit one amplifier perfectly, while the capacitance and inductance of the cable might cause minute instabilities in other, otherwise perfectly good, amplifiers. Although it is a fine theory to suggest that a speaker lead should not affect the performance of a well designed amplifier, there are likely to be some combinations of cable characteristics that simply freak out some amps. Likewise, some amps just might not like the impedance presented by some loudspeakers - this is an area that has been the subject of many studies, and entire amplifiers have been designed specifically to combat these very problems [ 1 ].
Many published designs never get the chance of a review, at least not in the same sense as a manufactured amplifier, so it can be difficult (if not impossible) to make worthwhile comparisons. In addition, we sometimes have different reviewers making contradictory remarks about the same amp. Some might think it is wonderful, while others are less enthusiastic. Is this because of different speakers, cables, or some other influence? The answer (of course) is that we have no idea.

We come back to the same problem I described earlier, which is that the standard tests are not necessarily appropriate. A frequency response graph showing that an amp is ruler flat from DC to daylight is of absolutely no use if everyone says that the highs are 'veiled', or that imaging is poor. Compare this with another amp that is also ruler flat, and (nearly) everyone agrees that the highs are detailed, transparent, and that imaging is superb.

We need to employ different testing methodologies to see if there is a way to determine from bench (i.e. objective) testing, what a listening (i.e. subjective) test might reveal. This is a daunting task, but is one that must be pursued vigorously if we are to learn the secrets of amplifier sound. It is there - we just don't know where to look, or what to look for ... yet. Until we have correlation between the two testing methods, we are at the mercy of the purveyors of amplifier snake oil and other magic potions.

The SIM distortion indicator is one possible method that might help us, but it may also react to the wrong stimulus. Perhaps we need to add the ability to detect small amounts of high frequencies with greater sensitivity, but now a simple idea becomes quite complex, possibly to no avail. It is also important that such a device has zero effect on the incoming signal itself, so some care is needed to ensure that there is negligible loading on the source preamplifier.

This is not the only avenue open to us to correlate subjective versus objective testing. Both are important; the problem is that one is purely concerned with the way an amplifier behaves on the test bench, and a whole series of more or less identical results can be expected. The other is veiled in 'reviewer speak', and although it might be useful if the reviewer is known and trusted, is not measurable or repeatable. The whole object is to try to determine what physical factors cause amplifiers to sound different, despite the fact that conventional testing indicates that they should sound the same.
The output impedance of any amplifier is finite. There is no such thing as an amplifier with zero output impedance, so all amps are influenced to some degree by the load. An ideal load is perfectly resistive, and has no reactive elements (inductance or capacitance) at all. Just as there is no such thing as a perfect amplifier, there is also no such thing as a perfect load. Speakers are especially gruesome in this respect, having significant reactance, which varies with frequency.
A genuine zero impedance source is completely unaffected by the load, and it does not matter if it is reactive or not. If such a source were to be connected to a loudspeaker load, the influence of the load will be zero, regardless of frequency, load impedance variations, or anything else. It's worth mentioning that by clever manipulation of feedback, it is (theoretically) possible to achieve zero output impedance (and even negative impedance, which I have done in a test amp I use in my workshop). The problem is that doing so involves a small amount of positive feedback, which is inherently unstable. All amps normally have a low but measurable positive (i.e. 'normal') output impedance, but it's possible that internal wiring can be mis-routed such that an amplifier does have a small amount of negative impedance. Poor grounding practices can achieve this, and it's definitely not something to aim for!

Since true zero impedance is not the case in the real world, the goal is generally to make the amplifier have the lowest output impedance possible (but remaining positive at all times), in the somewhat futile hope that the amp will not be adversely affected by the variable load impedance. In essence, this is futile, since there will always be some output impedance, and therefore the load will always have some influence on the behaviour of the amp.
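To put some rough numbers on the output impedance argument, the amp and speaker form a simple voltage divider. The sketch below (Python; the impedance figures are illustrative assumptions of mine, not from the article) shows the frequency response variation caused purely by the load's impedance swings:

```python
import math

def response_variation_db(z_out, z_min, z_max):
    """Level difference (dB) across a load whose impedance swings between
    z_min and z_max, when driven from a source impedance of z_out."""
    level = lambda z: z / (z + z_out)  # simple resistive voltage divider
    return 20 * math.log10(level(z_max) / level(z_min))

# Hypothetical figures: a solid state amp with 0.05 ohm output impedance,
# and a valve amp with 2 ohms, each driving a speaker that swings 4-40 ohms
print(round(response_variation_db(0.05, 4, 40), 2))  # ~0.1 dB
print(round(response_variation_db(2.0, 4, 40), 2))   # ~3.1 dB
```

Even with a very low (but non-zero) output impedance, the load has *some* influence - the point made above - but the effect only becomes grossly audible when the output impedance is a significant fraction of the load impedance.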
Another approach might be to make the output impedance infinite, and again, the load will have zero effect on the amplifier itself (the amplifier will, however, have a great influence on the load!). Alas, this too is impossible. Given that the conventional approaches obviously cannot work, we are faced with the problem that all amplifiers are affected by the load, and therefore all amplifiers must show some degree of sensitivity to the speaker lead and speaker.

The biggest problem is that no-one really knows what an amplifier will do when a reactive load reflects some of the power back into the amp's output. We can hope (without success) that the effects will be negligible, or we can try to make speakers appear as pure resistance (again, without success).
A test method already exists for this, and uses one channel of an amp to drive a signal back into the output of another. The passive amplifier is the one under test. It is also possible to use a different source amplifier altogether, since there is no need for it to be identical to the test amp. Use of a 'standard' amplifier whose characteristics are well known is useful, since the source will be a constant in all tests. Differences may then be seen clearly from one test to the next.

The method is shown in Figure 6, and is a useful test of the behaviour of an amp when a signal is driven into its output. This is exactly what speakers do - the reactive part of the loudspeaker impedance causes some of the power to be 'reflected' back into the amplifier. Since one amplifier in this test is the source, the device under test can be considered a 'sink'.
Figure 6 - Amplifier Power Sink Test
I have used this test, and although it does show some interesting results, the test is essentially not useful unless used as a comparative test method. The amplifier under test is also subjected to very high dissipation (well above that expected with any loudspeaker load), because the transistors are expected to 'dump' a possibly large current while they have the full rail voltage across them. There is a real risk of damaging the amplifier, and I suggest that you don't try this unless you are very sure of the driven amplifier's abilities.
We may now ask "Why is this not a standard test for amplifiers, then?" The answer is that no-one has really thought about it enough to decide that this will (or should) be part of the standard set of tests for objective testing of an amplifier. The results might be quite revealing, showing a signal that may be non-linear (i.e. distorted), or perhaps showing a wide variation in measured signal versus frequency. The result of this test with amps having extensive protection circuits will be a lottery - most will react (often very) badly at only moderate current.

If there is high distortion or a large frequency dependence, then we have some more information about the amplifier that was previously unknown. It might be possible to correlate this with subjective assessments of the amp, and gain further understanding of why some amps supposedly sound better than others. We might discover that amps with certain characteristics using this test are subjectively judged as sounding better than others ... or not.

If this test became standard, and was routinely allied with the SIM tester described above, we may become aware of many of the problems that currently are (apparently and/or allegedly) audible, but for which there is no known measurement technique.
This article has described some tests that, although not new, are possibly the answer to so many of the questions we have about amplifiers. The tests themselves have been known for some time, but their application is potentially of benefit. We may be able to finally perform an objective test, and be able to predict with a degree of confidence how the amp will sound. It may also happen that these tests are not sufficient to reveal all the subtleties of amplifier sound, but they will certainly be more useful than a simple frequency response and distortion test.

Any change to the testing methods used is not going to happen overnight, and nor are we going to be able to see immediately which problems cause a difference, and which ones have little apparent effect. Time, patience, and careful correlation of the data are essential if this is to succeed. There are laws of physics, and there are ears. Somewhere the two must meet in common ground. We already know that this happens, since there are amplifiers that sound excellent - according to a large number of owners, reviewers, etc. - now we need to know why.

There is a test method (or a series of methods) that will allow us to obtain a suite of tests that makes sense to designers and listeners alike, so we can get closer to the ideal amplifier, namely the mythical 'straight wire with gain', but from the listener's perspective rather than the senseless repetition of tests that seem to have no bearing on the perceived quality of the amp. This is not to say that the standard tests are redundant (far from it), but they do not seem to reveal enough information.
For this to succeed, the subjectivists must be convinced, as must the 'objectivists'. We are all looking for the same thing - the flawless reproduction of sound - but the two camps have drifted further and further apart over the years. This is not helped by the common practice of reviewers connecting everything up themselves, relying not just on the sound, but also on their knowledge of which amplifier they are listening to. Sighted tests are invariably flawed, and the only test methodology that should ever be used is a full blind or double-blind test, with the ability to switch from one amplifier to the other, but without knowing which is which.

These are my musings, and I am open to suggestions for other testing methods that may reveal the subtle differences that undeniably exist between amplifiers. At the moment we have a chasm between those who can (or think they can) hear the difference between a valve and an opamp, a bipolar junction transistor and a MOSFET, or Brand 'A' versus Brand 'B', and those who claim that there is no difference at all.

The fact that there are differences is obvious. The degree of difference and why there are differences is not. It would be nice for all lovers of music (and the accurate reproduction of same) if we can arrive at a mutually agreeable explanation for these differences, that is accurate, repeatable, and measurable.

If these criteria are not met, then the assessment is not useful to either camp, and the chasm will simply widen. This is bad news - it is high time we all get together and stop arguing amongst ourselves about whether (for example) it is better to use one brand of capacitor in the signal path or another. The continued use of sighted test procedures does nothing to advance the state of the art.

These testing methods can also be applied to the measurement of individual components, speaker cables, interconnects and preamps, particularly the SIM tester. Using the amplifier power sink test with different cables and speakers might give us some clues as to why so many people are adamant that one speaker cable sounds better than another, even though there is no measurable difference using conventional means.

The greatest benefit of these tests is that they will reveal things we have not been looking at (or for) in the past, and may show differences that come as a very great surprise to designers and listeners alike.
Another very useful test is a 'null test', as proposed by Ethan Winer. For example, a signal is applied simultaneously to two leads, and the tester adjusted until there is nothing left of the original signal. If a complete null can be achieved, the two leads are essentially identical. If it were otherwise, it would not be possible to achieve a null, and some of the original signal will still be present, either as a distorted version of the original, or a low-level version of it. This is not new - it was used by Peter Baxandall and Peter Walker (Quad) many years ago, but it's not trivial when measuring active circuits. This is mainly due to tiny phase shifts that are very hard to duplicate perfectly.
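The difficulty with tiny phase shifts can be illustrated numerically. The sketch below (all names and figures are mine) subtracts a very slightly phase-shifted copy of a sine wave from the original, showing how even a 0.001 radian error limits the achievable null depth:

```python
import math

def null_residual_db(signal, other):
    """Depth of the null (dB) left after subtracting 'other' from 'signal'."""
    residual = [a - b for a, b in zip(signal, other)]
    rms = lambda xs: math.sqrt(sum(x * x for x in xs) / len(xs))
    return 20 * math.log10(rms(residual) / rms(signal))

# 1 kHz sine, sampled at 48 kHz for exactly 100 cycles; the second path
# carries an almost imperceptible phase shift of 0.001 radian (~0.057 deg)
n, fs, f = 4800, 48000, 1000
ref = [math.sin(2 * math.pi * f * i / fs) for i in range(n)]
shifted = [math.sin(2 * math.pi * f * i / fs + 0.001) for i in range(n)]
print(round(null_residual_db(ref, shifted)))  # -60 (dB)
```

A phase error of only a twentieth of a degree caps the null at -60dB, which is why nulling active circuits - where small, frequency-dependent phase shifts are unavoidable - is so much harder than nulling two passive leads.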
For information on the use of the SIM, and an initial article describing how it works and my results so far, please see 'Sound Impairment Monitor - The Answer?'.
Elliott Sound Products - Power Amplifier Design Guidelines
I am amazed at the number of amplifier designers who have, for one reason or another, failed to take some of the well known basics and pitfalls of amp design into consideration during the design phase. While some of these errors (whether of judgement or through ignorance is uncertain) are of no great consequence, others can lead to the slow but sure or instantaneous destruction of an amplifier's output devices.
When I say 'of no great consequence', this is possibly contentious, since a dramatic increase in distortion is hardly that; however, in this context it will at least not destroy anything - other than the listener's enjoyment.
Even well known and respected designs can fall foul of some basic errors - this is naturally ignoring the multitude of 'off the wall' designs (e.g. single-ended MOSFETs without feedback (yecch! - 5% distortion, phtooey), transformer-coupled monstrosities, amplifiers so complex and bizarre that they defy logic or description, etc.). This does not include valve amps - they are a 'special' case, and in many areas (such as guitar amps) they remain unsurpassed as far as many players are concerned.
In this article, I have attempted to cover some of the areas which require their own special consideration, and the references quoted at the end are excellent sources of more detailed information on the items where a reference is given.
Reference Amplifier
My reference amplifier is shown in Project 3A, and is a hard act to follow. As I have been refining these pages and experimenting with simulations and real life, I have found that this amp is exemplary. It does need a comparatively high quiescent current to keep the output devices well away from crossover distortion, but this is easily accommodated by using decent heatsinks. Even a Class-A system (Death of Zen) fails to come close at medium power, and is barely better at low power.
This amp uses the following ...
It is stable with all conventional loads, capable of 80W into 8 Ohms, and simple to build. Using only commonly available parts, it is also very inexpensive.
Note:
This article is not intended to be the 'designers' handbook', but is a collection of notes and ideas showing the influences of the various stages in a typical amplifier. Although I have made suggestions that various topologies are superior to others, this does not mean to imply that they should automatically be used. If one were to combine all the 'best' configurations into a single amp, this is no guarantee that it will perform or sound any better than one using 'lesser' building blocks.
There is a school of thought that the fewer active devices one uses, the better an amp will sound. I do not believe this to be the case, but my own design philosophy is to make any given design as simple as possible, consistent with the level of performance expected of it.
Additional schools of thought will make all manner of claims regarding esoteric components, 'unexplained' phenomena, or will imply that most amplifiers as we know them are useless for audio because they do not have predictable performance at DC and/or 10GHz, cannot drive pure inductance or capacitance, etc., etc. Regardless of these claims, most amplifiers actually work just fine, and do not have to do any of the things that the claimants may imply. The vast majority of all the off-the-wall claims you will come across can safely be ignored.

It's also worth noting that making a design more complex (more parts) doesn't necessarily mean that it will have better performance. More active parts in the signal chain tend to add delays, and this can make it very difficult to keep the final circuit stable. No-one wants (or needs) an amplifier that has marginal stability, meaning that it may be on the verge of oscillation during normal operation. Connecting a speaker lead with above average capacitance may cause spurious (and intermittent) oscillations on parts of the waveform. This is always audible, but might not show up when the amp is on the test bench.
There are two main possibilities for an input stage for a power amplifier. The most common is the long tailed pair, so we shall look at this first. It's not uncommon to see two long-tailed-pairs, one using NPN and the other using PNP transistors. While this makes the circuit appear to be fully symmetrical, it isn't, because the NPN and PNP transistors will never be exact complements of each other.
Long Tailed Pair
It has been shown [ 1 ] that failing to balance the input Long Tailed Pair properly leads to a large increase in the distortion contributed by the stage. Some designers attempt to remedy the situation by including a resistor in the 'unused' collector circuit, but this is an aesthetic solution - i.e. it looks balanced, but serves no other useful purpose (see Figure 1a). Note that the 'driver' transistor is simply there to allow us to make comparisons between the circuit topologies, and to provide current to voltage conversion. It is worth noting that even though this resistor serves no purpose electronically, it can make the PCB layout easier.
Use of the long-tailed (or differential) pair in an amplifier means that the amplifier will operate with what is generally called 'voltage feedback' (VFB). The feedback is introduced as a voltage, since the input impedance of both inputs is high (and approximately equal), and input current is (relatively speaking) negligible.
The feedback resistor and capacitor are selected to allow the circuit to operate at full open loop gain for the applied AC, but unity gain for DC to allow the circuit to stabilise correctly with a collector voltage at (or near) 0V. The transistors used in the simulations that follow are 'ideal', without internal capacitances etc, and have an hFE of 235 in all cases, measured with a base current of 10µA. The simulated circuits were operated at a voltage of ±12V. Different simulators will give different results, but the trends will be the same.
+ +
Figure 1a - Aesthetic Addition Of Resistor To Balance The Collector Load
As shown, and with a 12mA collector current for Q3, the load imbalance at the LTP collectors is 94µA for Q1, and 1mA for Q2. Simply by reducing the value of R1 it is possible to improve matters, but it is still not going to give the performance of which the circuit is capable. Again, as shown the gain of the LTP is a rather dismal 32 (as measured at the collector of Q2). The inclusion of R3 is purely cosmetic. It does provide a convenient means to measure the gain of the LTP, but otherwise serves no purpose.
Changing R1 for a current source does not help with gain, but provides a worthwhile improvement in power supply hum rejection, and in particular improves common mode rejection. A common mode signal is one that is applied in the same phase and amplitude to both inputs at once.

The overall gain of this configuration (measured at the collector of Q3) is 842, but by reducing R2 to 1.8k it can be raised to 1,850. This also improves collector current matching in the LTP, but the value will be device dependent, and is not reliable for production units.
Figure 1b - A Current Mirror And Local Feedback Applied To The LTP
The circuit shown in Figure 1b has improved overall gain to 6,860, a fairly dramatic improvement on the earlier attempt. A further improvement in linearity is to be had by adding resistors (100 Ohm or thereabouts) into the emitter circuits of the current mirror transistors. This will swamp the base-emitter non-linearities, and provide greater tolerance to device gain variations. Overall gain is not affected.
Proper selection of the operating current will improve matters considerably, and also help to reduce distortion, especially if local negative feedback (as shown in Figure 1b) is applied. This has been discussed at length by various writers [ 1 ], and a bit of simple logic reveals that benefits are bound to accrue to the designer who takes this seriously.

Since the value of the transistor's internal emitter resistance (re) is determined by the current flow ...

    re = 26 / Ie   (Ie in mA, re in ohms)
at very low operating currents this value can be quite high. For example, at 0.5 mA, re will be about 52 ohms, increasing further as the current is reduced. Although this will introduce local feedback (and reduce the available gain), it is non-linear, resulting in distortion as the current varies during normal operation. Increasing the current, and using resistors (which are nice and linear) to bring the gain back to where it was before will reduce the distortion, since the resistor value - if properly chosen - will 'swamp' the variations in the internal re due to signal levels.
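The re relationship above is easy to tabulate. A minimal sketch, using exactly the 26 / Ie approximation from the text (the 26mV figure is the usual room-temperature approximation):

```python
def intrinsic_re(ie_ma):
    """Intrinsic emitter resistance (ohms) for an emitter current in mA.

    Uses the common room-temperature approximation re = 26 / Ie.
    """
    return 26.0 / ie_ma

for ie in (0.1, 0.5, 1.0, 5.0, 10.0):
    print(f"Ie = {ie:5.1f} mA -> re = {intrinsic_re(ie):6.1f} ohms")
# At 0.5mA, re is 52 ohms, as noted in the text; at 5mA it drops to 5.2 ohms.
```

The table makes the point plainly: at low currents re is large and signal-dependent, which is why swamping it with a fixed (linear) emitter resistor at a higher operating current reduces distortion.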
At small currents (where the current variation during operation is comparatively high), this internal resistance has a pronounced effect on the performance of the stage. Simple solutions to apparently complex problems abound.

Use of a current mirror as the load for the long-tailed pair (LTP) again improves linearity and gain, allowing either more local feedback elsewhere, or more global feedback. Either of these will improve the performance of an amplifier, provided precautions are taken to ensure stability - i.e. freedom from oscillation at any frequency or amplitude, regardless of applied load impedance.

There is another (not often used these days) version of an amplifier input stage. This is a single transistor, with the feedback applied to the emitter. It has been claimed by many that this is a grossly inferior circuit, but it does have some very nice characteristics. Technically, it uses current feedback, rather than the more common voltage feedback.
Figure 2a - Single Transistor Input Stage
So what is so nice about this? In a word, stability. An amplifier using this input stage requires little or no additional stabilisation (the 'Miller' cap, aka 'dominant pole') which is mandatory with amps having LTP input stages.
An amplifier using this input stage is referred to as a 'current feedback' (CFB) circuit, since the feedback 'node' (the emitter of the input transistor) is a very low impedance. The base circuit is the non-inverting input, and has a relatively high input impedance - but not generally as high as the differential pair. The +ve and -ve inputs are therefore asymmetrical. CFB amplifiers are used extensively in extremely fast linear ICs, and are capable of bandwidths in excess of 300MHz (that is not a misprint!).

This is the input stage used in the 10W Class-A amp (John Linsley-Hood's design, which is now part of TCCAS (The Class-A Audio Site)), and also in the 'El-Cheapo' amp described in my Projects Pages. "Well if it is so good, why doesn't anybody use it?" I hear you ask (you must have said it pretty loudly, then, because Australia is a long way from everywhere).
There is one basic limitation with this circuit, and this was 'created' by the sudden requirement of all power amplifiers to be able to faithfully reproduce DC, lest they be disgraced by reviewers and spurned by buyers.
(I remain perplexed by this, since I know for a fact that I cannot hear DC, my speakers cannot reproduce it, I know of no musical instrument that creates it, and it would probably sound pretty boring if any of the above did apply. If you don't believe me, connect a 1.5V torch cell to your speaker, and let me know if I'm wrong. I seem to recall something about phase shift being bandied about at the time, but given the acoustics involved in recording in the studio and reproducing in a typical listening room - not to mention the 'interesting' phase shifts generated by loudspeaker enclosures as the speaker approaches resonance - I feel that the effects of a few degrees of low frequency phase shift generated in an amplifier are unlikely to be audible. This is of course assuming that human ears are capable of resolving absolute phase anyway - which they have been categorically proven to be unable to do.)
This input stage cannot be DC coupled (at least not without using a level shifting circuit), because of the voltage drop in the emitter circuit and between the emitter-base junction of the transistor. Since these cannot be balanced out as they are with an LTP input stage, the input must be capacitively coupled.
In addition, some form of biasing circuit is needed, and unfortunately this will either have to be made adjustable (which means a trimpot), or an opamp can be used to act as a DC 'servo', comparing the output DC voltage with the zero volt reference and adjusting the input voltage to maintain 0V DC at the output. The use of such techniques will not be examined here, but they can provide DC offsets far lower than can be achieved using the amplifier circuit itself. There is no sonic degradation caused by the opamp (assuming for the sake of the discussion that decent opamps cause any sonic degradation in the first place), since it operates at DC only (it might have some small influence at 0.5Hz or so, but this is unlikely to be audible).

It has also been claimed that the single transistor has a lower gain than the LTP, but this is simply untrue. Open loop gain of the stage is - if anything - higher than that of a simple LTP for the same device current.
Figure 2b - Voltage Gain Comparison Of Input Stages
I simulated a very simple pair of circuits (shown in Figure 2b) to see the difference between the two. Collector current is approximately 1mA in each, and the output of the LTP shows a voltage gain of 1,770 from the combined circuit (the input stage cannot properly be measured by itself, since it operates as a current amp in both cases). In neither case did I worry about DC offset, since the effects are minimal for the purpose of simply looking at the gain - therefore offset is not shown. (Did you notice that the gains obtained in this simulation are completely different from those obtained earlier for the simple LTP circuit? I used a different voltage - the previous example used ±12V. This in no way invalidates anything, they are just different.)
By comparison, the open-loop gain of the single transistor stage is 2,000 - this (perhaps unexpectedly) is somewhat higher. Admittedly, the addition of a current mirror would improve the LTP even more dramatically, but do we really need that much more gain? A quick test indicates that we can get a gain of 3,570. This looks very impressive, but is only an increase of about 5dB compared to the single transistor. By the same logic, the single transistor only has a 1.06dB advantage over the simple LTP, however the difference may be moot ....
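The gain comparisons above are easier to judge in decibels. A quick sanity check, using the simulated gain figures quoted in the text:

```python
import math

def ratio_db(a, b):
    """Express the ratio of two voltage gains in dB."""
    return 20.0 * math.log10(a / b)

ltp, single, ltp_mirror = 1770.0, 2000.0, 3570.0

print(f"single transistor vs simple LTP: {ratio_db(single, ltp):.2f} dB")   # ~1.06 dB
print(f"LTP + mirror vs single transistor: {ratio_db(ltp_mirror, single):.2f} dB")  # ~5.03 dB
```

As the numbers show, even a near-doubling of raw gain amounts to only a few dB, which is why large gain differences between input stage topologies matter less than they first appear.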
Because the single transistor stage requires no dominant pole Miller capacitor for stability, it will maintain the gain over a much wider frequency range, so in the long run might actually be far superior to the LTP. Further tests were obviously required, so I built them. Real life is never quite like the simulated version, so there was a bit less gain from each circuit than the simulator claimed. The LTP came in with an open loop gain of 1,000, while the single transistor managed 1,400. The test conditions were a little different from the simulation, in that ±15 volts was used, so the gain difference is about what would be expected, and is very close to the ±12V results obtained in the first set of simulations on the LTP.

Distortion was interesting, with the LTP producing 0.7%, which was predominantly 3rd harmonic. The single transistor was slightly worse for the same output voltage with 0.9%, and this had a dominant 2nd harmonic. This is an open loop test, so it's really an examination of the 'worst-case' performance. If the gain is reduced with feedback, distortion falls dramatically. However, it doesn't necessarily fall in a direct relationship.

As expected, the LTP was unstable without a Miller capacitor, and 56pF managed to tame it down. Quite unexpectedly, the single transistor also required a Miller cap, but only when running open-loop. When it was allowed to have some feedback the oscillation disappeared. The LTP could not be operated without the Miller capacitor at any gain, and as the gain approached unity, more capacitance was needed to prevent oscillation.

The next step was a test of each circuit providing a gain of about 27, since this is around the 'normal' figure for a 60W power amp. Here, the LTP is clearly superior, with a level of distortion I could not measure. The single transistor circuit had 0.04% distortion, and again this was predominantly 2nd harmonic. In this mode, no Miller capacitor was needed for the single transistor, and it showed a very wide frequency response, with a slight rise in gain at frequencies above 100kHz. This was also noticeable with a 10kHz square wave, which had overshoot, although this was reasonably similar for positive and negative half-cycles. The LTP was well behaved, and showed no overshoot (it had the 56pF Miller cap installed), but it started to run out of gain at about 80kHz, and there was evidence of slew-rate limiting. This effect was not apparent with the single transistor.

All in all, I thought this was a worthwhile experiment, and the use of a simple resistor for the collector load of the gain stage allowed the final circuit to have a manageable gain. Had a current source or similar been used as the load, I would not have been able to measure the gain accurately, since the input levels would have been too small. As it was, noise pickup proved to be a major problem, and it was difficult to get accurate results without using the signal averaging capability on the oscilloscope.

There are many designs that you'll see with what appear to be fully symmetrical input stages. It's implied that the symmetry improves performance, but it may be an illusion. While the schematic looks symmetrical, the fact is that the NPN and PNP devices (or N-Channel and P-Channel FETs) are not perfect mirror images of each other. There are usually easily measured differences between NPN and PNP devices from the same family, and datasheets will quickly disabuse you of the notion that they are the same.

There is some evidence to show that an apparently symmetrical input stage may be better than a more conventional asymmetrical stage, but there are countless very good amps that don't use the extra circuitry. In some cases, the symmetry is continued throughout the amplifier (the output stages are normally symmetrical anyway, but the Class-A gain stage usually is not). Again, it's easy to run simulations that may show that (apparent) symmetry improves things. However, it requires more parts, and if they don't make a significant (and audible) difference then they are basically wasted.
Figure 2C - Asymmetrical Vs. Symmetrical Input Stage Examples
The drawing above shows an example, but excluding any caps needed for stability. I included current mirrors, but only used a resistor to bias the two complementary long-tailed pairs. In reality, these would probably be replaced by current sources. While the circuit certainly looks 'nice and symmetrical', that doesn't mean that it really is, electrically speaking. In a simulation, one thing you'd really expect would be lower DC offset with the symmetrical arrangement, but in fact it simulated as being slightly worse. Depending on how the voltage amplifier stages are configured, the distortion can be less, greater, or about the same. My simulation shows lower distortion, but simulators use ideal parts, and real parts may not actually improve matters at all if the devices aren't carefully matched.
Note that I've only shown the input stage and Class-A amplifier (aka 'VAS' - voltage amplifier stage), and have not included output stage bias networks or the output stage itself. Feedback is normally taken from the amplifier's output to the speakers, but as shown the circuit works as intended for analysis. One distinct benefit of the symmetrical stage is that the output current is also symmetrical because it's push-pull, and isn't limited by the current available to the Class-A amplifier stage. This means greater drive is possible, but it also makes it easier to destroy the output stage if it doesn't have protection circuits. With no load, the current through the Class-A stages is roughly the same - 5mA.

None of this means that designs that are symmetrical are worse than asymmetrical designs, but nor does it mean that a symmetrical amp is necessarily 'better'. Many claims are made, but usually with little or no science to substantiate them. There are undoubtedly some very fine amplifiers that use symmetrical input and gain stages, just as there are many very fine amplifiers that do not use symmetry as part of the design. It seems that to some people, what the circuit looks like is more important than how it performs. Sighted listening tests will invariably support this bias, and the myths become self-perpetuating.

A couple of things that help the symmetrical argument are lower noise (the gain stages are effectively in parallel, so gain is increased by 6dB, but noise by only 3dB) and higher overall gain. However, extra gain is not necessarily a good thing if it leads to instability, or requires much more complex networks to remain stable under all operating conditions. In amplifier design (and indeed virtually all electronics design), everything we do is ultimately a compromise, and it's the designer's job to get performance that meets or exceeds expectations, but not if it requires far greater complexity (unless it can't be avoided).
To obtain 'true' symmetry, use two amplifiers in BTL (bridge tied load) configuration. If the devices in each amplifier are matched, then the amplifier is completely symmetrical as far as the signal is concerned. Unfortunately, this comes with its own issues, not the least of which is that each amp 'sees' half the actual load impedance. That makes driving 4 ohm loads difficult, because the output current from each amp is double that which would be the case for a single amp driving the same impedance.

Very high current in BTL amps is always a problem, because the supply has to be able to provide it with minimal ripple, and transistors generally lose linearity at high currents. The entire amp becomes more complex (and expensive), but often with no genuine benefits. I've been asked about symmetrical designs many times, and my answer is the same - feel free to use a design, but don't expect it to measure (or sound) any better than a competently designed 'conventional' amplifier.

Based on the tests, there are pros and cons to all approaches (single transistor, long-tailed pair and symmetrical - and I'll bet that came as a surprise). The LTP in its simple form is a clear loser for gain, but use of a current mirror allows it to 'blow away' the single transistor, which cannot capitalise on this technique since there is nothing to mirror. Symmetrical inputs are considerably more complex, and you may (or may not) actually measure a difference between the simple LTP input and a symmetrical version.

Stability is very important to me, and I tend towards an amp which absolutely does not oscillate, even at the expense of a little more distortion. My own 60W reference amp is unconditionally stable with normal loads, and it uses an LTP for the input. Although I have experimented with symmetrical input stages, I have not published a design using this technique.

While there is no doubt at all that a symmetrical input stage can work very well, it does not automatically mean that the amp will sound any better. Adding the extra components makes the PCB more complex and the layout is critical. There's also a lot more to go wrong, especially with a compact input stage with many closely spaced transistors. Whether it's worth the effort depends on what you are trying to achieve, and you need to run tests to verify that what you think is 'better' is actually better. In many cases, a blind test may reveal that there's no audible difference, so the extra effort and parts serve no useful purpose.
A favourite pastime of many designers is to connect a small capacitor as shown in Figure 3 directly to the base of the input transistor. This is supposed to prevent detection (rectification) of radio frequency signals picked up by the input leads. Well, to a certain degree this is true, as the Resistor-Capacitor (RC) combination forms a low pass filter, which will reduce the amount of RF applied to the input. As shown this has a 3dB frequency of 159kHz (although this will be affected by the output impedance of the preceding stage).
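The 159kHz corner quoted above is just the -3dB point of a first-order RC low-pass. A minimal sketch - the 1k and 1nF values are assumptions chosen to match that figure, not necessarily the values in the drawing:

```python
import math

def rc_corner_hz(r_ohms, c_farads):
    """-3dB frequency of a first-order RC low-pass filter."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

f3 = rc_corner_hz(1e3, 1e-9)   # assumed: 1k series resistor, 1nF shunt capacitor
print(f"f(-3dB) = {f3 / 1e3:.1f} kHz")   # ~159.2 kHz
```

Note that the source impedance of the preceding stage adds to R, which is why the text says the actual corner frequency depends on what drives the input.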
Figure 3 - The Traditional Method for Preventing RF Detection
This approach might work if PCB track lengths in that part of the circuit are very short, ensuring minimal inductance. This is not always the case, and some layouts may include more than enough track length to not only act as an inductor, but as an antenna as well. Then things can get really sneaky, such as when the levels of RF energy are so high that some amount manages to get through anyway. I once had a workshop/lab which was triangulated by three TV transmission towers - very nasty. RF interference was a fact of life there.
The traditional method not only did not work, but often made matters worse by ensuring that the transistor base was fed from a very low impedance (from an RF perspective) because of C1. A vast number of commercial amplifiers and other equipment which I worked on in that time picked up quite unacceptable amounts of TV frame buzz, caused by the detection of the 50Hz vertical synchronisation pulses in the TV signal. As the picture component of analogue TV is (or was - it's almost completely digital now) amplitude modulated RF, this was readily converted into audio - of the most objectionable kind.
Figure 4 - Use of a Stopper Resistor to Prevent RF Detection
Figure 4 shows the remedy - but to be effective, R2 must be as close as possible to the base, or the performance is degraded. How does this work? Simple: the base-emitter junction of a transistor is a diode, and even when conducting it will retain non-linearities. These are often sufficient to enable the input stage to act as a crude AM detector, which will be quite effective with high-level TV or CB radio signals. Adding the external resistance again swamps the internal non-linearities, reducing the diode effect to negligible levels. This is not to say that it will entirely eliminate the problem where strong RF fields are present, but it will at least reduce it to 'nuisance' rather than 'intolerable' levels.
UPDATE: I have been advised by a reader who works in a transmitting station that connecting the capacitor directly between base and emitter (in conjunction with the stopper resistor) is very effective. He too found that the traditional method was useless, but that when high strength fields are encountered, the simple stopper is not enough.

With opamps, the equivalent solution is to connect the stopper resistor in series with the +ve input, and the capacitor between the +ve and -ve inputs, with no connection to earth.

In both cases it is essential to keep all leads and PCB tracks as short as possible, so they cannot act as an antenna for the RF. Needless to say, a shielded (and grounded) equipment case is mandatory in such conditions.
The Class-A amp stage is also known as the Voltage Amplification Stage (VAS); both terms are in common use, and are generally interchangeable. There are a number of traps here, not the least of which is that it is commonly assumed that the load (from the output stage) is infinite. Oh, sure, every designer knows that the Class-A stage must carry a current of at least 50% more than the output stage will draw, and this is easily calculated ...

    IA = Peak_V / Op_R / Op_Gain × 1.5
where IA is the Class-A current, Peak_V is the maximum voltage across the load Op_R, and Op_Gain is the current gain of the output transistor combination.
For a typical 100W / 8 Ohm amplifier this will be somewhere between 5 and 10mA. Assuming an output transistor combination with a current gain of 1000 (50 for the driver, and 20 for the power transistor), with an 8 Ohm load, the impedance presented to the Class-A stage will be about 2k Ohms, which is a little shy of infinity.
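The Class-A current formula can be worked through for the 100W / 8 ohm example (the hFE figures of 50 and 20 are the ones assumed in the text):

```python
import math

power_w, load_ohms = 100.0, 8.0
peak_v = math.sqrt(2.0 * power_w * load_ohms)   # peak output voltage: 40V for 100W into 8 ohms
op_gain = 50 * 20                               # driver hFE x output hFE = 1000

# IA = Peak_V / Op_R / Op_Gain x 1.5
ia = peak_v / load_ohms / op_gain * 1.5

print(f"Peak output voltage : {peak_v:.0f} V")
print(f"Class-A stage current: {ia * 1000:.1f} mA")   # 7.5mA - within the 5-10mA range quoted
```

The 1.5 factor is the 50% margin mentioned in the text, so the Class-A stage never runs out of base drive at full output.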
Added to this is the fact that the impedance reflected back is non-linear, since the driver and output transistors change their gain with current - as do all real-life semiconductors. There are some devices available today which are far better than the average, but they are still not perfect in this respect.

The voltage gain is typically about 0.95 to 0.97 with the compound pair configuration. It must be noted that this figure will only be true for mid-range currents, and will be reduced at lower and higher values. Figure 5 shows the basic stage type - the same basic amplifier we used before, with the addition of a current source as the collector load. Also common is the bootstrapped circuit (not shown here, but evident on many ESP designs).

There is not a lot of difference between current source and bootstrap circuits, but the current source gives slightly higher gain. With either type, there are some fairly simple additions which will improve linearity quite dramatically. Figure 5 shows the typical arrangement, including the 100pF dominant pole stabilisation capacitor connected between the Class-A transistor's collector and base.
Figure 5 - Typical Class-A Driver Configuration
It is important to try to make the Class-A stage capable of high gain, even when loaded by the output stage. There have been many different methods used to achieve this, but none is completely successful. The output stage is not a simple impedance, and it varies as the load impedance changes. Bipolar transistors reflect the load impedance back to the base, adjusted according to the device's gain. A potential problem is that some designers seem completely oblivious to this problem area, or create such amazingly complex 'solutions' as to make stabilisation (against oscillation) very difficult.
This is one area where MOSFETs may be found superior to BJTs. The gate capacitance is not affected by the load impedance, and nothing is reflected back to the Class-A driver. This will typically allow it to have higher gain - especially when low load impedances are involved. The Class-A driver needs only to be able to charge and discharge the gate capacitance of the MOSFETs, and this is not influenced by the output current or load.
Figure 6 - Improving Open Loop Output Impedance Of Class-A Driver
The above is simple and very effective. This straightforward addition of an emitter follower to the Class-A driver (with the 1k 'bootstrap' resistor) has increased the combined LTP and Class-A driver gain to 1,800,000 (yes, 1.8 million!) or 125dB (open loop and without the dominant pole capacitor connected). Open loop output impedance is about 10k, again without the cap. Once the latter is in circuit, gain is reduced to a slightly more sensible 37,000 at 1kHz with the 100pF value shown. Output impedance at 1kHz is now (comparatively) very low, at about 150 Ohms.
Note that in the above, I have used a 5k resistor instead of the more usual current source to bias the long-tailed pair. This is for clarity of the drawing, and not a suggestion that the current source should be forsaken in this position.

A special note for the unwary - If one is to use a single current control transistor for both the LTP and Class-A driver, do not use the Class-A (aka VAS - voltage amplifier stage) current as the reference, but rather the LTP. If not, the varying current in the Class-A circuit will cause modulation of the LTP emitter current, with results that are sure to be as unwelcome as they are unpredictable [ 4 ]. Where the current source reference is based on the VAS (Class-A driver), it's advisable to decouple the voltage reference for the LTP source to minimise interactions.

I have often seen amplifier designs where the circuit is of such complexity that one must wonder how they ever managed to stop them from becoming high power radio frequency oscillators. The maze of low value capacitors sometimes used - some with series resistance, some without - truly makes one wonder what the open loop frequency and phase response must look like. Couple this with the fact that many of these amps do not have wonderful specifications anyway, and one is forced to ponder what the designer was actually trying to accomplish (being 'different' is not a valid reason to publish or promote a circuit in my view, unless it offers some benefit otherwise unattainable).

Having carried out quite a few experiments, I am not convinced that vast amounts of gain from the input stage and Class-A amplifier stage are necessary or desirable. As long as the circuit is linear (i.e. has low distortion levels before the addition of feedback), the final result is likely to be satisfactory. I have seen many circuits with far more open loop gain than my reference amp (Project 3A), that in theory should be vastly superior - yet they apparently are not.

There are essentially two ways to create a constant current feed to the Class-A driver stage. The active current source is one method, and this is very common. It does introduce additional active devices, but it is possible to make a current source that has an impedance so close to infinity that it will be almost impossible to measure it without affecting the result just by attaching measurement equipment. For more detailed information on current sources, see the article Current Sources Sinks and Mirrors. Figure 6A shows an active current source for reference.
A simpler way is to use the bootstrap circuit, where a capacitor is used from the output to maintain a relatively constant voltage across a resistor. If the voltage across a resistor is constant, then it follows that the current flowing through it must also be constant. Figure 6A shows the circuit of a bootstrap constant current source. Unlike a true current source, the current through the bootstrap circuit will change with the supply voltage. This is a gradual change, and is outside the audio spectrum - or at least it should be if the circuit is designed correctly.
Figure 6A - Active And Bootstrapped Current Source
The bootstrap circuit works as follows. Under quiescent conditions, the output is at zero volts, and the positive supply is divided by Rb1 and Rb2. The base of the upper transistor will be at about +0.7V - just sufficient to bias the transistor. As the output swings positive or negative, the voltage swing is coupled via Cb, so the voltage across Rb2 remains constant. The current through Rb2 is therefore constant, since it maintains an essentially constant voltage across it. Note that this applies only for AC voltages, as the capacitor cannot retain an indefinite charge if there is a DC variation.
The overall difference is not great in a complete design. Although the current source is theoretically better, a bootstrap circuit is simpler and cheaper, and introduces no additional active devices. The capacitor needs to be large enough to ensure that the AC across it remains small (less than a few hundred millivolts) at the lowest frequency of interest. Assuming Rb1 and Rb2 are equal, the cap's voltage rating needs to be a minimum of ½ the positive supply voltage, but preferably greater.
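A rough sizing check for the bootstrap capacitor: its reactance at the lowest frequency of interest should be much smaller than Rb2, so the voltage across it stays small. A sketch with assumed values (Rb2 = 1k and a 20Hz lowest frequency are illustrative, not taken from a specific design):

```python
import math

def reactance_ohms(c_farads, f_hz):
    """Capacitive reactance Xc = 1 / (2 * pi * f * C)."""
    return 1.0 / (2.0 * math.pi * f_hz * c_farads)

rb2 = 1000.0    # assumed lower bootstrap resistor
f_low = 20.0    # lowest frequency of interest

for c_uf in (22.0, 100.0, 220.0):
    xc = reactance_ohms(c_uf * 1e-6, f_low)
    verdict = "OK" if xc < rb2 / 10 else "marginal"
    print(f"C = {c_uf:5.0f} uF -> Xc at 20Hz = {xc:6.1f} ohms ({verdict})")
```

With these assumptions, 22µF leaves a few hundred ohms of reactance at 20Hz (marginal), while 100µF or more keeps the AC voltage across the cap to a small fraction of that across Rb2.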
There are countless amplifiers which still use the Darlington type configuration, even though this was shown by many [ 2 ] to be inferior to the Sziklai / complementary pair. Both configurations (in basic form, since there are many variations) are shown in Figure 7. There are two main areas where the Darlington configuration is inferior, and we shall look at each. In the following, bias networks and Class-A driver(s) are not included, only the output and driver transistors ...
Figure 7 - The Basic Configurations Of Output Stages In Common Use
Of the two shown, it will be apparent that I have not included MOSFET output stages - this is because MOSFETs require no driver transistor as such; they are normally driven directly from the Class-A amplifier (or a modified version - often an additional long-tailed pair). As can be seen, the component count is the same for those shown, but instead of using two devices of the same polarity (both PNP or both NPN), the compound pair (also called a Sziklai pair) uses one device of each polarity. The final compound device assumes the characteristics of the driver in terms of polarity, and the Emitter, Base and Collector connections for each are shown. The 220 ohm resistor (or other value determined by the design) is added to prevent output transistor collector to base leakage current from allowing the device to turn itself on, and also speeds up the turn-off time. Omission of this resistor is not a common mistake to make, but it has been done. In some cases, you'll see a comparatively high value used. The result is degraded distortion figures, especially at high frequency, and poor thermal stability.
The value must be selected with reasonable care. If it is too low, the output transistor will not turn on under quiescent (no signal) conditions, the driver transistor(s) will be subject to excessive dissipation, and crossover distortion will result. If too high, turn-off performance of the output devices will be impaired and thermal stability will not be as good. The final value depends (to some extent) on the current in the Class-A driver stage and the gain of the driver transistor, but the final arbiter of quiescent current is the Vbe multiplier stage. These comments apply equally to the Darlington and compound pairs.

Values from 100 ohms up to a maximum of perhaps 1k should be fine for most amplifiers, with lower values used as power increases. High power creates higher currents throughout the output stage and makes the transistors harder to turn off again, especially at high frequencies. This can lead to a phenomenon called 'cross-conduction', which occurs because the transistors cannot switch off quickly enough, so there is a period where both power transistors are conducting simultaneously. It won't happen at normal audio frequencies, although you may get slightly higher than normal current drawn from the power supply even at 20kHz.
If an amp is driven to any reasonable power at higher frequencies, it can self-destruct if there is sufficient cross-conduction. The easiest way to reduce it is to use smaller resistors between base and emitter of the power transistors, but be aware that this increases the demands on the drivers. For example, with 220 ohm resistors as shown above, the resistors will only pass around 3-5mA, but if they are reduced to (say) 47 ohms, that increases to perhaps 16mA or more. The drivers have to supply this current, even at idle, and their quiescent power dissipation increases from 120mW to over 550mW with ±35V supplies. A heatsink for the drivers becomes a necessity.
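The driver dissipation figures above can be sanity-checked with a little arithmetic. This is a minimal sketch assuming a Vbe of about 0.65V and a driver that sees roughly the full 35V rail at idle; the article's 120mW/550mW figures also include the output device's idle base current, so the results here come out slightly lower.

```python
# Idle dissipation in the driver transistor for two base-emitter
# resistor values.  Assumed figures (not from the schematic):
# Vbe ~ 0.65 V, and the driver sees roughly the full 35 V rail
# at the quiescent point.
def driver_idle_dissipation(r_be, vbe=0.65, v_rail=35.0):
    """Return (resistor current in A, driver dissipation in W)."""
    i_r = vbe / r_be          # current forced through the driver by R(b-e)
    p_drv = v_rail * i_r      # quiescent driver dissipation (output base
                              # current at idle is ignored for simplicity)
    return i_r, p_drv

for r in (220, 47):
    i, p = driver_idle_dissipation(r)
    print(f"R = {r:>3} ohms: {i*1000:.1f} mA, {p*1000:.0f} mW")
```

Reducing the resistor from 220 to 47 ohms multiplies the idle dissipation by nearly five, which is why a driver heatsink becomes necessary.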
Normally, there should be little or no need to use resistors of less than ~100 ohms. If you want full power at 100kHz or more (why? it serves no purpose for an audio amplifier), you'll need to make these resistors even lower in value and ensure proper heatsinks for the drivers. You will also need to increase the power rating of the Zobel network resistor, or it will overheat at high frequencies.

It can be seen that in the Darlington configuration, there are two emitter-base junctions for each output device. Since each has its own thermal characteristic (a fall of about 2mV per degree Celsius), the combination can be difficult to make thermally stable. In addition, the gain of transistors often increases as they get hotter, compounding the problem. The bias 'servo', typically a transistor Vbe multiplier, must be mounted on the heatsink to ensure good thermal equilibrium with the output devices, and in some cases it can still barely maintain thermal stability.

If stability is not maintained, the amplifier may be subject to thermal runaway: once a certain output device temperature is reached, the continued fall of Vbe causes even more quiescent current to flow, causing the temperature to rise further, and so on. A point is reached where the power dissipated is so high that the output transistors fail - often with catastrophic results for the remainder of the circuit and/or the attached loudspeakers.
The Sziklai (compound) pair has only one controlling Vbe, and is thus far easier to stabilise. Since the single Vbe is that of the driver (which should not be mounted on the main heatsink, and in some cases will have no heatsink at all), the requirements for the Vbe multiplier are less stringent, mounting is far simpler, and thermal stability is generally very good to excellent.
I have used the compound pair since the early 1970s, and when I saw it for the first time, it made too much sense in all respects to ignore. Thermal stability in a fairly basic 100W/4 ohm amplifier of my design (of which many hundreds were built - it was the predecessor of the P3A design in the projects section) was assured with a simple 2-diode string - no adjustment was ever needed. (However, there were a couple of other tricks used at the time to guarantee stable operation.)

It would seem (at first glance at least) that there is nothing to this piece of circuitry. It is a very basic Vbe multiplier circuit, and seemingly nothing can go wrong. This is almost true, except for the following points.
Figure 9 - The Basic Bias Servo Circuit
The design of many amps (especially those using a Darlington output stage) requires that the bias servo be made adjustable, to account for the differing characteristics of the transistors. If resistor R1 (in Fig 9) is instead a trimpot (i.e. variable resistor), what happens when (if) the wiper decides (through age, contamination or rough handling) to go open-circuit?
The answer is simple - the voltage across the bias servo becomes the full supply voltage (less a transistor drop or two), causing both the positive and negative output devices to turn on as hard as they possibly can. The result is the instantaneous destruction of the output devices - this happens so fast that fuses cannot possibly prevent it, and even sophisticated load-line output protection circuitry is unlikely to be able to save the day.

The answer, of course, is so simple that it should be immediately obvious to all, but sadly this is not always the case. By making R2 the variable component, should it become open-circuited the bias servo simply removes the bias. This will introduce crossover distortion, but the devices are saved. To prevent the possibility of reducing the pot value to 0 ohms (which would have the same effect as described above!), there is often a series resistor, whose value is selected to allow adequate adjustment while retaining a respectable safety margin. It's not essential, provided the setup instructions are followed carefully.
An additional precaution must be taken here: if the resistor values are too low, the voltage seen by the output transistors is simply the voltage drop across the resistors, with the transistor having little or no control over the result. This is easily avoided by ensuring that the resistor current is about 1/10 of the total Class-A stage bias current.
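The basic Vbe multiplier arithmetic is straightforward. The sketch below uses illustrative component values (they are not taken from Figure 9): a divider sized to give roughly the four Vbe drops a Darlington output stage needs, while keeping the divider current near the 1/10-of-bias-current rule of thumb for an assumed 8mA Class-A stage.

```python
# Minimal Vbe multiplier ('bias servo') sketch with hypothetical values.
def vbe_multiplier(r1, r2, vbe=0.65):
    """Voltage across a Vbe multiplier: Vbe * (1 + R1 / R2)."""
    return vbe * (1.0 + r1 / r2)

r1, r2 = 2000.0, 680.0            # illustrative divider, not from Figure 9
v_bias = vbe_multiplier(r1, r2)   # ~2.6 V, enough for four Vbe drops
i_divider = v_bias / (r1 + r2)    # compare with ~1/10 of an 8 mA VAS current
print(f"V_bias = {v_bias:.2f} V, divider current = {i_divider*1e3:.2f} mA")
```

Note that the transistor only regulates properly when its collector current dominates the divider current, which is exactly the precaution described above.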
It is also possible to make the resistance too large, in which case the bias servo will be less stable with varying current. This may also give the bias servo too much gain, causing the amplifier's quiescent current to fall as it gets hotter. While this is a good thing from the reliability point of view, if it causes crossover distortion to appear when the amp is hot, the audible effect will obviously be disappointing. It will generally be necessary to experiment with the values to ensure that stability is maintained - no way to calculate this comes to mind, although I am sure it is possible. The base-emitter voltage falls at 2mV/°C, but the variation in gain with temperature is not as readily calculated.

As a secondary safeguard, a suitable diode string in parallel with the servo may be useful. The diodes should be chosen to prevent destructive current, but some method of over-temperature protection will still be needed. This can be a fan blowing onto the heatsink, or a thermal cutout to switch off the power if the amp gets too hot.

Note that if the output stage uses the Darlington arrangement, the bias servo transistor must be located on the main heatsink. If you use a compound (Sziklai) pair, it is imperative that the bias servo senses the driver transistor(s) (which should not be on the main heatsink). Failure to locate the bias servo properly is an invitation to output stage failure through thermal runaway.
Numerous articles have been written on the superior linearity of the compound (Sziklai) stage (Otala [3], Self and Linsley Hood among others), and I cannot help but be astonished when I see a new design in a magazine still using the Darlington arrangement. The compound pair requires no more components - the same components are simply arranged in a different manner. It was with great gusto that an Australian electronics magazine proudly announced (in 1998) that "this is the first time we have used this arrangement in a published design" (or words to that effect). I don't know what reasons they may have had for not using the compound pair in every design they published (the magazine is a lot younger than I am). Words fail me. The magazine in question is not the only one, and the Web abounds with designs old and new - all using the Darlington emitter-follower.

This is not to say that the Darlington stage shouldn't be used - there are many fine amplifiers that use it, and with a bit of extra effort to get the bias servo right, such amps will give many years of reliable service. It is particularly suited to very high power amps because of its simplicity, especially with multiple paralleled output devices. Parallel operation is more irksome with the Sziklai configuration: having to use additional emitter resistors for each output transistor (in series with its physical emitter) is a nuisance, but the arrangement works very well indeed. An example of paralleled Sziklai pairs is seen in Project 27.
Darlington

    Driver hFE    O/P Transistor hFE    Total Gain
    50            25                    1310

Compound (Sziklai)

    Driver hFE    O/P Transistor hFE    Total Gain
    50            25                    1290
The lower gain of the compound pair indicates that there is internal local negative feedback inherent in the configuration, and all tests that have been performed indicate that this is indeed true. Although the gain difference is not great, much of the improved linearity can be assumed to result from the fact that only one emitter-base junction is directly involved in the signal path rather than two, so only one set of direct non-linearities is brought into the equation. The second (output) device effectively acts as a buffer for the driver.
Having said that, there are some very well respected amplifiers using Darlington emitter-follower output stages. There are no hard and fast rules that can be applied to make the perfect amplifier (especially since it does not yet exist), and with careful design it is quite possible to make a very fine amplifier using almost any topology.

One thing that can (and does) cause problems is the output stage gain. If the stage is biased to a lower than optimum current, its gain falls dramatically. If the output stage's current gain falls too far, the entire amplifier effectively has no gain, so negative feedback can do nothing to reduce crossover distortion. This is why an amplifier should be as linear as possible before the application of feedback, but claims that feedback "ruins the sound" are divorced from reality. While it's possible to design an amplifier with no feedback, there's really very little point. It won't perform as well as a more conventional design, regardless of 'reviews' that may extol its alleged virtues.
It is a simple fact of life that an emitter follower (whether Darlington or compound) is perfectly happy to become an oscillator - generally at very high frequencies. This is especially true when the output lead looks like a tuned circuit. A length of speaker cable, while quite innocuous at audio frequencies, is a transmission line at some frequency determined by its length, conductor diameter and conductor spacing. A copy of the ARRL handbook (from any year) will provide all the formulae needed to calculate this, if you really want to go that far.

All power amplifiers (well, nearly all) use emitter-follower output stages, and when a speaker lead and speaker (or even a non-inductive dummy load) are connected, oscillation often results. This nearly always happens when the amp is driven, and is more likely when current is being drawn from the circuit. It is a little sad that the compound pair is actually more prone to this errant behaviour than a Darlington, possibly because the driver is the controlling element (with its emitter connected to the load) and has a higher bandwidth.

Some of the 'super' cables - much beloved by audiophiles - act as RF transmission lines rather better than ordinary figure-8, zip cord or 3-core mains flex, and are therefore more likely to cause this problem.
Figure 10 - The Standard Output Arrangement For Power Amp Stability
The conventional Zobel network (consisting of the 10 Ohm resistor and 100nF capacitor) generally swamps the external transmission line effect of the speaker cables and loudspeaker internal wiring, and provides stability under most normal operating conditions.
In a great many amplifiers, the amp may oscillate with no load or speaker cables attached, and a Zobel network as shown stops this too. The reasons are a little difficult to see at first, but can be traced to small amounts of stray inductance and capacitance around the output stage in particular. At very high frequencies, these strays can easily form a tuned circuit, causing phase shift between the amp's output and its inverting input. At such frequencies, few amplifiers have much phase margin (the difference between the amplifier's phase shift and 180°), so the strays may only need to add a few additional degrees of phase shift to cause oscillation. Because there is very little feedback at these frequencies, the output impedance can be much higher than expected.

At these frequencies, the Zobel capacitor is essentially a short circuit, so there is now a 10 ohm resistor in parallel with a high impedance tuned circuit. The 10 ohm resistor ruins the Q of the tuned circuit(s) and applies heavy damping, negating the phase shift to a large degree and restoring stability. Personally, I don't recommend that this network be omitted from any amplifier, even if it appears to be stable without it.
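The frequency-selective behaviour of the Zobel network is easy to verify numerically. A quick sketch, using the 10 ohm / 100nF values discussed above:

```python
import math

# Impedance magnitude of a series R-C Zobel network (10 ohms + 100 nF),
# showing why it is effectively invisible in the audio band but looks
# like the 10 ohm resistor alone at radio frequencies.
def zobel_impedance(freq, r=10.0, c=100e-9):
    """Magnitude of the series R-C impedance at a given frequency."""
    xc = 1.0 / (2.0 * math.pi * freq * c)   # capacitive reactance
    return math.hypot(r, xc)                # |R + jXc|

for f in (1e3, 20e3, 1e6):
    print(f"{f/1e3:>7.0f} kHz: |Z| = {zobel_impedance(f):.1f} ohms")
```

At 1kHz the network presents well over a kilohm (negligible loading), while at 1MHz it is essentially just the 10 ohm resistor, damping any stray tuned circuits.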
With capacitive loading (as may be the case when a loudspeaker and passive crossover are connected), the Zobel network has very little additional effect - it may have none whatsoever. The only sure way to prevent oscillation or severe ringing with highly capacitive cables is to include an inductor in the output of the amplifier. This should be bypassed with a suitable resistor to reduce the Q of the inductor; the typical arrangement is shown in Fig 10. For readers wishing to explore this in greater depth, read 'The Audio Power Interface' [2]. In many cases it may be better to use a far lower resistance than the 10 ohms normally specified - I am thinking of around 1 ohm or so. Some National Semiconductor power opamps specify 2.7 ohms as the optimum. Ideally, cables with low inductance and high capacitance should always have an additional 100nF/10 ohm Zobel network at the loudspeaker end. When this is done, the cable no longer appears as a capacitor at high frequencies. Regrettably, few (if any) loudspeaker manufacturers see fit to include this at the input terminals.

Another alternative is to include a resistor in series with the output of the amplifier, but this naturally has the dual effect of reducing power output and reducing damping factor. At resistor values sufficient to prevent oscillation, these losses become excessive - and all wasted power must be converted into heat in the resistor.
The choice of inductor size is not difficult - for an 8 ohm load it will typically be a maximum of 20µH; anything larger will cause unacceptable attenuation of high frequencies. A 6µH inductor as shown in Figure 10 will introduce a low frequency loss (assuming 0.03 ohm resistance) of 0.03dB, and will be about 0.2dB down at 20kHz. These losses are insignificant and will not be audible. In contrast, ringing (or in extreme cases, oscillation) of the output devices will be audible (even at very low levels) as increased distortion, and in extreme cases may destroy the transistors.

It is not realised by everyone, but even a single unity gain transistor stage can oscillate. Opamps and power amps commonly use emitter followers for their outputs, and failure to isolate the transistor stage from cable effects can (and regularly does) cause the stage to oscillate. All opamps that connect to the outside world (via connectors on the front or rear panel, for example) must use a series resistor. Values from 47 to 220 ohms are usually enough. I use 100 ohms as a matter of course, but lower (or higher) values may be needed, depending on what you are trying to achieve.
Figure 11 - Lumped Component Transmission Line Causes Emitter Follower To Oscillate
In simulations and on the lab bench, I have been able to make a single transistor emitter follower circuit oscillate quite happily, with a real transmission line (such as a length of co-axial cable), or a lumped component equivalent of a transmission line, consisting of a 500µH inductor and 100pF as a series tuned circuit. This is shown in Figure 11.
Figure 12 - Simulator Oscilloscope Display Of Oscillation In Emitter Follower
This effect is made worse as the source impedance is lowered, but even a base stopper resistor will not prevent oscillation - only swamping the transmission line with a Zobel network or a series resistance succeeds. In case you were wondering why the oscilloscope take-off point is at the junction of the L and C components, this allows series resonance to amplify the HF component, making it more readily seen.

For power amplifiers, this problem is solved by using a Zobel network, optionally with a series inductor. For low-level stages, it is more sensible to use a resistor in series with the output. The resistor is normally between 22 and 100 ohms, and this will be seen in all ESP designs where an opamp connects to the outside world (or even an internal cable). A resistor can be used with power amps too, but at the expense of power loss, heat, and loss of damping factor. For a power amp, the output inductor can be replaced by a 1 ohm resistor (sometimes less), but this is extremely rare.
In my own amp (P3A being the latest incarnation), I did not use an output inductor, but instead made the dominant pole (the capacitance from collector to base of the Class-A driver) a little larger than you may see in other designs. This keeps the amp stable under all operating conditions, but at the expense of slew rate (and consequently slew-rate-limited power at high frequencies). This was initially largely an economic decision, since a couple of ceramic capacitors are much cheaper than an inductor, and the amp was used in large numbers at the time largely for musical instrument amplification, where an extended high frequency response was actually undesirable. Full power bandwidth - the ability of an amp to supply full power over its entire operating frequency range - is a sure way to destroy hearing, HF horn drivers, etc., in a live music situation, so the compromise was not a limitation. P3A does allow the value to be changed, provided you have an oscilloscope and can check for (sometimes parasitic) oscillation. However ...
There is another reason that a series output inductor may be helpful. It has been suggested (by whom I cannot remember) that radio frequencies picked up by the speaker leads may be injected back into the input stage via the negative feedback path. When one looks at a typical circuit, this seems plausible, but I have not tested the theory in any depth.

The basics behind it are not too difficult to work out. Since there must be a dominant pole in the amplifier's open-loop frequency response (the capacitor shown in all figures that include a Class-A amplifier stage) if it is to remain stable when feedback is applied, it follows that as internal gain decreases with increasing frequency, the output impedance must rise (due to less global feedback). Indeed this is the case, and by the time the frequency is into the MHz region, there will be negligible loading of any such signals by the output stage.

If appropriate precautions are not taken for the negative feedback return path (as in Figure 4), it is entirely likely that RF detection could occur. In my own bi-amped system (which uses the predecessor of the P3A amplifier described above, still without an output inductor), I recently had problems with detection of a local AM radio station. Fitting RF 'EMI' suppression chokes (basically, looping the speaker cable through a ferrite ring 3 or 4 times) completely eliminated the problem, so I must conclude that it is indeed possible, even probable.
If an amplifier is ever likely to be connected to 'exotic' (expensive 'audiophile') cables, then it is essential that an output inductor is used. As noted above, the inductance has to be limited to prevent high frequency rolloff, and for load impedances down to 4 ohms it should not exceed about 10µH. In most cases, as many turns as will fit onto a 10 ohm 1W resistor will be sufficient, and the wire used must be thick enough to carry the full speaker current.

The maximum output current of a power amplifier is often thought to be something that affects the output transistors only, and that adding more transistors will automatically provide more current to drive lower impedances. This is only partially true, because bipolar transistors need base current, and this must come from the driver stage.

It is common to bias the Class-A driver stage so that it can provide between 1.5 and 5 times the expected base current needed by the output transistors and their drivers. As the current in this stage is lowered, there is likely to be a substantial increase in distortion, since the current will change by a larger percentage. If the Class-A driver current is too high, there will be too much heat to get rid of, and it is possible to exceed the transistor's maximum ratings. I normally work to a figure of about double the expected output device base current, but in some cases it will be more or less than this. We also have to design around the lowest expected current gain for all transistors used.
As an example, let's look at a typical power amplifier output stage. Assuming a power supply of ±35V, the maximum output current will be 35 / 8 = 4.375A (an 8 ohm load is assumed). Since we know that there will be some losses in the driver/power transistor combination, we can safely assume a maximum (peak) current of 4A. A suitable power transistor may be specified for a minimum gain (hFE) of 25 at a collector current of 4A. The driver transistors will generally have a higher gain - perhaps 50 at a collector current of 250mA. The product of the two current gains is accurate enough for what we need, and (rounding down for a small safety margin) this gives a combined hFE of 1,000. The peak base current will therefore be 4mA.
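The worked example above condenses to a few lines of arithmetic. A quick sketch (the combined gain of 1,000 is the article's conservative rounding of 25 × 50):

```python
# Output stage current budget for the +/-35 V, 8 ohm example above.
v_rail = 35.0
r_load = 8.0
i_max = v_rail / r_load        # theoretical peak output current, 4.375 A
i_peak = 4.0                   # allowing for driver/output losses
hfe_combined = 1000            # conservative rounding of 25 * 50
i_base = i_peak / hfe_combined # peak base current the driver must supply
print(f"i_max = {i_max:.3f} A, peak base current = {i_base*1000:.0f} mA")
```

This 4mA figure is what the Class-A driver stage must comfortably exceed, which is where the 8mA quiescent current in the next paragraph comes from.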
If we choose a Class-A driver current of double the expected output device base current, the driver will operate at about 8mA. This could be achieved with a current source, or a bootstrapped circuit using a pair of 2.2k resistors in series. At the maximum voltage swing (close to ±35V), the driver current will increase to 12mA or decrease to 4mA, depending on the polarity. The current source or bootstrap circuit will maintain a constant current, but the driver has to deal with a current that varies by ±4mA as the current into the load changes.

If the load impedance is dropped to 4 ohms, the current source will still only be able to provide 8mA, so output current will be limited to 8A - the driver at this point in the cycle has zero current. At the opposite extreme, the driver has to cope with 16mA when it is turned on fully. At lower impedances the driver will be able to supply more current, but the current source will steadfastly refuse to provide more than the 8mA it was designed for, so the peak output current will be limited to 8A in one direction (when the current source provides the drive signal and the Class-A driver is turned off), or some other (possibly destructive) maximum current in the opposite polarity.

But hang on! A Class-A driver is called a Class-A driver because it never turns off - we now have a Class-AB driver, which is not the desired objective and doesn't even work for a single-ended amplifier stage. The power amplifier will clip asymmetrically, and is no longer operating in its linear range - it is distorting.

Adding more power transistors provides a very limited benefit, since the maximum base current is still limited by the current source supplying the Class-A driver. Obtaining maximum power at lower impedances requires that either the gain of the output stage is increased, or the Class-A driver current is increased. Increasing the gain of the output stage devices is not trivial - you must either use a different topology or higher gain power and driver transistors.

The design phase of an amplifier follows similar guidelines, regardless of topology. From Amplifier Basics ...
Power Output vs. Impedance

The power output is determined by the load impedance and the available voltage and current of the amplifier. An amplifier that is capable of a maximum of 2A output current will be unable to provide more just because you want it to. Such an amp will be limited to 16W 'RMS' into 8 ohms, regardless of the supply voltage. Likewise, an amp with a supply voltage of ±16V will be unable to provide more than 16W into 8 ohms, regardless of the available current. Having more current available will allow the amp to provide (for example) 32W into 4 ohms (4A peak current) or 64W into 2 ohms (8A peak current), but will give no more power into 8 ohms than the supply voltage will allow.

Driver Current

Especially in the case of bipolar transistors, the driver stage must be able to supply enough current to the output transistors - with MOSFETs, the driver must be able to charge and discharge the gate-source capacitance quickly enough to allow you to get the needed power at the highest frequencies of interest.

Class-A Driver Stage

For the sake of simplicity, if bipolar output transistors have a gain of 20 at the maximum current into the load, the drivers must be able to supply enough base current to allow this. If the maximum current is 4A, then the drivers must be able to supply at least 200mA of base current to the output devices. The stages that come before the drivers must be able to supply sufficient current for the load imposed. The Class-A driver of a bipolar or MOSFET amp must be able to supply enough current to satisfy the base current needs of bipolar drivers, or the gate capacitance of MOSFETs.

Input Stages

Again using the bipolar example from above, the maximum base current for the output transistors was 200mA. If the drivers have a minimum specified gain of 50, then their base current will be 200 / 50 = 4mA. Since the Class-A driver must operate in Class-A (what a surprise), it will need to operate with a current of 1.5 to 5 times the expected maximum driver transistor base current, to ensure that it never turns off. The same applies to a MOSFET amp that expects (for example) a maximum gate capacitance charge (or discharge) current of 4mA at the highest amplitudes and frequencies. For the sake of the exercise, we shall assume a Class-A driver (VAS) current of double the base current needs of the drivers ... 8mA. The input stages of all transistor amps must be able to supply the base current of the Class-A driver. This time, a margin of between 2 and 5 times the expected maximum base current is needed. If the Class-A driver operates with a quiescent current of 8mA, the maximum current will be 12mA (quiescent plus driver base current). Assuming a gain of 50 (again), this means that the input stage has to be able to supply 12 / 50 = 240µA, so it must operate at a minimum current of 240µA × 2 = 480µA to preserve linearity.

Input Current

The input current of the first stage determines the input impedance of the amplifier. Using the above figures, with a collector current of 480µA, the base current will be 4.8µA for input devices with a gain of 100. If maximum power is developed with an input voltage of 1V, then the impedance is 208k (R = V / I). Since the stage must be biased, we apply the same rules as before - a margin of between 2 and 5 - so the maximum value of the bias resistors should be 208 / 2 = 104k. A lower value is preferred, and I suggest that a factor of 5 is more appropriate, giving 208 / 5 = 42k (47k can be used without a problem).
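The current-budget chain described in the design notes above can be worked through numerically, end to end (all gains are the minimum specified values from the text):

```python
# Current budget from output stage back to the input stage.
i_out_peak = 4.0
hfe_out = 20
i_out_base = i_out_peak / hfe_out         # 200 mA into the output bases
hfe_driver = 50
i_driver_base = i_out_base / hfe_driver   # 4 mA into the driver bases
i_vas = 2 * i_driver_base                 # Class-A (VAS) current: 8 mA
i_vas_base = (i_vas + i_driver_base) / hfe_driver  # VAS base current, 240 uA
i_input = 2 * i_vas_base                  # input stage current, margin of 2
hfe_input = 100
i_in_base = i_input / hfe_input           # amplifier input bias current
z_in = 1.0 / i_in_base                    # impedance for a 1 V input signal
print(f"VAS base = {i_vas_base*1e6:.0f} uA, input stage = {i_input*1e6:.0f} uA")
print(f"input impedance ~ {z_in/1000:.0f} k")
```

Every figure in the quoted notes (200mA, 4mA, 8mA, 240µA, 480µA, 208k) falls out of this simple chain of divisions.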
These are only guidelines (of course), and there are many cases where currents are greater (or smaller) than suggested. The end result is in the performance of the amp, and the textbook approach is not always going to give the result you are after. Remember that higher value resistors mean greater thermal noise, although this is rarely a problem with power amps.
Be careful if you decide to use a lower than normal feedback resistor, as it may run quite hot. A 100W (8 ohm) amp will have about 28V RMS across the feedback resistance, so a 22k resistor will dissipate around 35mW. Reduce that to 1k (which would be silly for a variety of reasons), and dissipation is nearly 800mW. Dissipation increases with the square of the voltage, so even a 22k resistor will dissipate about 220mW in a 600W amplifier.
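Since the full output voltage appears across the feedback resistance, its dissipation is simply P_out × R_load / R_fb. A quick check of the figures above:

```python
# Dissipation in the feedback resistor: V_out^2 / R_fb, and since
# V_out^2 = P_out * R_load, this is just P_out * R_load / R_fb.
def fb_dissipation(p_out, r_load, r_fb):
    """Feedback resistor dissipation in watts at full sinewave power."""
    return p_out * r_load / r_fb

print(f"{fb_dissipation(100, 8, 22e3)*1000:.0f} mW")  # 100 W / 8 ohm, 22k
print(f"{fb_dissipation(100, 8, 1e3)*1000:.0f} mW")   # same amp, 1k - hot!
print(f"{fb_dissipation(600, 8, 22e3)*1000:.0f} mW")  # 600 W amplifier, 22k
```

The exact 100W figure is about 36mW (the 35mW quoted above comes from rounding the output voltage to 28V), while the 1k case really does approach 800mW.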
Reality is different of course - we generally don't listen to full power sinewaves, and normal music keeps the feedback resistor cool enough not to cause problems in the majority of designs. Resistors that are run close to their maximum power (or voltage) ratings have a much shorter life than those that run cool and/or well within their voltage ratings. And yes, resistors do have voltage ratings that are independent of their power ratings.
When specified, transformer regulation is based upon a resistive load over the full cycle, but when the transformer feeds a capacitor input filter (99.9% of all amplifier power supplies), the quoted and measured figures will never match.

Because the applied AC from the transformer secondary spends most of each cycle at a voltage lower than that of the capacitor, the diodes conduct only briefly. During those short periods, the transformer has to replace all the energy drained from the capacitor since the previous conduction period, as well as provide the instantaneous output current.

Consider a power supply as shown in Figure 13. This is a completely conventional full-wave capacitor input filter (shown as single polarity for convenience). The circuit is assumed to have a total effective series resistance of 1 ohm, made up of the transformer winding resistances (primary and secondary). The capacitor C1 has a value of 4,700µF, and the transformer has a secondary voltage of 28V. The diodes will lose around 760mV at full power.
Figure 13 - Full Wave, Capacitor Input Filter Rectifier
The transformer is rated at 60VA, and has a primary resistance of 4.3 ohms and a secondary resistance of 0.5 ohm. This calculates to an internal copper loss resistance of 1.0 ohm.

With a 20 ohm load as shown and an output current of 1.57A, diode conduction lasts about 3.5ms, and the peak current flowing into the capacitor is 4.8A - 100 times per second (10ms intervals). Diode conduction therefore occupies 35% of each cycle. The RMS current in the transformer secondary is 2.84A.
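Between charging pulses the capacitor alone supplies the load, so the peak-to-peak ripple can be estimated from dV = I × dt / C, using the conduction time just given:

```python
# Peak-to-peak ripple estimate for the Figure 13 supply: the 4,700 uF
# capacitor supplies 1.57 A for the 6.5 ms between diode conduction periods.
i_dc = 1.57          # load current, A
t_period = 10e-3     # 10 ms between charging pulses (100 Hz full-wave)
t_conduct = 3.5e-3   # diode conduction time from the text
c = 4700e-6          # filter capacitance, F
dv = i_dc * (t_period - t_conduct) / c
print(f"ripple ~ {dv:.2f} V peak-to-peak")
```

This simple estimate gives about 2.2V peak-to-peak, agreeing nicely with the measured sawtooth ripple quoted for this supply.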
    Secondary AC current             2.84A RMS       6.4A peak
    Secondary AC volts (loaded)      25.9V RMS       34.1V peak
    Secondary AC volts (unloaded)    28.0V RMS       39.6V peak
    DC current                       1.57A
    Capacitor ripple current         2.36A
    DC voltage (loaded)              31.6V
    DC voltage (unloaded)            38.3V
    DC ripple voltage                692mV RMS       2.2V peak-peak
Ripple across the load is 2.2V peak-peak (692mV RMS), and is the expected sawtooth waveform. Average DC loaded voltage is 31.6V. The no-load voltage of this supply is 38.3V, so at a load current of 1.57A, the regulation is ...
    Reg (%) = (( Vn - Vl ) / Vn ) × 100

where Vn is the no-load voltage, and Vl is the loaded voltage.
For this example, this works out to close enough to 17% which is hardly a good result. By comparison, the actual transformer regulation would be in the order of 8% for a load current of 2.14A at 28V. Note that the RMS current in the secondary of the transformer is 2.84A AC (approximately the DC current multiplied by 1.8) for a load current of 1.57A DC - this must be so, since otherwise we would be getting something for nothing - a practice frowned upon by physics and the taxman.
Output power is 31.6V × 1.57A = 49.6W, and the transformer output is 28V × 2.84A = 79VA.

The real input power to the transformer is 60W, so the power factor is ...
    PF (Power Factor) = Actual Power / Apparent Power = 60 / 79 = 0.76
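Both the regulation and power-factor figures follow directly from the measurements tabulated above:

```python
# Regulation and power factor for the Figure 13 supply example.
v_noload, v_loaded = 38.3, 31.6
reg = (v_noload - v_loaded) / v_noload * 100.0   # regulation, percent

p_real = 60.0            # real input power, W
p_apparent = 79.0        # 28 V x 2.84 A, rounded as in the text, VA
pf = p_real / p_apparent
print(f"regulation = {reg:.1f} %, power factor = {pf:.2f}")
```

The regulation works out at about 17.5%, and the power factor at about 0.76, matching the figures in the text.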
There are many losses to account for, with most being caused by the diode voltage drop (600mW each diode - 2.4W total) and winding resistance of the transformer (8W at full load). Even the capacitors ESR (equivalent series resistance) adds a small loss, as does external wiring. There is an additional loss as well - the transformer core's 'iron loss' - being a combination of the current needed to maintain the transformer's flux level, plus eddy current losses which heat the core itself. Iron loss is most significant at no load and can generally be ignored at full load.
Even though the transformer is overloaded for this example, provided the overload is short-term no damage will be caused. Transformers are typically rated for average power (VA), and can sustain large overloads as long as the average long-term rating is not exceeded. The duration of an acceptable overload is largely determined by the thermal mass of the transformer itself.
Capacitor Ripple Current - It is well known that bigger transformers have better efficiency than small ones, so it is common practice to use a transformer that is over-rated for the application. This can improve the regulation considerably, but also places greater stresses on the filter capacitor due to higher ripple current. This is quoted in manufacturer data for capacitors intended for use in power supplies, and must not be exceeded. Excessive ripple current will cause overheating and eventual failure of the capacitor.

Large capacitors usually have a higher ripple current rating than small ones (both physical size and capacitance). It is useful to know that two 4,700µF caps will usually have a higher combined ripple current rating than a single 10,000µF cap, and will also show a lower ESR (equivalent series resistance). The combination will generally be cheaper as well - one of the very few instances where you really can get something for nothing.
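The parallel-capacitor advantage can be shown with a rough sketch. The datasheet figures below are invented purely to illustrate the principle (capacitances and ripple ratings add, ESRs combine like parallel resistors) - check real manufacturer data before relying on them:

```python
# Illustrative only: the ESR and ripple-current figures are invented, not from
# any specific manufacturer's datasheet.
def parallel_caps(c_uf, esr_ohm, ripple_a, n):
    """Combined ratings of n identical capacitors connected in parallel."""
    return {
        "capacitance_uF": c_uf * n,       # capacitances add in parallel
        "esr_ohm": esr_ohm / n,           # ESRs combine like parallel resistors
        "ripple_rating_A": ripple_a * n,  # ripple-current ratings add
    }

# Two hypothetical 4,700 uF parts (40 milliohm ESR, 3 A ripple rating each)
two_small = parallel_caps(4700, 0.040, 3.0, n=2)
print(two_small)  # 9400 uF total, 0.02 ohm ESR, 6 A combined ripple rating
```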
For further reading on this topic, see the Linear Power Supply Design article.
If I never hear someone complaining that "distortion measurements are invalid, and a waste of time" again, it will be too soon. I am so fed up with self-proclaimed experts (where 'x' is an unknown quantity, and a 'spurt' is a drip under pressure) claiming that 'real world' signals are so much more complicated than a sinewave, and that static distortion measurements are completely meaningless. Likewise, some complain that sinewaves are 'too simple', and that somehow they fail to stress an amplifier as much as music will.
Measurements are not meaningless, and real world signals are sinewaves! The only difference is that with music, there is usually a large number of sinewaves, all added together. There is not a myriad of simultaneous signals passing through an amp, just one (for a single channel, naturally).
Just as physics tells us that no two masses can occupy the same physical space at the same time, so it is with voltages and currents. There can only ever be one value of voltage and one value of current flowing through a single circuit element at any instant of time - if it were any different, the concept of digital recording could never exist, since in a digital recording the instantaneous voltage is sampled and digitised at the sampling rate. This would clearly be impossible if there were say 3 different voltages all present simultaneously.
So, how do these x-spurts determine if an amplifier has a tiny bit of crossover distortion (for example)? I can see it as the residual from my distortion meter, and it is instantly recognisable for what it really is, and I can see the difference when I make a change to a circuit to correct the problem. If I had to rely on my ears (which although getting older, still work quite well), it would take me much longer to identify the problem, and even longer to be certain that it was gone. I'm not talking about the really gross crossover distortion that one gets from an under-biased amp, I am referring to vestiges - minuscule amounts that will barely register on the meter - I use my oscilloscope to see the exact distortion waveform. I suspect that this dilemma is 'solved' by some by simply not using the push-pull arrangement at all, thereby ensuring that power is severely limited, and other distortion is so high that they would not dare to publish the results.
These same x-spurts may wax lyrical about some really grotty single ended triode amp, with almost no power and a highly questionable output transformer, limited frequency response and a damping factor of unity if it is lucky.
Don't get me wrong - I'm not saying that this is a definition of single-ended triode amps (for example); there are some which I am sure sound very nice - not my cup of tea, but 'nice'. I have seen circuits published on the web that I would not use to drive a clock radio speaker (no names, so don't ask), and 'testimonials' from people who have purchased this rubbish, but there are undoubtedly some that do use quality components and probably sound OK at low volume levels.
Sorry if I sound vehement (vitriolic, even), but quite frankly this p****s me off badly. There are so many people waving their 'knowledge' about, and many of them are either pandering to the Magic Market, or talking through their hats.
The whole idea of taking measurements is to ensure that the product meets some quality standard. Once this standard is removed and we are expected to let our ears be the judge, how are we supposed to know if we got what we paid for? If the product turns out to sound 'bad', should we accept this, or perhaps we should listen to it for long enough that we get used to the sound (this will happen - eventually - it's called 'burn-in' by the subjectivists). I am not willing to accept this, and I know that many others feel the same.
Please don't think that I am advocating specsmanship, because I'm not. I just happen to think that consumers are entitled to some minimum performance standard that the equipment should meet (or exceed). I have yet to hear any amplifier with high distortion levels and/or limited bandwidth sound better than a similar amplifier with lower distortion and wider bandwidth. This implies that we compare like with like - a comparison between a nice valve amp and a nasty transistor amp will still show the transistor amp as having better specs, but we can be assured that it will sound worse. In a similar vein, a nice transistor amp compared against a rather poor valve amp may cause some confusion, often due to low damping from the valve amp which makes it easy to imagine that it sounds 'better'.
We need measurements, because they tell us about the things that we often either can't hear, or that may be audible in a way that confuses our senses. Listening tests are also necessary, but they must be properly conducted as a true blind A-B test or the results are meaningless. Sighted tests (where you know exactly which piece of gear you are listening to) are fatally flawed and will almost always provide the expected outcome.
This is an argument that has been going for years, and it seems we are no closer to resolving the dilemma than we ever were. I have worked with all three, and each has its own sonic quality. Briefly, we shall have a look at the differences - this is not an exhaustive list, nor is it meant to be - these are the main points, influenced by my own experiences (and I must admit, prejudices). Please excuse the somewhat random order of the comparisons ...
Valves:
Valves are Voltage to Current Converters, so the output current is controlled by an input voltage. It is necessary to apply the varying output current to a load (the anode resistor or transformer) to derive an output voltage.
- Valves themselves are inherently passably linear, and can operate with no feedback at all within a restricted range, and still provide a high quality signal. The range is usually more than sufficient for preamps, but is pushed to its limits in power amplifiers.
- Relatively low gain per device, meaning that more are needed, or less feedback can be used.
- 'Soft' distortion characteristics, meaning that most of the distortion is low order (including crossover distortion and clipping) - this is not (quite) as obtrusive or fatiguing as 'hard' distortion.
- Distortion onset is gradual, and effectively warns the listener that the limits are being approached by losing clarity, but in a manner that is not too obtrusive.
- Distortion is usually measurable at nearly any power level, but is low order (mainly 2nd and 3rd harmonics - small amounts of additional harmonics are usually also present).
- Limited feedback, mainly due to the fact that the output transformer introduces low and high frequency phase shift, so large amounts of global feedback are generally not possible without oscillation. This results in a (comparatively) limited bandwidth.
- High output impedance, meaning that damping factor in power amps is generally rather poor. Extremely low values of output impedance are very difficult to achieve (although it can be done at considerable extra expense).
- Valves have a perfect dielectric (mainly a vacuum, with some mica), leading to a highly linear Miller capacitance - it is unknown if this contributes any audible benefit.
- Inefficient output stage, allowing the amp to sound louder than it really is on a watt for watt basis. This may sound like a contradiction, but a valve amp has a 'compliant' output, that allows it to provide a larger voltage swing to high impedance loads (such as a loudspeaker driver at resonance).
- Fairly rugged, and can withstand short circuits without damage - BUT open circuits can cause the output transformer to create high flyback voltages that can cause insulation breakdown in the transformer windings or the valve sockets (short circuits are OK, open circuits are bad).
- Usually quite tolerant of difficult loads, such as electrostatic loudspeakers.
- A wonderful nostalgia value, which allows people to accept the shortcomings, and truly believe that the amp really does sound better than a really good solid-state unit. Proper double-blind testing will usually reveal the truth - provided that the solid-state equivalent is modified to match the output impedance of the valve unit!
Transistors (BJTs):

By default, bipolar transistors are Current to Current Converters. This means that they use an input current change to derive an output current change that is greater than the input (therefore amplification occurs). Again, it is necessary to use a resistor or other load to allow an output voltage to be developed. It's worth noting that in some texts you will see that the author insists that transistors are voltage controlled, but I find this to be at odds with reality. I have always worked with them as current controlled devices, and will continue to do so.
- Transistors are also quite linear within a restricted range, but due to the lower operating voltages usually cannot successfully be used without feedback if a very high quality signal is desired, even in preamp stages.
- High to very high gain per device, allowing local feedback to linearise the circuit before the application of global feedback.
- Onset of distortion is sudden and without warning in most feedback topologies.
- Low to very low distortion, provided clipping is not introduced. This creates both the low order harmonics of the valve amp, plus high order harmonics which may be very fatiguing.
- Wide to very wide bandwidth, and low phase shift, largely due to the elimination of the output transformer. The wide bandwidth is obviously an advantage; the phase response is highly debatable as to its overall value to the listener.
- Usually large amounts of global feedback, which is needed to linearise the output stage, especially at the crossover point between output devices (0 Volts) for power amplifiers.
- Completely oblivious to open circuit loads, but must be protected against instant damage with short circuited outputs (open circuits are OK, short circuits are bad - i.e. the opposite of valves).
- The Miller capacitance of transistors has an imperfect dielectric, and varies with applied voltage. This might be the reason that some transistor amps can be seen to oscillate at a specific voltage level (small bursts of oscillation on the waveform, but only above a certain voltage across the device). Tricky.
- Intolerant of difficult loads, unless extensive measures are taken to ensure stability. This can increase complexity quite dramatically.
MOSFETs:

Like valves, MOSFETs are voltage to current converters, and rely on a voltage on the gate to control the output current. As before, a resistor or other load converts the varying current into a voltage. Here I discuss lateral (designed for audio) MOSFETs, not switching types. HEXFETs and similar switching MOSFETs (vertical MOSFETs) are not really suited to linear operation, and have some interesting failure mechanisms just waiting to bite you. So, for lateral MOSFETs ...
- Similar to most of the comments about bipolar transistors, with the following differences:
- Onset of (clipping) distortion is (usually) not quite as savage as transistors, but is much more sudden than valves. This is a very minor difference, and can safely be ignored.
- May not be as linear as valves or bipolar transistors, especially near the cutoff region. Big differences exist between different types (lateral/vertical).
- More efficient than valves, but not as efficient as bipolar transistors. There will always be less output voltage swing available from a MOSFET amp than a bipolar transistor amp (for the same supply voltage), unless an auxiliary power supply is used.
- Gain is (usually) higher than valves, but lower than bipolar transistors - limiting the ability to apply local feedback, and even overall (global) feedback may not produce distortion figures as good as bipolar transistors - especially with vertical MOSFETs.
- Low distortion (lateral types), but may require more gain in the preceding stages to allow sufficient feedback to eliminate crossover distortion.
- Very wide bandwidth (better than bipolar transistors), allowing less compensation and full power operation up to 100 kHz in some amps - the value of this is debatable.
- More rugged than bipolar transistors, and do not suffer from second breakdown effects - fuses can be used for short circuit protection, and no open circuit protection is needed.
- Reasonably tolerant of difficult loads without excessive circuit complexity.
To complicate matters, there are two main types of MOSFET as stated above - lateral and vertical. This applies to the internal construction. Lateral MOSFETs are well suited to audio (see Project 101), while vertical (e.g. HEXFETs) are designed for high speed switching, and are not really suitable for audio. Despite this, it is possible to make an amplifier using HEXFETs that performs well, and this has been achieved by many hobbyists and manufacturers.
Thermal stability is critical with vertical MOSFETs, and a very good bias servo is essential. Because of their high transconductance (and wide parameter spread), when used in parallel for audio they need to be matched for gate threshold voltage. If this isn't done, the device with the lowest gate threshold will take most of the current, causing it to get hotter, meaning that it will take even more of the current. This will lead to output stage failure.
Lateral MOSFETs do not have this problem, because they have a relatively high RDS-On (on resistance), and they share current well despite gate threshold voltage differences.
Because of the differences outlined above it is very important to compare like with like, since each has its own strengths and problems. Also, each of the solid state amp types has its niche area, where it will tend to outperform the other, regardless of specifications. The valve amp is the odd man out here, as it is far more likely to have devoted fans who would use nothing else - most solid state amp users are (or should be) a pragmatic lot, using the most appropriate configuration for the intended application.
At the time of writing, there is no such thing as the much sought after (but elusive) 'straight wire with gain'. But wait - there's more ....
Another aspect of amplifier design is slew rate. Slew-rate simply refers to the rate of change of voltage in a given time. It's typically quoted in volts per microsecond (V/µs). This term and (more to the point) its effects are not well understood, and the possible effects are often taken to extremes to 'prove' a point. In reality, no competent amplifier will show any sign of 'slew induced distortion' or undesirable behaviour with any normal music signal. Virtually any amplifier can be forced into slew-rate limiting if pushed to a high enough frequency at full power.
It has been claimed by many writers on the subject that a slew-rate limited amplifier will introduce transient intermodulation distortion, or TIM (aka TID - transient induced distortion). In theory, this is perfectly true, provided that the slew rate is sufficiently low as to be within the audible spectrum (i.e. below 20 kHz), and the program material has sufficient output voltage at high frequencies to cause the amplifier to limit in this fashion.
The following nomograph is helpful in allowing you to determine the required slew rate of any amplifier, so that it can reproduce the required audio bandwidth without introducing distortion components as a result of not being fast enough.
Figure 14 - Slew Rate Nomograph
To use this nomograph, first select the maximum frequency on the top row. Let's assume 30kHz as an example. Next, select the actual output voltage (peak, which is RMS × 1.414) that the amplifier must be able to reproduce. For a 100W 8 Ohm amp, this is 28V RMS, or 40V peak. Now draw a line through these two points as shown, and read the slew rate off the bottom row. For the example, this is 8V/µs. This is in fact far in excess of what is really needed, since it is not possible for an amp reproducing music to have anywhere near full power at 30kHz.
By 20kHz, our 100W amp will need an output of perhaps 10W (typically much less), and this is only about 12V peak. Using the nomograph with this data reveals that a slew rate of about 2V/µs is quite sufficient. Such an amp will go into what is known as slew-rate limiting at full power with frequencies above 10kHz or so, converting the input sinewave into a triangular wave whose amplitude decreases with increasing frequency.
Some claim that this is audible, and although this is largely subjective it can be measured by a variety of means. That a typical audio signal is a complex mixture of signals is of no real consequence, because an amp has no inherent concept of 'complex' any more than it has an opinion about today's date or the colour of your knickers. At any given point in time, there is an instantaneous value of input voltage that must be increased in amplitude and provide the current needed to drive the loudspeaker. As long as this input voltage does not change so fast that the amplifier cannot keep up with the change then little or no degradation should occur, other than (hopefully) minor non-linearities that represent distortion.
Although this is a fine theory, there seems to be much entrenched prejudice against 'slow' amplifiers. Whether they sound different from one that is not constrained by slew rate limiting within the full audible range remains debatable. These differences are easily measured, but may be irrelevant when the system is used for music, which simply does not have very fast rise or fall times.
As shown above, the slew rate of an amplifier is usually measured in Volts / microsecond, and is a measure of how fast the amplifier's output can respond to a rapidly changing input signal. Few manufacturers specify slew rate these days (mainly because few buyers understand what it is), but it is an important aspect of an amplifier's design. It's also important to understand that music never contains any signals that produce full power at 10 or 20kHz. It's generally accepted that the amplitude falls at ~6dB/octave above 1-2kHz, so a 100W amp with a peak output of 40V won't be called upon to provide much above 5V (peak) at 20kHz. There will always be exceptions, and it's safer to assume and plan for at least 10V peak at 20kHz. More doesn't hurt anything, but usually doesn't make an audible difference (assuming a proper double blind test of course).
As can be seen from the above, for an amplifier (of any configuration) to reproduce 28V RMS at 20 kHz (about 100W / 8 Ohms) requires a slew rate of about 5 V/µs. This is to say that the output voltage can change (in either direction) at the rate of 5 Volts in one microsecond. This is not especially fast, and as should be obvious, is dependent upon output voltage. A low power amp need not slew as fast as a higher powered amp. There is no real requirement for any amp to be able to slew much faster than this, as there is a significantly large margin provided already. This can be calculated or measured.
Doubling the amplifier's output voltage (four times the power) requires that the slew rate doubles, and vice versa, so a 400W amp needs a slew rate of about 10 V/µs, while a 25W amp only needs 2.5 V/µs. This is a very good reason to use a smaller amplifier for tweeters in a triamped system, since it is much easier to achieve a respectable slew rate when vast numbers of output devices are not required.
Essentially, if the amplifier's output cannot respond to the rapidly changing input signal, an error voltage is developed at the long-tailed pair stage, which tries to correct the error. The LTP is an amplifier, but more importantly, an error amplifier, whose sole purpose is to keep both of its inputs at the same voltage. This is critical to the operation of a solid state amp, and the LTP output will generally be a very distorted voltage and current waveform, producing a signal that is the exact opposite of all the accumulated distortions within the remainder of the amp (this also applies to opamps).
The result is (or is supposed to be) that the signal applied to the inverting input is an inverted exact replica of the input signal. Were this to be achieved in practice, the amp would have no distortion at all. In reality, there is always some small difference, and if the Class-A driver or some other stage enters (or approaches) the slew rate limited region of operation, the error amp (LTP) can no longer compensate for the error.
Once this happens distortion rises, but more importantly, the input signal is exceeding the capabilities of the amplifier, and the intermodulation products rise dramatically. Intermodulation distortion is characterised by the fact that a low frequency signal modulates the amplitude (and / or shape) of a higher frequency signal, generating additional frequencies that were not present in the original signal. This also occurs when an amplifier clips, or if it has measurable crossover distortion.
Sounds like ordinary distortion, doesn't it? That too creates frequencies that were not in the original, but the difference is that harmonic distortion creates harmonics (hence the name), whereas intermodulation distortion creates frequencies that have no harmonic relationship to either of the original frequencies. Rather, the new frequencies are the sum and difference of the original two frequencies. (This effect is used extensively in radio, to create the intermediate frequency from which the audio, video or other wanted signal can be extracted.) The term 'harmonic' basically can be translated to 'musical', and 'non-harmonic' is mathematically derived, but not musically related ... if you see what I mean.
Whenever the LTP (error amplifier) loses control of the signal, intermodulation products will be generated, so the bandwidth of an amplifier must be wide enough to ensure that this cannot happen with any normal audio input signal. There is nothing wrong or difficult about this approach, and it is quite realisable in any modern design. Although unrealistic from a musical point of view, it is better if an amplifier is capable of reproducing full power at the maximum audible frequency (20 kHz) than if it starts to go into slew rate limiting at some lower frequency.
The reason I say it is unrealistic musically is simply because there is no known instrument - other than a badly set up synthesiser - that is capable of producing any full power harmonic at 20 kHz, so in theory, the amp does not have to be able to reproduce this. In reality, inability to reproduce full power at 20 kHz means that the amp might suffer from some degree of transient intermodulation distortion with some program material. Or it might not.
This is not a problem that affects simple amps with little or no feedback - they generate enough harmonic distortion to more than make up for the failings of more complex circuits with lots of global feedback. This fact tends to annoy the minimalists, who are often great believers in no feedback under any circumstance, which relegates them to listening to equipment that would have been considered inferior in the 1950s.
If preferred, you can calculate the slew rate of any signal at any amplitude. Use the formula ...
ΔV / Δt = 2π × f × VPeak
ΔV / Δt is the slew rate (change of voltage vs change of time). VPeak is the peak voltage of the sinewave. For example, if you use a voltage of 40V (peak) and a frequency of 20kHz, you get 5,026,548V/s, which is (close enough to) 5.03V/µs.
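The formula is easy to wrap in a couple of lines of code if you want to experiment with other voltages and frequencies. This is only a sketch (the function name is mine); the unit conversion to V/µs is the only addition to the formula above:

```python
import math

# Maximum slew rate of a sinewave: dV/dt(max) = 2 * pi * f * Vpeak
def slew_rate_v_per_us(freq_hz, v_peak):
    """Required slew rate in V/us for a sinewave of given frequency and peak voltage."""
    return 2 * math.pi * freq_hz * v_peak / 1e6  # divide by 1e6: V/s -> V/us

# The text's example: 40 V peak (100 W / 8 ohms) at 20 kHz
print(f"{slew_rate_v_per_us(20e3, 40):.2f} V/us")  # prints 5.03 V/us
```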
Few sensible people would argue that measurements of frequency response are unimportant or irrelevant, and this is one of the simplest measurements to take on an amplifier. Again, the subjectivists would have it that these fail to take into account some mysterious area of our brain that will compensate for a restricted response, and allow us to just enjoy the experience of the sound system. This is true - we will compensate for diminished (or deranged) frequency response, but it need not be so.
If you listen to a clock radio for long enough, your brain will think that this is normal, and will adjust itself accordingly. Imagine your surprise when you hear something that actually has real low and high frequencies to offer - the first reaction is that there is too much of everything, but again, the brain will make the required allowances and this will sound normal after a time.
There are so many standard measurements on amplifiers that are essential to allow us to make an informed judgement (is this amp even worth listening to?). I really object to the attitude that "it does not matter what the measurements say, it sounds great". In reality this is rarely the case - if it measures as disgusting, then it will almost invariably sound disgusting. There is no place for hi-fi equipment that simply does not meet some basic standards - and I have never heard an amp that looked awful on the oscilloscope, measured as awful on my distortion meter, but sounded good - period. I have heard some amps that fall into that category that sound 'interesting' - not necessarily bad, but definitely not hi-fi by any stretch of the imagination. To the dyed-in-the-wool subjectivist, it seems that 'different' means 'better', regardless of any evidence one way or another.
There are some designs that should simply be avoided. Two in particular are shown here, but this doesn't mean that there aren't others as well. The two shown below suffer from a number of problems, with the biggest issue being thermal stability. This is by no means all though - the first to avoid is shown in Figure 15, and includes a compound (Sziklai) pair for comparison. As you can see, the 'output stage with gain' (output 1) simply breaks the feedback loop within the compound pair and adds resistors. The gain is directly proportional to the resistor divider ratio, so the gain is 3.2 with 220 ohms and 100 ohms as shown. The problem is that this applies to DC as well as AC, so the stage amplifies its own thermal instability. Because of the relatively high output impedance, the actual gain will be less than calculated.
Figure 15 - Output Stage with Gain (Sziklai Pair for Comparison)
Why would anyone bother? The stage has the advantage of gain, so it can be driven directly by an opamp whose output level would normally be too low to be useful. Several amplifiers have been built using this circuit over the years, and all those I have seen have been thermally unstable, and some also have high frequency instability issues. Because the output stage local feedback is reduced by the amount of gain used, distortion is significantly higher than with a conventional compound pair (for example). In the above circuits, the stage with gain has an open loop distortion of 4%, while the compound pair stage has distortion less than 0.1%. This was simulated using an 8 ohm load - in reality, the distortion difference is usually greater than this, with the gain version showing even higher distortion. A vast amount of negative feedback is needed to make the circuit linear enough to be usable. As noted above, output impedance is also much higher than that of the compound pair.
If the circuit is driven by an opamp, the opamp's high gain helps to linearise the output stage, but high frequency instability remains an issue. It can be solved, but usually requires several HF stability networks. Such arrangements are usually easy to coax into oscillation because they tend to have a poor phase margin (the difference between the actual phase shift and 180°, where the amp will oscillate).
There is no simple cure for the thermal instability though. A single transistor cannot compensate for the quiescent current shift, and a Darlington pair overcompensates. While it is certainly possible to come up with a composite circuit that will work, the complexity is not warranted for an output stage that doesn't perform well at the best of times.
Another travesty was unleashed many years ago, and fortunately I've not seen it re-surface for many years. I am almost unwilling to post the circuit, lest someone think it's a good idea. It isn't a good idea, and never was. Again, thermal instability was a major problem, and HF instability was also common. The idea was to use an opamp's supply pins to drive output transistors. By loading the opamp, the supply current varies from a couple of milliamps at idle, up to perhaps 20-30mA (depending on the opamp). An example circuit of this disaster is shown below.
Figure 16 - Opamp Based Power Amplifier
If you happen to see this circuit (or any of its variations) anywhere, avert your eyes immediately. I recall messing around with it some time in the 1970s, and while it was (barely) possible to make it reasonably stable (against HF instability), the only way to achieve thermal stability is to use relatively large resistors in the output transistor emitters. This limits the output power, but it is capable of driving headphones, provided you can live with its other failings, which are many and varied. Since there are so many circuits that outperform it (including cheap and cheerful 'power opamps' - IC power amps), there is no reason to consider it for anything other than your own amusement.
Note that the values shown on these circuits are for example only. I cannot guarantee that the opamp based amp will even work as shown - the circuit is there only so you can see the general arrangement. Since I strongly suggest that you stay well clear of this topology, I do not propose to waste time to ensure that the circuit will function as shown.
For further reading, I can recommend The Self Site, and in particular 'Science and Subjectivism in Audio' and also my own article on the subject Cables, Interconnects & Other Stuff - The Truth. There is also an article called Amplifier Sound - What Are The Influences? that goes a little deeper into the measured and subjective performance of amplifiers, and suggests a couple of new tests that might be applied.
Elliott Sound Products - AN-001
To be able to understand much of the following, the basic rules of opamps need to be firmly embedded in the skull of the reader. I came up with these many years ago, and - ignoring small errors caused by finite gain, input and output impedances - all opamp circuits make sense once these rules are understood. They are also discussed in the article Designing With Opamps in somewhat greater detail. Highly recommended if you are in the least bit unsure.
The two rules are as follows ...

1. An opamp will attempt to make both inputs exactly the same voltage (via the feedback path)
2. If it cannot achieve #1, the output will assume the polarity of the most positive input
These two rules describe everything an opamp does in any circuit, with no exceptions ... provided that the opamp is operating within its normal parameters. This means power supply voltage(s) must be within specifications, signal voltage is within the allowable range, and load impedance is equal to or greater than the minimum specified. The signal frequency must also be low enough to ensure that the opamp can perform normally for the chosen gain. For most cheap opamps, a gain of 100 with a frequency of 1kHz should be considered the maximum allowable, since the opamp's open loop gain may not be high enough to accommodate higher gain or frequency.
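The two rules can be illustrated with a tiny behavioural model. This is a sketch only - the function names and the crude clipping model are my own invention, not anything from the article - but it shows rule 2 (saturation towards the most positive input when there is no feedback) and rule 1 (feedback forcing both inputs equal, here via the closed-form result for a unity-gain follower).

```python
# Toy behavioural model of the two opamp rules (illustrative only).

def opamp_output(v_plus, v_minus, v_supply=15.0, gain=1e6):
    """Open-loop behaviour: a huge gain multiplied by the input
    difference, clipped about 1V inside the supply rails.  With no
    feedback this demonstrates rule 2: the output saturates towards
    the polarity of the most positive input."""
    v_out = gain * (v_plus - v_minus)
    return max(-(v_supply - 1.0), min(v_supply - 1.0, v_out))

def follower_output(v_in, gain=1e6):
    """Unity-gain buffer (output fed back to the inverting input).
    Solving v_out = gain * (v_in - v_out) demonstrates rule 1: the
    output settles so that both inputs are (almost) equal."""
    return v_in * gain / (1 + gain)

print(opamp_output(0.01, 0.0))    # +14.0 : saturated positive (rule 2)
print(opamp_output(-0.01, 0.0))   # -14.0 : saturated negative (rule 2)
print(follower_output(5.0))       # ~5.0  : inputs forced equal (rule 1)
```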
Armed with these rules and a basic understanding of Ohm's Law and analogue circuitry, it is possible to figure out what any opamp circuit will do under all normal operating conditions. Needless to say, the rules no longer apply if the opamp itself is faulty, or is operating outside its normal parameters (as discussed briefly above).

The choice of opamp determines the highest frequency that can be accommodated. While many of the circuits are shown using a TL072 or similar, these are very limiting. The highest frequency will be well above the audio band, but if you need to rectify higher frequencies you will need something faster. A TL072 will get to about 150kHz, but if you need to rectify (say) 500kHz or so, you need an opamp with a much higher upper frequency. The LM318 is a reasonable candidate and is fairly cheap. These are rated for a unity gain frequency (gain bandwidth product, or GBW) of 15MHz with a 50V/µs slew rate (the TL07x is 3MHz and 13V/µs).

It's worthwhile reading the article Opamp Bandwidth Vs. Gain And Slew Rate, which goes into detail as to how these factors influence the frequency response of a circuit. If you need to go even higher in frequency, consider using an LM4562 (GBW of 55MHz and 20V/µs). Selection of a suitable candidate isn't always easy, but you don't need to be concerned if you're only interested in the audio range (20Hz - 20kHz).
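The datasheet numbers above translate into usable-frequency estimates via two standard formulas: closed-loop bandwidth ≈ GBW / gain, and full-power (slew-limited) bandwidth f = SR / (2π × Vpeak). A quick sketch - the device figures are the ones quoted above, the helper names are mine:

```python
import math

def gain_limited_bw_hz(gbw_hz, closed_loop_gain):
    """Approximate closed-loop -3dB bandwidth: GBW / gain."""
    return gbw_hz / closed_loop_gain

def slew_limited_freq_hz(slew_v_per_us, v_peak):
    """Full-power bandwidth: f = SR / (2 * pi * Vpeak)."""
    return (slew_v_per_us * 1e6) / (2 * math.pi * v_peak)

# TL072 (3MHz GBW, 13V/us): at a gain of 100, only ~30kHz of bandwidth
# remains, which is why that's about the sensible limit for cheap opamps.
print(gain_limited_bw_hz(3e6, 100))          # 30000.0
# Slew limiting at 10V peak: ~207kHz (TL072) vs ~796kHz (LM318, 50V/us).
print(round(slew_limited_freq_hz(13, 10)))
print(round(slew_limited_freq_hz(50, 10)))
```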
There are many applications for precision rectifiers, and most are suitable for use in audio frequency circuits, so I thought it best to make this the first ESP Application Note. While some of the existing projects in the audio section have a rather tenuous link to audio, this information is more likely to be used for instrumentation purposes than pure audio applications. There are exceptions of course.

The precision rectifier is not commonly used to drive analogue meter movements, as there are usually much simpler methods of driving floating loads such as meters. Precision rectifiers are more common where some degree of post processing is needed, feeding the side chain of compressors and limiters, or driving digital meters.

There are several different types of precision rectifier, but before we look any further, it is necessary to explain what a precision rectifier actually is. In its simplest form, a half wave precision rectifier is implemented using an opamp, and includes the diode in the feedback loop. This effectively cancels the forward voltage drop of the diode, so very low level signals (well below the diode's forward voltage) can still be rectified with minimal error.

The most basic form is shown in Figure 1, and while it does work, it has some serious limitations. The main one is speed - it will not work well with high frequency signals. To understand the reason, we need to examine the circuit closely. This knowledge applies to all subsequent circuits, and explains the reason for the apparent complexity.
Figure 1 - Basic Precision Half Wave Rectifier
For a low frequency positive input signal, 100% negative feedback is applied when the diode conducts. The forward voltage is effectively removed by the feedback, and the inverting input follows the positive half of the input signal almost perfectly. When the input signal becomes negative, the opamp has no feedback at all, so the output pin of the opamp swings negative as far as it can. Assuming 15V supplies, that means perhaps -14V on the opamp output.
When the input signal becomes positive again, the opamp's output voltage will take a finite time to swing back to zero, then to forward bias the diode and produce an output. This time is determined by the opamp's slew rate, and even a very fast opamp will be limited to low frequencies - especially for low input levels. The test voltage for the waveforms shown was 20mV at 1kHz. Although the circuit does work very well, it is limited to relatively low frequencies (less than 10kHz) and only becomes acceptably linear above 10mV or so (opamp dependent).
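The dead time can be estimated from the slew rate alone. Assuming the output sits at -14V and must slew back to roughly one diode drop (+0.6V) before the diode conducts again (both figures as described above), a TL07x at 13V/µs needs over a microsecond - already significant against a 10kHz half-cycle of 50µs. A rough sketch of the arithmetic:

```python
def recovery_time_us(slew_v_per_us, v_rail=-14.0, v_diode=0.6):
    """Time for the opamp output to slew from the negative rail back
    up to the diode's conduction threshold."""
    return (v_diode - v_rail) / slew_v_per_us

print(round(recovery_time_us(13.0), 2))   # ~1.12 us for a TL07x
print(round(recovery_time_us(50.0), 2))   # ~0.29 us for an LM318
```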
Note the oscillation at the rectified output. This is (more or less) real, and was confirmed with an actual (as opposed to simulated) circuit. It is the result of the opamp becoming open-loop with negative inputs. In most cases it is not actually a problem. The large voltage swing is a problem though.
Figure 2 - Rectified Output and Opamp Output
Figure 2 shows the output waveform (left) and the waveform at the opamp output (right). The recovery time is obvious on the rectified signal, but the real source of the problem is quite apparent from the huge voltage swing before the diode. While this is of little consequence for high level signals, it causes considerable non-linearity for low levels, such as the 20mV signal used in these examples.
The circuit is improved by reconfiguration, as shown in Figure 3. The additional diode prevents the opamp's output from swinging to the negative supply rail, and low level linearity is improved dramatically. A 2mV (peak) signal is rectified with reasonably good accuracy. Although the opamp still operates open-loop at the point where the input swings from positive to negative or vice versa, the range is limited by the diode and resistor. Recovery time is therefore a great deal shorter.
Figure 3 - Improved Precision Half Wave Rectifier
This circuit also has its limitations. The input impedance is now determined by the input resistor, and of course it is more complicated than the basic version. It must be driven from a low impedance source. Not quite as apparent, the Figure 3 circuit also has a defined output load resistance (equal to R2), so if this circuit were to be used for charging a capacitor, the cap will also discharge through R2. Although it would seem that the same problem exists with the simple version as well, R2 (in Figure 1) can actually be omitted, thus preventing capacitor discharge. Likewise, the input resistor (R1) shown in Figure 1 is also optional, and is needed only if there is no DC path to ground.
Figure 4 shows the standard full wave version of the precision rectifier. This circuit is very common, and is pretty much the textbook version. It has been around for a very long time now, and I would include a reference to it if I knew where it originated. The tolerances of R1 to R5 are critical for good performance, and all five resistors should be 1% or better. Note that the diodes are connected to obtain a positive rectified signal. The second stage inverts the signal polarity. To obtain improved high frequency response, the resistor values should be reduced.
Figure 4 - Precision Full Wave Rectifier
This circuit is sensitive to source impedance, so it is important to ensure that it is driven from a low impedance, such as an opamp buffer stage. Input impedance as shown is 6.66k, and any additional series resistance at the input will cause errors in the output signal. The input impedance is linear. As shown, and using TL072 opamps, the circuit of Figure 4 has good linearity down to a couple of mV at low frequencies, but has a limited high frequency response. Use of high speed diodes, lower resistance values and faster opamps is recommended if you need greater sensitivity and/or higher frequencies.
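The 6.66k figure is consistent with the usual textbook values for this circuit, where the input drives one resistor into the first opamp's virtual earth and another into the second stage's summing node. Assuming 10k and 20k for those two paths (my assumption - the figure itself isn't reproduced here), the source simply sees the two in parallel:

```python
def parallel(*resistors):
    """Equivalent resistance of resistors in parallel."""
    return 1.0 / sum(1.0 / r for r in resistors)

# Both resistors end at virtual-earth (0V) nodes, so they appear in
# parallel to the source.  10k || 20k = 6.67k, matching the quoted 6.66k.
print(round(parallel(10e3, 20e3)))
```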
A little-known variation of the full wave rectifier was published by Analog Devices, in Application Brief AB-109 [ 1 ]. In the original, a JFET was used as the rectifier for D2, although this is not necessary if a small amount of low level non-linearity is acceptable. The resistors marked with an asterisk (*) should be matched, although for normal use 1% tolerance will be acceptable. C1 may be needed to prevent oscillation.
Figure 5 - Original Analog Devices Circuit
It was pointed out in the original application note that the forward voltage drop for D2 (the FET) must be less than that for D1, although no reason was given. As it turns out, this may make a difference for very low level signals, but appears to make little or no difference for sensible levels (above 20mV or so).
For most applications, the circuit shown in Figure 6 will be more than acceptable. Linearity is very good at 20mV, but speed is still limited by the opamp. To obtain the best high frequency performance, use a very fast opamp and reduce the resistor values.
Figure 6 - Simplified Version of the AD Circuit
It is virtually impossible to make a full wave precision rectifier any simpler, and the circuit shown will satisfy the majority of low frequency applications. Where very low levels are to be rectified, it is recommended that the signal be amplified first. While the use of Schottky (or germanium) diodes will improve low level and/or high frequency performance, it is unreasonable to expect perfect linearity from any rectifier circuit at extremely low levels. Operation up to 100kHz or more is possible by using fast opamps and diodes. R1 is optional, and is only needed if the source is AC coupled, so extremely high input impedance (with no non-linearity) is possible. C1 may be needed to prevent oscillation.
The simplified version shown above (Figure 6) is also found in a Burr-Brown application note [ 3 ].

Purely by chance, I found the following variant in a phase meter circuit. This version is used in older SSL (Solid State Logic) mixers, as part of the phase correlation meter. This circuit exists on the Net in a few forum posts and a site where several SSL schematics are re-published. The original drawing I found is dated 1984. It's also referenced in a Burr-Brown paper from 1973 and an electronics engineering textbook [ 5, 6 ].
Figure 6A - Another Version of the AD Circuit
While it initially looks completely different, that's simply because of the way it's drawn (I copied the drawing layout of the original). This version is interesting, in that the input is not only inverting, but provides the opportunity for the rectifier to have gain. The inverting input is of no consequence (it is a full wave rectifier after all), but it does mean that the input impedance is lower than normal ... although you could make all resistor values higher of course. Input impedance is equal to the value of R1, and is linear as long as the opamp is working well within its limits.
R6 isn't used in the SSL circuit I have, and while the circuit works without it, there can be a significant difference between the rectified positive and negative parts of the input waveform. Without R6, the loading on D2 is less than that on D1, causing asymmetrical rectification. This resistor is included in the Figure 6 version, and the need for it was found as I was researching precision rectifiers for a project. It's not a problem with normal silicon small-signal diodes (e.g. 1N4148), but it becomes very important if you use germanium or Schottky diodes due to their higher leakage.

If R1 is made lower than R2-R5, the circuit has gain. If R1 is higher than R2-R5, the circuit can accept higher input voltages because it acts as an attenuator. For example, if R1 is 1k, the circuit has a gain of 10, and if 100k, the gain is 0.1 (an attenuation of 10). All normal opamp restrictions apply, so if a high gain is used, frequency response will be affected. C1 is optional - you may need to include it if the circuit oscillates. The value will normally be between 10pF and 100pF, depending on the speed you need and circuit layout.
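The gain relationship described above is just the ordinary inverting-amplifier ratio. Assuming R2-R5 are all 10k (my assumption, consistent with the example gains quoted), a quick check:

```python
def rectifier_gain(r1, r_others=10e3):
    """Gain of the Figure 6A rectifier: the ratio of the (equal)
    R2-R5 value to the input resistor R1."""
    return r_others / r1

print(rectifier_gain(1e3))     # 10.0 - R1 = 1k gives a gain of 10
print(rectifier_gain(100e3))   # 0.1  - R1 = 100k attenuates by 10
```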
One interesting result of using the inverting topology is that the input node is a 'virtual earth', which enables the circuit to sum multiple inputs. R1 can be duplicated to give another input, and this can be extended further. The original SSL circuit used two of these rectifiers with four inputs each. Remember that this is the same as operating the first opamp with a gain of four, so high frequency response may be affected without you realising it.

The circuits shown in Figures 6 and 6A are the simplest high performance full wave rectifiers I've come across, and are the most suitable for general work at audio frequencies. In most applications you'll see the Figure 4 circuit, because it's been around for a long time and most designers know it well. However, it is definitely not the best performer, and has no advantages over the simpler Figure 6 and 6A alternatives; it uses more parts and has a comparatively low input impedance.
I've been advised by a reader that Neve also used a similar circuit in their BA374 PPM drive circuit. In the interests of consistency I've shown the resistors (R1-R5 & R8) as 10k, where 51k was used in the original circuit. This doesn't change the way the circuit works, but it increases resistive loading on the opamps (which doesn't affect low-frequency operation). The amended schematic is shown below.
Figure 6B - Neve PPM Rectifier Circuit
The R/C network (R6, R7 and C1) sets the ballistics of the meter, which is determined by the attack and release times. The output of the rectifier is processed further in the BA374 circuit to provide a logarithmic response which allows the meter scale to be linear. This isn't shown because it's not relevant here. Unfortunately, it's extremely difficult to determine who came up with the idea first. The Neve schematic I was sent is dated 1981 if that helps.
A simple precision rectifier circuit was published by Intersil [ 2 ]. This is an interesting variation, because it uses a single supply opamp but still gives full-wave rectification, with both input and output earth (ground) referenced. Unfortunately, the specified opamp is not especially common, although other devices could be used. The CA3140 is a reasonably fast opamp, having a slew rate of 7V/µs. I will leave it to the reader to determine suitable types (other than that suggested below). The essential features are that the two inputs must be able to operate at below zero volts (typically -0.5V), and the output must also swing to close to zero volts.
Figure 7 - Original Intersil Precision Rectifier Circuit
During the positive cycle of the input, the signal is fed directly through the feedback network to the output. R3 actually consists of R3 itself, plus the set value of VR2. The nominal value of the pair is 15k, and VR2 can usually be dispensed with if precision resistors are used (R3 and VR2 are replaced by a single 15k resistor).

This gives a transfer function of ...

    Gain = 1 / ( 1 + (( R1 + R2 ) / R3 )) ... 0.5 with the values shown above
1V input will therefore give an output voltage of 0.5V. During this positive half-cycle of the input, the diode disconnects the op-amp output, which is at (or near) zero volts. Note that the application note shows a different gain equation which is incorrect. The equation shown above works.
During a negative half-cycle of the input signal, the CA3140 functions as a normal inverting amplifier with a gain equal to -( R2 / R1 ) ... 0.5 as shown. Since the inverting input is a virtual earth point, during a negative input it remains at or very near to zero volts. When the two gain equations are equal, the full wave output is symmetrical. Note that the output is not buffered, so it should only be connected to a high impedance stage, with an impedance much higher than R3.
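The two half-cycle gains of the Figure 7 circuit can be checked numerically. R3 is the 15k quoted above; R1 = 10k and R2 = 5k are my assumptions, chosen because they satisfy both gain expressions at 0.5 (the schematic values aren't reproduced in the text):

```python
R1, R2, R3 = 10e3, 5e3, 15e3   # R1, R2 assumed; R3 = 15k as quoted

# Positive half-cycle: signal passes through the feedback network.
gain_pos = 1 / (1 + (R1 + R2) / R3)
# Negative half-cycle: ordinary inverting amplifier (magnitude shown).
gain_neg = R2 / R1

print(gain_pos, gain_neg)   # 0.5 0.5 - equal gains, symmetrical output
```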
Figure 8 - Modified Intersil Circuit Using Common Opamp
Where a simple, low output impedance precision rectifier is needed for low frequency signals (up to perhaps 10kHz as an upper limit), the simplified version above will do the job nicely. It does require an input voltage of at least 100mV because there is no DC offset compensation. Expect around 30mV DC at the output with no signal. Because the LM358 is a dual opamp, the second half can be used as a buffer, providing a low output impedance. The second half of the opamp can be used as an amplifier if you need more signal level. Minimum suggested input voltage is around 100mV peak (71mV RMS), which will give an average output voltage of 73mV. Higher input voltages will provide greater accuracy, but the maximum is a little under 10V RMS with a 15V DC supply as shown. The LM358 is not especially fast, but is readily available at low cost.
Limitations: Note that the input impedance of this rectifier topology is non-linear. The impedance presented to the driving circuit is very high for positive half-cycles, but only 10k for negative half-cycles. This means that it must be driven from a low impedance source - typically another opamp. This increases the overall complexity of the final circuit. Note that symmetry can be improved by changing the value of R3. It can be made adjustable by using a 20k trimpot (preferably multi-turn). This isn't necessary unless your input voltage is less than 100mV, and the optimum setting depends on the signal voltage.

An interesting variation was shown in a Burr-Brown application note [ 3 ]. This rectifier operates from a single supply, but accepts a normal earth (ground) referenced AC input. The only restriction is that the incoming peak AC signal must be below the supply voltage (typically +5V for the OPA2337 or OPA2340). The opamps used must be rail-to-rail, and the inputs must also accept a zero volt signal without causing the opamp to lose control.

The circuit is interesting for a number of reasons, not the least being that it uses a completely different approach from most of the others shown. The rectifier is not in the main feedback loop like all the others, but uses an ideal diode (created by U1B and D1) at the non-inverting input, and this is outside the feedback loop.
Figure 9 - Burr-Brown Circuit Using Suggested Opamp
For a positive-going input signal, the opamp (U1A) can only function as a unity gain buffer, since both inputs are driven positive. Both the non-inverting and inverting inputs have an identical signal, a condition that can only be achieved if the output is also identical. If the output signal attempted to differ, that would cause an offset at the inverting input which the opamp will correct. It is worth remembering my opamp rules described at the beginning of this app. note.
For a negative-going input signal, the ideal diode (D1 and U1B) prevents the non-inverting input from being pulled below zero volts. Should this happen, the opamp could no longer function normally, because the input voltages would be outside normal operating conditions. The opamp (U1A) now functions as a unity gain inverting buffer, with the inverting input maintained at zero volts by the feedback loop. If -10µA flows in R1, the opamp will ensure that +10µA flows through R2, thereby maintaining the inverting input at 0V as required.
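Behaviourally, the stage flips between two unity-gain modes, which makes the overall transfer function simply the absolute value of the input. A minimal model of the two modes described above (not a circuit simulation):

```python
def bb_rectifier(v_in):
    """Figure 9 behaviour: unity-gain follower for positive inputs,
    unity-gain inverter once the ideal diode clamps the non-inverting
    input at 0V for negative inputs."""
    if v_in >= 0.0:
        return v_in      # follower mode: both inputs ride the signal
    return -v_in         # inverter mode: +input held at 0V

for v in (-2.0, -0.05, 0.0, 1.5):
    assert bb_rectifier(v) == abs(v)
print("full-wave output equals |Vin|")
```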
Limitations: Input impedance is non-linear, having an almost infinite impedance for positive half-cycles, and a 5k input impedance for negative half-cycles. The input must be driven from an earth (ground) referenced low impedance source. Capacitor coupled sources are especially problematical, because of the widely differing impedances for positive and negative going signals. The maximum source resistance for a capacitor-coupled signal input is 100 ohms for the circuit as shown (one hundredth of the resistor values used for the circuit), and preferably less. The capacitance is selected for the lowest frequency of interest.

This rectifier is something of an oddity, in that it is not really a precision rectifier, but it is full wave. It is an interesting circuit - sufficiently so that it warranted inclusion even if no-one ever uses it. This rectifier was used as part of an oscillator [ 4 ], and is interesting because of its apparent simplicity and wide bandwidth, even with rather pedestrian opamps.

A simulation using TL072 opamps indicates that even with a tiny 5mV peak input signal (3.5mV RMS) the frequency response extends well past 10kHz, but serious amplitude non-linearity can be seen with low level signals. The original article didn't even mention the rectifier, and no details were given at all. However, I have been able to determine the strengths and weaknesses by simulation. Additional weaknesses may show up in use, of course. A reader has since pointed out something I should have seen (but obviously did not) - R3 should not be installed. Without R3, linearity is far better than expected.

It's not known why R3 was included in the original JLH design, but in the case of an oscillator stabilisation circuit it's a moot point. The circuit will always have more or less the same input voltage, and voltage non-linearity isn't a problem.
Figure 10 - Simple Precision Full Wave Rectifier
One thing that is absolutely critical to the sensible operation of the circuit at low signal levels is that all diodes must be matched, and in excellent thermal contact with each other. The actual forward voltage of the diodes doesn't matter, but all must be identical. The lower signal level limit is determined by how well you match the diodes and how well they track each other with temperature changes.
The first stage allows the rectifier to have a high input impedance (R1 is 10k as an example only). Nominal gain as shown is 1 (with R3 shorted). R3 was included in the original circuit, but is actually a really bad idea, as it ruins the circuit's linearity. Without it, the circuit is very linear over a 60dB range. This is more than enough for any analogue measurement system.

Limitations: Linearity is very good, but the circuit requires closely matched diodes for low level use, because the diode voltage drops in the first stage (D1 & D2) are used to offset the voltage drops of D3 & D4. At input voltages of more than a volt or so, the non-linearities are unlikely to cause a problem, but diode matching is still essential (IMO). Low level performance will be woeful if accurate diode forward voltage and temperature matching aren't up to scratch. A forward voltage difference of only 10mV between any two diodes will create an unacceptable error. The overall linearity is considerably worse if R3 is included.

Simple capacitor smoothing cannot be used at the output because the output is direct from an opamp, so a separate integrator is needed to get a smooth DC output. This applies to most of the other circuits shown here as well, and isn't a serious limitation.

The final circuit is a precision full-wave rectifier, but unlike the others shown it is specifically designed to drive a moving coil meter movement. There is no output voltage as such, but the circuit rectifies the incoming signal and converts it to a current to drive the meter. This general arrangement is (or was) extremely common, and could be found in audio millivoltmeters, distortion analysers, VU meters, and anywhere else an AC voltage needed to be displayed on a moving coil meter. Digital meters have replaced it in most cases, but it's still useful, and there are some places where a moving coil meter is the best display for the purpose. This type of rectifier circuit is discussed in greater detail in AN002.
Figure 11 - Moving Coil Meter Amplifier
The circuit is a voltage to current converter, and with R2 as 1k as shown, the current is 1mA/V. If a 1V RMS sinewave is applied to the input, the meter will read the average, which is 900µA. Adjusting R2 varies the sensitivity, and changing R2 to 900 ohms means the meter will show 1mA for each volt (RMS) at the input. This assumes a meter with a reasonably low resistance coil, although in theory the circuit will compensate for any series resistance.
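The 900µA figure follows directly from averaging the rectified sine. A quick check of the arithmetic (the helper name is mine):

```python
import math

def meter_current_ua(v_rms, r2=1000.0):
    """Average meter current for a sinewave input: the converter
    delivers Vin/R2, and the movement responds to the average of the
    full-wave rectified waveform (2 * Vpeak / pi)."""
    v_peak = v_rms * math.sqrt(2)
    v_avg = 2 * v_peak / math.pi
    return v_avg / r2 * 1e6   # in microamps

print(round(meter_current_ua(1.0)))          # ~900uA with R2 = 1k
print(round(meter_current_ua(1.0, 900.0)))   # ~1000uA with R2 = 900 ohms
```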
This type of circuit almost always has R2 made up from a fixed value and a trimpot, so the meter can be calibrated. Although shown with an opamp IC, the amplifying circuit will often be discrete so that it can drive as much current as needed, as well as having a wide enough bandwidth for the purpose. Millivoltmeters and distortion analysers in particular often need an extended response (100kHz or more is common), and few opamp ICs are able to provide a wide enough bandwidth to work well with anything much over 15kHz. The problem is worse at low levels because the opamp output has to swing very quickly to overcome the diode forward voltage drop. It's common to use a capacitor in parallel with the movement to provide damping, but that also changes the calibration.

Limitations: The output is very high impedance, so the meter movement is not damped unless a capacitor is used in parallel. The meter will then show the peak value, which might not be desirable, depending on the application.

As already noted, the opamp needs to be very fast. Linearity is good provided the amplifier used has high bandwidth. The circuit works better with low-threshold diodes (Schottky or germanium for example), which do not need to be matched because the circuit relies on current, and not voltage. It also only works as intended with a moving coil meter, and is not suited to driving digital panel meters or other electronic circuits. It can be done, but there's no point, as the circuit would be far more complex than others shown here.
Although the waveforms and tests described above were simulated, the Figure 6 circuit was built on my opamp test board. This board uses LM1458s - very slow and extremely ordinary opamps - but the circuit operated with very good linearity from below 20mV up to 2V RMS, and at all levels worked flawlessly up to 35kHz using 1k resistors throughout. Variations of Figure 11 have been used in several published projects and in test equipment I've built over the years. While most of the circuits show standard signal-level diodes (e.g. 1N4148 or similar), most circuits perform better with Schottky diodes, and even germanium diodes can be used with some of the circuits. Both have the advantage of a lower forward voltage drop, but they have higher reverse leakage current, which may cause problems in some cases.

One thing that became very apparent is that the Figure 6 circuit is very intolerant of stray capacitance, including capacitive loading at the output. Construction is therefore fairly critical, although adding a small cap (as shown in Figures 5 & 6) will help to some extent. I don't know why this circuit has not overtaken the 'standard' version in Figure 4, but that standard implementation still seems to be the default, despite its many limitations. Chief among these are the number of parts and the requirement for a low impedance source, which typically means another opamp. The impedance limitation does not exist in the alternative version, and it is far simpler.
The Intersil and Burr-Brown alternatives are useful, but both have low (and non-linear) input impedance. They do have the advantage of using a single supply, making both more suitable for battery operated equipment or for use alongside logic circuitry. Remember that all versions (Figures 7, 8 & 9) must be driven from a low impedance source, and the Figure 7 circuit must also be followed by a buffer because it has a high output impedance.

In all, the Figure 6 circuit is the most useful. It is simple, has a very high (and linear) input impedance, low output impedance, and good linearity within the frequency limits of the opamps. The Figure 6A version is also useful, but has a lower input impedance and requires two additional resistors (R1 in Figure 6 is not needed if the signal is earth referenced).

The circuits above show just how many different approaches can be applied to perform (essentially) the same task. Each has advantages and limitations, and it is the responsibility of the designer to choose the topology that best suits the application. Not shown here, but just as real and important, is a software version. Digital signal processors (DSPs) are capable of rectification, conversion to RMS and almost anything else you may want to achieve, but are only applicable in a predominantly digital system.

With all of these circuits, it's unrealistic to expect more than 50dB of dynamic range with good linearity. This gives a range from 10mV up to 3.2V (peak or RMS) with supplies of ±12-15V. Use of precision high speed opamps may increase that, but if displayed on an analogue (moving coil) meter, you can't read that much range anyway - even reading 40dB is difficult. 100:1 (full scale to minimum) is not easily read on most analogue movements - even assuming that the movement itself is linear at one hundredth of its nominal FSD current.

Many of the circuits shown have low impedance outputs, so the output waveform can be averaged using a resistor and capacitor filter. The value appearing across the filter cap is the average of the rectified signal - for a sinewave, the average is calculated by ...
    VAVG = ( 2 × VPeak ) / π ... or
    VAVG = VPeak × 0.637
It turns out that the RMS value of a sinewave is (close enough to) the average value times 1.11 (the inverse is 0.9) and this makes it easy enough to convert one to another. However, it only gives an accurate reading with a sinewave, and will show serious errors with more complex waveforms. To see just how much error is involved, see AN012 which covers true RMS conversion techniques and includes a table showing the error with non-sinusoidal waveforms.
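The size of those errors is easy to quantify from the form factor (RMS ÷ average) of each waveform. The numbers below are standard textbook values, not taken from AN012:

```python
import math

SINE_FORM_FACTOR = math.pi / (2 * math.sqrt(2))   # ~1.111 (the 1.11 above)

def reading_error_pct(true_form_factor):
    """Percentage error when an average-responding meter, calibrated
    for a sinewave, measures a waveform with a different form factor."""
    return (SINE_FORM_FACTOR / true_form_factor - 1) * 100

# Squarewave: RMS = average = peak, form factor 1.0 -> reads ~11% high.
print(round(reading_error_pct(1.0), 1))
# Triangle: RMS = peak/sqrt(3), average = peak/2 -> reads ~3.8% low.
print(round(reading_error_pct(2 / math.sqrt(3)), 1))
```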
Elliott Sound Products - AN-002
This app. note is adapted from the AC millivoltmeter described in the project pages, as well as some additional ideas. While none of the information here is original, it is offered as a potentially useful collection of different metering amplifiers. Meter amplifiers are a special variation of the precision rectifiers described in AN-001, and typically they need extended frequency response. Very high linearity is nice to have, but in reality few analogue meter movements will match the accuracy of any of the circuits shown here.
Two of the circuits shown here are peak reading (Figures 2 and 3), but calibrated for RMS. If you need an average reading meter (but still usually calibrated for RMS), see Figures 1 and 4, or you can use diodes in place of the voltage-doubler caps in Figure 2 (C4 and C5). This has been tested in the simulator, and it functions as expected. The disadvantage is that there are two diode voltage drops that the amplifier circuit has to overcome, and this reduces high frequency performance.

The discrete version is shown in Figure 1. This is almost identical to that shown in Project 16, with the addition of an input resistor to ground, and a higher value cap between the FET preamp and the discrete opamp formed by Q2-Q4. This circuit has the advantage of wide frequency response, and the gain is high enough to enable full scale deflection with signals as low as 3mV RMS. Making the circuit less sensitive is quite simple, and more information is given below.

Recommended supply voltage is ±15V, although the circuit works well with ±9V as shown in Project 16.

If you cannot obtain the 2N5459 JFET, you can substitute a BF244. Almost any other JFET can be used, provided the source resistor (R2) is changed to suit. Because this resistor sets the bias conditions for the JFET, you may need to experiment a little to get the best performance. Ideally, the voltage on the drain should be midway between the source voltage and the positive supply. If the JFET has 2V on the source and you use a 15V supply, the optimum voltage is therefore ...
+ ++ Vdrain = ((+V - Vsource) / 2) + Vsource = ((15 - 2) / 2) +2 = (13 / 2) + 2 = 8.5V ++ +
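As a quick sanity check, the bias rule above can be coded directly. This is an illustrative sketch only - the function name is mine, and the 15V / 2V numbers are the worked example from the text:

```python
# Optimum JFET drain voltage: midway between the source voltage and the
# positive rail, per the bias rule given in the text.
def optimum_drain_voltage(v_supply, v_source):
    return (v_supply - v_source) / 2 + v_source

# Worked example from the text: 15V rail, 2V on the source.
print(optimum_drain_voltage(15.0, 2.0))  # 8.5
```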
Although it is possible to improve the circuit in terms of linearity, this is not necessary for metering applications. A small amount of signal distortion will cause a very small overall error - usually better than the accuracy of the meter movement itself.
As shown, the circuit has a lower -3dB frequency of 1.17Hz, and according to the simulator the upper -3dB frequency is about 1MHz. I tend to think this is rather optimistic, but it is certainly possible with careful layout. It should be possible to get up to 100kHz with a reasonable error margin (about 5%).

The meter movement should ideally be either 50µA or 100µA, although it is possible to use less sensitive movements. I would not recommend anything above 100µA though, because the drive circuit has limited current capability. The circuit as shown is capable of an absolute maximum of 300µA output, but this is just below the amplifier's clipping level.

The maximum input level is limited to around 75mV RMS, although you can increase that by removing C1 (which reduces the gain of the JFET amplifier), or if you need even higher input levels the JFET stage can be removed altogether. The absolute maximum recommended input voltage is 2V RMS - if you need it to be less sensitive, it's easy to add a simple attenuator in front of the circuit.

To obtain better performance than the standard circuit, replace D1-D4 with OA91 (or OA95, 1N60, 1N34A etc.) or similar germanium diodes; BAT43 or similar Schottky diodes are almost as good. These are all faster than 1N4148 silicon diodes, and they also have a lower forward voltage drop. It may appear that this is of no importance, but the low voltage drop is beneficial. Any speed limitation of the amplifier circuit causes a measurable time delay as the signal goes from one polarity to the other, and a lower forward voltage means there is less 'dead time', where no diodes are conducting.
The next circuit is based on one used by Hewlett-Packard (which became Agilent and is now Keysight) in some of their older instruments. It has been modified to use standard E12 value resistors and more readily available transistors. One feature of the circuit is R9, which partially overcomes the forward voltage drop of the diodes to give better linearity with low input voltages, for example at 10% of full-scale. The circuit was designed with germanium diodes, because they are fast and have a very low forward voltage drop. While it is possible to adjust the value of R9 to enable the use of silicon diodes, this is not really recommended. Schottky diodes are a suitable alternative, although germanium diodes (such as the OA91) are still available if you look around.

The meter amp is sensitive enough to obtain FSD (full scale deflection) with an input of 5mV RMS, and as shown the maximum is around 28mV with the sensitivity pot at maximum resistance. This can be changed, but frequency response will almost certainly suffer because of limited open loop gain and insufficient feedback. As shown, if adjusted for 5mV, the response is flat (within 5%) up to around 300kHz.

This circuit doesn't have any particular vices, but it is completely unsuitable for DC operation because it uses only AC feedback. While circuit #1 (above) can be modified to allow DC operation, the DC stability almost certainly will not be good enough for precision work. These two discrete meter amps can be expected to perform well up to at least 100kHz, and with some tweaking can probably exceed that quite easily. The suggested power supply is ±15V, although they should work with lower (or higher) voltages if needs be. Some modifications may be required.
Opamps are very convenient, but unfortunately are not always suitable as meter amps. What they do offer is simplicity and great flexibility, with a potentially much wider input range and the ability to drive less sensitive meter movements. Very fast opamps should be able to give good frequency response, and up to 100kHz is possible with some care, or if the circuit is designed to have a relatively low sensitivity.

While it may appear that the circuit shown below cannot work properly because there is virtually no DC feedback path, it actually functions fine even without R4. The electrolytic capacitors have very small leakage, and the circuit topology generally means that the bias point of the opamp's output will be well within limits. It may make you feel better if you include R4, and although it doesn't do a great deal it's preferable to include it.

By using low forward voltage diodes (germanium or Schottky), this circuit is capable of very good results, and will be around 0.6dB down at 50kHz (opamp dependent). With small signal silicon diodes (e.g. 1N4148), it is almost worthless if set for high sensitivity. To maintain flat response, it is necessary to keep the gain fairly low, otherwise the internal Miller cap in the opamp will cause premature roll-off of high frequencies. 100mV input sensitivity is about the best you can hope for, and response should extend to 20kHz (-1dB or so). By using an uncompensated opamp, the necessary stability cap becomes an external component, so it can be selected to give the required bandwidth. A TL071 opamp (for example) has a 13V/µs slew rate, which is fairly fast ... despite this, it is much too slow to be useful at higher frequencies. Consider the NE5534 with an external compensation cap, as it should be possible to obtain flat response to 100kHz fairly easily.
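The 'keep the gain fairly low' advice follows from the usual single-pole estimate: closed-loop bandwidth is roughly the gain-bandwidth product divided by the stage gain. A sketch only - the 3MHz GBW is a typical TL071 datasheet figure, and the gains used are illustrative:

```python
# Closed-loop bandwidth estimate for an internally compensated opamp:
# BW ~= GBW / gain (single-pole approximation).
def closed_loop_bw(gbw_hz, gain):
    return gbw_hz / gain

print(closed_loop_bw(3e6, 10))   # 300000.0 -> 300 kHz at a gain of 10
print(closed_loop_bw(3e6, 100))  # 30000.0  -> only 30 kHz at a gain of 100
```

Doubling the sensitivity (gain) halves the available bandwidth, which is why high sensitivity and flat HF response fight each other with internally compensated devices.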
The issue with opamps in this role is simply one of slew rate - the amplifier must be able to overcome the diode voltage drop as quickly as possible. Ideally, the opamp would have infinite slew rate, but such opamps do not exist, so it is necessary to make do with what we have. Interestingly, it used to be possible to get an opamp that would work with 30mV input, driving a 1mA meter movement, at frequencies up to at least 500kHz. The HA2625 (Harris Semiconductor) opamp had 100MHz unity gain bandwidth, 600kHz full-power bandwidth, and very low input current. With extremely low input offset current and 'respectable' offset voltage as well, there's very little available today that can match the (now ancient) HA2625.

The gain of the opamp version is much lower than the discrete versions - about 80mV RMS input for 50µA meter current. This can be varied by changing the value of the sensitivity pot and associated series resistor, but I do not recommend using anything less than about 600Ω.
Any additional gain needed may be supplied by a preamplifier circuit, to lift the typical 3mV signal to 100mV. Consider using a JFET in front of any BJT input opamp if high input impedance is needed, otherwise noise will become a problem.

To get a real-life idea of performance, the opamp circuit was built on an opamp test board. This uses LM1458 dual opamps (equivalent to the µA741), and as predicted it was useless with 1N4148 silicon signal diodes. However, using Schottky diodes and set for FSD at around 1V (using a 250µA meter that I had handy), response was 0.2dB down at about 35kHz - not a bad effort for a very slow opamp. Performance was degraded significantly if sensitivity was increased. Virtually no opamp circuit is likely to be quite as good as discrete for sensitivity, but given the low cost and great simplicity of the opamp approach it is certainly worth considering.

The version shown above is suitable for most 'general purpose' applications, and can drive a meter of up to 1mA coil current. It's suitable for voltages of 100mV or more, and has an upper frequency response of about 10kHz (-0.1dB), depending on the opamp used. Even with 1N4148 diodes, response is respectable, but if higher sensitivity is needed you'll need an amplifier circuit in front to boost the level and/or a more sensitive meter movement. Unlike the version in Figure 3, there is no capacitance in parallel with the meter, so it is average reading. A cap can be added of course, and if large enough it will convert the meter to peak reading. A lower value can be used to damp the meter if necessary.
There are many examples of meter amps that use a voltage doubler (e.g. Figures 2 and 3) rather than a bridge rectifier (e.g. Figures 1 and 4). There are good reasons for using a doubler, in particular because there's only one diode voltage drop to be overcome, rather than two with a bridge. As always with electronic circuitry there's a trade-off (a compromise). The doubler demands twice the output current from the driver circuit, which means the metering amplifier has to provide twice as much gain as a circuit using a bridge rectifier. These are non-linear circuits, and the effort needed to present enough voltage (quickly enough) to overcome the diode forward voltage drop is the biggest limiting factor (this applies to all metering amplifiers).

The current is rarely a problem, because it's usually no more than ±3mA (assuming a 1mA meter movement), but when the gain is doubled, the amplifier's full-power bandwidth is reduced. The bridge rectifier demands a higher slew rate than a doubler for a given maximum frequency. A wide bandwidth opamp with moderate slew rate will work best with a doubler, while a moderate bandwidth opamp with high slew rate is probably better with a bridge. Opamps that use external compensation provide greater flexibility than those that are internally compensated.
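To put rough numbers on the slew-rate argument: between polarities the driving amplifier must traverse the 'dead zone' in which no diode conducts - about two diode drops each way for a bridge, one for a doubler. The sketch below assumes a 1V/µs slew rate (a representative figure for a 4558-class opamp) and 0.6V silicon diodes; both numbers are illustrative assumptions, not measurements:

```python
# Time spent crossing the rectifier 'dead zone', where no diode conducts.
# The output must slew from -n*Vf to +n*Vf (n diode drops per polarity).
def dead_time_s(n_diodes_per_polarity, vf, slew_v_per_us):
    dead_zone_v = 2 * n_diodes_per_polarity * vf
    return dead_zone_v / (slew_v_per_us * 1e6)

print(f"bridge:  {dead_time_s(2, 0.6, 1.0) * 1e6:.1f} us")  # two drops per polarity
print(f"doubler: {dead_time_s(1, 0.6, 1.0) * 1e6:.1f} us")  # half the dead time
```

At 100kHz a half-cycle is only 5µs, so a dead time of a couple of microseconds is a substantial fraction of it - and the bridge's dead time is twice the doubler's.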
To test this hypothesis, I simulated two circuits with the same sensitivity and the same meter resistance, one using a bridge and the other a doubler. The simulation was based on 4558 opamps (not bad, but far from 'top shelf'). The input was 1V RMS, and both meters were calibrated for ~1mA (there's some variance, as they are tricky to get exact in the simulator). Calibration normally relies on a trimpot. Normally one would choose a very fast opamp, with a full power bandwidth of at least 10MHz (preferably closer to 100MHz) and a slew rate of no less than 20V/µs. There are suitable opamps available, but they aren't cheap.

The bridge has lower gain (R2A is 820Ω), but there's more output voltage because there are two diode drops for each polarity. The doubler has to provide twice the current, so R1B is 410Ω. There's no difference between the two at 1kHz, but at 100kHz the bridge will read low by over 10%, vs. about 5% low for the doubler. The opamp has enough bandwidth, but the slew rate is too low to allow the output to overcome the diode forward voltage. With two diodes for each polarity, the bridge rectifier is never quite as good as the doubler, which has only one diode for each polarity.

With a very fast opamp, the difference is academic. Some designers prefer a doubler because the capacitors (C2B, C3B) damp the meter movement, so the deflection is smoother than with a bridge. A well-damped meter movement won't care either way. While you might think that the doubler must be peak-reading (rather than average-reading), it's not. Both circuits show the average value of the rectified input waveform.
Instrumentation meter amplifiers are a special case of rectifier, and present the designer with a great many, sometimes conflicting, requirements. Because measurement instruments are expected to perform well below and above the audio frequency range, it becomes a challenge to design a circuit that has sufficient gain and wide enough bandwidth to cover the required frequencies accurately.

Sensitive meters make the design easier, and in nearly all cases the lowest possible diode voltage drop is highly desirable. Metering amplifiers such as those shown in this article are used in a wide variety of test instruments, including AC millivoltmeters, distortion analysers, impedance meters, etc. They are by no means limited to audio usage, and are used in almost every area of electronics and engineering where analogue metering is required.

While most meters are now digital, analogue meters have a special place in test equipment. They are generally easier to read, and you can visually gauge the average when the pointer is fluctuating. This isn't possible with a digital meter unless it's designed to be slow, averaging the waveform before it's displayed.
1 - Hewlett Packard instrumentation manuals (various)
2 - Opamp datasheets (for the devices mentioned)
Elliott Sound Products - AN-003
There are quite a few high power light-emitting diodes now available, but the standard is still the Luxeon Star. Available in a variety of power ratings, colours and light patterns, these LEDs are causing something of a revolution in many areas. They have relatively low heat dissipation compared to light output, long life and there is great flexibility of use - they can be used safely where an incandescent lamp could not.
Being LEDs, they do have the rather annoying trait of being current-driven devices with a relatively low forward voltage. The current must not be allowed to exceed the design maximum, or the LED will be damaged. This means a current regulator must be used between the voltage source and the LED itself, so complexity is increased compared to using a normal lamp.

Although there are many ICs available that can be adapted to drive the Star LEDs (or their cheaper generic equivalents), not all are easy to obtain, many are available only in surface mount packages, and they can be rather expensive. Most also require external support components, increasing the price even further.

An alternative is to use a linear regulator, but these are very inefficient. The full current (typically around 300mA) is drawn at all supply voltages, so with 12V input, the total circuit dissipation is 3.6W. Admittedly, this is not a great deal, but where efficiency is paramount, such as with battery operation, this is not a good solution. The circuit shown in Figure 1 was the result of a sudden brainwave on my part - it may have been triggered by something I saw somewhere, but if so that reference was well gone by the time I decided to simulate it to see if it would work.
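For comparison purposes, a linear (series) regulator draws the full LED current at the supply voltage, so input power is simply Vin × I. A quick sketch using the 12V / 300mA figures quoted above (the function name is mine):

```python
# Input power drawn by a linear (series) current regulator:
# the full LED current flows at the supply voltage, so P = Vin * I.
def linear_input_power(v_in, i_led):
    return v_in * i_led

print(round(linear_input_power(12.0, 0.3), 2))  # 3.6 (watts), as quoted
```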
Figure 1 - Ultra-Simple LED Switchmode Supply
Using only three cheap transistors, the circuit works remarkably well. It is not as efficient as some of the dedicated ICs, but is far more efficient than a linear regulator. It has the great advantage that you can actually see what it does and how it does it. From the experimenters' perspective, this is probably one of its major benefits.
One of the features of this circuit is that it will change from switchmode to linear as the input voltage falls. It still remains a current supply, and the design current (set by R1) does not change appreciably as the operation changes from linear to switchmode or vice versa.

Operation is quite simple - Q1 monitors the voltage across R1, and turns on as soon as it reaches about 0.7V. This turns off Q2, which then turns off Q3 by removing base current. If the voltage is low, a state of equilibrium is reached where the voltage across R1 remains constant, and so therefore does the current through it (and likewise through the LED). The value of R1 can be changed to suit the maximum LED current ...

    I = 0.7 / R1 (approx.)
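Rearranging the formula gives the sense resistor for a chosen LED current. A small illustrative helper (the 260mA target matches the prototype's measured LED current):

```python
# Pick the current-sense resistor from the target LED current,
# using I ~= 0.7 / R1 (0.7V is Q1's base-emitter turn-on voltage).
def r1_for_current(i_led, vbe=0.7):
    return vbe / i_led

print(round(r1_for_current(0.26), 2))  # 2.69 -> the 2.7 ohm value used in the prototype
```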
At higher input voltages, the circuit will over-react. Because of the delay caused by the inductor, the voltage across R1 will manage to get above the threshold voltage by a small amount. Q3 will get to turn on hard, current flows through the inductor and into C1 and the LED. By this time, the transistors will have reacted to the high voltage across R1, so Q1 turns on, turning off Q2 and Q3. The magnetic field in L1 collapses, and the reverse voltage created causes current to flow through D1 and into C2. The cap now discharges through the LED and R1, until the voltage across R1 is such that Q1 turns off again. Q2 and Q3 then turn back on.
This cycle repeats for as long as power is applied at above the threshold needed for oscillation (a bit over 5V). As shown in the table below, the circuit changes its operating frequency as its method of changing the pulse width. This is not uncommon with self-oscillating switchmode supplies.
Voltage | Input Current | Frequency | Input Power
4.5V | 260mA | Not Oscillating | 1.17W
6.0V | 202mA | 230kHz | 1.21W
8.0V | 164mA | 172kHz | 1.31W
12V | 123mA | 123kHz | 1.48W
16V | 104mA | 100kHz | 1.66W
The table above shows the operating characteristic of the prototype. I also checked the performance with an ultrafast silicon diode, and the input operating current was increased by almost 10%. The suggested Schottky diode is well worth the effort. LED current remains fairly steady at 260mA, since I used a 2.7 ohm current sensing resistor as shown in the circuit diagram.
Q1 and Q2 can be any low power NPN transistor. BC549s are shown in the circuit, but most types are quite fast enough for this application. Q3 needs to be a medium power device, and the BD140 as shown works well in practice. D1 should be a high speed diode, and a Schottky device will improve efficiency over a standard high speed silicon diode. D1 needs to be rated at a minimum of 1A. L1 is a 100µH choke, and will typically be either a small 'drum' core or a powdered iron toroid. An air cored coil can be used, but will be rather large (at least as big as the rest of the circuit).

The efficiency is not as high as you would get from a dedicated IC, because the switching losses are higher due to relatively slow transitions. At best, I measured around 60%, which isn't bad for such a simple circuit. Input voltage can range from the minimum needed to turn on the LED, up to about 16V or so. Higher voltages may be acceptable, but that had not been tried at the time of writing.

All resistors can be 0.25 or 0.5W except R1 - this needs to be rated at 0.5W. Paralleled low value resistors may be used to get the exact current you need, but always make sure that you start with a higher resistance than you think you will need. If the resistance is too low, the LED may be damaged by excess current.
Elliott Sound Products - AN-004
There are countless dome light extenders on the Net and in magazines, but most of them suffer from one problem ... complexity. Ok, they are not actually complex, but most are far more complex than they need to be. Some are completely over the top, and require additional car wiring, a PCB, ICs, trimpots and lots of other stuff, while others seem to be someone's untested idea or maybe just a brain fart - some circuits I saw will never work. My goal was extreme simplicity, and I think that has been achieved. It is helpful if it works too - there's not much point making it otherwise. Efficiency is not an issue, since the dimming phase is relatively short lived anyway. Worst case dissipation in Q2 should not exceed about 2W or so (momentary) with a standard 6W dome lamp.
As with so many projects on the ESP site, this one came about through necessity (or is that desire?). My car has most of the bells and whistles that one expects these days, but the dome light switched off as soon as the door was closed. I figured that about 15 seconds was a reasonable time delay, and I only had a very small space in which to locate the unit - namely in the dome light housing itself.

There's not much to it, really. It is obviously important that the existing car wiring be used - the last thing one wants to do is run additional wires in a car. Standard dome lights use the door switch to make the negative connection to the lamp, with the positive being permanently connected to the car's positive battery terminal (via the obligatory fuse).
Figure 1 - Dome Light Extender Schematic
As you can see, it is very simple. Cheap (mainly 'junk box') transistors are used throughout, and the resistors can be very ordinary carbon film types. The cap only needs to have a voltage rating of 16V, but higher voltage caps can be used if you have them to hand.
When the car door is opened, the 'Trigger' terminal is connected to chassis. This turns on Q1, which promptly charges C1, thus turning on MOSFET Q2. Provided there is enough gate voltage for Q2, the lamp will remain on, but as the cap discharges the gate voltage gets to the point where Q2 is no longer saturated and the lamp starts to dim. As the cap discharges further, the lamp dims more, eventually going out altogether. Full brightness remained in my circuit for about 20 seconds, and the lamp was extinguished within 22 seconds.
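As a rough guide, the time for the gate voltage to fall from the battery voltage to the MOSFET's effective turn-off threshold follows the usual RC discharge law, t = RC·ln(V0/Vth). The values below (1MΩ, 22µF, 12V start, 4V threshold) are purely illustrative placeholders - they are not the schematic's actual component values:

```python
import math

# RC discharge: time for C1's voltage to fall from v_start to v_thresh
# through a bleed resistor (t = R * C * ln(v_start / v_thresh)).
def discharge_time(r_ohms, c_farads, v_start, v_thresh):
    return r_ohms * c_farads * math.log(v_start / v_thresh)

# Illustrative values only -- NOT the schematic's actual components.
t = discharge_time(1e6, 22e-6, 12.0, 4.0)
print(f"{t:.1f} s")  # ~24 s with these example values
```

The logarithmic term is why the lamp holds near full brightness for most of the delay, then dims fairly quickly at the end.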
A switching MOSFET has a fairly rapid transition from conducting to non-conducting states, with a relatively small gate voltage range between fully on and fully off, and that makes it an ideal switch here. The transition period is quite narrow, so no heatsink is needed. Timing is also reasonably predictable, since it is determined by the resistor and cap. A low value cap can be used, minimising size. The zener is essential to protect the gate against transients (all too common in a car's electrics). The resistor (R3) provides a high impedance for any transients so they don't just blow the zener and the MOSFET gate.
Figure 2 - Dome Light Extender With Voltage Detector
Figure 2 shows an enhanced version that uses Q3 as a battery voltage detector. When the engine is off, Q3 remains off too, because the zener (D2) doesn't have enough voltage to conduct. C1 therefore discharges through R4 normally, and the full timeout period applies. When the engine is running, the battery voltage quickly rises to ~13.8V (the normal float charge voltage for a lead-acid car battery). This allows D2 to conduct, turning on Q3 and discharging C1 via R7, so the cap is discharged much faster.

This addition was made to my unit after I fitted a (home made) LED light to replace the silly incandescent bulb. Because the new LED lamp is so bright (yet only draws about 200mA), it became annoying at night because it was too bright inside the car. By adding the extra bits, it now extinguishes in about 4 seconds when the engine is started or is running. There is plenty of time to get organised having opened the door and clambered in, but when the engine is started the lamp goes out much more quickly. R7 can be reduced in value if faster operation is required - down to about 22k for a really fast turn off. If the value is too low, the lamp will not turn on at all if the engine is running.

Nothing is critical, except that all the usual precautions against short circuits must be taken. If the time delay is too long (or short), simply reduce (or increase) the value of C1 or R4 as appropriate. Because of the design, the existing wiring in the dome light is retained, except that the door switch lead needs to connect to the trigger input rather than directly to the lamp.
The resistor values shown are a guide only, and the circuit will work fine with a fairly wide range of values. Those shown are not bad though, so feel free to use them. Likewise, almost any small signal PNP transistor can be used, and the MOSFET can be almost any N-channel switching device capable of at least a couple of amps.

Since the typical dome light is only rated at about 6W (0.5A at 12V), high current wiring is not necessary. Just make sure that everything is properly insulated so that nothing can short to chassis.

I suggest that the dome light switch is wired directly to the lamp as normal - if possible (not all switches will allow this). This prevents the delay from operating should you turn on the interior light, so it goes off immediately when switched. While it is possible to add an extra transistor to reduce the on time if the engine is running (as suggested in at least one circuit I saw), this would normally require running an extra wire - an exercise in futility with most cars.

The method shown in Figure 2 does not require any additional wiring, and is probably the easiest way to modify the timing to make the lamp turn off faster when the engine is running.
Elliott Sound Products - AN-005
Zero crossing detectors as a group are not a well-understood application, although they are essential elements in a wide range of products. It has probably escaped the notice of readers who have looked at the lighting controller or the Linkwitz Cosine Burst Generator (both are on the ESP website), but these rely on a zero crossing detector for their operation. So too does the ESP Tone Burst Generator project.
A zero crossing detector (ZCD) literally detects the transition of a signal waveform from positive to negative (and vice versa), ideally providing a narrow pulse that coincides exactly with the zero voltage condition. At first glance this would appear to be an easy enough task, but in fact it is quite complex, especially where high frequencies are involved. In this instance, even 1kHz starts to present a real challenge if extreme accuracy is needed.

The not so humble comparator plays a vital role - without it, most precision zero crossing detectors would not work, and we'd be without digital audio, PWM and a multitude of other applications that are perhaps taken for granted.

If you search the Net for zero crossing detectors, you will see a multitude of circuits suggesting the venerable µA741. The circuits will work, but the 741 is several orders of magnitude too slow to be even remotely usable at frequencies above perhaps 100Hz or so. The slew rate of a µA741 is 0.5V/µs - it's one of the slowest opamps around. In all cases, the 741 should be replaced with something considerably faster, such as an uncompensated LM301 or a 'real' comparator. By comparison, a TL071 opamp has a typical unity gain slew rate of 13V/µs, and even that is slow compared to most comparators (note however, this slew rate is not necessarily achieved open-loop). Expect dedicated comparators to have a slew rate of at least 100V/µs!
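The slew-rate figures translate directly into a maximum clean sinewave frequency via the full-power bandwidth relation f = SR / (2π·Vpeak). A sketch, assuming an arbitrary but representative 10V peak swing:

```python
import math

# Maximum undistorted sinewave frequency for a given slew rate:
# f_max = SR / (2 * pi * Vpeak).
def full_power_bandwidth(slew_v_per_us, v_peak):
    return slew_v_per_us * 1e6 / (2 * math.pi * v_peak)

# 10V peak output swing assumed for all three devices.
for name, sr in [("uA741", 0.5), ("TL071", 13.0), ("fast comparator", 100.0)]:
    print(f"{name}: {full_power_bandwidth(sr, 10.0) / 1e3:.0f} kHz")
```

The 741 runs out of steam below 10kHz at this swing, which is why it is hopeless in a fast zero crossing detector even though it 'works' at mains frequencies.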
The reader may also wish to have a look at the zero crossing detector described in the article about Comparators, which includes a circuit that can perform very well with audio frequencies up to at least 10kHz. It's more complex than the ones shown here, but is also a great deal more versatile. It's easy to get a pulse duty cycle of less than 2% at 1kHz. Similar results can be obtained from some of the other circuits described here, provided a fast enough comparator is used.

The ideal zero crossing detector has infinite gain, and will change its output state at the exact moment the input signal passes through zero. The output state change should be instantaneous.

It goes without saying that the 'ideal' does not exist, and there are many factors that influence the end result. All devices have finite gain (typically up to 100dB or so), and that limits the ultimate sensitivity to a change of voltage at the input. The input transistors of a comparator based circuit will never be perfectly matched, so the zero point can be displaced by several (or many) millivolts. All active circuits are subject to speed limitations, and nothing is instant. The output voltage can't change from (say) zero to 5V without some finite speed limit (known as slew rate). There is also the circuit's reaction time (propagation delay) that has to be considered, as that determines how quickly a signal gets from the input to the output. The limitations of real circuits have to be considered during the design process. While reality can be disappointing, that's what we have to live with.
Figure 1 shows the zero crossing detector as used for the dimmer ramp generator in Project 62. This circuit has been around (almost) forever, and it does work reasonably well. Although it has almost zero phase inaccuracy, that is largely because the pulse is so broad that any inaccuracy is completely swamped. The comparator function is handled by transistor Q1 - very basic, but adequate for the job.

The circuit is also sensitive to level, and for acceptable performance the AC waveform needs to be of reasonably high amplitude - 12-15V AC is typical. If the voltage is too low, the pulse width will increase. The arrangement shown actually gives better performance than the version shown in Project 62 and elsewhere on the Net. In case you were wondering, R1 is there to ensure that the voltage falls to zero - stray capacitance and even the tiniest amount of diode leakage current is sufficient to stop the circuit from working without it.

The pulse width of this circuit (at 50Hz) is typically around 600µs (0.6ms), which sounds fast enough. The problem is that at 50Hz each half cycle takes only 10ms (8.33ms at 60Hz), so the pulse occupies around 6% of each half cycle. This is why most dimmers can only claim a range of 10%-90% - the zero crossing pulse lasts too long to allow more range.
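The arithmetic is easy to check - a quick sketch comparing the 600µs pulse with the mains half-cycle at 50Hz and 60Hz:

```python
# Zero-cross pulse width as a percentage of one mains half-cycle.
def pulse_fraction_percent(pulse_s, mains_hz):
    half_cycle_s = 1.0 / (2.0 * mains_hz)
    return 100.0 * pulse_s / half_cycle_s

print(round(pulse_fraction_percent(600e-6, 50), 1))  # 6.0 -> 6% of a 10 ms half-cycle
print(round(pulse_fraction_percent(600e-6, 60), 1))  # 7.2 -> slightly worse at 60 Hz
```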
While this is not a problem with the average dimmer, it is not acceptable for precision applications. For a tone burst generator (either the cosine burst or a 'conventional' tone burst generator), any inaccuracy will cause the switched waveform to contain glitches. The seriousness of this depends on the application.

Precision zero crossing detectors come in a fairly wide range of topologies, some interesting, others not. One of the most common is shown in Project 58, and is commonly used for this application. The exclusive-OR (XOR) gate makes an excellent edge detector, as shown in Figure 2. The risetime of the input signal is critical - if it's too slow, there will be no output. The total risetime must be less than the delay determined by R1 and C1 (nominally 56ns in the circuit shown).

There is no doubt that the circuit shown above is more than capable of excellent results up to quite respectable frequencies. The upper frequency is limited only by the speed of the device used, and a 74HC86 has a propagation delay of only 11ns [ 1 ] and a transition time of 7ns, so operation at 100kHz or above is achievable. The CMOS 4070 can be used, but it has a much greater propagation delay (110ns with a 5V supply) and transition time (100ns with a 5V supply). Timings are 'typical', as shown in datasheets.
The XOR gate is a special case in logic. It will output a '1' only when the inputs are different (i.e. one input must be at logic high (1) and the other at logic low (0)). The resistor and cap form a delay so that when an edge is presented (either rising or falling), the delayed input holds its previous value for a short time. In the example shown, the pulse width is 50ns. The signal is delayed by the propagation time of the device itself (around 11ns), so a small phase error has been introduced. The rise and fall time of the squarewave signal applied was 50ns, and this adds some more phase shift.

Depending on the application, you will need to change the values of R1 and C1. The values shown provide a very narrow pulse (around 50ns), but most circuits don't need to be that fast. The length of the pulse is nominally just the product of the two values (56ns as shown), but that pulse width is too short for some oscilloscopes to display properly. For audio (up to around 10kHz), you can use 10k for R1 and 100pF for C1, giving a pulse width of 1µs.
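Since the nominal pulse width is just the R1·C1 product, selecting values is trivial. A sketch using the audio-friendly values suggested above (helper name is mine):

```python
# Nominal XOR edge-detector pulse width: simply the R1 * C1 product.
def pulse_width_s(r_ohms, c_farads):
    return r_ohms * c_farads

# The audio values suggested in the text: 10k and 100pF -> about 1 us.
print(round(pulse_width_s(10e3, 100e-12) * 1e9))  # 1000 (ns)
```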
There is a pattern emerging in this article - the biggest limitation is speed, even for relatively slow signals. Digital logic can operate at very high speeds, and we have well reached the point where the signals can no longer be referred to as '1' and '0' - digital signals are back into the analogue domain, specifically RF technology. PCB tracks become transmission lines, and must often be terminated to prevent serious corruption of the digital waveform.

The next challenge we face is converting the input waveform (we will assume a sinewave or other audio frequency waveform) into sharply defined edges so the XOR can work its magic. Another terribly under-rated building block is the comparator. While opamps can be used for low speed operation (depending on the application), extreme speed is needed for accurate digitisation of an analogue signal. It may not appear so at first glance, but a zero crossing detector is a special purpose analogue to digital converter (ADC). In some cases, you can use an uncompensated opamp (such as the LM301) as a comparator, but most 'real' comparators are significantly faster. An LM301 was used as a zero crossing detector in Project 143.

The comparator used for a high speed zero crossing detector, a PWM converter or conventional ADC is critical. Low propagation delay and extremely fast operation are not only desirable, they are essential.
    Comparators may be the most underrated and under utilised monolithic linear component. This is unfortunate because comparators are one of the most flexible and universally applicable components available. In large measure the lack of recognition is due to the IC opamp, whose versatility allows it to dominate the analog design world. Comparators are frequently perceived as devices that crudely express analog signals in digital form - a 1-bit A/D converter. Strictly speaking, this viewpoint is correct. It is also wastefully constrictive in its outlook. Comparators don't "just compare" in the same way that opamps don't "just amplify". [ 2 ]
The above quote from Linear Technology was so perfect that I just had to include it. Comparators are indeed underrated as a building block, and they have two chief requirements ... low input offset and speed. For the application at hand (a zero crossing detector), both of these factors will determine the final accuracy of the circuit. The XOR has been demonstrated to give a precise and repeatable pulse, but its accuracy depends upon the exact time it 'sees' the transition of the AC waveform across zero. This task belongs to the comparator.
+ +In Figure 3 we see a typical comparator used for this application. The output is a square wave, which is then sent to a circuit such as that in Figure 2. This will create a single pulse for each squarewave transition, and this equates to the zero crossings of the input signal. It is assumed for this application that the input waveform is referenced to zero volts, so swings equally above and below zero. If the input voltage is outside the allowable input voltage of the comparator, it will need to be clamped to ensure the input transistors are not damaged.
+ +Note that most comparators have an open collector output, and the output pin must be connected to a positive supply with a suitable resistor. This is shown in Figure 3, with R2 connected to +Vcc. In most cases, the pull-up resistor (as it's known) can connect to a higher or lower voltage than the comparator's supply, allowing it to act as a level shifter. In some cases, the output can be used to activate a relay, provided the relay current is within the IC's ratings.
+ +Figure 4 shows how the comparator can mess with our signal, causing the transition to be displaced in time, thereby causing an error. The significance of the error depends entirely on our expectations - there is no point trying to get an error of less than 10ns for a 50/60Hz lamp dimmer, for example.
The LM393 comparator that was used for the simulation is a basic, comparatively low speed type, and with a quoted response time of 300ns it is too slow to be usable in this application. This is made a great deal worse by the propagation delay, which (as simulated) is 1.5µs. In general, the lower the power dissipation of a comparator, the slower it will be, although modern IC techniques have overcome this to some extent. Another choice here would be the LM339 (the quad version of the LM393), which is very similar.
+ +You can see that the zero crossing of the sinewave (shown in green) occurs well before the output (red) transition - the cursor positions are set for the exact zero crossing of each signal. The output transition starts as the input passes through zero, but because of device delays, the output transition is almost 5µs later. Most of this delay is caused by the rather leisurely pace at which the output changes - in this case, about 5µs for the total 7V peak to peak swing. That gives us a slew rate of 1.4V/µs which is useless for anything above 100Hz or so.
One of the critical factors with the comparator is its supply voltage. Ideally, this should be as low as possible - typically no more than ±5V. The higher the supply voltage, the further the output voltage has to swing to get from maximum negative to maximum positive and vice versa. While a slew rate of 100V/µs may seem high, it may be too slow for an accurate ADC, pulse width modulator or high frequency zero crossing detector.
At 100V/µs and a total supply voltage of 10V (±5V), it will take 0.1µs (100ns) for the output to swing from one extreme to the other. To get that into the realm of what we need, the slew rate would need to be 1kV/µs, giving a 10ns transition time. Working from Figure 4, you can see that even then there is an additional timing error of 5ns - not large, and in reality probably as good as we can expect.
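The transition-time arithmetic above can be sketched as a simple worked calculation (this is just the figures from the text re-derived, not anything from the original article):

```python
def transition_time_us(swing_volts: float, slew_v_per_us: float) -> float:
    """Time (in microseconds) for the output to traverse the full supply swing
    at a given slew rate."""
    return swing_volts / slew_v_per_us

t_100 = transition_time_us(10, 100)    # 10 V swing at 100 V/us -> 0.1 us (100 ns)
t_1k = transition_time_us(10, 1000)    # at 1 kV/us -> 0.01 us (10 ns)
```

The same relationship can be rearranged to find the slew rate required for a target transition time, which is how the 1kV/µs figure above was arrived at.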
+ +The problem is that the output doesn't even start to change until the input voltage passes through the reference point (usually ground). If there is any delay caused by slew rate limiting ('transition time') and propagation delay, by the time the output voltage passes through zero volts it is already many nanoseconds late. Extremely high slew rates are possible, and Reference 2 has details of a comparator (LT1016) that is faster than a TTL inverter! Very careful board layout and attention to bypassing is essential at such speeds, or the performance will be worse than woeful.
+ +While zero crossing detectors intended for mains (120V, 60Hz/ 230V, 50Hz) phase control are fairly straightforward, once you are working with higher frequencies (including audio), the requirement for high speed becomes imperative. Naturally, any significant speed increase also means a more expensive part that draws higher current, and much greater care is needed when laying out a PCB than needed for more pedestrian comparators.
+ + +This version is contributed by John Rowland [ 3 ] and is a very clever use of an existing IC for a completely new purpose. The DS3486 is a quad RS-422/ RS-423 differential line receiver. Although it only operates from a single 5V supply, the IC can accept an input signal of up to ±25V without damage - however, that's the absolute maximum, and recommended input voltage is ±7V. It is also fairly fast, with a typical quoted propagation time of 19ns and internal hysteresis of 140mV.
+ +The general scheme is shown in Figure 5. Two of the comparators in the IC are used - one detects when the input voltage is positive and the other detects negative (with respect to earth/ ground). The NOR gate can only produce an output during the brief period when both comparator outputs are low (i.e. close to earth potential).
However, tests show that the two differential receiver channels do not switch at exactly 0.00V. With a typical DS3486 device, the positive detector switches at about 0.015V and the negative detector switches at approximately -0.010V. This results in an asymmetrical dead band of 25mV around 0V. Adding resistors as shown in Figure 6 allows the dead band to be made smaller, and (perhaps more importantly for some applications), it can be made symmetrical.

Although fixed resistors are shown, it will generally be necessary to use trimpots. This allows for the variations between individual comparators - even within the same package. This is necessary because the DS3486 is only specified to switch with voltages no greater than ±200mV. The typical voltage is specified to be 70mV (exactly half the hysteresis voltage), but this is not a guaranteed parameter.
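The dead-band figures quoted can be checked with a trivial calculation - the width is the difference between the two thresholds, and the centre offset shows the asymmetry that the trimming resistors are there to remove:

```python
def dead_band(upper_v: float, lower_v: float):
    """Width and centre offset of the comparator dead band around 0 V."""
    width = upper_v - lower_v
    centre = (upper_v + lower_v) / 2
    return width, centre

# Thresholds measured for a typical DS3486, as quoted in the text:
width, centre = dead_band(0.015, -0.010)   # 25 mV wide, centred 2.5 mV above zero
```

A perfectly symmetrical dead band would return a centre offset of zero; the trimpots are adjusted until the two thresholds straddle 0V equally.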
+ +Indeed, John Rowland (the original designer of the circuit) told me that only the National Semiconductor devices worked in the circuit - supposedly identical ICs from other manufacturers refused to function. I quote ...
    We did some testing with 'equivalent' parts made by other manufacturers, and found very different behavior in the near-zero region. Some parts have lots of hysteresis, some have none, detection thresholds vary from device to device, and in fact even in a quad part like the DS3486 they are different from channel to channel within the same package. Eventually we settled on the National DS3486 with some added resistors on its input pins as shown in Figure 6. The most recent version of the circuit uses trimpots, 100 ohm on the positive detector and 200 ohm on the negative detector. These values allow us to trim almost every DS3486 to balance the noise threshold in the ±5mV to ±15mV range. Occasionally we do get a DS3486 which will not detect in this range. Sometimes, we find that both the positive and negative detectors are tripping on the same side (polarity) of zero; if so we pull that chip and replace it.
The additional resistors allow the detection thresholds to be adjusted to balance the detection region around 0V. The resistor from pin 1 to earth makes the positive detector threshold more positive. The resistor from the input to pin 7 forces the negative detector threshold to become more negative. Typical values are shown for ±25mV detection using National's DS3486 parts. In reality, trimpots are essential to provide in-circuit adjustment.
+ + +There are countless ways to make a mains zero crossing detector. In many cases, the simplest circuit will be the most appropriate for a variety of reasons. The most common reason is cost - higher performance circuits need more parts, and that adds not only the cost of the parts, but the PCB real estate needed to accommodate them. When powering anything from the mains, series resistors must be physically larger than their power rating would indicate due to the large voltage gradients across them. Adding more parts simply means that the circuit takes up more space, and that may not be convenient.
The two circuits shown below are examples of a simple approach (but with comparatively high dissipation, i.e. wasted power) and a more complex one that draws much lower current from the mains. Many other designs are possible of course, but the two shown should be enough to get you started. There is a balance that needs to be struck between cost, complexity and performance. For example, a high cost precision circuit is not needed for a light dimmer, but a simple, low cost circuit will not have the accuracy required for test instrumentation. Further approaches are shown in the next section.
+ +A zero-crossing detector can be used to detect phase anomalies, or even as a 'loss of AC' detector. If the AC input is interrupted, the output pulse will be much longer than the nominal 1ms, and this is easily picked up by a microcontroller or other circuitry. The Figure 7 or Figure 8 circuit can be used, with the difference being that the output from Figure 7 will simply remain low if the AC fails. Should it remain low for more than 2ms or so, that means that there is no AC.
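The 'loss of AC' test described above might be sketched as follows. All names and the exact timeout are illustrative assumptions, not from the original article - a real implementation would run off the microcontroller's timer, checking how long the detector output has been held in its active (low) state:

```python
LOW_LIMIT_S = 0.002   # nominal ZCD pulse is ~1 ms; low for >2 ms means AC has failed

def ac_failed(low_since_s: float, now_s: float, output_is_low: bool) -> bool:
    """Flag a mains failure when the detector output stays low for over ~2 ms.

    low_since_s: time at which the output last went low.
    now_s: current time from the same clock.
    """
    return output_is_low and (now_s - low_since_s) > LOW_LIMIT_S
```

While the mains is present, the output returns high well inside the 2ms window on every half-cycle, so the flag is never raised.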
+ +If your application uses a conventional iron-core transformer power supply, you can use a zero crossing detector as shown in the LX-800 Power Control Section, part of the stage lighting controller that was published back in 2001. While this is a safe and effective option, it can't be used if your circuit relies on a switchmode power supply because the mains waveform isn't available.
WARNING - The circuits described below involve mains wiring, and in some jurisdictions it may be illegal to work on or build mains powered equipment unless suitably qualified. Electrical safety is critical, and all wiring must be performed to the standards required in your country. ESP will not be held responsible for any loss or damage howsoever caused by the use or misuse of the material provided in this article. If you are not qualified and/or experienced with electrical mains wiring, then you must not attempt to build the circuit described. By continuing and/or building any of the circuits described, you agree that all responsibility for loss, damage (including personal injury or death) is yours alone. Never work on mains equipment while the mains is connected!
In the circuits below, there is a line indicated as 'Isolation Barrier'. Everything to the left of the optocoupler (including the LED input pins) is at mains potential, and is waiting to kill you if you're not careful. The section of PCB beneath the optocoupler must not have any copper tracks, and there is an advantage if even the PCB material itself is removed to create an air gap between the 'live' and 'safe' sections. Live wiring should be isolated by an absolute minimum of 5mm from any wiring that is user accessible (connections to potentiometers, input/ output plugs or sockets, etc.).
+ +Mains voltage zero-crossing detectors are common, and are essential with advanced 'phase cut' dimmers and many other mains switching applications. A simple version is shown below, and this was used in the trailing edge dimmer Project 157A and leading edge dimmer Project 157B projects. Resistor dissipation is acceptable (around 400mW in each resistor, 800mW total wasted power), but it's not a precision or low power circuit by any definition. Two resistors are shown to limit the mains current, not because of power, but voltage rating. Ideally they will be 1W types to minimise temperature and provide the maximum voltage rating. Most resistors have a maximum voltage limit that's well below the 325V peak from 230V mains, and using two (or even four) in series limits the voltage across each resistor to a safe value and extends resistor life.
The pulse width depends on the optocoupler, and particularly the transfer ratio (which is based on the LED efficiency and the gain of the transistor). R1 and R2 should be reduced to 15k for 120V operation. It can also be done using an optocoupler with two back-to-back LEDs (e.g. SFH620A, H11AA1 or similar), eliminating the need for a diode bridge. This type of ZCD provides a positive pulse at the zero crossing, which can be converted to negative-going by using the optocoupler's transistor as an emitter follower. (There is no difference in the transfer ratio just because the transistor position is changed.)
+ +As shown, the peak LED current is just under 5mA, but the circuit will work with less. The minimum suggested peak current is around 2.4mA, making R1 and R2 68k. This reduces total dissipation to just under 300mW, but the load on the phototransistor has to be minimised (R3 should not be less than 10k with a 5V supply). This requirement can be relaxed (a little) if the optocoupler has a high current transfer ratio (at least 200%). As the LED ages it will lose output [ 4 ], but maintaining a low forward current keeps this to a minimum. The LED can be expected to last for at least 20 years if the current is kept low (~ 10% of rated maximum is a good starting point).
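The quoted figures can be cross-checked with some simple arithmetic. The resistor values used below (33k each for 230V operation) are inferred from the quoted currents and dissipation, not stated explicitly here, so treat them as an assumption:

```python
PEAK_230V = 230 * 2 ** 0.5    # ~325 V peak from 230 V RMS mains

def led_peak_ma(r_total_ohms: float) -> float:
    """Peak optocoupler LED current (mA), ignoring the small LED/bridge drops."""
    return PEAK_230V / r_total_ohms * 1000

def dissipation_per_resistor_w(r_each_ohms: float, n: int = 2) -> float:
    """Approximate RMS dissipation in each of n equal series resistors across 230 V."""
    v_each = 230 / n
    return v_each ** 2 / r_each_ohms

i_33k = led_peak_ma(2 * 33e3)       # ~4.9 mA peak ('just under 5 mA')
p_33k = dissipation_per_resistor_w(33e3)   # ~0.4 W per resistor (0.8 W total)
i_68k = led_peak_ma(2 * 68e3)       # ~2.4 mA peak, the suggested minimum
```

The same functions show why two (or four) series resistors are used: each drops only its share of the mains voltage, keeping it within the resistor's voltage rating.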
+ +The 'transfer ratio' (or 'CTR' - current transfer ratio) of optocouplers needs some explanation. If described as '100%' (not uncommon for basic types), that means that 5mA in the LED will allow a maximum transistor current of 5mA. However, this is not a linear function, and the transfer ratio changes depending on LED current, hours of use, transistor collector (or emitter) external resistance and supply voltage. Unless specified for true linear operation, don't imagine that you can use an optocoupler for any signal transfer that requires high linearity. This general class of optocoupler is intended for 'on-off' operation, or for switchmode power supply regulation where linearity is not a requirement.
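The CTR relationship itself is simple to express. The sketch below uses the '100%' example from the text, but remember that in a real part the ratio varies with LED current, temperature and age, so this is only the nominal maximum:

```python
def max_transistor_ma(led_ma: float, ctr_percent: float) -> float:
    """Maximum phototransistor current permitted by the current transfer ratio."""
    return led_ma * ctr_percent / 100

basic = max_transistor_ma(5, 100)      # 100% CTR: 5 mA LED supports at most 5 mA
high_ctr = max_transistor_ma(5, 200)   # 200% CTR: the same LED current supports 10 mA
```

This is why a high-CTR device lets you relax the load resistor requirement - less LED current is needed for the same output swing.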
+ +One disadvantage of the circuit shown above is that the LED in the optocoupler gets current for at least 90% of the time. The zero crossing is indicated by the absence of current as the voltage across the LED falls to zero. Since the LED's useful life is determined by the amount of current it must pass and the total 'on' time, this reduces LED life. By maintaining a relatively low current, the optocoupler should last for a long time, but it's not the optimum way to drive it.
+ +The next circuit was found completely by accident, and because it works so well I asked the designer for permission to publish it here. The detector is very low power, and has particularly good detection of the mains zero crossing point. It's easy to get the pulse down to less than 1ms, and with some component value changes the pulse width can be reduced to around 500µs. While this level of precision isn't needed for most applications, it's inexpensive to implement, and works very well. Note that it will not be operative for a couple of hundred milliseconds after power is applied, because C2 has to charge before the LED current is useful.
+ +The LED gets current only when the input voltage is (close to) zero, so it has a much shorter duty cycle and should therefore last longer. However, the circuit needs an electrolytic capacitor, and these normally have a shorter life than LEDs. However, I don't consider this to be a limitation, because the circuitry on the isolated side will also use electros, and the other benefits of the circuit outweigh that one (very) small negative. The pulse width remains almost constant despite input voltage, with only the slightest change if the mains voltage falls from 230V to 120V. Peak LED current is affected though, and is proportional to the mains input voltage.
+ +The author's page [ 5 ] has a lot of additional information and is recommended reading. R6 is an addition that can be used to reduce the width of the zero-crossing pulse. With the other values as shown, adding R6 reduces the pulse width from 830µs to 440µs, but it also reduces the LED current to about 2mA. R3 is different from the original as well. At 22k (as shown on the author's website), the pulse width is a little over 1ms, but increasing the value provides shorter pulses (and a corresponding increase in precision). The pulse polarity can be reversed by placing the phototransistor's load in the emitter rather than the collector as shown. This is shown in the next drawing.
+ +Because of the high value of the input resistors and the presence of C2 (10µF), the circuit requires some time before it operates normally. It will be fully operational after about 200ms with 230V or 120V mains, but the LED current is reduced with lower mains voltages. For use at 120V, R1 and R2 can be reduced to 100k, which will bring the LED current up to a little over 4mA peak. All diodes are 1N4148 or similar. High voltage diodes are not necessary because the voltage across the diodes is limited by the input resistors, and will not exceed a maximum of perhaps 6-7 volts.
C1 is optional, and can be omitted. It provides a measure of HF noise reduction, but leaving it out is unlikely to cause any issues. Note that as shown, the detector outputs a negative-going pulse as the mains voltage crosses zero. As described above, this can be reversed by using the optocoupler's transistor as an emitter follower. The optocoupler shown in the original circuit is a 4N35, but there are many that can be used. I have a tube of EL817 (4-pin) devices that work well (the LTV817 is an equivalent), but there are countless readily available parts to choose from.
+ + + +It's worth pointing out that one of the ZCD circuits published on the EDN Network website (and referenced on the DEXTREL site) is wrong in several places, and will not work without corrections. There are also some significant changes that can be made to the EDN circuit, which both simplify the circuit and improve performance. A reader posted a comment to query one error, but no-one ever bothered to reply. I've now included it, and it's actually a good circuit with the changes. It does use more parts than the circuit shown above though. It operates with a significantly higher voltage (across C1) than any of the other circuits, and this is one reason it can produce a ZCD pulse only 150µs wide. Personally, I don't think it's worth any additional complexity, but it may be useful.
+ +The circuit is somewhat sensitive to component value changes. As shown, the voltage across C1 (and Q2) will reach about 45-50V, and this can be reduced by increasing the values of R1 and R2. With 230V mains, you may be able to use up to 330k, but that may not work depending on the transistors. Overall, while it can be made to work well, IMO it's a bit too component-sensitive to be viable. If R1 and R2 are reduced in value it's more predictable, but the voltage across C1 and Q2 can exceed 70V, so higher-voltage parts are necessary.
+ +If you think that you need exceptionally high-precision narrow (less than 100µs) zero-crossing pulses, the next two circuits will do just that. The first circuit uses a CMOS 4093 quad Schmitt NAND gate, with all sections wired in parallel. This can achieve a pulse-width of less than 100µs, which is far better than can be achieved with a transistor or two. The input impedance is very high, and using the gates in parallel makes sure they can deliver enough current. The LED in the optocoupler is only pulsed during the zero-crossing. The circuit is fairly immune to noise because of the Schmitt trigger within the IC. You can also use the 74HC132 (also a quad Schmitt NAND gate), but note that it has a different pinout! The 74HC series can provide more output current. The LED current will be about 3mA with the 1k series resistor for the optocoupler, and the value of R5 can be reduced if you need more current.
+ +The next circuit will almost certainly exceed anything that's needed for zero-crossing detection. It's based on an LM393 dual comparator, but only one section is used. It's capable of achieving a pulse width of around 70µs, faster than any other I've seen. It may need some adjustment to the value of R4 if you don't get any output. Reducing R4 increases the pulse width. Again, the LED is pulsed only during the zero-crossing period. Unlike the Fig. 10 circuit, there's no Schmitt trigger and very noisy mains may cause timing problems. I've tested many ZCDs over the years and noise is rarely a problem, contrary to what you might expect.
+ +Current drawn from the mains is minimal (about 2mA RMS). Ideally, you'll use two 27k resistors in series for R1 and R2 to ensure the voltage across each resistor is kept to the minimum. The voltage on U1.2 is about 650mV (set by D6). Because the LM393 comparator can have its inputs at (or even slightly below) ground, the low voltage isn't a problem. The output pulses low when Pin 3 falls below 650mV, and pulls current through the optocoupler's LED. The Fig. 11 circuit is the highest performance ZCD you're likely to find, consistent with physical size and parts count. It's possible to improve it, but there's little reason to do so.
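The quoted ~2mA RMS draw follows directly from Ohm's law, assuming (per the text) that R1 and R2 are each made up of two 27k resistors in series, for 108k total:

```python
def mains_current_ma(v_rms: float, r_total_ohms: float) -> float:
    """Approximate RMS current drawn through the series feed resistors."""
    return v_rms / r_total_ohms * 1000

# R1 and R2 each as two 27k in series: 4 x 27k = 108k total
i_rms = mains_current_ma(230, 4 * 27e3)   # ~2.1 mA RMS
```

This also shows why halving the total resistance is suggested for 120V operation - it restores roughly the same current through the circuit.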
+ +The 10µF filter cap for both circuits looks as if it's far too small, but it can maintain the voltage for the very brief current pulses. Be careful of stray capacitance around the comparator's input pin. If it's more than around 100pF the circuit may not work, and R4 will need to be reduced. Lower values give a wider pulse. The LED current is around 7mA with the 1k series resistor for the optocoupler.
+ +These circuits are included so you can experiment, and they have both been simulated, but the Fig 10 circuit was not tested on the workbench. If this changes this page will be updated with a scope capture. If used with 120V, 60Hz, reduce the value of R1 and R2 (2 x 27k), as only half the resistance is needed, and two resistors in series for each isn't a requirement. The following scope capture was done with an LM358 opamp rather than the LM393 comparator, as I don't have the latter in stock. There are a few circuit changes, but the comparator is better than the LM358. Even so, the very slow LM358 easily managed a pulse-width of only 250µs, perfectly centred on the zero-crossing point of the mains waveform.
+ +There may be no good reason to use these, as I can't think of an application that needs such a high-precision ZCD pulse. However, if a precision circuit is available, there's no reason to use a 'lesser' version. Both are low cost (although PCB real-estate is greater than the other examples described). Both circuits can be powered from the output of a step-down transformer if this suits your application. R1 and R2 need to be adjusted accordingly.
+ +Both of these circuits are ESP 'originals', and while they use more parts than any of the others, they also have higher performance. One thing that I've found puzzling is the fact that no IC manufacturer has seen fit to offer an integrated ZCD with good performance. The MOC306x and MOC316x zero-cross TRIAC optocouplers demonstrate that it can be done, but they aren't suitable for general-purpose zero-crossing detection.
+ +In May 2022 I became aware of a new IC from Texas Instruments. The AMC23C12 is described as a 'Fast Response, Reinforced Isolated Window Comparator With Adjustable Threshold and Latch Function'. While the datasheet concentrates on power applications (monitoring motor current in particular), it was immediately obvious (to me) that it would make a fine zero-crossing detector. The IC has many different possibilities depending on some specific resistor values (in particular the resistor from the 'Ref' input to 'high-side' ground). There's now another IC - the AMC23C10 (Fast Response, Reinforced Isolated Comparator With Dual Output) which comes with an application note (SBAA542, published March 2022) describing a ZCD.
+ +The datasheet claims that both DC inputs need bypass caps (100nF) as close to the IC pins as possible. C1 (10µF) is likely to suffice, but without a unit to test it's not known if this will be enough. At the time of writing, no-one seems to have the ICs available, only evaluation modules that are prohibitively expensive. I've shown the 'Latch' input grounded, but some versions of the IC have dual outputs. If grounded this will not affect operation as the outputs are open-drain (hence R5 pull-up resistor). Note that the ZCD pulse is from 'high' to 'low', the opposite of the other schemes shown.
+ +The IC has very high isolation (7kV DC, 5kV RMS, 1 minute) and is suitable for continuous operation with standard mains voltages. Propagation delay is well below 1µs with the arrangement shown above. There is one characteristic that's a little unfortunate, the 'high-side' current. It's rated for a 'typical' value of 2.9mA with a maximum of 3.6mA, so the input feed resistors have to be a lower value than with the other schemes shown. That means the resistance for 230V should not be less than 40k in total, or 20k for 120V. The resistors should be 1W types to ensure they run cool (the larger surface area results in better heat dissipation).
+ +This final circuit has been included because it's interesting, and shows that other methods of isolation between mains and 'safe' low-voltage circuitry exist. Optocouplers still remain the most common, and this isn't expected to change any time soon. Despite 'lumen depreciation' in LEDs, most optocouplers last for many years.
Elliott Sound Products - AN-006
There are numerous switchmode regulators available, most based on ICs. While convenient (if you can get the IC), they are not always readily available, and like all ICs, they largely exclude the user from understanding their operation. This appnote is based on AN-003 - a constant current version of an otherwise almost identical circuit.
+ +Instead of operating in constant current mode, the circuit shown uses a shunt zener diode to set the output voltage. This does have some limitations, the main one being that the output voltages available are limited to the zener voltages available, and the output voltage is about 0.65V above the zener voltage. As shown, the circuit will regulate at around 5.35 - 5.4V, but this is well within the recommended maximum rating for 5V logic circuits - typically 5.5V. In fact, it is quite common to set (adjustable) 5V regulators a little high to account for resistive losses in PCB tracks and other wiring.
+ +Although there are many ICs available, ranging from simple linear regulators such as the 7805, buck regulator chips and complete encapsulated regulator modules, most are not available in hobbyist quantities, and/or are relatively expensive. Not so the circuit shown here. All parts are cheap, nothing is especially critical, but (of course) efficiency is not as good as the dedicated circuits.
+ +Linear regulators are very inefficient. The full output current is drawn at all supply voltages, so with 12V input and (say) 500mA output, the total circuit dissipation is 3.5W. While this is not a great deal, where efficiency is paramount such as with battery operation or where limited space is available for heatsinking, this is not a good solution.
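The 3.5W figure is simply the voltage dropped across the regulator multiplied by the load current (a worked check of the arithmetic in the text, assuming a 5V output):

```python
def linear_dissipation_w(v_in: float, v_out: float, i_out_a: float) -> float:
    """Power burned in a linear regulator: the full load current times the voltage drop."""
    return (v_in - v_out) * i_out_a

p = linear_dissipation_w(12, 5, 0.5)   # 3.5 W wasted at 12 V in, 5 V / 500 mA out
```

All of that power appears as heat in the pass device, which is why heatsink space (or battery life) usually forces the move to a switcher.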
+ +
Figure 1 - Ultra-Simple Switchmode Regulator
Like the original in AN-003, it uses only three cheap transistors, and works remarkably well. It is less efficient than most of the dedicated ICs, but is still more efficient than a linear regulator. It has the great advantage that you can actually see what it does and how it does it. From the experimenters' perspective, this is probably one of its major benefits.
+ +Again, like the LED current regulator, it will change from switchmode to linear as the input voltage falls. It still remains a voltage limited supply, and the design voltage (set by D2) does not change appreciably as the operation changes from linear to switchmode or vice versa.
+ +Operation is quite simple - Q1 monitors the voltage across R1, and turns on as soon as it reaches about 0.7V. This turns off Q2, which then turns off Q3 by removing base current. If the voltage is low, a state of equilibrium is reached where the voltage across R1 remains constant, and so therefore does the current through it (and likewise through the zener).
+ +The value of D2 can be changed to provide the output voltage required. Although in theory the output voltage should be ...
    V = VD2 + 0.65V (approx.)
... in reality it will be less. A typical 4.7V zener will normally operate at around 150mA, but this circuit operates the zener at a lot less (around 15mA), and this will cause the output voltage to be lower than expected. R1 can be reduced to increase zener current, but at the expense of efficiency.
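The expected outputs follow from the formula above. This worked check shows both the nominal 4.7V-zener case and the 3.9V zener used for testing (where the measured output of about 3.9V shows how far a lightly-loaded zener can fall short of its rated voltage):

```python
def expected_vout(v_zener: float, v_be: float = 0.65) -> float:
    """Nominal regulator output: zener voltage plus one base-emitter drop."""
    return v_zener + v_be

v_5v = expected_vout(4.7)    # ~5.35 V, within the 5.5 V limit for 5 V logic
v_test = expected_vout(3.9)  # ~4.55 V nominal; the prototype measured ~3.9 V
```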
Operation is almost identical to that described in AN-003. At higher input voltages (typically about 1.5-2V above the output voltage), the circuit will over-react. Because of the delay caused by the inductor, the voltage across R1 will manage to get above the threshold voltage by a small amount. Q3 will get to turn on hard, and current flows through the inductor and into C1, the load and through D2. By this time, the transistors will have reacted to the high voltage across R1, so Q1 turns on, turning off Q2 and Q3. The magnetic field in L1 collapses, and the reverse voltage created causes current to flow through D1 and into C2. The cap now discharges through the load and R1, until the voltage across R1 is such that Q1 turns off again. Q2 and Q3 then turn back on.
+ +This cycle repeats for as long as power is applied at above the threshold needed for oscillation. The circuit changes its operating frequency as its method of changing the pulse width. This is not uncommon with self-oscillating switchmode supplies.
+ +For testing, I only had a 3.9V zener available, so I used that. In theory, the output voltage should have been around 4.6V, but the zener current was much too low to get good voltage stability. This will also apply for most other zener voltages in the range this supply will be used, so ...
+ +Vin | Iin | Vout | Efficiency |
5.0 | 390mA | 3.86 | 77% |
6.0 | 370mA | 3.87 | 67% |
7.0 | 315mA | 3.88 | 68% |
8.0 | 275mA | 3.89 | 68% |
10 | 219mA | 3.90 | 68% |
12 | 188mA | 3.92 | 67% |
14 | 167mA | 3.94 | 64% |
16 | 152mA | 3.97 | 62% |
The table above shows the operating characteristics of the prototype. The output voltage remained stable at 3.93V (±0.01V) with loads ranging from open circuit down to 10 ohms (393mA output). Although the switching waveform becomes chaotic with no load (the frequency and waveform are rather unpredictable), the voltage remains stable.
The efficiency is not as high as you would get from a dedicated IC, because the switching losses are higher due to relatively slow transitions. At best it manages 68%, which is not bad for such a simple circuit. Input voltage can range from just above the zener voltage up to about 16V or so. Maximum efficiency is provided with an input voltage of between 7 and 12V. It is still much better than a linear regulator though - at 7V input, a linear regulator will manage 55%, and this falls as input voltage is increased - about 32% at 12V and 24% at 16V. All wasted power is dissipated as heat.
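The efficiency figures in the table are easy to cross-check; a minimal sketch, assuming the fixed 10 ohm load used in the load-regulation test:

```python
# Cross-check of the efficiency column: with a fixed resistive load,
# Pout = Vout^2 / Rload and Pin = Vin * Iin.
def buck_efficiency(v_in, i_in, v_out, r_load=10.0):
    """Return efficiency (0..1) for a resistive load; 10 ohms is assumed."""
    p_out = v_out ** 2 / r_load
    p_in = v_in * i_in
    return p_out / p_in

# One row from the table: Vin = 7.0V, Iin = 315mA, Vout = 3.88V
print(f"{buck_efficiency(7.0, 0.315, 3.88):.0%}")  # prints "68%"
```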
Construction is not critical, but a compact layout is recommended. L1 needs to be rated for the full load continuous current, and Q3 may or may not need a heatsink, depending on the input voltage and output load. The ripple current rating for C2 needs to be at least equal to the load current, so a higher voltage cap than you think you need should be used. I recommend that a minimum voltage rating of 25V be used for both C1 and C2.

Q1 and Q2 can be any low power NPN transistor. BC549s are shown in the circuit, but most are quite fast enough in this application. Q3 needs to be a medium power device, and the BD140 as shown works well in practice. D1 should be a high speed diode rated at a minimum of 1A, and a Schottky device will improve efficiency over a standard high speed silicon diode. L1 is a 100uH choke, and will typically be either a small 'drum' core or a powdered iron toroid. The current rating of L1 must be at least the expected output load current to minimise losses (and heat). All resistors can be 0.25 or 0.5W.
Elliott Sound Products - AN-007
While high power zener diodes are made, they are usually not readily available. They also tend to be rather expensive, and are often stud-mounted types. These are not always easy to install on a heatsink, and the mounting hardware (insulating bush and washer) seems to be all but unobtainable.
Provided you (or your application) can tolerate a slightly higher voltage than may have been specified, a high power zener can be made using an additional transistor and a resistor. Note that this is a design guide - it is not a 'final' design, and has to be adapted for your needs. None of the parts shown (or the calculations) can possibly cover all possibilities, but they will help you to understand the requirements for this kind of circuit.

The method described is not new, and has been used in at least two of the projects described on the ESP website, as well as many commercial products. By using the zener to supply base current to a power transistor, the power rating is limited only by the transistor, with a likely additional limitation imposed by the device current gain at the design current. While zeners generally allow peak (momentary) currents that are much higher than their rated current, the transistor assisted version may not - again, this depends on the transistor.
Figure 1 - High Power Transistor Assisted Zener Diode
The transistor needs to be selected based on the maximum voltage and current expected. If the zener is used only for protection of more sensitive systems on the same power supply bus, the transistor may not even need a heatsink. This depends on the application, so you need to be careful before deciding not to use a heatsink, and/or in the selection of a suitable heatsink based on the power dissipated. R(limit) is the current limiting resistor that's always used with any zener diode. Selection of the value depends on your application and is not covered here.
The circuit shown is simply an example, and Q1 can be any transistor that is suitable for your needs. In most cases, a TIP41 or similar will be more than adequate unless very high voltage or power is needed. For lower powers a BD139 may be acceptable, and you need to select the transistor to suit the voltage and current required for your application. Make sure that you check the safe operating area of the intended transistor!

The maximum allowable current through a zener diode is determined by ...
I = P / V    where I = current, P = zener power rating, and V = zener voltage rating
For example, a 27V 1W zener can carry a maximum continuous current of ...
I = 1 / 27 = 0.037A = 37mA
For optimum zener operation, it is best to keep the current to a maximum of 0.5 of the rated limit, so the 27V zener should not be run at more than about 18mA. Using a lower current is preferable, but always ensure that the zener current is greater than 10% of the maximum, or regulation will suffer. I will generally aim for about 20-50% if practical. The zener current becomes the base current for the power transistor (less the current through R1), and assuming a current gain of 50 and a zener current of (say) 18mA, the maximum total 'composite zener' current is ...

18mA × 50 = 900mA (Note that R1 current has not been included)

When the current through R1 is considered this will increase the zener current. With 100 ohms for R1, the zener current is increased by about 6.5mA. A 1k resistor will reduce that to 0.65mA (650µA). The voltage is increased slightly (to about 27.7V), and the (maximum recommended) power rating is now ...

P = V × I = 27.7V × 900mA ≈ 25W
A Darlington transistor can also be used for higher current with low power zener diodes, but will add around 1.5V to the zener voltage. Whether this will cause a problem or not depends on the circuit itself, and is not something that can be predicted in advance. Selection of R1 is somewhat arbitrary, and it will generally be between 100 and 1k ohms. If the transistor has very high gain (or you use a Darlington), R1 needs to be sized so that it forces enough current through the zener diode to get past the 'knee' of its curve - around 10% of the maximum rated current. The zener's total current is the sum of the base current of Q1 and the current through R1. In most cases, the required current will remain fairly constant, but if it varies widely you need to be more diligent with your calculations to ensure performance is maintained over the full range.
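The arithmetic above can be collected into a short sketch. The values are the worked example's assumptions (27V/1W zener, current gain of 50, 100 ohm R1) plus an assumed Vbe of about 0.65V:

```python
# 'Composite zener' ratings, following the worked example above.
# Assumptions: Vbe ~ 0.65V, transistor gain known at the design current.
def composite_zener(v_z, p_z, hfe, r1, vbe=0.65, derate=0.5):
    i_z = derate * (p_z / v_z)     # recommended maximum zener (base) current
    i_total = i_z * hfe            # composite 'zener' current (R1 ignored)
    i_r1 = vbe / r1                # extra zener current drawn through R1
    v_total = v_z + vbe            # composite 'zener' voltage
    p_total = v_total * i_total    # recommended maximum dissipation
    return i_z, i_total, i_r1, v_total, p_total

i_z, i_total, i_r1, v_total, p_total = composite_zener(27, 1, 50, 100)
# i_z ~ 18.5mA, i_total ~ 926mA, i_r1 = 6.5mA, v_total = 27.65V, p_total ~ 25.6W
```

The exact figure (926mA, 25.6W) lands slightly above the rounded 900mA/25W used in the text, because the recommended zener current works out to 18.5mA rather than 18mA.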
The calculations shown here are intended as an example only. This is not a complete design, and you need to determine the requirements for the zener and power transistor to suit your application. The general principles have been covered, but the final circuit has to be designed to ensure that power dissipation of all parts is within their ratings, the zener current is between 10% and 50% of its ratings (considering the operating temperature) and the transistor can dissipate the power needed. R1 is selected to ensure that the zener current is at least 10% of the rated current if the total current is comparatively low. As noted earlier, any resistance between 100 ohms and 1k will generally work, but it's preferable that you either calculate it or make an educated guess. It only becomes (slightly) critical at very low currents, where Q1 passes a small fraction of the total current.
Figure 2 - 'Normal' Vs. Assisted Zener Performance
The above shows the difference between a normal (2.5W) zener and a transistor assisted version. The standard zener shows a steady rise of voltage as current increases, but the transistor assisted version maintains a very steady voltage. The voltage varies by only 150mV for a current change from 8mA to 180mA. The maximum current for both is about 180mA, with 15V zener diodes fed via 470 ohm current limiting resistors. In contrast, the voltage across a single zener will change by somewhere between 1V and more than 2V for the same current range (this depends on the zener specification).
Construction is not critical, but a heatsink will almost certainly be needed for Q1. Using a clip to attach D1 to the heatsink allows higher dissipation, and lets you operate the zener at its maximum operating current. Select Q1 to suit the application - in many cases, a raid on the junk box will almost certainly provide something usable. R1 can be 0.25 or 0.5W. Avoid carbon composition resistors, which have much higher noise levels than carbon film or metal film types.

Note that the 'composite' or 'assisted' zener has a much lower impedance than a zener by itself, and adding a capacitor in parallel will have very little effect in reducing hum and noise. For example, over a range of 100mA, the voltage may only change by around 100mV, meaning that the 'dynamic impedance' is only 1 ohm. Compare that to a zener by itself, which will have a dynamic impedance many times that value - around 35 ohms for a 1N4750 27V zener. A capacitor can only suppress noise when its impedance is much lower (by a factor of at least 10) than that of the source - at all frequencies of interest. Even a 10,000µF capacitor in parallel is marginal at 100Hz if the composite zener impedance is only 1 ohm - its reactance is 0.16 ohms at 100Hz. If the supply has to be very low noise, an assisted zener is not appropriate and more complex circuitry is necessary.
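The reactance comparison is a one-line calculation; a small sketch, using the 10:1 impedance ratio guideline stated above:

```python
import math

# Capacitive reactance: Xc = 1 / (2 * pi * f * C). For a bypass cap to be
# effective, Xc should be at least 10x lower than the source impedance.
def cap_reactance(c_farads, freq_hz):
    return 1.0 / (2 * math.pi * freq_hz * c_farads)

xc = cap_reactance(10_000e-6, 100)  # 10,000uF at 100Hz ripple
print(f"Xc = {xc:.2f} ohms")        # about 0.16 ohms - marginal against 1 ohm
```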
Elliott Sound Products - AN-008
Zener diodes are very common for basic voltage regulation tasks. They are used as discrete components, and also within ICs that require a reference voltage. Zener diodes (also sometimes called voltage reference diodes) act like a normal silicon diode in the forward direction, but are designed to break down at a specific voltage when subjected to a reverse voltage.
All diodes will do this, but usually at voltages that are unpredictable and much too high for normal voltage regulation tasks. There are two different effects that are used in zener diodes: the zener effect and avalanche breakdown.
+ +Below around 5.5 Volts, the zener effect is predominant, with avalanche breakdown the primary effect at voltages of 8V or more. While I have no intention to go into specific details, there is a great deal of information on the Net (See References) for those who want to know more. Because the two effects have opposite thermal characteristics, zener diodes at close to 6V usually have very stable performance with respect to temperature because the positive and negative temperature coefficients cancel.
Very high thermal stability can be obtained by using a zener in series with a normal diode. There are no hard and fast rules here, and it normally requires device selection to get the combination to be as stable as possible. A zener of around 7-8V can be selected to work with a diode to cancel the temperature drift. Needless to say, the diode and zener junctions need to be in intimate thermal contact, or temperature cancellation will not be a success.

The zener diode is a unique semiconductor device, and it fulfils many different needs unlike any other component. A similar device (which is in fact a specialised zener itself) is the TVS (transient voltage suppressor) diode. There are several alternatives to TVS diodes though, unlike zeners. Precision voltage reference ICs may be thought of as being similar to zeners, but they aren't - they are ICs that use a bandgap reference (typically around 1.25V), containing many internal parts. A zener diode is a single part, with a single P-N junction.

For reasons that I don't understand, there is almost no information on the Net on exactly how to use a zener diode. Contrary to what one might expect, there are limitations on the correct usage, and if these are not observed, the performance will be much worse than expected. Figure 1 shows the standard characteristics of a zener, but as with almost all such diagrams, it omits important information.
Figure 1 - Zener Diode Conduction
So, what's missing? The important part that is easily missed is that the slope of the breakdown section is not a straight line. Zeners have what is called 'dynamic resistance' (or impedance), and this is something that should be considered when designing a circuit using a zener diode. Standard (rectifier) diodes are no different, except that their dynamic resistance is important when they are forward biased.
The actual voltage where breakdown starts is called the knee of the curve, and in this region the voltage is quite unstable. It varies quite dramatically with small current changes, so it is important that the zener is operated above the knee, where the slope is most linear.

Some data sheets will give the figure for dynamic resistance, and this is usually specified at around 0.25 of the maximum rated current. Dynamic resistance can be as low as a couple of ohms at that current, with zener voltages around 5-6V giving the best result. Note that this coincides with the best thermal performance as well.

This is all well and good, but what is dynamic resistance? It is simply the 'apparent' resistance that can be measured by changing the current. This is best explained with an example. Let's assume that the dynamic resistance is quoted as 10 ohms for a particular zener diode. If we vary the current by 10mA, the voltage across the zener will change by ...
V = R × I = 10Ω × 10mA = 0.1V (or 100mV)
So the voltage across the zener will change by 100mV for a 10mA change in current. While that may not seem like much with a 15V zener for example, it still represents a significant error. For this reason, it is common to feed zeners in regulator circuits from a constant current source, or via a resistor from the regulated output. This minimises the current variation and improves regulation.
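The same calculation in code form; a trivial sketch of ΔV = Rz × ΔI:

```python
# Voltage change across a zener for a given current change,
# using the quoted dynamic resistance.
def zener_delta_v(r_dynamic_ohms, delta_i_amps):
    return r_dynamic_ohms * delta_i_amps

dv = zener_delta_v(10, 0.010)      # 10 ohms, 10mA current change
print(f"{dv * 1000:.0f} mV")       # prints "100 mV"
```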
Manufacturers' data sheets will often specify the dynamic resistance at both the knee and at a specified current. It is worth noting that while the dynamic resistance of a zener may be as low as 2-15 ohms at 25% of maximum current (depending on voltage and power ratings), it can be well over 500 ohms at the knee, just as the zener starts to break down. The actual figures vary with breakdown voltage, with high voltage zeners having very much higher dynamic resistance (at all parts of the breakdown curve) than low voltage units. Likewise, higher power parts will have a lower dynamic resistance than low power versions (but require more current to reach a stable operating point).

Finally, it is useful to look at how to determine the maximum current for a zener, and establish a rule of thumb for optimising the current for best performance. Zener data sheets usually give the maximum current for various voltages, but it can be worked out very easily if you don't have the datasheet to hand ...
I = P / V    where I = current, P = zener power rating, and V = zener voltage rating
For example, a 27V 1W zener can carry a maximum continuous current of ...
I = 1 / 27 = 0.037A = 37mA (at 25°C)
As noted in the 'transistor assisted zener' app note (AN-007), for optimum zener operation it is best to keep the current to a maximum of 0.5 of the rated current, so a 27V/1W zener should not be run at more than about 18mA. The ideal is between 20-30% of the maximum, as this minimises wasted energy, keeps the zener at a reasonable temperature, and ensures that the zener is operating within the most linear part of the curve. If you look at the zener data table below, you will see that the test current is typically between 25% and 36% of the maximum continuous current. The wise reader will deduce that this range has been chosen to show the diode in the best possible light, and it is therefore the recommended operating current.
While none of this is complex, it does show that there is a bit more to the (not so) humble zener diode than beginners (and many professionals as well) tend to realise. Only by understanding the component you are using can you get the best performance from it. This does not only apply to zeners of course - most (so called) simple components have characteristics of which many people are unaware.
Remember that a zener is much the same as a normal diode, except that it has a defined reverse breakdown voltage that is far lower than any standard rectifier diode. Zeners are always connected with reverse polarity compared to a rectifier diode, so the cathode (the terminal with the band on the case) connects to the most positive point in the circuit.

Often, it is necessary to apply a clamp to prevent an AC voltage from exceeding a specified value. Figure 2 shows the two ways you may attempt this. The first is obviously wrong - while it will work as a clamp, the peak output voltage (across the zeners) will only be 0.65V. Zeners act like normal diodes with reverse polarity applied, so the first figure is identical to a pair of conventional diodes.
Figure 2 - Zener Diode AC Clamp
In the first case, both zener diodes will conduct as conventional diodes, because the zener voltage can never be reached. In the second case, the actual clamped voltage will be 0.65V higher than the zener voltage because of the series diode. 12V zeners will therefore clamp at around 12.65V - R1 is designed to limit the current to a safe value for the zeners, as described above.
The important thing to remember is that zener diodes are identical to standard diodes below their zener voltage - in fact, conventional diodes can be used as zeners. However, the actual breakdown voltage is usually much higher than is normally useful, and each diode (even from the same manufacturing run) will have a different breakdown voltage.

The data below is fairly typical of 1W zeners in general, and shows the zener voltage and one of the most important values of all - the dynamic resistance. This is useful because it tells you how well the zener will regulate, and (with a bit of calculation) how much ripple you'll get when the zener is supplied from a typical power supply. An example of the calculation is shown further below.

If you wanted to measure the dynamic resistance for yourself, it's quite easy to do. First, use a current of about 20% of the rated maximum from a regulated power supply via a suitable resistor. Measure and note down the voltage across the zener diode. Now, increase the current by (say) 10mA for zeners less than 33V. You'll need to use a smaller current increase for higher voltage types. Measure the zener voltage again, and note the exact current increase.

For example, you might measure the following ...
Zener voltage = 11.97 V at 20 mA
Zener voltage = 12.06 V at 30 mA
ΔV = 90 mV, ΔI = 10 mA
R = ΔV / ΔI = 0.09 / 0.01 = 9 ohms
This process can be used with any zener. You just need to adjust the current to suit, ensuring that the initial and final test currents are within the linear section of the zener's characteristics. The accuracy depends on the accuracy of your test equipment, and it's important to ensure the zener temperature remains stable during the test or you'll get the wrong answer due to the zener's thermal coefficient. If at all possible, the tests should be of very short duration using pulses, but this is very difficult without specialised equipment.
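The two-point measurement just described reduces to a simple slope calculation; a minimal sketch using the example figures:

```python
# Dynamic resistance from two (current, voltage) measurements taken
# within the linear part of the zener's breakdown curve.
def dynamic_resistance(i1_amps, v1, i2_amps, v2):
    return (v2 - v1) / (i2_amps - i1_amps)

# Example measurements from the text: 11.97V at 20mA, 12.06V at 30mA
rz = dynamic_resistance(0.020, 11.97, 0.030, 12.06)
print(f"Rz = {rz:.1f} ohms")   # prints "Rz = 9.0 ohms"
```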
The following data is a useful quick reference for standard 1W zeners. The basic information is from the Semtech Electronics data sheet for the 1N47xx series of zeners. Note that an 'A' suffix (e.g. 1N4747A) means the tolerance is 5%; standard tolerance is usually 10%. Zener voltage is measured under thermal equilibrium and DC test conditions, at the test current shown (Izt).

Notice that the 6.2V zener (1N4735) has the lowest dynamic resistance of all those shown, and will generally also show close to zero temperature coefficient. This means that it is one of the best values to use where a fairly stable voltage reference is needed. Because it's such a useful value, it has been highlighted in the table. If you need a really stable voltage reference then don't use a zener diode - use a dedicated precision voltage reference IC instead. Or you can use one of the circuits shown further below - you can get surprisingly high stability with the right techniques.
Type | VZ (Nom, V) | IZt (mA) | RZt (Ω) at Test Current | RZ (Ω) at Knee Current | Knee Current (mA) | Leakage (µA) | Leakage Voltage (V) | Peak Current (mA) | Cont. Current (mA) |
---|---|---|---|---|---|---|---|---|---|
1N4728 | 3.3 | 76 | 10 | 400 | 1 | 150 | 1 | 1375 | 275 |
1N4729 | 3.6 | 69 | 10 | 400 | 1 | 100 | 1 | 1260 | 252 |
1N4730 | 3.9 | 64 | 9.0 | 400 | 1 | 100 | 1 | 1190 | 234 |
1N4731 | 4.3 | 58 | 9.0 | 400 | 1 | 50 | 1 | 1070 | 217 |
1N4732 | 4.7 | 53 | 8.0 | 500 | 1 | 10 | 1 | 970 | 193 |
1N4733 | 5.1 | 49 | 7.0 | 550 | 1 | 10 | 1 | 890 | 178 |
1N4734 | 5.6 | 45 | 5.0 | 600 | 1 | 10 | 2 | 810 | 162 |
1N4735 | 6.2 | 41 | 2.0 | 700 | 1 | 10 | 3 | 730 | 146 |
1N4736 | 6.8 | 37 | 3.5 | 700 | 1 | 10 | 4 | 660 | 133 |
1N4737 | 7.5 | 34 | 4.0 | 700 | 0.5 | 10 | 5 | 605 | 121 |
1N4738 | 8.2 | 31 | 4.5 | 700 | 0.5 | 10 | 6 | 550 | 110 |
1N4739 | 9.1 | 28 | 5.0 | 700 | 0.5 | 10 | 7 | 500 | 100 |
1N4740 | 10 | 25 | 7.0 | 700 | 0.25 | 10 | 7.6 | 454 | 91 |
1N4741 | 11 | 23 | 8.0 | 700 | 0.25 | 5 | 8.4 | 414 | 83 |
1N4742 | 12 | 21 | 9.0 | 700 | 0.25 | 5 | 9.1 | 380 | 76 |
1N4743 | 13 | 19 | 10 | 700 | 0.25 | 5 | 9.9 | 344 | 69 |
1N4744 | 15 | 17 | 14 | 700 | 0.25 | 5 | 11.4 | 304 | 61 |
1N4745 | 16 | 15.5 | 16 | 700 | 0.25 | 5 | 12.2 | 285 | 57 |
1N4746 | 18 | 14 | 20 | 750 | 0.25 | 5 | 13.7 | 250 | 50 |
1N4747 | 20 | 12.5 | 22 | 750 | 0.25 | 5 | 15.2 | 225 | 45 |
1N4748 | 22 | 11.5 | 23 | 750 | 0.25 | 5 | 16.7 | 205 | 41 |
1N4749 | 24 | 10.5 | 25 | 750 | 0.25 | 5 | 18.2 | 190 | 38 |
1N4750 | 27 | 9.5 | 35 | 750 | 0.25 | 5 | 20.6 | 170 | 34 |
1N4751 | 30 | 8.5 | 40 | 1000 | 0.25 | 5 | 22.8 | 150 | 30 |
1N4752 | 33 | 7.5 | 45 | 1000 | 0.25 | 5 | 25.1 | 135 | 27 |
1N4753 | 36 | 7.0 | 50 | 1000 | 0.25 | 5 | 27.4 | 125 | 25 |
1N4754 | 39 | 6.5 | 60 | 1000 | 0.25 | 5 | 29.7 | 115 | 23 |
1N4755 | 43 | 6.0 | 70 | 1500 | 0.25 | 5 | 32.7 | 110 | 22 |
1N4756 | 47 | 5.5 | 80 | 1500 | 0.25 | 5 | 35.8 | 95 | 19 |
1N4757 | 51 | 5.0 | 95 | 1500 | 0.25 | 5 | 38.8 | 90 | 18 |
1N4758 | 56 | 4.5 | 110 | 2000 | 0.25 | 5 | 42.6 | 80 | 16 |
1N4759 | 62 | 4.0 | 125 | 2000 | 0.25 | 5 | 47.1 | 70 | 14 |
1N4760 | 68 | 3.7 | 150 | 2000 | 0.25 | 5 | 51.7 | 65 | 13 |
1N4761 | 75 | 3.3 | 175 | 2000 | 0.25 | 5 | 56.0 | 60 | 12 |
1N4762 | 82 | 3.0 | 200 | 3000 | 0.25 | 5 | 62.2 | 55 | 11 |
1N4763 | 91 | 2.8 | 250 | 3000 | 0.25 | 5 | 69.2 | 50 | 10 |
1N4764 | 100 | 2.5 | 350 | 3000 | 0.25 | 5 | 76.0 | 45 | 9 |
Figure 3 - Zener Diode Temperature Derating
Like all semiconductors, zeners must be derated if their temperature is above 25°C. This is always the case in normal use, and if the guidelines above are used then you usually won't need to be concerned. The above graph shows the typical derating curve for zener diodes, and this must be observed for reliability. Like any other semiconductor, if a zener is too hot to touch, it's hotter than it should be. Reduce the current, or use the boosted zener arrangement described in AN-007.
Zener diodes can be used in series, either to increase power handling or to obtain a voltage not otherwise available. Do not use zeners in parallel, as they will not share the current equally (remember that most are 10% tolerance). The zener with the lower voltage will 'hog' the current, overheat and fail. When used in series, try to keep the individual zener voltages close to the same, as this ensures that the optimum current through each is within the optimum range. For example, using a 27V zener in series with a 5.1V zener would be a bad idea, because the optimum current through both cannot be achieved easily.

Using zener diodes as regulators is easy enough, but there are some things that you need to know before you wire everything up. A typical circuit is shown below for reference, and is not intended to be anything in particular - it's simply an example. Note that if you need a dual supply (e.g. ±15V), the circuit is simply duplicated for the negative supply, reversing the polarity of the zener and C1 as required. We will use a 1W zener, in this case a 1N4744, a 15V diode. The maximum current we'd want to use is about half the calculated maximum (no more than 33mA). The minimum acceptable current is around 10% (close enough to 7mA).
Figure 4 - Typical Zener Regulator Circuit
Firstly, you need to know the following details about your intended circuit ...

- The minimum and maximum source (input) voltage
- The current drawn by the load
- The required output voltage
When you have this information, you can determine the series resistance needed for the zener and load. The resistor needs to pass enough current to ensure that the zener is within its linear region, but well below the maximum to reduce power dissipation. If the source voltage varies over a wide range, it may not be possible to use a simple zener regulator successfully.
Let's assume that the source voltage comes from a 35V power supply used for a power amplifier. The maximum voltage might be as high as 38V, falling to 30V when the power amp is driven to full power at low mains voltage. Meanwhile, the preamp that needs a regulated supply uses a pair of opamps, and draws 10mA. You want to use a 15V supply for the opamps. This is all the required info, so we can do the calculations. Vs is source voltage, Is is source current, Iz is zener current, IL is load current and Rs is source resistance.
Iz (max) = 30mA (worst case, no load on main supply and maximum mains voltage)
IL = 10mA (current drawn by opamps)
Is (max) = 40mA (again, worst case total current from source)
From this we can work out the resistance Rs. The voltage across Rs is 23V when the source voltage is at its maximum, so Rs needs to be ...
Rs = V / I = 23 / 40mA = 575 ohms
When the source voltage is at its minimum, there will be only 15V across Rs, so we need to check that there will still be enough zener current ...
Is = V / R = 15 / 575 = 26mA
Iz = Is - IL = 26mA - 10mA = 16mA
When we take away the load current (10mA for the opamps), we still have a zener current of 16mA available, so the regulation will be quite acceptable, and the zener diode won't be stressed. 575 ohms is not a standard value, so we'd use a 560 ohm resistor instead. There's no need to re-calculate everything because the change is small and we were careful to ensure that the design was conservative to start with. The next step is to work out the worst case power dissipated in the source resistor Rs ...
Is = 23V / 560 ohms = 41mA
P = Is² × R = (41mA)² × 560 ohms = 941mW
In this case, it would be unwise to use less than a 2W resistor, but a 5W wirewound type would be better. In the same way as the resistor power was calculated, it's also a good idea to double check the zener's worst case dissipation. It may be possible to disconnect the opamps, in which case the zener will have to absorb the full 41mA, so dissipation will be 615mW. That's higher than the target set at the beginning of this exercise, but it's within the zener's 1W rating and will never be a long-term issue. Normal worst case dissipation is only 465mW when the opamps are connected, and that's quite acceptable.
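The design procedure above can be condensed into a short helper; a sketch using the worked example's figures (38V max / 30V min input, 15V zener, 10mA load, 30mA maximum zener current). The function returns the exact series resistance; the nearest standard value (560 ohms here) is then chosen by the designer:

```python
# Zener regulator design per the procedure above: size Rs for maximum input,
# then check zener current at minimum input and worst-case dissipations.
def design_zener_regulator(v_max, v_min, v_zener, i_load, i_zener_max):
    rs = (v_max - v_zener) / (i_zener_max + i_load)  # series resistance
    i_total_min = (v_min - v_zener) / rs             # total current at Vmin
    i_zener_min = i_total_min - i_load               # zener current at Vmin
    p_rs = (v_max - v_zener) ** 2 / rs               # worst-case Rs dissipation
    p_zener = v_zener * (i_zener_max + i_load)       # zener dissipation, load removed
    return rs, i_zener_min, p_rs, p_zener

rs, iz_min, p_rs, p_z = design_zener_regulator(38, 30, 15, 0.010, 0.030)
# rs = 575 ohms, iz_min ~ 16mA, p_rs ~ 0.92W, p_z = 600mW
```

The dissipation figures differ slightly from the text (941mW, 615mW) because the text re-checks them with the standard 560 ohm resistor rather than the exact 575 ohms.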
Figure 4 shows a 220µF capacitor in parallel with the zener. This does not make any appreciable difference to the output noise, because the impedance (aka dynamic resistance) of the zener is so low. We used an example of a 15V zener, so we expect its impedance to be about 14 ohms (from the table). To be useful at reducing noise, C1 would need to be at least 1,000µF, but in most cases much lower values are used (typically 100-220µF). The purpose is to supply instantaneous (pulse) current that may be demanded by the circuit, or in the case of opamps, to ensure that the supply impedance remains low up to at least 2MHz or so.

Because zener diodes have a dynamic resistance, there will be some ripple at the output. It's possible to calculate it based on the input ripple, the change of source current and the zener's dynamic resistance. Let's assume that there is 2V P-P ripple on the source voltage. That means the current through Rs will vary by 3.57mA ( I = V / R ). The zener has a dynamic resistance of 14 ohms, so the voltage change across the zener must be ...
V = R × I = 14 × 3.57mA = 50mV peak-to-peak (less than 20mV RMS)
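The ripple estimate is the same ΔV = Rz × ΔI calculation again; a small sketch with the example values (2V p-p input ripple, 560 ohm Rs, 14 ohm dynamic resistance):

```python
# Output ripple of a zener regulator: the input ripple sets a ripple
# current through Rs, and the zener's dynamic resistance converts that
# back into a (much smaller) output ripple.
def zener_output_ripple(v_ripple_pp, r_series, r_dynamic):
    i_ripple = v_ripple_pp / r_series   # ripple current through Rs
    return r_dynamic * i_ripple         # resulting ripple across the zener

vr = zener_output_ripple(2.0, 560, 14)
print(f"{vr * 1000:.0f} mV p-p")   # prints "50 mV p-p"
```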
Provided the active circuitry has a good power supply rejection ratio (PSRR), 20mV of ripple at 100Hz (or 120Hz) will not be a problem. If that can't be tolerated for some reason, then it's cheaper to use a 3-terminal regulator or a capacitance multiplier than to use any of the established methods for ripple reduction. The most common of these is to use two resistors in place of Rs, and place a high value cap (not less than 470µF) from the junction of the resistors to ground. Doing this will reduce the ripple to well below 1mV, depending on the size of the extra capacitor.
The standard resistor zener feed is subject to wide variations of current and power dissipation as the input voltage varies. A simple feedback circuit can help to maintain a very stable current through the zener, and therefore provide a more stable reference voltage. As discussed earlier, a 6.2V zener diode has a very low thermal coefficient of voltage, and if we can ensure it gets a stable current, this further improves the voltage regulation. Feeding a zener with a current source is standard practice in IC fabrication, and it's easy enough to do in discrete designs as well.

Note that all of the circuits shown (with the exception of Figure 7a) are intended to provide a reference voltage into a high impedance load. If significant output current is needed, the outputs should be buffered with a low-offset opamp. This isn't needed provided the load current is at most 1/10th of the zener current (around 250µA for all except Figure 5a).

The circuits shown below are not power supplies, but they provide a fixed reference voltage for a power supply or other circuitry that may require a stable voltage (e.g. a comparator). The circuits compete well with dedicated precision voltage references, and they are surprisingly good for many applications (other than Figure 5a, which has the worst performance of them all). In each case, the voltage variation is shown as Δ, which indicates the change over the full input voltage range (10V to 30V).
Figure 5 - 'Conventional' Vs. JFET CCS Zener Regulator Circuit
The standard zener regulator (a) will show a typical voltage change of around 85mV from an input voltage of 10-30V, with zener current changing from 1.7mA to over 15mA. This is significantly worse than the stabilised versions (including the JFET), but it may not represent a problem at all if the input voltage is already fairly stable. The JFET current source (b) is a significant improvement. It would be better with a JFET optimised for linear operation, but they are becoming very hard to obtain. I selected the J112 as they are still readily available, but the value of R1b may need to be altered to get a usable zener current (around 2.5mA).
Compare (a) and (b) in the Figure 5 circuits, and it's immediately apparent that the voltage from the JFET stabilised version (b) should be more stable, even with a wide variation of input voltage. Simulated over an input range of 10V to 30V, the voltage across the zener in (b) changes by only 1.9mV, so the zener current and zener power dissipation barely change over the entire voltage range. This also means that ripple rejection is extremely high, so with the addition of a cheap JFET, we can get close to a real precision voltage reference circuit.

In Figure 6, the current mirror (Q2b and Q3b) is fed from a current source (Q1b) which takes its reference from the zener, so there is a closed loop and the current variation through the zener itself can be very small. The circuits shown cannot 'self-start' without R4, because there's no base current available for Q1 until the circuit is operating. R4 provides enough current to start conduction, after which the operation is self-sustaining.
Figure 6 - Precision CCS Zener Regulator Circuits
Using a precision constant current source (CCS) to provide zener current improves performance over a JFET. My original circuit is shown in (a), and a very small change shown in (b) improves matters even more ¹, with the zener variation reduced to 455µV over the 10-30V input range. Note that these were analysed by simulation, but I also built the circuit (results are shown below).
With the values shown, the zener current is only 2.5mA, which seems to defy the guidelines given earlier. However, increasing the zener current doesn't help a great deal, and it increases dissipation in the transistors. For example, if R1 is reduced to 1k, the zener current is increased to 5.4mA, dissipation in Q1 and Q3 is doubled, but the regulation is only improved marginally.
R4 is needed so the circuit can start when voltage is applied, but unfortunately it does adversely affect the performance. A higher resistance reduces the effects, but may cause unreliable start-up. The modification shown in (b) above has better performance than my original and is the recommended connection for optimum performance.
¹ The idea to change the connection of R4 was supplied by a reader who calls himself 'Volt Second'. I extend my thanks, as this improves performance markedly.
Figure 7 - Opamp Zener Regulator Circuit
The opamp version (a) is a bit of an oddity. The opamp itself has both negative and positive feedback, with the zener diode providing negative feedback once it conducts. The circuit relies on the PSRR of the opamp to minimise voltage variations, and the zener current is a fixed value, based on the zener diode voltage and the resistor to ground (R3). The way the circuit is configured means that the output (reference) voltage is double that of the zener diode, but this can be altered over a small range by varying one or both feedback resistors (R1, R2). Although shown with a TL071, an opamp with better PSRR will improve accuracy. One thing that is not ideal is the zener voltage. Having established that a 6.2V zener is best for thermal stability, that would provide an output voltage of 12.4V. R1a and R2a can only be different by a small amount before the circuit misbehaves. This circuit has one major advantage over the others, in that the opamp can supply output current without it affecting the zener current.
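The voltage-doubling action of (a) is easy to verify numerically. The sketch below assumes the stage behaves as a non-inverting amplifier around the zener with a gain of 1 + R1/R2, and that the zener current is simply Vz/R3; the resistor values used here are illustrative, not taken from the schematic:

```python
def opamp_ref(vz, r1, r2, r3):
    # Output voltage with the zener inside the feedback loop
    vout = vz * (1 + r1 / r2)
    # Zener current is fixed by the zener voltage and R3 alone
    iz = vz / r3
    return vout, iz

# 6.2V zener, equal feedback resistors, assumed 2.7k for R3
vout, iz = opamp_ref(6.2, 10e3, 10e3, 2.7e3)
print(vout)                 # 12.4 - double the zener voltage when R1 = R2
print(round(iz * 1e3, 2))   # ≈ 2.3 mA
```

The second line also shows the circuit's main selling point: the zener current depends only on Vz and R3, so loading the opamp output doesn't disturb it.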
The TL431 programmable voltage reference/ shunt regulator is as good as you might expect. The IC will operate with as little as 1mA cathode current, with a maximum of 100mA, provided the power dissipation limit isn't exceeded. As simulated it's very good, but 'real life' may be different. The temperature variation also has to be considered (typically 4.5mV/°C).
This time around I ran a workbench test on the Figure 6 circuits, and the output voltage changed by only 1.7mV when the input was varied from 10V to 25V (113µV/V). That works out to an attenuation of input voltage variations of close to 79dB. I didn't match transistors, and used 5% tolerance resistors in order to get a 'worst-case' result. Considering that a zener diode by itself will vary by at least 86mV over the same voltage range, this is a pretty good result. The supply current changed by only 35µA/V. The measured performance is not as good as the simulation, largely because the simulator has perfectly matched transistors and 0% tolerance resistors. I didn't bother to fiddle with transistors or resistors in the simulations.
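The measured figures are self-consistent, and worth checking; 1.7mV over a 15V input change works out as follows (this is pure arithmetic on the numbers quoted above):

```python
import math

delta_vout = 1.7e-3      # measured reference change
delta_vin = 25.0 - 10.0  # input swing used in the bench test

per_volt = delta_vout / delta_vin
atten_db = 20 * math.log10(delta_vin / delta_vout)

print(round(per_volt * 1e6, 1))   # ≈ 113.3 µV/V
print(round(atten_db, 1))         # ≈ 78.9 dB - 'close to 79dB' as stated
```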
In reality, it's unlikely that you will ever need to use any of the more complex stabilised zeners, and they are included here purely in the interests of completeness. Most people will use a TL431 or other adjustable voltage reference (e.g. LM4040, LM329, LM113, etc.) if high performance is needed, but you need to experiment to find the optimum solution for your application.
Elliott Sound Products - AN-009-2
As mentioned in part 1, it is possible to rearrange the circuit to provide a constant frequency as the pulse width is varied. The additional effort involves having to run three wires from the pot rather than just two, so it's not really arduous. As shown below, R1 and R2 can be used to set the maximum and minimum speeds, and can be made variable if specific max/min speeds are needed.
Note: There is a project (and PCB) for a motor speed controller/ LED dimmer - see Project 126 for details.
The original circuit can also be easily rearranged to only use 3 of the 6 Schmitt triggers, and this connection method is shown below. Feel free to mix and match between the two versions - for example the oscillator from Figure 1 can be used in the Figure 2 version or vice versa.
Figure 2 - Alternative Motor Speed Controller
To set a maximum speed, vary R1, and vary R2 to set the minimum. Trimpots can be used if your requirement is critical, but with no feedback, there will always be some variation anyway.
With the values shown, the on and off times will change, but the period (for a complete on/off cycle) remains fairly constant regardless of pot setting. There will be some variation if R1 and R2 are not equal, but the frequency change will not have any effect on operation. The oscillator frequency is again approximately 560Hz, and this can be changed by making C1 larger (lower frequency) or smaller (higher frequency).
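The constant-period behaviour is easy to see numerically. In this sketch the pot wiper simply splits a fixed total timing resistance between the charge (on) and discharge (off) paths, so their sum never changes. The capacitor value, total resistance and the Schmitt threshold factor 'k' are all illustrative assumptions, not values taken from the schematic:

```python
C1 = 10e-9       # assumed timing capacitor
R_total = 250e3  # assumed total of R1 + VR1 + R2
k = 0.72         # assumed log-threshold factor for a CMOS Schmitt oscillator

for wiper in (0.1, 0.5, 0.9):   # pot position as a fraction of travel
    t_on = k * (wiper * R_total) * C1
    t_off = k * ((1 - wiper) * R_total) * C1
    period = t_on + t_off       # constant, whatever the wiper position
    print(f"duty {t_on / period:4.0%}  frequency {1 / period:5.0f} Hz")
```

With these assumed values the frequency sits near 556Hz at every pot setting, consistent with the 'approximately 560Hz' quoted above.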
MOSFET and diode requirements are unchanged from the Figure 1 version, and can be selected according to your requirements or what you have available - provided that the devices are rated sufficiently for the load.
Like the previous version, this controller can also be used as a DC lamp dimmer, heater controller, or any other application that lends itself to PWM operation.
Elliott Sound Products - AN-009
At one stage (a while ago, admittedly) DC motors had fallen from favour, with most applications using AC motors. In recent years though, this has changed dramatically. Most electronics suppliers have geared DC motors intended for robotics and the like, but there is another source of powerful and cheap motors that is worth looking into. Many hardware suppliers now have battery drills for insanely low prices - so low that you can't even buy a set of Ni-Cd batteries for the same money.
While the extremely cheap ones (less than AU$20.00 at one major hardware chain in Australia) may have a pretty marginal battery pack, they do have an excellent motor with a planetary gearbox, torque limiter and keyless chuck. You can't buy a motor of the same power for anything like the money. Even if you have to pay a little more (typically around AU$30.00), if you get one that is the same as one you already own, you get a set of Ni-Cd batteries (and the charger) free, and the motor/ gearbox assembly can then be used for whatever you need to do. As an example, I fitted one of these motors to motorise the major axis of my milling machine, and will shortly be forced to build a coil winder using another.
These cordless drills do have a speed controller built in, but it is not readily adaptable to fixed use, with a speed knob rather than the trigger. Alternatively, you may have some other motor that you need to control, and do not have a suitable speed controller. This was exactly the quandary I found myself in, and trying to adapt the existing trigger speed control (all surface mount on a ceramic substrate) was such a pain that I abandoned the idea very quickly.
Note: There is a project (and PCB) for a motor speed controller/ LED dimmer - see Project 126 for details.
DC motor speed controls (as used in cordless drills and the like) are most commonly a relatively low frequency PWM, and while higher frequencies can be used, there is really not much point. While the switching speed is almost invariably within the audible range, the motor noise is louder than the switching noise at all but the lowest speed setting.
There is no reason that the frequency needs to be fixed (the inbuilt ones aren't), and that makes the controller marginally simpler to build. As shown below, the controller featured uses one readily available (and cheap) CMOS hex Schmitt trigger IC and a few passive components. The MOSFET can be salvaged from the drill if you choose to cannibalise one for the motor, and you may be able to rescue the diode as well - if you can find it!
The unit described is designed for 12V motors, but higher (or lower) voltages can be used. If the voltage is less than around 9V, you may need an auxiliary supply for the oscillator or it may not have enough voltage swing to drive the MOSFET gate properly. The oscillator voltage must not exceed 15V, or the CMOS IC will be destroyed. I suggest that the supply for the oscillator/ gate driver section should be between 10V and 14V. I have tried the controller with a couple of different sized motors - one from the drill, and another (much smaller) robotics motor. It worked perfectly with both, giving a smooth speed change and starting the motor at even the lowest speed setting.
Figure 1 - DC Motor Speed Controller
It might look complex, but it isn't. There are a number of inputs and outputs that are paralleled, and as shown, U1A is the entire oscillator. The output of this could be used to drive the MOSFET directly (ignoring the other circuits), but this output already has a fairly heavy load because of the feedback components. You could also reverse the polarity (just reverse D1 and D2), and all remaining circuits can be used to drive the output. Why did I do it this way? Because I wired it up without really thinking about the polarity, and since there were 5 Schmitt inverters left in the package I knew that I could invert it if needed with no need to de-solder what I had done already.
With the values shown, the on time is fixed by R1 at 146µs, and the frequency for minimum speed is just over 560Hz. At maximum speed, the frequency is about 6.5kHz, with an off period of only 2.6µs - limited by the fact that U1A will insist on oscillating, and the small residual resistance of VR1. You can increase the minimum on time by increasing R1 (some motors may need this to run), and the maximum speed can be limited by installing a resistor in series with VR1.
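Those numbers all follow from the fixed 146µs on time. A quick check, using only the figures quoted above:

```python
t_on = 146e-6        # on time fixed by R1
t_off_min = 2.6e-6   # residual off time at maximum speed

f_max = 1 / (t_on + t_off_min)
print(round(f_max))             # ≈ 6729 Hz, i.e. the 'about 6.5kHz' quoted

f_min = 560.0                   # minimum-speed frequency
t_off_max = 1 / f_min - t_on
print(round(t_off_max * 1e6))   # ≈ 1640 µs off time at minimum speed

duty_min = t_on * f_min
print(f"{duty_min:.1%}")        # ≈ 8.2% minimum duty cycle
```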
As noted above, the MOSFET can probably be salvaged from the drill along with its heatsink - my unit used a P45NF MOSFET, which appears to be a manufacturer's special part number. Otherwise, use an IRF540 or anything else that will do the job. One IRF540 will be sufficient for motors drawing up to around 20A - the MOSFET is rated at 33A, but some safety margin is always advisable. The diode may cause a problem, as it needs to be rated at around the same current as the motor at full load. You may get away with less, but you also may not. During tests, I was able to get the diode quite hot, depending on motor speed. I used a MUR1560 (15A/600V ultrafast) because I had them handy, although it might be overkill.
D1 and D2 need only be 1N4148 or similar. Do not use 1N400x diodes, as they are not fast enough and will cause problems with the oscillator. The 15V (1W) zener is used to protect the CMOS IC from excessive spike voltages. If you intend using the circuit shown from a supply voltage above 15V, then you will have to increase the value of R3. As shown, its purpose is only to limit peak zener current from spikes, but increasing it will allow the circuit to operate from higher voltages.
There is no real reason that the circuit couldn't be scaled up to handle very powerful motors, but for such applications, a feedback system would probably be expected to maintain the set speed regardless of load. Needless to say that is not available in the above circuit, and for many tasks (such as coil winder or motorised axis on a milling machine) it is not always a good idea - it's nice to be able to stop the motor by hand in an emergency without it trying to tear your arm off.
The diode is critical for motor speed control. It allows the back EMF from the motor (which occurs when the MOSFET switches off) to be put to good use - in this case it is re-applied to the motor, so is not wasted generating a high voltage pulse that may damage the motor's insulation. Without the diode, speed control is poor, low speed torque is minimal, and the motor will probably refuse to even start at less than 50% duty cycle.
Although the circuit was designed as a motor speed controller, it will also work just as well as a lamp dimmer. Any (DC) filament lamp operating from 12 - 24V (or more with appropriate MOSFET selection) can be controlled, with a single IRF540 being more than adequate for lamps rated at up to around 20A (over 250W at 12V, more at higher voltages). The reversing switch is not much use in this application, and D4 is not needed either.
The circuit can also be used as a heater control for DC heaters - for example, it could be used to reduce the power to a rear window demister, allowing it to be set for just enough power to keep the rear window of your car clear. While everything is cold, full power is needed, but after the window is free of condensation, a lot less power is needed to keep it that way. While you might think that there isn't much point, remember that every watt of power that is used in a car is paid for by increased fuel consumption. The 12V car supply is not free, although most people tend to think of it that way.
The MOSFET and power diode (D4) will need a heatsink, but given the circuit flexibility (and the almost endless uses for it), the dimensions are left to the constructor. Keep wiring short - especially to the MOSFET. Although it probably won't cause any problems if the MOSFET oscillates at some high (even RF) frequency, it's better to keep operation in the design range. You can add a gate resistor (10 - 100 ohms) if it makes you feel better.
While it is possible to make the controller maintain approximately the same frequency with a small re-organisation of the oscillator circuit, there appears to be no benefit, since it works perfectly as shown.
The reversing switch is optional - some applications won't need it, in which case it can be omitted. If you got the motor from a cordless drill, you can always adapt the reversing switch that is usually a part of the existing controller.
Other possible applications might be to control remote controlled battery driven model motors (cars, boats or even planes), in which case the pot would be attached to a servo (or use a servo controlled pot). The benefit is that battery drain is greatly reduced at low speeds compared to a simple switched series resistance controller.
Part 2 shows an alternative method of doing exactly the same thing, except it only uses 3 of the 6 Schmitt triggers, so you can have two speed controllers using only one CMOS IC. It also uses a constant oscillator speed, which may be preferred in some cases.
Elliott Sound Products - AN-010
The analogue telephone system is commonly known as the PSTN - public switched telephone network, but is also called POTS - plain old telephone system. It is characterised by the operating voltage of 48V DC supplied from the exchange when the phone is 'on-hook' (idle, with the handset resting), and around 5-12V when 'off-hook' (during a call). It's a 2-wire system, with simultaneous bidirectional communication. Dialling is either by DTMF (dual tone multi-frequency, aka 'Touch Tone' in the US) or (rarely now) pulse (aka decadic), where the line is connected and disconnected to create pulses that signal the dialled number to the exchange. One pulse signals the digit '1', two pulses for '2', etc. The details for DTMF signalling can be found on the Net if you want to know.
Ringing is provided by an AC voltage superimposed on the line, at a frequency of about 20Hz, and with a voltage of 90V RMS. The ring current is 'cadenced', which is to say it's interrupted to create a ringing pattern. This differs in different countries, but part of the reason is to minimise the risk of electric shock. When the handset is lifted (off-hook), the exchange sends 'dial tone' to signal to the user that dialling may commence. Like the ringing cadence, dial tone differs in different countries. When the called number is ringing, 'ring tone' is sent to the calling party to indicate that the remote phone is ringing. If the remote phone is busy (off hook), the caller hears a 'busy' tone.
While the specifics of all these functions are subject to individual countries' standards, the principle is unchanged. Mobile ('cell') phones operate completely differently, and are not included in the above. Communication (dialling, speech, etc.) is all digital, dial tone is usually not provided with modern systems, but ring and busy tones are still supplied so the caller knows that the call did or did not get through. In some cases, special tones are used to signal network congestion when no spare radio channels are available or the exchange is at capacity.
While the PSTN is being superseded worldwide by mobile/ cell phones and VOIP (Voice Over Internet Protocol), the principles are no less interesting. They are also no less important, and many of the principles are (or can be) 're-purposed' to suit particular requirements. One of these is 'talk back' radio, where a hybrid is still essential, regardless of the type of phone system in use. The adventurous experimenter may also find other uses for a system that can use full duplex (simultaneous 2-way information over a single pair of wires). To be useful, the individual signals need to be separated at each end, and that's what a hybrid does.
This article does not cover the signalling or power systems, or the main infrastructure, but concentrates on one small but vital part of the system as a whole - the hybrid circuit (as it is commonly known by telephone engineers). A hybrid is used to convert a bidirectional 2-wire circuit into separate 'send' and 'receive' channels, commonly known as a 4-wire interface. More information is available in Reference 4, which is a fairly comprehensive overview. It's based on the US system, but those used worldwide are similar, and the general ideas are representative of other systems.
NOTE: It is an offence in most countries to connect non-approved equipment to the phone network, and the information here is not intended to allow you to make any connection to your phone line. This material is for your information only. Obtaining approval is a costly exercise and it's highly unlikely that any 'home made' equipment would even be considered.
Hybrids are the heart of the analogue telephone system. They allow two people to speak and listen simultaneously over a single pair of wires, with little or no interaction. This Application Note is not about producing a telephone system or even a part thereof, but is intended to introduce the concept of a hybrid, and explain how it works. In one form or another, hybrids have been used since the early telegraph days, and they are an essential part of the telecommunications system.
You can also build a pair - not because it's inherently useful for anything, but to experiment and learn. There's nothing especially critical about the principle, but it does become a lot harder (and there is inevitable degradation) when transformers are included. While these are not used in your home telephone, most exchanges (aka central offices) use transformers to ensure complete isolation from the outside world and all the dangers it represents.
Using the techniques shown here, you could build a nice intercom or similar, but they are so cheap that no-one would bother. It's much better to build it for the pure sake of doing so, and to learn about techniques used in signal processing systems. You don't even need to physically build anything if you have a circuit simulator - this lets you do lots of 'what if' experiments without wasting any parts.
One very common application of stand-alone hybrids is in the radio broadcast system, where callers are broadcast during conversations or (generally one-sided) debates with the on-air 'personality'. There is a considerable amount of additional circuitry necessary to interface with an analogue telephone circuit, and this is not covered here.
The phone system has had a great deal of influence over the audio standards that have developed over the years. A vast number of things we use daily in audio are the result of inventions and standards developed by telephone research laboratories. Bell Labs, which has been part of many different phone companies over the years, is preeminent amongst these. Negative feedback, the decibel system, transistors, and many other developments we take for granted all came from Bell Labs. Many of the things we take for granted in audio came (perhaps indirectly) from the phone system, for example the 'phone' plug as used on guitar amps (tip/ sleeve) and for headphones (tip/ ring/ sleeve) came directly from the manual telephone exchanges (central offices) of old.
The idea of 600 ohm balanced lines also came directly from the phone system - these are the mainstay of all interconnects used in recording and PA systems. For many years, radio stations relied on phone lines for their live broadcasts, and even for connection between the studio and transmitter, as well as callers going to-air as described above. Most outside broadcasts are now handled by portable microwave links, but a fixed line is still one of the most reliable connections known.
If any 2-wire full duplex (meaning that traffic can pass in both directions simultaneously) analogue line requires amplification, this can only be done after separating the send and receive signals into separate pairs. This is shown in Figure 1. If a single amplifier were used, one direction would be blocked. With two amplifiers but no hybrid, the amplifiers would simply oscillate, wiping out all communications.
Figure 1 - Amplifying A Bidirectional (Duplex) Pair Signal
The scheme shown above is very common. Care is needed to ensure that the amplifier gain is not too high, otherwise the system will still oscillate. Gain must be at least 6dB lower than the worst-case transhybrid loss (see below for an explanation of this). Most digital signals are transmitted as 4-wire (separate send and receive pairs). Amplification may not be necessary for short lines, and the digital signal is 'reconstituted' to produce clean transitions between the two digital levels. These may be a voltage signal, or light if a fibre-optic cable is used.
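The 6dB rule can be sanity-checked with a sketch of the round-trip gain: one lap of the loop passes through both amplifiers and leaks through both hybrids, so the loop gain (in dB) is twice the amplifier gain minus twice the transhybrid loss. The 20dB worst-case transhybrid loss used below is an assumption for illustration only:

```python
def loop_gain_db(amp_gain_db, transhybrid_loss_db):
    # Round-trip gain for the Figure 1 scheme (two amps, two hybrid leaks)
    return 2 * amp_gain_db - 2 * transhybrid_loss_db

worst_case_thl = 20.0   # assumed worst-case transhybrid loss
for gain in (10.0, 14.0, 20.0, 22.0):
    lg = loop_gain_db(gain, worst_case_thl)
    verdict = "stable" if lg < 0 else "sings"
    print(f"amp gain {gain:4.1f} dB  loop gain {lg:+5.1f} dB  {verdict}")
```

With the gain held 6dB below the transhybrid loss (14dB here), the loop sits at -12dB and is safely stable; at 20dB of gain it is on the edge of oscillation.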
The terms used should not be taken literally. A 2-wire system may only use 1 physical wire, with earth/ ground being used for the return. Likewise, 4-wire systems may only have 3 physical wires ... earth (ground or common), transmit and receive. The primary point of difference is that a 2-wire system is full duplex - traffic (voice or data) can travel in both directions without interaction.
4-wire systems are simplex - data travels in one direction only, and each direction (in or out) has its own separate circuit. While the 4-wire circuit is much better, having zero interaction regardless of termination impedances or other issues, it uses twice as much cable. This was not an option when telephones first became available, so the 2-wire system was used to minimise cable usage but still provide an acceptable service. Many early telegraph and telephone systems used a single wire, with the return path provided by the earth (as in the planet).
The analogue telephone systems worldwide rely on a single pair of wires for both transmit and receive of speech or modem data. DC (48V nominal) is also sent along the same pair, allowing the exchange to provide power to the telephone. This is important in case of blackouts (and even more so before even electric lighting was readily available), since the phone can still be used. The standard wired analogue telephone system is probably one of the most reliable pieces of engineering on the planet, and although other methods for communication are becoming more popular, none can approach the reliability of the wired phone.
The systems have been gradually changing for a long time, but the basic requirements have never changed. It is still possible to use a 70 year old rotary-dial telephone on the latest exchange equipment, and it will work just fine. This level of long-term compatibility has not been achieved with many products - I can't actually think of a single one that offers the same level of compliance as the humble telephone.
Figure 2 - Demonstration Hybrid Pair
It might look complex, but it isn't really. The section on the right is one 'station', and that on the left is another. Each can transmit information to the other, with the level at the Out terminal of the sending station being suppressed by at least 40dB. This is adjustable with VR1 on each section. The signal transmitted (via the In terminal) arrives at the Out terminal of the other station, attenuated by 6dB - half the level. Needless to say, this isn't an issue, since the transmit level simply needs to be increased (or the receive gain increased) to compensate. The two stations are connected by a line, which may be ordinary twisted pair, and anywhere between a couple of metres and a kilometre or more in length.
If you look at the circuit closely, and ignore the send buffer (U1Bx), you'll see a modified version of the standard balanced input opamp stage. Signal is applied to both opamp inputs when sending a signal to the line. Both inputs will be at the same voltage (by adjusting VR1), so there will be no signal from the output. The total value of VR1 and R2 will be about 7.25k to achieve balance. When a signal is received from the line, only the negative input has that signal present, and it is amplified (by -1) and appears on the output terminal.
Once the system is installed, simply inject a tone into one of the hybrids, and adjust VR1 to get the minimum level from the Out terminal. Do the same for the second hybrid. If you vary the frequency, you will find that the maximum rejection changes, but with a short line the variation will not be great. In a system with a short line (less than 100 metres), the rejection will be at least 40dB.
It is customary to refer to impedance balance and unwanted signal rejection in terms that cannot be considered intuitive unless you've worked within the industry. The telephone line is a complex impedance, and consists of resistance, inductance and capacitance. The US and a few other countries simply designate the impedance as 600 ohms, but in most others the 'official' impedance is an attempt to consolidate the cable and end-equipment parameters. In Australia, the impedance is 220 ohms, in series with the parallel combination of 820 ohms and 120nF. The UK and Europe also use complex impedance networks, but all are different. While this is extremely puzzling (after all, cables used for telephones are not all that different), it's simply a fact of life. In reality, it doesn't make much difference - all perform roughly equally, including 600 ohms resistive (although this is commonly modified with extra resistance and the addition of some capacitance).
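The Australian network just described (220 ohms in series with 820 ohms in parallel with 120nF) is easy to evaluate across the voice band. This sketch simply computes the complex impedance at a few spot frequencies:

```python
import cmath
import math

def z_au(f):
    # 220 ohms in series with (820 ohms parallel 120nF)
    zc = 1 / complex(0, 2 * math.pi * f * 120e-9)
    return 220 + (820 * zc) / (820 + zc)

for f in (300, 1000, 3400):
    z = z_au(f)
    print(f"{f:5d} Hz  |Z| = {abs(z):4.0f} ohms  "
          f"phase = {math.degrees(cmath.phase(z)):5.1f} deg")
```

The magnitude falls from roughly 1k ohm at 300Hz to under 500 ohms at the top of the voice band, which is why a single 600 ohm resistor is only ever a rough approximation.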
Because the sending impedance is determined to be a particular value to match the cable, the receiving equipment (along with the connecting cable) must have the same impedance. If this is done well, there is a minimum of echo returned to the phone line and exchange, and the minimum disruption to long distance calls. This is actually an extremely complex topic, and it will not be discussed in any further detail. Suffice to say that the CPE (customer premises equipment) needs to be able to satisfy a minimum impedance standard before it's allowed to be connected to the phone system. This applies almost everywhere, world-wide.
Rather than attempt to state the impedance in terms of reactance, resistance or measured impedance, it is measured by a meter called a return loss bridge. This gives a reading of the impedance imbalance in dB. A perfectly matched impedance has a return loss of infinity, but even as little as 20dB can be surprisingly difficult to achieve in the real world. Figure 3 shows the Australian phone impedance as part of a return loss bridge, along with a graph showing the return loss when connected to a (slightly) mismatched load and artificial line. Remember that the ideal case is a return loss of infinity, but even a relatively small mismatch is sufficient to reduce the return loss dramatically. A good design is expected to be able to meet the requirements with various line lengths - including zero. The return loss bridge is shown here with the Australian standard telephone impedance, but any other impedance (including a 600 ohm resistor) can be substituted.
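Return loss has a simple closed form: RL = 20·log10 |(Zl + Zr) / (Zl - Zr)|, where Zl is the measured (line) impedance and Zr the reference. A couple of purely resistive examples show how quickly it degrades with mismatch:

```python
import math

def return_loss_db(z_line, z_ref):
    # Both impedances may be complex; a perfect match gives infinite return loss
    return 20 * math.log10(abs((z_line + z_ref) / (z_line - z_ref)))

print(round(return_loss_db(660, 600), 1))    # ≈ 26.4 dB for a 10% mismatch
print(round(return_loss_db(1200, 600), 1))   # ≈ 9.5 dB for a 2:1 mismatch
```

Working the formula backwards, a 20dB return loss corresponds to a mismatch of only about 20%, which shows why even that modest figure can be hard to achieve.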
Figure 3 - Return Loss Bridge & Graph
Measurements are shown from 200Hz to 4kHz. The telephone bandwidth is deemed to be from 300Hz to 3600Hz, and this has been the standard for a very long time. The upper limit is now strictly enforced because so much of the network is digital. The sampling rate is only 8kHz, so 3.6kHz gives an acceptable safety margin before digital aliasing becomes a problem. It is extremely difficult to get a good return loss below 300Hz, because so much equipment uses transformers. With limited inductance imposed by small size, phase shift makes a good impedance balance almost impossible at low frequencies. Some countries impose return loss limits at frequencies below 300Hz, but they are usually fairly generous (perhaps 10dB or so).
The next figure of merit is known by many names, but the most descriptive is simply 'transhybrid loss'. This is a measurement of the amount of signal picked up at the receive port, while it is transmitting signals within the allowed frequency range. An ideal hybrid will have an infinite transhybrid loss, but this is influenced by the line impedance. A good hybrid will achieve around 25-30dB, but for telephones it is common to use a lower value. This gives the person speaking some of their own voice in the earpiece, so they can hear themselves at a low level. This is called 'side tone' in telephony, and provides a level of confidence that the phone is working. Humans expect to hear themselves when speaking, and the phone system is designed to make sure this happens.
Figure 4 - Transhybrid Loss & Graph
Figure 4 shows the same hybrid and artificial line as used in Figure 3 (terminated with the Australian standard impedance), but this time measuring the transhybrid loss. Although the transhybrid loss was measured at over 50dB with a perfectly matched line impedance, this is degraded significantly by a comparatively small mismatch.
Unfortunately, but by necessity, the transhybrid loss is affected by the degree of line matching (return loss). If there is a poor match, both return and transhybrid loss will be compromised. Excellent results have been achieved by telephone systems the world over, despite the huge variations met in practice. Phone lines can range from less than 100 metres to several kilometres, so will have dramatically different resistance, capacitance and inductance between the phone and the exchange. The impedance refuses to conform to legislation or standards bodies, and rather perversely chooses to obey the laws of physics instead.
Before we had any form of electronics, we had a phone system. Early systems had to rely solely on their own ability to generate a high level signal from a microphone. Carbon mics were the choice, because no other microphone is capable of producing such a high level, low impedance signal. Carbon mics actually have gain - enough to cause feedback if the mic is placed next to an earpiece. Because of this, a basic hybrid was essential to prevent the phone from squealing. Earlier systems used a separate mouthpiece and earpiece, but to enable a single handset with both required a system that provided electrical signal isolation.
+ +The earliest hybrids consisted of a coil with multiple windings - it may be considered as a transformer or an inductor, but in many cases it's really an autotransformer, with all windings joined at some point. Figure 5 shows one of the arrangements that were used - the hookswitch and ringer circuits have been omitted for clarity. Note that in all cases, the resistance of the transformer windings must be considered when determining impedance matching.
+ +
Figure 5 - Single Transformer Hybrid
The dots indicate the winding start, needed because the direction is important. While it may not look very impressive, the single transformer hybrid is capable of extremely good performance. In exactly the same way as an active hybrid (using opamps or digital signal processing), the performance in both directions is degraded if the line impedance does not match the design value. The impedance matching network must be located in the position shown. Theoretical transhybrid loss (with a perfectly matched impedance) is infinite, but this can never be achieved in practice. Maximum return loss is achieved with the load impedance (at the receive port) as high as possible.
This arrangement was used in huge numbers, as it was the heart of almost all non-electronic telephones. Look at Figure 7 and you will see the exact same arrangement, although the matching impedance is different. No part of the circuit may be earthed, with the exception of the receive winding. While not a problem for a telephone, this makes it unsuitable for fixed equipment where an earth is required for electrical safety.
Figure 6 - Dual Transformer Hybrid
The dual transformer version has the benefit that all ports (line, transmit and receive) are isolated from each other. This is of no consequence in a telephone, but may be important for some exchange equipment. While this version is still used (transformers are still available), it is uncommon in general phone circuits. Performance can be extremely good, but compared to IC replacements that are now very common, the space and expense make it unattractive.
Termination impedance is rather interesting. The terminating impedance shown affects only the transhybrid loss, and has no influence at all on the impedance presented to the 2-wire port. The proper impedance is set by using a modified network in series with the transmit port, and in parallel with the receive port. In this respect, this hybrid is unique, in that return loss and transhybrid loss are effectively independent of each other. However - both are affected by the actual external impedance, so an artificial line will mess up both. This is in deference to the 'no free lunch' principle.
Figure 7 - Schematic of Old Rotary Dial Telephone
The above is a scan of the little piece of paper that was inside an old telephone, as supplied by the Australian PMG (Postmaster General, aka APO - Australian Post Office), and is the actual schematic (with options) of this series of telephones. The phone itself is one of the old black Bakelite types - very retro, and still functional despite the fact that it's at least 50 years old. Remember to add the winding resistance to any external resistances when determining impedances. The hybrid is a single transformer type as shown in Figure 5, but is wired somewhat differently.

Elliott Sound Products - AN-011
The 4-20mA current loop signalling protocol has been with us for many years, and despite all the digital advances it remains popular. It has one particular characteristic that makes it very suitable for hostile environments: within sensible limits, it is immune to the distance between the transmitter (or sender) and the receiver. Cable can be added or removed without affecting the accuracy - a unique feature for analogue interfaces. It only needs 2 wires to work, because the power and signal use the same pair of wires.
Digital interfaces can be used of course, but how many can operate cheerfully with hundreds of metres of cable? The cable itself may be anything from a telephone wire twisted pair through to a length of twin mains cable that "just happened to be handy". Because the minimum current is 4mA (representing zero input), the system is self-monitoring. Should a fault develop, it is immediately apparent because the current will be out-of-range ... zero for a break, or exceeding 20mA if there is a short to some other voltage source.

The interface can be tested with nothing more sophisticated than a multimeter (analogue is fine!). There are specialised ICs available to convert just about any sensor imaginable to the 4-20mA standard, but there is one caveat ... no sensor may draw more than 4mA at idle. Since the IC or other interface circuit will need some power, there is usually less than 4mA available. This eliminates many gas sensors and the like, because they commonly have a heated sensing element that draws more current than the 4-20mA standard allows. Accordingly, some 4-20mA applications require local power for the sensor and transmitter circuits, but this only applies to a few specialised sensors. A third wire may be used in some cases, providing power to the sensor electronics.

4-20mA is the standard current loop, but there are also others that have been used over the years. Various manufacturers have come up with their own variants, but none of these can be considered standard.
There are also measurement microphones that use a 4mA current loop, but this is a completely different arrangement. These microphones are supplied with a fixed current of 4mA, which does not change with variations of signal level. This is covered in some detail in Project 134 - 4mA Current Loop Microphone.
The general arrangement of a 4-20mA interface is shown below. The receiver is simply any device that can measure the voltage across a resistor, and may be analogue or digital. The sense resistor is typically between 100 and 500 ohms, but in many cases 250 ohms is considered 'standard'. 125 ohms is also a common sense resistor value, but it really depends on the receiver electronics.
The transmitter circuit is the interface between the sensor and the 4-20mA loop. Whether discrete or using a specialised IC, the transmitter takes a signal (typically a voltage from a sensor, but resistance is also common) and converts it to a current. The current is directly proportional to the input. With no signal or at the lower limit of the sensor, the current is 4mA, rising to the maximum of 20mA at the upper limit of the sensor. The 4mA standing current is an offset that allows the transmitter to function, and provides a confidence signal to show that the loop is operational.
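The transmitter's transfer function is easy to express as a few lines of code. This is a sketch only - the range limits, clamping behaviour and function name are my assumptions, and aren't tied to any particular transmitter IC:

```python
def sensor_to_loop_ma(reading: float, lo: float, hi: float) -> float:
    """Map a sensor reading (in engineering units) onto the 4-20mA span.
    'lo' and 'hi' are the sensor's lower and upper range limits."""
    if hi <= lo:
        raise ValueError("upper limit must exceed lower limit")
    fraction = (reading - lo) / (hi - lo)
    fraction = min(max(fraction, 0.0), 1.0)   # clamp to the valid span
    return 4.0 + 16.0 * fraction              # 4mA offset + 16mA span

print(sensor_to_loop_ma(0, 0, 100))    # lower range limit -> 4mA
print(sensor_to_loop_ma(50, 0, 100))   # mid-scale -> 12mA
print(sensor_to_loop_ma(100, 0, 100))  # upper range limit -> 20mA
```

The 4mA offset is the 'live zero' described above: a reading of exactly 0mA is unambiguously a fault, never a valid measurement.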
The final part is the power supply. This may be within the receiver unit in some cases, but it's not at all uncommon for it to be completely separate. The supply only needs to be capable of providing a maximum of 20mA for each sensor it powers. The voltage can be anywhere between 12V and 36V, although you may see 48V used on occasion. 24V is the most common and is well suited to most applications. It is important to understand that the voltage doesn't actually matter, provided it is enough to overcome the loss across the sense resistor and wiring resistance, and still leave enough voltage at the transmitter to allow it to function.
Figure 1 - Typical 4-20mA Block Diagram
If the sense resistor is 250 ohms, the voltage across it will be 5V at 20mA and 1V at idle (4mA). The cable resistance might be 400 ohms, so 8V will be 'lost' across the cable itself. The sensor and sender may need a minimum of 12V to function, so we add the voltages ...
    Vtotal = Vsense + Vcable + Vsender
    Vtotal = 5V + 8V + 12V = 25V
In this case, you would choose a 36V supply, as this provides a good safety margin and allows for the cabling to be extended if needs be. While this might seem like a strange thing to do, this signalling scheme has been used in thousands of industrial applications (including mining) where things are changed regularly. The last thing anyone needs is a requirement that the system be recalibrated just because someone extended or shortened a cable!
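The same budget can be checked in code. The function below simply restates the Vtotal sum from the worked example; the names are mine:

```python
def min_supply_v(i_max_ma: float, r_sense: float, r_cable: float,
                 v_sender_min: float) -> float:
    """Minimum loop supply voltage: drop across the sense resistor,
    plus the cable drop, plus the sender's minimum working voltage,
    all evaluated at the full-scale current."""
    i = i_max_ma / 1000.0                       # mA -> A
    return i * r_sense + i * r_cable + v_sender_min

# The worked example: 250 ohm sense, 400 ohm cable, 12V sender minimum
print(round(min_supply_v(20, 250, 400, 12), 1))   # 25.0 (volts)
```

Picking the next standard supply above the result (36V here) leaves headroom for extra cable, exactly as described above.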
Herein lies the real advantage of using a current loop. If a good current sink is used in the sender unit, the cable resistance and power supply voltage don't change the calibration at all, provided they remain within the designated range. The above system with its 36V supply will work perfectly with as much as 950 ohms of cable, or as little as 1 ohm. If someone were to replace the power supply with a 48V unit, even more cable could be used, and none of these radical changes affects the calibration. No other analogue system can compete, and very few digital signalling schemes can be used either. Because the signal is analogue, it is often possible to operate happily even with high noise levels on the signal pair, because the noise can be filtered out without affecting the DC voltage.

In some cases, it is useful to connect the transmitter between the +ve and -ve terminals of a bridge rectifier as shown in the above block diagram. This means that the transmitter is not polarity sensitive, so if the wires are swapped around inadvertently the system keeps operating normally. Because of the current loop, this will not affect calibration.

4-20mA current loops are neither used for nor suited to high speed applications, but the applications where they are most commonly used don't need high speed. The speed limitation is due to the fact that by definition, a current source has a very high impedance, so cable capacitance will limit the frequency response even at quite low values of capacitance. However, if the pressure in a large (perhaps LPG) gas tank is monitored, it will never change quickly under normal conditions. If it does change quickly, the reason is likely to be visible! Even so, the current loop is certainly fast enough to show there's a problem.

Figure 2 shows a simple sender, which is actually a dedicated tester, based on a design I did for a client who needed a 4-20mA checker/calibrator unit. In this case, the sensor is simply a 200 ohm pot, and the range is from 3.977mA to 20.01mA. Both are well within 1% of the design values. Needless to say, the odd value resistors are either obtained using parallel resistors or trimpots for calibration.
Please note that there is a circuit all over the Net that claims to be a 4-20mA tester. It is no such thing. There are several discrepancies between the text and the drawing: the text refers to a 4-20mA signal that ramps up and down and says that a PICAXE micro-controller chip is used, yet the circuit shows a 7555 timer and some other parts that are in no way, shape or form suited to testing 4-20mA interfaces.

I have no idea who first mangled the text and (stolen) circuit, but countless others have done likewise and followed like a flock of dumber-than-average sheep. The Net is now thoroughly polluted with a circuit that is utterly useless for the claimed task. No-one seems to have noticed that they have simply copied a schematic and text that don't match.
If the supply voltage or series resistance is changed across the range limits (12-24V and 125 ohms to 500 ohms in this case, but specifically excluding the zener), the current changes by less than 0.1% using this very basic circuit. The most critical part is the temperature coefficient of the resistors and zener voltage regulator, as these will have more effect than external electrical variations. The circuit as shown relies on the zener regulator having an external supply, because the zener draws much more than 4mA. This doesn't matter in this case, because the tester is self-contained and mains powered.
Figure 2 - 4-20mA Test Sender
This is the circuit of the tester, and it is not intended as any kind of real 4-20mA interface. However, it can be used to test receiver units and their associated analogue to digital converters, software, etc. That is exactly what it was designed for, and it works very well indeed. There is a dearth of published circuits for 4-20mA testers, and that's the reason this unit ended up in the Application Notes section.
To understand how it works, there are two very important parts of the circuit. VR1 and R6 are the parts that determine the current. VR1 sets a voltage at the non-inverting input of U1, and because the opamp insists on making both its inputs the same voltage, exactly the same voltage will appear across R6. VR1 can be varied from 160mV to 0.8V, and the same voltage will appear across R6 because of the feedback around U1. 160mV across 40 ohms is 4mA, and 0.8V across 40 ohms is 20mA ... there is the 4-20mA current required.
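The current-setting arithmetic is nothing more than Ohm's law applied to R6, and can be confirmed directly (an illustrative helper, not part of the original design):

```python
def loop_current_ma(v_set: float, r6_ohms: float = 40.0) -> float:
    """Feedback forces the VR1 wiper voltage across R6, so the output
    device passes exactly V/R amps. R6 = 40 ohms as described above."""
    return 1000.0 * v_set / r6_ohms

print(round(loop_current_ma(0.16), 3))   # 160mV -> 4mA (lower limit)
print(round(loop_current_ma(0.8), 3))    # 800mV -> 20mA (upper limit)
```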
Everything else in the circuit is simply there as support. The zener diode provides a stable reference voltage from the already regulated 12V supply, the resistors around VR1 set the upper and lower voltage limits, and Q1 supplies the current. The meter is simply there because this is a tester. D1 gives the opamp a negative supply - it's only 0.65V, but enough to allow the inputs to work properly at very low voltages. D3 and C3 help protect the MOSFET from external nasties, and C3 also prevents the MOSFET from oscillating with long leads attached.

These days, you'd be hard pressed to find a modern 4-20mA interface that uses anything other than a dedicated IC. They are made by several manufacturers, and have a variety of special characteristics. Simply select the IC that suits your sensor, and the IC does the rest.

One of the biggest problems with any 4-20mA interface is the minimum current of 4mA. The minimum current is all that's available to the sender, including the sensor. This is difficult to achieve in some cases, so remote power supplies for senders are not uncommon. However, there are still plenty of applications where no remote power is needed, and this is the way the interface was originally designed to operate. A 3-wire variant exists where the sensor needs additional processing. The third wire is power - typically 12V or 24V DC.

While there is quite a lot of info on the Net about the new ICs that are used, there seems to be remarkably little that discusses the older discrete senders. Figure 2 is an example, but a critical part of any measurement system is the voltage reference. Specialised devices exist today, but many years ago a zener diode was as close as you'd get. Choosing the correct voltage is important - only a very limited range of zener voltages have an acceptable temperature coefficient. A 5.6V zener is generally accepted as having as close to zero tempco as you can normally expect, but this may not apply at the low current needed for a 4-20mA current loop.
These days, it's much simpler to use a precision voltage reference, such as the LM4040 shunt regulator, available in several different voltages. You may also use one of the many band-gap voltage references available - these are typically 1.25V and have excellent performance, but can be comparatively expensive.
The AD693 is pretty much a complete system on a chip. The only thing you need to add is your sensor, and the data sheet has many examples and other info to help you to create a working sensor and sender unit. Unfortunately, the application details are not intended for those who have no prior experience with 4-20mA interfaces.

Normally, you would simply read the voltage across the sensing resistor, but there is always the 4mA offset, so this has to be removed. If the sense resistor is 250 ohms, 4mA leaves you with 1V across the resistor. This is easily subtracted by using a circuit such as that shown below.
Figure 3 - 4-20mA Receiver
In the circuit shown, the input is buffered by an opamp. This prevents the input circuit around U1B from placing a load across the resistor and changing the calibration. It makes far more difference than you might imagine, but the same result can be achieved by increasing the value of R1 very slightly so that the total resistance (R1 in parallel with R3 + R5) remains at 250 ohms. Without the buffer or any correction, the error is over 1%, yet current loop interfaces can be better than 0.1% accuracy and linearity. A 1% error is therefore significant.
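To put a number on the loading effect, consider a hypothetical 20k input network placed directly across a 250 ohm sense resistor. The exact network values aren't given in the text, so 20k is an assumption chosen only to illustrate an error of just over 1%:

```python
def loading_error_pct(r_sense: float, r_load: float) -> float:
    """Percentage reading error when a finite load resistance sits
    directly across the sense resistor (parallel combination)."""
    r_eff = r_sense * r_load / (r_sense + r_load)
    return 100.0 * (r_eff - r_sense) / r_sense

# Hypothetical 20k input network across a 250 ohm sense resistor:
print(round(loading_error_pct(250, 20_000), 2))   # reads ~1.2% low
```

Buffering the input (or trimming R1 upward so the parallel combination lands back on 250 ohms) removes this error entirely.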
The offset voltage is also critical, and needs to be stable with temperature. The arrangement shown is a very simplified version, but like the transmitter, a precision reference is needed for high accuracy applications. U1B simply subtracts the reference voltage from the voltage across R1, so with no signal (4mA or 1V) the output voltage will be zero.

Should this voltage ever become negative, a fault is indicated (loop current missing or minimum is too low). With no loop current at all, the output will be 0V. At the maximum current of 20mA, 5V is developed across R1, 1V is subtracted, and the output is 4V. This voltage may be amplified further if needed, and used to drive an analogue instrument (such as a meter) directly, or can be digitised and used by a data logger, computer or micro-controller based circuit, or read by a digital meter. Resolution and accuracy are determined by the stability of the voltage references in the transmitter and receiver, as well as the sensor itself.
Any 4-20mA system can be set up with additional detectors. For example, when the loop is active, there will be a minimum of 1V developed across R1, because 4mA will be flowing. Should the loop be broken by a damaged cable or faulty connector, the voltage across R1 may fall to zero, or may show AC (typically 50 or 60Hz) due to leakage. These abnormal conditions are easily detected and can be used to trigger an alarm. It may also be necessary to provide input protection, using MOVs (metal oxide varistors) or TVS (transient voltage suppressor) diodes. Many industrial systems can be particularly hostile.
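A receiver's scaling and fault checks can be sketched in software as follows. The fault thresholds (3.8mA and 20.5mA) and the 0-100 engineering range are illustrative assumptions, not values from the text:

```python
def loop_to_value(v_sense: float, r_sense: float = 250.0,
                  lo: float = 0.0, hi: float = 100.0):
    """Convert the voltage across the sense resistor back to the
    sensor's engineering units, flagging out-of-range currents."""
    i_ma = 1000.0 * v_sense / r_sense          # loop current in mA
    if i_ma < 3.8:
        return None, "fault: loop open or current too low"
    if i_ma > 20.5:
        return None, "fault: over-range (possible short)"
    value = lo + (hi - lo) * (i_ma - 4.0) / 16.0
    return value, "ok"

print(loop_to_value(1.0))   # 4mA -> lower range limit
print(loop_to_value(5.0))   # 20mA -> upper range limit
print(loop_to_value(0.0))   # open loop -> fault
```

Note the same 'live zero' logic in software form: a healthy loop can never produce a sense voltage below 1V (for a 250 ohm resistor), so anything under that is unambiguously a fault.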
In summary, while the 4-20mA current loop protocol is seemingly well past its 'best before' date, it is still used in countless industrial processes. It is a robust, reliable and well proven technology that refuses to go away, and works where other more recent protocols may give nothing but trouble. While it lacks the fancy attributes of many digital bus systems, it will work over almost anything that conducts electricity and is easily extended, shortened, tested or repaired in the most hostile of environments. Don't expect it to disappear any time soon.
Elliott Sound Products - AN-012
There are countless reasons to measure an AC voltage, and the type of measurement used can be critical in many cases. With few exceptions, the AC component of the waveform will have any DC removed by means of a capacitor, and the voltage is then rectified. For most measurement systems, this will be done using one of the full wave precision rectifier circuits described in AN-001. This part of the process is critical, and the type of circuit used is determined by the required accuracy, signal frequency and the level.
Even the very best precision rectifier will give poor results if the signal level is too low, so a preamp is often needed. High frequencies (above 1MHz) create additional problems, and these will not be covered here. I will concentrate on systems that work at normal audio frequencies, which includes mains frequencies (50 and 60Hz). The range covered can often be very limited - especially when a system is designed specifically for mains and other frequencies below around 1kHz or so.

When the signal is to be digitised, there are two options. The signal can be fed directly to an ADC (analogue to digital converter) without rectification, and all calculations are done digitally. The incoming signal has to be level-shifted (typically so that zero is represented by 2.50V DC), and the digital sampling frequency has to be an absolute minimum of double the highest frequency of interest. For example, frequencies up to 20kHz require a minimum sampling frequency of 40kHz, and as we know, a common standard is 44.1kHz as used for CD quality audio.

The second option is to remain within the analogue domain, and the signal can be displayed by a moving coil meter movement, or digitised using a low frequency ADC as used in most multimeters. In some cases, the rectified AC is not used to drive any form of metering, but may be used to provide automatic gain control (AGC), compression, limiting or other functions in an audio processing system.

The circuits described below assume the second option. The incoming AC will have any DC component removed with a coupling capacitor, and will be rectified with a precision rectifier. The pulsating DC output from the rectifier is then processed to obtain the desired type of measurement - peak, RMS or average. The output can vary widely depending on which method is used, and some of the results can be surprising.
Figure 1 - Precision Full-Wave Rectifier
For the sake of consistency, I will assume the rectifier shown above. This is taken from AN-001 Figure 6, and was chosen because it's a simple circuit that works well. Any of the other circuits can be used, but for peak detection, they all require a half wave rectifier after the main rectifier because the cap has to be charged via a circuit that doesn't discharge it again.
In some cases you can get away with highly simplified circuits, but it all depends on the application and the degree of accuracy you expect. In a few cases (such as audio processing), accuracy is not needed, and non-linear behaviour can be an advantage rather than a limitation.

It's important to understand that all methods of measurement can introduce errors, and that there is no one detection technique that's suitable for all waveforms. Errors and limitations are discussed in further detail in the conclusions at the end of this article.
Obtaining the peak value of the waveform is pretty easy if extreme precision isn't needed, and it's one of the most common measurements - especially for audio processing circuits. For example, an audio peak limiter should, by definition, apply limiting based on the peak value of the waveform. Most compressor/ limiters work on this basis, but some may also use the average value, and a small number use RMS converters to derive the control voltage.
This article is not about audio compressors or limiters though, so if you want more on that topic you'll have to look through the various ESP projects (see the ESP Project List and search for 'limiter'). There are many other circuits on the Net of course, and you will see a great many variations.

In many cases, when an AC voltage is rectified, the peak voltage may be used because it's the fastest measurement method. If the input signal is a sinewave, the meter can be calibrated for RMS if desired. Under these conditions, the meter is simply calibrated so that a 1V DC input displays exactly 707mV. This is based on the square root of 2 (√2), which is 1.414 (its reciprocal is 0.707). Most readers will be aware that the peak value of a 1V RMS sinewave is 1.414V, but may not be aware of the limitations of this measurement method.

All that's needed in some cases is a capacitor, but that depends on the precision rectifier circuit used. The majority of full-wave rectifiers need an output diode (ok if precision isn't necessary) or a precision half wave rectifier as shown below. The added diode/ half wave rectifier is needed so the cap is not discharged back through the rectifier's output opamp. A highly simplified version is shown in Figure 2, and this could be used for a peak limiter circuit for example. It's not useful for accurate measurements though. If R4 is not included, some means to discharge C1 is necessary, otherwise it will retain the voltage indefinitely, but subject to drift due to diode and capacitor leakage.
Figure 2 - Simplified Peak Reading Detector
The most basic half-wave peak detector uses nothing more than a diode, a pair of resistors and a capacitor. A full-wave version of this is shown above, but because the diodes are not within a feedback loop it suffers from high non-linearity because of the forward voltage of the diodes. This arrangement is suitable if you don't need absolute accuracy, or if the range of voltages to be measured is limited. For example, if you only need to detect a voltage of between 5V and 10V (peak), the error introduced by the diode is minimal and easily compensated, but for a precision circuit that's not good enough. The circuit is full wave because there is a direct and inverted driver for the diodes.
The two resistors (R4 & R5) determine how quickly the capacitor charges (R5) and discharges (R4). They are wired as shown so they don't create a voltage divider as would be the case if R4 was directly in parallel with the storage/ smoothing cap (C1). The charge (attack) time is determined by the product of R5 and C1. If R5 is 1k and C1 is 1µF, the charge time constant (with DC, to 63.2% of maximum) is 1ms, and the full voltage (within 1%) will be available in around 5ms. The decay time is determined by the product of C1 and the series combination of R4 and R5. It's close enough to 1 second with the values shown. With an AC input it depends on the frequency.
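These time constants are simple RC products. R4's value isn't stated in this excerpt, so 1M is assumed below purely because it reproduces the 'close enough to 1 second' decay figure:

```python
r5 = 1_000.0        # 1k, as given in the text
c1 = 1e-6           # 1uF, as given in the text
r4 = 1_000_000.0    # assumed value - chosen to give ~1s decay

attack_tc = r5 * c1          # charge time constant: R5 x C1
decay_tc = (r4 + r5) * c1    # discharge through R4 + R5 in series

print(attack_tc)   # 1ms time constant; full reading in ~5 time constants
print(decay_tc)    # about 1 second, as stated in the text
```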
Figure 3 - Precision Peak Reading Detector
When a true precision peak detector is used, the cap will always charge almost instantaneously, because it's within the feedback loop of an opamp. This isn't always convenient, especially for audio processing where the attack and decay times need to be programmable. For measurements, it provides the fastest possible reading, with a stable voltage available within as little as a single cycle, but more typically within a couple of cycles of the input waveform.

For a variety of reasons (based on simple reality and physics), the voltage across C1 may be a little lower than expected. It will typically be around 0.4% low with a 100mV input, but the error increases as the level is reduced, and vice versa. If there is any overshoot of the input signal, the voltage across C1 will be higher than expected. Careful layout is essential if you want an accurate circuit.

Peak reading is not common in metering circuits other than Peak Programme Meters (PPMs), where their use is essential (by definition). Peak detection - usually non-precision - is far more common with audio processing systems. If you use peak detection as described here for conventional metering (RMS calibrated), the 'RMS' reading is only accurate when the input is a sinewave. Serious errors will be apparent for other waveforms.

When you need to monitor the amplitude of the highest peaks of a signal (usually audio, but not always), you also need to decide on the decay time. If it's too short, you won't have time to see the peaks (and an analogue movement's pointer can't move fast enough). If the decay time is too long, you won't be able to see other (smaller) peaks until the pointer has fallen to the level of the new peaks. The ballistics of professional PPMs depend upon the standard(s) used - there are several different versions.

With most measurement systems, it's more common to use the average reading than the peak. This happens 'automatically' if a moving coil meter movement is used following a precision rectifier, because the pointer deflection depends on the average current. As with a peak measurement, the RMS value for a sinewave is obtained simply by scaling the rectified voltage, and in this case a meter would be adjusted to read 707mV with an input of 637mV. The average value of a sinewave is determined by ...
    Vaverage = 2 × V(peak) / π = 0.6366 (0.637) for a 1V peak sinewave (707mV RMS)
It is important to understand that the average value of a sinewave (as described above) can only be used after rectification. If the signal isn't rectified, the average is zero! This is because the positive and negative voltages are exactly equal, so they cancel. For speech or music signals, there can be a wide variance between the RMS and average values after rectification, but most analogue meters use averaging because true RMS measurement was difficult to achieve until comparatively recently.
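The 0.637 figure is easy to confirm numerically by rectifying a finely sampled sinewave (a quick sketch of my own, not from the article):

```python
import math

N = 100_000   # sample one full cycle finely
rectified = [abs(math.sin(2 * math.pi * k / N)) for k in range(N)]
avg = sum(rectified) / N          # mean of the rectified sinewave

print(round(avg, 4))              # 2/pi, i.e. 0.6366
print(round(avg * 1.1107, 4))     # scaled to indicate RMS for a sinewave
```

Note that without the `abs()` (i.e. without rectification) the mean would be zero, exactly as described above.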
Figure 4 - Precision Average Reading Detector
Although Figure 4 shows both input and output buffers, they may not be needed depending on the application. The time constant of R2 and C1 needs to be selected to give a reasonable averaging time, depending on the input frequency. Unless very low frequencies need to be measured, the values shown will usually work well. The time constant is 1 second, so an accurate voltage cannot be obtained for around 5 seconds.
The output from Figure 4 can be used to drive a moving coil meter (even though the meter movement would have performed the averaging by itself), or can be digitised for display on an LCD or LED screen. Most cheap digital multimeters use this type of circuit to obtain the reading for AC signals, and the meter is calibrated for RMS. All that's needed is to provide a small amount of gain, so the meter reads 707mV when the DC output from the averaging filter is 637mV (or any multiple or sub-multiple of these voltages).

The 'RMS' reading is only accurate when the input is a sinewave. The error can be significant, as described in the next section.

However, the vast majority of multimeters (analogue and digital) use the average reading, RMS calibrated method. To avoid the inevitable errors with non-sinusoidal waveforms you have to use a 'true RMS' meter. If you only measure sinewaves (or reasonable facsimiles thereof), the errors are not significant and an average reading meter will be perfectly alright for your needs.
Early 'true RMS' meters were extraordinarily expensive, and they used a variety of means to get the RMS value of the input waveform. One popular method used a thermocouple, measuring the temperature of a sensing element, which was in turn driven by a suitable amplifier. The RMS value of a waveform is defined as that AC voltage (of any waveshape) that provides exactly the same power (heating effect) as an equal DC voltage. So, if you were to measure the temperature rise of a resistor fed with 10V DC and 10V RMS AC, it would be the same for both. It wouldn't matter if the AC was a sinewave, squarewave, or a complex waveform such as an audio signal - provided the signal was steady while the measurement was taken.

Up until the advent of the first ICs that could perform the conversion, very few workshops had access to a true RMS voltmeter because of the cost, and even the early IC based versions were far more expensive than the 'average-reading, RMS calibrated' versions. These are still the most common, and all meters should be assumed to use average reading unless they are specifically stated to be true RMS.

Before we go any further though, it's important to understand exactly what 'RMS' means. It's an abbreviation of 'root mean squared', where we take voltage samples, square them, obtain the mean (average) value of the squares (the sum of values divided by the number of samples), and finally take the square root of the mean, giving us the RMS value. Let's look at a cycle of a sinewave to see how this works ...
Figure 5 - Sinewave, Measured At 30° Intervals
The sinewave can be measured at as many points as you like, with four points being the minimum (0, 90, 180, 270 degrees), but 30° intervals were used for this example as it makes the process easier to understand. With other wave shapes you need enough data points to get an accurate representation of the instantaneous voltages at each point of the waveform. From these measured voltages (which can easily be calculated for a sinewave, triangle or rectangular ('square') waveform) you can then calculate the true RMS voltage for the sinewave.
Mathematically, the voltage is simply the sine of the phase angle multiplied by the peak voltage. Sin(30) is 0.5, sin(60) is 0.866 etc. Note that 360° is not included in the calculation, because that marks the start of the next AC cycle, and not the end of the current cycle. In the table below, a peak voltage of 1V is used (0.707V RMS).

It is usually impossible to calculate the voltages at relevant points of a complex waveform, so it could be printed on graph paper and measured, or digitally sampled and calculations made based on the value of each sample. This is the technique used for fully digital measurement systems. I doubt that many people will want to use graph paper these days, but it certainly works if you have the patience.
| Measurement # | Degrees | Voltage | Square |
|---|---|---|---|
| 1 | 0 | 0 | 0 |
| 2 | 30 | 0.5 | 0.25 |
| 3 | 60 | 0.866 | 0.75 |
| 4 | 90 | 1 | 1 |
| 5 | 120 | 0.866 | 0.75 |
| 6 | 150 | 0.5 | 0.25 |
| 7 | 180 | 0 | 0 |
| 8 | 210 | -0.5 | 0.25 |
| 9 | 240 | -0.866 | 0.75 |
| 10 | 270 | -1 | 1 |
| 11 | 300 | -0.866 | 0.75 |
| 12 | 330 | -0.5 | 0.25 |
| Sum of squares | | | 6.0 |
| Average, aka mean (sum / 12) | | | 0.50 |
| Square root of mean | | | 0.707 |
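The table's arithmetic is easy to verify in a few lines of Python - a minimal sketch of the square, mean, root sequence described above:

```python
import math

# Twelve samples of a 1V-peak sinewave at 30° intervals (0°..330°),
# matching the table above. 360° is excluded, as noted in the text.
samples = [math.sin(math.radians(d)) for d in range(0, 360, 30)]

squares = [v * v for v in samples]    # square each sample
mean = sum(squares) / len(squares)    # mean of the squares
rms = math.sqrt(mean)                 # root of the mean

print(round(mean, 3), round(rms, 3))  # -> 0.5 0.707
```

Note that the negative samples in the second half-cycle contribute the same squared values as the positive ones, which is why the polarity of the waveform doesn't affect the result.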
Now that you know the exact way an RMS value is calculated, it's obvious that an IC version has to perform similar functions. The next question might be "why?". People have used average reading meters that are calibrated to show 'RMS' for years, so why bother with true RMS converters? It's all about accuracy, and the errors introduced by the averaging process. The following table lists the errors with different waveforms - as you can see, they can be extreme in some cases.
| Waveform - 1V Peak | Crest Factor (VPEAK / VRMS) | True RMS | Avg/RMS Meter¹ | Error (%) |
|---|---|---|---|---|
| Undistorted Sine Wave | 1.414 | 0.707 | 0.707 | 0 |
| Symmetrical Square Wave | 1.00 | 1.00 | 1.11 | +11.0 |
| Undistorted Triangle Wave | 1.73 | 0.577 | 0.555 | -3.8 |
| Gaussian Noise - 98% of Peaks <1V | 3 | 0.333 | 0.295 | -11.4 |
| Rectangular | 2 | 0.5 | 0.278 | -44 |
| Pulse Train | 10 | 0.1 | 0.011 | -89 |
| SCR Waveform - 50% Duty Cycle | 2 | 0.495 | 0.354 | -28 |
| SCR Waveform - 25% Duty Cycle | 4.7 | 0.212 | 0.150 | -30 |
When measuring AC voltages and currents, we tend to assume that they are RMS, and make power calculations accordingly. Average power (commonly - and incorrectly - referred to as 'RMS power') is simply the product of RMS voltage and RMS current, but if the waveform is not sinusoidal, the error can mean that the answer we get may be way off the mark. Measuring the signal level of music or speech is no different - a high crest factor (Vpeak / VRMS) will always give an answer that is well below reality. It is the crest factor of waveforms other than sinewaves that causes the problems, and very high crest factors will even cause problems with many RMS converter ICs. Crest factors up to 5 are usually ok with common RMS converter ICs, but higher than that can cause an internal overload and the measurement may still have a significant error.
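As a sketch of where these errors come from, an average-responding meter can be modelled as the rectified average of the signal scaled by the sinewave form factor (π / 2√2, about 1.11). Applied to a square wave, it over-reads by the 11% shown in the table:

```python
import math

FORM_FACTOR = math.pi / (2 * math.sqrt(2))   # ~1.1107: sine RMS / sine rectified average

def avg_responding_reading(samples):
    """What an average-reading, RMS-calibrated meter displays."""
    rect_avg = sum(abs(v) for v in samples) / len(samples)
    return rect_avg * FORM_FACTOR

def true_rms(samples):
    return math.sqrt(sum(v * v for v in samples) / len(samples))

# 1V-peak symmetrical square wave: true RMS is 1.0V, but the
# averaging meter scales the 1.0V rectified average by 1.11.
square = [1.0 if i % 2 == 0 else -1.0 for i in range(10000)]
print(round(true_rms(square), 3), round(avg_responding_reading(square), 3))
# -> 1.0 1.111
```

The same model reproduces the triangle-wave row: a rectified average of 0.5V scaled by 1.11 gives the 0.555V 'reading' against a true RMS of 0.577V.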
The AD737 is pretty much a complete system on a chip. The only thing you need to add is a resistor and a few capacitors, and the datasheet has many examples and other info to help you create a working RMS to DC converter. C2 (the averaging capacitor, Cavg) and C3 (the output filtering cap) should be low leakage types.
Figure 6 - True RMS To DC Converter
The drawing above is simplified, and shows only the basics. Note that the output is inverted, so a 1V peak (707mV RMS) input will give an output of -707mV. The output is also a high impedance and should not be loaded by the external circuit, so ideally the output would be connected to an output buffer as shown in Project 140. The project circuit also includes provision for a gain trim and DC offset adjustment, both of which will be necessary if you need to measure low voltages (less than 10mV RMS). Although Figure 6 shows an input of 1V peak, the AD737 input should ideally be limited to around 200mV RMS, or internal overload is likely with some waveforms.

The values of C2 and C3 are a compromise, and C2 (Cavg) in particular determines the settling time. With a low input voltage of (say) 1mV, the circuit will take roughly 30 seconds to stabilise, falling to about 150ms with a 200mV input. If a smaller value is used for Cavg, the settling time is reduced, but the low frequency error increases with higher crest factors. The values shown (100µF for each) were determined after much experimentation with the IC, and give good results overall.
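For a rough feel for the trade-off, a first-order averager settles exponentially towards its final value. This is a generic sketch only - the time constant used below is an arbitrary illustration, and it does not model the AD737's amplitude-dependent settling described above:

```python
import math

def settling_time(tau_s, tolerance=0.01):
    """Time for a first-order (RC) averager to settle to within
    `tolerance` (as a fraction of final value): t = -tau * ln(tolerance)."""
    return -tau_s * math.log(tolerance)

# Illustrative only: with a 30ms effective time constant, settling
# to 1% takes about 4.6 time constants.
print(round(settling_time(0.030), 3))   # -> 0.138 (seconds)
```

The key point is that settling time scales directly with the averaging time constant, so halving Cavg halves the wait, at the cost of more low-frequency ripple and crest-factor error.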
The three main measurement techniques are shown here, and which one is best for the task depends on your application. For accurate power measurements, true RMS is almost always the preferred measurement, but if you only work with sinewaves then an average reading meter (RMS calibrated) will be fine. For measuring complex waveforms such as speech, music, total harmonic distortion (THD) and the like, you really need true RMS (although most distortion meters are average responding, unfortunately).

For audio processing (compressors, limiters, etc.), peak detection is the most common, although some compressors include an average responding circuit as well. Sometimes it may be claimed that it's 'RMS', but that is rarely the case in practice. It's also unlikely that there will be any audible difference, so the extra cost of an RMS converter is usually not warranted. This is especially true since compression and limiting are so often used to make everything the same level, so it sounds flat and lifeless.
Peak reading meters are not uncommon in recording and broadcast studios, and many people will know about the so-called PPM (Peak Programme Meter) that is used to indicate the absolute peak reading of the speech/ music signal. This is particularly important with digital recordings, because they have a 'hard' limit - commonly referred to as 0dBfs (full scale) - the absolute maximum input level to an analogue to digital converter (ADC) before it clips. Unlike analogue tape (for example), there is no 'soft clip' behaviour, so the PPM is used to indicate the waveform peaks. The PPM is also common in broadcast studios, because the maximum modulation depth (AM) or deviation (FM) allowed must never be exceeded. A full discussion of PP Metering is outside the scope of this article, but there is plenty of info on the Net for those who want to know more.
In fully digital systems such as audio test sets and other measurement instruments (oscilloscopes in particular), the peak, average and RMS values are generally calculated, with a calculation performed on each sample, providing an accurate result even with very difficult waveforms. My digital oscilloscope can be relied upon to give an accurate RMS value with very high crest factors (greater than 20 in some cases), where a true RMS meter using analogue processing (perhaps an AD737) will give the wrong answer because crest factors above 5 can cause internal overload.

The purpose of this application note is to demonstrate the various measurement types, so you can choose the one that is most likely to satisfy your needs. While true RMS for everything may initially seem like a good idea, it's not always the best choice. Cost is one potential issue, but settling time (especially for low-level signals) may mean that the only sensible choice is to use averaging. Then there are the times when you must know the peak value, and a peak detector is the only thing that will display the voltage peaks.
Elliott Sound Products | AN-013
Most electronic circuits will be seriously annoyed if the supply is connected with the polarity reversed. This is often announced by the immediate loss of the 'magic smoke' that all electronic parts rely on. On a slightly more serious note, irreparable damage is often caused, especially with supply voltages of 5V or more. The traditional reverse polarity protection circuit consists of a diode, wired in series with the incoming supply, or in parallel with a fuse or other protective device that will blow.
A series diode reduces the voltage available to the circuit being powered. If it's running on batteries, the voltage reduction can easily mean that a significant part of the battery capacity is unavailable to the circuit. 0.7V isn't much, but it's a real challenge if the circuit relies on a voltage of at least 5V, and 4 x 1.5V cells only provide a nominal 6V. A series diode may also dissipate many watts in a circuit that draws high current - whether permanently or intermittently.

A parallel diode has to be robust enough to survive the full short-circuit current from the source until the fuse opens. That usually means a very large and expensive diode. A smaller one can be used, but in 'sacrificial' mode. That means it will likely fail (diode failure is almost always short circuit), but it must be robust enough to ensure that it doesn't become open circuit during the fault period due to bond or lead wire fusing.

A relay can also be used, and this has the advantage of virtually zero voltage drop across the contacts. However, relay coils draw significant current, and this can easily exceed the current drawn by the circuit being protected. If the supply is a large battery that has on-demand charging facilities this isn't a problem, other than the small cost of running the relay. In many cases though, this is not a viable option.

The alternative is to use a MOSFET. In many cases, it's a matter of the MOSFET alone, with no requirement for any other parts. This works if the supply voltage is lower than the maximum gate-source voltage, but additional parts are needed with voltages over 12V or so. The advantage of the MOSFET is that the voltage drop is vanishingly small if the right device is selected.

It's often possible to use a BJT (bipolar junction transistor) for reverse polarity protection as well, but they don't work as well as a MOSFET and have several inherent disadvantages that make them far less suitable. For a start, the base must be provided with current so the transistor will turn on, and this is wasted power. A BJT cannot turn on as hard as a MOSFET, so the voltage dropped across the transistor is greater. While it will usually beat a diode (even Schottky) there is no real advantage because the MOSFET is a far better option.

In the drawings that follow, there is a section simply marked as 'Electronics'. It shows an electrolytic capacitor and an opamp, but may be anything from a simple audio circuit, logic gates (etc.) or a microprocessor. Current drain may be anything from a few milliamps to many amps, and you need to choose the scheme that best suits your application. This is not a design guide, but rather a collection of ideas that can be expanded and adapted as required.

A series diode is the simplest and cheapest form of protection. In low voltage circuits, a Schottky diode means that the voltage drop is reduced from the typical 0.7V down to perhaps 200mV or so. This is very much current dependent though, and at maximum rated current the voltage drop may exceed 1V for a standard silicon diode, or around 500mV for Schottky types. Only the diode is required - no other parts are needed - so it is by far the simplest and cheapest.
Figure 1 - Diode Protection, Series (Left), Parallel (Right)
While a series diode is dead easy to implement, as noted above there is a minimum 650mV or so voltage loss at low current, increasing with higher load current. With a 1A diode, the voltage loss will be close to 900mV at 1A, almost a volt reduction of the supply voltage. If the circuit is powered by batteries, this represents a serious loss of capacity, because around 900mW of available power is wasted for no good reason. If you have plenty of power to spare, or with high voltages (25V or more) the diode loss is insignificant.
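To put rough numbers on the wasted power, the sketch below assumes a 6V battery pack, a 1A load and a 0.9V diode drop (all illustrative figures):

```python
def diode_loss(v_supply, v_forward, current_a):
    """Power wasted in a series protection diode, and the fraction
    of the total supply power that it represents."""
    p_diode = v_forward * current_a
    p_total = v_supply * current_a
    return p_diode, p_diode / p_total

# 6V (4 x 1.5V cells), 1A load, ~0.9V drop across a 1A silicon diode
p, frac = diode_loss(6.0, 0.9, 1.0)
print(round(p, 2), f"{frac:.0%}")   # -> 0.9 15%
```

On a low-voltage battery pack, the diode alone throws away around 15% of the available power, which is why the MOSFET schemes below are attractive for battery equipment.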
A Schottky diode is better, but they are usually more expensive, and are not available for high voltages. For a 1A Schottky diode, you can expect to lose around 400mV at 1A. Schottky diodes have a forward voltage ranging from 150mV to 450mV, depending on manufacturing process, current handling rating and actual current. Maximum reverse voltage is around 50V, but reverse leakage is higher than standard silicon diodes. This may cause problems with sensitive devices, but usually not. The (more or less) typical voltage with a Schottky diode is shown in brackets. A series diode can be 'assisted' by a parallel diode on the equipment side if diode leakage is likely to cause problems. This is rarely needed or used in practice.

With a parallel diode (sometimes referred to as 'crowbar' protection), it must be rated for a higher current than the source can provide. If the voltage source is batteries (any chemistry), they can deliver extremely high current, so some means is needed to disconnect the circuit - preferably before the diode overheats and fails. Although diodes fail short-circuit in 99% of cases, this is not something that you'd want to rely on to protect expensive electronics. Some power supplies may object to a shorted output, and may current limit or fail.

A fuse is the easiest and cheapest way to disconnect the supply if it's connected in reverse, and the fuse must be rated to carry the maximum current expected by the circuitry. There is no voltage lost across the diode in this arrangement, but there is a small voltage lost across the fuse. This voltage drop is usually insignificant. Naturally, if the supply is connected in reverse, the fuse will (should) blow, and the diode may or may not survive. This means that the system must be checked and repaired if necessary should the supply be reversed at any time, including fuse and/or diode replacement. You may be able to use a 'PolySwitch' PTC (positive temperature coefficient) thermistor switch - this depends on many factors that need to be researched first.
While it may sound like a silly idea at first, a relay is an excellent way to provide reverse polarity protection. This is provided the voltage source can power the relay without reducing its capacity. In battery powered equipment, this is usually not an option, but it can be useful for equipment in cars or trucks, where the battery has high capacity and is continuously charged while the engine is running. A relay should not be used for any equipment that is connected permanently, as it will discharge the battery eventually.

As you can see below, the relay coil can only get current when the polarity is correct. With positive at the (positive) input, D1 is forward biased, and the coil receives about 11.3V which is more than sufficient for it to pull in. When the N.O. (normally open) contacts close, power is applied to the electronics. If the polarity is reversed, no current flows in the coil and the electronics are completely isolated from the supply because the relay cannot activate.
Figure 2 - Relay Protection
The advantage of a relay is that it can handle extremely high current with almost no voltage drop across the contacts. Relays are rugged, and can last for many, many years without any attention whatsoever. They don't need a heatsink (regardless of current drawn), and are readily available in countless configurations and for almost any known requirement. Automotive relays will also have already passed all the mandatory tests required, so can reduce the cost of compliance testing where this is a requirement.
The inherent ruggedness of a relay is a huge advantage in automotive applications, where 'load-dump' events are common. These occur when a heavy load is disconnected from the electrical system, and the alternator is unable to correct quickly enough to prevent over-voltage. There are other causes, and all automotive equipment must be designed to withstand significant over-voltage without failure. Relays can manage this with ease.

Relays are available with many different coil voltages (e.g. 5, 12, 24, 36, 48V), and there are models for any conceivable contact current requirement. Where the input voltage is too high for the coil, a resistor can be used to reduce the voltage to a safe value. An 'efficiency' circuit can also be incorporated (a series resistor with a parallel electrolytic capacitor) that pulses the relay with a higher than normal voltage to pull it in, then reduces the current as the cap charges to a value a little more than the guaranteed holding current (determined by the resistor). The holding current can be as low as 1/3 of the nominal coil current, sometimes less.
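The series dropper resistor mentioned above follows directly from Ohm's law. The coil figures below (a 12V, 270 ohm coil run from a 24V supply) are hypothetical, chosen only to illustrate the calculation:

```python
def coil_dropper(v_supply, v_coil, coil_resistance):
    """Series resistor needed to run a relay coil from a higher supply.
    At its rated voltage the coil draws v_coil / coil_resistance."""
    i_coil = v_coil / coil_resistance
    r_series = (v_supply - v_coil) / i_coil
    return r_series, i_coil

# Hypothetical example: 12V, 270 ohm coil run from a 24V supply
r, i = coil_dropper(24.0, 12.0, 270.0)
print(round(r), round(i * 1000, 1))   # -> 270 44.4 (ohms, mA)
```

When the supply is exactly twice the coil voltage, the dropper resistance simply equals the coil resistance, as the example shows. Don't forget the resistor's dissipation rating (here about 0.53W, so a 1W part would be sensible).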
MOSFETs have a very desirable feature. They all have an intrinsic reverse ('body') diode, which determines the voltage polarity, but when a MOSFET is turned on it conducts equally in either direction. So, when the diode is forward biased and the MOSFET is on, the voltage across the MOSFET is determined by RDSon (drain-source 'on' resistance) and current, and not by the forward voltage of the diode. This useful property has made MOSFETs the device of choice for reverse polarity protection circuits.

However, you must consider the fact that MOSFETs require some voltage between gate and source to conduct, and in a very low voltage circuit (less than 5V) you may not have enough voltage available to turn on the MOSFET. Logic-level MOSFETs can turn on with lower voltages than standard types, but are also more limited in terms of RDSon, and fewer devices are readily available - especially P-Channel types.

In the drawing, a resistor and zener diode are shown. These provide protection for the MOSFET's gate if there is any chance that the maximum gate-source voltage may be exceeded. While they can be omitted, it's generally unwise to do so. Should a transient spike exceed the gate's breakdown voltage (typically around ±20V), the MOSFET will be damaged, and will almost certainly conduct in both directions. This negates the protection circuit completely!

For equipment that is powered from batteries it is unlikely that a 'destructive event' will occur, but the MOSFET's gate may still be damaged under some circumstances. It appears unlikely, but a high reverse voltage (static for example) may cause breakdown if protection isn't used. Some MOSFETs have an in-built gate zener, and the resistor is then essential to prevent destructive current with voltages greater than the zener voltage.
Figure 3 - MOSFET Protection - N-Channel (Left), P-Channel (Right)
You can use N-Channel or P-Channel devices, depending on the circuit polarity and whether or not you can interrupt the earth/ ground connection without causing circuit misbehaviour. In an automotive environment, the chassis is the negative supply, and it's difficult or impossible to interrupt it. That means that the protection circuit must be in the positive supply rail, which is slightly less convenient because it usually requires a P-Channel MOSFET. These are usually lower power and current than their N-Channel counterparts. You can still use an N-Channel device, but it's more irksome and needs more circuitry (shown below).
If you use a P-Channel MOSFET, there is no interruption to the earth/ ground (negative) connection. This is useful with automotive electronics in particular. However, there are some limitations that you must be aware of. The most important (and the one most likely to cause problems) is the required gate-source voltage. This isn't an issue with automotive applications because 12V is available, but it's a concern for lower voltages.

Logic level (5V) P-Channel MOSFETs are certainly available, but as noted they are very limited compared to N-Channel types. They are also usually more expensive for equivalent current ratings, and many are only available in surface mount (SMD) packages. This does limit their usefulness in low voltage, high current circuits, where it's not possible or sensible to interrupt the negative rail (allowing the use of N-Channel devices).

Where the voltage is otherwise too low to turn on a MOSFET, there is the option of using a charge-pump circuit to bias on an N-Channel device. This adds complexity and cost, but is a viable option when other methods are unsuitable for any reason. The charge-pump is used to generate a voltage that's greater than the incoming supply (typically by around 10-12V or so), and this voltage turns on the MOSFET. The general idea is shown below, but the details of the charge-pump are not provided - it is a 'conceptual' circuit, rather than a complete solution. Protective diodes shown may or may not be necessary, depending on the circuit.
Figure 4 - N-Channel MOSFET With Charge Pump
There are many different ways the charge pump can be designed, and the circuit is outside the scope of this article. However, it must be arranged so that the charge pump itself cannot be subjected to reverse polarity. When power of the correct polarity is applied, the intrinsic diode in Q1 conducts and provides power to the charge pump and the rest of the circuit. Within a few milliseconds, the charge pump has produced enough voltage to turn on Q1, and the MOSFET turns on and bypasses its own diode. The voltage loss is determined purely by the on resistance of the MOSFET and the current drawn by the circuitry. An encapsulated DC-DC converter (with a floating output) can replace the charge pump if preferred.
Use of a BJT is appropriate for low current loads, but where the voltage may be too low for a MOSFET because there's insufficient gate voltage for it to turn on properly. In the examples shown below, there is about 125-150mV drop across the transistor with a load current of 40mA. The voltage drop is far less at lower currents. R1 must be selected to ensure that there is adequate base current to saturate the transistor. This usually means that you need to provide at least three and up to five times as much base current as you would calculate from the transistor's beta.

For example, a transistor with a gain (Beta or hFE) of 100 needs 400µA for 40mA load current, but you should supply no less than 5mA or the voltage dropped across the transistor will be excessive. In the drawing, the transistor is assumed to have a gain of at least 65 (from the datasheet), and the 2.2k resistor provides about 2mA base current - this keeps the loss below 50mV at 40mA. It is unrealistic to expect much better than this without the base current becoming excessive. The transistor will dissipate less than 10mW with the circuits shown. You can use a small signal transistor (e.g. BC549 or BC559) for low current loads.
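The base resistor selection can be sketched as below. The 4x overdrive factor is an assumption, picked per the 'three to five times' rule above; it happens to land close to the 2.2k used in the drawing:

```python
def base_resistor(v_supply, i_load, beta, overdrive=4, v_be=0.7):
    """Base resistor for a saturated BJT polarity-protection switch.
    Base current is forced to `overdrive` times I_load / beta."""
    i_base = overdrive * i_load / beta
    return (v_supply - v_be) / i_base

# Illustrative: 6V supply, 40mA load, minimum beta of 65 (datasheet),
# 4x overdrive to ensure hard saturation
r1 = base_resistor(6.0, 0.040, 65)
print(round(r1))   # -> 2153 (ohms; the nearest standard value is 2.2k)
```

Rounding to the nearest standard E12 value gives 2.2k, which draws fractionally less base current than calculated - acceptable here because the 4x overdrive already includes margin over the minimum beta.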
Figure 5 - PNP Transistor (Left), NPN (Right)
There is an inherent limitation with using a BJT, and that's the emitter-base reverse breakdown voltage. With most, the breakdown voltage is rated for around 5V, although it might be greater for some examples. That means that having an input voltage greater than 5V is probably unwise, because the emitter-base junction will be reverse biased. This causes degradation of the transistor's performance and may pass some reverse voltage to the electronics. A complete breakdown may pass the full reverse voltage to the electronics, resulting in failure. This issue appears to have escaped detection in most of the circuits I've seen.
An NPN transistor is supposedly better, because they usually have higher gain and therefore lower losses due to a higher resistance being used to supply the base. In practice the difference will be marginal at best. Like an N-Channel MOSFET, NPN transistors must be used in the negative lead and require that the negative input and chassis can be isolated. The same problem of reverse breakdown of the emitter-base junction applies.

As always in electronics, each of these circuits offers advantages and disadvantages. You need to choose the option that is most suitable for your application, based on the current required, available voltage and permissible voltage drop. In commercial products, cost may be an over-riding factor, often at the expense of better performance.

In some cases, the product may require survival when subjected to high pulse energy as part of the test and/or approvals process. This can be difficult to achieve with some of the mandated high-energy pulse tests used by various agencies worldwide, and it's also something that must always be considered in automotive applications, where 'load-dump' spikes can cause high voltage spikes throughout the vehicle's electrical system. Consequently, the info here will be no more than a starting point for some applications. Thorough testing is needed for any product intended for a hostile environment.

You also have to consider the likelihood (or otherwise) of reverse voltage being applied. In many cases, it's something that can only ever happen when the product is assembled, and if that's done in such a way as to all but eliminate errors, reverse polarity will never come about. Most products don't have internal polarity protection if they are powered from the mains. This is because once the equipment is assembled, there is no possibility that the polarity can ever be reversed, other than someone inexperienced trying to service it. Few (if any) products make allowances for errors made during servicing.

If your circuit can handle the voltage drop from a diode and draws low current, a simple blocking diode (standard or Schottky) is probably all that's needed. Don't assume that because the MOSFET circuit has the best performance it is automatically the best choice. That performance comes with increased cost and has its own special limitations. Good engineering should minimise cost and complexity, and provide the approach that best meets your design requirements.

Finally, never underestimate the use of a relay. They are one of the oldest 'electronic' components known (actually they're electro-mechanical, but that's beside the point). Their ruggedness and versatility are unmatched by any other component, and the fact that they are still used in their hundreds of millions is testament to that. The down side is their coil current, but that is often of secondary importance.
Elliott Sound Products | AN-014
Precision rectifiers have been discussed in AN001, and here another common circuit is described, used to detect the peak of an AC waveform. If the peak detection is to function on both positive and negative half cycles (and they can be very different), a precision rectifier is used in front of the peak detector. This is usually necessary when the signal is asymmetrical, something which is very common with audio signals. The circuits shown here all work on the positive peak only.
Peak detectors come in many different types, from very simple to rather complex. It all depends on the application, and how long the peak value needs to be retained. In some cases, it's just a matter of using a resistor (or current sink) to discharge the capacitor that holds the peak value, but in some cases the value has to be retained for a significant period with very low droop (slow capacitor discharge), and a separate discharge circuit is then necessary. This can be an electronic switch or a manual push-button, depending on the application.

The question that is most likely to be asked is "why?". It is a good question, because most electronics enthusiasts may never have a need for a peak detector, or have already used one without realising they've done so. Peak detectors are often used to capture transient events that may otherwise remain undetected, but can cause circuit malfunctions. They are also common in audio processing systems, in particular audio peak limiter circuits.

They can also be used to capture the instantaneous voltage peaks from a power amp, and may be used for analysis ("is my amp powerful enough?") or to activate a clipping indicator. They can be used in power engineering (e.g. mains powered circuits) to monitor the worst-case inrush current of a power supply, or to see if mains voltage transients exceed a given threshold.

So, while many readers will never need one, others will see an immediate application for a peak detector. The purpose of this application note is to provide some info so that the optimum circuit can be determined for any given requirement. Like other ESP app. notes, this is not intended as a project article. The circuits will work as intended, but changes will be needed to ensure that the circuit suits your needs.

While there are many circuits on the Net that claim to be peak detectors, many (if not most) are primitive, and cannot be considered to be precision circuits in any way. That's fine for non-critical applications, but it's not useful if you actually need a circuit that has predictable performance and an output that accurately represents the peaks of the input waveform.
There are many things that need to be considered when building circuits that hold a voltage for more than a few milliseconds. While things like PCB surface leakage and/or capacitor leakage are rarely an issue with audio, they become critical when a voltage is stored in a capacitor. High values generally can't be used because they require too much energy to charge, and the characteristics of high value caps are largely inconsistent with the requirements of peak detectors or sample-and-hold circuits, which are similar in many respects.

In cases where the peak value needs to be retained for even a couple of seconds, extreme care is needed to minimise the capacitor discharge. Even the surface resistance of a printed circuit board is enough to discharge a capacitor given enough time. For example, the time constant of a 100nF capacitor and 1 GΩ (1,000 Megohms) is 100 seconds, or 1.67 minutes. After this time, the voltage has fallen by 63.2%, to 36.8% of the original value stored. This combination is only suitable for a hold-up time of around 4 seconds for 2% accuracy. If you use a 10nF cap, these times are reduced to 10 seconds and 400ms respectively.

We also need to be careful about the type of capacitor used to store the peak voltage. Dielectric absorption (aka 'soakage') isn't an issue with an audio circuit (despite what you may see elsewhere), but it's critical in peak detectors, sample & hold circuits and anywhere else that an accurate and consistent voltage has to be retained. Polyester caps are suitable in this role for non-critical applications, but polypropylene is the cheapest readily available alternative to otherwise very expensive/ exotic dielectrics. There's more information about this property of capacitors in the Capacitors article on this site. Dielectric absorption manifests itself as a voltage 'rebound' after the capacitor is discharged, which can mask low level signals, making their detection either unreliable or useless.
| Type of Capacitor | Dielectric Absorption |
|---|---|
| Air and vacuum capacitors | Not measurable |
| Class-1 ceramic capacitors, NP0 | 0.6% |
| Class-2 ceramic capacitors, X7R | 2.5% |
| Polystyrene film capacitors (PS) | 0.02% * |
| Polytetrafluoroethylene film capacitors (PTFE/ Teflon) | 0.02% * |
| Polypropylene film capacitors (PP) | 0.05 to 0.1% |
| Polyester film capacitors (PET) | 0.2 to 0.5% |
| Polyphenylene sulfide film capacitors (PPS) | 0.05 to 0.1% |
| Polyethylene naphthalate film capacitors (PEN) | 1.0 to 1.2% |
| Tantalum electrolytic capacitors (solid electrolyte) | 2 to 3% |
| Aluminium electrolytic capacitors (fluid electrolyte) | 10 to 15% |
Some common types of capacitor are tabled above [ 1 ]. Those indicated with * are unverified, as little information could be located. Once upon a time, you could buy polystyrene caps in high values (100nF or more), but sadly they are no longer made other than as a special order for very exacting requirements. Polystyrene has very low temperature tolerance and they are much larger than other types for the same value. Low value (up to 10nF) polystyrene caps are still available. PTFE (Teflon) is also supposed to be good, but I could find little information.
The capacitor value is important. If it's around 10nF, it's easy to charge quickly even from opamps with low output current, but hold-up time is limited due to leakage resistance. A 100nF cap requires 10 times the energy to charge to the same voltage, so the current may be limited by the opamp if a very fast transient is to be captured, as the opamp may current-limit and not be able to charge the cap to the peak value in a high speed circuit. For a long hold-up time, C1 should be polypropylene, as that has a higher dielectric resistance than Mylar (polyester/ PET).
A diode is the basis for all peak detectors, but if used alone, the forward voltage means that any signal below 0.7V can't be monitored. An 'active diode' (using an opamp) as used in precision rectifiers solves this problem, but there are many other considerations. It may seem appropriate to use Schottky diodes to reduce the forward voltage, but these have comparatively high leakage and are unsuitable, although 'low leakage' Schottky diodes may be ok for circuits where droop of the stored voltage isn't an issue. The venerable 1N4148 has a rated reverse leakage current of 25nA at 20V, an equivalent resistance of only 800 MΩ. While that may sound like a high resistance, remember that 100nF and 1 GΩ has a time constant of 100 seconds, but the voltage will fall from 5V to 4.9V (2%) in just over 2 seconds.
That means that the total impedance needs to be a great deal higher than 1 GΩ if the value needs to be stored for more than 5 seconds or so. The voltage across the storage capacitor can't be measured with a multimeter, because even a digital meter with 10 MΩ impedance will discharge the cap appreciably within tens of milliseconds. With a 10 Meg load, a 100nF cap discharges from 5V to 4.9V in just over 20ms. An opamp needs to be used as a buffer to enable the stored peak voltage to be measured or processed. FET input opamps are necessary in anything but the most rudimentary circuits. In the following schematic, the signal source must be a low impedance, because it has to charge C1 directly via D1.
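The discharge arithmetic above is easy to check. The following sketch (using the example values from the text) computes how long a 100nF cap takes to sag from 5V to 4.9V through a given leakage resistance:

```python
import math

def droop_time(r_leak_ohms, c_farads, v_start, v_end):
    """Time for a cap to discharge from v_start to v_end through a
    leakage resistance: t = R * C * ln(v_start / v_end)."""
    return r_leak_ohms * c_farads * math.log(v_start / v_end)

# 1N4148 reverse leakage (~25nA at 20V) looks like roughly 800 Mohm
print(droop_time(800e6, 100e-9, 5.0, 4.9))   # ~1.6 s for a 2% droop
# A 10 Mohm multimeter connected across the same cap
print(droop_time(10e6, 100e-9, 5.0, 4.9))    # ~0.02 s (about 20 ms)
```

The same function can be used to size C1 against a total leakage-resistance budget for any required hold-up time and acceptable droop.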
Figure 1 - Simple Diode Detector
The simple detector is probably just fine if the voltages are fairly high, where the diode conduction error is small compared to the voltage being sampled. However, the voltage must not exceed the input range of the opamp, so that usually means a maximum of around 12V (assuming ±15V supplies). Unfortunately, this is rarely an option other than for very simple circuits where accuracy is not a major concern.
The opamp must be a FET or MOSFET (CMOS) input type, so input current doesn't discharge (or charge !) the storage capacitor. All opamps with BJT (bipolar junction transistor) inputs are unsuitable as a buffer. The venerable TL071 has a claimed input impedance of 1 TΩ (10¹² ohms), far higher than any bipolar transistor opamp. CMOS opamps such as the TLC277 offer the same, and it will be difficult to improve on this without using specialised (and expensive) parts. Impedances at this level require highly specialised PCB layout to minimise stray leakage, which may be far greater than that of the opamp's inputs.
The circuit is reset by pressing the button. This can also be done using an electronic switch, but the leakage resistance of that needs to be considered too. A CMOS switch (such as the 4066) would be alright for most circuits, but they do have a limited voltage range and the on resistance is fairly high, so the reset would need to be activated for several milliseconds to ensure a full discharge of C1. The leakage current of the 4066 is claimed to be 0.1nA at 10V (typical), representing a resistance of around 100 GΩ. In this (and many other) peak detection circuits, the limitation is the diode. The BAS45A suggested is a far better alternative than the common 1N4148, having an effective reverse resistance of 75 GΩ at 125V.
Note: Most glass diodes will show increased leakage if they are illuminated, so a lightproof cover may be necessary to ensure the leakage is maintained at its claimed figure. This is not something you normally have to worry about, but it becomes critical in high impedance circuits. The effect is not widely known for normal small signal diodes, so feel free to be a little surprised.
I ran some tests on a 1N4148 diode with a reverse voltage of 10V. At low light levels (below 10 lux) the resistance was 10 GΩ, and at my normal bench light level (1,200 lux) this fell to 2.5 GΩ. When the light level was increased to 18,000 lux, resistance fell to 670 MΩ. By way of comparison, autumn direct sunlight (in Australia) measured over 80,000 lux at 2pm on the day I ran the tests. Philips/ NXP rate the BAS45A leakage current at light levels of 100 lux or less.
You can use a typical 10 MΩ digital multimeter to measure very high resistances easily. Place the meter in series with the DUT (device under test), and apply a suitable voltage (say 10V). The meter may show 1V, so the current through the device is calculated using Ohm's law. 1V across 10 MΩ is 100nA, so the resistance of the external device can be determined using Ohm's law again. If the supply voltage was 10V, there must be 9V across the DUT, with a current of 100nA. Therefore, the resistance of the DUT is 90 MΩ.
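The measurement method just described reduces to two applications of Ohm's law, and can be wrapped up as a one-line calculation (figures from the text: 10V supply, 10 MΩ meter reading 1V):

```python
def dut_resistance(v_supply, v_meter, r_meter):
    """High-value resistance measurement with a multimeter in series:
    the meter reads v_meter across its own r_meter, and the same current
    flows through the DUT, which drops the rest of the supply voltage."""
    i = v_meter / r_meter            # current through the series chain
    return (v_supply - v_meter) / i  # resistance of the DUT

print(dut_resistance(10.0, 1.0, 10e6))  # ~9e7, i.e. 90 Mohm
```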
An active diode uses an opamp to effectively remove the diode offset. However, unless care is taken, the circuit has an undesirable characteristic, in that the opamp used will swing to the negative supply rail when the input voltage is lower than the stored voltage on the capacitor. This has two undesirable effects. Firstly, it means that the opamp must swing for a minimum of half the total supply voltage before it can do anything (such as recharge the holding capacitor), and this seriously limits the high frequency response.
Secondly, it means that the diode has a much higher than necessary reverse voltage, which increases the leakage current. It might not be very much, but we are generally looking for the lowest leakage possible so the hold-up time can be extended. High leakage anywhere means that hold-up time is reduced dramatically. In many cases, it's necessary to use a smaller capacitor than might be imagined, especially if very fast transients need to be captured. The following circuit assumes ±15V supplies.
Figure 2 - Active Diode Detector
The active diode circuit shown uses the diode inside the opamp's feedback loop to effectively remove the 0.7V offset that happens with the simple version shown above. This is a common circuit, and it works well enough in practice if long hold-up time and high speed are not essential. In some cases it will be advisable to include a resistor in series with C1 to limit overshoot that can occur if the input signal is too fast for the opamp. This means the opamp operates open loop (without any feedback) until the output can 'catch up' with the input. This can be a very real problem with measurement circuits where inputs may be a great deal faster than any audio signal.
There are still some minor issues with the circuit, with the main ones being the limited capacitor charge current and the fact that speed is restricted because the output of U1 swings close to the negative supply rail when the input voltage is negative. The opamp's slew rate means that it takes time for the output to swing from -13V or so up to the peak voltage, plus diode voltage drops. The opamp operates open-loop until the output voltage is the same as the instantaneous (positive) value of the input.
This version includes all the necessary extras to improve speed and minimise diode reverse current. The cap is charged directly from the opamp's output. This can supply enough current unless the input signal is particularly fast. In most cases, this would be the most appropriate version of a peak detector for audio frequency signals, and when the hold-up time doesn't need to be more than a couple of seconds. U1 does not need to be a FET input type because its input is not connected to the storage cap.
Figure 3 - Improved Active Diode Peak Detector
The additional diode (D2) ensures that the opamp's output cannot swing below the negative input voltage (plus the diode voltage drop), which improves the speed of the detector and minimises the voltage across the peak detection diode (D1). This helps reduce reverse leakage current, but it is not a real cure. The final piece of the circuit is R3 and D3, which bootstrap the detection diode. During the hold period, the same voltage exists on both ends of D1. Under that condition, there can be no leakage through the diode, and a 1N4148 will work perfectly even with several seconds of hold-up.
The values of R2 and R3 aren't entirely arbitrary. The 10k values shown work well in the simulations, but in a real-life circuit it may be necessary to adjust them for best accuracy. The effects are fairly subtle: for example, increasing R2 to 100k means the output will be ever so slightly greater than the input peak, while 10k makes it a similar amount lower. 10k is a fairly generalised value (and a nice round number), and 47k proved 'perfect' (at least as simulated), but the differences are a fraction of 1% and will be extremely hard to measure. The value of R3 makes little difference, but for convenience 10k was chosen.
Note that because the two opamps are within a feedback loop (via R2), the probability of transient overshoot must be considered if the input signal has a very fast risetime. If this is expected, you may use a resistor (R4) in series with C1. The value will need to be selected to allow you to capture the pulses expected, but minimise overshoot. The combination shown (10nF and 100 ohms) allows a pulse of 5µs or more to be captured accurately (better than 1%), but this is dependent on the opamps used and must be optimised to suit your needs. If U2 can provide the current, R2 can be reduced in value to improve speed (less than 2.2k is probably ill advised). Expecting extreme accuracy with high frequencies is unrealistic unless very fast opamps are used.
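The 10nF/100 ohm combination can be sanity-checked from the RC step response: after a flat-topped pulse of duration t, the cap reaches a fraction 1 − e^(−t/RC) of the peak. A quick sketch with the values from the text:

```python
import math

def captured_fraction(pulse_s, r_ohms, c_farads):
    """Fraction of the peak voltage captured by C1 when it charges
    through R4 for the duration of a flat-topped pulse."""
    return 1.0 - math.exp(-pulse_s / (r_ohms * c_farads))

# 100 ohms and 10nF gives tau = 1us; a 5us pulse is five time constants,
# so the stored voltage is within 1% of the true peak
print(captured_fraction(5e-6, 100.0, 10e-9))  # ~0.993 (about 0.7% low)
```

Note that this models only the passive RC path; opamp slew rate and overshoot (the reason R4 exists) still have to be verified on the bench.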
The idealised case for the output waveform is shown next. The circuit's reaction is fast enough to ensure that the voltage reaches the peak value on the first cycle. Direct drive from the opamp's output is only usable at relatively low frequencies (typically below 10kHz sinewave, or a pulse wave with slower than a 15µs risetime). A high-current, high-speed charge circuit is shown in Figure 5 if a large storage cap is required, or where very high speed peak detection is necessary. An opamp with a fast slew rate will be required for U1 to allow high speed operation.
Figure 4 - Peak Detection Waveform
There are 3 bursts of a 1kHz sinewave signal, each lasting 2ms (2 cycles), and with each having a gap of 3ms before the next input burst. Negative values are not processed. Inputs are at 100mV, 1V and 2V peak. You would not expect the stored voltage to change during the gap (no signal), but a simulation shows that there is a very small drop in voltage over the period where there is no signal. It's only about 30µV from a 1V input, and that can be ignored. Naturally, if the charge is stored for longer the voltage will fall further.
Based on the simulation, which includes diode and opamp leakage, but not capacitor, PCB or switch leakages, a 2V peak stored by a 10nF cap will fall by less than 10mV over a 2 second period. With careful design this should be realised in practice. That is an overall accuracy of just under 0.5% for a 2 second holding time. A larger capacitor (e.g. 100nF) can be used to improve this.
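That droop figure translates directly into a leakage current budget, since I = C·dV/dt. A small sketch with the simulated numbers (10mV droop from 2V over 2 seconds on 10nF; the 100nF scaling is my extrapolation):

```python
def leakage_for_droop(c_farads, dv_volts, dt_seconds):
    """Total leakage current implied by a given droop: I = C * dV/dt."""
    return c_farads * dv_volts / dt_seconds

i_leak = leakage_for_droop(10e-9, 10e-3, 2.0)
print(i_leak)             # 5e-11 -> a total leakage budget of about 50pA
print(10e-3 / 2.0 * 100)  # 0.5 -> droop as a percentage of the 2V peak
# The same 50pA into a 100nF cap gives ten times less droop (~1mV in 2s)
```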
Figure 5 - High Current Peak Detector
There may be occasions where you need to provide a high capacitor charging current. This will be the case if you are attempting to catch very fast transients, or if the storage cap has to be much larger than normal to obtain a long hold-up time. Adding the transistor allows the peak current into C1 to be far greater than most opamps can provide. The diode (D3) is required, because without it the transistor's base-emitter junction may be forced into reverse breakdown.
The transistor is not necessary in most cases, even with relatively large values for C1. However, the circuit will be restricted to low frequency operation only, with a typical upper limit of around 1-10kHz sinewave, depending on capacitor size. The output waveform doesn't change with or without the transistor, but R4 needs to be chosen carefully to ensure minimum overshoot. You should normally expect around 1% or better accuracy, but that means that optimal component selection is needed, and lots of testing to verify performance.
Note that the connection point of C1, D1, U2+In and the reset switch should either have a PCB guard ring connected to U2Out, or be joined in mid-air to minimise surface leakage. There is information in the LF13741 BiFET Opamp data sheet [ 3 ] on how to add a guard ring if you don't know what that is or how to go about it.
It's not every day that you will need a peak detector that can retain the peak voltage captured for an extended period. In most cases, the Figure 1 circuit may be all that's necessary, and although it has a 700mV offset, that often doesn't matter. The other circuits all have better performance, and the version shown in Figure 3 is sufficient for the vast majority of precision applications. In all cases, you will need to verify that the circuit performs appropriately for your needs, and a precision rectifier may be needed in front of the peak detector for asymmetrical (or unknown) waveforms.
Where FET input opamps are required, the TL071 is recommended for most low speed applications, as it's difficult to beat without spending a great deal more for a precision part. You need to be aware that all opamps have some input DC offset, and for high precision it will be necessary to use opamps that provide an offset null facility. For example, this is available in the TL071, but not the TL072. The datasheet for your chosen opamp will provide the details of how to connect the offset null. In most cases, opamps with offset null facilities are single types, although some 14 pin dual opamps also provide the connections.
In all the examples shown, the 'attack' time (the time needed to charge C1) is close to instantaneous (opamp permitting), but this is not always desirable. Where a slower attack time is needed, the resistor (R4) in series with C1 as shown in Figures 3 and 5 can be increased in value to slow the charge rate. For very fast risetime input signals, R4 is essential to minimise overshoot that may cause the stored value to exceed the actual peak value by 5-10% or more. The resistor value needs to be selected based on your specific requirements, and you can use a pot (or trimpot) to adjust it for the optimum attack time. Overshoot is caused by the finite speed of the opamps, which are in a feedback loop.
No discharge resistor has been shown, because these circuits are true peak detectors with an intentionally long hold-up time. Where a defined voltage decay is needed, a resistor is placed in lieu of (or in parallel with) the 'Reset' button. The value depends on your application. The time constant can be worked out for both attack and decay using the standard formula ...
t = R × C    where t is time in seconds, R is resistance in ohms, and C is capacitance in farads
Remember that 1 'time constant' means that the voltage has risen to 63.2% of the maximum, or fallen to 36.8% of the peak. To make it easier to work out, use resistance in megohms and capacitance in microfarads. This gives you the answer in seconds. For example, 1 MΩ and 220nF (0.22µF) has a time constant of 220ms. In some cases, the resistor value needed may be extremely large (10 MΩ up to 1 GΩ or more). If this is the case, it is usually better to increase the value of C1 so a lower resistor value can be used.
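As a worked example of the formula, a short sketch (the megohms-and-microfarads shortcut is just a units cancellation, since 10⁶ × 10⁻⁶ = 1, giving the answer directly in seconds):

```python
import math

def time_constant(r_ohms, c_farads):
    """RC time constant: t = R * C, in seconds."""
    return r_ohms * c_farads

tc = time_constant(1e6, 0.22e-6)  # 1 Mohm and 220nF, as in the text
print(tc)                         # 0.22 -> 220 ms
print(1 - math.exp(-1))           # ~0.632: risen to 63.2% after one tc
print(math.exp(-1))               # ~0.368: fallen to 36.8% after one tc
```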
As with all the Application Notes on the ESP site, this is intended to provide you with the basics, alert you to potential problems, and give you a starting point for further research. These are not construction projects, so opamp types (and pin numbers) are not shown, and nor are power supplies or supply bypass caps. The latter are essential in any real circuit, and the value depends on the demands made of the electronics.
Elliott Sound Products | AN-015
Every circuit made doesn't necessarily need input protection, but where it's included it makes sense that it should actually work. It's very common to see input protection schemes that use diodes from the input to the power supply rails. While this can work well, there are circumstances where it not only doesn't provide protection for the input stage, but it can destroy the rest of the circuit as well. Admittedly, such occurrences are rare and somewhat unusual, but that does not mean they can't (or won't) happen. I know for a fact that they can (and do) happen!
Look in almost any datasheet that provides 'application examples', and whenever input 'protection' is shown (which isn't as common as you might hope), it will almost invariably use low current diodes from the input(s) to the supply rails. A limiting resistor may or may not be included, with the latter more likely. This is fake protection - it will only provide the most basic protection for obviously foreseeable connection errors, but will do nothing to protect an input circuit that's accidentally been connected to a speaker output. This does happen, and probably far more often than most people might think. Perhaps surprisingly, using bigger diodes (e.g. 1N4004 or similar) only makes matters worse.
Consider the input to a PC oscilloscope adapter, as shown in Project 154. Because of the way I designed it, it's pretty safe even if the input is connected to a high voltage AC or DC supply, because there's a capacitor to block DC and a 100k input resistor that limits the current. Even if 400V were to be applied to the input, there will be a brief voltage pulse as the input cap charges, but the peak current is limited to 4mA. Even a high AC voltage won't hurt it, because the battery is a low impedance and can absorb a small 'charge' current (although it will not actually recharge).
This doesn't stress anything for very long, and it will survive. However, there are countless circuits in magazines and on the Net where no such limiting resistor is included, and in many cases the circuit may be such that it could easily be connected to a high voltage supply, either by accident, due to a component failure, or because the user doesn't understand that (for example) 5V circuitry such as microcontrollers or analogue to digital converters really don't like high voltages, and will show their displeasure by failing - usually catastrophically.
There is a strong likelihood that other low voltage circuitry will be damaged as well, and it could spell the end of a project, requiring a complete re-build. This is certainly not something that happens regularly, but it's unreasonable to expect that it will not happen on occasion. The user is left wondering how so many parts were fried, even though there are protection diodes in place. In some cases you may even be unaware that the worst-case 'protection' system is in place, because it's sometimes included in ICs (the datasheet will usually indicate that it's present). This is generally included for ESD (electrostatic discharge) protection, and being integrated, the diodes are small and very limited in current. I have heard of a complete multi-channel mixer that had most of its opamps destroyed because someone mistook the speaker output jack for the line output jack on a guitar amplifier.
A protection topology has to be chosen to suit the specific needs of your circuit. If you only need to protect against ESD (electrostatic discharge), the voltage may be high (several thousand volts is not uncommon), but the available current is low because ESD 'events' are limited by the circuit capacitance, which usually includes a person. Standard ESD tests assume a 'human body model' having a capacitance of around 100pF in series with 1,500 ohms [ 1 ]. The test voltages range from 2kV to 8kV, so worst case input current ranges from 1.33A (2kV) up to 5.33A (8kV). While 1N4148 diodes can handle the lower current easily, they may not survive 5.3A, even if it's very brief. At a test voltage of 2kV, the current is greater than 500mA for around 150ns. With 8kV, that's extended to 350ns.
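The human body model figures can be reproduced with a couple of lines. This sketch computes the peak current and the time the exponential discharge stays above 500mA, for the bare model (100pF through 1,500 ohms, with no additional limiting resistor); note the timescale - with a 150ns time constant these events are over in well under a microsecond:

```python
import math

R_HBM, C_HBM = 1500.0, 100e-12   # human body model: 1.5k in series with 100pF

def peak_current(v_test):
    """Worst-case initial current: the full test voltage across R_HBM."""
    return v_test / R_HBM

def time_above(v_test, i_threshold):
    """The discharge current is i(t) = Ipk * exp(-t/RC);
    solve for the time at which it falls to i_threshold."""
    tau = R_HBM * C_HBM
    return tau * math.log(peak_current(v_test) / i_threshold)

print(peak_current(2000.0))     # ~1.33 A at 2kV
print(time_above(2000.0, 0.5))  # ~1.5e-7 s (about 150 ns above 500mA)
print(time_above(8000.0, 0.5))  # ~3.5e-7 s (about 350 ns above 500mA)
```

Adding an external RLIM in series increases the total resistance, which lowers the peak current but also lengthens the time constant proportionally.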
The input resistor RLIM is expected, but isn't always used. It's also a compromise, because the resistance has to be low enough so as to not generate excessive noise, yet needs to be large enough to ensure the current is limited to a safe value. With 1.5k as shown, peak current is half the theoretical 'worst case' value for an ESD test. The value of the input capacitor (C1) has not been specified, because it's dependent on the circuit's usage. Low values may offer better protection because the peak current pulse may be shorter (for very low values at least), but for low impedance, low frequency circuits it needs to be fairly large.
In the following circuits, an opamp is shown as the 'input device' that requires protection. In reality, it could just as easily be a small signal MOSFET, an ADC (analogue to digital converter) or any other IC or active device. Examples are shown for dual supply and single supply operation. Single supply input ICs are a little easier to protect than those using a dual supply. The examples also include an (optional) input capacitor, which may or may not be essential, depending on the purpose of a circuit. Where single supply inputs are used, the input cap is almost always needed so the source doesn't short circuit the input biasing.
While component destruction may be common due to ESD transients or other events, it's not always immediately apparent. During testing of a discrete transistor, I found that a single impulse left the transistor in a working condition, but its performance had deteriorated. The gain was lower, and although not tested at the time, I'd also expect noise to be increased. Further 'events' caused the gain to drop again, and it took several test cycles before the degradation would have been immediately apparent.
In the meantime (after perhaps two or three test cycles), in a complete circuit I would expect to see a small reduction in AC voltage gain, but likely a disproportionately large distortion (and noise) increase, because open loop gain is reduced and feedback is not as effective. The average user (and quite possibly any user) could be unaware of this, but left wondering why the sound quality just doesn't seem 'right'. This insidious degradation could continue over a period of time before being positively identified as a fault. ICs usually save you the trouble - a single transient event will kill a FET input opamp first try (I know this because I did it several times while running tests).
It's important to understand that virtually no input protection scheme will provide any useful safeguard against the input being connected to the mains AC supply. The mains is at a very low impedance and can provide more than enough current to blow up almost anything that's not rated for mains input. Although it's very unlikely that anyone would be silly enough to expect an electronic device to survive a direct connection to the mains supply, it's worth mentioning 'just in case'. Devices intended to measure/ monitor mains voltages must be designed accordingly, otherwise they simply go bang !
One of the reasons that the 'traditional' protection circuit is flawed is that power supply regulator ICs are designed to do one thing - provide output current at the specified voltage. They cannot sink (absorb) current that's provided at their output terminal by a fault, and without any restraint the output voltage can easily be forced to a dangerously high level.
The 'traditional' schemes are shown below, but unlike many you'll see, they include RLIM, which helps a little. Should a high input voltage (of either polarity) be connected to the input, the appropriate diode conducts and the input is protected. Well, not always. What happens if the circuit shown is inadvertently connected to a +35V DC supply (a power amp's supply rail, perhaps)? The diode will conduct, but it will force current back into the supply rail. This is not a problem if the input capacitor is present, but DC coupled circuits that do not use a capacitor are at some risk. AC voltages above the supply rails can still cause havoc even if the capacitor is present, provided its value is high enough. 10µF or more could easily cause problems, and if the input is from a power amp, the frequency may be high enough to make the capacitor irrelevant. A 10µF cap has a reactance of only 16 ohms at 1kHz.
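The reactance figure quoted is straightforward to verify, since Xc = 1/(2πfC):

```python
import math

def reactance(f_hz, c_farads):
    """Capacitive reactance: Xc = 1 / (2 * pi * f * C), in ohms."""
    return 1.0 / (2.0 * math.pi * f_hz * c_farads)

print(reactance(1000.0, 10e-6))   # ~15.9 ohms for 10uF at 1kHz, as quoted
```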
A circuit that runs from ±15V will not be happy if one rail (or both with AC) is suddenly raised to more than 30V, and further down the line, there's a voltage regulator that now has over 30V on its output pin. Unless there is a diode across the regulator (as shown in all ESP regulator designs), the regulator IC will be reverse biased and will probably fail. Even if the diode is present, the maximum operating voltage of the IC may be exceeded if the fault condition is sustained, leading to destruction.
Figure 1 - Traditional Over-Voltage Protection
The standard arrangement provides a false sense of security, and can lead to catastrophic failures. In a great many cases, the vulnerability of the circuit will never be tested, so a product can have a built-in failure mechanism that only a few people will ever find. Many products will have a 'user manual' that points out that "incorrect usage voids any warranty". The unfortunate user who failed to realise that some high voltage was present is left with a dead unit, with no hope of restitution.
For single supply applications (which are typically powered from a 5V supply), the diode to ground will usually provide reasonable protection provided the current is limited, but no such protection is offered if a DC coupled input is connected to a DC voltage of more than 5V. Even 12V from an opamp supply may be enough to cause damage if current limiting is not provided. Remember that the input capacitor will prevent long term over-voltage from causing havoc, but only if its voltage rating is high enough to withstand the applied voltage. Naturally, this doesn't apply if the input is AC, whether from the secondary of a transformer, the output from a power amplifier, or some other source of AC at any frequency. Consider the following (with RLIM not installed) ...
Figure 2 - Over-Voltage Protection (Epic) Fail
With the input voltage shown (roughly ±35V peak) and a more-or-less standard input 'protection' circuit, it takes less than 3 cycles (3ms) to 'pump' the supply rails up to over ±30V, even with a total nominal load of 15mA on each supply (rising to 30mA). Will the opamp survive this? What about the regulator, which has an output voltage perhaps 10V greater than the input voltage from the rectifiers? Some may survive (especially if the regulators have reverse diodes, which pass the voltage rise back to the input, as seen in all ESP designs), but many will not. Larger bypass caps (Cb+ and Cb-) slow the process down, but do not fix the problem.
If an input limiting resistor is included the circuit will work properly, but only if the value is high enough. Even 100 ohms provides minimal real protection, and it needs to be at least 1k, and preferably 1.5k as shown in the other examples.
It has to be admitted that the likelihood of this happening is small, but it's still real. Commercially produced equipment is used by 'ordinary' people (i.e. those with no electronics knowledge), who will be unaware that you must never connect low-level circuitry to the outputs of an amplifier (whether by accident or otherwise). Even dedicated hobbyists may do it accidentally, but they will be able to repair the damage. The average consumer is left with a piece of junk that no longer works, there's no warranty, and not many people around any more who can fix it.
A better method is shown next. The power ratings for the zeners are determined by the level of protection required and the series input resistance RLIM. In many cases, the latter will be fairly low (around 100 ohms or so, rather than 1.5k as shown), and the zener diodes will clamp the input voltage even if operating at several times their continuous current rating. This cannot be maintained for long of course, because the conducting zener will overheat and fail, probably along with the input resistor. This is a cheap and easy repair, something a 'handyman' (or woman) can likely do themselves.
Figure 3 - Alternative Over-Voltage Protection
More importantly, the remainder of the circuit is safe. The input resistor will (hopefully) fail first, but even if a zener fails, like all semiconductors it will fail short circuit. Now, only an input resistor and a pair of zener diodes need to be replaced, and not the entire circuit and power supply. Naturally, this scheme is not completely foolproof (apparently fools are too ingenious), but it's a lot better than the traditional scheme. The zener voltage needs to be selected carefully to ensure that the input signal isn't distorted. For the single supply version, the zener would probably be 5.1V to suit a 5V supply.
There are several things you need to be aware of though, and these can be a real problem for high impedance circuits that are expected to operate at high frequencies. The biggest issue is the capacitance of the zener diodes. Where a 1N4148 diode has a capacitance of around 4pF, the junction capacitance of zeners is often not specified. It is a great deal higher than that for small-signal diodes (e.g. 1N4148), and it also depends on the zener voltage. Low voltage zeners have higher capacitance than high voltage versions of the same family.
For example, a 5.1V zener may have a junction capacitance of over 100pF, while a 20V version could be as low as 20pF [ 2 ]. This is often specified at a particular reverse voltage (2V for example), but the capacitance is voltage dependent and increases as the reverse voltage is reduced. So, while zeners are fine for low impedance circuits, they may cause premature high frequency rolloff once the impedance gets much over 22k or so. The arrangement shown was used in the Project 96 phantom power adaptor for microphones, because it's the only way to be certain that damaging transients cannot be delivered to the microphone preamp's inputs.
All is not lost though. It's possible to have a high impedance input that is well protected against most possible over-voltage conditions. There are limits of course, because small-signal diodes with low capacitance are also limited to relatively low current. Even a 1N4148 (or the low capacitance version, the 1N4448) can withstand 1A for one second, or 4A for 1µs. The voltage across it will be a great deal higher than the nominal 650mV normally expected though, and this has to be considered. It's obvious that the supply rails must be maintained at a safe voltage for the IC, and for its inputs. While many ICs have at least some degree of input protection built in, many don't. The specifications will state that (for example) the inputs must be maintained within the range from -Vee - 0.3V to +Vcc + 0.3V or the circuit may malfunction.
Figure 4 - Combination Over-Voltage Protection
The arrangement shown above can achieve everything needed, but it now requires four diodes. However, if you really need to protect the inputs from potentially dangerous voltages, then it's a small price to pay to achieve reliability. The advantage is that low capacitance diodes can be used in series with the zeners, so their relatively high capacitance is isolated from the input circuitry. This improves high frequency response in high impedance circuits. The current limiting resistor is still very important, and needs to be selected to suit the expected worst case input voltage.
The zener diodes (ZD1 and ZD2) would typically be selected for around 2-3V less than the supply rail voltages. For ±15V, 12V zeners are pretty much ideal. Zener diodes usually have a fairly high capacitance, and this is important for high impedance applications, as it will cause premature high frequency rolloff. Expect up to 40pF for 10V zeners, which is reduced to ~20pF when two are in series (back-to-back).

You may think that using Schottky diodes would be a good idea, but their capacitance is typically quite a bit higher than 'ordinary' small-signal diodes, and they have much higher leakage which may cause distortion. You can expect around 7pF junction capacitance for BAT43 diodes (30V, 200mA continuous). It may not sound like a great deal, but with a 100k source impedance, a pair of BAT43 diodes will cause the -3dB frequency to be only 114kHz, assuming there is zero stray capacitance to reduce it further. This is not an issue for audio, but for a test instrument it may be very limiting.
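As a sanity check, the rolloff quoted above follows from the standard first-order formula f = 1 / (2πRC). This is only a sketch: it assumes the two clamp diodes (about 7pF each) appear in parallel from the signal's point of view, giving roughly 14pF in total.

```python
import math

def f_3db(r_source_ohms, c_farads):
    """First-order -3dB corner for a source resistance loaded by shunt capacitance."""
    return 1.0 / (2.0 * math.pi * r_source_ohms * c_farads)

# 100k source, two assumed 7pF BAT43 junctions in parallel
print(f"{f_3db(100e3, 2 * 7e-12) / 1e3:.0f} kHz")   # ~114 kHz
```

The same function shows why zeners alone are worse: ~20pF (two 10V zeners back-to-back) with the same 100k source gives a corner of only about 80kHz.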
Input protection seems like the simplest thing in the world - until you examine all the possibilities of things that may go wrong. Circuits with an input coupling capacitor fare a little better, because a high voltage applied to the input will generate a high current, but only for a short time (assuming that the capacitor is rated for the expected worst case input voltage of course). An under-specified cap may fail or suffer such high leakage that damage is caused anyway. Including a protection circuit that doesn't actually protect against foreseeable accidents isn't helpful - especially if it's capable of causing further damage within the equipment.

In most cases, it will be alright to use the 'traditional' scheme shown in Figure 1, but you must add zeners directly across each supply rail. Obviously they must have a breakdown voltage that's greater than the supply voltage. If you have ±15V supplies, you need to use 16V zeners, which will have a typical voltage range from 15.3V to 17.1V. Zener diodes are not precision components, and 5% tolerance is probably the best you can hope for.

Input protection is now an essential part of any project that connects to external sources that may be powered by a switchmode power supply. This is described in detail in the article on low voltage external switchmode power supplies (External PSUs). These commonly have a 'floating' 50/60Hz voltage present at the output, that is around 50% of the mains voltage. The available current is small, but more than sufficient to damage even a discrete transistor's base-emitter junction. Integrated circuits are even more vulnerable because the transistors are physically much smaller and are more easily damaged. In case you were wondering, this is something I have physically tested, so it's a fact, not a hypothesis.

Ultimately, the level of protection provided depends on the application. Not everything needs a very high level of protection, and in some cases including it may degrade the circuit's performance. This will most commonly be in situations where the added capacitance of the diodes causes a high frequency rolloff with high source impedances, but diode leakage can also be an issue in some cases. If the lowest noise is desired, adding series resistance is not the answer, because the resistor contributes noise of its own. I usually don't include input protection circuits because hi-fi preamps, electronic crossovers and many of the other projects will be permanently wired, usually in such a way as to make it extremely unlikely that protection will ever be required.

However, if you do decide to include protection, it's important that it will actually work. A 'protection' scheme that can destroy the entire circuit is neither useful nor helpful. You will be lulled into a false sense of security, which doesn't help anyone. Before you embark on any protection scheme, make sure that you test it thoroughly in the way it will normally be used, and be prepared to make changes to ensure that it does its job even if the owner does something really stupid (most won't, but rest assured that someone will!). If the protection degrades the performance or causes some other undesirable anomaly, be willing to make other changes.

For example, if the first opamp must have a high input impedance and minimal capacitance, isolate its power supply with diodes, use a resistor from the output to limit current into following circuitry (with zener diodes to ground), and install it in a socket. Yes, the opamp will blow up if someone does something particularly nasty to it, but with careful design the remainder of the circuit will survive.

In general, it's a particularly (and spectacularly!) bad idea to use the same type of connector for inputs and outputs. For example, a certain British manufacturer of guitar amplifiers also makes (or made) rack mounted amplifiers, and used 6.35mm (1/4") jack sockets for both inputs and outputs, all in a neat row together. The opportunity for a mix-up is glaringly obvious, and no input protection is provided at all! There's also a litany of other design flaws in some, but that's outside the scope of this article.
+ +![]() | + + + + + + + |
Elliott Sound Products - AN-016
Every so often you'll need to measure resistance that is well beyond the range of your digital multimeter's ohms measurement capabilities. This might be measuring the reverse resistance of a diode in a precision peak-hold circuit, or verifying that there is no leakage across a printed circuit board. Most multimeters extend to perhaps 20MΩ or so, with a few (typically more expensive bench types) able to measure as much as 200MΩ. A very ordinary 1N4148 diode has a (datasheet) reverse resistance of around 800MΩ, and that's well outside the ability of all but the most expensive laboratory instruments.
This technique is described very briefly in AN-014, but it's potentially so useful that it was decided that it deserved an app note of its own.

Normally, very expensive laboratory instruments are used to measure very high resistances. These include the electrometer [ 1 ] and 'source-measure' units (SMUs). Both are well outside the scope of the home workshop, and few professional workshops will have anything of the sort either. It's not often that you need to make these measurements on very high resistance devices, so it should come as no surprise that there's not a great deal of useful information available.

Multimeters (of the digital kind) inject a known current into the external resistor, and measure the voltage across it. This is why many digital meters will show the forward resistance of a diode as (say) 0.55kΩ - that is not the resistance, simply the forward voltage. Not all meters do this by default though, and many have a separate 'diode test' function which shows the voltage directly.
Figure 1 - Traditional Resistance Measurement
The drawing above shows how the resistance is measured. Most meters have multiple ranges (or are auto-ranging), so I've just shown a single range, suitable for measuring from zero to 1.999kΩ. The '1.999k' is what you see with a typical 3½ digit meter - the most significant digit in such meters can only be a zero or a one.
A current of 1mA is applied, so the meter reads the voltage and displays the result as resistance. The maximum voltage that can be displayed is 1.999V, and a 1k resistor will show 1.000kΩ because it has a voltage of 1V across it. Of course, 1V at 1mA equals 1k (by Ohm's law). The maximum resistance you can measure depends on the meter, but most meters will 'top out' at around 20-40MΩ or so. Some bench meters can measure up to 200MΩ.
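The ranging behaviour just described can be sketched as a toy model. The 1mA test current and 1.999V full scale are the example values from the text, not any particular meter:

```python
def displayed_resistance(r_ohms, i_test=1e-3, v_full_scale=1.999):
    """Model of a constant-current ohms range: the meter forces a known
    current through the resistor and reads the resulting voltage."""
    v = r_ohms * i_test
    if v > v_full_scale:
        return None              # over-range: the display shows 'OL'
    return v / i_test            # volts mapped straight back to ohms

print(displayed_resistance(1000))    # 1000.0 (displayed as 1.000k)
print(displayed_resistance(2200))    # None (over-range on this range)
```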
Given the above, you may well wonder how it's possible to measure a resistance of 1GΩ or more, as I have done for 1N4148 diodes (amongst other things). Obviously, no affordable multimeter can measure that much resistance, but with some trickery it can! The meter is used on its voltage range, and connected in series with the reverse biased diode. Then a known voltage is applied (say 10V DC), and the meter will show a reading of perhaps 100mV. Note that measurements must use DC, although AC measurements are theoretically possible. However, it will be extremely difficult to ensure that no AC noise is picked up by the meter, so the measurement could easily be wrong by an order of magnitude!

Almost all digital multimeters have a 'DC volts' input impedance of around 10MΩ (most of mine measure 11MΩ, so we'll use that for this exercise), so a voltage of 109mV across 11MΩ means the current is 9.91nA. The remainder of the voltage is across the diode, which must also be passing 9.91nA. If the applied voltage is 10V, that works out to a total resistance of just over 1GΩ (10V / 9.91nA = 1GΩ). In the figure below, the 11MΩ meter resistance has been subtracted, giving the external resistance as 998MΩ.
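The arithmetic above is simple enough to wrap in a few lines, which also makes it easy to repeat for other readings. The 11MΩ input resistance is the measured value quoted in the text; substitute your own meter's figure:

```python
def dut_resistance(v_supply, v_meter, r_meter=11e6):
    """Series-voltmeter method: the meter's input resistance acts as the
    current sensor, and the DUT drops the rest of the supply voltage."""
    i = v_meter / r_meter            # loop current through DUT and meter
    return v_supply / i - r_meter    # total resistance, less the meter itself

r = dut_resistance(10.0, 0.109)      # 10V supply, 109mV shown on the meter
print(f"{r / 1e6:.0f} MΩ")           # ~998 MΩ
```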
Note that for very high resistance (1GΩ or more) you need a meter that can measure down to 10mV accurately. Some meters have a millivolt range that might be usable, but you may find that the meter expects a low source impedance when measuring on the millivolts range. For example, my bench meter has a small DC offset when used on the millivolts range, which is likely due to the use of an internal amplifier which has a small (about 4mV) DC offset that makes it unusable for this application.

Some meters have different input impedances depending on the range. This is easily measured with switched range meters, but it's not so easy if the meter is auto-ranging. Because the end result of a measurement using this technique is such a high resistance anyway, a variation of ±1MΩ is probably neither here nor there. Although I recommend a test voltage of 10V, you can use higher voltages if necessary. Be very careful to ensure that the voltage is less than the expected breakdown voltage of the component you are testing, and be especially careful (for your own safety) if particularly high voltages are used. The power supply used for the test should have current limiting (so it's not damaged by an accidental short-circuit), or use a series resistor to limit the maximum current if you accidentally short out the supply. As explained below, regulation has to be excellent to enable accurate measurements.
Figure 2 - Voltmeter Resistance Measurement
Extreme precision is not necessary (one could subtract the 109mV or 11MΩ, for example, as I've done here), but the end result is 'good enough' for most measurements. This is particularly true since such high resistance values may be dependent on temperature and/or humidity, and even the smallest amount of moisture can affect the reading dramatically. I measured between tracks of a 50mm length of Veroboard, and when dry I obtained 6.2mV (almost 18GΩ), but just breathing on it dropped the resistance to well below 1GΩ (albeit briefly).
C1 (10nF, 100V) is optional. Surprisingly, it doesn't have to be an extra-low-leakage capacitor, because it's in parallel with the 10MΩ or so of the meter. Provided it has better than 100MΩ of dielectric resistance (and most ordinary caps will be far better than that) it won't affect the reading. The charge time isn't as great as you may expect (typically a couple of seconds), but it will help to remove any noise which would otherwise make the reading unstable. The low frequency limit is determined by the cap value and the meter's input impedance (Rint). With 10nF, it's around 1.6Hz, so most mains noise should be attenuated quite well.
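The 1.6Hz figure comes straight from the usual RC formula, using the meter's input resistance and C1 (10MΩ assumed here, per the text):

```python
import math

def noise_corner(r_int_ohms, c_farads):
    """Corner frequency of the low-pass formed by the meter's input
    resistance and the parallel capacitor C1."""
    return 1.0 / (2.0 * math.pi * r_int_ohms * c_farads)

print(f"{noise_corner(10e6, 10e-9):.2f} Hz")   # ~1.59 Hz
```

Noise at 50Hz is therefore roughly 30dB above the corner, so it's attenuated by about that much.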
This is a very useful technique if you ever need to measure particularly high resistances, and it doesn't appear to be widely known. There are (of course) specialised meters for measuring extraordinarily high resistances, but the humble digital multimeter does a perfectly acceptable job with some care. Quite obviously, the DUT (device under test) must be suspended away from anything that may be ever-so-slightly conductive, and the meter leads also have to be very well insulated. The smallest amount of leakage can create a very large error.

You also need to check your meter's specifications to determine the error. Most are better than 1%, but the least significant digit may make a big difference for very low leakage test devices. The specifications will typically state accuracy as (for example) ±1%, ±2 'counts' (the least significant digit). That means that 100mV could be shown as anything between 97mV and 103mV, and the error is worse as the voltage is reduced.
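The error band can be computed directly. This sketch assumes a 3½-digit meter on a 2V range (1mV per count), which reproduces the 97mV to 103mV example:

```python
def reading_bounds(v_true_mv, pct=1.0, counts=2, resolution_mv=1.0):
    """Worst-case display limits for an accuracy spec of ±pct%, ±counts."""
    err = v_true_mv * pct / 100.0 + counts * resolution_mv
    return v_true_mv - err, v_true_mv + err

print(reading_bounds(100.0))   # (97.0, 103.0)
print(reading_bounds(10.0))    # (7.9, 12.1) - far worse in relative terms
```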
It's only after you've done this type of measurement a few times that you really come to grips with the extraordinarily high impedances that exist in some circuits. Even printed circuit tracks may be suspect unless the appropriate points are protected by a guard track or similar (which is not possible with Veroboard). If you've never heard of a 'guard track', see Designing With Opamps, High Impedance Amplifiers. The guard track (or ring) effectively 'bootstraps' the enclosed circuit, protecting it from external (surface) leakage.

It's educational to monitor the reverse resistance of a 1N4148 (or any other) diode, while holding a soldering iron nearby - not touching, but a couple of millimetres away. Even a small amount of heat will reduce the reverse resistance (aka leakage) dramatically. At a barely noticeable elevated temperature, you may see the monitored voltage rise from 100mV to 400mV or more, indicating that the leakage has quadrupled. That's roughly the equivalent of the resistance falling from 1GΩ to around 250MΩ. That's a big difference, and it may be critical in some circuits.
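Converting the monitored voltage back to resistance uses the same arithmetic as Figure 2 (10V supply and an 11MΩ meter assumed, as before):

```python
def reverse_resistance(v_meter, v_supply=10.0, r_meter=11e6):
    """Diode reverse resistance derived from the voltage read across the meter."""
    return v_supply / (v_meter / r_meter) - r_meter

cold = reverse_resistance(0.100)     # diode at room temperature
warm = reverse_resistance(0.400)     # soldering iron held nearby
print(f"{cold / 1e9:.2f} GΩ -> {warm / 1e6:.0f} MΩ")   # 1.09 GΩ -> 264 MΩ
```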
Noise may be a problem when taking measurements like this, because impedances are all very high. Some meters are better than others at rejecting mains hum and other extraneous noise, which can make the final reading unsteady. If the impedance is particularly high, you can't even use a capacitor to filter it out, because the cap's dielectric may not be much better than the device being tested. You can use a larger (preferably polypropylene) cap in parallel with the meter (rather than the 10nF cap shown above), as they have a very high resistance dielectric. This will make the measurement process a little slower though, because the cap has to charge via the external resistance of perhaps several GΩ, and the final circuit may still not be able to eliminate 50/60Hz hum effectively. The arrangement shown in Figure 2 has been used many times now, and is very successful.
Note: It's important that the external supply is free of noise and very well regulated. Small voltage changes that have no effect whatsoever on normal circuits will cause the meter reading to change. This is especially troublesome when measuring capacitor dielectrics, because the capacitor will pass low frequency variations and cause an unsteady reading that may not be able to be interpreted with any accuracy. I know this from personal experience, and have had to resort to using an external regulator after my (regulated) power supply to ensure that the output voltage is as stable as possible. Only very low current is necessary, as we are looking at devices that draw only a few nA or even pA of current once settled.
If this is something you discover you need to use often, it would be worthwhile to make up a very short lead for your meter (essentially a banana plug with a stub of wire), with an alligator clip at the end to hold one end of the DUT. Make another short lead for the common terminal on the meter. The negative of the external supply clips onto the common lead, and the positive goes to the other end of the DUT. This helps to minimise external hum pickup, and also ensures that there is the greatest possible impedance at all points of interest.

The insulation resistance of the leads from your power supply is of no consequence, and even the internal insulation resistance of the meter is relatively unimportant (it's in parallel with 10-11MΩ). The only point of specific interest is the connection from the DUT to the external supply, and if that's in mid air it's effectively infinite. No PCB material (or anything else) should bridge the DUT itself, as the leakage is an unknown quantity.

This apparently simple technique doesn't seem to be as widely known as it should be. It's not something you need very often, and some may never need it at all. I've used it several times while developing projects or special designs for clients, and it's certainly a far better proposition than spending $thousands on specialised equipment that may only be used every couple of years.

If you want to get accurate readings, you'll need to use a second multimeter to measure the input impedance of the one you intend to use. Not all specifications include the input impedance, and around 10MΩ is often assumed, but as I found with several of my meters, they are actually 11MΩ. The error isn't great, so you may not feel that it's necessary to verify the actual impedance.

This technique doesn't place your meter or the DUT at risk (provided the external voltage is less than the breakdown voltage of the DUT). The meter is in voltage mode so is a high impedance, and even a shorted DUT won't harm the meter. The test voltage depends on what you're testing, but 10V is a good starting point for most measurements. If you must use higher voltages, do so with extreme care. Anything over 50V is potentially dangerous, and you do so at your own risk.

No other references to this technique have been located on-line. Some might exist, but even an extensive search failed to locate anything even remotely close. One was found, but it was published after I suggested this technique in AN-014, so it's not unreasonable to assume that my technique was used as inspiration.
Elliott Sound Products - AN-017
The most common use for DC detection circuits is to protect loudspeakers from a faulty amplifier. The general principle is little different from a zero crossing detector (see AN-005), and indeed, an (almost) identical circuit can be used. The difference is that a zero crossing detector is intended to detect the zero voltage condition in 'real time', whereas a DC detector must have a high pass filter so the circuit doesn't trigger on low frequency, high amplitude signals.
The filter is the thing that causes the greatest difficulty, because it is normally a fairly high impedance to keep the required capacitance low. Because the input can swing either positive or negative with different fault conditions, the capacitor can't be a conventional (polarised) electrolytic, because it may be subjected to a fairly high reverse voltage which will damage the cap.

In Project 33, the input resistor is 100k, and the cap is 10µF, giving a -3dB frequency of 0.159Hz. While this may seem way too low, it's actually right for most amplifiers of 60W (8 ohms) or more. A higher frequency would mean that low frequency signals could easily cause the protection relay to chatter, introducing gross distortion.
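The 0.159Hz figure is just the usual first-order high-pass corner, f = 1 / (2πRC):

```python
import math

def hpf_corner(r_ohms, c_farads):
    """-3dB corner of the first-order high-pass filter ahead of the detector."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

print(f"{hpf_corner(100e3, 10e-6):.3f} Hz")   # 0.159 Hz for 100k and 10µF
```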
Because the impedance is so high, the available current is low, and this generally precludes the use of an optocoupler. While it's not impossible (far from it in fact), the impedances all need to be reduced so the optocoupler gets enough current to be useful. This means that a fairly large capacitance is necessary, but it doesn't need to be high voltage (6.3V will normally be sufficient).

There is (or was) an IC (µPC1237HA) designed specifically for the purpose of DC detection, and (at least in theory) it required few external parts. However, the application circuit set a dangerous precedent by wiring the speaker relay incorrectly, and the same error has been repeated ad nauseam in most DC protection circuits that have been published over the years.
While it seems like a good idea to use an IC designed for the purpose, there are easier ways to achieve the same results, using readily available, cheap transistors and diodes (plus a few passive components). The advantage of the latter approach is that suitable replacement parts will be available forever, so a failure doesn't render the PCB useless scrap.

The relay wiring is something that requires some explanation. It's an exercise in futility to expect a relay rated for 30V DC (the typical maximum rating) to break an arc created by an amplifier using ±45V supplies or more. Even the default 30V relay will arc with 30V, and the arc current is transferred to the speaker. The solution is to wire the relay so that when off, the loudspeaker terminal is grounded. Even if (when) the relay draws an arc, the current is bypassed to ground, and the loudspeaker is protected. The relay may be destroyed, but relays are far cheaper than loudspeaker drivers, so it becomes a sacrificial component - it dies to save your speakers. That's a reasonable trade-off in my books.

This app note discusses the various options that can be used to detect DC if it turns up in places where it should not be (such as at the output of a power amplifier). All test waveforms shown use a 100k input resistor, and are shown with a 1Hz signal with a 10V peak to peak amplitude. In each case, the supply voltage is 12V DC, and the presence of DC is indicated by a zero output voltage. The high pass filter is deliberately omitted so the instantaneous action of the circuits can be seen.

Outputs are simulated, but I know from tests that I've run that the simulated and bench tested versions are virtually identical. These circuits can all be considered window comparators. Provided the signal is within the defined voltage 'window' the output voltage is low, and rises to 12V (or thereabouts) when the voltage is above or below the set values.

Note that the input voltage (1Hz, 5V peak) is offset by 6V so the detection points are immediately visible. Each circuit is shown with an AC voltage as the input, but that's for analysis purposes. In what's laughingly referred to as 'real life', the generator is replaced by the power amplifier output, via a filter to prevent activation with the audio waveform. All circuits shown are assumed to use a +12V supply.
The principle of a DC detector is very similar to a zero-crossing detector (ZCD - see AN-005). When the DC input voltage is close to zero the relay is energised and allows the amp's output to be connected to the speakers, but if it exceeds the preset limits the relay is turned off. Several of the ZCD circuits are potentially suitable for DC offset detection, but the overall requirements are actually quite different, despite initial appearances. All DC protection schemes rely on a filter to remove the audio component down to the lowest frequency of interest. This is disabled for analysis, but is essential in normal use.
Of the various techniques, the ones I will not cover are the µPC1237, plus a few others as noted. The IC is obsolete, and its internals are not disclosed in such a way that it can be analysed properly without having one to hand. There are several other techniques that I haven't covered, either because they won't work, or because they require an isolated power supply in order to function as designed. This includes various circuits that use a bridge rectifier at the input, which will certainly work, but only if a floating 12V supply is available. This makes the circuit a nuisance to power. I've also left out any system that requires a dual power supply, because that just makes the circuit harder to build, as a simple +12V supply can't be used.

The others vary widely, and one was suggested by a reader. This is a good circuit, and it's fairly easy to make it work with two amplifier inputs. One that is also worth looking at is Project 175, which is designed for use with BTL (bridge-tied load) power amplifiers. All circuits are shown for a single channel only, and do not include the high-pass filter. This was done so that the input and output can be viewed with a standardised input voltage and frequency.

Please note that the input AC signal is offset by 6V so positive and negative transitions can be seen easily. That means that the reference (zero volt) level for the AC waveforms shown below is 6V, and not zero volts. This is indicated on the right of each response graph.

The first method examined is the one that's used in Project 33. It's very effective, but it is slightly asymmetrical. This means that the detection thresholds are different depending on whether the DC fault is positive or negative. In reality, this makes absolutely no difference, because power amplifiers rarely (if ever) develop a fault that causes a DC offset that is other than one or the other supply rail (basically, I've never seen it happen, nor have I heard of it happening, other than if a preamp goes bad and there's no coupling capacitor in place).

A positive input voltage causes the voltage at the base of Q1 to rise, turning it on. A negative input pulls the emitter voltage low, which also turns on Q1. Q1 operates in common emitter mode for positive voltages, and common base mode for a negative input. R2, R3 and D3 ensure that sensitivity is roughly the same regardless of how the transistor is driven (i.e. with positive or negative inputs). The remaining transistors increase the small current available from Q1 into something suitable for driving a relay (wired in place of R8).
Figure 1 - P33 DC Detector
Because this circuit uses diodes at the input, it's easy to add more channels simply by adding extra diodes (along with a filter circuit of course). This circuit was devised many years ago, and it only requires a single supply. The PCB also includes a 'loss of AC' detector, which mutes the amplifier almost instantly when power is turned off. It also includes a power-on mute, but the detector shown above doesn't include these.

The threshold for positive inputs is 3.46V, and for negative inputs it's -3.39V. This variation is inconsequential in reality, and both thresholds are within the 'safe' range for most loudspeakers (less than 2W for an 8 ohm speaker). Because all DC detector circuits disconnect the amplifier from the speaker when the threshold is reached, no damage will be caused.
Figure 2 - Voltage Waveforms
The output waveform has clean transitions, and there's no sign of anything that may raise a 'red flag'. This circuit has been used by (literally) hundreds of constructors, and I've never heard of a failure. Additional channels require duplication of the input resistor (and capacitor, not shown) and the input diodes.

The next option is one that was suggested by a reader. Its detection thresholds are fairly symmetrical, but it is very sensitive; without R2 it will trigger at less than ±1.8V, and including R2 reduces the sensitivity. As shown, the detection thresholds are +1.25V and -1.77V.

A positive input voltage causes the voltage at the base of Q2 to rise, turning it on. A negative input causes the base of Q1 to fall, turning it on. Either case results in the removal of base current for Q3, which turns off. The relay can be wired in series with the collector of Q2. The remaining transistor was included to reverse the polarity and increase the overall gain.
Figure 3 - 'Mitko' DC Detector [ 3 ]
This is a nice, simple circuit, and by adding input diodes more than one channel can be accommodated. There is an additional transistor included in the circuit to ensure the same polarity as the others shown, but in reality, Q3 can drive a relay directly from its collector circuit.
Figure 4 - Voltage Waveforms
The waveform is very clean, and the high sensitivity is quite obvious. I would have no hesitation recommending this arrangement, but it's a little harder to add a power-on mute and/or 'loss of AC' detector as used in the Project 33 circuit board. The sensitivity can be reduced simply by adding a resistor (R2, 'See Text'). If R2 is made (say) 56k, the thresholds are raised to ±3V, which is a perfectly reasonable voltage.

Additional channels can be accommodated by duplicating the input resistor (and capacitor, which is not shown here), as well as the two diodes. Sensitivity is unchanged, and extra channels work identically. There are basically no downsides to this approach.

This is a particularly good arrangement, which is almost perfectly symmetrical provided R1 and R2 are the same value. While it appears more complex, it's still a very simple circuit to build, and it's based on a conventional window comparator circuit. The diodes can be eliminated if the LM358 is replaced by a dual comparator (such as the LM393).

A positive input forces Pins 2 & 5 of U1 to rise. When the voltage at Pin 2 exceeds that on Pin 3, the output (Pin 1) goes low, and pulls the output voltage to (near) zero. A negative input voltage forces Pin 5 to a lower voltage than Pin 6, forcing the output (Pin 7) low. The two diodes prevent the opamp outputs from interacting (they cannot be omitted unless the opamp is replaced by a dual comparator).
Figure 5 - Opamp Based Detector
The detection thresholds are easily adjusted simply by changing the value of R4. With 33k as shown, the thresholds are +1.68V and -1.72V. The resistor string (R3, R4 and R5) can be reduced or increased in value, and provided the relative values are the same, the thresholds are unaffected. Because of the way it operates, R2 is required so that the input voltage is exactly half the supply voltage.
Figure 6 - Voltage Waveforms
While this circuit is very sensitive as shown, that's easily adjusted simply by increasing the value of R4. For example, if R4 were 100k, the thresholds are ±4V (you may expect a higher voltage, because the 'pull-up' resistor R2 forms a voltage divider, but it really is ±4V). Unlike most others, it can be made to be far more sensitive, simply by reducing the value of R4. At 10k, the thresholds are ±580mV.

If used for stereo, only the opamps, diodes and input circuits need to be duplicated. The voltage divider (R3, R4 and R5) can supply the reference voltages to both channels. While this is the most elegant (and predictable) solution, it's also more expensive to build and takes up more PCB real estate.

The next circuit was used by a major home hi-fi manufacturer (which shall remain nameless because the circuit is rather poorly thought out). The positive threshold is 2.81V, but the negative threshold is poorly defined and has a low output level. The best estimate is around -4.33V, so it's very asymmetrical. It can be argued that the negative detection threshold is really -3.81V, but that doesn't really help. Note that the output polarity is reversed in this circuit - it's at some positive voltage when the DC thresholds are exceeded. The other circuits show a positive voltage when there is no significant DC voltage.

When the input goes positive, Q1 turns on, and that removes the drive current to Q3 (which uses a zener diode as a level shifter in its base circuit). A negative input voltage is intended to turn on Q2, which is connected in a common base configuration. Unfortunately, this doesn't work as well as expected, because the common base configuration means that the emitter current is the sum of the base and collector currents. Negative detection is therefore rather dismal. It's worth noting that the Figure 1 circuit also uses a common base connection for negative voltages, but it's been designed to ensure equal sensitivity in both common emitter and common base modes. Since no such precautions were taken in this circuit, it doesn't work well at all.
Figure 7 - Commercial Detector
The negative detection is so poor that additional amplification would be essential to ensure that a relay will activate reliably at the detection thresholds. As shown, it will activate a relay without additional parts, but the negative DC voltage needs to be at least -10V to ensure reliable relay operation. This is not a circuit I could ever recommend, and it's shown purely because it exists in a commercial product. Many people think that major manufacturers know what they are doing, but often they only aim for 'good enough'. This arrangement can be modified to work much better than it does simply by adding a couple of resistors, but IMO there's no point pursuing it. It's also unsuitable for more than one channel, which is another limitation.
Figure 8 - Voltage Waveforms
In a word, "dreadful". This isn't a circuit I'd use or recommend as shown, because its symmetry is so poor. Yes, it will (probably) protect loudspeakers from a failed amplifier, but it's not an elegant solution by any stretch of the imagination. However, it can be improved, and it's not particularly difficult to do so. The problem with the circuit lies in the third transistor (Q3) and the zener diode, a combination that's very poorly thought out.
Figure 9 - Improved Commercial Detector
With the addition of one transistor and a couple of resistors, the circuit becomes usable. The detection thresholds are +2.4V and -3.3V with the values shown. The output waveform is much closer to those shown in Figures 2, 4 and 6, rather than the appalling waveform in Fig. 8. The cost difference is marginal, and you get a detector that works as it should.
With the exception of Fig. 5 (using opamps or comparators), all of the detection schemes are asymmetrical. It doesn't really matter, because when an amp fails the output voltage is well above the detection thresholds. However, it can still be perplexing unless you understand exactly why the threshold voltages aren't the same for positive and negative fault voltages.
This comes about because many detectors use base drive for positive inputs, and emitter drive when the input signal is negative. Unfortunately, the transistor that uses emitter drive has no current gain, so the collector current is (almost) the same as the emitter current. If you have (say) 5V across a 100k resistor, the current is 50µA. This is more than enough base current to drive a transistor (common emitter) into saturation, but if the collector resistor is 22k, the voltage drop across it with emitter drive (common base) is only 1.1V (vs. [say] 12V when the base is driven).
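The arithmetic above can be checked with a few lines of Python (the resistor values are the ones quoted in the text):

```python
# Why emitter (common-base) drive is so much less sensitive than base
# (common-emitter) drive: the same 50 uA that easily saturates a transistor
# when fed into its base develops only ~1.1 V across a 22k collector load
# when there is no current gain.
V_IN = 5.0          # fault voltage across the input resistor (V)
R_IN = 100e3        # input resistor
R_COLLECTOR = 22e3  # collector load of the detector transistor

i_drive = V_IN / R_IN
v_drop_cb = i_drive * R_COLLECTOR   # common-base: no current gain

print(f"drive current: {i_drive * 1e6:.0f} uA")          # 50 uA
print(f"common-base collector drop: {v_drop_cb:.1f} V")  # 1.1 V
```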
That's a big difference, so the next transistor needs a lot of gain to saturate (turn on completely) with only 50µA base current. The original circuit in Fig. 7 doesn't have anywhere near enough gain (and it's badly configured), so performance is limited. In the other circuits, the required gain is available, but because the collector current in the detector is so asymmetrical, the detection voltage is also asymmetrical. Even when NPN and PNP transistors are used, that still doesn't mean that the circuit will be symmetrical. No two transistors of opposite polarities will ever be identical.

Asymmetry can be cured completely by using a dual supply - say ±12V. However, that means a more complex power supply which will cost more. If (when?) a power amplifier fails, it's almost always due to a shorted output transistor. There are exceptions of course, but nearly all failures mean that the amp's output voltage swings to one supply rail or the other. 'Partial' failures are certainly possible, but are very rare (I don't think I've ever seen a failed amp where the quiescent output voltage was not stuck to one supply rail or the other).

Use of DC coupled amplifiers is discouraged (by me) because they don't make sense for audio. A fault in a preamp can cause the amp's input to be subjected to some indeterminate DC level, which is then amplified. If the DC input is around 100mV, you could get a DC output from a DC-coupled power amp of perhaps 3V, and that will not trip 99% of DC detectors. It also won't hurt most speakers, but it will cause a significant cone offset, increasing distortion.
All of the circuits shown need an input capacitor as shown below. The capacitors were omitted from the circuits so the detection voltage could be monitored, but they are absolutely essential in the final circuit. The cap needs to be large enough to ensure that a full power 20Hz audio signal will never trigger the relay. In most cases, a 10µF bipolar electrolytic capacitor will be sufficient, but if the detector is very sensitive a larger value may be necessary. The input resistance/impedance of the detector needs to be considered (not shown in Figure 10), but it's usually not a major issue.
Figure 10 - Input Filter (Typical)
The filter isn't particularly critical, with the main proviso being that no normal audio signal should cause the detector to trigger. The criterion I used for P33 was that a signal of 50V RMS at 16Hz should not cause the circuit to 'false trigger'. The same level at 10Hz should cause the protection relay to operate, ensuring that the amp's output is disconnected quickly enough to ensure that speaker damage won't occur. With amplifiers having a greater output capability (or detectors with a low input impedance), the capacitor value may need to be increased.
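As a quick sanity check on that criterion, the sum can be sketched in a few lines (this is not the actual P33 filter; the 40dB figure is the attenuation quoted for the filter at 16Hz):

```python
# Sanity check of the 'no false trigger' criterion: with roughly 40 dB of
# filter attenuation at 16 Hz, a 50 V RMS signal (a very powerful amplifier
# indeed) is reduced to 0.5 V at the detector input - far below the 2-3 V
# detection thresholds quoted for the circuits in this article.
v_audio_rms = 50.0   # worst-case audio level at 16 Hz (V RMS)
atten_db = 40.0      # filter attenuation at 16 Hz

v_at_detector = v_audio_rms * 10 ** (-atten_db / 20)
print(f"residual 16 Hz level at the detector: {v_at_detector:.2f} V RMS")  # 0.50
assert v_at_detector < 2.4   # lowest detection threshold mentioned (Figure 9)
```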
The figure of 16Hz was used because that's the lowest frequency normally available from large pipe organs, and it's extremely unlikely that such a high level will ever be present in any recorded material. The filter shown is -40dB at 16Hz, which is ideal. Should a DC fault develop, the response time is dependent on the fault voltage, and with 35V (positive or negative) the circuit will respond in under 40 milliseconds. Most relays will release in less than 10ms, so the worst case is that the loudspeaker will be subjected to the supply voltage for no more than 50ms (an energy level of under 8 Joules ¹). This is not sufficient time or energy for any damage to occur with woofers, but for tweeters the cutoff frequency needs to be raised to provide faster operation.
¹   8 Joules is the energy delivered by a 10,000µF capacitor charged to 40V. If this is delivered to any typical low-frequency driver the result is no more than a loud 'pop'. No damage will occur, as the energy is not present for long enough to cause voicecoil heating.
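The footnote's energy figure follows directly from the stored-energy formula E = ½CV²:

```python
# Energy stored in a 10,000 uF capacitor charged to 40 V (E = 1/2 * C * V^2),
# matching the 8 J figure in the footnote.
C = 10_000e-6   # farads
V = 40.0        # volts

energy_j = 0.5 * C * V ** 2
print(f"stored energy: {energy_j:.1f} J")   # 8.0 J
```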
Of the circuits shown, those with diode inputs can be adapted for stereo by adding another pair of diodes and a second filter circuit. If diodes are not used, the circuit needs to be duplicated for stereo operation. While it is possible to simply add a second input resistor, this will only work with the Figure 5 circuit (for example) if R2 is reduced to half the value shown (50k). Adding the second input resistor has the secondary effect of reducing the sensitivity, so the circuit won't be as fast. More troubling is that if one amp output goes positive and the other goes negative, the two cancel and the circuit won't react at all.
While this is extremely unlikely (the faults would have to be simultaneous), no protection is offered at all if the amp is turned off and back on again with the faults still present. The chances of such a failure may be extremely small, but it's not a risk I'd be willing to take. A protection circuit that doesn't work when it's needed is dangerous, so the Figure 5 circuit would require duplication to ensure ultimate reliability. The Figure 7 circuit is not recommended at all, and it's shown solely because it exists, and you may come across it one day.

There are several alternative speaker protection circuits, with many based on principles that are similar to those shown here. Some can be expected to work, while others take a somewhat irrational approach to the problem, and a few examples appear to have (potentially serious) flaws. I have neither the time nor the inclination to try to include all of the arrangements I've seen, but almost without exception, the 'protection' relay is wired incorrectly, without the normally closed terminal grounded. The amplifier should always be connected to the common terminal of the relay, with the speaker connected to the normally open contact.

BTL (bridge tied load) amplifiers pose special problems, especially those that use a single supply, which means that each speaker terminal always has DC present. One solution to this dilemma is described in Project 175, which uses the method shown in Figure 5. It shows LM393 comparators rather than opamps, but the principle is relatively unchanged. There is some added complexity because you can't rely on the output voltage being exactly half the supply voltage.

There is one (and only one) way to wire the relay, and that's shown below. The vast majority of speaker 'protection' circuits simply wire the amplifier and speakers to the common ('Com') and normally open ('NO') contacts. If the voltage is over the rated maximum for the relay (typically 30V DC), an arc will be drawn when the contacts open, and that passes DC straight through to the speakers. I see many, many circuits published that completely fail to address this, so the protection circuit may actually provide far less protection than you imagine. I've tested and verified this!
Figure 11 - Relay Wiring Diagram
The drawing shows the correct wiring. If (or when) an amplifier fails, the arc current is shunted to ground rather than the speaker. In some cases, a capacitor can be used in parallel with the contacts in the hope that it may suppress the arc, but it needs to be a fairly high value, and there's absolutely no guarantee that it will work. You can also use two relays, with the contacts wired in series, which gives a theoretical maximum voltage of 60V DC. This is covered in some detail in the Relays, Part II article. Failure to configure the relay correctly could become a very costly mistake, especially with very high power amplifiers. The problem is worse with single-supply BTL amplifiers because you can't short the speaker to ground, so see Project 175 for details.
Elliott Sound Products - AN-018
It's not all that often that you need a diode with ultra-low reverse leakage. A typical 1N4148 diode has an equivalent leakage resistance of between 1 and 1.5GΩ (at 25°C) with a reverse voltage of 10V, and this is sufficient for most common applications. Of course, you can buy diodes that are fully specified for low leakage. The BAS716 is rated for 5nA reverse current at 75V, which works out to 15GΩ. The BAS454A is better still, with 1nA reverse leakage at 125V (125GΩ). This increases to 500nA at a junction temperature of 125°C (only 250MΩ) - it's highly temperature dependent (as with all diodes). You may find that some specialised types are rather expensive and/or difficult to get from your local supplier. You will need to run your own tests, using the technique described in AN-016 to measure leakage resistance.
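Converting the quoted datasheet leakage currents to equivalent resistances is simple Ohm's law:

```python
# Equivalent leakage resistance from a datasheet leakage spec (R = V / I),
# using the BAS716 and BAS454A figures quoted in the text.
def leakage_resistance(v_reverse, i_leakage):
    """Equivalent DC resistance of a reverse-biased junction."""
    return v_reverse / i_leakage

r_bas716 = leakage_resistance(75.0, 5e-9)     # 5 nA at 75 V
r_bas454a = leakage_resistance(125.0, 1e-9)   # 1 nA at 125 V

print(f"BAS716 : {r_bas716 / 1e9:.0f} G-ohm")   # 15
print(f"BAS454A: {r_bas454a / 1e9:.0f} G-ohm")  # 125
```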
For these tests, I would normally use a test voltage of 10V, but I used 25V because that made it a little more likely that I'd be able to measure something - however small. As it transpired, it made no difference whether I used 10V or 25V (25V is at or below the collector-base or gate-source breakdown voltage for the BJTs and JFETs I tested). This was because the leakage was so low that I was unable to measure anything - even at the higher voltage. My bench meter has an input impedance of 11MΩ, so if I measure 1mV, the current through the meter is about 91pA (Ohm's law).
Provided you have low current requirements, the collector-base junction of an ordinary bipolar transistor is very good indeed. I tested a number of BC546 transistors, and was unable to measure any reverse leakage (and my bench meter can (theoretically) resolve to 0.1mV). It didn't matter whether the supply was connected or not - the meter steadfastly showed ±0.0002 (normal digit uncertainty for my multimeter). Even if I did manage to get a reading of 10mV on my meter, that would still represent a leakage resistance of 25GΩ! However, I was unable to measure anything. When the transistor was heated I was able to measure some leakage current, but it was (literally) too hot to touch before the leakage exceeded 1nA (25GΩ).

The drawing shows both a BJT (bipolar junction transistor) and a JFET (junction FET) used as diodes, with the diode symbols showing the polarity. In case you were wondering, the use of 'K' for cathode is standard nomenclature, because 'C' is reserved for the collector of a transistor. However, the use of 'K' predates transistors, and has been used for as long as I can remember.
Figure 1 - NPN Transistor And N-Channel JFET As Low Leakage Diode
It's not uncommon to see JFETs specified for very low leakage diodes, but they usually aren't quite as good as a bipolar transistor. I tested a couple of 2N5459 JFETs (no longer available, but I had them in my parts drawer), and 'measured' a leakage current of about 45pA (~550GΩ). I say 'measured' in quotes because the value was so low, and I had to estimate the actual voltage displayed. However, this leakage increased very rapidly with heat, and it was no better than a 1N4148 even at a 'comfortable' temperature (I was unable to measure it, but I'd guess around 50°C).
Note that the maximum current is low (equal to the peak base current of the transistor or gate current for a JFET), so this technique is only suitable for currents that are typical in 'signal level' circuits. The emitter and base of a BJT can be joined or not - it made no difference in the tests I performed, but I wouldn't be happy having a terminal floating in a very high impedance circuit. In general, the current should be no more than about 10mA (continuous), but short-term pulses with higher current will (probably) do no harm. I would be very reluctant to use a transistor or JFET at more than 25mA peak. The requirement for ultra-low leakage is common in sample-and-hold circuits, especially if the hold period is more than a few milliseconds.

The other thing that must be considered is the junction capacitance, as that affects the switching speed. A BC546 has a typical value of 3.5pF, with a maximum of 6pF (10V between collector and base), while the two low-leakage diodes quote around 2-4pF, with recovery times of 0.3 to 3µs (which is pretty slow - a 1N4148 has a reverse recovery time of 4ns, roughly two orders of magnitude faster). This figure is not quoted for any transistor's collector-base junction, but it can be assumed to be somewhat slower than a 1N4148, though faster than most low-leakage diodes.

Analog Devices show a diode-connected transistor in the OP77 datasheet, as part of a peak detector. They specified a 2N930, but there's no reason to expect that to be any better than the BC546 devices I tested. The collector cutoff current (at a specified collector-base voltage) is usually shown in datasheets as a 'worst case' value, and most devices will be far better than claimed. Leakage currents in the pA (picoamps) range are common ... at room temperature. Leakage current increases exponentially as temperature is raised, so expecting good performance at elevated temperatures is unwise.

Note that if you use any of these techniques, the circuitry should be on Teflon (PTFE) standoffs or wired in 'mid-air'. Even PCB leakage can seriously degrade the total resistance, and this may make your circuit no better than a common 1N4148 diode if you aren't very careful with the layout.
Datasheets ...

Some info can also be found on the Net, but there are many conflicting opinions and not much real information.
Elliott Sound Products - AN-019
Although I've called this 'zero power', that's not strictly true, since anything that draws current and has some voltage across it must dissipate power. However, where most LED battery indicators draw current that does nothing other than illuminate the LED, this idea puts the LED current to use in the circuit being powered. It only works when you have a regulated supply (e.g. 5V) that's powered by a battery, which will typically be a standard 9V battery for small, low-power applications. It's a clever idea (thanks to B Fraser), and despite a lengthy search I found nothing similar elsewhere.
There is one caveat - the circuit being powered must draw more current than the LED at any battery voltage. Standard voltage regulators can only source current, so if the LED passes more current than the load draws, the regulator's output voltage will rise.

Mostly, I'd suggest an 'ultrabright' LED, as these can provide more than enough brightness even with as little as 0.5mA. The basic circuit shown here can be adapted for almost any voltage, but you need to verify the LED forward voltage. It has to be greater than the minimum input-output voltage differential of the regulator. Most common 3-terminal regulators will function down to the point where the input voltage is only about 2V greater than the output.
Figure 1 - 'Zero Power' LED Battery Monitor
Provided the battery voltage is greater than ~7V (for a 5V output) the LED will get current via R1. Once the battery falls to the point where there's less than the LED's forward conduction voltage available, the LED goes out, indicating that it's time to replace the battery. Because the LED current flows into the load, it's not wasted, and no additional circuitry is required. Your circuit gains just one LED and one resistor.

The resistor (R1) should be selected such that it doesn't pass more current than the circuit following the regulator draws. For example, if your circuit draws 5mA at 5V, the LED current must be less than 5mA with a brand new battery (typically about 10V for a nominal 9V battery). To be safe, I'd recommend a maximum current of about 2.5mA. If the circuit has a low-current 'sleep' function, it may not be possible to use the circuit unless you use an ultrabright LED that will be bright enough with no more than half the current drawn in sleep mode.

If the LED is an ultrabright type and your load draws 10mA (not including the regulator current of about 3mA), aim for a LED current of about 2mA with a new battery. That means that R1 will be 1.5kΩ, based on a LED forward voltage of 2.2V. The current with a battery voltage of 7.3V (close to end-of-life) will only be 67µA, so the LED will be dim, but still visible. This is not intended to be a precision circuit, simply a useful indicator of battery health. Having the LED current passed to the load means that there's no loss of capacity for the battery, where a parallel LED would draw current that's only used for the LED itself.
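The worked example above can be reproduced directly (the 10V fresh-battery and 2.2V forward-voltage figures are the ones assumed in the text):

```python
# Worked example from the text: select R1 for ~2 mA LED current with a fresh
# battery.  The LED + R1 string runs from the battery to the 5 V output, so
# the voltage available to R1 is Vbatt - Vf(LED) - Vout.
V_OUT = 5.0        # regulated output
V_LED = 2.2        # assumed LED forward voltage
I_TARGET = 2e-3    # target LED current with a fresh battery

v_fresh = 10.0     # a brand-new '9 V' battery
r1 = (v_fresh - V_LED - V_OUT) / I_TARGET
print(f"R1 = {r1:.0f} ohms (use 1.5k)")   # 1400

v_old = 7.3        # close to end-of-life
i_old = (v_old - V_LED - V_OUT) / 1500.0
print(f"end-of-life LED current: {i_old * 1e6:.0f} uA")   # 67 uA
```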
High Intensity
Colour | Fwd. Volts Min. | Fwd. Volts Max. | Luminous Intensity
Yellow | 2.0 | 2.6 | 750
Green | 2.4 | 3.0 | 240
Yellow | 2.0 | 2.6 | 750
Red | 1.9 | 2.6 | 1000
Red | 1.8 | 2.2 | 1000
Orange | 2.0 | 2.6 | 1000
Yellow | 2.0 | 2.6 | 1000

Ultrabright
Colour | Fwd. Volts Min. | Fwd. Volts Max. | Luminous Intensity
Red | 2.1 | 2.7 | 7500
Green | 3.9 | 4.5 | 2400
Blue | 3.9 | 4.5 | 750 *
Yellow | 2.1 | 2.7 | 5750
The above data come from a Vishay LED Lamps datasheet (which fails to specify the units of luminous intensity, but it's probably millicandela [mcd]), and the blue LED indicated with a * looks like a misprint. The 'ultrabright' types are generally the most suitable, but note that blue LEDs have a much higher forward voltage than other colours. In the ultrabright series, the green LED also has a much higher than normal forward voltage. This may (or may not) be the case with equivalent LEDs from a different manufacturer. I didn't include the part numbers as they are supplier specific, and you will have to choose the most appropriate LED from your preferred supplier. As always, you will have to do some research, and you will almost certainly have to test the LED to make sure it turns off before the regulator's input voltage is too low.
Note: The minimum forward voltage is usually specified at a particular operating current, which was not specified in the datasheet used to obtain the figures shown above. A LED with a nominal forward voltage of ~2V may actually still emit some light with as little as 1.6V, so testing is essential. A bench test with a red ultrabright LED showed that at 1.6V (with a 2.7kΩ series resistor) the LED stopped emitting light at almost the exact voltage where the 78L05 dropped out of regulation.

For 'general-purpose' circuitry, the circuit current will generally be no more than 10-20mA if powered from a 9V battery, and a 'high efficiency' LED may suffice. Remember that like all things in electronics, you must be prepared to experiment. If your regulator needs more than around 2V input-output differential, you can add a signal diode (1N4148 or similar) in series with the LED to gain an extra 0.65V of 'headroom'. It is certainly possible to make the LED brightness fairly constant until the 'bitter end', but naturally that makes the circuit a little more complex.
Figure 2 - 'Zero Power' LED Battery Monitor With JFET Current Source
If you really think that the LED's brightness should remain fairly constant until the battery is discharged, you can add a JFET current source. Because JFETs are extremely variable, the resistor value depends on the JFET's characteristics. A good start is around 1kΩ, but it may be higher or lower depending on the JFET's 'pinch-off' voltage (VGS (off)). The FET also adds some extra voltage to that of the LED, and with a 'typical' (if there is such a thing) J113, the LED will go out when the input voltage is about 7.5V, assuming a 2V forward voltage for the LED, and a J113 with a VGS (off) of -1V (the range is from -0.5V to -3V according to the datasheet).

The LED current as simulated is 1mA, more than enough with an ultrabright LED. At 8.4V, the LED current falls to 0.8mA, and by the time the battery is down to 7V, the current is only 100µA. The LED current reduces rapidly below 8V input, so you'll really know that the battery needs to be replaced. I tested the circuit shown in Fig. 2, and LED brightness remains the same with any input voltage greater than 8V (the J113 I used had a VGS (off) of 1.3V). As the voltage fell further, the LED dimmed very noticeably until it extinguished at about 6.8V, just before the 78L05 dropped out of regulation. The LED current was just under 0.5mA, and it is easily visible even under bright lighting.
Figure 3 - LED Current, Battery Voltage And Regulated Output (5V)
The graph shows the LED current vs. battery and output voltages. At about 8.5V, the LED starts to dim as its current is reduced. When the battery voltage reaches 7V, the LED current is only 120µA, and falls to zero just at the point where the regulator drops out. The above was taken from a simulation, but reality is actually slightly better.
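For contrast with the JFET version graphed above, here's how the LED current of the plain resistor version (Figure 1) varies with battery voltage, using the earlier example values (R1 = 1.5kΩ, Vf = 2.2V); this steady fall is exactly what the current source removes:

```python
# LED current of the plain resistor version (Figure 1) across the battery's
# life, using the earlier example values (R1 = 1.5k, Vf = 2.2 V, 5 V rail).
# The JFET current source in Figure 2 holds the current near-constant instead.
R1, V_LED, V_OUT = 1500.0, 2.2, 5.0

for v_batt in (10.0, 9.0, 8.4, 7.5, 7.3):
    i_led = max(0.0, (v_batt - V_LED - V_OUT) / R1)
    print(f"{v_batt:4.1f} V battery -> {i_led * 1e3:4.2f} mA")
```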
It's obvious that additional complexity can give better results, but it also means that you need to select the LED and the JFET and/or the resistor. The JFET should have a VGS (off) of no more than 1V, or the LED dropout voltage will be too high. Whether it's worth the extra fuss depends on your expectations and willingness to experiment. It should be apparent that the LED current still flows to the load, so no current is wasted. The solution with the JFET is more 'elegant' because the LED brightness doesn't change much until the battery is close to being 'flat'.

Figure 1 idea submitted by B Fraser (first initial used by request).
Datasheet ...
Elliott Sound Products - AN-020
Zero-crossing detectors are covered in detail in AN-005, but peak detectors are a different animal altogether. In this case, I'm referring to circuits that provide a synchronisation pulse at the peak of an AC waveform, rather than detecting (and holding) a peak voltage level. In this sense, peak detectors are not very common. In reality, it's difficult to detect the true peak of a waveform with any precision, because it's rather poorly defined. Despite this, even a poorly defined pulse may be quite sufficient in some cases.
The question will no doubt be asked: "why would anyone need to detect the peak?". I admit, it's a rather unusual requirement, but it is something that's needed in some classes of test equipment. An example is the Inrush Current Tester, where it's necessary to be able to apply mains power at the very peak of the AC mains waveform. There are countless peak detectors to be found on-line, but all that I saw were designed to capture the peak voltage, not provide a signal at the instant when the waveform arrives at its peak amplitude.

The comparator plays a vital role - without it, there's no easy way to obtain a reference signal from an AC waveform.

The reader may also wish to have a look at the zero-crossing detector described in the article about Comparators, which includes a circuit that can perform very well with audio frequencies up to at least 10kHz. It's more complex than the ones shown here, but is also a great deal more versatile. It's easy to get a pulse duty cycle of less than 2% at 1kHz. Similar results can be obtained from some of the other circuits described here, provided a fast enough comparator is used.

In a way, you could consider these circuits (particularly the Fig. 3 version) as a rather over-complicated leading-edge dimmer. While this is certainly true, the circuit is designed for fairly high accuracy, and was originally devised for the inrush tester linked above. The circuits described are intended for fairly specific requirements, but I'm sure that at least a few people will have a need for them. I certainly did, and I seriously doubt that I'm alone.
Figure 1 shows a simple peak detector, and although it will work, the peak is not well defined. The pulse width is about 1.5ms with a 50Hz waveform, but a half-cycle at that frequency is only 10ms, so the mains voltage (based on 230V RMS) will vary by almost 17V during the detection period. That might not sound like much with a peak voltage of 325V, but it's certainly not a true indication of the peak amplitude. Although it has almost zero phase inaccuracy, that is largely because the pulse is so broad that any inaccuracy is completely swamped. The comparator function is handled by transistor Q1 - very basic, but adequate for the job if you don't need high accuracy.
The circuit is also sensitive to level, and for acceptable performance the AC waveform needs to be of reasonably high amplitude - 12-15V AC is typical. If the voltage is too low, the pulse width will increase. The arrangement shown actually gives better performance than the version shown in Project 62 and elsewhere on the Net. In case you were wondering, R1 is there to ensure that the voltage falls to zero - stray capacitance is sufficient to stop the circuit from working without it.

The pulse width of this circuit (at 50Hz) is about 1.9ms. The problem is that at 50Hz each half cycle takes only 10ms (8.33ms at 60Hz), so the pulse width is almost 20% of the total period. If you need true peak detection, this is nowhere near good enough. A simple circuit such as this also has issues when (not if) the mains voltage changes. Even a small drop of voltage can cause the circuit to miss a pulse, because the capacitor (C1) will retain some voltage. This is a requirement, as the circuit works by detecting the peak of the AC waveform as it momentarily exceeds the voltage across C1.

If the mains voltage suddenly increases or decreases, the pulse-width is affected. A voltage increase will cause the 'detection' pulse to be wider, and vice versa. If you try to make the circuit more accurate (e.g. by increasing the value of R4), it becomes more sensitive to changes of the incoming mains voltage.

The waveforms are shown above. The full-wave rectified voltage exceeds the voltage across C1 briefly for each half-cycle, turning on Q1 and producing a pulse that roughly corresponds to the peak of the waveform. It's quite obvious that it's not a precision detector, because the pulse is far too wide. To improve matters, greater complexity is necessary, relying on a more precise reference - the waveform's zero-crossing point.

There aren't a great many ways to make a mains peak detector. For best accuracy, a reference is required, and this will come from a zero-crossing detector. The simple circuit shown above works, but it's far from a precision detector. The next circuit is powered directly from the mains, but it can also be powered from the secondary of a transformer. Isolation is necessary for mains voltage operation, and this is provided by an optocoupler.
WARNING - The circuits described below involve mains wiring, and in some jurisdictions it may be illegal to work on or build mains powered equipment unless suitably qualified. Electrical safety is critical, and all wiring must be performed to the standards required in your country. ESP will not be held responsible for any loss or damage howsoever caused by the use or misuse of the material provided in this article. If you are not qualified and/or experienced with electrical mains wiring, then you must not attempt to build the circuit described. By continuing and/or building any of the circuits described, you agree that all responsibility for loss, damage (including personal injury or death) is yours alone. Never work on mains equipment while the mains is connected!
The circuit may look complex, but it's quite straightforward. It uses a single dual comparator, with the first stage being a zero-crossing detector. See AN-005 Zero-Crossing Detectors (ZCDs) for a complete explanation of these circuits. The ZCD discharges C2 at each zero-crossing, and it charges via VR1. This is a trimpot, so the exact timing can be set. For 50Hz mains, the peak occurs 5ms after the zero-crossing, or 4.17ms for 60Hz. The trimpot is set so that the output of U1B goes high at exactly the peak of the waveform. Note the 'isolation barrier', which separates mains (hazardous) voltages from 'safe' low voltage.
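The timing of the C2/VR1 ramp can be sketched numerically. Note that the supply rail and C2 value below are my assumptions for illustration (they are not given in the text); only the 3.3V reference and the 5ms/4.17ms peak delays come from the article:

```python
import math

# The ramp charges exponentially towards the rail and trips the comparator at
# the zener reference, so the delay is t = -R*C*ln(1 - Vtrip/Vsupply).
# VR1 is then the value that places the trip exactly at the mains peak.
V_SUPPLY = 12.0    # assumed rail feeding VR1 (not from the schematic)
V_TRIP = 3.3       # comparator reference set by ZD2
C2 = 100e-9        # assumed timing capacitor (not from the schematic)

for freq, t_peak in ((50, 5.0e-3), (60, 1 / 240)):
    r_vr1 = t_peak / (-C2 * math.log(1 - V_TRIP / V_SUPPLY))
    print(f"{freq} Hz: delay {t_peak * 1e3:.2f} ms -> VR1 about {r_vr1 / 1e3:.0f}k")
```

In practice the exact values don't matter, which is the point of the trimpot: VR1 is simply adjusted until the output pulse lands on the peak.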
The two important waveforms are shown next. The green trace is the voltage across ZD1, which is the zero-crossing signal. When that goes low, C2 is discharged, but the period is very short. As soon as the zero-crossing signal goes high again, C2 charges via VR1. When the voltage across C2 exceeds 3.3V (set by ZD2), the output of the timer goes high. This rapid increase of voltage is differentiated by C3 and R7, and drives the gate of Q1 high. The pulse duration is about 70µs. This pulses the LED in optocoupler U2, causing a positive pulse at the output.
VR1 is adjusted to get the output pulse to match the peak of the AC waveform. By using the mains zero-crossing as a reference, far greater accuracy is possible than by any other means. This principle is used in the Inrush Current Test Unit, which allows the mains to be turned on at the zero-crossing point or at 90° (the peak of the AC waveform). The circuit can easily detect the 325V peak of a 230V AC waveform with an error of less than 1V (about 120µs).
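It's easy to see why a small timing error costs so little amplitude accuracy, because the waveform is almost flat at its crest:

```python
import math

# Amplitude error at the crest of a 230 V RMS (325 V peak) mains cycle for a
# given timing offset.  Near the peak, v = Vp*cos(w*dt), so the error is
# Vp*(1 - cos(w*dt)) - tiny, because the waveform is nearly flat at the top.
V_PEAK = 325.0
OMEGA = 2 * math.pi * 50      # 50 Hz mains

def amplitude_error(dt_seconds):
    return V_PEAK * (1 - math.cos(OMEGA * dt_seconds))

err = amplitude_error(120e-6)
print(f"120 us timing error -> {err:.2f} V amplitude error")   # ~0.23 V
```

A 120µs offset therefore costs well under 1V, comfortably inside the accuracy claimed above.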
+ +The circuit is also easily adapted for low voltage operation, in the same way as Fig. 1. There are very few changes needed, and while it's safe to work on you need a transformer. Despite any misgivings you may have, a transformer's output voltage is in phase with the input, so there's no displacement. The circuit is shown with a 15-0-15V transformer, but a single winding can be used, substituting a bridge rectifier for the full-wave version. It can be operated from any voltage you like (within reason), but if the zener diode voltages are changed the timing changes as well.
+ +You can build and test the Fig. 5 version with a (safe) low voltage, then simply power it directly from the mains with a series resistor arrangement as shown in Fig. 3. Because the circuit has a slightly higher current than the Fig. 3 version, the series resistance has to be lower. For 230V, a pair of 47k, 1W resistors in series will be fine for 230V, and only one is needed for 120V. The only other change is to add the optocoupler and change R9 to 1k (or less for more current). The optocoupler LED is wired in series with R9.
+ +The waveform from the output should look like that shown in Fig. 6 (or Fig. 7), with very narrow pulses that are aligned with the peak voltage by means of VR1. The red trace is in mA through the optocoupler, and the rectified voltage waveform is in green. The current peaks are only ~250µs wide, with the leading edge aligned with the AC peak. This can be used to trigger a switching circuit or anything else that needs a mains peak reference.
I built the circuit shown above to test and verify the circuit's operation. I didn't have any LM393 comparators to hand, so I used an LM358 dual opamp. The other (small) changes were simply a matter of convenience (C3 & R6). The circuit behaves very well, and there is no sign of instability. In short, the simulated and physical circuits performed virtually identically. The next waveform (violet trace) was captured across R6, and it should be apparent that this is more than good enough to trigger the MOSFET driver stage.

The capture was taken using the Fig. 7 circuit, but without the MOSFET and optocoupler. It was powered from a 10V AC source (ignore the voltage shown for the yellow trace, as that's a separate transformer voltage for reference). The peak detection pulse (violet trace) can be shifted across the mains peak, but because it's flat-topped the exact position isn't critical. The LM358 opamp instead of the LM393 comparator saves one resistor but adds a diode. Despite the rather dismal performance of the opamp (compared to a comparator), the circuit performs perfectly.

The negative pulses seen on the violet trace indicate the point where the ZCD discharges the timing capacitor. As you can see, these pulses are not at the zero-crossing, because no attempt was made to use a 'precision' ZCD, as it's not necessary. It only needs to be predictable and repeatable, and while you only see a small number of pulses in Fig. 8, I observed the circuit for some time. The pulse positions stayed exactly where they were supposed to be for as long as I cared to run the test. Due to the normal 'flat-top' mains waveform (which changes during the day), there is some leeway. There is no significant shift in the pulse position when the mains voltage is varied, even if well beyond the variations normally seen (nominally ±10%).

Current drawn from the mains is minimal (about 1.1mA RMS). The voltage on U1.2 is about 5.1V (set by ZD2), but the LM393 comparator and the LM358 opamp can have their inputs at (or even slightly below) ground. The output pulses low when Pin 3 falls below 5V, and discharges C2.

The 10µF filter cap for both circuits looks as if it's far too low, but it can maintain the voltage for the very brief current pulses. Be careful of stray capacitance around the comparator's input pin. If it's more than around 50pF the circuit may not work, and R4 will need to be reduced a little. Lower values give a wider pulse. The LED current will be about 3mA with the 1k series resistor for the optocoupler, and it can be reduced if you need more current.
These circuits are provided so you can experiment, and the waveforms (other than Fig. 8) are from the simulator. Fig. 7 has been tested on the workbench. A scope capture is the final proof of operation. If used with 120V, 60Hz, reduce the value of R1 and R2 (only half the resistance is needed, and two resistors in series for each isn't needed).

There are limited applications for peak detectors, and they can only work with a fixed frequency because the 'peak' pulse is determined by a timer. Nevertheless, I hope that someone will find the circuits to be useful. Despite the limited application for something like the circuits described, the usefulness cannot be denied based on my inrush tester, which has allowed me to produce high resolution 'scope captures of inrush current under different conditions. It's one thing to hypothesise and/or simulate, but another to run the tests on the workbench and provide real-world measurements.
These circuits are ESP originals, and there's virtually nothing else even remotely similar described that I could find. The Fig. 3 circuit was inspired by the circuit I developed for the Inrush Tester, and also works as a 'proof of concept' for the circuitry shown in that article.
+ + +![]() | + + + + + |
Elliott Sound Products | Application Note Index
Page Last Updated - May 2022
These application notes are presented as a means of making useful circuits and sub-circuits available, along with some information about how they work, test results (where applicable), etc. Some of these are adapted from the projects page, as their application has the potential to be broader than indicated in the project itself.
Few of these (already published or yet to be published) are original, although some have been adapted and changed to the extent that the original may barely be recognisable. As always, if you have a good circuit idea feel free to submit it (along with any reference material).
No. | Description | Date | Flags
AN-001 | Precision rectifiers. Half and full wave types for signal processing, instrumentation, etc. | Feb 2010 |
AN-002 | Analogue metering amplifiers | Jun 2005 |
AN-003 | Simple switchmode supply for Luxeon Star LEDs | Jun 2005 |
AN-004 | Car dome light extender - make the dome light stay on for a while after the car door is closed | Jun 2005 |
AN-005 | Zero crossing detectors and comparators, unsung heroes of modern electronics design | Jan 2011 |
AN-006 | Ultra Simple 5V Switchmode Regulator - voltage regulator version of AN-003 | Jun 2005 |
AN-007 | High Power Zener Diode, boosting normal zeners to allow high power usage | Jun 2005 |
AN-008 | How to Use Zener Diodes, the things the data sheets do not always tell you | Jun 2005 |
AN-009 | Versatile DC motor speed controller (and where to get high power geared motors) | Jul 2005 |
AN-009/2 | DC motor speed controller (Part 2) | Jul 2005 |
AN-010 | 2-Wire/4-Wire converters/hybrids (Telephone) | Feb 2009 |
AN-011 | 4-20mA Current Loop Signalling Interfaces | Feb 2011 |
AN-012 | Peak, Average and RMS Measurements | Jul 2016 |
AN-013 | Reverse Polarity Protection - A collection of different ways to protect electronic circuits | Jan 2017 |
AN-014 | Peak Detection Circuits | Mar 2017 |
AN-015 | Input Overvoltage Protection Circuits | Apr 2018 |
AN-016 | Measuring Ultra-High Resistance With Existing (Cheap) Equipment | Apr 2019 |
AN-017 | DC Detectors For Loudspeaker Protection (also usable as zero-crossing detectors) | Jun 2019 |
AN-018 | Ultra-Low Leakage Diodes - Not as hard to find as you might think | Aug 2019 |
AN-019 | 'Zero Power' LED Battery Condition Indicator | Jan 2022 |
AN-020 | Mains Peak Voltage Detectors. Unusual (but useful) circuits to detect the peak of the AC mains waveform | Jan 2022 |

No. | Description | Date | Flags
AN-166 | Basic Feedback Theory (Philips Semiconductors) | Dec 1988 |
AN-1000 | Mounting Guidelines for the SUPER-220 - Transistor Mounting Techniques (IR) | Unknown |
AN-72 | A Seven-Nanosecond Comparator for Single Supply Operation (Linear Technology) | May 1998 |
Each application note may have one or more 'flags'. These indicate the status of the app. note, and are as follows ...
New | The design (or update) is less than 2 months old (or thereabouts)
Warning | Mains wiring is involved, and is potentially dangerous - heed all warnings! Note that other app. notes may also need a power supply which also requires mains wiring
Date | The page was added or updated on the date shown
Updated | The appnote has had an update since original publication
External | Link to external site
ESP reserves the right to change or update application notes, projects and articles without notice, so it is important to be aware that a change may have been made. You should always watch for updates of previously published items. Do not build any of the circuits presented here without checking for updates first. An 'update' symbol indicates a recent addition or update.
Please see the ESP disclaimer for important information about this site and the contents thereof.

Please note that these application notes are not supported, so please do not ask for assistance or explanations. All circuits are checked for accuracy, but it is not guaranteed that they will work for your application. If you use application notes published by semiconductor (or other) manufacturers, you don't e-mail them for help - the same applies here.

Submissions are welcome, but unlike magazines where they may offer prizes or cash for submissions, all I can offer is wide distribution of your idea. Please ensure that any submission is accompanied by full disclosure of references - do not try to claim the work of others as your own.
Elliott Sound Products | Voltage & Frequency
Initially, it all seems simple enough. You buy a piece of equipment from an on-line seller in another country, and expect that it should have the necessary switching to allow you to set the voltage to suit the mains supply where you live. With globalisation being the key term thrown around these days, you'd naturally expect that there should be few (if any) problems.
If the country you bought the gear from is Australia, Europe or the UK (but the equipment was built elsewhere) you might get lucky, but if it's equipment that's made in the country of origin you may not. Buying from the US or Canada will often cause problems, because the 'export model' is generally not sold locally, so it will be made to operate only with the US mains voltage and frequency.

While the common answer is to just get a step down transformer to reduce your local mains (say 230V) to 120V, this may not solve the problem at all, and may introduce serious safety risks as well as the possibility of transformer failure. Unfortunately, the simple (and common) answer fails to consider many different possibilities, some of which may place the user and/or the equipment at considerable risk. One of the most common requirements is that people want to be able to use equipment from the US in Australia, Europe, etc.

The step-down transformer is not straightforward, although it initially seems that nothing could be simpler. Very few people seem to appreciate the various things that can go wrong, even those with technical training. 'Information' from forum sites is almost always either wrong, overly simplistic or misguided. A small number of forum posters will understand the risks, but it's impossible for the average person to determine who is right and who's not.

Voltage can be very confusing. There are so many different standards worldwide, and most people are unaware that the quoted voltage is nominal. It can vary widely from that claimed, and depends on whether you are in the city, close to an industrial complex, near a distribution transformer, in a rural area, etc., etc. In Australia we used to have 240V, but that was 'changed' to 230V, except that for most installations nothing changed at all. Much the same has applied in other countries - especially the US.

US mains voltage is regularly stated to be 110V, 115V, 117V and 120V. Various changes over the years have occurred, and the nominal voltage is supposed to be 120V. Don't be at all surprised if you measure any or all of the voltages quoted ... at the same wall outlet but at different times of the day! This is completely normal.

The mains in much of Europe used to be 220V, but is now claimed to be 230V, and fluctuations are just as common there as anywhere else. The mains voltage in many countries outside of the so-called developed nations can often be an even greater lottery, and even Japan has dual standards - 100V, at a frequency of 50 Hertz in Eastern Japan (including Tokyo, Yokohama, Tohoku, Hokkaido) and 60 Hertz in Western Japan (including Nagoya, Osaka, Kyoto, Hiroshima, Shikoku, Kyushu). It may be claimed that "this frequency difference affects only sensitive equipment", but that is a gross over-simplification - the frequency difference can be highly significant. This is covered in more detail below. So, voltages worldwide vary widely [1] and you will rarely measure the claimed voltage.

Unfortunately, many people will buy equipment first, assuming that conversion is 'easy'. Without conducting tests, it's actually very difficult to be certain that a conversion will work at all. US (60Hz) equipment may or may not work with 50Hz mains, and it is usually impossible to find out unless it's tested at the lower frequency, or someone else with an identical piece of kit has posted factual information about the end result. Information is everywhere, but a significant amount is not associated with reality.
The first thing that you need to determine is whether the voltage needs to be increased (step-up) or decreased (step-down). The latter is more common, but if (for example) European equipment is to be used in the US, a step-up transformer will be needed. Many European made goods (or goods intended for Europe) include multiple voltage selection, but some don't. Some new equipment uses an auto-ranging switchmode power supply. These can generally work with any mains voltage from 100V or less up to 240V without need for adjustment - but not always!

Where it is decided that a step-down (or step-up) transformer is required, these are generally available with a limited number of output voltages - usually one! While this might be alright with goods intended for Australia/NZ, Europe or the UK, you might run into problems with equipment designed specifically for the US domestic market. The reason for this is rather obscure, not well understood by most people, and is explained below. The problem is frequency!
Not all transformers are created equal, and even those that seem to perform exactly the same task can be very different. Combine on-line auctions, equipment made at rock-bottom prices in Asia, and uneducated sellers who don't actually understand what it is they sell or how it might be dangerous, and you have a potentially lethal mix.
Ideally, a step down transformer will use separate windings for the primary (typically 230V) and secondary (120V). The protective earth connection will be continuous, right from the 230V plug through to the transformer case and then to the 120V outlet(s) provided. The primary and secondary windings are separated electrically, and a test with an ohmmeter will indicate that there is no electrical connection between the two windings.
A complete wiring diagram is shown in Figure 1, and by using separate windings there is an isolated and comparatively safe secondary winding. This is extremely important if you purchase vintage US electronic equipment that happens to use a 'hot chassis'. This term refers to gear that does not use a mains transformer of any kind, and simply connects directly to the mains. This often meant that the chassis was live (connected directly to the mains active). With 120V gear and a bit of insulation this was considered 'safe'. As Arthur Dent said in The Hitchhiker's Guide to the Galaxy, "Ahhh. This must be some strange new meaning of the word 'safe' that I was previously unaware of." (Or words to that effect.)
Figure 1 - Properly Wired Step-Down Isolated Transformer
The diagram above shows the only genuinely safe form of step down transformer. The isolated windings mean that no part of any insulation designed for 120V can be subjected to 230V (or more). While the fuses shown are not strictly needed in both primary and secondary, they are cheap insurance. Ideally, the transformer will also be protected by a thermal switch that will remove power if it gets too hot. While electronics engineers, technicians and some enthusiasts will - hopefully - always make sure that the load is appropriate, the general public will not. It's not their fault - how is someone who knows nothing about electricity going to understand what VA means? Do you know what it means? If not, I suggest that you read the beginners' articles on transformers.
Compare Figure 1 with an autotransformer as shown in Figure 2. If wired correctly, these are a much cheaper (and usually smaller) alternative, and they are generally 'safe' (using the "strange new meaning" described above). There is a problem though, being that the primary and secondary are one and the same - the low voltage is simply a tapping on the main (primary) winding. Autotransformers are smaller, lighter and usually cheaper than an isolation transformer, so people like them. An autotransformer has a direct electrical connection (of usually only a few ohms) between the high and low voltage sections - there is NO isolation whatsoever.

There is some doubt as to their legality if sold as an appliance in their own right - in Australia an autotransformer cannot be certified because there seems to be no classification. In Europe they may very well be an illegal appliance, but they are sold anyway, especially at on-line auction sites. The situation will vary from one country to the next, but potentially lethal models can be purchased on-line from anywhere in the world. In industrial applications auto transformers are common, but they are a fixed installation and not little boxes that anyone can use with a home appliance.

Until some form of global certification scheme exists that describes (and enforces) what can and cannot be done, anything is possible. There are real dangers with auto transformers when used as a stand-alone product, and IMO they should be banned worldwide. I do not recommend that anyone wait for such regulations - it may never happen.
Figure 2 - Properly Wired Step-Down Autotransformer
While an autotransformer may be considered 'safe' if properly wired, there was a tale (including photos) in a local electronics magazine (Silicon Chip, February 2010 issue), where autotransformers sold on a well known auction site were not only wired incorrectly, but were not even earthed. The transformers in question were rated at 300VA, and the active lead was common to both input and output as wired. This wiring is potentially lethal, and the transformers carried no Australian approval markings. I filed a report against the listing ... no action was taken.
The biggest problem is that you usually don't know what you're getting until it's too late. If the seller doesn't understand the difference between a transformer with isolated windings and an autotransformer, then asking the question won't help at all. If the item was made in Asia, then all bets are off - it may be 100% ok, or it may be incorrectly wired, dangerous and illegal. If a supplier cannot tell you for certain whether the transformer is an autotransformer or uses separate windings, don't buy it! If the seller doesn't know, and doesn't know how to find out, then s/he has no business selling the product.
In general, I strongly recommend that any step-down transformer used should use isolated windings (Figure 1 arrangement). At least one fuse is mandatory, and the mains input plug must not be the same as the output socket. If the seller doesn't know what kind of transformer is used - go elsewhere. Safety comes first, and auto transformers are not safe for use by the general public - especially with equipment that may have a live (hot) chassis. This was very common with vintage US made equipment - including some small guitar amplifiers!
    An example is the Kay 703-C vintage guitar amp. Although the schematics I've seen show an 'isolation transformer' it's no such thing - it's only a filament transformer for the first valve. 'Isolation' is afforded by a 47k resistor bypassed by a 500V DC capacitor (which will fail with 230V applied). An amplifier like this, connected through a dubious autotransformer, is a potential killer. Should the chassis (and therefore the earth of the input jacks) become live, you have 120V or worse - 230V - connected directly to the strings on your guitar via an internal connection in the guitar. If you now pick up a microphone, or touch some other earthed equipment, you are very likely to die right then and there. Alarmist? Not at all - this is very real!
Some transformers have 'universal' connectors that will accept US, European, Japanese, UK, Australian/NZ (etc.) plugs. While perhaps convenient, the legality of these connectors is dubious. They may also pose a significant risk if the transformer is a step-up type (120V to 230V for example), because they will accept a US style 120V mains plug. It is important that any transformer used has a 3-pin earthed output connector, as it is often a requirement (or at least desirable) to be able to use a proper 3-pin earthed (grounded) mains plug.
Note: Some transformers (seen for sale in Australia, but no doubt elsewhere as well) have both a 240V and 110V output at the front, with both using the same connector! This is asking for trouble ... a momentary lapse of concentration that causes your 120V equipment to be accidentally plugged into the 220/230/240V output may cause irreparable damage, well before any fuse or other protective device opens. This qualifies as an extremely bad idea, and such products should be avoided like the plague.
To assure the safety of yourself and your family, any transformer you purchase should be checked by an electrician or qualified technician to ensure that it is properly earthed, is a true isolation transformer, and is likely to be able to provide the claimed power without overheating. Products purchased from reputable shop-front (not on-line) dealers are likely to be safe, but even this cannot be guaranteed.
Equipment that has any claims to audiophile status needs to be examined very carefully. Safety may be compromised in the interests of 'better sound' - although most such claims are utter nonsense. Your amplifier (and its internal electronics) have no idea if there is an external transformer connected, and provided the transformer is properly sized, the sound quality will not be compromised simply because there's another transformer in the circuit. A high price, silver wire and allegedly esoteric components do not mean that the product works any better or is any safer than something you can buy at an on-line auction for a fraction of the price.

These have exactly the same requirements and risks as step-down types. Again, both isolated winding and autotransformer types will often be available, and wherever possible the isolated winding type should be used. Outlets must be different from the normal power outlets that are used where you live to prevent accidental connection of 120V equipment to 230V. Any 230V outlet should always be a 3-prong type, with the earth (ground) pin connected to a secure safety earth.
US residents should avoid the temptation to use the commonly available 240V split-phase connection. In some cases it may be illegal to use it for connection of foreign equipment. I also have personal experience with equipment designed for 220V that's operated at 240V - it often blows up! All internal voltages are higher than they should be, and some parts are not capable of withstanding the extra voltage.
It is far better to use a transformer that converts 120V to 230V to suit the manufacturer's rating on the equipment. There is nearly always a safety margin, but only around the nominal nameplate voltage.

Where a step-up transformer is used, some long-held habits may need immediate revision. I know that some people in the US use a 'finger test' to find out if a connection is live - it's a very bad idea, but the relatively low voltage means that electrocution is unlikely. This is absolutely not the case with 230V mains. This voltage is far more dangerous than 120V, and a simple finger test may prove fatal. Those who work with 220-240V systems all the time know just how dangerous it is, but if you're not used to it, extreme caution is essential.

One thing that residents of the US and Canada (or other 60Hz countries) don't need to worry about too much is the frequency. Any 50Hz equipment will run cooler at 60Hz and is not usually a problem - other than appliances that use induction motors!
A 50Hz 3,000 RPM motor will run at 3,600 RPM with 60Hz, and the same ratio applies for other speeds. This may cause potentially dangerous problems with many appliances, and in general cannot be recommended. Other motor types are usually not affected, but you may not know - nor be able to find out - what kind of motor is being used.
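The speed change follows directly from the standard synchronous-speed formula, RPM = 120 × f / poles (the formula itself is textbook theory, not quoted in the article):

```python
def sync_rpm(freq_hz, poles=2):
    """Synchronous speed of an AC motor: 120 * frequency / number of poles."""
    return 120 * freq_hz / poles

print(sync_rpm(50))   # 3000 RPM - a 2-pole motor on 50Hz mains
print(sync_rpm(60))   # 3600 RPM - the same motor on 60Hz runs 20% faster
```

The same 6:5 ratio applies to 4-pole (1,500/1,800 RPM) and slower motors.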
Although many sellers rate step-up or step-down transformers in Watts, this is incorrect. The correct term is VA - Volt Amps. Many typical loads may draw significant current, but the actual power (in Watts) may be quite low. It isn't possible to give any kind of definitive answer, since different products can vary widely. In general, it's safe to assume that only about half of the current drawn represents real power, with the remainder being reactive or non-linear current. This is the basis of power factor, but a detailed discussion is outside the scope of this article.
If we assume that the above is reasonably correct (it does err on the side of safety), then the maximum power consumption in Watts quoted for the equipment can be multiplied by two to give a safety margin. In reality, it is sometimes possible to use a step-down transformer that's rated for somewhat less than the maximum power for some audio equipment, but this relies on several things and has consequences ...

Although you may get 'advice' that this is perfectly alright, my recommendation is very simple ... DON'T DO IT. The only exception is if an experienced technician has examined and tested your imported kit and determined that nothing bad happens (or is likely to happen) under all probable operational conditions - including typical and/or atypical faults or failures. Few sensible technicians will be willing to be quite so rash, so my original recommendation should remain ...
    An external step-up/down transformer should be rated for double the maximum claimed power draw of the connected equipment.
So, if your amplifier, CD player, kitchen blender, mixer, power saw, etc., etc. claims (say) 500W maximum power, use a 1kVA (1,000 Volt Amps) transformer. Yes, the transformer will be larger, heavier and more expensive, but it will also allow the appliance to operate as it normally would if connected to the design value of supply mains. The transformer's additional voltage drop will be small, and you have a safety margin that allows for power factor, momentary overloads and switch-on surge current (aka inrush current). The voltage to your appliance will be reasonably stable - not quite as good as if it were connected to the designed supply voltage, but reasonably close.
Using a transformer that is larger than that suggested will generally provide little or no additional benefit. The difference will be measurable, but it is unlikely to be noticed in use. This applies regardless of the type of appliance.

Now comes the real can of worms. Many people believe (and will tell you) that the small frequency difference (50Hz vs. 60Hz) is insignificant, but this is not true. Many products intended solely for the US markets will have the transformer made for 60Hz. This has the advantage of making the transformer smaller than it would be if it could also handle 50Hz. Indeed, an advantage of 60Hz mains is that all transformers and induction motors are smaller than the 50Hz equivalent. The alternative is that in some cases, a 60Hz and 50Hz transformer may be the same physical size, but a 60Hz only version can use lower grade (and therefore cheaper) steel laminations.
If a transformer is designed specifically for 60Hz, and understanding that this makes for a smaller and/or cheaper transformer than would be the case if it could also handle 50Hz, why would anyone assume that this 60Hz tranny will work fine at 50Hz? The answer (predictably) is that it will not. An initial quick check will usually not show the problem ... it may need to be left on for a while before anything shows up. The problem is heat - the transformer will get (much) hotter than normal, and may easily reach a dangerous temperature that will cause failure.
    Some years ago, a company I worked for (in Australia, a 50Hz country) took delivery of six very large and expensive 48V power supplies for telecommunications use. These were made in the US, and used ferro-resonant transformers that were designed for 60Hz. I discovered the problem and advised management, but it was decided that I was being 'alarmist'. The first unit burnt out within 2 weeks of being installed, filled a large computer room with smoke, shut down a call centre and caused a great deal of embarrassment for all. After this, management listened when I said there was a problem!
Part of the design process for a transformer is to ensure that there are enough primary turns to prevent the steel core from saturating. This depends on the voltage and the frequency. If the frequency is reduced (and 10Hz or 16.6% makes a big difference), there are no longer sufficient turns to prevent saturation. When the core saturates, the primary winding of the tranny draws much more current from the mains than normal - not just 16% more though, it can easily exceed 100% more.
The result is that the transformer overheats, and will eventually fail. Even most technicians will be unable to tell you that the transformer is saturating, because they either don't know what to look for, or don't have the equipment needed to look at the current waveform. There is actually no difference between decreasing the frequency or increasing the voltage by the same ratio. This is shown in Figure 3, where the voltage was increased from 240V to 270V - a mere 12.5% change.
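The equivalence between lowering the frequency and raising the voltage follows from Faraday's law: for a sine wave, V = 4.44 × f × N × A × B, so peak core flux is proportional to V/f. A quick check (my own sketch, using the article's figures) shows why a 12.5% voltage increase and a frequency reduction behave so similarly:

```python
def relative_flux(volts, freq_hz, volts_ref=240.0, freq_ref=50.0):
    """Peak core flux relative to a reference operating point.
    Flux is proportional to volts/frequency (from V = 4.44 f N A B)."""
    return (volts / freq_hz) / (volts_ref / freq_ref)

# 240V -> 270V at 50Hz (the Figure 3 experiment):
print(relative_flux(270.0, 50.0))                              # 1.125 (12.5% more flux)

# A transformer designed for 240V/60Hz, run on 240V/50Hz mains:
print(relative_flux(240.0, 50.0, volts_ref=240.0, freq_ref=60.0))  # 1.2 (20% more flux)
```

Because the core is designed to run close to saturation, even a 12-20% flux increase pushes it over the knee of the B-H curve, which is why the magnetising current rises far more than proportionally.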
Figure 3 - Magnetising Current at 240V and 270V
The oscilloscope shows voltage, but this is the output from a current transformer. At 240V, magnetising (idle) current is 32.3mA (which reads as 3.23V RMS), and the transformer will dissipate about 7.7W. A 12.5% increase to 270V increases the magnetising current to 69.6mA, or 18.8W - well over twice the normal idle current and power. Reducing the frequency by 12.5% will have almost exactly the same effect. Any transformer designed specifically for 60Hz will draw far more idle current than normal at 50Hz [2].
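The quoted figures can be cross-checked from V × I. Strictly this gives VA rather than true Watts (magnetising current is far from sinusoidal), so small differences from the article's numbers are expected; the arithmetic below is mine:

```python
# Magnetising current measurements from Figure 3 (volts: amps)
readings = {240: 32.3e-3, 270: 69.6e-3}

for volts, amps in readings.items():
    print(f"{volts}V: {volts * amps:.1f} VA")
# 240V: 7.8 VA (article quotes ~7.7W)
# 270V: 18.8 VA - well over twice the 240V figure for a 12.5% voltage rise
```

The disproportionate jump (12.5% more voltage, roughly 2.4× the VA) is the signature of core saturation.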
Since many modern products will already be operating right at the very limits (smallest possible transformer, etc.), a reduction of mains frequency will almost certainly push them beyond the point where failure is inevitable. It's no longer a matter of 'if' it fails, but 'when'. Large 60Hz transformers may also growl with a 50Hz supply, and this can be loud enough to make a hi-fi amp unusable because of the mechanical noise. Electrical noise is also possible (i.e. noise from speakers), because stray magnetic flux becomes a major problem once the core is saturated.
There is no cure for the above-mentioned issues, other than replacing the power transformer with a 50Hz version. The replacement will be expensive - assuming that the transformer is even available from the manufacturer. If not, you have an expensive paperweight that's of no use to anyone. It might be possible to operate the transformer from a lower voltage to avoid damaging saturation, but this approach cannot be recommended because it often leads to quite unacceptable consequences - serious loss of power (for an amplifier), internal supply voltages that are no longer regulated, etc., etc.

Note that operating a transformer designed for 50Hz mains at 60Hz reduces the idle current and power, so the transformer should run a little cooler. Therefore, products that are designed for 50Hz operation (destined for anywhere in the world apart from the US and Canada) will rarely have a problem with mains frequency, provided the supply voltage is correct. Inadequate design can still cause failure though.
Appliances that use motors cause additional problems. While 'universal' motors (as used in power drills, most blenders and other small appliances, vacuum cleaners and the like) don't care about the frequency at all, the same is not true of induction motors. Induction motors are used in some (mainly older) washing machines, clothes dryers, bench drills, grinders, air compressors and many other products. A 60Hz induction motor will draw more current and run slower with 50Hz mains, and of course the opposite is true of motors designed for 50Hz. In general it's best never to consider using imported equipment that uses an induction motor designed for a different voltage or frequency. Likewise, kitchen appliances are usually subject to rigorous country (and mains supply voltage) specific electrical safety tests, so moving them from one country to another is not usually a good idea.

For larger induction motors, the sheer size of the transformer needed is usually enough to turn most people off very quickly. A motor may draw 6-10 times its rated current when started, and considering that single phase motors are readily available up to 2kW (sometimes more), that means a mighty big (and expensive) transformer. A 2kVA transformer would normally be used for a 2kW motor, because there is no real need to allow for poor power factors or non-linear current. An induction motor running at full load has a very good power factor, so VA and Watts are fairly close.
+ +Remember though - using a 60Hz motor at 50Hz will result in excessive current, lower speed and significant power loss. To convert from kW to HP (horsepower), divide power in kW by 0.746, so a 2kW motor is about 2.7 HP. Multiply HP by 0.746 to get kW. + +
Synchronous clocks and timers will usually run once the voltage is corrected, but they will either gain 20% (a 50Hz clock on 60Hz mains) or lose 16.7% (a 60Hz clock on 50Hz mains) - neither is useful.
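The arithmetic above (kW to HP conversion, and the timing error of a synchronous clock moved between 50Hz and 60Hz mains) is easy to verify. A minimal sketch in Python - the function names are mine, not from the article:

```python
# Sanity checks for the mains-conversion arithmetic discussed above.

def kw_to_hp(kw):
    """Convert kilowatts to horsepower (1 HP = 0.746 kW)."""
    return kw / 0.746

def hp_to_kw(hp):
    """Convert horsepower to kilowatts."""
    return hp * 0.746

def clock_error_percent(design_hz, actual_hz):
    """Timing error of a synchronous clock designed for design_hz
    but run from actual_hz mains (positive means it gains time)."""
    return (actual_hz / design_hz - 1.0) * 100.0

print(round(kw_to_hp(2.0), 1))                # 2.7 (a 2kW motor is ~2.7 HP)
print(round(clock_error_percent(50, 60)))     # 20 (50Hz clock on 60Hz mains gains 20%)
print(round(clock_error_percent(60, 50), 1))  # -16.7 (60Hz clock on 50Hz mains)
```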
Many new products of all types use switch-mode power supplies (SMPS) because they are efficient, cheap, light and can be truly universal. This must not be taken to mean that all of them are universal, because that is not necessarily the case. Some require a switch to change from the low range (100-130V) to the high range (200-250V), while others will work happily regardless of the input voltage. Some products made exclusively for the US market use an SMPS that is low-range only! It is sometimes possible to modify the circuit to convert the supply to a higher or lower voltage, but this requires a technician with a lot of experience with these supplies. One small mistake will cause the power supply to self destruct, often in spectacular fashion.

The common types of SMPS do not normally care about the frequency, so in general no precautions need to be taken if the frequency is changed.

There is one small problem though - many manufacturers do not disclose the type of power supply used, so you don't know if it uses a transformer or an SMPS. Most of the latest gadgets (IT products, LCD and plasma TV sets) use switchmode supplies, but the product itself may not be compatible. TV sets are mostly multi-mode now, but earlier types would only work with the exact transmission system for which they were designed. There are still many incompatibilities, so purchasing any TV or digital radio related product from overseas may be a complete waste of money.

The term 'death cap' is common in the US (where it originated), but there the risk is marginally less compared to what can happen with 230V mains. Never has an electronic part been more appropriately named, and it is imperative that it be removed - regardless of where you live and your local mains voltage. I can't even begin to imagine how anyone cannot (or will not) understand just how dangerous this component can be. Expect to find the death cap in any piece of audio equipment fitted with a non-polarised 2-wire mains cable. All such equipment should be converted to a 3-wire cable and plug. Maintaining authenticity of vintage equipment must take a back seat to safety - always!

While the death cap is mainly found in guitar amplifiers, many valve (tube) hi-fi amps also used it, although the switch (see Figure 4) was not included. The topic is too important to ignore because of the roaring trade in vintage gear at on-line auctions. Much has been written about the 'death cap', but it has to be understood that even if an amplifier is used only in the country of origin (the US or another 120V country) it should still be removed, or replaced with a Y-Class capacitor. There are some potential problems, and although in general usage it's uncommon for the cap to create a life-threatening condition at 120V, it is still the wrong type of capacitor to use from mains to chassis. Worldwide regulations stipulate that the only capacitor permitted to bridge the insulating barrier is a Y-Class type. Anything else is placing your (or someone else's) life at risk.

The situation is very much worse when the amp is used on 230V supplies, whether through an external transformer or otherwise. Anyone who claims otherwise is simply wrong - DC rated film capacitors cannot be used at 230V AC because they are not designed to withstand the dielectric stresses of continuous AC with alternating peaks of ±325V or more.
Figure 4 - Normal US Wiring and Correct Wiring of Mains Input
The problem is the capacitor itself. While some later amps use a UL listed capacitor, it's still only a 600V DC cap, and it will fail at 230V. For 230V AC mains, the only capacitor type that can legally be used for this application (active or neutral to earth) is the Y-Class. These capacitors are specially designed to withstand AC voltage and to fail-safe. A normal plastic film DC cap used at 230V AC can easily (and most likely) fail short-circuit ... decidedly unsafe, and likely to be deadly if the chassis is not properly earthed via the mains lead and wall outlet. Capacitor failure can lead to the liberation of noise and smoke - hopefully from the capacitor and not the guitar player!
Figure 5 - Typical Death Cap, 1950s Era
For use within the US, it's obviously up to the individual owner to decide on whether to rewire an old amp as shown above, or to leave it alone. The US regulations seem to be fairly lax - allowing a DC cap to connect from live to chassis, with no mandatory earth (ground) connection is really not a good idea. My recommendation is to rewire the amp with a 3-core cable and a 3-pin earthed mains plug. Anything else is simply too dangerous.
If the amp is used in a 230V country with an autotransformer, the likelihood of serious injury or death is very much greater. It's not (and never was) a major safety issue with a normal 120V AC supply, although there are still some very real risks documented on the Net. There are as many anecdotal stories about 'near death' experiences as there are claims that it's just a ploy for amp technicians to make more money. Personally, I think it's a really bad idea, and if I owned an amp with a death cap fitted it would be modified and converted to 3-wire immediately, regardless of the mains voltage.

When used outside of the US and from 230V mains supplies, amplifiers that include the 'death cap' must be rewired as shown in Figure 4. The cap and switch are removed from the circuit (the switch can remain in position on the chassis though), and the original 2-core mains lead must be replaced with a 3-core lead and an earthed plug. Once this is done, even an incorrectly wired autotransformer cannot create an unsafe condition - provided that the earth connection is robust and continuous from input to output. However, this does not mean that any old autotransformer can be used - all previous recommendations still stand. In particular, use a transformer with isolated windings. It may cost a bit more, but compared to a human life it's peanuts.

The death cap can be found in a great many US amplifiers outside the US, including some export models. Any amplifier so fitted should be modified forthwith - while the cap may have failed long ago, it's simply not worth the risk of leaving it in position. If the cap hasn't failed, that doesn't mean that it won't, and you don't want to be on the receiving end of 230V mains under any circumstances.

The only reason the death cap was ever installed in the first place was that most early US power installations did not use earthed 3-pin outlets, and most were not polarised. The switch allowed the player/ user to select the position that gave the least hum and noise - this meant the cap almost invariably connected to the neutral, and saved the 'effort' of removing the mains plug and turning it around. Polarised mains connectors or other (possibly earthed) equipment connected to the amp could easily leave the cap connected to the live 120V lead. When the chassis is earthed, the switch makes no difference, so it could be connected to active or neutral with no-one the wiser.

Death caps must be removed, and 2-wire non-polarised mains leads and plugs replaced with a proper 3-wire lead (with earth/ ground securely connected to the chassis) and a 3-pin plug. This is doubly critical when there is the slightest chance that the voltage might be 230V instead of the relatively benign 120V AC. As long as it remains in circuit, the death cap has every opportunity to live up to its name.
First and foremost, avoid autotransformers. Make certain that the step-up/ down transformer you select has separate primary and secondary windings, is earthed properly, and has a VA rating of about double the maximum expected power drawn by the appliance. As described above, this may not be necessary, but it is usually a good idea for most products.

Regarding the possible importance of frequency, it depends a great deal on the product and how it was originally designed (with/ without a safety allowance, for example). I would have liked to show the difference in idle current caused by changing the frequency, but unfortunately it requires a great deal of time and effort to set up. If anyone doesn't believe that the results shown here are real, then feel free to ignore this article in its entirety. You might be lucky, you might not. It is important to understand that some products will tolerate a lower frequency while others will not, and it's not usually possible to know beforehand which will survive and which won't. This can generally only be determined by testing the product.

There is no simple answer to the common question "Can you make it work?" when someone wants to know if it's alright to import product 'X' from overseas. There are simply too many possibilities for anyone to give a definitive answer - guesses are just that, and cannot be relied upon. This is especially true if the imported equipment is expensive and the seller doesn't know enough to be able to provide useful answers. In such cases, it may be better to avoid the item altogether, because the cost of modification may make it more expensive than the same thing purchased locally.

There are some products (especially vintage) where modifications are simply not an option because they will devalue the item. In such cases, the best you can do is hope that it will be alright. Otherwise it becomes a rather expensive display piece that can't be used.

For those for whom money is no object, motor-alternator units can be purchased (they are no longer as common as they once were though), and high-power electronic frequency and voltage converters also exist. These units range from a few hundred Watts to many kW, but are typically very expensive ... the mere fact that suppliers seem never to publish any prices gives you a good idea of the price range [3].
+ +![]() |
Elliott Sound Products | +The 555 Timer |
The 555 timer has been with us since 1972 - that's a long time for any IC, and the fact that it's still used in thousands of designs is testament to its usefulness in a wide variety of equipment, both professional and hobbyist. It can function as an oscillator, a timer, and even as an inverting or non-inverting buffer. The IC can provide up to 200mA output current (source or sink) and operates from a supply voltage from 4.5V up to 18V. The CMOS version (7555) has lower output current and also draws less supply current, and can run from 2V up to 15V.
There are many different manufacturers and many different part number prefixes and suffixes, and the timers are available in a dual version (556). Some makers have quad versions as well. The 555 and its derivatives come in DIP (dual in-line package) and SMD (surface mount device) packages. I don't intend to even attempt to cover all the variations because there are too many, but the following material is all based on the standard 8-pin package, single timer. All pin numbers refer to the 8-pin version, and will need to be changed if you use the dual or quad types, or choose one of the SMD versions that has a different pinout. Note that the quad version has only the bare minimum of pins: reset and control voltage are shared by all four timers, and it has no separate threshold and discharge pins (they are tied together internally, and called 'timing').

The 555 uses two comparators, a set-reset flip-flop (which includes a 'master' reset), an output buffer and a capacitor discharge transistor. A great many of the functions are pre-programmed, but a control input allows the comparator threshold voltages to be changed, and many different circuit implementations are possible. A block diagram is useful, and Figure 1A shows the essential parts of the IC's innards.
Figure 1A - Internal Diagram Of 555 Timer
Figure 1B shows a complete circuit diagram for a 555 timer, based on the schematic shown in the ST Microelectronics datasheet. Schematics from other manufacturers may differ slightly, but the operation is identical. There's really not much point in going through the circuit in detail, but one thing that needs to be pointed out is the voltage divider that creates the reference voltages used internally. The three 5k resistors are shown in blue so you can find them easily, and the main sections are shown within dotted lines (and labelled) so each section can be identified.
Figure 1B - Schematic Of 555 Timer
Unless you are very experienced in electronics and can follow a detailed circuit such as that shown, it probably won't mean a great deal to you. It is interesting though, and if you were to build the circuit using transistors and resistors it should work very much like the IC version. Note that there are often extra transistors in ICs because they are cheap to add (essentially free), some are parasitic, and the performance of NPN and PNP transistors is never equal. In most cases IC production is optimised for NPN, and PNP devices will nearly always have comparatively poor performance.
The standard single timer package has 8 pins, and they are as follows. The abbreviations for the various functions that are used throughout this article are included in brackets.
Pin 1 Common/ 'Ground' (Gnd) - This pin connects the 555 timer's circuitry to the negative supply rail (0V). All voltages are measured with respect to this pin.

Pin 2 Trigger (Trig) - When a negative pulse is applied (a voltage less than 1/3 of Vcc), this triggers the internal flip-flop via comparator #2. Pin 3 (output) switches from 'low' (close to zero volts) to 'high' (close to Vcc). The output remains in the high state while the trigger terminal is held at a low voltage, and the trigger input overrides the threshold input.

Pin 3 Output (Out) - The output terminal can be connected to the load in two ways, either between output and ground, or between output and the supply rail (Vcc). With the load connected to Vcc and the output low, the load (sink) current flows from Vcc, through the load and into the output terminal. With the load connected between output and common (0V), the output sources current when it is high - current flows from the output, through the load and thence to ground.

Pin 4 Reset (Rst) - The reset pin resets the flip-flop which determines the output state. When a negative pulse is applied to this pin the output goes low. This pin is active low, overrides all other inputs, and must be connected to Vcc when it is not in use. Activating reset turns on the discharge transistor.

Pin 5 Control voltage (Ctrl) - This pin is used to control the trigger and threshold levels. The timing of the IC can be modified by applying a voltage to this pin, either from an active circuit (such as an opamp) or from the wiper of a pot connected between Vcc and ground. If this pin is not used it should be bypassed to ground with a 10nF capacitor to prevent noise interference.

Pin 6 Threshold (Thr) - This is the non-inverting input of comparator #1, and it monitors the voltage across the external capacitor. When the threshold voltage exceeds 2/3 Vcc, the output of comparator #1 goes high, which resets the flip-flop and turns the output off (zero volts).

Pin 7 Discharge (Dis) - This pin is connected internally to the collector of the discharge transistor, and the timing capacitor is often connected between this pin and ground. When the output pin goes low, the transistor turns on and discharges the capacitor.

Pin 8 Vcc - The supply pin, which is connected to the power supply. The voltage can range from +4.5V to +18V with respect to ground (Pin 1). Most CMOS versions of the 555 (e.g. 7555/ TLC555) can operate down to 2 or 3V. A bypass capacitor must always be used, not less than 100nF and preferably more - I suggest 10µF for most applications.
As mentioned above, the 555 can be used as an oscillator or timer, as well as to perform some less conventional duties. The basic forms of multivibrator are the astable (no stable states), monostable (one stable state) or bistable (two stable states). Unfortunately, operation as a bistable is not very useful with a 555 because of the way it's organised internally. However, it can be done if you accept some limitations. A 555 circuit that functions as a bistable is described in Project 166, where the 555 is used as a push-on, push-off switch for powered equipment.
The timing is fairly stable with temperature and supply voltage variations. The 'commercial grade' NE555 is rated for a typical stability of 50ppm (parts per million) per degree C as a monostable, and 150ppm/ °C as an astable. It's worse as an oscillator (astable) than a timer (monostable) because the oscillator relies on two comparators but the timer only relies on one. Drift with supply voltage is about 0.3%/ V.

Most of the circuits shown below include an LED with its limiting resistor. This is entirely optional, but it helps you to see what the IC is doing when you have a slow astable or timer. The circuits also show a 47µF bypass capacitor, and this should be as close to the IC as possible. If the cap is not included, you may get some strange effects, including parasitic oscillation of the output stage as it changes state.

When the output is high, it will typically be somewhere between 1.2V and 1.7V lower than the supply voltage, depending on the current drawn from the output pin. The output stage of a 555 cannot pull the level to Vcc because it uses an NPN Darlington arrangement that will always lose some voltage, and the voltage will fall with increasing current. It's not usually a limitation, but you must be aware of it. If it's a problem you can add a pull-up resistor between 'Out' and 'Vcc' (1k or thereabouts), but it will only be useful for light loads (less than 1mA).

It should be made clear that the 555 is not a precision device, and this wasn't the intention from the outset. It has many foibles, but in reality they rarely cause problems if the device is used as intended. Sometimes it will be necessary to ensure that it gets a good reset on power-up so it's in a known state, but for most applications that's not necessary. If you really do need precision, use something else (which will be considerably more complex and expensive). It's been said that Bob Pease (from National Semiconductor, now TI) didn't like the 555 and never used it (see the Electronic Design website), but that's no reason to avoid it. Trying to use a 555 in a critical application where accuracy is paramount would be silly, but so is using a microcontroller with a crystal oscillator to perform basic timing functions.

As many readers will have noticed, I will generally use an opamp, a comparator or even some discrete circuitry in preference to a 555 timer. This isn't because I don't like the 555 IC, but simply because so few applications I normally work with need the flexibility it offers. It's certainly not a precision device, but it is handy, and countless circuits (many of them hobbyist designs) have used it - often because the designer doesn't know how to get a time delay by any other means.

Note: One thing to be aware of is the well-documented supply current spike created as the output changes state (particularly from low to high). While I have seen it claimed that this current spike can exceed 400mA, no-one I know has experienced this. It's easily simulated, and my simulation showed a peak current of around 100mA, lasting for about 100ns (0.1µs). A bypass capacitor is always needed to handle this, and the minimum is 10µF. In the drawings below I used 47µF, which will usually mitigate supply voltage disturbances.

The oscillator (or to be correct, astable multivibrator) is a very common application, and therefore will be covered first. Note that all circuits below are assumed to be using a 12V DC supply unless otherwise noted.

The term 'astable' means, literally, 'not stable' - the very definition of an oscillator. The output switches from high to low and back again as long as power is available and the reset pin is maintained high. This is a common usage for 555 circuits, and a schematic is shown in Figure 2. The pulse repetition rate is determined by the values of R1, R2 and C1.
Figure 2 - Standard Astable Oscillator
The waveforms at the output and the voltage across C1 are shown below. The output goes high when the capacitor voltage falls to 4V (1/3 Vcc of 12V), and goes low again when the capacitor voltage reaches 8V (2/3 Vcc). The oscillator has no stable state - when the output is high it's waiting for the cap to charge so it can go low again, and when low it's waiting for the cap to discharge so it can go high. This continues as long as the reset pin is held high. Pulling the reset pin low (less than 0.7V) stops oscillation.
Figure 2A - Standard Astable Oscillator Waveforms
C1 is charged via R1 and R2 in series, and is discharged via R2 alone (through the discharge pin). By default, this means that the output is a pulse waveform, rather than a true squarewave. The output will be positive, with negative-going pulses. If R2 is made large compared to R1 you can approach a squarewave output. For example, if R1 is 1k and R2 is 10k, the output will be close to a 1:1 mark-space ratio (it's actually 1.1:1). To determine the frequency, use the following formula ...
	f = 1.44 / (( R1 + 2 × R2 ) × C1 )
For the values shown in Figure 2, the frequency calculates to 686Hz, and the simulator claims 671Hz. This may seem like a large discrepancy, but it's well within the tolerance of standard components and the IC itself. High and low times can be determined as well ...
	t high = 0.69 × ( R1 + R2 ) × C1
	t low = 0.69 × R2 × C1
With the values given in Figure 2, t high is 759µs and t low is 690µs. The simulator (and real life) will be slightly different. The duty cycle/ mark-space ratio is 1.1:1, and is calculated by the ratio of t high / t low. The high time is 1.1 times the low time, which makes perfect sense based on the resistor values. As R1 is made smaller the mark-space ratio gets closer to 1:1 but you must ensure that it's not so low that the discharge transistor can't handle the current. The maximum discharge pin current should not exceed 10mA, and preferably less.
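The three formulas above can be checked together. In the sketch below the component values (R1 = 1k, R2 = 10k, C1 = 100nF) are my assumption - they are not stated in the text, but they reproduce the 686Hz, 759µs and 690µs figures quoted for Figure 2:

```python
# Standard 555 astable timing (Figure 2 topology).
# R1=1k, R2=10k, C1=100nF are assumed values that match the
# quoted 686Hz / 759us / 690us results.

def astable(r1, r2, c1):
    """Return (frequency, high time, low time) for the standard astable."""
    f = 1.44 / ((r1 + 2 * r2) * c1)
    t_high = 0.69 * (r1 + r2) * c1
    t_low = 0.69 * r2 * c1
    return f, t_high, t_low

f, t_high, t_low = astable(1e3, 10e3, 100e-9)
print(round(f))                   # 686 Hz
print(round(t_high * 1e6))        # 759 us high time
print(round(t_low * 1e6))         # 690 us low time
print(round(t_high / t_low, 1))   # 1.1 (the 1.1:1 mark-space ratio)
```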
You may well wonder where the values of 1.44 and 0.69 come from. These are constants (or 'fudge factors' if you prefer) that have been determined mathematically and empirically for the 555 timer. They're not perfect, but are close enough for most calculations. If you need a 555 circuit to oscillate at a precise frequency you'll need to include a trimpot so the circuit can be adjusted. It still won't be exact, and it will drift - remember that this is not a precision device and must not be used where accuracy is critical.
Figure 3 - Extended Duty-Cycle Astable Oscillator
By adding a diode, the operation is changed and simplified. C1 now charges via R1 alone, and discharges via R2 alone. This removes the interdependency of the two resistors, and allows the circuit to produce any duty-cycle you wish - provided it's within the 555's operating parameters of course. Pulses can now be narrow positive-going or negative-going, and an exact 1:1 mark-space ratio is possible. Frequency is determined by ...
	f = 1.44 / (( R1 + R2 ) × C1 )
If R1 is greater than R2, the output will be positive with negative-going pulses. Conversely, if R1 is less than R2 the output will sit at zero volts with positive-going pulses. The length of each pulse (positive or negative going) is therefore determined by the two resistors, and each is independent of the other. There is a small error introduced by the diode's voltage drop, but in most cases it will not cause a problem. The (ideal) high and low times are calculated by ...
	t high = 0.69 × R1 × C1
	t low = 0.69 × R2 × C1
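Because the diode separates the charge and discharge paths, the two times can be set independently. A short sketch (the component values here are arbitrary examples, not taken from the figure):

```python
# Extended duty-cycle astable (Figure 3 topology): the diode lets R1
# set the high time and R2 the low time independently.
# Example values are arbitrary, not from the schematic.

def diode_astable(r1, r2, c1):
    """Return (frequency, high time, low time), ignoring the diode drop."""
    t_high = 0.69 * r1 * c1
    t_low = 0.69 * r2 * c1
    f = 1.44 / ((r1 + r2) * c1)
    return f, t_high, t_low

# Equal resistors give an exact 1:1 mark-space ratio (a squarewave):
f, t_high, t_low = diode_astable(10e3, 10e3, 10e-9)
print(round(f))          # 7200 Hz nominal
print(t_high == t_low)   # True - exact 1:1 ratio
```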
Finally, there's a circuit that's commonly referred to as the 'minimum component count' astable. Apart from the basic support parts that are always needed (the bypass capacitor and the cap from 'Control' to ground), it requires just one resistor and one capacitor.
Figure 4 - Minimum Component Astable Oscillator
The mark-space ratio of this circuit is nominally 1:1 (a squarewave) but this can be affected by the load. If the load connects between the output and ground, the high time will be a little longer than the low time because the load will prevent the output from reaching the supply voltage. If the load connects between the supply and output pin, the low time will be longer because the output will not reach zero volts. Frequency is calculated from ...
	f = 0.72 / ( R1 × C1 )
With the values shown it will be 720Hz. You can see that the discharge pin (Pin 7) is not used. The capacitor is charged and discharged via R1, so when the output is high the cap charges, and when low it discharges. The discharge pin can be used as an open collector auxiliary output, but do not connect it to a supply voltage greater than Vcc, and don't try to use it for high current loads (around 10mA maximum).
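A one-line check of the formula - R1 = 10k and C1 = 100nF are my assumption (the text doesn't state them), chosen because they reproduce the quoted 720Hz:

```python
# Minimum-component astable (Figure 4 topology).
# R1=10k and C1=100nF are assumed values matching the quoted 720Hz.

def min_astable_freq(r1, c1):
    """Nominal frequency of the single-RC 555 astable."""
    return 0.72 / (r1 * c1)

print(round(min_astable_freq(10e3, 100e-9)))  # 720 Hz
```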
All of the circuits shown use the internal voltage divider (3 × 5k resistors) to set the comparator thresholds. Whenever the threshold voltage reaches 2/3 of Vcc, the flip-flop resets and the output goes low (close to zero volts). When the trigger (Pin 2) voltage falls below 1/3 of Vcc the circuit is triggered and the output goes high (close to Vcc).

If reset (Pin 4) is pulled low at any time, the output goes low and stays there until the reset pin goes high again. The threshold voltage of the reset input is typically 0.7V, so this pin should be pulled directly to ground with a transistor or switch. An external resistor is required between Vcc and reset if you need to use the reset facility, as there is no pull-up resistor in the IC. In general, you can use up to 10k.

A monostable (also known as a 'one-shot' circuit) has one stable state. When triggered it will go to its 'unstable' state, and the time it spends there depends on the timing components. A monostable is used to produce a pulse of predetermined duration when it's triggered. The most common use of a monostable is as a timer. When the trigger is activated, the output will go high for the preset time then fall back to zero. While we tend to think of timers being long duration (several seconds to a few minutes), monostables are also used with very short times - 1ms or less, for example. This is a common application when the circuit needs pulses with a defined and predictable width, and with fast rise and fall times.
Figure 5 - Monostable Multivibrator
The trigger signal must be shorter than the time set by R1 and C1. The circuit is triggered by a momentary low voltage (less than 1/3 Vcc), and the output will immediately go high and remain there until C1 has charged via R1. The time delay is calculated by ...
	t = 1.1 × R1 × C1
With the values shown, the output will be high for 1.1ms. If C1 were 100µF, the time would be 1.1 seconds. As noted, the trigger pulse must be shorter than the delay time. If the trigger were to be 5ms long in the circuit shown in Figure 5, the output would remain high for 5ms and the timer has no effect. Apart from timers, monostables are commonly used for obtaining a pulse with a predetermined width from an input signal that is variable or noisy.
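The monostable delay formula is simple enough to check directly. R1 = 10k and C1 = 100nF are assumed values (not stated in the text) that reproduce the quoted 1.1ms, and the 100µF case matches the 1.1 second figure:

```python
# Monostable (Figure 5 topology) delay time: t = 1.1 * R1 * C1.
# R1=10k and C1=100nF are assumed values matching the quoted 1.1ms.

def monostable_time(r1, c1):
    """One-shot output pulse duration in seconds."""
    return 1.1 * r1 * c1

print(round(monostable_time(10e3, 100e-9) * 1e3, 2))  # 1.1 ms
print(round(monostable_time(10e3, 100e-6), 2))        # 1.1 s with C1 = 100uF
```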
Figure 5A - Monostable Multivibrator Waveforms
It's helpful to see the waveforms for the monostable circuit. It's especially useful to see the relationships between the signal on the trigger pin and capacitor voltage in relationship to the output. These are shown above, and can be verified on an oscilloscope. You need a dual trace scope to be able to see two traces at the same time. As you can see, the timing starts when the trigger voltage falls to 4V (a 12V supply was used, and 4V is 1/3 Vcc). When the cap charges to 8V (2/3 Vcc) timing stops and the output falls to zero. Note that the cap charges from zero volts in this configuration, because C1 is completely discharged when the timing cycle ends.
The most common use of the monostable 555 circuit is as a timer. The trigger might be a push-button, and when pressed the output goes high for the preset time then drops low again. There are countless applications for simple timers, and I won't bore the reader with a long list of examples.

The timing components are fairly critical, in that they must not be so large or so small that they cause problems with the circuit. Electrolytic capacitors are especially troublesome because their value may change with time, temperature and applied voltage. Wherever possible, use polyester caps for C1, but not if it means that the resistor (R1) has to be more than a few Megohms. The threshold pin may only have a leakage of 0.1µA or so, but if R1 is too high even this tiny current becomes a problem. The capacitor is always the limiting factor for long time delays, because you will almost certainly have to use an electrolytic. If this is the case, use one that is classified as 'low-leakage' if possible. Tantalum caps are often suggested, but I never recommend them because they can be unreliable.

Sometimes, you can't be sure that the input pulse will be shorter than the time interval set by R1 and C1. If this is the case, you need a simple differentiator that will force the input pulse to be short enough to ensure reliable operation. Differentiators require that the rise and/ or fall times are much faster than the time constant of the differentiator itself. For example, a 10nF cap with a 1k resistor has a time constant of 10µs, so the rise/ fall time of the input pulse should ideally not be more than 2µs or it may not work properly. The ratio of 5:1 is a guide only, so you need to check what is available from your other circuitry. Ideally, use a ratio of 10:1 or more if possible (i.e. differentiator time constant of 10 times the risetime of the input signal).
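The 5:1 guideline above is easy to check for any proposed RC pair. A sketch of that rule of thumb (the function is mine, not from the article):

```python
# Differentiator time constant vs. input edge speed. Rule of thumb
# from the text: tau should be at least 5x (ideally 10x) the edge time.

def differentiator_ok(r, c, edge_time, ratio=5.0):
    """True if the RC time constant is at least `ratio` times the edge time."""
    tau = r * c
    return tau >= ratio * edge_time

tau = 1e3 * 10e-9   # 1k with 10nF -> 10us, as in the text
print(tau)
print(differentiator_ok(1e3, 10e-9, 1e-6))  # True: 10us tau, 1us edge
print(differentiator_ok(1e3, 10e-9, 5e-6))  # False: 10us tau < 5 x 5us
```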
Figure 6 - Monostable Multivibrator With Differentiator
R3, C3 and D1 form the differentiator circuit. When a pulse is received, the cap can only pass the falling edge, which must be as fast as possible. This is passed on to the 555, and it no longer matters how long the input trigger pulse remains negative, because the short time constant of C3 and R3 (100µs) only allows the falling edge to pass through. D1 is necessary to ensure that Pin 2 cannot be made more positive than Vcc plus one diode drop (0.65V) when the trigger pulse returns to the positive supply.
+ +If the input trigger pulse fall time is too slow, the differentiator may not pass enough voltage to trigger the 555. If this is the case, the signal will have to be 'pre-conditioned' by external circuitry to ensure that the voltage falls from Vcc to ground in less than 20µs (for the values shown). If this isn't done, the circuit may be erratic or it might not work at all. If your trigger pulse is positive-going, you'll have to invert it so that it becomes negative-going. The 555 is triggered on the falling edge of the trigger signal, which causes the output to go high (Vcc).
Hint: If you happen to need a timer that runs for a long time (hours to weeks), use a variable 555 oscillator circuit that then drives a CMOS counter such as the 4020 or similar. The output of the 555 oscillator might be (say) a 1 minute/ cycle waveform, and that can act as the clock signal for the counter. The 4020 is a 14-bit binary counter, so with a simple circuit you can easily get a delay (using a 1 minute clock) of 8,192 minutes - over 136 hours or a bit over 5½ days. Still not long enough? Use two or more 4020 counters. Two will allow a timer that runs for about 127 years! Note that you will have to provide additional circuitry to make any of this work, and it may be difficult to be certain that a 127 year timer works as expected.
Here's an example (but it's not a monostable), and depending on the output selected from the 4020 counter you can get a delay of up to 20 minutes. If C1 is made larger the delay can be much greater. With the resistor values given for the timing circuit, increasing C1 to 100µF will extend the maximum time to 3.38 hours (3h 23m), using Q14 of U2 as the output. If C1 is a low leakage electro, the values for R1 and R2 can be increased, so it will run for even longer. The drawing also shows how many input pulses are required before the respective outputs go high (Vcc / Vdd). The counter advances on the negative-going pulse. To use higher value timing resistors, consider using a CMOS timer (e.g. 7555).
Figure 7 - Long Duration Timer
As shown, the minimum period for the 555 is 20.83ms (48Hz) with VR1 at minimum resistance, and at maximum resistance it's 145.7ms (6.86Hz). When power is applied the timer will run for the designed time period until the output goes high. Pressing the 'Start' button will set the output low and the time period starts again. All outputs from the counter are set low at power-on by the reset cap (C3) and/ or when the 'Start' button is pressed. The 555 runs as an astable, and continues pulsing until the selected output from U2 goes high. D1 then forces the voltage across C1 to 0.7V below Vcc and stops oscillation. Therefore, when the 'Start' button is pressed the output goes low, and returns high after the timeout period.
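The timing claims are easy to cross-check: Q14 of a 4020 goes high after 2^13 (8,192) input pulses, so the total delay is simply 8,192 times the 555's period. A quick Python sketch, using the two periods quoted above:

```python
# Cross-check of the Figure 7 timing: delay to Q14 of the 4020 is
# 8192 input pulses (2**13) times the 555 astable period.

PULSES_TO_Q14 = 2 ** 13   # Q14 goes high after 8192 pulses

def delay_seconds(period_s, pulses=PULSES_TO_Q14):
    return pulses * period_s

t_min = delay_seconds(20.83e-3)    # VR1 at minimum resistance
t_max = delay_seconds(145.7e-3)    # VR1 at maximum resistance

print(round(t_min / 60, 1))   # ~2.8 minutes
print(round(t_max / 60, 1))   # ~19.9 minutes - the 'up to 20 minutes' claim
```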
Additional circuitry is needed if you don't want the timer to operate after power-on, or if you want the 'Start' button to make the output high, falling to zero after the timeout. I leave these as an exercise for the reader. The above is simply an example - it's not intended to be a circuit for any particular application.
There are many uses for 555 timers apart from the basic building blocks shown above. This is an article and not a complete book, so only a few of the possibilities will be covered. They have been selected based on things I find interesting or useful, and if you have a favourite that isn't included then that's just tough I'm afraid.
Don't expect to find sirens, general purpose noisemakers or pseudo random 'games' in amongst the things here. If you want to build any of the popular 555 toys, there are plenty to be found on the Net.
This is a simple PWM (pulse width modulation) dimmer or motor speed controller. It's based on the 'minimum component' astable shown earlier, but uses a pot and a pair of diodes to vary the mark-space ratio. When the pot is at the 'Max' setting, the output is predominantly high, with only narrow pulses to zero. When at the 'Min' setting, the output is mostly at zero, with narrow positive pulses.
Figure 8 - PWM Dimmer/ Motor Speed Controller
The way it works is no different from the basic astable, except that the amount of resistance for capacitor charge and discharge is variable by means of the pot. The diodes (1N4148 or similar) 'steer' the charge and discharge currents, so the pot presents a different resistance depending on the direction of current flow. For example, when the pot is at 'Max', it takes much longer to charge C1 than to discharge it, so the output must spend most of its time at Vcc. The converse applies when the pot is set to 'Min'. The maximum and minimum duty-cycle can be altered by changing R1. With 1k as shown, the maximum is 11:1 (or 1:11), but making R1 smaller or larger can change this to any ratio desired (within reason). I suggest that 100 ohms is a practical minimum.
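The duty-cycle behaviour can be sketched numerically. This is an illustration only - the pot value (10k) is an assumption on my part, chosen because it reproduces the 11:1 extreme quoted for R1 = 1k:

```python
# Duty-cycle sketch for a diode-steered 555 astable. Charge and
# discharge each go through R1 plus one side of the pot (the diodes
# select which side). The 0.693*C factor is common to both times, so
# it cancels out of the ratio. The 10k pot value is an assumption.

def duty_cycle(r1, pot, wiper):
    """wiper: 0.0 ('Min') .. 1.0 ('Max'). Returns high-time fraction."""
    t_high = r1 + pot * wiper           # charge path resistance
    t_low = r1 + pot * (1.0 - wiper)    # discharge path resistance
    return t_high / (t_high + t_low)

R1, POT = 1e3, 10e3
print(round(duty_cycle(R1, POT, 1.0), 3))   # 0.917 -> 11:1 high:low
print(round(duty_cycle(R1, POT, 0.0), 3))   # 0.083 -> 1:11
print(round(duty_cycle(R1, POT, 0.5), 3))   # 0.5 at mid-travel
```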
To be useful, the output of the 555 will normally drive a MOSFET as shown, or perhaps even an IGBT, depending on the load current. If it's used as a motor speed controller, you must include a diode in parallel with the motor or it will not work properly. The diode has to be a 'fast' or 'ultra-fast' version, and rated for the same current as the motor. The diode isn't needed if the circuit is used as a dimmer, but it's a good idea to use a UF4004 or similar fast diode anyway. The supply to the motor can be anything you like (DC only), but the 555 must have a 12-15V supply, separate from the main supply if necessary. See Project 126 for a project version of a dimmer/ speed controller. It doesn't use a 555, but uses the same PWM principles.
A 555 can be made to work as a PWM (Class-D) amplifier. It's not very good and output power is very limited, but you can get up to 100mW or so into an 8 ohm load. It's a purely educational exercise more than anything else, because fidelity is not great due to the limited performance of the 555. Maximum frequency is 500kHz or so, but the IC will almost certainly overheat if operated at maximum frequency and output current. I won't bother showing a practical circuit for a Class-D amp using a 555 because the performance is so poor. Suffice to say that if you inject a sinewave or music signal into the 'Ctrl' pin, you can modulate the pulse width. The same trick is used for many of the 555 based sirens that you can find elsewhere.
The control input is often overlooked, but it can be used any time you need to create a voltage-controlled oscillator. Apart from toy sirens and other 'frivolous' applications, this ability can be useful for many circuits. Just because the 555 is a rubbish Class-D amplifier doesn't mean that the general principles should be ignored. One application that's quite popular on the Net is using a 555 as the controller for a simple regulated high voltage supply. The drawing below is a modified version of one that is all over the Net (so much so that it's not possible to provide attribution because I have no idea who posted it first).
Figure 9 - DC-DC Converter
The circuit shown is largely conceptual. It will work, but is not optimised. The feedback applied to the control input is dependent on the zener voltages, and the emitter-base voltage of the transistor has little effect. There are ICs specifically designed for voltage sensing that use a voltage divider to set the output voltage, and this makes it easy to change the voltage to an exact value if necessary. The high-voltage zener string will provide surprisingly good voltage stability. The circuit is shown here simply to demonstrate the use of the control input to change the operation of the 555.
It will be able to deliver up to 50mA without much stress, but as with any step-up switchmode converter, the peak input current may be quite high. With the values shown and a 20mA output, the peak current will be around 2A. Naturally, if the output current is less than 20mA, input current is reduced proportionately. Start-up current will be much higher than the operating current. L1 (100µH) should have a resistance of no more than 1/2 ohm. An output of 100V at 20mA is 2W, so it's reasonable to expect the average input power to be somewhat greater. Losses will almost certainly be close to 1W in total, so the average input current will be around 250mA at 12V.
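The arithmetic above is easily verified. A trivial sketch (the 1W loss figure is the rough estimate from the text, not a measurement):

```python
# Back-of-envelope check of the converter current figures:
# 100V output at 20mA, roughly 1W of losses, 12V input supply.

v_out, i_out = 100.0, 20e-3
p_out = v_out * i_out          # output power in watts
p_loss = 1.0                   # rough loss estimate from the text
p_in = p_out + p_loss          # average input power
v_in = 12.0
i_in_avg = p_in / v_in         # average input current

print(round(p_out, 2))              # 2.0 W
print(round(i_in_avg * 1000))       # 250 -> the ~250mA quoted
```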
There are dedicated SMPS controllers that may be no more expensive than a 555 timer, but it's still a useful application and means you don't need to search for an obscure part. Its greatest advantage is that it can often be built using parts you already have in your junk-box, with the added benefit that it doesn't rely on SMD parts and can be built on Veroboard.
This is a useful circuit, and it can be used to drive simple transducers (small speakers, lamps, etc.). The maximum current the 555 can source or sink is about 200mA, so loads that draw more than that will cause the IC to overheat and fail. Because there are no support components needed at all, it can be very economical for PCB space. It's been claimed that using a discrete circuit with a pair of transistors is cheaper, but that's doubtful given the cost of a 555. The IC also takes up very little PCB real estate, something that's often far more expensive than a few cheap parts, especially if space is at a premium.
Figure 10 - Inverting Buffer
The input signal is subject to hysteresis. This means that the input voltage needs to exceed 2/3 Vcc before the output will switch low, and the input then needs to fall below 1/3 Vcc before the output will switch high. This provides very good noise immunity, and input impedance is very high. The circuit is an inverting Schmitt trigger.
This is a fairly uncommon application. By using the reset pin as an input, any voltage above ~0.7V is determined to be high, and the output will switch high. The input voltage must fall below 0.7V for the output to switch low again. There is no hysteresis, and the driving circuit needs to be able to sink the 555's reset pin current of about 1mA.
Figure 11 - Non-Inverting Buffer
You must be careful to ensure that the input to pin 4 can never exceed Vcc or become negative, or the IC will be damaged. If out-of-range excursions are possible, then the input voltage must be clamped with a diode, zener or both to keep the voltage within limits.
One quite common use for 555 timers is as a missing pulse detector. If you expect a continuous train of pulses from a circuit, should one go 'missing' for any reason that may indicate a problem. Being able to detect that a pulse is missing or delayed can be an important safety function, raising an alarm or disabling the circuit until the fault has been corrected.
Figure 12 - Missing Pulse Detector
Input pulses are used to switch on Q1 and hence discharge C1. As long as the pulses keep arriving in an orderly manner the output of the 555 stays high. The time constant of R1 and C1 must be selected so that the timer can never expire as long as the input pulses keep arriving as they should. If the time is too short C1 will charge to 2/3 Vcc before the next input arrives. If it's too long, a single missing pulse won't be detected and it will require several pulses in a row to be missing (or the pulse train may stop altogether) before the timer will operate. You may also need to take precautions to ensure the timer will always operate, even if the incoming pulse train gets stuck at the high voltage level. This will involve adding a differentiator, similar to that shown in Figure 6.
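The constraint on R1 and C1 can be put into numbers. C1 charges to 2/3 Vcc in ln(3) × R1 × C1 (about 1.1 × R1 × C1), and that timeout must sit between one and two pulse intervals so that a single missing pulse is caught without false alarms. The component values below are purely illustrative, not taken from the circuit:

```python
import math

# R1/C1 selection sketch for the missing pulse detector. C1 reaches
# 2/3 Vcc in ln(3)*R1*C1 seconds; the timeout must be longer than one
# pulse interval (no false triggers) but shorter than two (so one
# missing pulse is detected). Example values are illustrative only.

def timeout(r1, c1):
    return math.log(3) * r1 * c1   # ln(3) ~ 1.0986, often rounded to 1.1

def window_ok(r1, c1, pulse_interval):
    t = timeout(r1, c1)
    return pulse_interval < t < 2 * pulse_interval

# e.g. pulses every 10ms, R1 = 100k, C1 = 150nF -> ~16.5ms timeout
print(round(timeout(100e3, 150e-9) * 1000, 1))   # 16.5 (ms)
print(window_ok(100e3, 150e-9, 10e-3))           # True: catches one missing pulse
print(window_ok(100e3, 150e-9, 20e-3))           # False: timeout too short
```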
One use for a missing pulse detector is to detect that a fan is not performing as it should. Some fans have an output that pulses when the fan is running, or the function can be added with two small magnets and a Hall effect detector (two magnets are needed so the fan's balance isn't affected). The missing pulse detector can raise an alert if a fan fails or is running too slow.
The circuit can also be used as a 'loss-of-AC' circuit, and will detect a single missing cycle or half cycle, depending on the detection mechanism used. This makes it capable of quickly detecting that AC has been removed, either by switching off or due to mains failure, and can be used to operate muting relays (for example). In most cases it's not necessary to be quite so fast, but there may be critical industrial processes where rapid detection of as little as one missing half-cycle may be crucial to prevent malfunction. This arrangement will also work well to ensure very fast changeover to a UPS (uninterruptible power supply) in cases where loss of AC may cause major problems.
Although a 555 can drive a relay directly, it has to be protected against the inductance of the relay coil. Back EMF should (in theory) be absorbed because the output has high-side and low-side transistors, but instead it can cause the timer to 'lock up' and cease to function until the power is cycled. This can happen when a single diode is used in parallel with the relay coil. Use the parallel diode, but also drive the relay coil via another diode which prevents any malfunction. The output must never be subjected to a negative voltage - even 0.6V can cause problems.
Figure 13 - Relay Driver
D2 performs the usual task of shorting out the relay's back EMF, and D1 completely isolates the relay circuit from the 555. Using this arrangement will prevent any possibility of malfunction due to the relay coil's back EMF, and the same arrangement should be used when driving any inductive load.
A 555 timer can make a handy mute circuit. There are countless different ways that muting can be achieved - see Muting Circuits For Audio for a range of different techniques. Of them all, a relay is still one of the best. Because contact resistance is very low, even low impedance circuits can be effectively shorted to ground with usually no audible breakthrough. All ESP circuits include a 100 ohm resistor at the output to prevent oscillation, and no common opamp can be damaged by placing a short at its output - with the resistor, the opamp is protected against a direct short anyway.
Figure 14 - Relay Mute Circuit
The circuit shown can be powered from the main preamp supply, or it can even be powered from a bridge rectifier across the 6.3V AC heater supply with valve (tube) equipment. If you do that, Cbypass should be around 220µF, and no other filter cap is needed. You will need to add a resistor in series with the coil to limit the voltage to 5V. The LED will be on for the duration of the mute period. The relay drive requires two diodes as discussed above. Most suitable relays will draw between 30 and 50mA, well within the capabilities of a 555.
The 555 gets a trigger signal by virtue of the cap from the trigger input (C2), and R2 is the pull-up resistor. C2 holds the trigger input low for just long enough to start the timing process, so the output is high, the relay is de-energised, and C1 starts charging via R1. When the voltage at the threshold input reaches 2/3 of the supply voltage, the output goes low, operating the relay and removing the short across the audio signal lines. If the supply voltage rises slowly the circuit may not work properly (or may not work at all), and you may need to increase the value of C2.
The relay remains energised for as long as the equipment remains powered. Ideally, the supply to the timer should be removed as quickly as possible when power is turned off to ensure that there are no 'silly' noises generated as the supplies collapse. Some opamps can create a thump, squeak or 'whistle' as their power supply drops below the minimum needed for normal operation. If you need a mute circuit, this is not one that I recommend. See the article Muting Circuits For Audio. Figure 12 is particularly recommended.
The 555 timer is very versatile, but is not really suited for very long time delays unless you are willing to pay serious money for a large, low-leakage timing capacitor. It's easier to use a 555 oscillator followed by a binary counter if you need long delays. Most applications will only call for delays of perhaps a few minutes (20-30 minutes is the suggested maximum), and this is easily achieved. The number of possible circuits using 555 timers is astonishing, and there are countless circuits, application notes (from IC manufacturers, hobbyists and others) and web pages devoted to this IC and its derivatives.
555 timers are used in many commercial products where a simple time delay is needed. I've seen them used in trailing edge and universal lamp dimmers, and (despite the comments in the introduction) have used them in several products I've developed over the years. The popularity of the 555 has not diminished despite its age, and it's safe to say that the number of applications has steadily increased, even with the use of digital techniques that supposedly render analogue 'obsolete'.
It's not at all unusual to find a 555 timer used in a switchmode power supply (SMPS), and simple low power supplies can be made with a 555 IC, a transformer and not much else. As with any IC there are limitations, and it's important to ensure that the IC is properly bypassed because they can draw up to 200mA as the output makes the transition between high and low or vice versa.
CMOS versions of the 555 (e.g. 7555) offer some useful advantages over the bipolar type. In particular they have much lower supply current and exceptionally high input impedances for the comparators. To get the best from these timers, use high value timing resistors and low value capacitors. Using resistors of 1Meg or more is fine for long time delays. Be careful with timing capacitors less than 1nF, because PCB track-to-track capacitance (or leakage) can cause significant timing errors. CMOS types cannot source or sink high output current, and output current may be asymmetrical. For example, the TLC555 can sink 100mA but can only source 10mA, so this must be accounted for in your design.
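As an illustration of the high-R/ low-C tradeoff, the standard 555 monostable relation (t = 1.1 × R × C) shows that the same delay can be had with a much larger resistor and a smaller non-electrolytic capacitor - which is exactly what the CMOS parts allow. The values below are examples of mine, not from the article:

```python
# High-R / low-C tradeoff for CMOS 555s, using the standard 555
# monostable delay t = 1.1 * R * C. Example values are illustrative.

def monostable_delay(r_ohms, c_farads):
    return 1.1 * r_ohms * c_farads

# The same ~11 second delay, two ways:
print(monostable_delay(1e6, 10e-6))    # 1M with a 10uF electrolytic
print(monostable_delay(10e6, 1e-6))    # 10M with a 1uF film cap (CMOS only)
```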
The 7555 provides greater flexibility (in some respects) than the bipolar types, but it is not always suitable. It draws very little quiescent current, has extremely high input impedance, and can operate with a supply voltage as low as 2V. However, as noted above, it can't provide as much output current as the bipolar transistor versions.
There are some precautions that must be taken. Input voltages must never exceed Vcc or fall below zero (ground) or the IC may be damaged. Failure to provide adequate bypassing close to the IC can cause parasitic oscillation in the output stage (of bipolar types) that can be interpreted by logic circuits as a double (or multiple) pulse.
The output stage is commonly referred to as a 'totem pole' design, and both transistors can be on simultaneously (albeit very briefly) as the state changes from high to low or low to high. This type of circuit is different from the output stage of TTL gates, but the effect is similar. Use of the bypass capacitor is essential so it can provide the brief high current demanded as the output switches.
When used as an oscillator, or when the reset pin is used to stop and start oscillation, the first cycle takes longer than the rest because the cap has to charge from zero volts. Normally, the cap voltage varies from 1/3 Vcc to 2/3 Vcc. When the cap has to charge from zero, it takes a little longer. This is rarely a problem, but you do need to be aware of it for some critical processes.
Be aware that 555 timers (whether bipolar or CMOS) will always have some variability from one to the next, and doubly so when they are from different manufacturers. Despite the ubiquitous nature of these ICs, they are not all equal, with some having (for example) greater supply sensitivity than others. Driving relays (or other inductive loads) must be done with caution, as even a small negative voltage at the output (the -0.65V of the 'flyback' diode) can cause malfunction. Some will be fine with this, while others may lock up or fail!
There are countless websites that examine the 555 timer, and if you need more information or want to use a calculator (on-line or downloaded) to work out the values for you, just do a web search. The primary references I used are shown below.
A search for '555 timer application circuits' will return over 480,000 results, so there's a lot of material to choose from. As always, not all information is useful or reliable, so you need to be careful before you decide on a particular circuit as many will not have been thought through very well. Some of the info is very good indeed, but you'll have to use your own knowledge to separate the good stuff from the rest.
Elliott Sound Products | 6dB/ Octave Passive Crossovers
Speaker crossover networks are always a requirement with any system using two or more loudspeaker drivers. The Design of Passive Crossovers article covers 12dB/ octave types in considerable detail, and shows just how complex it is to get a good result. While some high quality systems go to great lengths to get everything right, many don't, so the result is not always as expected (or hoped for). There is also some information on 6dB/ octave crossovers, but in many circles they have a bad reputation.
Some designers carry out a process called 'voicing', where the design is tweaked to get it to sound 'right'. Whether this is backed up with detailed measurements and/ or analysis depends on the designer. For simple, 2-way passive crossovers that are intended for low power (no more than 30W/ channel or so) it's very hard to make a case against a 6dB/ octave (first order) crossover, and it should be used in a serial network rather than parallel. While one could be forgiven for thinking that the two are equivalent, this only applies if the loads (the drivers) are perfectly resistive. This doesn't happen with any real-world loudspeakers.
The differences aren't subtle looking at the electrical performance, but may be less noticeable in listening tests. The nice thing about the series network is that it is 'self-correcting', and will always sum flat electrically. Whether or not this translates to the acoustic response is another matter, but in general it works out well. This is covered in some detail in the article Series vs. Parallel Crossover Networks, but the explanations there look at both first and second order systems, and it is not intended as a design guide.
My preference is for active crossovers, but for a simple system this is difficult to justify. The cost penalty isn't great, but it adds complexity and means a four-wire connection is needed for the speaker. This isn't sensible for a simple 2-way box that's used at low power. An example of just such a system is shown in Project 73 (Hi-Fi PC Speaker System), and that shows a series network. This has been in daily use for nineteen years (at the time of publication of this article), and has seen several different PCs in its time. Apart from one repair (a faulty electrolytic capacitor in the power supply), the system hasn't missed a beat in all that time!
It's worth noting that the first-order (6dB/ octave) crossover is the only type that works well when connected as a serial network. Higher order crossovers become unwieldy and very sensitive to component variations, including voicecoil resistance. This is covered in some detail in the ESP article referenced above, and serial connection is not recommended for 12dB/ octave or above. As part of my workshop monitoring system, I use a simple vented 2-way box with a series 6dB/ octave crossover. It doesn't match my horn-loaded 'main' system (fully active), but it does let me make direct comparisons of power amplifiers, and it sounds fairly good overall.
This article has many similarities with the Series vs. Parallel Crossover Networks article, but is specifically aimed at first order systems, and provides better graphs to show the results of the simulations. This is more of a construction guide, with emphasis on loudspeaker driver impedance compensation networks. These are essential for a parallel configuration, and optional for a series connected crossover network.
With higher order crossovers (12dB/ octave or 18dB/ octave) impedance compensation is mandatory if you want a final system that performs well. In many cases this point is not made clear (or may not even be mentioned!). If you expect to build a fully compensated 3-way crossover (12dB or 18dB/ octave), be prepared for a world of pain - these networks become very complex, very quickly. The cost is likely to be such that using an active system (provided you build your own active crossovers and amplifiers) will be cheaper and will perform far better. This is especially true if you intend to use 4th order (24dB/ octave) filters. Not only are you up for the cost of eight expensive inductors and capacitors (just for the crossover!) the circuit sensitivity to any variation is high, and impedance compensation still can't cope with voicecoil temperature changes.
To build any speaker system you need to be able to measure the Thiele-Small parameters. This can be done in a number of ways, and you can use the technique described in the article Measuring Loudspeaker Parameters. These parameters are essential to obtain the optimum enclosure volume and vent dimensions, but are not necessary for the tweeter. You can measure the tweeter if you wish, but there's no point trying to obtain Vas (equivalent air volume of suspension) as it's not useful for anything. You can easily measure the main characteristics, namely ...
fs      Resonant frequency
Le      Voicecoil inductance
Re      Voicecoil resistance
Zmax    Maximum impedance
Qms     Mechanical Q
Qes     Electrical Q
Qts     Total Q
VAS     Air volume for equivalent compliance
These parameters can be used to calculate the equivalent circuit of the driver, both for the woofer/ mid-bass and the tweeter. These are necessary only if you wish to devise impedance correction networks. Although the calculations that follow used a hypothetical mid-bass driver, I measured the characteristics of a tweeter I had to hand (a Vifa D26G-05). This is the tweeter used for the descriptions that follow, but yours will be different. Its nameplate says that the rated impedance is 6Ω.
fs      1425 Hz
Le      886 µH (This is obviously not correct, and needs to be calculated. I used a value of 57µH)
Re      4.542 Ohms
Zmax    8.239 Ohms
Qms     1.105
Qes     1.325
Qts     0.6025 (Total Q - not used)
VAS     n/a (not measured)
Qts isn't used, but the measurement system gave it to me anyway, so it's included. Figure 1 shows the equivalent circuits of a woofer (or mid-bass) and tweeter, and the figures for your drivers can be substituted for those shown. Note that the parallel resistance (representing losses) is Zmax minus Re. With the figures shown above, that makes the parallel resistance 8.239 - 4.542 Ohms (3.697Ω). This tweeter uses ferro-fluid, which gives it a much lower resonant peak than 'ordinary' tweeters.
Using the formulae shown in the Impedance Compensation article, the effective inductance is 374µH, with a parallel capacitance of 32µF. These figures were used in the calculations, and show that the techniques used are accurate enough for our purposes (assuming that you use impedance compensation). The voicecoil (semi) inductance is somewhat higher than expected, and the measured value doesn't correlate with the impedance curve. It's a semi-inductance, and this is not provided by the measurement system I used. Otherwise, a simulation using the values calculated (or estimated) is almost a perfect match for the measured parameters.
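For those who want to repeat the exercise, the standard Thiele-Small equivalent-circuit relations (the formulae themselves are covered in the Impedance Compensation article) land close to, though not exactly on, the figures above when evaluated for the measured tweeter. A sketch, using the measured Vifa D26G-05 numbers:

```python
import math

# Equivalent-circuit estimate from the measured tweeter parameters,
# using the standard Thiele-Small motional-branch relations. The
# formula choice here is mine; results land close to the article's
# quoted 374uH / 32uF values.

fs, re_, zmax = 1425.0, 4.542, 8.239   # Hz, ohms, ohms
qms, qes = 1.105, 1.325

r_par = zmax - re_                          # parallel loss resistance
c_mes = qes / (2 * math.pi * fs * re_)      # motional capacitance
l_ces = re_ / (2 * math.pi * fs * qes)      # motional inductance

print(round(r_par, 3))           # 3.697 ohms, as in the text
print(round(c_mes * 1e6, 1))     # 32.6 uF (text quotes 32uF)
print(round(l_ces * 1e6))        # 383 uH (text quotes 374uH)
```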
This information is far more useful (and essential) when calculating the values for complex (higher order) crossovers, which are more sensitive to variations in speaker impedance across the crossover region. Even a comparatively small impedance change can cause serious disturbances to the overall frequency response. By modelling the drivers accurately, you'll get a better overall result than simply assuming that the impedance remains constant. It doesn't for the vast majority of moving coil loudspeaker drivers, as most people will be aware.
The equivalent circuit for loudspeaker drivers can be worked out using the techniques described in the Impedance Compensation For Passive Crossovers article. For the series network recommended in this article, no compensation is necessary and it's not covered here. However, it's worthwhile to look at the equivalent circuit and the impedance response.
Figure 1 - Simulated Woofer And Tweeter
Figure 2 shows the impedance for the woofer and tweeter without any compensation. While your drivers will be different, the trend is the same. A woofer has maximum impedance at resonance, and the impedance rises beyond 250Hz due to the voicecoil's semi-inductance. Some drivers will show a more exaggerated rise, and some less. The use of a copper cap on the centre polepiece usually reduces the effect. Tweeters also show a rise above resonance, but it's usually fairly minor within the audio band. The biggest problem is resonance, which can seriously disturb the performance of the crossover. Ideally, the tweeter will be crossed over at no less than 2.5 times the resonant frequency, but in some cases it may be less - especially for tweeters with a higher than normal resonance.
All speakers have the same basic components that provide an equivalent circuit as shown in Figure 1. Voicecoil resistance is measured at 25°C, but it increases with temperature. The semi-inductance is difficult to measure directly, but it can be determined using a frequency-sweep, which will show the impedance rising beyond the minimum value measured (which is usually close to the voicecoil resistance, Re). You can calculate the approximate inductance, or use a speaker test box (such as that shown in Project 82 - Loudspeaker Test Box). This is a great deal easier than calculation, and gives a near-perfect result.
Figure 2 - Impedance Curves Of Simulated Woofer And Tweeter
The tweeter has a much flatter impedance curve than many others in the above graph, but the mid-bass is fairly representative of typical 125mm (5") mid-bass drivers. This will affect the crossover's response, but its effect is not pronounced. If you decide to use impedance compensation, there's a 'balancing act', because getting the impedance response flat may reduce the overall system impedance and produce higher losses in the compensation networks. Impedance compensation is outside the scope of this article, which is about first-order series crossovers. This is the only network that will provide good results with no attempt at ensuring a flat impedance curve.
If you want to know more about impedance equalisation, see the article Impedance Compensation For Passive Crossovers.
While there are many very good reasons not to use first-order crossovers, with low-power systems (< 30W/ channel) they can be pretty much ideal. Unlike higher order networks, they can reproduce a perfect squarewave, which some people think is important. Whether you think this is worthwhile or not depends on your preferences, but it's fair to say that it usually makes no difference, because the modified wave-shape from high-order networks simply indicates phase shift. No-one has ever demonstrated that phase shift in any system is audible in a double-blind test.
Still, there is always something to be said for any system that can reproduce a squarewave, even if it's only for 'bragging-rights'. There are two ways that a 6dB/ octave crossover can be connected - series or parallel. Most people use parallel, because "that's the way it's always been done", but this loses one of the most valuable attributes of a simple crossover. Most of the simulations and examples you'll see elsewhere assume a resistance for the speaker, but to get a proper understanding of performance, you must use a simulated (or real) speaker. Failure to do so gives highly unrealistic results, particularly with parallel crossover networks.
Even with relatively low powered systems, there will be some voicecoil heating. The woofer (or mid-woofer) is affected more, because more of the supplied energy from the amplifier is below 3kHz. This can have a surprisingly large influence over the performance of the crossover network. A series network is self-correcting, and even using mismatched inductors and capacitors, it still provides a flat summed (electrical) response. There is no requirement for impedance correction, although performance may be improved by adding a notch filter tuned to the tweeter's resonant frequency. The very nature of a first-order series network is generally an improvement over the more common (and/ or 'conventional') parallel network.
The formulae for the crossover components are ...
CX = 1 / ( 2π × f × Z )
LX = Z / ( 2π × f )
CX is the crossover capacitance, LX is the inductance, Z is the speaker impedance (nominal or compensated) and f is frequency. For the drivers described here, I used a (pretty much random) value of 6.5 ohms for both drivers, and a crossover frequency of 2.5kHz. The capacitance (CX) works out to be 9.79µF (10µF was used) and the inductance (LX) is 413.8µH (424µH was used due to a typo when I ran the simulations). The difference is minor, so I didn't re-run the simulations and change the graphs.
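The worked values are easy to check with a few lines of Python (a sketch of the formulae above; the function name is mine):

```python
import math

def crossover_values(z, f):
    """First-order crossover components for impedance z (ohms)
    and crossover frequency f (Hz)."""
    cx = 1 / (2 * math.pi * f * z)   # capacitance in farads
    lx = z / (2 * math.pi * f)       # inductance in henrys
    return cx, lx

cx, lx = crossover_values(6.5, 2500)
print(f"CX = {cx * 1e6:.2f} uF")   # 9.79 uF
print(f"LX = {lx * 1e6:.1f} uH")   # 413.8 uH
```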
Figure 3 - Parallel And Series 6dB/ Octave Crossovers
If a series and parallel crossover are compared using a suitable resistor in place of the speaker, they are almost identical. However, loudspeaker drivers are not resistive, other than at a couple of frequencies - at resonance and at the 'minimum impedance' point, which depends on the driver. For the majority of the frequency band, drivers are reactive, showing either inductance or capacitance. Resonance occurs at the frequency where the inductive and capacitive reactances are equal, leaving only a resistive component. At higher frequencies, the impedance is heavily influenced by the semi-inductance of the voicecoil. It's never truly inductive, because there are losses (largely due to eddy-currents in the pole pieces).
The essentials are shown above - a parallel network requires impedance compensation. Without it, the final frequency response is a lottery, and it certainly won't behave as expected. The impedance correction involves a notch filter for the tweeter, and a Zobel network for the woofer. Without these, the response will be highly unpredictable (and rarely good). Other effects will also have an influence as detailed below. A parallel network has a lower overall impedance, mainly due to the compensation circuits (a compensated parallel version using the same drivers has an average impedance of about 4.5Ω between 400Hz and 10kHz).
With a series (6dB/ octave) crossover, the response will sum flat (electrically) regardless of speaker impedance variations (due to frequency, thermal effects, ageing, etc.) and the entire crossover network has been reduced to a single inductor and capacitor. The response (with the simulated drivers) is shown below, and it's as close to perfect as you'd ever want. All of the caveats of 6dB/ octave crossovers apply of course, so don't even think of using it in a high powered system. The tweeter will be stressed well beyond its design limits, and may suffer from serious intermodulation distortion. Continuous high power will cause the tweeter to die. In common with all first-order networks, serious anomalies in the woofer's high frequency response will not be attenuated very well. This means that cone break-up and similar (usually unpleasant) high frequency irregularities can affect the sound, so choose a driver that's well-behaved above the crossover frequency.
Figure 4 - Series Crossover, No Impedance Correction
While a series network is theoretically equivalent to a parallel network, this only applies when impedance correction circuitry has been added. The series network shown is as simple as it's possible to make a crossover. The inductor bypasses low frequencies from the tweeter, and the capacitor bypasses high frequencies from the woofer. You can include impedance compensation circuits if you prefer, but they are not required for flat response. With the simulated drivers, the crossover frequency was raised to 3.8kHz, largely because the tweeter has a lower impedance than the 'design' value.
Figure 5 - Series Electrical Frequency Response
Although the individual responses of the woofer and tweeter show a peak before rolloff, the summed response is due to a combination of both amplitude and phase. If the summed response is flat, then the acoustic response will also be flat. There will always be discrepancies that cannot be accounted for in the electrical domain, but I've run acoustic tests that show that the response is as flat as the drivers will allow.
It's obvious from the graph that while the summed response is dead flat, everything isn't quite as wonderful as it seems. The tweeter's rolloff isn't sharp enough, and this will create issues if the tweeter can't handle the extra power. This means that some tweeters will exhibit higher distortion than expected. This is one of the reasons that I would only ever suggest a first-order crossover for low powered systems. Should the woofer suffer from cone break-up at higher frequencies, this too may be audible, because the crossover network can't reduce the energy quickly due to the low slope of the network. All crossover networks are a compromise, and first-order systems are no exception. Indeed, there are more compromises than with higher-order types, but they are more tolerant of parameter changes.
It's common for the tweeter to be more sensitive than the mid-bass, and it often needs some attenuation. With a series network, you don't need to be particularly fussy about maintaining the correct impedance, and the necessary level reduction can just be a series resistor, selected to attenuate the tweeter. For example, if the tweeter is 3dB more efficient than the mid-bass, just placing a 1.8Ω resistor in series with the tweeter will provide the 3dB attenuation required (assuming one with the same specifications as that used in this article).
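As a rough check (not from the article), the attenuation from a plain series resistor can be estimated by treating the tweeter as a resistive load - a simplification, since real tweeter impedance varies with frequency. Using the 4.5Ω voicecoil resistance quoted later for this tweeter:

```python
import math

def series_pad_db(z_spk, r_series):
    """Attenuation (dB) from a series resistor, treating the
    driver as a resistive load of z_spk ohms (a simplification)."""
    return 20 * math.log10(z_spk / (z_spk + r_series))

# 4.5 ohm tweeter (voicecoil resistance) with a 1.8 ohm series resistor
print(f"{series_pad_db(4.5, 1.8):.2f} dB")   # about -2.9 dB, i.e. roughly the 3dB required
```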
Rather than repeating everything here, refer to the article Loudspeaker L-Pad Calculations, which covers the topic in detail. It includes a simple calculator that you can use to maintain the desired impedance while providing the attenuation needed. Mostly, you don't need to go overboard, especially with a series network. However, for the L-pad to work properly, it's helpful if an impedance compensation network (a notch filter) is used to keep the impedance constant.
The impedance compensation isn't required for a series first order network of course, and the small variations you'll get without it are likely to be less than the response fluctuations of the tweeter itself. Just adding the L-pad goes some way towards taming the resonant peak anyway, because it involves a resistor in parallel with the tweeter if done properly. Another alternative that's been used in some speaker systems is to have switched levels for the tweeter, usually (nominally) flat, +3dB and -3dB settings. I leave this to the reader.
We get used to seeing impedance plots that in some cases dip to very low values at certain frequencies, but the series 6dB/ octave network is fairly benign. A plot with no impedance equalisation is shown below, and it is mostly above 10 ohms, only falling to 3.8 ohms at about 6.5kHz. This is to be expected, since the tweeter has a voicecoil resistance of 4.5 ohms. A tweeter with a higher impedance will reduce the impedance dip.
Figure 6 - Series Impedance (No Impedance Correction)
If the two drivers remained at exactly 8 ohms at all frequencies, the impedance curve would be almost dead flat. There will be a small dip at the crossover frequency, but for two 8Ω (nominal) drivers it falls to 6.3Ω at 2.4kHz, with a very gentle slope (the 'notch' is 8kHz wide!). The impedance curve was produced using the same values as all other examples, and the only significant points are the resonant frequency of the mid-bass, and the impedance peak roughly half-way between the tweeter's resonance and the crossover frequency. The impedance was measured using the same equivalent circuit for each driver as shown throughout this article. With an impedance such as that shown, no known amplifier will be at all stressed.
There's a great deal of complete nonsense on the Net about the 'audibility' of certain components, and capacitors seem to be the most commonly discussed parts. However, a frequency response test (under load) will quickly show that there is almost no difference between any two capacitors with the same ratings (in particular, capacitance and voltage). You can set up a null tester quite easily to prove to yourself that this is the case. While it's well outside the scope of this article to describe such a tester, you don't really need one. Choose good quality (but not necessarily 'audiophile') parts, with a generous voltage margin, and preferably with a low ESR (equivalent series resistance). This usually doesn't change by very much with most capacitors.
Polypropylene is generally the preferred dielectric, as it has low losses. This is important when a capacitor has to carry several amps of current at higher audio frequencies. In some cases, it may be cheaper to get 'motor start/ run' caps, especially when high capacitance is needed. While it might seem unlikely, polyester (aka Mylar® or PET) caps are also fine, provided they are rated to carry the current demanded by the drivers.
Bipolar electrolytics should be your last choice, and only if you can't get anything else. They have a finite life, much higher ESR than most film caps, and usually have limited current ratings. Be careful when you see claims that a capacitor's dissipation factor is a major factor in 'the sound'. Very few film capacitors have a dissipation factor that will cause any problems at audio frequencies, with the possible exception of bipolar electrolytic types. Even with these, it usually only becomes a problem as the capacitor ages, and that is a good reason to avoid them if possible.
Inductors are another matter entirely. As the world's worst passive component, you need to choose carefully to ensure that the DC resistance is low, and be aware of possible self-resonance. Even if it's outside the audio range, it is possible (albeit unusual) for self-resonance to cause power amplifiers some grief. There's a wide range available, with many from specialist ('audiophile') suppliers. Some of these may be very good, others can just as easily be awful - 'customer reviews' are meaningless and should be ignored (this also applies to capacitor reviews of course - many are unmitigated drivel).
The biggest issue with inductors is their DC resistance (DCR). This is present for both AC and DC, and it does two things (neither of which is desirable). When used in series with a woofer, the damping factor provided by your power amplifier is in series with the DCR (as well as the voicecoil resistance), and this reduces damping. Thin wire might make for a smaller inductor, but it will dissipate more power than a larger coil wound with thicker wire. It's a balancing act, and finding the optimum compromise isn't easy. Because of the resistance, the coil also dissipates power (as heat), and every watt 'stolen' by the inductor is a watt that doesn't get to the loudspeaker driver.
For example, if an inductor has a resistance of only 0.66Ω and handles a current of 5A (RMS, average, providing 100W into 4Ω), it will dissipate 16.5 watts. For a comparatively small component with little or no airflow, it can get surprisingly hot, and that increases its resistance even further (copper has a positive temperature coefficient of resistance of 0.395% per °C). At a temperature of 150°C the same coil's resistance will rise to about 1Ω, and it will then dissipate 25 watts! Quite clearly, low resistance is essential.
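These figures are easy to reproduce. A small sketch (the 25°C reference temperature for the 'cold' resistance is my assumption - the text doesn't state it):

```python
def copper_r_at_temp(r_ref, t, t_ref=25.0, alpha=0.00395):
    """Copper resistance at temperature t (deg C), from a reference
    resistance r_ref measured at t_ref. alpha is the temperature
    coefficient quoted in the text (0.395% per deg C)."""
    return r_ref * (1 + alpha * (t - t_ref))

r_cold = 0.66                       # ohms
i = 5.0                             # amps RMS
print(i ** 2 * r_cold)              # 16.5 W dissipated when cold
r_hot = copper_r_at_temp(r_cold, 150)
print(round(r_hot, 2))              # ~0.99 ohms at 150 deg C (the text rounds to 1 ohm)
print(round(i ** 2 * r_hot, 1))     # ~24.6 W (close to the 25 W quoted)
```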
Some crossover coils use a magnetic core, which reduces the size and DCR, but at the expense of linearity. Unless the core is much larger than theoretically required, it will suffer from partial saturation, and that introduces distortion. Saturation depends on current and frequency, and is worst with high currents at low frequencies. For a 'utility' speaker system that will only be used at low power, you'll probably get away with a magnetic cored inductor, but an air-cored coil is always better. However, it will almost certainly have higher DCR unless you go for something very expensive.
Resistors are generally benign, even 'standard' wirewound types. Yes, they have some inductance, but it's unlikely to cause any problems at audio frequencies. The response aberrations of almost all drivers will exceed any error caused by resistor inductance. The vast majority of resistors used in crossover networks are relatively low values, so exhibit only small amounts of parasitic inductance. Non-inductive wirewound resistors are available, but some are 'ordinary' wirewound types that have simply been marked (or sold) as 'non-inductive' - something I've tested and verified, so it's not a myth. In general, the inductance of most 'ordinary' wirewound resistors will be a few micro-henrys, which rarely causes any problems.
The topic of component selection is covered in more detail in the Design of Passive Crossovers article.
I think that the conclusions pretty much speak for themselves. In the limited places where it's appropriate to use a first-order passive network, the series configuration wins every time. There's almost nothing you can do that will disturb the summed response, and testing with speaker measurement hardware will confirm that the acoustic response follows the electrical response as closely as the drivers will allow. You are more likely to see disturbances created by driver peaks and dips or diffraction than any variation of the response across the crossover region.
I haven't included any details of time-alignment, but first-order crossovers are sensitive to acoustic centre misalignment. Unfortunately, if you use a stepped baffle to align the acoustic centres, you'll create diffraction effects that can easily make the overall response worse. Some people use a sloping baffle to time align the drivers, but that means that the off-axis response has to be very good. This is not always the case, but the cabinet details are not covered here. When small drivers are used, the time difference will not normally be excessive, and the small 'wobbles' in response will usually be inaudible. For example, the small speakers used in my Project 73 system would have an offset of no more than 10mm (a time delay of 29µs). Sound takes (about) 2.92µs to travel each millimetre if you want to work it out yourself.
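The arithmetic is trivial to check. A sketch, assuming a nominal speed of sound of 343m/s (air at about 20°C):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 deg C (assumed value)

def offset_delay_us(offset_mm):
    """Time-of-flight delay (microseconds) for an acoustic-centre
    offset given in millimetres."""
    return offset_mm * 1e-3 / SPEED_OF_SOUND * 1e6

print(round(offset_delay_us(10), 1))   # ~29.2 us for a 10mm offset
```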
In all of the cases shown (including elevated temperature), a squarewave is reproduced as electrically perfect, and any deviation is due to the drivers. No loudspeaker driver is free from peaks and dips, and adjacent drivers can cause diffraction, as will the edges of the enclosure. These can be hard to eliminate, and drivers should always be mounted so they are at different distances from the two sides, top and bottom. For information on cabinet bracing, vibration analysis and other aspects of cabinet design, see Loudspeaker Enclosure Design Guidelines.
One thing that you must be aware of is that all passive crossover networks rely on (close to) a zero ohm source impedance. Most transistor amps provide this, and while it's never really 0Ω, it's close enough. Very few valve (vacuum tube) amplifiers have a low output impedance, especially 'low-feedback' and 'no-feedback' designs. All passive crossovers are affected, and obtaining flat response is extremely difficult. The crossover can be designed to work properly by including accurate impedance compensation. If the impedance appears purely resistive, the crossover can be designed to function (more-or-less) normally regardless of small source impedance variations. This is very hard to achieve - the compensation networks need to be very accurate, with the smallest possible impedance variation with frequency. High output impedance also limits electrical damping of the driver at its resonant frequency, so bass response is usually exaggerated.
Next time you put a small speaker system together, try the series 6dB/ octave network. It's highly unlikely that you'll be disappointed. It's not perfect, but it will give good results where a first-order system is appropriate for the drivers you are using. An active system (using an electronic crossover) will win hands down every time, but isn't always feasible. For example, I use a speaker using drivers very similar to those described here to test amplifiers (and often to listen to music in my workshop). An active system would mean that I couldn't use a single amp, and that's very limiting when one needs to make comparisons between amplifiers!
Elliott Sound Products - Sound Level Measurements & Reality
All audible (and some inaudible) variations of air pressure are defined as sound. It is accepted by most people that sound one wishes to hear, such as music, speech, etc. is 'sound', whereas sound that one does not wish to hear is 'noise'. There is no difference in terms of physics - an air pressure variation of 1 Pascal is 94dB SPL (Sound Pressure Level) regardless of the source or the listener. What is music for one person is noise to another, but there are several sound sources where few (if any) people will enjoy the listening experience. A neighbour's party (to which you weren't invited), aircraft flying overhead, a worker using a jackhammer or any other audible disturbance that causes annoyance, loss of sleep or just mild irritation will almost always be considered as noise rather than sound.
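The '1 Pascal is 94dB SPL' figure follows directly from the standard 20µPa reference pressure, and is easy to verify:

```python
import math

P_REF = 20e-6  # reference pressure: 20 micropascals (threshold of hearing at 1kHz)

def spl_db(pressure_pa):
    """Sound pressure level (dB SPL) for an RMS pressure in pascals."""
    return 20 * math.log10(pressure_pa / P_REF)

print(round(spl_db(1.0), 1))   # 94.0 dB SPL for 1 pascal
```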
When noise is experienced, the tool of choice is a sound level meter. This is an instrument that measures the sound level and displays the result on either an analogue (moving coil) meter movement or digitally, using an electronic display (such as a liquid crystal display). To be useful, meters must be calibrated to a known SPL - most commonly 94dB at 1kHz. Predictably, the calibrator must itself be calibrated to a standard, and this continues up the 'food chain' to internationally recognised calibration equipment.

Microphones (any mic - including those used in sound level meters) are dumb. They don't know anything about the sound they are reacting to, and are only able to produce a voltage that corresponds to the pressure variations that impinge on the diaphragm. There is currently no way to process the signal from one or more mics to arrive at a directly comparable 'annoyance value' as might be experienced by a human listener. In addition, the physical location of a microphone in the environment matters - moving the microphone can easily change the reading by ±5dB. Humans (and other animals) have significant signal processing abilities that are extremely difficult to emulate in software, to the extent that this hasn't been achieved yet.

Virtually all sound level meters sold worldwide contain filters to meet international standards, and these are discussed below. The overall accuracy is determined by the class of the instrument, with Class-0 being laboratory standard and 'unclassified' being useful only for getting a rough idea of the SPL. In between are Class-1 and Class-2 instruments, and these are the ones that will normally be used to determine if there is a legally enforceable breach of noise limits. Class-1 is preferred but expensive.

Some excellent background information has been provided by a colleague in New Zealand. The changes and challenges paper is from the Institute of Acoustics Bulletin Volume 32 number 2 March/April 2007. It was also published in the Acoustics Australia Journal December 2006. The article was written by Philip Dickinson from Massey University in Wellington, and explains the progress of sound level metering over the years, and the things that went wrong in the process. See Changes And Challenges In Environmental Noise Measurement, essential reading for anyone who thinks that measuring sound level is an 'exact science' or who might believe that the existing 'standards' are in any way representative of reality.

Another document that points out that A-weighting is a flawed concept is available - see reference [ 9 ]. It's entitled "A-Weighting: Is it the metric you think it is?" by Terrance McMinn from Curtin University of Technology. So, there is dissent in the industry, but it's uncommon and nowhere near as loud as it needs to be for anyone in 'authority' to take any notice.
Note too that A-Weighting is often used to specify signal to noise ratio (S/N) for amplifiers, preamps and other electronic devices used for audio. The claimed figure can be 10dB or more 'better' when A-Weighting is applied, because all low and high frequency noise is attenuated. There is a small boost at 3kHz (where our hearing is most sensitive), but overall the bandwidth for noise measurements should be flat across the audio spectrum (20Hz to 20kHz).
When a sound level reading is obtained, it is (or should be) written up as follows ...

The meter reading response time is specified, so F and S have a specific meaning. They are not random, but have exact values as indicated in standards documents. Fast response has a time constant of 125ms and Slow response uses a time constant of 1 second. I (Impulse) time-weighting is no longer used, but the time constant is (or was) 35ms.

The response and time weightings may be combined as follows ...

Another term you will see is Leq. The Leq is the 'Equivalent Continuous Sound Level', that is to say it is the average sound level over a specified time interval. There is no time constant applied to the Leq. The Leq is best described as the Average Sound Level over the period of the measurement. It is usually measured with A-weighting (LAeq). Because it is an average, it will settle to a steady value, and this makes it much easier to read accurately than a simple instantaneous Sound Level. Being an average, it's also showing the total energy of the noise being measured, so is potentially a better indicator of possible hearing damage or the likelihood that the noise will cause complaints. Leq can be measured over a period of seconds to hours.
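The energy-based averaging that distinguishes Leq from a simple dB average can be sketched as follows. This is a minimal illustration with made-up sample data, not a standards-compliant meter (a real integrating meter also applies frequency weighting and precise timing):

```python
import math

def leq_db(levels_db):
    """Equivalent continuous sound level: the samples are averaged
    on an energy (power) basis, not as a simple mean of dB values."""
    mean_power = sum(10 ** (l / 10) for l in levels_db) / len(levels_db)
    return 10 * math.log10(mean_power)

# Energy averaging is dominated by the loudest samples:
print(round(leq_db([60, 60, 60, 90]), 1))   # ~84.0 dB, not the 67.5 a simple average gives
```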
A sound level meter that measures Leq (or Lavg - average SPL) is usually referred to as an 'Integrating Sound Level Meter' and all sound level meters should meet the standards IEC60651, IEC60804, IEC61672 or ANSI S1.4, depending on country specific requirements. Provision of A-weighting is mandatory for any meter that meets the applicable standard(s), which shows just how entrenched this has become.

While it may appear that I'm targeting the wind industry in this article, that's not the case at all. However, it must be considered that there are more and louder complaints about wind turbines than almost any other single noise source, and there has to be a reason why this is so. The primary reason is the use of A-weighting, which in the opinion of the author is almost always unjustified. A-weighting is only applicable to a small number of measurements at very low SPL, and must not be used where there is significant low frequency energy, tonality or rhythm.

The debate about A-weighting is most commonly raised specifically because of wind turbine noise. There have always been people who have never liked A-weighting, but their complaints were transient for the most part because very few noises continue for years. Most complaints about noise target a specific incident or series of incidents that are temporary, rather than continuous long-term. Other long-term problem noises include rail and major highways, but in most cases the noise source was there well before the people who complain about it moved in. Neither of these noise sources contain significant amounts of sub-audible noise (infrasound), although trains can create significant vibration (travelling through the earth, rather than air). Vibration is another topic altogether.

It's not just about wind turbines though. There are any number of noise sources (both natural and man-made) that should only ever be measured using a meter with essentially flat response (or C-Weighting), but invariably A-Weighting is used. There is no consideration for the type of noise (rhythmic, low-frequency, impulsive, etc.), either in legislation or procedure. Unqualified people (such as police or council rangers) are permitted to take noise readings, without the slightest knowledge of how sound propagates or indeed without any knowledge of the way the meter takes a reading. They (presumably) follow some kind of guideline that someone (also unlikely to be properly qualified) drew up.
In case you missed it above, I must make it clear from the outset that I do not agree with the use of weighting filters, since they are not - despite claims, standards and legislation to the contrary - an accurate representation of human hearing. Nor do they predict the potential for annoyance to people other than by accident. Indeed, it could be argued that the use of weighting filters (in particular A-weighting) is designed to provide a highly optimistic measurement that rarely correlates with perception or annoyance.

There are literally countless sites on the Net that will tell you that "A-weighted measurements are an expression of the relative loudness of sounds as perceived by the human ear. The correction is made because the human ear is less sensitive at low audio frequencies, especially below 1000 Hz, than at high audio frequencies." or words to that effect. Very few (well, almost none actually) clarify this or point out that the 'correction' is only valid for broad band signals at very low SPL.
The standard A-weighting curve is accurate at only one SPL, and assumes that the listener has 'standard' ears. Based on the 'Equal Loudness Curve' (see below), the closest match is at or near 40dB SPL - an unrealistically low noise level by today's standards. I would suggest the A-weighting curve may have relevance somewhere around 40dB SPL (unweighted!) and below. Indeed, many years ago that's exactly when A-weighting was used ... only for low level (below 40dB SPL) noise. Today, it is mandated for all sound level measurements in almost all countries, with no regard for the actual SPL. As will be shown below, this has caused many noise affected people to doubt the science, and it has to be said that there is considerable justification for their doubt.
To obtain any correlation with reality, a noise measured using A-weighting must be ...

The vast majority of real-world measurements do not fulfil any of these criteria, and A-weighted noise level measurements will give a completely unrealistic reading that does not reflect the audibility or annoyance value of the overall sound. This is especially true for very low frequency signals, amplitude modulated noise and/ or any rhythmic sound. In some instances there is provision for a 'penalty' of 5dB that is added to the A-weighted SPL. This supposedly compensates for the noise having characteristics that make the use of A-weighting unsuitable.

One has to be very cynical of these measurements when the basic measurement is taken using a filter that is clearly inappropriate, then adding a fixed 5dB penalty. Everything is set in stone, and there is usually no opportunity to protest because the meter is always deemed to be right ... even when it's patently obvious that it is completely wrong!

When the police or council rangers measure the noise from a car exhaust or your neighbour's party, they happily use A-weighting - it's in the legislation in most countries, and it's unlikely that the officer concerned has even the most rudimentary understanding of what it is or does. That is very scary! People with little or no training, taking noise measurements, and expected to know how sound propagates. Then they are made to use an arbitrary filter (the A-weighting filter) that is almost always inappropriate for the type of noise and the actual SPL. The sound level meter they use must be accurate to within 1dB or better, yet simply moving the noise source or measurement point by a few metres can cause a big change - perhaps as much as ±5dB or more, depending on predominant frequency and surroundings.

The purpose of the weighting filter is supposedly to account for the fact that human hearing is less sensitive at low and high frequencies than in the upper midrange. The most troubling (and totally unrealistic) part is that the weighting filter is applied regardless of the actual SPL, and without regard to the type of noise. The IEC standard 61672-1:2003 mandates the inclusion of an A-frequency-weighting filter in all sound level meters. Some cheap meters offer nothing else! At any (unweighted) SPL, potentially intrusive LF sound is ignored when an A-weighting filter is employed - even though the noise may be clearly audible to the person taking the measurement!
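For reference, the A-weighting response is defined analytically in IEC 61672-1. A sketch of the standard formula shows just how heavily low frequencies are discounted (the values agree with the published tables - 0dB at 1kHz, about -19.1dB at 100Hz):

```python
import math

def a_weight_db(f):
    """A-weighting relative response in dB (IEC 61672-1 analytic form),
    normalised to 0dB at 1kHz."""
    f2 = f * f
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20 * math.log10(ra) + 2.00   # +2.00dB normalisation constant

print(round(a_weight_db(1000), 2))   # 0.0 at 1kHz
print(round(a_weight_db(100), 1))    # about -19.1
print(round(a_weight_db(31.5), 1))   # about -39.5: intrusive LF noise barely registers
```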
A-weighting filters are (supposedly) based on the Fletcher-Munson curves reproduced below, which show the variation of sensitivity at different sound levels. It is clear that any loss of sensitivity is highly dependent upon the actual SPL, but this is not generally considered. The idea that a single filter can represent the true subjective annoyance potential at all levels is clearly not just wrong, but seriously wrong. Despite this, A-weighting is a worldwide standard procedure.
Figure 1 - Fletcher Munson Curve
Each 10dB increase in the Figure 1 curves represents the sound being twice or half as loud, because this is the way our hearing works. For example, to get a sound system to sound 'twice as loud' according to listeners, the amplifier power must be increased by 10 times (i.e. 10dB). Assuming the use of the same speaker system, a 200W (average) audio signal is perceived as twice as loud as a 20W signal, but a 40W signal is only 3dB greater - a just perceptible change to the listener. There are other (subtle) influences, but in general this is verified in controlled tests.
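The relationship between amplifier power, dB and the 'twice as loud' rule of thumb is easy to verify numerically:

```python
import math

def power_ratio_db(p1, p0):
    """Power ratio expressed in dB."""
    return 10 * math.log10(p1 / p0)

def loudness_ratio(db_change):
    """Rule-of-thumb perceived loudness: +10dB sounds twice as loud."""
    return 2 ** (db_change / 10)

print(power_ratio_db(200, 20))            # 10.0 dB for ten times the power
print(round(loudness_ratio(10), 2))       # 2.0 - perceived as twice as loud
print(round(power_ratio_db(40, 20), 1))   # 3.0 dB - a just perceptible change
```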
What the chart shows is that as the SPL is reduced, our ability to detect low or high frequency noise is reduced, so measurements should reflect this phenomenon. While it is undeniable that the chart above is a reasonable representation of reality in terms of human hearing [ 1 ], I remain unconvinced that A-weighting is a valid test methodology unless the absolute sound intensity is specified. In addition, it only works with a single tone. If a nuisance sound (noise) is broad-band or has any rhythm, modulation or tonality, you cannot use A-weighting to measure the likely 'annoyance value' and the meter will badly underestimate the audibility and intrusion characteristics of the noise.
+ +A-weighting has some validity for thermal noise measurements of audio amplifiers and other sound equipment, because the noise from most equipment is at or below the threshold of audibility. There are some sounds that seem (at a casual glance) to defy all measurement standards, and remain audible (albeit at very low level) despite all the 'evidence' that this should not be so. As with all such things, experience and practical application are far more important than the absolute indication on a meter.
+ +When dealing with audio electronics, a piece of equipment that is essentially 'noise-free' for all intents and purposes is comparatively easy, because the ambient noise level in most urban or suburban areas is likely to be far higher than the residual noise of most audio equipment. For example, 80dB signal to noise ratio for a car hi-fi system is not really useful for the most part, but is easy to achieve. Even the most expensive luxury cars generate far more engine, wind and road noise than any tuner/ MP3/ CD system, and this is apart from all the other external noise generated by other vehicles on the road.
+ +Remember that if the car audio system has 80dB S/N ratio, noise referred to 100dB SPL will be at only 20dB SPL. One is seriously loud (and if sustained will damage your hearing after around 15 minutes of exposure), and the other is very quiet indeed. Many older people will not be able to hear sound at that level - even if there is no external noise at all. Anyone who has listened to 100dB SPL for 1/2 hour or so (regardless of age) will be unable to hear 20dB SPL until at least 24 hours has passed between the two listening sessions.
It is worth noting that the Fletcher/ Munson curves were devised in 1933, with a test group that apparently consisted of only about 12 people. Equipment of the day was very limited by today's standards, but response was plotted between 25Hz and 16kHz (in 1933 even that was quite a feat!). The above curves are considered to be gospel throughout the industry. I'm not disputing that the general trends are accurate (there would hopefully have been changes if errors were found), but I am astonished that test data from so long ago has managed to stand the test of time. More recent tests of very low frequencies have added to our understanding, but between 25Hz and 100Hz the existing curve has been in reasonable agreement with the latest data.
One thing that seems to have been missed by a great many people is the SPL range between 'just audible' and 'seriously loud'. At 1kHz (and assuming very good hearing), this ranges from 0dB (the threshold of hearing) up to 100dB, which is quite loud enough. That's a range of 100dB. However, at 31.5Hz, the range is far less (look at the curves shown above). The difference between a sound being just audible and very loud is only about 35dB. This means that a comparatively small SPL difference can take a signal from below audibility to extremely annoying - assuming that it's someone else's noise of course.

A 100dB range means that the loudest sound is over 100,000 times greater (in sound pressure) than the 'just audible' sound ... but that's at 1kHz. At a frequency of 31.5Hz, the loud sound (100dB SPL) is only 56 times greater than the point where it becomes audible. That's a huge difference between the two, and discussion of the effect is commonly avoided. There is some literature that covers this in some detail - see reference 3, but don't expect to see any references in any official documentation. Apparently we are all supposed to be happy with a meter reading, and stop complaining if the A-weighted sound level meter says there's no problem - regardless of whether we can sleep through the noise or not.
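The dB-to-ratio arithmetic above is easy to check. A level difference in dB converts to a sound-pressure ratio as 10^(dB/20), as in this short sketch (the function name is mine):

```python
def db_to_pressure_ratio(db: float) -> float:
    """Convert a level difference in dB to a sound-pressure ratio."""
    return 10 ** (db / 20)

# 100dB range at 1kHz: loudest vs. 'just audible' sound pressure
print(round(db_to_pressure_ratio(100)))  # 100000
# ~35dB range at 31.5Hz: a far smaller span from audible to very loud
print(round(db_to_pressure_ratio(35)))   # 56
```

The same arithmetic explains the 1/100th figure quoted later for A-weighting at 31.5Hz: -39.4dB corresponds to a pressure ratio of about 0.011.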
Since it is unlikely that I shall be able to convince the entire industry that it is using flawed reasoning, I have described an A-weighting filter on my website (see Project 17) so that we can at least make some meaningful comparisons with other systems where this has been used. Note that with electronic equipment, A-weighting is generally applied only to residual (mainly thermal) noise measurements. These tests are usually valid, but results can still be misleading. While I have described the filter, I do not use it for my own measurements.
Remember that we should only ever use A-weighting when the noise we are measuring is of very low amplitude, has an even, broad frequency spectrum, and contains no tonality or rhythm - the neighbour's party and many other urban noise sources are unlikely to fit this mould, but will be measured with A-weighting anyway - oh dear - so much for getting some sleep!
The frequency response curve of an A-Weighting filter is shown below, and it is essentially a tailored bandpass filter, having a defined rolloff above and below the centre frequency. The reference point is at 1kHz, where the gain is 0dB. The filter response is supposed to be the inverse of one of the curves of the equal loudness graph shown in Figure 1 - it is a little hard to tell which one, but according to most comments on the topic it's the 40 Phon curve (40dB SPL at 1kHz). This is a worldwide standard, warts and all. Note that there is some gain (1.2dB) around the 3kHz point - that's where our hearing is most sensitive.
Figure 2 - A-Weighting Response (C & Z-Weighting Shown For Reference)
Regardless of what may be claimed (and by how many so-called 'experts'), I do not accept for an instant that A-weighting really does account for our perception of real-life noise levels. IMO it is a laboratory curiosity, but when used on wide bandwidth noise sources at very low levels (less than 40dB SPL), there is reasonable correlation between A-weighting and auditory perception. At other levels and with different noise sources (man-made rather than naturally occurring), correlation is generally poor, and the weighting filter simply trivialises real problems. The graph shows the approximate response for C-Weighting as well, as this is the one that should be used for any measurement over perhaps 60dB SPL. Z-Weighting is included for reference.
A-weighting also completely fails to provide for a realistic measurement of high frequencies - especially those above 15kHz. Depending on age, our hearing has close to a 'brick-wall' filter somewhere between 12kHz and 20kHz. The filter gives an unrealistic (wrong) indication of any high frequency (including ultrasonic) noise that might exist.
An A-weighting filter will enable you to make 'industry standard' measurements of amplifier noise levels, and this is one of the very few areas where the use of A-weighting might give results that match what we hear, because the levels involved are usually at the lower limit of our hearing. Life would be easier if all noise measurements were made 'flat' - with no filters of any kind, but this is not to be.
C-weighting filters are sometimes used for especially troublesome noise measurements, with frequencies below 31.5Hz and above 8kHz being filtered out (albeit comparatively gently). Z-weighting is also used - there's no significant filtering, and the measurement system operates over its full bandwidth, defined as 10Hz to 20kHz ±1.5dB excluding microphone response. Much as many people would like to see the standards changed to outlaw or at least restrict A-weighting, I fear that it won't happen unless enough people point out that the present A-weighted measurements are largely meaningless because they are misused - due in part to an apparent lack of understanding. There is also a matter of will, and many companies and industries that make noise will fight very hard indeed to prevent any change.
Frequency (Hz)   | 31.5  | 63    | 125   | 250  | 500  | 1k | 2k   | 4k   | 8k   | 16k
A-weighting (dB) | -39.4 | -26.2 | -16.1 | -8.6 | -3.2 | 0  | +1.2 | +1.0 | -1.1 | -6.6
C-weighting (dB) | -3.0  | -0.8  | -0.2  | 0    | 0    | 0  | -0.2 | -0.8 | -3.0 | -8.5
Z-weighting (dB) | 0     | 0     | 0     | 0    | 0    | 0  | 0    | 0    | 0    | 0
The table shows the relative response at octave frequencies from 31.5Hz to 16kHz. Z-Weighting is flat - at least as flat as the microphone can produce. While it would be the ideal (IMO), most meters don't provide it. C-Weighting is common on better meters, and is even provided on some cheap units. I happen to think it's indispensable, and that it's the most appropriate setting for the vast majority of measurements. I would not buy a meter (at any price) that did not include C-Weighting.
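The table values can be reproduced from the standard pole-zero definitions of the weighting curves (IEC 61672). The sketch below computes the A and C weighting response in dB; the function names are mine, and the small additive offsets simply normalise the response to 0dB at 1kHz:

```python
import math

def a_weight_db(f: float) -> float:
    """A-weighting relative response in dB (IEC 61672 definition)."""
    ra = (12194 ** 2 * f ** 4) / (
        (f ** 2 + 20.6 ** 2)
        * math.sqrt((f ** 2 + 107.7 ** 2) * (f ** 2 + 737.9 ** 2))
        * (f ** 2 + 12194 ** 2)
    )
    return 20 * math.log10(ra) + 2.00  # normalised so A(1kHz) = 0dB

def c_weight_db(f: float) -> float:
    """C-weighting relative response in dB (IEC 61672 definition)."""
    rc = (12194 ** 2 * f ** 2) / ((f ** 2 + 20.6 ** 2) * (f ** 2 + 12194 ** 2))
    return 20 * math.log10(rc) + 0.06  # normalised so C(1kHz) = 0dB

for f in (31.5, 63, 125, 1000, 8000, 16000):
    print(f"{f:>7} Hz  A: {a_weight_db(f):6.1f} dB  C: {c_weight_db(f):6.1f} dB")
```

Evaluating these at the octave frequencies gives values within about 0.1dB of the table above.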
It used to be (at least fairly) common for sound level meters to include B-Weighting, as a halfway point between 'A' and 'C' and intended for moderately loud sounds. The idea was that A-Weighting would be used for sound at around 40dB SPL, B-Weighting for 60dB SPL (or thereabouts) and C-Weighting used for sound at 80dB or more. B-Weighting is now 'deprecated' (to use the latest buzz-word), and hasn't been included for many years (so I haven't shown it in the chart or table). Since A-Weighting is mandated for everything, I suppose it's only a matter of time before C-Weighting goes the same way. Should that be allowed to happen, we're all screwed, and noise-makers can have a field day.
In fact, it is quite easy to prove to yourself and your co-workers that A-Weighted measurements at any meaningful level are pointless. You need a speaker with good response to at least 30Hz, and a graphic equaliser that can provide about 10dB boost at 30Hz, plus music or a pink noise source (preferably both). Set up the equipment, and play the signal at about 74dB (unweighted). Prove that the meter (set for C or Z-Weighting) shows an increase when the 30Hz component is boosted, and that you can hear the difference (it should be very obvious). If you cannot hear or measure a difference, either the source material has no bass, or the speaker cannot reproduce it. Select a different source and/or speaker so the difference is quite audible, and repeat the test.
Now, set the meter to A-Weighting and repeat the test. According to the meter you cannot hear the difference, yet perversely, you find that it is just as audible as when the meter was set for C-Weighting! But how can this be? Everyone knows that you can't hear such a low frequency - just look at the Fletcher/ Munson curves above, or better still, read the standards documents! Look at the meter again - it tells you that you can't hear the change. Strangely, you hear it anyway, as will anyone else who comes along to find out what you are up to.
The A-weighting standard means that the meter reading at a frequency of 31.5Hz is attenuated by 39.4dB - almost 40dB, or about 1/100th of the actual sound pressure.
This simple experiment should be mandatory for anyone who uses a sound level meter, and should be forced upon all legislators and standards writers. The test must be continued until the 'victim' (test subject) freely admits that they can hear the difference, that the expensive meter they are clutching is therefore wrong, and that they shall refrain from taking a sound level measurement until they learn how to switch off weighting filters (and use their ears).
It's important to understand, recognise and admit (depending on where you are in the industry) that if a sound increase is clearly audible, then any measurement system that fails to show the increase is faulty. It doesn't matter how well calibrated or how expensive the equipment might be, it should always show the SPL in a manner that correlates with what you can hear. Just because the sound is at a low frequency is no reason to ignore it - quite the reverse, because low frequencies generally travel much further and are harder to 'contain' than high frequencies.
Just in case you missed my point here ... A-Weighting is almost always completely inappropriate (i.e. bollocks). It doesn't work, and is used by industry because it doesn't work, thereby giving them far more leeway than should be the case. I have spoken with many, many people involved in professional noise measurement, and the sensible ones (i.e. those not employed by an industry that gets noise complaints) all readily admit that A-Weighting is flawed and is rarely used appropriately.
I jokingly said to some people I worked with in New Zealand that I could imagine a 'consultant', clutching his meter, hearing low frequency noise that obviously could not be ignored, but still pointing to his meter and saying "No, no, it's perfectly fine - look at the meter."
Unfortunately, I was advised that this is no joke - they had experienced this exact scenario, and seen it with their own eyes (assisted by their ears). I kid you not.
Sound level meters are available from as little as $30 or so, but anything that will satisfy a legal requirement has to be of a certain standard. Class 2 meters are considered satisfactory for most work, but Class 1 offers greater accuracy and therefore may be considered to be a 'precision' instrument and be more believable if a noise case goes to court. Expect to pay at least $500 for Class 2, and considerably more for Class 1. The final cost depends on the other functions - up to $2500 gets you a fairly comprehensive meter (as you might hope for that kind of money).
The additional functionality that you get from an integrating sound level meter allows much more comprehensive measurements to be taken (such as LAeq, LCeq, etc.). Another function that is available on top-of-the-line meters is a filter set. This is typically 1/3 octave, and allows measurements to be taken on each of the internationally recognised 1/3 octave bands. These are as follows (all frequencies are in Hertz) ...
(12.5) (16) 20 25 31.5 40 50 63 80 100 125 160 200 250 315 400 500 630 800 1k0 1k25 1k6 2k0 2k5 3k15 4k0 5k0 6k3 8k0 10k 12k5 16k (20k)
The 12.5, 16Hz and 20kHz bands will sometimes be omitted, especially in low-cost meters. Some meters have octave band filters rather than 1/3 octave bands - this minimises the number of filters needed, but at the expense of measurement flexibility.
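The nominal band centres listed above follow the base-10 series defined in ISO 266: the exact centre of band n is 1000 × 10^(n/10) Hz, and the familiar values (20, 25, 31.5 ... 16k) are conventional roundings of these. A short sketch:

```python
# Exact 1/3-octave band centres per the base-10 series (ISO 266),
# from ~20Hz (n = -17) up to 20kHz (n = 13). The nominal values used
# on meters are conventional roundings of these exact figures.
centres = [1000 * 10 ** (n / 10) for n in range(-17, 14)]
for f in centres:
    print(round(f, 1))  # 20.0, 25.1, 31.6, 39.8 ... 15848.9, 19952.6
```

Note that the exact centres differ slightly from the nominal labels (e.g. 31.62Hz is labelled 31.5, and 3162Hz is labelled 3k15).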
Accuracy is specified by the meter class. At the reference frequency of 1kHz, the tolerance limits are ±0.7dB for Class 1 and ±1dB for Class 2. At the lower and upper extremities of the frequency range, the tolerances are wider. A Class 2 meter is considered sufficiently accurate for any measurement that is used to prosecute or defend a noise complaint, but s/he who has the Class 1 meter rules. It's often expected that there will be a statement of 'percentage uncertainty' with reported sound levels, but this can be extremely difficult to provide in an outdoor environment because of the way sound is propagated and can be heavily influenced by terrain and/or buildings and other structures.
Measurements are supposed to be taken as the true RMS value of the signal, but most cheap meters use average-responding circuits, calibrated to display RMS. The difference between average and RMS can be quite pronounced, especially with repetitive impulsive noises. Depending on the duration of the impulses, the difference can be as great as 3:1 - the true RMS value is three times the average (it can be more under specific circumstances). With most waveforms the error is smaller, but a cheap average-reading meter may still read low by up to 6dB.
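The RMS versus average discrepancy is easy to demonstrate numerically. For a rectangular pulse train with duty cycle d, the RMS value is peak×√d while the rectified average is peak×d, so the ratio is 1/√d - a 1-in-9 duty cycle gives the 3:1 figure mentioned. A sketch (function names are mine):

```python
import math

def rms(samples):
    """True RMS value of a sampled waveform."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def rectified_average(samples):
    """Rectified average, as sensed by a cheap average-responding meter."""
    return sum(abs(x) for x in samples) / len(samples)

# Impulsive waveform: 1 sample 'on' in every 9 (duty cycle 1/9)
pulse_train = ([1.0] + [0.0] * 8) * 100

r = rms(pulse_train)                # peak * sqrt(1/9) = 1/3
a = rectified_average(pulse_train)  # peak * 1/9
print(round(r / a, 2))              # 3.0 - an average-reading meter
                                    # calibrated for sinewaves reads low
```

With a sinewave the RMS/average ratio is only about 1.11 (which is what average-reading meters are calibrated for), so the shorter the impulses, the further such a meter under-reads.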
Figure 3 - A Class 2 Meter (Top) And A Cheap Unit (Bottom)
The photo shows a high quality integrating sound level meter (with octave band filter set) and a cheap but functional unit that offers the basics but is not classified. Even if adjusted with a calibrator, the SPL measured would not usually be accepted in court because the accuracy cannot be guaranteed. It is still useful for comparative readings and to get a general idea of the SPL from typical noise sources.
As noted in the introduction, sound level meters also have time weighting, with averaging time options for S (slow, 1s) and F (fast, 125ms). This allows the user to see instantaneous changes, or a relatively slow 'moving average' of the sound level. More advanced meters offer longer-term integration of the sound, giving an 'equivalent continuous sound level' - an average level that is collected over a selectable time period (seconds to hours). This is referred to as Leq (equivalent sound level), and is further designated LAeq (A-weighted) or LCeq (C-weighted). The absence of a weighting designator implies Z-weighting (zero filtering).
Some of these meters also display the maximum and minimum values (Lmax and Lmin) encountered during data collection. Taking this a step further, there are PC based data loggers that record the SPL at selected intervals, provide up to 1/12 octave band filters, and can also take a recording of any noise that exceeds a preset maximum.
Sound level meters are normally calibrated using a special device (a calibrator) that provides a consistent SPL of 94dB (1 Pascal) at 1kHz. Some calibrators provide additional frequencies, and it's not uncommon to also provide a second reference level of 114dB (10 Pascals). Adaptors are available for most common microphone sizes, because the meter's microphone must be completely sealed by the calibrator.
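The pressure-to-SPL relationship behind these calibrator levels is SPL = 20·log10(p/p0), with the reference pressure p0 = 20µPa. A sketch verifying the figures quoted here (and the blade-tip pressure quoted further down):

```python
import math

P0 = 20e-6  # reference sound pressure, 20 micropascals

def pascals_to_spl(p: float) -> float:
    """Convert sound pressure in pascals to dB SPL."""
    return 20 * math.log10(p / P0)

print(round(pascals_to_spl(1.0)))     # 94  - 1 Pa calibrator level
print(round(pascals_to_spl(10.0)))    # 114 - 10 Pa calibrator level
print(round(pascals_to_spl(5000.0)))  # 168 - 5 kPa, the blade-tip
                                      #       pressure differential
```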
Note that there are additional weighting schemes in use. So far, nothing has really replaced A-weighting for general noise measurements, but there is also C-weighting (referred to above), Z-weighting (Z = zero ... no filter at all) and G-weighting. The latter was specifically designed to measure infrasound - noise that is supposedly below the minimum we can hear. Unfortunately, it is also flawed because it only measures a fairly narrow band of frequencies, centred on 20Hz. This means that any frequency that is not 20Hz is subjected to quite radical filtering. This renders the G-weighting filter rather pointless, and it's very hard to recommend it for anything (again, despite legislation and standards).
Other weighting schemes also exist (such as ITU-R 468), but are rarely found in sound level meters.
In this case, the problem is far larger than elephants - wind turbines. A quick search will reveal that there are countless websites dedicated to the low frequency (infrasound) noise from these machines. Likewise there are countless websites that complain that everyone who complains has a hidden agenda, imagines the issues, and that no problems exist. I doubt that anyone will be even slightly surprised when I tell you that the (official) noise measurements are invariably done using A-weighting ... because that's what the standards worldwide say must be used.
This is wrong on all counts. A-weighting cannot be used because the low frequency noise has rhythm. A-weighting cannot be used because there is a genuine low frequency component that can be heard or felt. In some instances, A-weighting cannot be used because the sound is impulsive. It is quite obvious that no low frequencies will register on a sound level meter set for A-weighting, especially since the frequencies that cause the most problems are generally below 20Hz. Tame 'consultants' (i.e. those employed by the turbine operators to 'prove' compliance with noise regulations) commonly deny that (any) people can hear or be affected by these low frequencies, yet families are (literally) abandoning their homes because of the noise.
It is inevitable that wind turbine 'farms' will generate rather large pressure fluctuations - as the blade passes the tower, and due to other issues such as different wind velocity close to the ground vs the highest point of the blade's swing (wind shear). Likewise, it is inevitable that at times, the blades of several turbines will be in sync, relative to any given point on the landscape. When a number of blades are in sync with each other, the low frequency component of the noise must either increase or decrease dramatically, and to assume otherwise is both naive and irresponsible. Needless to say, when the turbines are in sync so that there is cancellation no-one will mind at all, but this isn't the problem. Reinforcement is just as likely, and that's when people are bound to have issues. Despite this, turbine operators all over the world claim that there is no problem, and that the people who complain are either victims of their own imagination, hypochondriacs or are making fraudulent claims. They 'prove' this by taking A-weighted SPL measurements.
Lest anyone think that the pressure variations created by turbine blades can't be so great as to be audible from (up to) 10km away, consider the following. It's been determined fairly recently that bats killed by wind turbines are often not victims of 'blade strike' as thought initially - their lungs burst due to the pressure difference [4]. The pressure differential has been measured at between 5 and 10kPa (kilopascals). In terms of sound pressure, 1 Pascal is equivalent to 94dB SPL, so 5kPa is around 168dB SPL! That's right at the blade-air interface and can't really be directly translated into SPL, but I use this as an example only! Birds have stronger rib cages and different lung structures, but they are thought to be disoriented by the pressure differential and may also come to grief.
It seems to be generally accepted that wind turbines kill far fewer birds than many other man-made structures - I included the previous paragraph purely to illustrate the pressure differentials that are created by the turbine blades. If there are pressure differentials, there is sound. It doesn't magically cancel itself out to leave silence, but instead radiates like any other sound. Being very low frequency, the sound can travel for a considerable distance without attenuation by the air, or by the local terrain. Indeed, the terrain can reinforce (or cancel) the sound under some conditions. Computer modelling and prediction is used to determine if there is a likely problem, but that will only work if the algorithms are correct and the input data match reality.
Even though there is mounting evidence (see [5] as just one of hundreds of similar sites), wind farm operators and most government regulators claim that there is nothing to worry about, and that there are no adverse effects from low frequency noise (including infrasound), blade shadow or reflection ('glint'). I seriously doubt that anyone would be able to tolerate a shadow passing their house several times a second for very long, nor would they be able to cope with flashes caused by the sun reflecting off the blades at a similar rate.
Then, at night, there is the likelihood of low frequency noise. There will be times when the blades are in sync in just the right way to cause complete cancellation of the noise, and other times when the exact opposite occurs. Normal wind and turbine blade noise may be amplitude modulated by the turbines (aerodynamic modulation), or there may be a very noticeable variation in air pressure (infrasound) that is claimed to be inaudible, but only because it doesn't show up on the sound level meter. However, that's not the experience of people who live near a wind farm. People have said that the noise can be audible from as far as 10km from the turbines [5], although in general the majority of problems seem to be within 2km or so. In some cases, the LF noise may cause windows to rattle or create other sounds within the structure of a dwelling. One room may be quiet while another can be 'noisy', with significant low frequency energy. This is virtually impossible to predict with any certainty.
Also consider that a large wind farm acts like a huge line array, so the SPL may not diminish by the expected 6dB each time the distance is doubled. A true line array (which is large compared to wavelength) causes the SPL to fall by only 3dB each time the distance is doubled, making it entirely possible that the measured SPL of low frequency noise could be far greater than predicted. This will not happen all the time, but you probably don't want to be close by (and perhaps asleep) when it does occur.
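The difference between point-source spreading (6dB per doubling of distance) and line-source spreading (3dB per doubling) is easy to put in numbers. A sketch, assuming idealised free-field propagation (real results also depend on terrain, atmosphere and structures, as noted above):

```python
import math

def point_source_loss(d1: float, d2: float) -> float:
    """SPL drop from distance d1 to d2, spherical (point source) spreading."""
    return 20 * math.log10(d2 / d1)

def line_source_loss(d1: float, d2: float) -> float:
    """SPL drop from d1 to d2, cylindrical (large line array) spreading."""
    return 10 * math.log10(d2 / d1)

# From 100m to 1600m - four doublings of distance:
print(round(point_source_loss(100, 1600), 1))  # 24.1 dB lost
print(round(line_source_loss(100, 1600), 1))   # 12.0 dB lost - i.e. about
                                               # 12dB louder than a simple
                                               # point-source prediction
```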
Health Canada, in its infinite (lack of) wisdom has proposed a noise limit for wind turbines of 45dBA. This simply shows an astonishing lack of understanding, but naturally it is fully supported by and was prepared by consultants [6]. There is a great deal of criticism all over the Net about this particular 'study', which appears to be considered a complete load of old cobblers by anyone who actually knows anything about the effects of low frequency noise. Regrettably, this does not seem to have deterred anyone involved [7].
When you consider that the A-weighting filter applies 50dB of attenuation at 20Hz, it is entirely possible that someone would be expected to tolerate up to perhaps 75dB SPL (unweighted) of low frequency noise at or below 20Hz, but the wind farm would still be 'compliant' for noise output. 75dB is well within the Fletcher-Munson curve for audibility at 20Hz and below, so it's extremely hard to imagine what kind of nonsense these consultants are thinking. This is a perfect example of how the A-weighting measurement system is abused.
The many problems of wind turbines aren't something that I wish to pursue in depth here, but it is one of the best examples of the completely inappropriate use of A-weighting when taking sound measurements. It's well past the time where governments and other regulatory bodies (such as standards organisations) should realise that low frequency noise really is audible or is sensed by other organs in the body, and that A-weighting is providing a great benefit to the noise-makers to the detriment of those who are directly affected by the noise.
Mine is not the only voice in the wilderness on this topic, but credible references can be difficult to find. I've gathered quite a few (as seen in the references section below), but no-one who needs to understand the issues is listening. "We've always done it that way" doesn't make it valid or sensible, and a poor decision made many years ago does not have to be continued simply because of tradition. All that shows is that it's always been done incorrectly, and will continue to produce completely irrelevant and unhelpful results until we stop doing it 'that way'.
It is important that standards bodies and legislators actually understand that applying frequency weighting (specifically A-weighting) is something that is only applicable in certain limited circumstances. That it is applied to virtually everything - and generally inappropriately - is something that has to be addressed. A simple sound level reading by itself and without context is meaningless, because the measurement gives no hint as to the original sound source.
If someone measures an SPL of 74dBA, that doesn't give anyone the slightest clue as to the nature of the sound. It could be a number of people talking loudly close by, or a jackhammer at some distance. More to the heart of the issue, if the noise contains significant low frequency energy, A-weighting will trivialise the audibility of the LF content to the extent that the meter may indicate compliance, while the person taking the measurement can hear quite plainly that there really is a problem. In most cases it would probably be unwise for that person to actually admit to anyone else that the problem exists, despite the meter reading. Quite obviously, this same issue applies for any simple meter reading. The only way anyone can really understand what's going on is to make a recording (calibrated to the reference SPL), which can then be analysed in any way one likes.
None of this is considered in legislation, and the wind industry in particular seems to have a clear opportunity to make a considerable (and audible) amount of LF noise. Blind faith in a flawed concept (and the misuse of the weighting curve, which is applied regardless of actual SPL) is causing problems. These problems are not just isolated to the affected residents who may even be forced to abandon their homes, but they affect the community at large. This includes the turbine operators! At some time in the future, laws will be changed, and installations that comply today will fail miserably. The cost to the operators and the community is likely to be staggering.
Even now, I expect that $millions has been spent on studies, research, more studies, court cases and lost productivity in every country where wind farms are proliferating. This will continue, because it is highly unlikely that there will ever be consensus between the parties. Nothing is helped when supposedly reputable vendors and consultants extol the 'virtues' of A-weighting and try to convince people that it's somehow the right thing to be using. It's not!
Now, imagine for just one moment that A-weighting did not exist. The situation would be clear to everyone, and meter readings would show the total SPL from all frequencies within the audible range. Of course there would still be arguments, because there would still be disputes about the audibility of very low frequencies. However, it seems to me that these would be somewhat easier to manage, because the meter readings would always show the actual SPL and include all frequencies more-or-less equally.
Unfortunately, A-weighting does exist, but it would make things far easier if it could be made to go away. Most of the time, applying frequency weighting leaves those affected by noise wanting a proper solution, and enables those making low frequency noise to trivialise the complaints. "The meter says it's fine" we are told, when it's patently obvious that it's anything but fine [10].
When A-Weighting is used to specify the S/N ratio of audio equipment, the results are theoretically justifiable, because the characteristic noise is broadband and random. However, some manufacturers don't specify whether the measurement is weighted or not, and that can lead to the intending purchaser being duped into thinking that equipment 'A' is quieter than equipment 'B', while 'B' is actually quieter. Again, it would be so much better if A-Weighting was made to go away, so that all equipment could be compared equally.
Don't hold your breath!
Elliott Sound Products - Acoustic Centre
The acoustic centre of a loudspeaker is the point in space from which sound waves appear to originate. When adjacent loudspeakers (typically mid-bass and tweeter) are not aligned properly, there is often a notch at the crossover frequency. The sound-field is not projected forwards as is wanted, but is at an angle determined by the offset. The tweeter's output will be 'first' as it's generally closer to the listener. The mid-bass driver follows some time later (usually a few 10s of microseconds), so there's a lobe aimed at the floor instead of at your ears. It's this lobe that causes the frequency response dip, because the speaker's energy isn't directed where you want it (to your ears). It shows up as a dip in the frequency response when that's measured on the tweeter's axis (the most common measurement technique).
The issue with most driver combinations that people use is that their acoustic centres are different. Dedicated dome midrange drivers might manage a very small effective offset, but when a dome (or ribbon) tweeter is used in conjunction with a cone mid-bass driver, there's effectively a delay in series with the midrange driver. It's only a short delay (perhaps from 50 to 100μs), but it can cause response anomalies. These may not be audible, but a measurement system will show any issues.
This can be corrected in a number of ways, with one common approach being to include a phase-shift network (all-pass filter) in series with the tweeter to delay its output. This is easily done with an active crossover, but is more difficult when passive networks are used. Sometimes, designers will use asymmetrical crossovers (e.g. 12dB/ octave low pass and 18dB/ octave high pass), and while this can provide almost perfect response when applied correctly, it's not without challenges. Perfecting the design is not a simple process. A graphical example is shown in Fig. 4.3.
With the advent of DSP (digital signal processing) systems, it's usually possible to introduce a digital delay to align the drivers, and there are systems that will do everything for you, based on a series of measurements. These are outside the scope of this article, not just because I'm an 'analogue man', but also because adding DSP is a serious undertaking. There are 'cheap and cheerful' DSP systems (although they aren't cheap any more), but a great many of my crossover PCB sales are to people who have a cheap DSP and discovered the limitations thereof.
Physical displacement will also work. This can be achieved with a 'stepped' baffle (so the tweeter is moved back until the acoustic centres align), a sloping baffle (which means you are slightly off-axis for all drivers), or by using a waveguide for the tweeter, which can both move it backwards and improve efficiency. Waveguides come with their own set of problems of course, not the least of which is designing one so it doesn't impact on the tweeter's response.
This article examines a number of different techniques that you can use to locate the AC of a pair of drivers. It's likely (based on the test methods described below) that all will give a slightly different result, so it's up to the constructor to decide on a method that s/he's happy with. Each method has its merits and drawbacks, with some being somewhat irksome to set up (and I know this because I had to test every technique described). Much depends on your workshop facilities, and your level of determination.
The hard part is knowing where the acoustic centre of a driver is, and knowing how to find it. Ideally there should be no highly specialised hardware or software needed, but you will need a microphone (electret or a dynamic type). This is easy and cheap, because it doesn't need to be perfect, and off-the-shelf cheap electret mics will work just fine. If you have an electret capsule, you only need a 5V DC supply and a 10k resistor from the supply to the mic's positive (the case is ground/ common). No capacitor is needed because the scope can be set for AC coupling to remove the DC offset.
Any electret mic will be sufficiently flat across the crossover frequency band (usually between 1kHz and 4kHz). Even if it doesn't have flat response, the results will still be fine. This is because you are looking for the delay between the application of a pulse and the time taken for it to reach the microphone. Since you're not trying to measure frequency response, any electret will work. A dynamic mic can be used, but it will have an acoustic centre too. This will be a constant though, and it just adds a bit to the overall time-of-flight (ToF) of the test signal.
The first (and biggest) problem faced when determining the acoustic centre offset is where do you take the measurement? You can measure directly in front of each driver, but then there will be an offset when you listen to the speakers. This depends on how close you are as well, since in the near-field a small difference in listening position (vertical) will cause a difference in the path length from the driver to your ears.
There are countless methods suggested to determine the acoustic centres (AC) of drivers, all different, and most are likely to give slightly different results. Using a pulse (in my case, a single cycle of a 3kHz sinewave repeated at 1s intervals, or a transient created by discharging a capacitor into the speaker) looks as though it should give good results, and when performing measurements the results seem to be pretty accurate. You can also use an impulse generated by audio test software and base the measurement on that.
One suggestion from the late J Marshall Leach (sorry, I have no reference for this) was (apparently) to 'assume' the acoustic centres to be at the peak of a dome tweeter and the point where the dustcap meets the cone for a woofer. It's easy and convenient, and my tests indicate that this is likely to be fairly close. Is it close enough? That's for you to decide, based on your expectations.
I took a bunch of measurements, and everything makes a difference. A good result was obtained with the mic directly above the centreline of each driver, with the arrival time of the pulse measured using a scope. By triggering from the electrical signal, the time-of-flight (ToF) was easily measured. When the ToF for each driver was identical (same time delay, about 508μs) the drivers were time aligned. It turns out for those I tested that the acoustic centre offset is between 27mm and 38mm. That means that the mid-bass driver has to be (on average) 32mm closer to the listener than the tweeter - that's not a huge offset, but it poses a challenge.
Sound travels at roughly 0.343mm/μs, so a ToF difference of 100μs means that one signal had to travel an extra 34mm compared to the other. A 'conventional' flush-mounted tweeter will almost always present its signal first, so it has to be delayed so its AC is the same as that from the mid-bass (or midrange) driver.
Unfortunately, the AC of a driver is not a fixed quantity, and it can vary with frequency. Provided it remains fairly constant for one octave above and below the xover frequency, the results will be satisfactory. Only rigorous testing will provide all the information you need, tempered by reality (speakers are rarely even close to flat response devices) and your expectations. If you expect the response to be within ±0.5dB you will be disappointed. Even 'top-shelf' speakers will typically show an overall response that's no better than ±3dB (unsmoothed). Some are better, but not many (often the results are 'doctored' by applying excessive smoothing).
The lobe created by the tweeter signal arriving first is shown above. This only occurs at (or near) the crossover frequency, where both drivers are providing the same signal, but with different phase relationships due to physical displacement. Above and below this frequency range, each driver is independent. In this drawing (along with those that follow), I've assumed the listening position is on-axis for the tweeter.
A 'true' delay is difficult without a DSP, but a reasonable delay can be achieved using one or more phase shift networks (aka all-pass filters). These are designed to operate at a lower frequency than the nominal 90° frequency, and the delay is what's known as group delay. It's generally fairly easy to obtain a group delay of up to 40μs (13.7mm offset). This is covered in detail in the article Phase Correction - Myth or Magic.
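The delay available from an all-pass stage is easy to estimate: a single first-order all-pass has a low-frequency group delay of 2/ω₀, where ω₀ is its 90° frequency. A minimal sketch in Python (the 8kHz tuning frequency is an illustrative value, not from the text):

```python
import math

C = 343.0  # nominal speed of sound, m/s at 20°C

def allpass_lf_delay(f0_hz: float) -> float:
    """Low-frequency group delay (seconds) of a first-order all-pass
    whose 90° frequency is f0_hz: gd = 2 / (2*pi*f0)."""
    return 2.0 / (2.0 * math.pi * f0_hz)

# Illustrative tuning (hypothetical): f0 = 8kHz
gd = allpass_lf_delay(8000.0)
print(f"{gd * 1e6:.1f} us, {gd * C * 1000:.1f} mm")  # ~39.8us, ~13.6mm
```

This shows how a single stage gets you close to the 40μs (13.7mm) figure quoted above; larger offsets need cascaded stages.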
An ideal delay circuit will give flat group delay (±2μs or so) to at least one octave, but preferably two, above the crossover frequency. For small offsets this is easy (up to 50μs [17mm] or so). It becomes more difficult if you need a greater delay, as that requires additional all-pass filter stages.
Stepped baffles are often frowned upon because if not executed properly you can create a 'diffraction engine' that will create more problems than it solves. However, it's been done by many respected designers/ manufacturers, and it solves the problem very nicely. It should be obvious that you need to know the difference between the acoustic centres before starting work on the cabinet! I recently saw an article that examined the diffraction effects in some detail, but I've not been able to locate it again. The conclusion was that diffraction is not usually an issue, provided the step is modest (up to perhaps 25mm or so).
One way I tested was to wire the two drivers in parallel, with attenuation to the tweeter to get roughly equal levels. Each driver has a switch so it can be turned on or off. When both the woofer and tweeter are switched on, if the drivers are time aligned, the signals will arrive at the same time (and with the same polarity).
When both drivers are enabled, the level should increase if the two signals are in-phase. Adjust the relative spacing to get the maximum increase. At close range (200mm), if you go off-axis for the tweeter, the timing will change because the relative distances change. Ideally, the mic should be at least one metre away, but that may prove to be difficult due to the low level picked up by the mic.
However, this will give an incorrect result because the path from the mid-bass is longer (relatively) than it will be when you're listening (unless you listen at 250mm - somewhat unlikely). The reason is based on simple geometry.
If the mic is 200mm from the tweeter, the distance to the midrange is 228mm, but as you move out further, the error is reduced from 14% to 1.7% (at 600mm). At greater distances the error is reduced even more, but once it's below 1% there's unlikely to be much real improvement. The error will be greater if the drivers are farther apart. This is Pythagoras' theorem in action.
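The geometry is easy to verify. The sketch below assumes a 110mm centre-to-centre driver spacing (a hypothetical figure, chosen because it reproduces the distances quoted above):

```python
import math

def path_error(mic_dist_mm: float, spacing_mm: float) -> float:
    """Fractional extra path length to the off-axis driver when the mic
    is on-axis with the other driver, spacing_mm between the two centres."""
    diagonal = math.hypot(mic_dist_mm, spacing_mm)
    return (diagonal - mic_dist_mm) / mic_dist_mm

# Assumed 110mm centre spacing (illustrative only)
print(round(path_error(200, 110) * 100, 1))  # ~14.1 (%), diagonal ~228mm
print(round(path_error(600, 110) * 100, 1))  # ~1.7 (%)
```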
When two identical signals are summed passively, with a pair of resistors, the output is equal to the voltage from either source. When the phase relationship is unequal, we see an overall phase shift and a reduction of level. For example, two 1V sources at any given frequency will provide a 1V output. We'll assume 3kHz for simplicity.
Should the phase of one be shifted by 10°, the overall amplitude will fall by 0.033dB and the signal will be shifted by 4.8μs. Increase the phase shift to 20°, and the amplitude drops by 0.13dB, with the signal shifted by 9.6μs. A further shift to 30° causes the amplitude to fall by 0.31dB, with a displacement of 14.4μs. With 45° shift the level drops by 0.7dB (close enough).
Most readers should know that if the phase is shifted by 90° the amplitude falls by 3dB. What you likely don't realise is that the effective time advance or delay is 43.2μs. This in itself is not important, but it becomes relevant when you look at loudspeaker drivers. Determining the acoustic centre offset is as much about the phase behaviour of the driver (i.e. whether it's inductive, resistive or capacitive at the frequency of interest) as it is about physical position.
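These amplitude figures follow from simple phasor addition: the summed level of two equal sources falls as cos(θ/2). A quick check in Python, assuming the idealised case of equal 1V sources summed resistively:

```python
import math, cmath

def summed_level_db(phase_deg: float) -> float:
    """Level (dB, relative to one source) of two equal signals summed
    resistively when one is phase-shifted by phase_deg."""
    v = (1 + cmath.exp(1j * math.radians(phase_deg))) / 2
    return 20 * math.log10(abs(v))

for deg in (10, 20, 30, 45, 90):
    print(deg, round(summed_level_db(deg), 2))
# 10° -0.03dB, 20° -0.13dB, 30° -0.3dB, 45° -0.69dB, 90° -3.01dB
```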
We would normally hope that it's resistive, but in reality that's unlikely. I used an impedance bridge to determine the impedance and phase angle (at 3kHz) for the two drivers I tested (as shown below). Unfortunately, few hobbyists have access to such an instrument, so consider the data to be moderately interesting, but irrelevant for the most part. It can be determined by other means, but it's not as useful as it might seem. I also used Dayton Audio's DATS to measure the impedance and phase of the mid-bass and tweeter.
Driver     Impedance (3kHz)   Phase   Reactance
Tweeter    6 Ω                2.5°    Capacitive
Mid-bass   11.75 Ω            35°     Inductive
The tweeter is benign, with its nominal and measured impedance being identical. The mid-bass is a different matter, being inductive and with a phase angle of 35°. That means that the current lags the voltage, so the output (which depends on voicecoil current) will be delayed by about 32μs (equivalent to a mechanical displacement of about 11mm). This shifts the acoustic centre back from its assumed position. However, as discussed above, a 30° phase angle causes a very small error, so it's not likely to be a problem.
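The conversion from phase angle to time (and distance) is straightforward. A small sketch, using the 35° at 3kHz figure from the measurement above:

```python
C_MM_PER_US = 0.343  # speed of sound, mm per microsecond

def phase_to_delay_us(phase_deg: float, f_hz: float) -> float:
    """Time delay equivalent to a phase lag of phase_deg at f_hz."""
    return (phase_deg / 360.0) / f_hz * 1e6

d = phase_to_delay_us(35, 3000)
print(round(d, 1), round(d * C_MM_PER_US, 1))  # ~32.4us, ~11.1mm
```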
The physical offset will be far more significant than the phase displacement caused by the voicecoil's inductance. Effective capacitance is created by the driver's compliance, and the mechanical analogue of inductance is mass. The variable impedance and phase angle of the voicecoil current explains (at least in part) why the AC varies with frequency.
This measurement is a little different from those I got with the impedance analyser, but is still (potentially) useful. Resonance is shown clearly, and because it's quite flat that indicates that the tweeter is very well damped. It only increases to ~7.5Ω from the nominal impedance of 6Ω. Below resonance you just see the voicecoil's DC resistance. I added a cursor line at the 0° phase point for both graphs.
The mid-bass is inductive above 500Hz (actually semi-inductive). When looking at any impedance graph, if you see the impedance rising with frequency the load is inductive. Conversely, if the impedance falls with increasing frequency the load is capacitive. A resistive load remains constant with frequency. The mid-bass driver is resistive at two frequencies - resonance (80Hz) and between 300Hz and 400Hz.
On this basis, using the top plate of the magnet assembly looks like a fair compromise. At 32mm (vs. the measured difference of 29mm) that's an error that will have a negligible impact. Interestingly, during the tests I discovered that one mid-bass I tested was reverse-phase, so a positive input produced a negative initial output. This was not expected! The next drawing shows the general scheme for any speaker, and it's shown as a mid-bass unit. Tweeters use very similar construction, but without the cone.
It seems sensible to assume that the middle of the top plate/ polepiece is the source of the sound. After all, it's all down to the movement of the voicecoil, and that is centred in the gap between the top plate and centre polepiece. Of course, it takes time for any movement to propagate along the voicecoil former and activate the cone (the parts are not 'infinitely stiff'), but is this significant? Based on physical measurements and electro-acoustic measurements, the answer seems to be "maybe". Even getting an accurate physical measurement isn't always easy, because with tweeters in particular, the top plate is often embedded (at least partly) in the plastic 'basket' and it may be difficult to locate its centre-line. In most cases its approximate location can be found with a little guesswork.
In each case here, I've assumed flush mounting which is generally preferred to minimise diffraction. That's why I've taken the front of the driver surround as the 'reference plane'. If you surface mount the drivers (not recessed into the baffle), then the reference point is the rear of the mounting flange.
I've seen a lot of different schemes suggested over the years. Some of these are magnificently over-complicated, to the point where mere mortals will probably mutter a few choice phrases and move along. As the old saying goes though "for every complex question, there is an answer that is obvious, easy to understand, and wrong".¹ I like to stay away from these if I can, but sometimes an apparent over-simplification can still result in an answer that's 'good enough'. In the context of audio reproduction, there are so many influences that can affect what you hear that perfection is not possible. If you use a simplified technique to estimate the AC offset that happens to be a few microseconds in error, the acoustical deviation will (hopefully) be less than the normal response variations of the drivers used.
If you happen to be out by (say) 10μs (3.43mm) the level may be affected by up to 0.5dB at the crossover frequency. Not perfect, but room effects can have up to an order of magnitude more effect, and even the speaker drivers themselves are far less accurate overall. Of course, you can verify the results by measurement, and the results may surprise you. After all, the loudspeaker system is the weakest link in any audio chain - everything else is typically way ahead in terms of response flatness, distortion and transient response. Of course, you're reading this because you want to make your speakers as good as possible - never a bad thing.
¹ Adapted from H. L. Mencken
The numbers shown are all at 3kHz, and while they may change with frequency, it's not by a great deal. I've only shown the measurements for the mid-bass and tweeter at 3kHz, with the time taken between the peak of the electrical pulse and the peak of the acoustical pulse. I tested a high-quality mid-bass (ScanSpeak Revelator 120mm), and my electro-acoustical measurement gave an offset of about 28.5mm, and the distance from the reference point to the magnet's top plate measured 27mm. A 1.5mm offset is negligible and can safely be ignored. The tests described here were performed on a 'no-name' but quite passable 120mm driver.
I found that a mic at roughly 10mm from the mounting plane of the driver (I'll call this the reference point) works fairly well, but it's not overly critical - provided you keep the distance consistent for all tests. The mic must be on the driver's axis. I tested at 1kHz, 2kHz, 3kHz and 4kHz, ensuring that the signal arrived at the same time in each case. With a distance of 11mm (set by the spacer I used) between the reference point of the driver and the mic, the minimum possible time delay is about 32μs, which is easy to display on a scope. If the driver's acoustic centre is (say) 10mm behind the reference point, you'll measure a total time delay of 61.2μs (21mm total distance).
λ = C / f   (where λ is wavelength, C is velocity [343m/s] and f is frequency)
d = t × 0.343
t = d / 0.343

So (for example) ...

d = 20 × 0.343 = 6.86 (mm)
t = 6.86 / 0.343 = 20 (μs)
Sound will travel 0.343mm in 1μs, based on the nominal 343m/s speed of sound at 20°C. Of course it changes with temperature, but 343m/s remains a reasonable estimate. In a number of articles I've assumed 345m/s, but the difference is tiny. The time is in microseconds, so if you measure 20μs it's entered in the formula as '20'. Likewise, distance is in millimetres, so 6.86mm is entered as '6.86'. If you include the suffixes in the formula you'll get very silly answers.
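The formulas are easily wrapped as a pair of helper functions. This sketch reproduces the spacer example above: an 11mm mic spacing gives a minimum delay of about 32μs, and an AC 10mm behind the reference point (21mm total path) gives about 61.2μs:

```python
C_MM_PER_US = 0.343  # 343m/s expressed as mm per microsecond

def distance_mm(time_us: float) -> float:
    return time_us * C_MM_PER_US   # d = t x 0.343

def time_us(dist_mm: float) -> float:
    return dist_mm / C_MM_PER_US   # t = d / 0.343

print(round(time_us(11), 1))   # ~32.1us - minimum delay with 11mm spacer
print(round(time_us(21), 1))   # ~61.2us - AC 10mm behind the reference
```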
As with the mid-bass, there's a little ambiguity, and the measured tweeter time offset is ~53μs determined by the peak amplitude of the electrical and acoustic signals. Although the 11mm is included, the error is the same in both cases, so the offset remains unchanged. I didn't go to the trouble of subtracting the 11mm 'extra' delay, and you don't need to - it doesn't affect the relative offset.
With any electromagnetic system there is some ambiguity as to where the pulse really starts. I tried the zero-crossing, but found that the peak is (probably) more accurate. The following table shows the measured offsets using ToF (time of flight) with a pulse, the measurement from the mounting plane (front of faceplate/ mounting flange) and from the mounting plane to the top of the dome and dustcap attachment point (the tested mid-bass has an extra 3mm above the mounting plane). As is pretty obvious, the latter is not even close to the real offset.
I didn't test (or run any calculations) for this next point, but it might be worth looking at more closely. There is some inertia in the voicecoil/ cone assembly, and perhaps measuring the timing of the second peak (negative-going) may be more realistic. Music can never generate impulses as fast as the one I used, but with good motors I'd expect that most speakers will respond as hoped-for after the initial half-cycle. My personal view is that the technique described is probably correct, but without a dedicated test enclosure specifically for the drivers being tested this can't be proven either way.
Driver            Time of Flight   Distance   -11mm Spacer   Top Plates   Dome/Dustcap
Tweeter           53 μs            18 mm      7.01 mm        10.5 mm      0.8 mm
Mid-bass          165 μs           56.6 mm    45.6 mm        40 mm        28 mm
Relative Offset   112.4 μs         38.5 mm    38.6 mm        29.5 mm      27.2 mm
The average of the measurements shown is close enough to 32mm (we can ignore the fractional part), and the deviation is not excessive. I'd consider it to be well within acceptable limits, so you can use any of the techniques without fear of any major issues with overall response. Considering that most home-built (and many 'name brand') speakers will probably have made little or no adjustment for AC offset, I expect that an offset of around 30mm (roughly 87μs) will be very satisfactory. It doesn't matter if the delay is engineered by using a stepped baffle, digital delay, phase shift network or an asymmetrical crossover, the end result should be pretty flat response across the crossover region.
Should the difference between the acoustic centres be much greater, it will be harder to deal with. The wavelength at 3kHz is C/f, or 343m/s / 3kHz = 114mm. A half-wavelength is 57mm, so if there's a 57mm AC offset that would put the drivers out-of-phase at 3kHz. Reversing the phase of the tweeter will (in theory) re-establish time alignment, but it's very frequency dependent. This method is only recommended if you use a 24dB/octave crossover. The fast rolloff minimises errors caused by the relative timing.
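The wavelength arithmetic can be checked in a couple of lines:

```python
def wavelength_mm(f_hz: float, c_ms: float = 343.0) -> float:
    return c_ms / f_hz * 1000.0   # lambda = C / f

lam = wavelength_mm(3000)
print(round(lam), round(lam / 2))  # ~114mm, half-wavelength ~57mm
```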
It's (at least somewhat) noteworthy that if the delay caused by the mid-bass voicecoil inductance (about 32μs or 11mm) is subtracted from the measured ToF derived as shown, the offset is reduced to 27mm, very close to the distance between the magnet top plates. The phase angle was already measured at about 35° and is more-or-less benign. I'm fairly confident that alignment of the top plates will give a result that's satisfactory for any loudspeaker design. The room and cabinet edge diffraction will almost certainly have a far greater effect than a 35° phase misalignment.
The second method you can use doesn't require a tone-burst generator, but just uses a capacitor (around 33μF) charged to 12V via a 2.2k resistor. The cap is discharged into the voicecoil with a bounce-free switch (so-called mini tactile switches are as good as you'll get). The scope is triggered by the electrical pulse (the yellow trace in the screen captures below). The mic output is captured on the other channel.
The delay is 68.6μs, taken just as the signal rises (roughly 10% of the peak amplitude). As before, the mic was spaced from the mounting plane by 11mm. It's interesting that the tweeter and the mid-bass give shorter times with this technique. The offset is roughly half that indicated with a single-cycle tone burst. While it may be tempting to use the peak of the mic signal, this is unlikely to be accurate. However, it is fairly close to the ToF obtained with the tone burst.
If you were to use the peak as the ToF reference, you get about 120μs for the tweeter and 240μs for the mid-bass. The difference is 120μs (41mm), and is not far from the figure determined using the tone-burst method (112μs). The distance difference is negligible.
The mid-bass/ woofer's delay is 137μs with the same 11mm spacing from the mounting plane. The difference between this and the tweeter is close enough to 69μs, an equivalent distance of 24mm. This is quite different from the effective displacement determined using the tone burst. Which one is right? Unfortunately, the only way to be certain is to mount the drivers on a baffle and perform a frequency response measurement. My gut feeling is that the 'real' figure is probably halfway between the two. That works out to 16mm (a bit under 47μs).
Driver            Time of Flight   Distance
Tweeter           68.6 μs          18.01 mm
Mid-bass          137 μs           41.16 mm
Relative Offset   68.4 μs          23.5 mm
Interestingly, this is pretty close to the previous methods. On the basis of these tests, I'd be pretty happy to acoustically align the top plates of the two drivers and use that as the acoustic offset. Everything is a compromise, but aligning the top plates (either mechanically or acoustically) is likely going to give a fairly consistent result. After all, this is where the voicecoil is located, and that is the origin of the sound that's propagated by any loudspeaker. This is by far the easiest measurement to take (it requires nothing more than a ruler). However, there is a delay due to voicecoil inductance, but provided the phase shift is less than ~45° I wouldn't worry about it too much.
The most accurate measurement may end up being to use both drivers in parallel, with one driver phase-reversed. You then look for the deepest notch with a sinewave at the intended crossover frequency. When the acoustical signal is perfectly cancelled, the phase angle is 180°, and the test is extremely sensitive to the smallest phase change. Unlike testing for flat response (where ±30° only makes a tiny difference), a notch is sensitive to only a few degrees. With perfect inverse-phase signals (of equal amplitude), you get complete subtraction and the notch is infinitely deep. A phase change of just 1° will reduce that to -41dB, and 10° reduces it to only -21dB. Don't expect to get anything like that with an acoustic test, but even 20° (which has negligible effect when signals are added) should be very obvious when they are subtracted.
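The sensitivity of the null is easy to quantify: with equal amplitudes and one source inverted, the residual after resistive summing is sin(θ/2) of one source's level. A short Python check of the idealised case:

```python
import math, cmath

def null_depth_db(phase_error_deg: float) -> float:
    """Residual level (dB, relative to one source) when two equal signals
    are summed resistively with one inverted, given a phase error."""
    v = (1 - cmath.exp(1j * math.radians(phase_error_deg))) / 2
    return 20 * math.log10(abs(v))

print(round(null_depth_db(1)))    # ~-41dB
print(round(null_depth_db(10)))   # ~-21dB
```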
I tested this, but only in the most basic setup. Even so, with the drivers wired reverse-phase an almost complete notch was seen fairly easily. This corresponded fairly closely with alignment of the top plates of the two drivers, so I'm reasonably confident that this remains a very good starting point, and will likely work with most loudspeakers. However, the requirement for a well distanced microphone is clear, and the requirement for ear-muffs equally so! If you try this technique, you can pretty much guarantee that anyone nearby will be seriously irritated if you keep it up for long. A 3kHz 'whistle' maintained for more than a few seconds is really hard to tolerate.
This is one area where a directional microphone is preferred, as it will help to minimise the effects of room reflections. Frequency response is immaterial, since the test will be performed at the crossover frequency and optionally at one octave above and below. If you have a good listening area, you can just use your ears! All you're after is a null, which is easily detected by ear. The disadvantage is that you need to be as far from the drivers as is sensible - at least a couple of metres. This makes it hard to tell when you have a good notch, as you can't adjust and listen at the same time. A remote method for moving the tweeter would be necessary, which will be hard to arrange. Note that you also have two ears, and they will hear the notch differently! One ear should be 'disabled' using one ear-muff or an ear-plug (a proper one).
Fairly obviously, an anechoic test area is ideal, but that's not something that most mere mortals can get access to. The test can be done outside (well away from walls, fences, etc.) but the noise will do you no favours with the neighbours. You only need enough level so the mic gives you a usable level at a distance of 1-2 metres, but it will still be annoying. Perhaps fortunately, people have difficulty locating a 3kHz tone, so you might be able to blame someone else.
Note: This method can be made to work very well, but you can easily be tricked by reflections. Just moving yourself around (assuming the drivers are located on a padded chair for example) will create dips and peaks. The wavelength at 3kHz is only 114mm, and what you think is an insignificant movement changes standing waves dramatically. I suspect that in most cases you'll have great difficulty getting a reliable null unless the test area is anechoic (or close to it). The reality of this is easily demonstrated, by simply moving your head while a 3kHz tone is played. You'll find positions where it's loud, and others where it's almost silent in one ear or the other.
If the test is done outdoors, the drivers should be at ground level on a non-reflective (acoustically) surface, facing straight up. The mic needs to be directly above the drivers, at a minimum distance of 1 metre. I suggest that no crossover (rudimentary or otherwise) be used, as that will add phase shift that will create serious measurement errors. Start with the mounting planes of both drivers in line, and move the tweeter back until you see a (possibly large) drop in level. It's a very sensitive measurement, and you're aiming for the best null. If it turns out that it's wildly different from the difference between the top plates and the mounting plane, you've made an error - the null should be within a few millimetres of the top plates being in alignment.
As mentioned in the intro, an asymmetrical crossover can sometimes be employed to provide the required delay. A 2.4kHz, 4th order high-pass filter has a low-frequency group delay of 250μs, vs. 155μs for the 1.52kHz, 18dB 3rd order low-pass. There's an effective net delay of 130μs applied to the tweeter (but note that it varies a little with frequency). If done properly, this technique provides a reasonable approximation of the delay needed to time-align the tweeter's output.
There's only a limited amount of asymmetry that can be applied before the design becomes overly complex, and much experimentation and testing is needed to get a usable result. Any asymmetrical crossover will provide unequal group delay (e.g. 18dB and 12dB/ octave), noting that the greatest delay is provided by the high-order filter. This must be used for the tweeter, with the lower order filter (e.g. 12dB or 18dB/ octave) used for the mid-bass. It should be possible to get the summed response to be flat within ±1dB fairly easily.
Less group delay becomes available if the filters are changed to 3rd order and 2nd order (18dB/ octave and 12dB/ octave). A fairly typical group delay with this arrangement is 70μs, allowing for an AC offset of up to 25mm. You can even use a 24dB high-pass with a 12dB low-pass to get more delay, but it becomes less consistent, and keeping ripple below ±1dB is possible but not at all intuitive. The slow rolloff of the mid-bass driver may cause issues if it has cone breakup effects above the crossover frequency.
This isn't a solution that can be used by anyone who is maths-averse or can't use a simulator at a fairly advanced level. It also requires that you know the acoustic offset, so everything described above is still relevant. Whether you can arrive at a suitable design depends on your skills, measurement accuracy and the drivers you choose. In some cases you can even use this technique with a passive xover design, but this raises even more challenges if the driver impedance is not flat across the xover frequency (and at least 1 octave either side of the xover point, preferably more).
This is a topic that is discussed regularly in forum posts and elsewhere, but there seem to be few decent resources you can draw upon that describe how it can be done. The techniques shown here will give a reasonable approximation to locate the acoustic centre of tweeters and mid-bass drivers. I'd be interested to hear from anyone who has their own favourite technique, and can provide details. Because we are dealing with a moving target, the more information that people can refer to the better.
Of course, the ideal would be for loudspeaker driver manufacturers to supply this information along with everything else in their datasheets. Unfortunately, I wouldn't hold my breath waiting. When the drivers are characterised for the datasheet it would be so easy to add this small piece of info to make everyone's life just that little bit easier. Alas, it hasn't happened yet - I have never seen the acoustic centre figure specified in any way, shape or form.
Meanwhile, many modern mid-bass drivers are quite shallow, and some tweeters include a (small) waveguide. A shallow mid-bass and a 'deep' tweeter will minimise the offset, and you can even go to the trouble of fabricating a waveguide for the tweeter that moves it back far enough to achieve time alignment. There is an article on the ESP site covering waveguides, but it's fairly heavy-going (see Practical DIY Waveguides, Part 1, along with Parts 2 & 3, for details).
Waveguides can introduce issues with response, so care (and experimentation) is needed before you commit to this approach. As noted in the article Phase Correction - Myth or Magic, you can use an all-pass filter to provide the required delay for the tweeter, which some people will prefer to a stepped baffle. If you're using a DSP crossover, the ability to add a delay will be in the software, but unless the DSP offers very high performance this may not be an option for hi-fi.
None of this is necessary for the woofer to mid-bass/ midrange driver. The crossover frequency will probably be somewhere around 300Hz, at which frequency the wavelength is over one metre, and a timing error of a few 10s of microseconds will have no audible or measurable effect. Some designs may provide correction for the bass-to-mid crossover, but room effects will be so completely dominant that no improvement is likely. Even an AC offset of 200μs (68mm) will have less than 1.5dB effect. Compare that against the measured response of any driver and you'll see that it's insignificant. This is doubly true when in-room response is considered.
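The claim is easy to verify with simple phasor arithmetic. This sketch (idealised equal-amplitude resistive summing) shows the level error for a 200μs offset at a 300Hz crossover:

```python
import math

def level_error_db(offset_us: float, f_hz: float) -> float:
    """Summed-level drop caused by a time offset at the crossover,
    assuming idealised equal-amplitude resistive summing."""
    phase = 2 * math.pi * f_hz * offset_us * 1e-6
    return 20 * math.log10(abs(math.cos(phase / 2)))

print(round(level_error_db(200, 300), 2))  # ~-0.16dB - well under 1.5dB
```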
Note that I make no claims for the accuracy of any of the techniques described. This article is the result of a number of experiments to locate the acoustic centre of drivers, but all give slightly different results. Other than a specialised test baffle, an anechoic test space and very careful measurements, any method you use will be an estimation. I might (but probably won't) put an adjustable test baffle together at some point, but that would require the anechoic test environment, something I don't have and never will. Frequency response measurements need to be very carefully performed, with the mic at least 1 metre from the drivers.
The most accurate measurement technique is to wire one driver reverse-phase, and adjust for a null (a notch) at the crossover frequency. This is a lot harder than it sounds, because reflections from nearby surfaces (including you!) can easily create false nulls, and/ or obscure the 'real' one. Using close-mic techniques can create serious errors due to the geometrical errors shown in Fig. 2.4. As noted, a directional mic is probably better if you use the null technique, as it will reduce at least some of the reflections. If the measured offset is wildly different from the alignment of the top plates then you've either made an error, or reflections/ standing waves are creating a false null.
Elliott Sound Products - Acoustic Feedback & Frequency Shifting
In this article, the term PA refers to speech amplification systems employing microphones, amplifiers and loudspeakers used in auditoriums and churches to address an audience or congregation. The industry name for a high powered system used for musical performances is 'Sound Reinforcement' or 'SR'. While much of the information supplied is applicable to both, the emphasis here is about voice systems.
Acoustic feedback (aka the 'Larsen Effect') is especially troublesome when non-professional speakers use the system, as they tend to have poor microphone technique, and will often speak more quietly if they hear their own voice through the PA system's loudspeakers. If the person controlling the sound then increases the gain, a vicious cycle is started, and feedback is almost inevitable. Since feedback is usually preceded by 'ringing' (an unrelated tone that starts and stops with sound excitation) which usually dies away before it becomes a full 'howl', you do get some warning, but it's usually too late.
This article was written by Phil Allison, with additional material by Rod Elliott (indicated by dark grey text).
Please note that in this article, the person speaking is referred to as the 'speaker' or 'talker', while the loudspeaker is referred to as the 'loudspeaker'. This is done to ensure there is no ambiguity between the shortened version of loudspeaker and the person speaking into the microphone.
Anyone who has been part of an audience while a PA system was being used has likely heard that piercing squeal called 'acoustic feedback'. The problem has been with us since the very first systems were installed and has not gone away. Acoustic feedback is a natural phenomenon, inherent in situations where the people using the PA system occupy the same room with the audience.
Electronics has made it possible to record sound for later reproduction and also to transmit sound from one place to another. In both cases it is possible to reproduce the original sound at deafening levels if you have enough amplification equipment. However, making a sound become louder in the same location where it originates is very different, as it attempts to break the laws of nature.
"But that is what PA systems do", I hear you say. The voices of people speaking through a PA system are much louder than their unamplified voices, and singers in rock bands can achieve deafening levels from the kind of SR systems usually employed. So what is the real story?
Try this simple test - have someone speak in a loud voice directly into your ear, from a distance of less than an inch. I bet you will find this very unpleasant and quickly pull your head away, because the peak SPL involved is around 110 to 120dB. A good singer's voice might be 15dB louder again. The test simulates what the diaphragm of a microphone is often subjected to during PA or SR system use.
The combination of using a close microphone and speaking loudly makes it possible to generate quite high sound levels some distance away in the body of a room - but not nearly as high as at the microphone itself.
In a voice PA system, the sound heard by most of the audience is similar to normal speech level, which peaks at about 85dB SPL (see Note 1). Close to the loudspeakers it will be far more, but back at the microphone position it is normally no more than this. If the speaker at the microphone can deliver 95dB SPL peaks with their voice, it will be well above the sound level arriving back and should render the PA system free of feedback problems.
The unfortunate fact is that most people, when speaking into a PA system, lower their voice and back off from the microphone as soon as they hear themselves being amplified. Turning up the gain for such users is no solution, because the same person then speaks even more quietly and/ or backs off further. If the system begins to feed back, the person stops talking altogether. Experienced and professional users of PA systems have long known they must not let themselves be distracted by hearing their own voice and do the opposite in order to avoid acoustic feedback.
Note 1: Many sources quote normal speech level as being 65 to 70dB SPL, but invariably fail to describe how the measurement was done. The figure of 85dB is the peak, measured at 1 metre, and based on actual tests with a 'Rode SPL-1' sound meter connected to a scope displaying instantaneous peak levels.
The discrepancy is easily accounted for by the fact that speech has a 14dB peak to average ratio, and SPL meters are commonly average responding and set to give an A-Weighted value.
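The reconciliation above amounts to simple dB arithmetic, sketched below (the 14dB peak-to-average figure is the one quoted in the note; the A-weighting offset is ignored for simplicity):

```python
# Peak vs average speech level: the article's 85dB SPL is a *peak*
# reading at 1 metre, while most published figures are averages.
# Speech has roughly a 14dB peak-to-average ratio, so both sets of
# numbers describe the same voice.

PEAK_TO_AVERAGE_DB = 14  # typical for speech

def average_from_peak(peak_db: float) -> float:
    """Estimate average SPL from a peak SPL reading."""
    return peak_db - PEAK_TO_AVERAGE_DB

print(average_from_peak(85))  # 71 -- close to the oft-quoted 65 to 70dB
```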
Although it's not common to describe it this way, you can think of a PA system that's experiencing feedback as an oscillator, much like the many oscillators that we use in electronics. While a 'true' oscillator has a frequency determining network to fix a specific frequency, a PA system can have many different frequencies that are often just below the point of instability. In this context, the 'system' consists of the microphone, preamp and power amps, the loudspeaker(s) and the room. They cannot be separated, because they all play a part in the overall response.
This can be seen as a 'closed loop' feedback system, with the acoustic path completing the loop. The frequency at which the system oscillates is determined by amplitude and phase. At frequencies where the feedback is out of phase with the microphone's output, the system is stable, but when the signals are in phase it will be prone to oscillation if the overall system gain is high enough. In all typical rooms, the phase is random - it varies with frequency and is highly dependent on the reverberation and standing wave characteristics of the room. The operator usually has control of only one thing - gain. Equalisers can be used (specifically notch filters to reduce the gain at feedback frequencies), but this is rarely useful if the talker is moving around with a hand held mic, because phase and overall frequency response depend on the relative positions of the mic and loudspeaker(s).
The requirement for oscillation is simply that there is sufficient positive feedback to make the system unstable. In a PA system, positive feedback can occur at any frequency, determined by the frequency response of the loudspeaker and microphone, the time delay between the two, the nature of the room (standing waves, reflective surfaces etc.) and the system gain. When the loop gain (the total gain of the entire system including the room) exceeds unity, feedback will occur. Imagine a microphone, amplifier and loudspeaker as shown below. Only a few reflections have been included, but in most rooms the effects are chaotic, with potentially hundreds of different paths between the loudspeaker and microphone. Most will have a level that's well below the feedback threshold, but there will be one or more that can provide enough level at the mic diaphragm to cause feedback.
Figure 1 - PA System Feedback Conditions
With the gain structure shown in the drawing, the SPL from the loudspeaker at the microphone diaphragm will exceed 74dB at many frequencies, the loop gain of the system is above unity, and feedback is assured. The only condition is that the loudspeaker can provide a level at the mic position that ensures that the overall gain is more than 1. The frequency is indeterminate - it depends on too many different factors. It should be apparent from the above that the reference SPL at the microphone is not some fixed value. The signal from the loudspeaker at the mic position will always be greater than the speech level because the system has excessive gain.
Many community halls and other venues use ceiling loudspeakers, and as often as not some will be positioned almost directly above where most talkers will stand. This means that the signal from the loudspeaker has a direct path to the microphone, which when added to the multiple indirect paths ensures that feedback is not only likely, it's almost a certainty. System gain will always be very limited before feedback happens.
We can look at an example that may help you to understand the system's gain structure. The required loudspeaker output is 98dB SPL at 1 metre, based on a loudspeaker that's 94dB/ 1W/ 1m at a power of 2.65W ...
                           System #1 - Unstable                          System #2 - Stable ... Maybe
    Mic: 2mV/ Pa           200µV at 74dB speech level (1 metre) (-74dBV)  1.4mV at 90dB speech level (-57dBV)
    Total Electrical Gain  23,000 (87dB)                                  3,300 (70dB)
    Speaker SPL            98dB @ 1m, 92dB @ 2m, 86dB @ 4m                98dB @ 1m, 92dB @ 2m, 86dB @ 4m
Note that all SPL and voltage levels are average. System #1 will feed back! Even if the loudspeaker's signal path is 4 metres to get back to the microphone, the returned SPL at the mic will be 86dB. This is louder than the speech level allowed for by the amount of gain applied, and feedback is guaranteed. The gain must be reduced, and the talker will have to get a lot closer to the mic, and/ or speak louder to get the same SPL from the loudspeaker(s). Note that this doesn't account for the delayed reflected sound paths that send additional loudspeaker energy back to the microphone. This effect is shown in the next section.
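The gain-structure arithmetic above can be sketched with the inverse-square law alone (free-field spreading, -6dB per doubling of distance; as noted, reverberation usually makes the real returned level higher than this estimate):

```python
# Returned SPL at the mic from a loudspeaker rated at a given 1 metre
# level, assuming only inverse-square (free-field) spreading.
import math

def spl_at_distance(spl_1m: float, metres: float) -> float:
    """SPL at 'metres' from a source whose level is spl_1m at 1 metre."""
    return spl_1m - 20 * math.log10(metres)

speaker_spl_1m = 98.0  # dB SPL, from the worked example
speech_at_mic = 74.0   # dB SPL, talker at 1 metre (System #1)

returned = spl_at_distance(speaker_spl_1m, 4)  # 4 m acoustic path back
print(round(returned))           # 86 dB at the mic
print(returned > speech_at_mic)  # True: louder than the talker, so feedback
```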
If the talker's mouth is around 25mm from the diaphragm, the level will be closer to 90dB SPL, and the mic's output will then be around 1.4mV. The gain can now be reduced to a total of 3,300, meaning that the mic preamp gain will be a little over 14 instead of 100 (assuming that the other gains are fixed).
With the mic at 4 metres from the loudspeaker, the level at the mic diaphragm from the loudspeaker will still be 86dB. This is now lower than the speech level, so the system should be stable. However, this relies on the room being well damped so that at 4 metres from the loudspeaker, the level really will be 86dB SPL. In most cases it will be more! A single narrow band peak in the system's overall response (microphone, loudspeaker and room) is all that's needed for feedback to start.
The aim of any operator is to ensure that the loudspeaker's SPL at the microphone will always be less than that which will create feedback. This means that sound picked up by the mic from the loudspeaker is at a lower level than that produced by the talker. This is not always possible, and feedback will occur. In many cases, the only solution is to lower your expectations regarding the SPL in the audience area. A reduced SPL simply means that less gain is needed.
A room's acoustic properties have a major influence on the amount of sound reaching the microphone position from the loudspeakers. In the study of acoustics, locations in a room where the direct and reverberant sound levels from a source are the same are said to be at the 'critical distance' [ 1 ].
In nearly all rooms without significant acoustic treatment, the critical distance is only a metre or two away from the loudspeakers, and beyond this distance the famous inverse square law no longer applies. This is a benefit, as it makes it possible to fill a room with sound from a modest size PA system, but the drawback is in how it aggravates the problem of acoustic feedback.
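As an illustration only, the critical distance can be estimated with the textbook approximation Dc ≈ 0.141 × √(Q × R), where Q is the source's directivity factor and R is the room constant in square metres. The formula and the sample values below are common acoustics-text assumptions, not figures from this article:

```python
# Estimating 'critical distance' -- where direct and reverberant levels
# are equal. Dc = 0.141 * sqrt(Q * R) is a standard approximation, with
# Q the source directivity factor and R the room constant (m^2).
import math

def critical_distance(directivity_q: float, room_constant_m2: float) -> float:
    """Approximate critical distance in metres."""
    return 0.141 * math.sqrt(directivity_q * room_constant_m2)

# An untreated hall might have a room constant of only ~50 m^2:
print(round(critical_distance(1.0, 50.0), 2))  # ~1.0 m for an omni source
print(round(critical_distance(5.0, 50.0), 2))  # a directional box helps a little
```

With an untreated room's small room constant, the result lands in the "only a metre or two" range described above.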
Another problem is how sound waves in a reverberant field arrive back at the microphone position from any and all directions, having bounced off the walls, floor or ceiling first - largely negating any benefit from using directional loudspeakers and microphones to minimise the feedback issue. In such a room, moving the microphone or loudspeaker positions has little effect on the gain setting that results in feedback.
PA systems sound far better in rooms with minimal reverberation. Purpose designed auditoriums (like cinemas) minimise sound reflections by the use of large amounts of absorbent materials and by avoiding having parallel walls and parallel floors and ceilings. The average public hall and most churches have the exact opposite, hence exhibit massive reverberation and as a result are very hostile environments for a PA system.
Figure 2 - ≈10Hz Spaced Frequency Peaks Caused By Standing Waves [ 6 ]
The loudspeaker signal picked up by the microphone will always be delayed. The delay is determined by room dimensions (as are standing waves), and may vary between perhaps 5ms (a distance of 1.7 metres), up to 100ms (34.5 metres) or more. In any given room, you will usually have (something close to) both examples, as well as many other intermediate delay times, as determined by the dimensions of the room itself. The above graph shows the measured response in a 'typical' room. The response is primarily due to standing waves.
Shorter delay times increase the frequency spacing, but also mean that the feedback builds up faster. The way sound behaves in a room is very complex, and while it would be nice to be able to explain it all in a few simple charts or graphs, it's impossible to do so. To make matters worse, every room and loudspeaker system is different, and moving the microphone or loudspeaker(s) even a small distance can change everything - usually radically. However, it rarely provides an improvement.
Directional microphones like cardioid and super-cardioid types help greatly, but not for the obvious reason. While both types discriminate against sound waves arriving from the rear of the microphone, this has no benefit unless the rear is carefully aimed at the loudspeaker. Where there is more than one loudspeaker in the room, this becomes impossible, and is also impossible when the microphone is being hand held. See polar response graphs ...
Figure 3 - Shure SM58 Polar Response [Original]
The way a directional microphone actually helps depends on something called the 'proximity effect' - the name given to an increase in mid and low frequency sensitivity when the sound source is close to the microphone. At distances under 25mm (1 inch), the increase can be 20dB at low frequencies and about 10dB in the middle of the voice range. An omni-directional microphone has no such effect and provides no benefit. See response graph ...
Figure 4 - Directional Microphone Proximity Effect [Original]
The increased sensitivity does not exacerbate feedback as it only applies to close sound sources, so does not involve the PA system's loudspeakers. If having extra low frequency content in the speaker's voice is a problem, turning down the bass tone control on the mixer channel in use will compensate. Doing so automatically increases the feedback margin for low and mid frequencies.
Many speakers dislike fixed microphone locations, preferring to move about and also have their hands free. By far the best solution for them is to use a head worn, miniature, cardioid microphone. This has every possible benefit - the microphone is always in the same spot, very close to the speaker's mouth (but just out of breath and pop noise range) and goes wherever they go.
The electret microphone capsules used have very high sound quality, better than typical dynamic microphones used in most voice PA systems - see an example [ here ].
The microphone's signal is normally transmitted via UHF radio link to the PA system, avoiding problems with trailing cables. A mute button on the transmitter allows the speaker to have a silent conversation or cough if need be.
Sometimes a PA system must be used in a highly reverberant room, the people who will use it are not experienced professionals, head worn microphones simply cannot be employed, the allowable loudspeaker and microphone positions are not ideal, and still the system has to perform well and be free of feedback without the benefit of an expert operator. An impossible task?
There is an electronic device that can come to the rescue here, one that has been around for over fifty years but has lately fallen into disuse. Known by various names, the device will do the following ...
That device is an 'Audio Frequency Shifter'. What it does is unique and in most respects superior to other methods of improving the feedback threshold of a PA system, plus can be used in addition to graphic equaliser or adjustable notch filter devices.
The technique is called frequency shifting and the device employed is also known as a 'Howl Round Stabiliser'. Such units were developed and used by the BBC around 1960, based on experiments by M. R. Schroeder and employed many valves (vacuum tubes) and radio frequency techniques [ 3 ].
By the mid 1970s, advances in analogue microchips called 'analogue multipliers' made similar or better performing units far cheaper to build, plus kept all signal processing down at audio frequencies.
What a frequency shifter does is move the audio band upwards by adding a few Hz to every incoming frequency, removing the possibility of a reverberating tone circulating in the room being directly reinforced by the same tone also coming from the loudspeakers. This strikes directly at the root cause of acoustic feedback in reverberant spaces.
With a 5Hz shift, an input signal of 500Hz changes to 505Hz while an input signal of 1000Hz changes to 1005Hz and so on. Applied to a speaking voice or recorded music, it is very difficult to hear that there has been any change.
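The shift can be demonstrated digitally with single-sideband modulation via the Hilbert transform. This is a sketch of the principle, not the analogue circuit described in this article (it assumes NumPy and SciPy are available):

```python
# A digital sketch of the frequency-shifting principle: single-sideband
# modulation. The analytic signal (via the Hilbert transform) is
# multiplied by a complex 5Hz tone, which adds 5Hz to every component
# (a pitch shifter, by contrast, multiplies frequencies by a ratio).
import numpy as np
from scipy.signal import hilbert

def frequency_shift(x: np.ndarray, shift_hz: float, fs: float) -> np.ndarray:
    """Shift every frequency component of x upward by shift_hz."""
    t = np.arange(len(x)) / fs
    analytic = hilbert(x)  # x + j * Hilbert(x)
    return np.real(analytic * np.exp(2j * np.pi * shift_hz * t))

fs = 8000.0
t = np.arange(int(fs)) / fs         # one second of samples
tone = np.sin(2 * np.pi * 500 * t)  # 500Hz in ...
shifted = frequency_shift(tone, 5.0, fs)
peak_bin = int(np.argmax(np.abs(np.fft.rfft(shifted))))
print(peak_bin)                     # ... 505Hz out (1Hz FFT bins)
```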
Any room with smooth, parallel surfaces will support 'standing waves' - single frequency sounds with wavelengths mathematically related to the dimensions of the room. The frequencies involved typically start at around 10Hz and then every multiple of that number up to the limit of the audio range (see Figure 2 for an example). If you carry out a very slow frequency sweep of the room using a loudspeaker and a sound level meter spaced well apart, the result is a pattern of intensity peaks at about 10Hz intervals. These peaks and corresponding dips in-between are centred about the average level by around plus and minus 10dB.
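The connection between mode spacing and room size follows from the standard axial-mode relationship: modes along one dimension of length L fall at multiples of c / (2 × L). A quick check with assumed values (c = 343m/s):

```python
# Axial standing waves along one room dimension occur at multiples of
# c / (2 * L). Working backwards from a ~10Hz spacing (illustrative):
C_SOUND = 343.0  # speed of sound, m/s

def mode_spacing(length_m: float) -> float:
    """Frequency spacing (Hz) of axial modes along one dimension."""
    return C_SOUND / (2 * length_m)

def length_for_spacing(spacing_hz: float) -> float:
    """Room dimension (m) implied by a given axial mode spacing."""
    return C_SOUND / (2 * spacing_hz)

print(round(length_for_spacing(10.0), 2))  # 17.15 m -- a large hall
print(round(mode_spacing(17.15), 1))       # 10.0 Hz
```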
When a PA system in such a room suffers acoustic feedback, the howl frequency will coincide with one of the peaks, typically the strongest one - which explains why the frequency is very steady and repeatable. However, most people will have noticed that the frequency often changes when the microphone is moved, and sometimes the distance can be fairly small (around 300mm or so may be enough).
The PA system is then supplying energy at one of these standing wave frequencies, sustaining the oscillation and quickly raising the sound level to the full available output of the system. When the gain is reduced, the howling soon stops but the sound quality of speech may still be affected by ringing at the same frequency.
Adding a frequency shift of about 5Hz into the amplification loop defeats oscillation at any standing wave frequency, because the sound leaving the speakers is not the same frequency as that picked up by the microphone, and so cannot reinforce the oscillation.
Since frequency shifters operate with unity gain and have an essentially flat frequency response, installation at audio line level is simple. Once installed in-line with the output signal from the mixer to its associated amplifier, a user puts it into bypass mode and then increases the microphone gain control until the first signs of feedback are heard. In this condition, any PA system is quite unusable and attempts to approach the microphone and speak will be greeted with severe ringing and possibly loud howling noises.
When the device is switched out of bypass, the same PA system instantly becomes stable and usable, any tendency to ringing or howling having gone. The microphone gain can be increased by several dB and the PA system is still fine. On first encounter, most people find this quite magical.
If the gain is increased too far, instead of loud howling a mild warbling tone is generated, accompanying speech. Simply backing off the gain by about 2dB makes this effect disappear. The only control a frequency shifter normally has is a switch to vary the number of Hz the audio band is shifted by - in most rooms a shift of 4 to 5Hz is optimum. A smaller shift is better for large auditoriums with reverberation times of more than 2 seconds, where the standing wave response peaks are more closely spaced.
Of great benefit is that when a head worn or hand held microphone gets too close to one of the loudspeakers, the user is immediately alerted by hearing the warbling sound and only needs to move away. The PA system does not howl, there is no risk of damage to loudspeakers or to an audience's equanimity.
Please Note: A frequency shifter for this purpose uses analogue circuit techniques and must not be confused with the now commonplace 'digital pitch shifter'. A pitch shifter changes incoming frequencies by a fixed ratio, such as an octave or several semitones. Using the latter will not provide any real benefit in relation to acoustic feedback.
Frequency shifters used to be available, although they were never actually common despite the significant advantages they offer. Today, it's almost impossible to get one. They have always been a niche product, seemingly known to relatively few people, and therefore never became 'main stream'. Today we have all-singing, all-dancing automatic feedback 'eliminators' that may or may not work, and will always colour the speech quality because they rely on narrow notch filters controlled by clever electronics. This seems to be the preferred approach, because a DSP (digital signal processor) can - so we are told - do everything we'll ever need.
There are also frequency shifters that are used for effects during performances or recording, often as a 'plug-in' for digital audio workstations. These are usually not suitable, because they are designed for comparatively large frequency shifts and to intentionally remove harmonic relationships within the sound being processed. While it might be possible to use one to provide a 4-5Hz shift, it would be a very expensive addition to most PA systems used in community halls and/ or churches.
A fully proven design for a 5Hz frequency shifter is described in Project 204. A printed circuit board will be made available for the author's version if there is sufficient interest (the project includes an updated version of the Hartley-Jones original, published in Wireless World in 1973). The design features very low noise and THD, has balanced audio input and output and requires only a small transformer and a ±15V power supply (such as Project 05-Mini). Only commonly available components are specified and it should be possible to fully assemble a system for under US$150.
There are many references to frequency shifters, but very few available schematics, and none that use parts that are available now (as opposed to 40-odd years ago). There has been a lot of academic work (as demonstrated by the references above), but for reasons that are rather puzzling, the devices themselves have all but vanished from sale. The last known commercial version is made in the UK and although it's still shown as a current product, it may or may not be available in reality. The website is obscure, rarely shows up in searches, and seems to be poorly organised.
Elliott Sound Products - Active Filters
There is a wide range of filter circuits, each with its own set of advantages and disadvantages. All filters introduce phase shift, and (almost all) filters change the frequency response. There is one class of filter called 'all-pass' that does not affect the response, only phase. While at first look this might be thought rather pointless, like all circuits that have been developed over the years it often comes in very handy.
Filters also affect the transient response of the signal passing through, and extreme filters (high order types or filters with a high Q) can even cause ringing (a damped oscillation) at the filter's cutoff frequency. In some cases, this doesn't represent a problem if the ringing is outside the audio band, but can be an issue for filters used in crossover networks (for example).
If you are not already familiar with the concept of filters, it might be better to read the article Designing With Opamps - Part 2, as this gives a bit more background information but a lot less detail than shown here. There is some duplication - the original article was written some time ago, and it was considered worthwhile to include some of the basic info in both articles.
Filters are used at the frequencies where they are needed, so the filters described here will need to be recalculated to suit your application. I have normalised the frequency setting components to 10k for resistors, and 10nF for capacitors. This provides a -3dB frequency of 1.59kHz in most cases. Increasing capacitance or resistance reduces the cutoff frequency and vice versa.
Capacitors used in filter circuits should be polyester, Mylar, polypropylene, polystyrene or similar. NP0 (aka C0G) ceramics can be used for low values. Choose the capacitor dielectric depending on the expected use for the filter. Never use multilayer ceramic caps for filters, because they will introduce distortion and are usually highly voltage and temperature dependent. Likewise, if at all possible avoid electrolytic capacitors - including bipolar and especially tantalum types.
Note Carefully: Nearly all filter circuits shown expect to be fed from a low impedance source, which in some cases must be earth (ground) referenced. Opamp power connections are not shown, nor are supply bypass capacitors or pin numbers. All circuits are functional as shown. Also not shown are output 'stopper' resistors from opamp outputs. These must be included for any signal that leaves an opamp and connects to the outside world using a shielded cable. Most opamps will oscillate if a resistor is not used in series with the output pin. 100 ohms is a convenient value, but it can be lower (less safety margin) or higher (higher output impedance).
The following is actually a fairly small sample of all the different topologies, but the examples have been selected based on their potential usefulness. Some of the circuits shown are extremely common, others less so. In the general discussions about filter properties I have avoided heavy mathematical analysis. The maths formulas provided are enough to allow you to configure the filter - few readers will want to perform detailed calculations and they are not generally useful other than for university exams.
Within this article, the filters are intended for 'audio' frequencies, meaning only that they are not generally suitable for frequencies above ~100kHz or so. This limit is imposed by the opamps, not the filters as such. However, at radio frequencies (RF, above perhaps 200kHz or so), it's far more common to use inductors and capacitors, because the inductance required is small, and the parts are physically small too. While high speed circuitry can allow any of the filters to operate at RF, the cost will generally be far greater than for 'conventional' L/C filters. For opamp based active filters, there is no lower limit (other than DC), so operation at 0.1Hz or less is perfectly acceptable if that's what you need.
In the early days of electronics and still today for RF (radio frequency), filters used inductors, capacitors and (sometimes) resistors. Inductors for audio are generally a poor choice, as they are the most 'imperfect' of all electronic components. An LC (inductor/ capacitor) filter can be series or parallel, with the series connection having minimum impedance at resonance. The parallel connection provides maximum impedance at resonance. This article does not cover LC filters, but there are cases where the final filter uses an active equivalent to an inductor (a gyrator for example).
Gyrators are every bit as imperfect as 'real' inductors within the audio frequency range, but with the benefits that they are not affected by magnetic fields, and are smaller and (usually) much cheaper than a physical inductor. They are also very easy to make variable using a potentiometer, which allows functionality that may otherwise be difficult and/ or expensive to achieve. So, if you are looking for information covering the design and construction of passive LC filters, this is not the place to find it.
It's important to understand that all filters introduce phase shift, and there is no such thing as a filter without phase shift. Any two filters with the exact same frequency response will have the same phase response, regardless of how they are implemented. There are countless spurious claims from manufacturers (especially for equalisers) that this or that equaliser is 'better' than the competition's EQ because it has 'minimum phase' or 'complementary phase' (etc.). These claims are from marketing people, and have no validity in engineering. Contrary to what is often claimed, our ears are insensitive to (static) phase response, but we can detect even quite small variations in frequency response.
The common terminology of filters describes the pass-band and stop-band, and may refer to the transition-band, where the filter passes through the design frequency. Q is a measure of 'quality', but not in the normal sense. A high-Q filter is not inherently 'better' than a low-Q design, and may be much worse for many applications. In some cases, the term 'damping' is used instead, which is simply the inverse of Q (i.e. 1/Q).
It is generally defined that the -3dB frequency is the point where the output level has fallen by 3dB from the maximum level within the passband. This means that if a filter produces a 1dB peak before rolloff, the -3dB point is then actually 2dB below the average level. I tend to disagree that this is the most appropriate way to describe the filter's behaviour, but it is accepted as the 'standard', so I won't attempt to break with tradition here.
The above shows the major characteristics of a low pass filter. A high pass filter uses the same definitions, but obviously the stop band is at the low frequency end. Insertion loss is not common with active filters, but is always present with passive designs. The filter response shown is for a Cauer/ elliptical filter, only because it has all of the details needed to describe the various sections of the response. Passband ripple isn't shown (there's just a small peak before rolloff) because very few filters designed for audio show this behaviour. It's generally only found in multi-stage, fast rolloff filters.
Not all filters show all of the responses shown. Most 'simple' filters do not have a notch in the stop band, and the ultimate rolloff is usually reached about 2 octaves above or below the -3dB frequency. Most have no peak before rolloff either, but are simply smooth curves that roll off at the desired rate.
There are several different filter types, generally described by their behaviour. The basic types are low-pass, high-pass, bandpass, band-stop (notch) and all-pass. There are also many sub-types, where either a combination of filter types is incorporated into a single block, or different filters are combined to produce the desired result.
Then we need to describe the different topologies, some of which are named after their inventor/discoverer, while others are named based on their circuit function. For example the Linkwitz-Riley crossover filter set was invented by Siegfried Linkwitz and Russ Riley, the Sallen-Key filter was invented by Roy Sallen and Edwin L. Key (thanks to a reader, I discovered not only their first names, but also found that they invented a portable radar system called 'Chipmunk'), and the state-variable and multiple feedback filters are described by the functionality of the circuit. The biquad filter is known by the type of equation that describes its operation (the bi-quadratic equation). Wilhelm Cauer was the inventor of the Elliptical filter - also known as a Cauer filter.
+ +Of all the filters, the Sallen-Key is the most common - it has excellent performance, is simple to implement, and it can have an easily varied Q by either changing the system gain (equal component value design), or component selection. Stop-band performance is generally good, with the theoretical attenuation extending to infinity (at an infinite frequency). 'Real world' implementations are not as good due to limitations in the active circuitry (whether opamps or discrete), but are more than acceptable for most applications. Other popular types are the multiple-feedback (MFB) filter, and (somewhat surprisingly) the all-pass filter.
+ +Multiple feedback (MFB) filters are also popular, being easy to implement and low cost. Unfortunately, the formulae needed to calculate the component values are somewhat complex, making the design more difficult. In some cases, a seemingly benign filter may also require an opamp with extremely wide bandwidth or it will not work as expected. High-pass MFB filters cannot be recommended because of very high capacitive loading, which will stress most opamps and can cause instability and/or high distortion.
+ +Less common (especially in DIY audio applications) are the rest of the major designs ...
+ +Finally, there is a circuit that is quite common, but is not a filter in its own right. The simulated inductor uses an opamp to make a capacitor act like an inductor. Because there are no coils of wire, hum pickup is minimised, and cost is much lower than a real inductor. When used with a capacitor in series, it acts like an L-C tuned circuit. Very high 'inductance' is possible, but circuit Q is limited by an intrinsic resistance.
+ +The generalised formula for first order filters is well known, and variants are used for higher orders. This depends on the topology of the filter, and for some the standard formula doesn't work at all. For reference ...
fo = 1 / ( 2π × R × C )
fo is the frequency, which is either the -3dB frequency for high and low pass filters, or the centre frequency for band pass types.
A bandpass filter's Q is defined as the centre frequency (fo) divided by the bandwidth (bw) at the -3dB frequencies. For example, if the centre frequency is 1kHz, the upper -3dB frequency is 1.66kHz and the lower -3dB frequency is 612Hz, the bandwidth is 1.05kHz. Therefore ...

Q = fo / bw
Q = 1k / 1.05k = 0.952

Conversely, if we know the Q then the bandwidth is given by ...

bw = fo / Q
bw = 1k / 0.952 = 1.05kHz
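These relationships are trivial to verify with a few lines of code. A minimal sketch (the function names are mine) using the worked example above:

```python
import math

def rc_cutoff(r_ohms, c_farads):
    """First order -3dB frequency: fo = 1 / (2π × R × C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

def bandpass_q(fo, f_low, f_high):
    """Bandpass Q = centre frequency / (-3dB bandwidth)."""
    return fo / (f_high - f_low)

# The 10k / 10nF values used throughout this article:
print(round(rc_cutoff(10e3, 10e-9)))                # 1592 (i.e. ~1.59kHz)

# The worked example: 1kHz centre, -3dB points at 612Hz and 1.66kHz
print(round(bandpass_q(1000.0, 612.0, 1660.0), 3))  # 0.954 (0.952 with bw rounded to 1.05kHz)
```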
High and low pass filters also have a Q figure, but it doesn't define the bandwidth. Instead, the Q determines what happens around the transition frequency. High Q filters usually have a peak just before rolloff, and low Q filters have a very gradual rolloff before reaching their ultimate slope (6dB/octave, 12dB/octave, etc.). Converting Q/bandwidth to octaves can be somewhat tedious, but the following table should be helpful.
| Q | BW (oct) | Q | BW (oct) | Q | BW (oct) |
|-------|------------------|------|------------------|------|-------|
| 0.50  | 2.54             | 1.50 | 0.945            | 6.50 | 0.222 |
| 0.55  | 2.35             | 1.60 | 0.888            | 7.00 | 0.206 |
| 0.60  | 2.19             | 1.70 | 0.837            | 7.50 | 0.192 |
| 0.65  | 2.04             | 1.80 | 0.792            | 8.00 | 0.180 |
| 0.667 | 2.00             | 1.90 | 0.751            | 8.50 | 0.170 |
| 0.70  | 1.92             | 2.00 | 0.714            | 8.65 | 0.167 |
| 0.75  | 1.80             | 2.15 | 0.667            | 9.00 | 0.160 |
| 0.80  | 1.70             | 2.50 | 0.573            | 9.50 | 0.152 |
| 0.85  | 1.61             | 2.87 | 0.500 (½ octave) | 10.0 | 0.144 |
| 0.90  | 1.53             | 3.00 | 0.479            | 15.0 | 0.096 |
| 0.95  | 1.46             | 3.50 | 0.411            | 20.0 | 0.072 |
| 1.00  | 1.39             | 4.00 | 0.360            | 25.0 | 0.058 |
| 1.10  | 1.27             | 4.32 | 0.333 (⅓ octave) | 30.0 | 0.048 |
| 1.20  | 1.17             | 4.50 | 0.320            | 35.0 | 0.041 |
| 1.30  | 1.08             | 5.00 | 0.288            | 40.0 | 0.036 |
| 1.40  | 1.01             | 5.50 | 0.262            | 45.0 | 0.032 |
| 1.414 | 1.00 (1 octave)  | 6.00 | 0.240            | 50.0 | 0.029 |
The above table is based on that provided by Rane [12] in their technical note 170. The notes also provide the formulae if you want to make the calculations yourself. Naturally, this only applies to bandpass filters, but it's a useful reference so has been included. Also shown are the three most common filter bandwidths used for 'graphic' equalisers, namely 1 octave, 1/2 octave and 1/3 octave. Some signal analysis software also includes sharper (higher Q) filters, but these aren't shown.
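The standard Q/bandwidth conversion formulae (as given in the Rane note) reproduce the table entries above. A minimal sketch (the function names are mine):

```python
import math

def q_to_octaves(q):
    """Bandwidth in octaves for a given Q: BW = (2 / ln 2) × asinh(1 / 2Q)."""
    return (2.0 / math.log(2.0)) * math.asinh(1.0 / (2.0 * q))

def octaves_to_q(n):
    """Inverse relationship: Q = 2^(N/2) / (2^N - 1)."""
    return 2.0 ** (n / 2.0) / (2.0 ** n - 1.0)

print(round(q_to_octaves(1.414), 2))      # 1.0  (one octave, as in the table)
print(round(octaves_to_q(1.0 / 3.0), 2))  # 4.32 (one third octave)
```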
All filters are described by their 'order' - the number of reactive elements in the circuit. A reactive element is either a capacitor or inductor, although most active filters do not use inductors. In turn, this determines the ultimate rolloff, specified in either dB/octave or dB/decade. Most filters do not achieve the theoretical rolloff slope until the signal frequency is perhaps several octaves above or below the design frequency. With high Q filters, the initial rolloff is faster than the design value, and vice-versa for low Q filters.

In addition, filters are classified into two distinct groups - odd and even order. Each behaves differently, and this often needs to be accounted for in the final design. The general characteristics are shown below ...
| Order (Poles) | dB/Octave | dB/Decade | Phase Shift * | Comments |
|------|--------|--------|---------|----------------------------------|
| 1st  | 6      | 20     | 90°     | Only passive, very common        |
| 2nd  | 12     | 40     | 180°    | Extremely common - most popular  |
| 3rd  | 18     | 60     | 270°    | Moderately common                |
| 4th  | 24     | 80     | 360°    | Linkwitz-Riley crossovers (etc.) |
| 5th  | 30     | 100    | 450°    | Uncommon - rarely used           |
| 6th  | 36     | 120    | 540°    | Uncommon                         |
| n    | n × 6  | n × 20 | n × 90° | Anti-aliasing filters (etc.)     |

* Phase shift refers to the phase difference between a high and low pass filter set for the same rolloff frequency.
You'll see that the first order filter is passive only. While an opamp is often used with these filters, it is only a buffer. Even in an 'active' first order filter, the Q cannot be altered - it is fixed by the laws of physics. All other filters allow a choice of Q, modifying the initial rolloff slope and creating a peak (high Q) or gentle rolloff (low Q) just before the cutoff frequency. By definition, the cutoff frequency of any filter is where the amplitude has fallen by 3dB from the normal output level. If there is a peak in the response, it is ignored when stating the nominal cutoff frequency.

This can be rather confusing to the newcomer, because the formula may show a nominal cutoff frequency of (say) 1.59kHz, yet the measured response can differ considerably. In general, any formula given for frequency assumes Butterworth response. The table below is for second order filters, but the overall Q is the same for all filter orders above the first (first order filters always have a Q of 0.5).
| Type | Q | Damping | Description |
|-------------|---------|---------|-------------------------------------------------------|
| Bessel      | 0.577   | 1.733   | Maximally flat phase response, fastest settling time  |
| Butterworth | 0.707   | 1.414   | Maximally flat amplitude                              |
| Chebyshev   | > 0.707 | < 1.414 | Peaks (and dips) before rolloff, fastest initial rolloff |
The above covers the most important and common filter classes, but the Q can actually be anything from 0.5 ('sub-Bessel') up to quite high numbers. Few filters for normal usage will have a Q exceeding 2, and a Sallen-Key filter will become an oscillator if the Q exceeds 3. Extremely high Q factors are generally only used with bandpass and band-stop (notch) filters.

It's common to see references to poles and zeros with filters, and this can create difficulties for beginners in particular. This isn't helped at all when you are faced with complex calculations, vector and/or Bode plots, and somewhat convoluted explanations that usually don't help when you're starting out. This isn't at all surprising when you are dealing with digital or notch filters, but it is rather daunting when you see the mathematics involved. I don't intend to cover this in any great detail, because a bunch of vector diagrams with few 'real life' examples won't help you understand. Explaining filters in terms of the complex frequency ('s') plane, Neper frequencies (and/or Nepers/second) and phase shift in radians/second doesn't really help anyone to understand the basic principles!

Only first order filters are discussed in this overview, having an idealised rolloff of 6dB/octave or 20dB/decade. These are the simplest of all filters, and only require an opamp to ensure that loading on the filter circuit is minimal. They cannot drive any external load without changing their behaviour (however slightly that may be). Filter circuits are often described by a mathematical equation called a transfer function, which in general form is a formula with variables denoted as 'j' and 'ω' where ...
j = √-1 (the square root of -1)
ω = 2π × f (where f is frequency)
This is where things go pear-shaped, because √-1 is an 'impossible' number (you can't take the square root of a negative number using ordinary arithmetic), and it is classified as the 'imaginary' part of the equation. Some calculators allow what's often known as 'complex' arithmetic/maths, which permits the use of imaginary numbers to allow 'complex' equations to be solved. The 'imaginary' part of the equation represents the reactive element (a capacitor or inductor), while the 'real' part usually represents resistance, which is not reactive.

For example, one can determine the output voltage of a first order low-pass filter at any frequency with the equation ...
Vo = ( 1 / ( jωC )) / ( R + 1 / ( jωC ))
Long before simulators were available to the average user, this was the only way that one could determine the output voltage of a filter circuit at any given frequency. It's still necessary if you need to calculate the input or output impedance of even a simple resistor/capacitor (RC) filter (unless you use a simulator of course). If you don't have a calculator that handles 'complex' maths (i.e. one that can handle j-notation) then you have a great deal of work to do! The common formula ...

f3dB = 1 / ( 2π × R × C )

... provides the -3dB frequency, but at any other frequency it isn't easy to determine the output voltage or phase without delving into complex maths. Everyone had to use scientific calculators that could work with the 'imaginary' part of the equation, and to say that the process was tedious is putting it mildly. It's a long time since I used this particular form of calculation (and yes, I still have a couple of scientific calculators with that facility), and most of the time it's not necessary any more ... if you have a simulator of course. Even without simulation tools, much of what you need to determine is still simply based on the -3dB frequency, and the rest tends to follow common rules that don't change. At least, this is the case with simple filters, but it becomes a lot more difficult when you're designing filters with characteristics that differ from the 'normal'.
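If you don't have such a calculator, any language with complex number support will do the j-notation work for you. A minimal sketch in Python, evaluating the Vo equation given earlier at (very nearly) the -3dB frequency:

```python
import cmath
import math

def lowpass_response(f, r, c):
    """Vo/Vin for a first order RC low-pass: Vo = (1/jωC) / (R + 1/jωC)."""
    zc = 1.0 / (1j * 2.0 * math.pi * f * c)   # capacitor impedance - the 'imaginary' part
    return zc / (r + zc)

h = lowpass_response(1591.5, 10e3, 10e-9)       # 10k / 10nF, at the -3dB frequency
print(round(20.0 * math.log10(abs(h)), 2))      # -3.01 (dB)
print(round(math.degrees(cmath.phase(h)), 1))   # -45.0 (degrees)
```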
Zeros in a filter are a different matter again. There are some extremely tedious calculations involved if you're writing code for a filter to be implemented in a DSP, but somewhat predictably this isn't a topic I intend to cover. I will only describe zeros in the most simplistic sense - it's not strictly accurate (at least not with more advanced filter techniques), but it's intended as a very basic introduction only.

As an example, we'll examine one of the most common 'complex' first order filter networks, the RIAA equalisation curve for vinyl playback. The original filter (as opposed to the IEC 'amended' version) has two poles (at 50.05Hz and 2,122Hz), and one zero at 500.5Hz. Although we're only interested in the zero (since that's where this explanation is headed), the two poles and the zero are shown below. You will also see these frequencies described in terms of time constants, being 75µs, 318µs and 3180µs for 2,122Hz, 500.5Hz and 50.05Hz respectively.

Note that the frequencies have been rounded to the nearest whole number, so 50.05Hz is shown as 50Hz. In some designs, there's a second zero at some indeterminate frequency above 20kHz. This is not part of the RIAA specification, and is the unintended consequence of using a single stage to perform the entire equalisation (this flaw does not exist in Project 06). It happens because the single stage EQ arrangement cannot have a gain below unity (although the reasons are outside the scope of this section).

As you can see, the zero at 500Hz effectively stops the rolloff as frequency increases. Because filters are 'real world' devices, the theoretical response (in red) can never be achieved. The recording EQ has the same generalised response, but is inverted (it has two zeros and one pole). In the case of an RIAA playback filter, the zero at 500Hz simply stops the rolloff - if no high frequency (2,122Hz) de-emphasis were applied, the response would flatten out above ~1kHz, with a theoretical level of 0dB (in reality it will be somewhat less, at around -2dB or thereabouts).

Interestingly (or not), a high pass filter will always have an 'implied' zero at DC. By definition, a high-pass filter must be unable to pass DC, because it uses one or more capacitors in series with the input signal. Since an ideal capacitor cannot pass DC (and most film caps approach this ideal), this always sets the output to zero at DC, although the response may already be attenuated to the point where the DC component is immaterial anyway.
If you want to know more on this area of filter design, there are some references below, or do a search on the topic of filter poles and zeros. I can pretty much guarantee that most people will stare blankly at the descriptions offered and be none the wiser afterwards, hence this brief introduction to the subject. There is no doubt that people who really like maths will find the explanations enlightening, but most people just want to be able to design simple circuits that work. The remainder of this article shows you how that can be done.
In various texts you see references to component sensitivity. This refers to the changes in parameters that you see when the component values are varied by perhaps 5%. The resistor and capacitor values must affect the response, because that's how we determine the frequency. It's often claimed that this or that filter topology has high or low component sensitivity, but these terms are highly subjective. All filters will show a frequency change if the values are different from those calculated.

Low-order filters (e.g. 6dB/octave) are naturally less 'sensitive' than high-order types. Once you exceed 24dB/octave, even 1% resistors may cause problems, especially if there's a requirement for a very precise turnover frequency. The formulae for filters almost always give the -3dB frequency. If you have very strict limits on the 'flatness' of the curve up to some reference frequency, then it may be necessary to select capacitors (in particular) to better than 1%, or use a slightly lower (selected) value and add a small cap in parallel to get the exact value.
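The worst-case effect of tolerances is easy to estimate directly from the first order formula. A minimal sketch (the 1% resistor and 5% capacitor tolerances are illustrative assumptions, not values from the text):

```python
import math

def fo(r, c):
    """First order cutoff: 1 / (2π × R × C)."""
    return 1.0 / (2.0 * math.pi * r * c)

nominal = fo(10e3, 10e-9)
# Worst-case stack-up: resistor 1% low and capacitor 5% low both push the frequency high
worst = fo(10e3 * 0.99, 10e-9 * 0.95)
print(round((worst / nominal - 1.0) * 100.0, 1))   # 6.3 (percent frequency error)
```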
In some cases, you may need to add one or more trimpots that allow you to tweak the filter to get the required response. Expect lots of hassles if you need an 8th order (48dB/octave) filter with tightly specified rolloff and flatness requirements. Note that high-order filters are not simply a series-connected set of filters set for the desired frequency. Each filter in the series string will be different, particularly the filter's Q ('quality factor'). For example, the first filter in the 'chain' may have a Q of 0.51, the second 0.6, the third 0.9 and the final filter needs a Q of 2.6. Note that the filters start with a low Q and end with a high Q, making the final filter especially susceptible to even small component variations.

I suggest that you use specialised design software if you need to build high-order filters, as the process is beyond tedious otherwise. You also need to be prepared to select component values carefully, and beware of thermal drift. Silvered mica, Teflon (PTFE), C0G ceramic (low values only) and polypropylene are the better capacitor choices, but polyester can be used if the temperature variations are likely to be small. High-K ceramic caps should never be used unless you don't care about the final response - a fairly unlikely proposition if you're designing a high-order filter.

Even the resistors matter. I wouldn't suggest anything other than 1% (or better) metal film resistors. The thermal drift of opamps isn't usually a problem unless absolute DC accuracy is important, and then only for low-pass filters. By their nature, high-pass filters remove DC.
In general, it is preferable wherever possible to operate all opamps in an audio circuit from a dual power supply. Typically, the supply rails will be ±12V or ±15V, although this may be as low as ±5V in some cases. While a single supply can be used, it is necessary to bias all opamps to a voltage that's typically half the supply voltage.

This may be done individually at the input of each opamp, or a common 'artificial earth' can be created that is shared by all the analogue circuitry. In either case, all (actual) ground referenced signals must be capacitively coupled, and it is probable that the circuit will generate an audible thump when power is applied or removed. For the purposes of this article, all opamps will be operated from a dual supply. Supply rails, bypass capacitors and opamp supply connections are not shown. If you need to run any of these filter circuits from a single supply, you will need to implement an artificial earth and add coupling capacitors as needed.

This is now your responsibility, and you can expect me to become annoyed if you ask how this should be done. I suggest that you read through Project 32 for a simple split supply circuit that can be used with the filters shown here.

You will need to verify the pinouts for the opamp(s) you plan to use. For general testing, TL072 opamps are suggested, as they are reasonably well behaved (provided the peak input level is kept well below the supply rail voltage), have very high input impedance so filter performance is not compromised, and are both readily available and cheap. Experimentation is strongly recommended - you will learn more by building the circuits than you ever can just by reading an article on the subject. In some cases you may need to use 'premium' opamps, such as for high-frequency filters, or those with unusually high Q. In some cases you may need very low noise, and the opamps have to be chosen to meet the objectives of the final design.

Supply pins, bypass capacitors and power supply connections are not shown in any of the circuits that follow. A 100nF multilayer capacitor should be used from each supply pin to ground (artificial or otherwise) to ensure that the circuits don't oscillate. You will also need to include a 100 ohm resistor at the final opamp's output if you plan to connect any of the filters shown to shielded cables (for example to a monitor amplifier). Failure to include the resistor may result in the opamp oscillating.
Selecting the right values is more a matter of educated guesswork than an exact science. The choice is determined by a number of factors, including the opamp's ability to drive the impedances presented to it, noise, and sensible values for capacitors. While a 100Hz filter that uses 100pF capacitors is possible, the 15.9M resistors needed are so high that noise will be a real problem. Likewise, it would be silly to design a 20kHz filter that used 10µF capacitors, since the resistance needed is less than 1 ohm. There is always a compromise that will provide the best results for a given filter, although it may not be immediately obvious.
| Series | Values per decade |
|-----|----------------------------------------------------------------------------------------------------|
| E12 | 1.0  1.2  1.5  1.8  2.2  2.7  3.3  3.9  4.7  5.6  6.8  8.2 |
| E24 | 1.0  1.1  1.2  1.3  1.5  1.6  1.8  2.0  2.2  2.4  2.7  3.0  3.3  3.6  3.9  4.3  4.7  5.1  5.6  6.2  6.8  7.5  8.2  9.1 |
Capacitors are the most limiting, since they are only readily available in the E12 series. While resistors can be obtained in the E96 series (96 values per decade), for audio work this is rarely necessary and simply adds needless expense. The E24 series is generally sufficient, and these values are usually easy to get. E48 values may be required for some high-order filters. Capacitors can be obtained with 1% tolerance, but they will be expensive, and only available for some values. Consider that caps are graded after manufacture, so don't expect to get 1% tolerance from nominal 5% caps, because those that meet the 1% spec will have been sorted out already.
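When a calculated value doesn't exist as a preferred value, it must be snapped to the nearest E-series entry (or made up from series/parallel combinations). A minimal sketch for the E24 series (the helper name is mine):

```python
import math

E24 = [1.0, 1.1, 1.2, 1.3, 1.5, 1.6, 1.8, 2.0, 2.2, 2.4, 2.7, 3.0,
       3.3, 3.6, 3.9, 4.3, 4.7, 5.1, 5.6, 6.2, 6.8, 7.5, 8.2, 9.1]

def nearest_e24(value):
    """Snap a calculated resistance (or capacitance) to the closest E24 value."""
    decade = 10.0 ** math.floor(math.log10(value))
    candidates = [m * decade for m in E24] + [10.0 * decade]  # include next decade's 1.0
    return min(candidates, key=lambda v: abs(v - value))

print(nearest_e24(14140.0))  # 15000.0 - a calculated 14.14k becomes 15k
print(nearest_e24(7070.0))   # 6800.0
```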
Where possible, I suggest that resistors should not be less than 2.2k, nor higher than 100k - 47k is better, but may not be suitable for very low frequencies. Higher values cause greater circuit noise, and if low value resistances are used, the opamps in the circuit will be prematurely overloaded trying to drive the low impedance. All resistors should be 1% metal film for lowest noise and greatest stability. Capacitance should be kept above 1nF if possible, and larger (within reason) is better. Very small capacitors are unduly influenced by the stray capacitance of PCB tracks and even lead lengths, so should be avoided unless there is no choice.

Capacitors should be as described above. Never use ceramic caps except when nothing else is available - if you must use them (low values only), use NP0 (C0G) types. Since close tolerance capacitors are hard to get and expensive, it's easier to buy more than you need and match them using a capacitance meter (but be aware that you will get very few 1% caps from a batch of 5% types!). Absolute accuracy usually isn't needed, but close matching between channels of a stereo system is a requirement for good imaging.

Unless there is absolutely no choice, avoid bipolar (non-polarised) electrolytic capacitors completely. They are not suitable for precision filters, and may cause audible distortion in some cases. Tantalum caps should be avoided altogether!
Note: for this article, all filters are based on 10k resistors and 10nF capacitors. This gives a frequency of 1.59kHz for a first order filter. In many cases, it will be difficult to see where the standard values are actually used, because many second order topologies require modification to get the correct frequency and Q. First order filters are not covered, and all filters described below are second order Butterworth types unless stated otherwise.
Sallen-Key filters are by far the most common for a great many applications. They are well behaved, and reasonably tolerant of component variations. All filters are affected by the component values, but some are more critical than others. The general unity gain Sallen-Key topology can be very irksome if you need odd-order filters, and changing the Q of the unity gain version leaves you with a barrage of maths to contend with. Nothing actually difficult, but tedious.
The general formula for a filter is ...

fo = 1 / ( 2π × R × C )   (where R is resistance, C is capacitance, and fo is the cutoff frequency)

... however, this is modified (sometimes dramatically) once we start using filters of second order and higher.

A modification that allows equal component values and lets the Q be changed at will is easily applied, provided you can accept a change of gain along with the change of Q. Sometimes this is not an issue, but certainly not always. The majority of filters shown in ESP's project pages use unity-gain Sallen-Key filters, but in most cases the required values are already worked out for you. Figure 3.1 shows the traditional Butterworth low and high pass unity gain filters.
This is the standard unity gain Sallen-Key circuit. The values are set for a Q of 0.707, so the behaviour is Butterworth. The turnover (-3dB) frequency is 1.59kHz. As you can see, for the low pass filter we change the value of C (10nF) as follows ...

R1 = R2 = R = 10k
C1 = C × Q = 10nF × 0.707 = 7.07nF
C2 = C / Q = 10nF / 0.707 = 14.14nF
fo = 1 / ( 2π × √( R1 × C1 × R2 × C2 )) = 1.59155kHz
Exactly the same principle is applied to the high pass filter, except that the standardised value for R (10k) used here is modified by Q, with R1 becoming 14.14k and R2 becoming 7.07k. In many cases, it is necessary to make small adjustments to the frequency to allow the use of standard value components.
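The scaling above can be wrapped in a small helper for the unity gain low pass version (equal resistors, capacitors scaled by Q; the function name is mine):

```python
import math

def sallen_key_lowpass(f, r, q):
    """Unity-gain Sallen-Key low-pass, equal resistors:
       C = 1/(2π × f × R), then C1 = C × Q and C2 = C / Q."""
    c = 1.0 / (2.0 * math.pi * f * r)
    return c * q, c / q   # (C1, C2)

c1, c2 = sallen_key_lowpass(1591.55, 10e3, 0.7071)
print(round(c1 * 1e9, 2), round(c2 * 1e9, 2))   # 7.07 14.14 (nF, as in the text)
```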
If all frequency selecting components are equal (equal value Sallen-Key), the Q falls to 0.5, and the filter is best described as 'sub-Bessel'. This is shown below, along with response graphs showing the difference. For calculation, there are countless different formulae (including interactive websites and filter design software), but all eventually come back to the same numbers. I have chosen a simplistic approach, but it is worth noting that the final values are definitely not standard values. This is very common with filters, and it may take several attempts before you get values you can actually buy (or arrange with series/parallel combinations).

This version uses nice equal values, and is the easiest to build. However, because the Q is so low, it is not generally considered to be useful (although it is used for the 12dB/octave Linkwitz-Riley crossover network). The relative response of the Butterworth and sub-Bessel filters is shown in Figure 3.3.

With a Q of 0.5 (damping of 2), the sub-Bessel filter has a very gradual initial rolloff. The crossover frequency between high and low-pass sections is at -3dB for a Butterworth filter, but at -6dB for the sub-Bessel type. Note that a true Bessel filter has a Q of 0.577, hence the distinction here. This is not always adhered to, as some references indicate that a Bessel filter simply has a Q of less than 0.707 (or damping greater than 1.414). While it may seem pedantic, I will stay with the strict definition in this area.
A useful (but relatively uncommon) change to the Sallen-Key filter allows us to obtain a much more flexible filter. This is a very useful variant, but the added gain may be a problem in some systems. While it is possible to use it as unity gain (see below), there are still limitations.

By adding a feedback network to the opamp, we can change the gain and Q of the filter without affecting the frequency. The Q of a filter using this arrangement is ...

Q = 1 / ( 3 - G )   (where G is gain) ... or ...
G = 3 - ( 1 / Q )
Once the gain is known, the values of R3 and R4 can be determined. Since gain is calculated from ...

G = ( R3 / R4 ) + 1 ... then ...
R3 = ( G - 1 ) × R4
As a result, the circuit in Figure 3.4 has a gain of 1.586 and a Q of 0.707 as expected (or close enough to it). It is generally considered that the gain and Q are inextricably linked, but there is no real reason that the output can't be taken from the junction of R3 and R4, via a high impedance buffer (unity gain non-inverting opamp buffer). This restores unity gain, but remember that the opamp is still operating with gain, so there is a requirement to keep levels lower than expected. From ±15V, most opamps will give close to 10V RMS output, but this is reduced to a little over 6V RMS (at the junction of R3 and R4) when operated this way.
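The gain/Q arithmetic above is easy to check numerically. A minimal sketch (function names are mine; R4 = 10k as in the text):

```python
def q_from_gain(g):
    """Equal-component Sallen-Key: Q = 1 / (3 - G)."""
    return 1.0 / (3.0 - g)

def feedback_resistor(q, r4=10e3):
    """R3 = (G - 1) × R4, where G = 3 - (1 / Q)."""
    g = 3.0 - 1.0 / q
    return (g - 1.0) * r4

print(round(q_from_gain(1.586), 3))     # 0.707 (Butterworth, matching Figure 3.4)
print(round(feedback_resistor(0.577)))  # 2669 - the 2.67k quoted for a Bessel filter
```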
For a Bessel filter, gain will be reduced to 1.267 (R3 = 2.67k), and for a Chebyshev filter with a Q of 1, the gain is 2 and R3 = R4 = 10k. Remember that the Sallen-Key filter must be operated with a Q of less than 3 or it will become an oscillator.

For most applications in audio, it's difficult to justify the extra complexity of any other filter type. The Sallen-Key has established itself as the most popular filter type for electronic crossovers, high pass filters (e.g. rumble filters or loudspeaker excursion protection) and many others as well. It does have limitations, but once understood these are easy to work around and generally cause few problems.
Multiple feedback (MFB) filters are most commonly used where high gain or high Q is needed - especially in bandpass designs. The design calculations can be extremely tedious, and there is regularly a requirement for component values that are simply unobtainable (or extremely messy - using many different values). The performance is usually as good as a Sallen-Key circuit, but one extra component is needed for a unity gain solution.

While it is accepted that gain, Q and frequency are independently adjustable, this is only really true at the design phase. Again, there is a requirement for widely varying component values. The MFB design is very well suited to bandpass applications though, and its simplicity is hard to beat in that application. You may see MFB filters referred to as Deliyannis, Delyiannis, Deliyannis-Friend or just 'DF'. These are the same as shown here, but with a different name.
Note that the high-pass MFB filter has a capacitive input as well as capacitive feedback via C2. I received an email that described exactly this issue, and it caused both serious opamp oscillation and distortion. A standard fix is to add Rs1 and Rs2 (stability resistors) that isolate the capacitive load from the driving and filter opamps. Using resistors in both locations raises the impedance, but doesn't change the frequency by more than 1 or 2Hz for the values given. (My thanks to Dale Ulan for pointing out the problem and describing the fix for it.)
Notably, the high pass MFB filter has an input impedance that falls with frequency, and it can easily become so low as to overload both the driving opamp and the opamp used for the filter itself. In the circuit shown below, input impedance for the high pass falls to 1.6k at 20kHz - it can be far lower if the filter is tuned to a lower frequency, because the capacitor values are larger. If the caps are changed to 50nF and 100nF (giving a high pass filter tuned to 159Hz), the input impedance falls to just 320 ohms at 10kHz if Rs1 and Rs2 are not included. For the most part, the capacitive loading makes the high-pass version pretty much useless, due to the extreme likelihood of serious distortion at high frequencies and/or instability.
The loading is so high that it's almost guaranteed to stress most opamps, and distortion will rise rapidly as frequency increases (remember - this is within the pass band of the filter). At the same time, the opamp's open loop gain is falling because of its internal frequency compensation, so distortion rises far more than expected. The additional resistors do reduce the level slightly, but that's a small price to pay if distortion can be reduced to an acceptable level. Don't expect to find this in many text books, but it's a fact nonetheless [8]. Ultimately, it's best to avoid using high pass MFB filters unless there is absolutely no choice - Sallen-Key has none of the problems described. (Note that the low-pass MFB filter has no bad habits and is quite safe to use.)

Figure 4.1 shows low and high pass versions of the MFB filter. These are both set for a -3dB frequency of 1.59kHz, and based on 10k and 10nF tuning components. Look carefully at the high-pass filter, and you can see the capacitive feedback path. Rs1 and Rs2 can be added to isolate the capacitance, but will reduce the level. The safe value depends on the opamps used, and you'll lose a little over 0.6dB in the pass band with the values shown. The loss can be reduced (but input impedance is also reduced) by using a lower value for Rs1 and Rs2. Note that Rs1 and Rs2 are both needed, and must be the same value.

Using the normal frequency formula, R = 10k and C = 10nF, but these values don't work properly in the MFB filter. Since we know that Q = 0.707 for a Butterworth filter, we can simplify the component selection quite dramatically as shown below. What? It doesn't look simple? The normal formulae are a great deal more complex than the method described here.
fo = 1 / ( 2π × R × C ) ... and ...
R1 = R2 = 2 × R = 20k
C1 = C / Q = 14.14nF
C2 = ½ C × Q = 3.54nF
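The simplified selection method is easy to capture in a few lines. The sketch below (Python; the function name and the 10k / Q = 0.707 defaults are my own choices for illustration, not from the article) applies the three scaling rules to a 'seed' R and C derived from the normal frequency formula:

```python
from math import pi, sqrt

def mfb_lowpass_components(f0, r=10e3, q=1 / sqrt(2)):
    """Component values for an MFB low-pass using the simplified
    method above: start from fo = 1/(2*pi*R*C), then scale."""
    c = 1 / (2 * pi * r * f0)   # 'seed' capacitor value
    return {
        "R1": 2 * r,            # R1 = R2 = 2 x R
        "R2": 2 * r,
        "C1": c / q,            # C1 = C / Q
        "C2": c * q / 2,        # C2 = (C / 2) x Q
    }

vals = mfb_lowpass_components(1.59e3)
print(vals)   # R1 = R2 = 20k, C1 ~ 14.1nF, C2 ~ 3.54nF
```

Running it for 1.59kHz with R = 10k reproduces the 20k, 14.14nF and 3.54nF values quoted above.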
As with the Sallen-Key filter, it will generally be necessary to change your expectations of the cutoff frequency to allow the use of available component values. Fortunately, it is rarely necessary in audio applications to have very precise frequencies, so minor adjustments are usually not a problem. Using the MFB filter for a crossover network is usually not a good idea though, because you end up with too many different values, increasing the risk of making assembly errors. Because the filter is also slightly more complex, it will be more expensive to build.
It's difficult to recommend the MFB high pass filter because of its extremely low input impedance and capacitive load on the driving stage at high frequencies. Although adding the resistors as shown mitigates this problem, it's far easier to use a Sallen-Key filter, which doesn't have the problem.

Bandpass filters are commonly used for various effects, constant-Q graphic equalisers and parametric EQ circuits. They are also used in analogue analysers and various pieces of test equipment. Where fixed frequency and Q are needed, the MFB bandpass filter is difficult to beat, as it is a straightforward design with no bad habits.

As before, the filter is tuned to 1.59kHz, and we can measure the Q to verify that it's what we expect. For a bandpass filter, Q is equal to the peak frequency divided by the -3dB bandwidth (384Hz), so Q = 1590 / 384 = 4.14 - pretty close, considering that the resistor values were rounded to the nearest sensible value. The values were obtained from the ESP MFB Bandpass Filter Calculator (available on the ESP website).
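The Q check in the previous paragraph is trivial to reproduce:

```python
# Q of a bandpass filter: peak frequency divided by the -3dB bandwidth.
f_peak = 1590.0   # Hz (tuned frequency)
bw = 384.0        # Hz (measured -3dB bandwidth)
q = f_peak / bw
print(round(q, 2))   # 4.14, as quoted in the text
```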
This filter is used in Project 84 (a one third octave band subwoofer equaliser) and is also referenced in a number of other projects. I suggest that you use the calculator to work out the values, since the formulae are somewhat beyond the intent of this article.

I (recently) became aware of a simplified version of the MFB bandpass filter, which uses one less resistor. This makes it less useful overall, but that's not to say that it should not be used. As with all things in electronics, there's more than one way to do something, and despite some limitations the simplified version is a handy tool where you don't need great flexibility.

I kept the frequency and Q the same, but we don't have the ability to vary gain and Q independently with one resistor missing. The parallel combination of R1 and R2 in Fig. 4.2 is 1.26k, so a single 1.26k resistor is used for the input. Because we can't control both gain and Q, we get a gain of 30dB (×31.8). While this might be far too high for some circuits, it will likely be fine in others, particularly with a low input level (less than 250mV at the tuned frequency). If we don't mind the Q changing, the circuit can be tuned over a limited range by making R1 variable. If R1 is varied from 1k to 2.2k (for example), the frequency changes from 1.78kHz to 1.2kHz, but the gain changes too - 32dB with 1k, 25dB with 2.2k. The Q also changes, but not by a great deal.
The state-variable filter is something of an oddball design, with several different versions of the basic circuit available, and different formulae described to calculate the gain and Q. All of the frequency calculations I've seen are correct, but some imply that multiple resistors are involved in changing frequency. This is not the case - two resistors affect the frequency, and these can be in the form of a dual-gang pot. This makes the filter easily tunable, unlike any of the others so far.

In addition, the state-variable filter provides three simultaneous outputs - high pass, low pass and bandpass. All have the same frequency (-3dB, or peak for the bandpass) and the same Q. It is often said that gain and Q cannot be separated - as one is varied, the other varies as well. In fact, Q and gain can be made independent by adding a fourth opamp. This is desirable (and commonly applied) in parametric equalisers.

This is an extremely versatile filter, and its usefulness is often overlooked. Some reference material suggests that there's no real reason to even use the design, but I disagree with this assessment. Since both low and high pass outputs are available simultaneously, it can be used as a variable crossover (with some changes). While higher orders can be made, they become more and more complex, and only the second order filter is discussed in this article.

In the example above, R1 changes gain and Q. Increasing R1 reduces gain and increases the filter's Q, although the change of Q is relatively small compared to the gain change. R2 changes Q, but leaves gain unchanged (contrary to the myriad claims that the two are inseparable without a fourth opamp). Increasing R2 reduces Q, and vice versa.

Rt and Ct are the tuning components, and as shown give a frequency of 1.59kHz. The two Rt resistors can be replaced by a dual-gang pot, allowing continuous variation of frequency. A series resistor must still be used, typically one tenth of the pot value. In the above circuit, Rt could be replaced by a 100k pot in series with a 10k resistor, giving a range from 145Hz to 1.59kHz - just over a decade. When the frequency of a state-variable filter is changed, the Q remains the same.
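Because the frequency is inversely proportional to Rt, the quoted pot range is easy to verify. A quick sketch (Ct = 10nF is my assumption, chosen so that Rt = 10k gives roughly 1.59kHz):

```python
from math import pi

# State-variable tuning: f is inversely proportional to Rt.
ct = 10e-9   # assumed Ct, giving ~1.59kHz with Rt = 10k

def svf_frequency(rt, c=ct):
    return 1 / (2 * pi * rt * c)

print(round(svf_frequency(10e3)))    # pot at zero (10k total): ~1592 Hz
print(round(svf_frequency(110e3)))   # 100k pot plus 10k series: ~145 Hz
```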
fo = 1 / ( 2π × R × C )
R3 = R2 × ( 3 × Q - 1 )
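The Q-setting formula can be checked numerically. In the sketch below, R2 = 10k is an assumed value for illustration only:

```python
def svf_r3(r2, q):
    """Q-setting resistor from the formula above: R3 = R2 * (3Q - 1)."""
    return r2 * (3 * q - 1)

# With R2 = 10k (assumed) and Q = 0.707 (Butterworth):
print(round(svf_r3(10e3, 0.707)))   # 11210 - use ~11k in practice
```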
A notch filter is created by adding the high and low pass outputs. Because they are 180° out of phase at the tuning frequency (fo), the result is (close to) zero voltage at fo when the two outputs are added. Addition can use a traditional opamp summing amplifier or just a pair of resistors. There will be a 6dB signal loss across the pass band for the simple resistive adder. The depth of the notch depends on how accurately the two signals are summed, but even a small phase shift (through the filter) can considerably reduce the depth.
It is beyond the scope of this article to cover the complete design process, and in particular the process for setting the filter Q to a specific value. There are countless examples and design notes available on the Net, and those interested in exploring further are encouraged to search for material that gives the information needed.

For a lot more info on this topology, see the ESP article State Variable Filters. This includes the little-known 1st order variant.

Although the state-variable filter is a bi-quadratic (biquad) design, it is different from the 'true' biquad shown here. The biquad in its pure form is somewhat remarkable in that it can only be made as a low pass or bandpass filter. There is no ability to use the traditional approach of swapping the positions of tuning resistors and capacitors to obtain a high pass filter. This limits its usefulness, but it is still very usable as a bandpass filter. Like the state-variable, both outputs are available simultaneously. There is also an inverted copy of the low pass output, though this is probably of limited value.

While the circuit looks similar to the state-variable, it is very different. Again, a complete discussion of the calculations is outside the scope of this article, but R2 is used to set Q and gain, while R3 & R4 and C1 & C2 are the tuning components. When the frequency of a biquad filter is changed, Q also changes, so a bandpass implementation has a constant bandwidth - Q increases with increased frequency. Use as a low pass filter is rather pointless, since there is no high pass equivalent, and the Q changes with frequency anyway. R4 sets the Q, and with 18k as shown, it's a little above 0.707 (Butterworth). Unfortunately, adjusting the Q also changes the frequency. As the resistance is lowered, the frequency and Q increase.

You can swap the positions of R4 and C2 to get a high-pass and low-pass output, but the slope is only 6dB/octave and you lose the bandpass (it becomes the low-pass output). This isn't a useful modification.
Notch filters are used for a variety of purposes, including distortion analysers and removing troublesome frequencies. 50/60Hz hum or prominent acoustic feedback frequencies can be reduced (or eliminated almost completely), because typical notch filters have a very narrow band-stop region. The bandwidth can be as low as around 10-20Hz, with the unwanted frequency reduced by 40dB or more.

There are many circuit topologies that can be used for very narrow notch filters, including the twin-T, Fliege, Wien-bridge and state-variable. All have similar responses, but the twin-T is unique in that it can have an almost infinite notch depth even when configured as a completely passive filter (i.e. with no opamp or other amplification). All other types require active circuitry to achieve usable results.

The twin-T notch requires extraordinary component precision to achieve a complete notch, and for this reason it's not often recommended. However, it is without doubt one of the best filters to use when a very deep notch is needed - especially for completely passive circuits. The following is only a very brief overview of notch filters - there are many more configurations that can be used, each with its own advantages and disadvantages. Notable (but not shown) is the bridged-T filter that has been used in some distortion analysers. It is easier to tune than the twin-T, and comes in a number of different topologies. It's interesting, but IMO not sufficiently useful to describe here. Bridged-T notch filters can never equal a twin-T for notch depth or Q without the addition of active circuitry.

I have heard complaints that the twin-T notch filter is 'finicky' to set up. In reality, it's no harder than any other filter type with similar performance. If a very deep notch is needed at a particular frequency, the filter component values will always be critical, and even a small drift of a component value (due to time or temperature) will affect the notch depth at the selected frequency. In many respects, the twin-T is likely to win out over any other design, because it can achieve a very deep notch with no active components. Feedback is only ever needed to minimise the -3dB bandwidth, and it does not affect the notch depth.
The twin-T (or twin-tee) filter is essentially a notch (band-stop) filter, and unlike most filters shown here, it can still give an extremely high Q notch without the use of any opamps. In theory, the notch depth is infinite at the tuning frequency, but this is rarely achieved in practice. Notch depths of 100dB are easily achieved, and are common in distortion analysers. If the notch is placed at the fundamental frequency of the applied signal, that frequency is effectively removed completely, so any signal that remains is noise and distortion. While a notch filter can be converted to a peaking (bandpass) filter by means of an opamp, the result is usually about the same as you can get with a MFB filter, so there's not much point given the added complexity.

It is still common to add an opamp to a twin-T filter though, because it makes it possible to ensure that there is little or no attenuation of the second harmonic when used as the basis for a distortion analyser. By applying feedback around the notch filter, the response can be maintained within a dB or less at only one octave from the notch frequency.

R and C are the tuning components. These have to be extremely accurate for a very deep notch, and it's common for one of the R values and the 2R value to be made using a fixed resistor and two (or more) potentiometers. For example, 10k might be made using a 9.95k fixed resistance in series with a 500 ohm and a 50 ohm pot. The idea is that at the nominal tuning frequency, the two pots will be centred, allowing fine and very fine adjustment. A change of as little as 10 ohms makes a big difference to the notch depth.

The first opamp acts as a buffer, ensuring that the output of the filter is not loaded by the voltage divider that supplies the signal to the second opamp. The second opamp applies feedback via the R/2 and 2C leg of the tee, making the initial rolloff occur closer to the notch frequency. As shown, the second harmonic is attenuated by less than 0.3dB. When used to remove the fundamental frequency for distortion measurements, it can be extremely difficult to maintain a good notch because of minute amounts of frequency drift.

The bridged-tee (bridged-T) notch is often used for equalisation, and in other places where a fairly shallow (and broad) notch is acceptable. Strictly speaking, it's not an active filter, other than the requirement for a high impedance output buffer. The bridged-tee filter has a wide band where frequencies near the tuned frequency are affected, and very deep notches are not available with most versions. There are a few different topologies, but they are generally intended to provide a specific response rather than act as 'true' notch filters. The bridged-tee can be used as the tuned feedback path for an oscillator, but there's little or no advantage over a Wien bridge in this role. Note that the circuit must be driven from a low impedance source.

The circuit above is more-or-less typical, and also shows the response with the values provided. Calculation of the frequency is non-intuitive and a bit cumbersome, but it's easy enough when you know how. The ratio between the two capacitors (C2 : C1, shown below as Cratio) is 10:1 as shown. To determine the frequency we take the square root of the ratio - in this case √10, which is 3.162. This means the effective (or 'nominal') capacitance is C1 × 3.162 = 31.62nF, or equivalently C2 / 3.162 = 31.62nF. Frequency is ...
f = 1 / ( 2π × R × Cnom ) (where Cnom is the nominal capacitance to get the required frequency)
f = 1 / ( 2π × 10k × 31.62nF ) = 503.3 Hz,
or ...
f = 1 / ( 2π × √( R1 × R2 × C1 × C2 ))
Turning the first two formulae around makes it easier to calculate the capacitor values needed for a defined frequency ...
C1 = Cnom / √Cratio = 31.62 / 3.162 = 10nF
C2 = Cnom × √Cratio = 31.62 × 3.162 = 100nF
The attenuation at the tuned frequency is set by the capacitor ratio, and for the example shown ...
Attenuation = 20 × log( 2 / ( 2 + Cratio ))
Attenuation = 20 × log( 2 / 12 ) = -15.56 dB (i.e. 15.56dB of attenuation)
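The worked example above can be checked in a few lines of Python (the variable names are mine; the values are those given in the text):

```python
from math import pi, sqrt, log10

# Bridged-tee notch: frequency and depth from the capacitor ratio,
# using the worked example above (R = 10k, C1 = 10nF, C2 = 100nF).
r, c1, c2 = 10e3, 10e-9, 100e-9
c_ratio = c2 / c1                    # 10
c_nom = sqrt(c1 * c2)                # geometric mean = 31.62nF
f = 1 / (2 * pi * r * c_nom)
atten = 20 * log10(2 / (2 + c_ratio))
print(round(f, 1), round(atten, 2))  # 503.3 Hz, -15.56 dB
```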
However, as you can see from the graph in Figure 6.2, the 'notch' is very broad, with -3dB frequencies at 126Hz and 1.97kHz, a bandwidth of 1.844kHz. Increasing the capacitance ratio achieves a deeper notch, but all other frequencies (outside the 'stop band') are also attenuated. While the bridged-tee is useful for some specific applications (EQ circuits in particular), it's too broad to be useful for eliminating 'nuisance' frequencies such as mains hum. There is no reason that the values of R1 and R2 must be equal, and it's not uncommon to see different values used in equalisation circuits.
The bridged-tee is very sensitive to output loading, so a high impedance buffer is essential at the output to prevent the levels above and below the tuning frequency from being 'skewed'. Any output load will reduce the level below the notch frequency. The alternative version described below is more sensitive to output loading than the conventional arrangement, but neither is much use without a buffer.

An interesting twist on the 'conventional' bridged-tee shown above is to reverse the positions of the resistors and capacitors. You might expect that this would reverse its operation, and provide a peak rather than a notch, but it actually works identically to the version shown above. For example, for the same frequency and notch depth, use a 100k resistor in place of C1, and a 10k resistor in place of C2. Two equal value caps (10nF each) replace the resistors. The potential advantage is that it's more flexible, because resistors are available in a wider range of values than capacitors. While you could replace one resistor with a pot, that would affect both notch depth and frequency, so it's not especially useful. The tuning formulae are the same, except that it's the resistor ratio rather than the capacitance ratio that determines frequency and attenuation.

It's not at all uncommon to see bridged-tee networks with unbalanced values, deliberately driven with a non-zero source impedance and/or loaded at the output. One of the places you are most likely to come across the circuit is in bass guitar amps, as a 'contour' control that deliberately inserts a notch and (usually) bass boost. Further discussion of this is outside the scope of this article.
Normally, the Fliege filter is something of an oddity (high and low pass versions are shown below), but it makes an easily tuned notch filter with variable Q. Notch depth is not as good as a twin-T, but is much better than the bridged-tee. It can be tuned with a single resistor (within limits), and the Q can be changed by changing two resistors. There is a caveat on the variable Q though - if the frequency tuning resistance is changed, the Q is also changed.

As before, the frequency with the component values shown is 1.59kHz, and follows the same formula as the other filters. Q is set by the resistors RQ, and the value needed is approximately ...

RQ = Rt × Q × 2
In the circuit shown, Q is about 5, and that's enough to ensure that the second harmonic of the input frequency is attenuated by less than 0.1dB. Increasing the Q will reduce the notch depth, so the lowest Q that gives an acceptable minimum attenuation of harmonics should be used. It is possible to increase the Q to at least 10, but notch depth will be reduced.
The circuit can be tuned over a reasonable range by varying the resistor Rt - it can be changed from 5k to 20k, providing frequencies from about 2.25kHz down to 1.13kHz with the other values unchanged. The Q does vary (as does notch depth), but performance is satisfactory over the range. I don't know of any other notch filter that's so easily adjusted, which makes this an excellent candidate for removing a 'nuisance' frequency such as 50/60Hz hum.

Fliege notch filters have unique phase performance. As the frequency increases towards the notch, the phase remains at 0° - in phase with the input. Once the notch frequency is passed, the phase is -360° - again exactly in phase with the input. No other notch filter I've looked at does this.
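The quoted tuning range is consistent with the notch frequency scaling as 1/√Rt when only Rt is varied. That scaling is my inference from the numbers given (2.25kHz at 5k, 1.59kHz at 10k, 1.13kHz at 20k), not a formula stated in the text:

```python
from math import sqrt

# Fliege notch tuning: frequency appears to scale as 1/sqrt(Rt),
# with 1.59kHz nominal at Rt = 10k (per the text).
f_nom, rt_nom = 1590.0, 10e3
freqs = {rt: f_nom * sqrt(rt_nom / rt) for rt in (5e3, 10e3, 20e3)}
for rt, f in freqs.items():
    print(f"Rt = {rt / 1e3:.0f}k -> {f:.0f} Hz")
# 5k gives ~2.25kHz and 20k gives ~1.12kHz, matching the quoted range
```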
There are many, many more filter types. Some are extremely obscure (but interesting), and there are no doubt others that richly deserve their obscurity. It would not be sensible to even try to cover them all, and with a few exceptions most will never even be considered as candidates for your next project. Some of the better known types are covered here, others are mentioned only in passing.

The Fliege filters shown below are interesting - gain is fixed at two, but the frequency and Q are (at least to some extent) independent. The Q can be changed with a single resistor scaled to the frequency tuning resistors, as shown below. If RQ is half the value of Rt (the tuning resistor), the Q is 0.5 - a Linkwitz-Riley alignment.

Frequency is set with Rt and Ct, and they are conveniently the same values we'd use for a single pole filter. RQ sets the filter Q (surprise), and if set to 10k in the example, the Q is 1. When set to 7.07k as shown, the Q is 0.707 - very easy and convenient. Considering the requirement for two opamps, it's unlikely to be adopted for crossovers or many other audio applications, but it is interesting nonetheless (or at least I think so). Fliege filters can also be configured for bandpass or notch.

Another obscure design is the Akerberg-Mossberg filter. This is an easy topology to use, but it requires three opamps. It is easy to change the gain, the type of low-pass or high-pass filter (Butterworth, Chebyshev or Bessel), and the Q of band-pass and notch filters. The notch filter performance is not as good as that of the twin-T, but it is supposedly less critical. While undoubtedly useful, the details are not included here, because there seems to be little application for audio circuits.
One filter that does require further explanation is the Cauer or elliptic filter. As the basis for the NTM™ (Neville Thiele Method) crossover, and a very common anti-aliasing filter for analogue-to-digital conversion, it deserves some attention. It is an interesting filter, in that it is the only one to have ripple in the stop band. Pass band ripple is common with high-order Chebyshev filters, but no other filter has ripple in the stop band - beyond the cutoff frequency. This is produced because the filter is typically a combination of a (more or less) traditional Sallen-Key filter, followed by one or more notch filters, all tuned to operate beyond the cutoff frequency.

The following example uses a Sallen-Key 12dB/octave filter, followed by a state-variable filter. The summing amplifier adds the high pass and low pass outputs together, resulting in a notch because they are out of phase. Changing the value of R13 (68k) changes the position of the notch ... a lower value reduces the notch frequency, but increases the level of the rebound (see Figure 7.3).

Only the low pass filter is shown - the requirements for a high pass equivalent are met by the usual technique of reversing resistors and capacitors for the primary frequency, and changing the frequency for the notch filter(s). Admittedly, this is not especially easy, but a complete description of both types is not warranted here.

The red trace is the Cauer response - as is immediately obvious, it rolls off more sharply than the fourth order filter after the cutoff frequency, but 'rebounds' at about 6kHz. While the rebound (or bounce) appears disconcerting, with higher order filters it's not really a concern. Even here, the peak level is at -40dB. Note that the rolloff slope after the bounce is 12dB/octave, not 24. This is because the state-variable filter is used to produce the notch, and does not add a further 12dB/octave. The green trace shows the level when the state-variable filter is used as an additional 12dB/octave filter, giving 24dB/octave in total.

The turnover frequency is a little lower than the expected 1.59kHz (it's 1.48kHz), but that's because the filter was optimised for the 24dB/octave response shown in green. The faster rolloff of the Cauer filter is very pronounced, especially beyond 3kHz. At 4kHz, the level is 44dB below that at 2kHz, but it would be incorrect to say that the rolloff is 44dB/octave, because it changes - very rapidly as the notch frequency is approached (4.1kHz in this example).

While I have only shown a basic 24dB/octave version, it's not uncommon for Cauer filters to be 6th order or above. As the order is increased, the bounce is reduced further, and this is common for anti-aliasing filters. The much sought-after 'brick wall' filter is almost achieved with this topology.
Inductors are without doubt the worst of all electronic components. Not only are they bulky, but they pick up noise from any nearby source of a magnetic field. Inductors also have significant resistance, and often high inter-winding capacitance as well. When used for RF applications, the values needed are typically very low and it's easy enough to minimise the deficiencies. At audio frequencies, the failings of inductors make themselves well known.

One solution for 'line level' applications, where the voltage and current are low, is the simulated inductor. By configuring an opamp and capacitor appropriately, the combination can be made to act just like a real inductor, but with fewer shortcomings. This is commonly known as a simulated inductor or a gyrator. When used with a capacitor, 'traditional' LC (inductance-capacitance) filters can be created, and these are common building blocks in many filter applications.

The generalised circuits are shown below, one using only an emitter follower (cheap and cheerful), the other the 'real' alternative using an opamp. The response shown is based on the generalised circuit shown below the two gyrators. It's a parallel resonant circuit with a 10k feed resistance. The formula for resonance is also shown in the drawing. Gyrators can be used as an inductor alone, or in series or parallel resonant circuits ... provided the 'inductor' is earth/ground referenced.

As you can see from the response graph, the single transistor version is nowhere near as good as the one using an opamp. However, it's cheap, and in many cases will work just fine - depending on your application of course. In reality, the cost difference is minimal, because most opamps are inexpensive (and you save one resistor as well). The basic formula for determining inductance is ...
L = R1 × R2 × C1 Henrys (where resistance is in ohms and capacitance is in Farads)
For the above example, the simulated inductors are nominally 1H, but the transistor version is actually slightly less because the gain of an emitter follower is typically only about 0.98 instead of unity. The circuits can be wired for series or parallel resonance, but the 'inductors' are earth (ground) referenced. If you need a floating inductor, there is a circuit that can be used, but it's considerably more complex. For a great many equalisers and the like needed in audio, having the inductor earth referenced is not usually a problem.
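The inductance formula is easy to try out. The component values below are illustrative only - they are chosen to give the nominal 1H mentioned in the text, and are not read from the figure; the resonating capacitor is likewise an assumption:

```python
from math import pi, sqrt

# Simulated inductor (gyrator): L = R1 * R2 * C1.
r1, r2, c1 = 10e3, 100.0, 1e-6    # assumed values giving 1H
L = r1 * r2 * c1                  # inductance in Henrys

# Resonating the 'inductor' with a capacitor gives the usual LC resonance.
c_res = 1e-6                      # assumed resonating capacitor
f_res = 1 / (2 * pi * sqrt(L * c_res))
print(f"L = {L:.2f} H, resonance = {f_res:.1f} Hz")
```

With 1H and 1µF this gives a resonance near 159Hz, per the standard LC formula.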
Simulated inductors are not immune from 'winding resistance', although it is obviously not due to the resistance of a coil of wire. R2 is needed for the circuit to work, and is directly equivalent to winding resistance. Although some opamps will be able to work with values lower than the 100 ohms shown, there is a risk of instability if R2 is made too low. In general, 100 ohms is a reasonable compromise, and works well in practice.

If you wish to know (a lot) more about this approach, see the Gyrator Filters article, which covers them in much greater detail than this short introduction.
It's hard to think of this as a filter, since it leaves the frequency response unchanged. Only the phase of the signal is varied, and with this comes a potentially useful time delay. Although the delay is short, it can be used to 'time align' drivers whose acoustic centres are separated far enough to cause problems.

Version 'A' produces a lagging phase. That means that the output signal occurs after the input. For the values shown, the delay is about 155µs with a 1.59kHz signal. Version 'B' has a leading phase - the output signal occurs before the input. While this seems impossible, for a signal that lasts more than a few cycles it really does happen. In the second example, the output occurs 155µs before the input (but only after steady-state conditions are established).

The circuit is shown above. It is a simple circuit, and easily incorporated into a system if needed. R1 and C1 can be exchanged as shown in 'B', which simply reverses the phase. Instead of having 0° shift at low frequencies, there is 180° and vice versa. The advantage of the second circuit is that R1 can be replaced with a pot, allowing the phase at 1.59kHz to be varied from 180° (pot shorted) down to around 12° with a 100k pot. When the pot is set for minimum resistance, C1 is effectively connected straight to ground, and may cause the driving opamp to become unstable. You need to verify that the driving circuit remains stable in your design.
The leading phase angle of the second circuit makes it unsuitable as a time delay - for that, you might use several of the 'A' circuits in series to get the desired delay. It must be understood that the time delay is the result of phase shift, so it varies with frequency. At one octave either side of 1.59kHz (i.e. 795Hz and 3.18kHz), the delay is roughly 180µs and 110µs respectively.
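The frequency dependence of the delay can be sketched from the standard first-order all-pass phase expression. R = 10k and C = 10nF are my assumptions (consistent with the 1.59kHz tuning used throughout the article):

```python
from math import atan, pi

# Phase delay of the lagging ('A') first-order all-pass: the output
# lags by 2*atan(2*pi*f*R*C) radians, so the delay at frequency f is
# that phase divided by the angular frequency.
R, C = 10e3, 10e-9   # assumed tuning components

def allpass_delay(f):
    phase = 2 * atan(2 * pi * f * R * C)   # radians of lag
    return phase / (2 * pi * f)            # seconds

for f in (795.0, 1590.0, 3180.0):
    print(f"{f:.0f} Hz: {allpass_delay(f) * 1e6:.1f} us")
```

This gives roughly 185µs, 157µs and 111µs, close to the 180µs/155µs/110µs figures quoted in the text.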
Above, you can see the input signal (red), and the outputs of the two versions of the all-pass filter (lead and lag). The time response is established within half a cycle, so by the completion of the first full cycle, the leading and lagging time delay is clearly visible. The leading trace (green) is 159µs before the input, and the lagging trace (blue) is 159µs after the input. This amount of time may seem insignificant, but it represents the time taken for sound in air to travel about 55mm.

By adjusting the values to suit the crossover frequency, it is possible to obtain pretty close to perfect time alignment. This may be necessary if the acoustic centres of the loudspeaker drivers cause the relative outputs to be out of phase by less than 180°. It is usually the tweeter signal that has to be delayed to match the midrange (or mid-bass) driver. The details of how to achieve this are outside the scope of this article.
Digital filters are not new, but with cheap digital signal processor (DSP) ICs now available, they are becoming very common. In many cases, the end-user is completely unaware that digital filters are in use, because they are commonly integrated within equipment. Surround-sound, room 'correction' (which cannot and does not work! ¹), tone controls and many other functions are now implemented using DSPs, rather than analogue circuits. Indeed, many of the functions (whether useful or not) can't even be done using analogue processing, because the cost and circuit complexity would be far too high. Some filter implementations are simply impossible with analogue processing.

The design and implementation of digital filters is worthy of a complete book, and indeed there are many such books available. I do not propose to even attempt to explain these filters, other than in general terms. Although not exactly outside the scope of DIY, it requires dedicated hardware and software to calculate the filter coefficients and to program the DSP.

There are basically two different types of digital filter, known as 'finite impulse response' (FIR) and 'infinite impulse response' (IIR). Analogue filters are essentially IIR types, and IIR digital filter coefficients are commonly derived from the analogue equivalent. All digital filters rely on digital delay lines, plus addition, subtraction and/or multiplication in software. Although all the processes needed can be performed by general purpose processors, DSP chips are optimised for these functions, so they generally require far less code than would be needed for the same function performed by the general-purpose microprocessor in a home PC (for example). Basic digital filter characteristics are as follows ...

Finite Impulse Response (FIR) filters

Infinite Impulse Response (IIR) filters
When a signal that is to be filtered is analysed, it's usually easy to decide which type of digital filter is best for the application. If phase characteristics are important, then FIR filters must be used because they have a linear phase characteristic. FIR filters are of higher order and more complex. If it's only the frequency response that matters (for example to replace an analogue filter), IIR filters may be a better choice because they have a lower order (less complex), and are therefore easier and cheaper to implement. While there are many claims that phase is somehow 'important', that is often not true at all. Relative phase (between two frequency bands for example) is important, but is not an issue with IIR filter implementations if done properly.
FIR filters have the advantage that they are always stable, but they require greater hardware resources. FIR filters use a mathematical function referred to as convolution - where the final function is a modified form of one of the two original functions. FIR filters have no analogue counterpart, and can be designed to do things that are impossible with any analogue filter. An example is to build a filter with a steep rolloff slope, but with linear phase shift (even if it's not needed for audio).

IIR filters use recursion (feedback), and while this makes the functions more efficient (requiring fewer computing resources), it also means that the final filter may not be stable. IIR filters are virtually identical to conventional analogue filters, and it is not possible to remove phase shift from the output.
A filter using convolution (FIR) requires a separate processing section and delay for each sample being processed, and uses only the input samples in the equations. In contrast, a recursive filter (IIR) uses both input and output samples because of the feedback, and therefore requires fewer processor resources. As noted, this can lead to instability and also 'limit cycles' - basically a form of non-harmonic distortion resulting from quantisation errors that may circulate within the DSP filter block.
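The distinction between the two structures can be sketched in a few lines of code. This is only an illustration of the two difference equations (convolution versus recursion) - the coefficient values below are arbitrary, not a designed filter.

```python
def fir_filter(x, h):
    """FIR: convolve the input with the impulse response 'h'.
    Only input samples are used - no feedback, so always stable."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k in range(len(h)):
            if n - k >= 0:
                acc += h[k] * x[n - k]
        y.append(acc)
    return y

def iir_filter(x, b, a):
    """IIR: uses input samples (the 'b' coefficients) AND previous
    output samples (the 'a' coefficients) - recursion/feedback."""
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y.append(acc)
    return y

# A unit impulse shows the difference in behaviour:
impulse = [1.0] + [0.0] * 7
print(fir_filter(impulse, [0.25, 0.5, 0.25]))   # response is finite (3 samples, then zero)
print(iir_filter(impulse, [0.5], [1.0, -0.5]))  # response decays forever (never exactly zero)
```

Feeding an impulse through each filter shows why the names are what they are: the FIR output is exactly as long as its coefficient list, while the IIR output keeps circulating through the feedback path indefinitely.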
It has been claimed (for example [11]) that digital filters are far superior to analogue filters because they "are not subject to the component non-linearities that greatly complicate the design of analogue filters". While this is true up to a point, it also ignores the fact that digital filters are subject to quantisation errors and all the other issues that all digital systems can suffer from. Not the least of these is headroom. Most DSPs operate from 5V or 3.3V, so the level is limited to an absolute maximum of 1.77V or 1.17V RMS, more than 15dB lower than can be used with analogue filters using common opamps.
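The headroom figures quoted above follow directly from the supply voltage: the peak-to-peak swing cannot exceed the supply, so the maximum sinewave level is the supply voltage divided by 2√2. The ±15V/10V RMS comparison figure below is simply a typical value for opamp circuits, not taken from the text.

```python
from math import sqrt, log10

def max_rms(supply_v):
    # Peak-to-peak swing is limited to the supply rail,
    # so RMS (sinewave) = Vpp / (2 * sqrt(2))
    return supply_v / (2 * sqrt(2))

print(round(max_rms(5.0), 2))   # 5V DSP supply  -> ~1.77 V RMS
print(round(max_rms(3.3), 2))   # 3.3V DSP supply -> ~1.17 V RMS

# Compared with an analogue filter on (assumed) +/-15V rails, good for
# roughly 10V RMS, the difference in headroom in dB is:
print(round(20 * log10(10 / max_rms(5.0)), 1))  # a little over 15 dB
```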
However, as noted above, digital filters can have far greater rolloff slopes and much higher complexity than analogue equivalents, and FIR filters can be configured as linear-phase so there is minimal phase shift through the filter. Digital filters can be configured to do things that are simply impossible with an analogue design. Because digital filters rely on signal delay, there is an inevitable latency (time delay) as the signal passes through the filter, analogue to digital converter (ADC) and digital to analogue converter (DAC). Most digital filters also require an analogue low-pass filter ahead of the ADC to prevent aliasing.

Some proponents of the digital approach may claim that the FIR filter's linear-phase characteristic is ideal for audio. However, it should be remembered that the phase of a typical audio signal is virtually random, and eliminating phase shift is of no practical benefit. There is no evidence that the normal phase shift introduced by any (sensible) analogue filter is audible in a blind test.
Overall, the digital approach is likely to cost more for typical audio applications such as electronic crossovers. There are DSP boards available that can easily be configured as crossovers, with optional equalisation in some cases. The end result may well be very good, but it's close to impossible to truly understand what's going on, and little is learned along the way (other than how to drive PC based software to configure the DSP).

Because of the low output level, which may not be sufficient to drive a power amp to its maximum, additional analogue circuitry is needed to restore the level, and the digital circuitry must be operated at a level that guarantees that 0dBFS (the maximum digital (full scale) level without clipping) is never exceeded. This might mean that the maximum level may need to be kept below perhaps 500mV, and most of the time the level will be a great deal less at normal listening levels.

Of course, once the signal is in the digital domain (after the ADC), any other effects that might be needed are easily accomplished. For example, a digital crossover network can be configured with the necessary time delays to 'time align' the loudspeakers, or to apply equalisation as needed to obtain a flat frequency response. Great care is still required though, because it's easy to apply radical EQ to 'correct' a poor loudspeaker, and while the end result might be flat, it may also sound like a bucket of bolts. Despite claims you may see, digital processing cannot make a silk purse from a sow's ear - a crappy speaker is still crappy no matter how much technology you throw at it!
Digital filters can be used to re-create any analogue response (Butterworth, Bessel, Linkwitz-Riley, Chebyshev, 'inverse' Chebyshev, elliptic (Cauer), low pass, high pass, band pass, band stop (notch), etc., etc.). As explained above, responses and functions can be created in the digital domain that are simply impossible with analogue. Despite all the apparent advantages, it does not follow that digital is necessarily 'better'. Indeed, if the DSP isn't capable of at least 32-bit precision the digital realisation may be a great deal worse, and there is always the additional circuitry (and low signal level requiring additional amplification) that just means that there are a great many more things to go wrong.
There is no doubt that digital filters are immensely useful, and it's expected that entire subsystems will become more powerful and cheaper over time. It's already possible to get fully configured boards and software to drive them quite cheaply (less than $100), and these will eventually replace many analogue designs. Whether they are 'better' than an analogue implementation for 'routine' applications such as electronic crossover networks is subject to some debate - as is to be expected. As always, many of the claims and counter-claims are based on purely subjective testing, without a great deal of science. Most readers will know that I consider subjective claims to be pointless at best, and they are often highly misleading.

I do not propose to cover digital filters in any more detail than has been presented. People who are interested in more information are encouraged to do a web search - there is a vast amount of information available. Be warned that much of what you will find is extremely technical, and assumes that the reader is already acquainted with digital techniques and understands the complex maths involved. It's worth noting that I've sold a great many Project 09 analogue crossovers to people who've tried DSP and were disappointed with the results when listening to music. Unless the DSP has a sufficiently high sample rate and bit-depth, digital 'artifacts' are more likely than not, made worse when complex functions are implemented.
As noted earlier, all filters affect the transient response of the signal passed through them. As the order and Q are increased, the transient response becomes worse, with clearly visible ringing on an impulse waveform. While this can often look very scary ("that must ruin the sound"), in reality it's not really a problem for most of the filters we use. Part of the problem is that the typical test waveform is a pulse, and while that does show the problem, it makes it appear much worse than it really is. Music does not consist of very narrow pulses that have infinitely short rise and fall times, but tends to be relatively smooth. Even musical transients do not have very fast rise times, because the instruments do not have fast rise times and the recording process uses filters to limit the maximum frequency. This reduces the maximum possible risetime of any signal that passes through.
Although it is possible to record a single 50µs pulse (half a cycle of 10kHz), loudspeakers can't reproduce it even if it were to get through the recording chain. We would also be hard pressed to hear it - such a signal would only sound like a click, provided the level was high enough of course. It takes time before the ear-brain combination can recognise that a signal exists as a tone or the sound of an instrument. Nevertheless, transient response will be examined here, warts and all.

More to the point, while the 'standard' test signal shows the effect, it is totally unrealistic. Being of only one polarity, it is completely unlike any normal signal in audio. There is no musical instrument that can produce such a waveform, and no microphone that can record it intact.

The term 'steady-state', if used strictly, describes a waveform that has existed for eternity. Any disturbance (such as switching it on or off) introduces transient effects. In most cases, steady-state conditions can be seen to exist after a number of cycles of a sinewave. Minor disturbances will not usually be audible, because the signal needs to exist for a period of several cycles before we can interpret it as a particular tone. This is highly dependent on the frequency and amplitude of the signal and its harmonics.
The impulse used for the above was a 1V peak, 200µs wide impulse (green trace). The filter used was 24dB/octave with a cutoff frequency of 1.66kHz, and is approximately Butterworth. Even a Linkwitz-Riley alignment shows a (very) small amount of ringing, but it is negligible in real terms. The red trace shows that the filter is triggered into a heavily damped oscillation at a frequency just below the cutoff frequency (in this case, at about 1.3kHz). As Q is increased, the ringing becomes worse, but since high Q filters are not generally used in audio, they can be ignored for the purposes of this article.

What is more important is the overall change to a normal signal. While music is not steady-state, for most filters it takes only a couple of cycles for steady-state conditions to be established. For the filter used for Figure 9.1, it takes only one half-cycle at 1kHz before the output signal reaches (approximately) steady-state conditions. When the input signal is above the cutoff frequency, it takes a little longer for the signal to settle down - at 2kHz, 2½ cycles are needed before steady-state conditions apply. This gets progressively worse as frequency is increased, but the filter is also reducing the amplitude of the signal above cutoff, so the effects become immaterial. For example, we don't really care if it takes 3 days for a 20kHz signal to settle from a 1.66kHz filter, because the filter has effectively removed the signal anyway (20kHz is about 88dB down with the test filter).

A high pass filter also affects the transient response. Figure 9.2 shows the same pulse, applied to a 70Hz, 24dB/octave high pass filter. Again, the red trace is the filtered response, and green is the applied pulse. Because the test impulse is unidirectional, the effects shown are far worse than will ever be experienced by a real filter handling music signals. The majority of the disturbance seen is a direct result of using a single pulse of only one polarity.

While it is simple enough to create a somewhat more realistic test waveform, there really isn't much point. The simple fact is that filters affect transient response, and it does not matter if they are active, passive or digital. Passive filters are the hardest to control, and if the load is a loudspeaker it presents a different impedance depending on frequency, and will therefore be far less predictable.

Suffice to say that all filters create deviations in transient response, but provided filter Q is kept reasonably low, the effects are generally completely inaudible. Filters with a Q of 0.5 (sub-Bessel) are as close to benign as it is possible to achieve while still maintaining useful frequency response and crossover performance. Low frequency high pass filters (for example, infrasonic filters, speaker excursion limiting filters, etc.) introduce phase shift (as do all filters), but their transient response does not usually significantly affect signals within the normal audio range.

While transient response is obviously important, I can find no evidence that listeners are able to detect any statistically relevant differences in a properly conducted blind test. Vast numbers of people listen to vented (ported) loudspeaker enclosures, and their transient response is dreadful. However, it must be considered that bass signals hardly qualify as 'transient', because they are rather slow by nature. While commonly used by reviewers, the term 'fast bass' is an oxymoron.
Because it's pertinent to this discussion (and because I can do it easily), I recorded the waveform distortion of a 723Hz 3-cycle tone burst, passed through a 723Hz 6dB/octave filter (2.2k and 100nF). This is the most benign of all filters, yet the waveform distortion is clearly visible in the above two oscilloscope captures. The image on the left is the high-pass section, and on the right is the low-pass. Note that the input waveform is exactly 3 cycles, and it starts and stops at exactly zero volts.
This is a more realistic test than using a single polarity pulse. The waveform still shows the effect clearly, but it will never be found in isolation in music. This notwithstanding, the effect of the filters is audible, as you would expect from any filter. There is also a phase shift of 90°, with the high pass output leading the low pass output. A single 3 cycle burst sounds like a click, but at 723Hz the tonality of the signal is just audible. There is also an 'undertone' created by the stop-start nature of the waveform. The filter changes the sound simply because it is a filter. The low pass filter accentuates the non-harmonic 'undertone' that is created by the burst waveform, and the high pass version removes it.
This shows quite clearly that even a first order filter (6dB/octave) will cause transient distortion. The above results can be duplicated easily, and a simulation gives identical results to those captured on the oscilloscope. For those who remain dubious, I recommend that you either run the test yourself if you have the equipment, or at least perform simulations to verify that these effects are very real. A tone-burst gate is shown in Project 143, and Project 86 describes a simple audio oscillator. Both are ideal for this type of test.

Higher order filters do exactly the same thing, but the effects are more pronounced. However, even a 24dB/octave (4th order) filter will show the second cycle from both high and low pass sections to be exactly equal and in phase with each other. Only the first and last cycles are affected in a tone-burst test. Note that any waveform disturbance when the tone burst ends is always after the input stimulus has ended (the filter is not pro-active, and can't make a change before the stimulus has started or stopped). Lest the reader assume from this that a full-range driver is 'better' because it needs no crossover network, these have many other compromises and will rarely (if ever) match a decent 2-way or 3-way system using active crossover networks.
Of all the filter orders, only first order (6dB/octave) types will produce an output waveform that's identical to the input when the outputs of high-pass and low-pass filters (tuned to the same frequency) are recombined. This is often used as a 'reason' that we shouldn't use anything else, but first order filters are almost always too gentle to be useful for the majority of applications.
Filters affect the phase of the signal, and in so doing also affect the time it takes for the signal to pass through the filter. This time is called 'group delay', and is described in the next section.
Group delay is best described as the delay difference between one group of frequencies and another different group of frequencies (e.g. above and below 2kHz). To use the analogy of John L Murphy (True Audio), imagine if the treble was heard instantly, but the bass was delayed until the same time tomorrow (24 hours). This would be audible to everyone. All normal filters (and even some loudspeakers) can be expected to have a delay much less than this, and group delay is not generally a problem.
Above we see a Butterworth (red) and sub-Bessel (green) filter. Only the low-pass section is shown, and only as a matter of interest. There is nothing that can be done to change group delay for a given filter type, and if that filter type is needed to produce a specific response then you are simply stuck with it. Like phase shift, group delay comes free with all filters as a matter of course.
There is a table (below) that gives the approximate thresholds of audibility for group delay, and the data were compiled by Blauert and Laws [7]. There is not a lot of research into this for some reason, but there's little or nothing that can be done about it. The group delays of most filters are well below the threshold of audibility based on the available data.
Frequency | Audibility Threshold | No. of Cycles
500 Hz    | 3.2 ms               | 1.6
1 kHz     | 2 ms                 | 2
2 kHz     | 1 ms                 | 2
4 kHz     | 1.5 ms               | 6
8 kHz     | 2 ms                 | 16
The table shows the minimum group delay that is thought to be audible, along with the number of cycles at that frequency. Any delay time less than shown will not be heard, however there may be exceptions if the delay causes an anomaly in the frequency response. If this is the case, it will be detected as a frequency response error - not a time delay. Although there appears to have been surprisingly little testing in this area, it is generally thought that human hearing is not especially sensitive to short time delays. As frequency is increased or reduced around 2kHz (the most sensitive frequency), greater delays are required before they become audible.

Audibility of group delay depends on the source material. Sharp impulse sounds can sound 'blurred' if there is too much delay between the low and high frequencies, but you may not hear any significant change if the source material has no transients. It's probably safe to assume that if the group delay never exceeds (say) 0.5 of a cycle at any frequency, it won't be audible. This is a far stricter criterion than we see in the above table, but it's not unreasonable. Some speaker designers consider that up to two complete cycles is "probably ok" (and they are probably right), and a typical vented speaker enclosure (the vent, box and loudspeaker create an acoustic filter) has far more group delay than most filters.

One complete cycle at 50Hz is 20ms, so two cycles will take 40ms. At 20Hz, a single cycle is 50ms and two cycles take 100ms. You can work out the cycle time for any frequency and take it from there. In the table above, anything over 1.6 cycles at 500Hz is at the threshold of audibility, but at sub-bass frequencies (below 40Hz) our hearing is not at all sensitive to the delay. There is little or no empirical data though, and the above table is pretty much all that anyone has to work with ... you'll find the same data all over the Net.
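The 'cycles of delay' arithmetic above is trivial to automate, which makes it easy to check any quoted group delay figure against the rule-of-thumb thresholds:

```python
def cycle_time_ms(freq_hz):
    """Duration of one complete cycle at the given frequency, in ms."""
    return 1000.0 / freq_hz

def delay_in_cycles(delay_ms, freq_hz):
    """Express a group delay as a number of cycles at that frequency."""
    return delay_ms / cycle_time_ms(freq_hz)

print(cycle_time_ms(50))          # 20.0 ms per cycle at 50Hz
print(cycle_time_ms(20))          # 50.0 ms per cycle at 20Hz
print(delay_in_cycles(3.2, 500))  # 1.6 cycles - the 500Hz threshold in the table
```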
Figure 10.2 shows the group delay for the P99 36dB/octave infrasonic filter. This is a very high rolloff filter, and the group delay looks pretty bad at 1Hz ... until you realise that the theoretical output level at that frequency is -120dB. Group delay is 24ms at 20Hz (50ms cycle time), 29ms at 18Hz (55.5ms cycle time) and 51ms at 10Hz (100ms cycle time). This is close enough to the 1/2 cycle limit that I set above, and will normally be completely inaudible. Room effects and enclosure design will cause far more havoc than a 1/2 cycle delay.

It also has to be understood that if you have a serious problem with infrasonics (for example), then a filter can only improve matters. Anything that fixes a known (and audible) problem can only ever improve the system overall. It's very rare that the cure is worse than the disease.
As most readers will be aware, nothing in life is perfect. While active filters are almost always preferable to their passive equivalents at audio frequencies, there are limitations. Many of these are due to the opamp used in the filter circuit, although it doesn't often show up during simulations or real-life testing. However, it is important that these limits are known, because in some circuits it can make a big difference.
Every opamp has a limited frequency response and a non-zero output impedance. The main reason for the upper frequency limit is the internal stabilisation capacitor, although it may be external with some devices. Either way, the open loop gain (and frequency response) falls at 6dB/octave at frequencies above a lower limit of between 10 and 1,000Hz. This stabilisation is necessary to ensure that the opamp doesn't oscillate, and needs to be fairly brutal for stable unity gain operation. If an opamp is properly selected, high pass filters usually behave exactly as expected, but low pass filters may show sub-optimal performance in the stop band.

The reduced HF gain has two effects - because there is less feedback, distortion is higher and output impedance rises. Any low pass filter that relies on a low opamp output impedance will eventually fail to maintain the desired rolloff rate, and will 'bottom out' at a frequency determined by the opamp's characteristics. The Sallen-Key low pass filter is particularly susceptible to this issue, as shown in the following drawing and graph. Some other filter topologies are also affected, including the multiple feedback bandpass. MFB low pass filters are slightly affected, but not as badly as the Sallen-Key. Rather than the signal showing a 'rebound', the slope changes from 12dB/octave to 6dB/octave when the opamp runs out of steam.
The output impedance of the opamp is exaggerated for clarity, but the effect is very clear. The -3dB frequency is 1.59kHz as before, and the dip occurs at 16kHz. As the frequency increases further, the output level is eventually determined solely by the combination of R1 and Zout, which forms a simple voltage divider in the example circuit. At the frequencies where we see problems, C2 has a very low impedance (710 ohms at 16kHz), and C1 becomes more-or-less redundant because the opamp can't do anything useful any more.
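The numbers here are easy to check with the capacitive reactance formula Xc = 1/(2πfC). Note that the article doesn't give the actual component values for this example, so the C2 value below (~14nF) is an assumption chosen to be consistent with the quoted 710 ohm figure, and the R1/Zout divider values are purely illustrative.

```python
from math import pi

def xc(freq_hz, cap_farads):
    """Capacitive reactance: Xc = 1 / (2*pi*f*C)"""
    return 1 / (2 * pi * freq_hz * cap_farads)

# ASSUMED value: ~14nF matches the 710 ohm impedance quoted at 16kHz.
print(round(xc(16e3, 14e-9)))  # close to 710 ohms

# Once the opamp 'runs out of steam', R1 and its output impedance Zout
# act as a plain voltage divider (values below are illustrative only,
# with Zout deliberately exaggerated as in the article's example).
r1, zout = 10e3, 100.0
ratio = zout / (r1 + zout)
print(ratio)  # the 'floor' the stop-band level bottoms out at
```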
The real-life situation is more complex of course, because Zout is not a simple resistance and it increases with increasing frequency as the opamp runs out of gain. This is shown in the light grey trace. The 'rebound' level will not continue to increase indefinitely though, but the only way to know exactly what the circuit will do is to build it. A simulator can only ever get you part of the way.

The important things to understand here are that a) this is very real, and b) it rarely causes a problem. That might come as a surprise, but at audio frequencies the transition frequency (where the output voltage stops falling at 12dB/octave) is almost always well outside the audio band. For example, a lowly TL071 opamp has a transition frequency of 100kHz, where the signal is 66dB down. Yes, as the frequency is increased so is the output level, but there is no audio signal at 100kHz and we shouldn't be concerned that the level has risen to perhaps -40dB at 2MHz.

For audio frequencies, very few opamps (even the worst possible examples) will 'bottom out' at a frequency much less than around 50kHz, showing clearly that the example shown is very pessimistic. However, even with the result shown above, performance is not really compromised. At the worst, the output level is 40dB down, so with a 1V input the output at frequencies above 50kHz is still less than 10mV. Since there is no audio at that frequency, there's still no problem. However, it would not be sensible to use the worst possible opamp, and any opamp designed for audio use will be far better than shown. When filter sections are cascaded, the result is that the 'rebound' occurs at a much lower level. For a 24dB/octave filter, the rebound level will typically be below the noise floor, even for signals above 100kHz.

Where this effect does become important is when one is building or designing test equipment or signal processing circuits that operate at frequencies well above the audio band. Naturally, if this is the case, we will choose a wide bandwidth opamp that's designed for the frequency range that we need. Expecting most 'audio grade' opamps to function properly even at low radio frequencies would be folly.
As with all things in electronics, the effect can be mitigated (or at least minimised) by suitable trickery. For example, we can include a first order filter in front of the main filter circuit, having a turnover frequency that's perhaps 10 to 20 times the design frequency. So, a 1.59kHz filter could have a preceding 6dB/octave filter (with an impedance of less than 1/10th of the primary filter) tuned for some suitably selected higher frequency. In the above example, that would add a series resistor in front of R1 of (say) 560 ohms, followed by a 3.9nF cap to ground (about 73kHz turnover frequency). With this in front of the circuit shown, even a µA741 will achieve an ultimate attenuation of at least 60dB below the reference level (at least according to the simulator I use). The additional filter does change the response ever so slightly, but the effects of that will need to be determined at the design phase.
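The turnover frequency of the suggested 560 ohm / 3.9nF pre-filter follows from the standard first order RC formula, f = 1/(2πRC):

```python
from math import pi

def turnover_hz(r_ohms, c_farads):
    """-3dB (turnover) frequency of a first order RC low pass filter."""
    return 1 / (2 * pi * r_ohms * c_farads)

# The 560 ohm / 3.9nF pre-filter suggested in the text:
print(round(turnover_hz(560, 3.9e-9) / 1000, 1))  # about 73 kHz

# Sanity check against the ratio suggested above (10-20x the 1.59kHz
# design frequency would be roughly 16kHz-32kHz; 73kHz is even safer).
print(round(turnover_hz(560, 3.9e-9) / 1590, 1))
```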
Filters are an ongoing development, with DSP (digital signal processing) now being applied for more complex types. Regardless, the analogue versions are still very much in use, and for DIY applications are generally the cheapest and easiest to use. Performance is every bit as good as a DSP version, but they can't be changed with software coefficients because they must be hard-wired. Of course, many is the claim that digital filters are ever so much better than analogue, and there are just as many counter-claims. I don't believe that either camp is right - both can do the same things. As noted above, digital filters can do things that are impossible with analogue, but are significantly more complex and costly to develop. With the advent of high speed analogue-digital converters, even traditional anti-aliasing filters are often not needed, with a fairly basic filter being adequate. This is achieved by sampling the audio at a much higher than required rate, applying the filter digitally using a DSP, then down-sampling to the required rate (44.1kHz for example).

The hardware basis of analogue filters is rarely a problem for any fixed installation, such as a hi-fi system or a dedicated powered speaker, and the DSP approach is (generally) not cost effective. While even analogue filters can be made adjustable, it's very difficult to get 4-way (or more) ganged pots - and even harder to get them with acceptable tracking. However, it's easy to install machine sockets to allow resistors to be changed if this is needed.

Because of the huge range of different filter types, there is one to suit every need, however obscure. While some of those shown above are suitable for use as a crossover, others are completely unsuitable - often for reasons of cost and complexity. There is no point building a complex filter whose Q can be varied without affecting anything else, because you generally know the Q that's needed for your application before you start. This is determined by the filter topology and the requirements. For an electronic crossover, you need to be able to sum the outputs to get a flat response (generally an absolute requirement, because that's what the loudspeakers will do), so the Q needs to be set accordingly based on the filter slopes.

The Sallen-Key filter is still the easiest to use, and despite its shortcomings is sufficiently well behaved for almost anything needed in audio (for general purpose high and low pass filters at least). While MFB filters are sometimes applied, there is usually no advantage - the required values are more irksome, they are an inverting topology, and IMO offer no benefit to offset the greater complexity. High-pass MFB filters should be avoided altogether. Of course, bandpass MFB filters are ideal and beat most other contenders hands down. State variable filters are probably the most flexible, but need 3 opamps instead of only one for MFB or Sallen-Key types (for 12dB/octave or bandpass filters). The other topologies are interesting, but other than specialised applications, are generally not especially useful for audio/hi-fi applications.

It's obviously necessary to ensure that the active element you use (usually an opamp) is up to the task. For example, using a µA741 for an RF filter would be ill-advised, because it simply isn't fast enough. Equally, using a very fast current feedback opamp designed for RF work would be just as silly in an audio circuit. The information here is simply an introduction to the various filter types and topologies, and it's up to the designer to select an opamp that will provide the best compromise for the intended application.
Notch filters are a somewhat unique application, especially the twin-tee. These are the basis of many distortion analysers, and this topology is the only passive R/C filter that offers a fairly high Q along with close to infinite attenuation of the tuned frequency. Adding feedback improves the Q, so the 'stop band' can be limited to just a few Hertz, with everything else passing through with little or no attenuation.
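For the standard symmetrical twin-tee (two resistors R and a shunt of R/2, with two capacitors C and a shunt of 2C), the notch frequency is 1/(2πRC). The 16k/10nF values below are only an illustrative choice, giving a notch near the 1kHz commonly used in distortion analysers.

```python
from math import pi

def twin_tee_notch_hz(r_ohms, c_farads):
    """Notch frequency of the standard twin-tee network
    (R, R, R/2 with C, C, 2C): f = 1 / (2*pi*R*C)."""
    return 1 / (2 * pi * r_ohms * c_farads)

# Illustrative values: 16k and 10nF place the notch close to 1kHz.
print(round(twin_tee_notch_hz(16e3, 10e-9)))  # roughly 995 Hz
```

In practice the notch depth depends heavily on component matching, which is why distortion analysers built around a twin-tee usually include trimming.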
In short, there is an active filter for just about any audio frequency application imaginable, and it's up to the system designer to adopt the one(s) that best suit the specific needs of the final design. As noted earlier, the term 'audio frequency' does not mean 'audio' in the hi-fi sense, only that the frequency range is (mostly) within the audio spectrum (± a couple of octaves).

Several references were used while compiling this article, with many coming from my own accumulated knowledge. Some of this accumulated knowledge is directly due to the following publications:
1 - National Semiconductor Linear Applications (I and II), published by National Semiconductor
2 - National Semiconductor Audio Handbook, published by National Semiconductor
3 - IC Op-Amp Cookbook - Walter G Jung (1974), published by Howard W Sams & Co., Inc. ISBN 0-672-20969-1
4 - Active Filter Cookbook - Don Lancaster (1979), published by Howard W Sams & Co., Inc. ISBN 0-672-21168-8
5 - Maxim - A Beginner's Guide to Filter Topologies, Application Note 1762
6 - Texas Instruments - A Single-Supply Op-Amp Circuit Collection, SLOA058
7 - Blauert, J. and Laws, P., "Group Delay Distortions in Electroacoustical Systems", Journal of the Acoustical Society of America, Volume 63, Number 5, pp. 1478-1483 (May 1978)
8 - Analog Devices - OP179/279 Data Sheet, p12
9 - Miscellaneous data sheets from National Semiconductor, Texas Instruments, Burr-Brown, Analog Devices, Philips and many others
10 - Audibility of Group Delay - True Audio forum discussion
11 - Digital Filter - Wikipedia
12 - Bandwidth in Octaves Versus Q in Bandpass Filters - Rane Technical Note 170
13 - Understanding Poles and Zeros - MIT
14 - Real Rational Filters, Zeros and Poles - UCI Math
15 - RIAA Equalisation - Wikipedia
16 - Basic Linear Design - Chapter 8 (Analog Devices)
Recommended Reading
Opamps For Everyone - Ron Mancini, Editor in Chief, Texas Instruments, Sep 2001
Designing With Opamps - Part 2 - ESP
Elliott Sound Products - Amplitude Modulators
'AM' stands for amplitude modulation, the first system used for radio (aka 'wireless') broadcasts. While the AM band may be considered passé for most people, there is still an interest in AM reception, and in particular being able to simulate a waveform that's suitable for testing demodulator circuits. In amongst the articles on the ESP site, there's information in a submitted article for an 'infinite impedance' AM detector, which is capable of much lower distortion than the simple diode demodulator that's common in most receivers. See AM Radio for the details.
The difficulty is that most simulators don't have provision for amplitude modulation in the available signal sources, so it becomes necessary to synthesise a suitable waveform. Those simulator packages that do include AM capability generally require that the details are entered as a formula, which they may or may not include in the help files. There are several versions of amplitude modulators on the Net, but most are completely unsuited to running distortion tests, because the AM carrier has a significant distortion component.

This article shows how you can easily build a very simple modulator circuit, having close to zero distortion. This makes comparisons of the various detection techniques easy, because you have a good starting point. Amplitude modulation seems to be fairly straightforward at first, but the experimenter quickly learns that changing the amplitude of a signal without creating a great deal of distortion is actually very difficult. Voltage controlled amplifiers (VCAs) are a very specialised area, and obtaining good linearity is not an easy task. This limitation extends to the real world as well, so you have to be prepared for some pain if you want to build an AM transmitter circuit.

AM transmitters have used a variety of techniques over the years, but the early ones were both fairly simple and rather clever. This is especially true when one realises that commercial AM transmissions started in 1920, and prior to that there were only a few test transmissions and the idea of 'broadcasting' to a wide audience wasn't considered. The early designs were terribly inefficient, and needed an audio amplifier that could deliver half the power of the transmitter itself (often many kilowatts as broadcasting became popular). This was a major challenge in a time when valves were the only option, and were very primitive compared to what we take for granted today.

However, this article does not cover AM transmitters as such. If you want to know more about them, you will need to do some research of your own. The goal here is to describe methods that can be used to generate a signal in a simulator, so the reader can better investigate the various detectors that are used for AM demodulation.
Firstly, I do show a simplified transmitter, as well as a generalised circuit that seems to be the mainstay of most simulation attempts. Two signals are required for a modulator - the carrier waveform (typically 455kHz, to match the common intermediate frequency (IF) of most superheterodyne AM receivers) and a signal source. The latter will usually be a 1kHz sinewave, but it can be any frequency (or waveform) you like, although it must always be within the normal AM bandwidth. This is usually only around 5kHz, but it can be up to 10kHz if you happen to think that frequencies above 5kHz might just make it through the IF stage of any commercial receiver.
In reality, most struggle to get much beyond 3kHz, but that's largely an issue with the receiver, not the transmitter technology. However, there are limits placed by the various regulatory bodies worldwide on how much bandwidth an AM transmitter can occupy, and that limits the maximum frequency that can be used for modulation. The frequency spacing between different broadcast transmissions is generally 9kHz, although 10kHz is common in some regions. Because there are two sidebands (one each side of the carrier) and these are directly related to the modulating frequency, the practical limit is around 4.5 to 5kHz. (The issue of sidebands is discussed below.)
While I will simply refer to the modulated signal as 'AM', its full title is DSBFC - Double Sideband Full Carrier. This is the standard modulation scheme used for AM broadcasting. If you are looking for information about SSB (single sideband), DSBSC (double sideband suppressed carrier) or other modulation systems, this article won't help you much, but you might get a few ideas as a result. That's a hint, by the way.
Before we try to develop a circuit that's suitable for testing in a simulator, it's useful to understand the basic principles involved. The first requirement is the carrier - the frequency used by the radio station to broadcast its programme material. Each broadcast station has a frequency allocated by the relevant authority, and this must be very accurately controlled. Governments usually charge a license fee for each frequency, and they are tightly controlled. Unauthorised use of any frequency is generally considered a serious offence, so I discourage anyone from setting up their own radio station for the fun of it. Most radio stations sell advertising to pay their costs (and hopefully turn a profit), but in some cases the government itself provides broadcast services (which may or may not involve propaganda, depending on the government).
Some older readers will remember the 'pirate' (unlawful in the eyes of the UK government) radio stations that operated from small ships off the coast of the UK in the 1960s. This was to challenge the British government monopoly over all broadcasts at the time. Commercial licenses have since become available, but at the time they didn't exist. Some (usually 'portable') pirate stations are still operating in the UK, but are uncommon in most other regions.
A real transmitter is a fairly complex piece of kit. Considering that typical AM broadcast stations operate at 10-50kW, they are actually quite fearsome beasts, not even considering the 'shock jocks' that blast the airwaves with their vitriol. Modern systems use advanced techniques to maximise efficiency at all levels, but more traditional designs simply used a very large RF power amplifier, and modulated the DC supply to the RF output stage. A 10kW transmitter needs a 5kW audio amplifier, a significant challenge in the early days of electronics. A simplified version is shown below, and this provides an insight into the process.
The audio transformer used in the simulation is 1:1, and the RF transformer has 1+1:1 ratios - i.e. all three windings are the same. In reality, the low voltage from the transmitter would normally be stepped up to a higher voltage to allow more power to the antenna. This isn't done here for simplicity. The secondary of T2 forms a resonant circuit with C2, and is tuned to the transmitter frequency (1MHz). The antenna load is 50Ω, and the tuned circuit is designed for a Q of 10. A real transmitter will use more sophisticated filters and will also include antenna tuning.
Figure 1 - Simplified High Level Modulation AM Transmitter
The exciter (represented by V2 and an inverter) generates the RF carrier frequency, and in a real transmitter the exciter will be crystal locked and carefully monitored to ensure it remains at the designated frequency. In early systems it was generally a reasonable facsimile of a sinewave, but many systems now use switching (including Class-C, Class-D and Class-E), as well as multiple RF amplifiers that are switched in and out of circuit based on instantaneous demand. However, there will be extensive filtering of the modulated signal before it reaches the antenna to ensure that the carrier waveform is clean, with no significant harmonics other than the sidebands.
The modulated carrier is shown above as well, for 3 cycles of audio at 1kHz. The carrier is at such a high frequency that it looks like a solid block of colour, but it's a continuously varying signal at 1MHz. The next drawing should help ...
Figure 2 - Expanded View of Amplitude Modulation
In the above, you can see what the waveform looks like if the carrier frequency is reduced to 10kHz so the modulation can be seen clearly. This is not visible in any of the other drawings, because the simulations were all done using a 1MHz carrier. The 1kHz modulation envelope is clearly visible (shown in red), but of course it won't be smooth because the carrier frequency is much too low to be useful. Note carefully that the phase of the carrier remains constant, and this is an important factor with AM. Other modulation schemes can look superficially similar, but the carrier phase reverses as the modulation passes through zero.
The modulation system shown in Figure 1 is 'high-level', meaning that significant audio power is needed, and it works out that you need to provide 50% of the carrier power as an audio signal to achieve 100% modulation. However, in reality, 100% negative modulation is never used because if exceeded (even momentarily) it creates interference (called 'splatter' - frequencies at odd multiples of the carrier for a push-pull transmitter). Negative over-modulation also distorts the audio waveform, so there will always be a 'safety factor' of around 10% to prevent the carrier from being reduced to zero. However, positive modulation may be up to 150% (sometimes more), and audio phase switching is often used to ensure that the highest peaks of normally asymmetrical audio signals are phased to ensure positive modulation. In my simulation, the audio power is 4.6W, because the carrier is not fully modulated. As shown, modulation is 71.4%.
To determine the modulation index (m, sometimes referred to as µ) you measure the minimum and maximum amplitude of the modulated waveform. Since the waveform shown in Figure 1 varies between a maximum of 120V p-p and a minimum of 20V p-p, the modulation index (m) is ...

m = ( Vmax - Vmin ) / ( Vmax + Vmin )
m = ( 120 - 20 ) / ( 120 + 20 ) = 0.714 = 71.4%
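The same arithmetic is easily scripted if you want to check measurements taken from your own simulations. A minimal Python sketch (the function name is mine, not from any library):

```python
def modulation_index(v_max: float, v_min: float) -> float:
    """m from the envelope extremes of a DSBFC waveform (p-p or peak values, either works)."""
    return (v_max - v_min) / (v_max + v_min)

# Figure 1 waveform: 120V p-p maximum, 20V p-p minimum
m = modulation_index(120, 20)
print(f"m = {m:.3f} = {m * 100:.1f}%")   # m = 0.714 = 71.4%
```

Note that 100% modulation corresponds to the envelope minimum reaching zero (v_min = 0 gives m = 1).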
V1 is a 1kHz sinewave generator, with a voltage of 20V peak (14.1V RMS), with the 1:1 transformer secondary in series with the DC supply. The sinewave generator is replaced by an audio amplifier for high-level modulated transmitters. The voltage at the RF transformer's centre tap varies from 10V up to 50V in this case, which is the 30V supply modulated by ±20V. Power to the antenna is 17W. Could you build this and would it work? Yes, but there's a great deal missing and it's not something that I'd ever recommend.
Using high level modulation was the only viable option in the early days of radio (aka 'wireless'), because it wasn't easy to make a large amplifier to start with, but making it essentially distortion free (or 'linear') wasn't feasible at the time. The disadvantage is as discussed above - a 10kW transmitter needs a 5kW audio amplifier. The alternative is to modulate the carrier at a low level, then increase the power using a linear amplifier - one having very low distortion.

You may wonder why RF distortion is important in a radio transmitter, but if you recall from audio, distortion means you generate harmonics - frequencies that didn't exist before. If you have a transmitter at 1MHz that has distortion, then there will be harmonics at 2MHz, 3MHz, 4MHz and so on (plus the sidebands generated with amplitude modulation), and these cause problems for other radio stations and interfere with reception. This is especially important when you transmit at high power, because the distortion products will be at levels equal to (or possibly greater than) many legal low power transmitters that operate at affected frequencies.

To put the transmitter power levels into perspective, consider that for a transmitter output power of 10kW (carrier only), the voltage fed to the antenna (50Ω) is 707V RMS at a current of 14A. This is at the radio frequency used, which will be between 526.5 and 1,606.5 kHz in Australia, and similar for medium wave AM broadcasts elsewhere as well. If that sounds a bit scary, work out the voltage and current for 50kW (not at all uncommon for AM broadcasters). Of course there are smaller transmitters as well, but you get the idea.
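The figures quoted come straight from the power equations for a resistive load. A short Python sketch (the helper name is mine; it assumes a purely resistive 50Ω antenna):

```python
import math

def antenna_v_i(power_w: float, z_ohm: float = 50.0):
    """RMS voltage and current into a resistive antenna load (P = V^2/R = I^2*R)."""
    return math.sqrt(power_w * z_ohm), math.sqrt(power_w / z_ohm)

for p in (10e3, 50e3):
    v, i = antenna_v_i(p)
    print(f"{p / 1e3:.0f} kW -> {v:.0f} V RMS at {i:.1f} A")
# 10 kW -> 707 V RMS at 14.1 A
# 50 kW -> 1581 V RMS at 31.6 A
```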
Many modern transmitters use low-level modulation, and this is not covered in detail here. There are some important differences (especially with over-modulation - but it's still a no-no), and low level modulation usually involves the use of a multiplier, where the audio and carrier signals are fed into a linear multiplier IC, providing an amplitude modulated output. Analogue VCAs (voltage controlled amplifiers) are an example of simple multipliers. Linearity is important for both the RF and audio signals.

For the remainder of this article, we will concentrate on circuits that are suitable for simulations, so that detectors can be evaluated. For that we need very low distortion so the performance of the demodulator can be measured, with some degree of confidence that the measured distortion is purely from the detector, and not the modulation source.
The first method shown is based on that used in many of the simulations that you'll see on the Net, and it relies on a transistor to modulate the carrier with the audio waveform. There are simple and complex versions, but most miss out in one important area - there is no tuned circuit to produce a reasonably undistorted carrier wave. This makes any further processing much less accurate, because the result will never be a 'proper' double-sideband AM waveform. The greatest problem is waveform distortion, usually of both the carrier and the modulating waveform. In the drawing, the voltages shown for the two generators are peak values, so the 1MHz carrier is 7.07mV RMS, and the 1kHz modulating voltage is 3.54V RMS.

Despite appearances, this circuit would not work at all well as a modulator suitable for sending audio to an AM receiver. It's intended for use in a simulator. The basic idea could be adapted as a 'real' low-power transmitter, but given its high distortion and generally poor performance it's not worth wasting time on.
Figure 3 - Simple Transistor Modulator
There are countless versions of this circuit on the Net, but only one has been referenced below. Some are (slightly) more advanced, some are incomplete, and all show high distortion. It's certainly simple, but the results are not good enough to test a detector for linearity using a simulator. The voltages are shown so you can check your simulation, and you may need to change R1 to get the optimum collector voltage. Note that the upper modulation frequency is 338Hz (-3dB) set by R4 and C2.
There are also demonstration circuits that use a diode, but that technique only gives a passable AM waveform if a tuned circuit is included - a diode modulator is useless without it. The addition of a simple tuned circuit is easy enough, and the one shown above suits the 1k output resistance to give an acceptable filter Q. Diode modulators also suffer from high distortion of the audio signal, as well as carrier distortion. They are not good enough for a simulation to test demodulators.

The transistor circuit works because the gain of Q1 is changed as its emitter current changes, caused by the audio waveform appearing at the emitter. The amplitude of the carrier waveform is modulated by the transistor's non-linearity. However, the circuit - whether simulated or built with real parts - has poor distortion performance, so the audio and RF waveforms are both distorted. If one does a FFT (Fast Fourier Transform) of the waveform, there are countless harmonics, and it's not really a viable option if you need a nice clean AM waveform. It's obviously pointless trying to determine the distortion from a detector if the audio waveform is already distorted. The tuned circuit is optional, and is described below.
Figure 4 - Transistor Modulator Waveforms (No Tuned Circuit)
In the above, a) shows the waveform at the collector of Q1. The 1MHz RF carrier is at a low level, and only shows up as 'fuzz' on the audio signal, with the amplitude of the fuzz varying over the audio cycle. C3 and R5 are used to filter out the low frequency (audio) component so only the RF gets through to the output. The AM output is shown in b) and you can see that it is distorted - note that this is without the tuned circuit. The distortion is subtle, but the modulated waveform isn't as clean as it should be. In particular, note that the positive and negative peaks are offset slightly. In reality this doesn't matter, because only one sideband is normally detected, but it still demonstrates imperfect modulation.

The missing link is the tuned circuit (a bandpass filter), and when that's added the RF waveform is improved (the symmetry of the RF envelope is greatly improved), but it's still far from ideal. While the tuned circuit makes the RF waveform a lot cleaner, it doesn't help the audio component, so the distortion after detection won't be as low as you need to be able to accurately measure the results from the detector you are working with.

To include a tuned (resonant or 'tank') circuit, you add a capacitor and inductor, with values selected to suit the carrier frequency. For the example shown, we have a 1MHz carrier, and the circuit's output impedance is 1k (determined by R5, although it's really 909Ω for RF). The circuit will have an acceptable Q (quality factor) if the reactance of C4 and L1 is around 100Ω (a nominal Q of 10 with a 1k source impedance). Inductance and capacitance are calculated by ...
L = XL / ( 2π × fo )
C = 1 / ( 2π × fo × XC )
fo = 1 / ( 2π × √( L × C ) )

Where L is inductance, C is capacitance, XL is inductive reactance, XC is capacitive reactance, and fo is resonant frequency
The values of 1.59nF and 15.9µH are close enough to 1MHz (actually 1.00097MHz, but the small error is of no consequence). The spectrum of the waveform with the tuned circuit in place is shown below. For an ideal AM waveform, there should be sidebands at 999kHz and 1.001MHz (exactly 1kHz from the carrier), and the presence of the additional sidebands shows that the audio waveform is distorted.
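The calculation above can be sketched in a few lines of Python, including a check of the small frequency error introduced by the rounded component values (constant names are mine):

```python
import math

F0 = 1e6   # carrier / resonant frequency (Hz)
X = 100.0  # chosen reactance of L and C at resonance (ohms) - Q of 10 with a 1k source

L = X / (2 * math.pi * F0)       # L = XL / (2*pi*fo)
C = 1 / (2 * math.pi * F0 * X)   # C = 1 / (2*pi*fo*XC)
print(f"L = {L * 1e6:.2f} uH, C = {C * 1e9:.2f} nF")   # L = 15.92 uH, C = 1.59 nF

# Resonant frequency of the rounded values actually used (15.9uH, 1.59nF)
fo = 1 / (2 * math.pi * math.sqrt(15.9e-6 * 1.59e-9))
print(f"fo = {fo / 1e3:.1f} kHz")   # within 0.1% of 1MHz
```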
Figure 5 - Spectrum of Figure 3 Modulator With Tuned Circuit
As you can see, there are many sidebands, all at multiples of 1kHz. This shows us that the 1kHz waveform has second, third, fourth, fifth (etc.) harmonics, created by the AF waveform distortion. If you wish to evaluate a detector, this is clearly unacceptable. The upper and lower sideband (USB and LSB) should stand alone with the carrier. Everything beyond these is distortion of the audio signal. As you can see, the distortion components are significant out to the 4th harmonic (4kHz). Beyond that they are more than 60dB below the carrier so they're not a problem - for a 1kHz signal. At higher modulating frequencies the harmonics present more of an issue as the allowable AM channel bandwidth can easily be exceeded.

One way that a fairly good amplitude modulator can be simulated is by including a sub-circuit of a complete low distortion VCA (voltage controlled amplifier), but this is a serious undertaking. If there isn't a model for one already, you need to find the circuit for a commercial VCA chip or design one yourself, and build a complete model in your simulator package. If you are using a free version, you may find that the final circuit has too many parts and you can't run an analysis.

There are other methods used for simulations, some which work fairly well and others that are pretty much pointless, and it's obvious that this is not as easy as it first seems. There are variations on the transmitter circuit shown in Figure 1, and while this does work well, it's still not perfect. If the tuned circuit (aka 'tank' circuit in RF parlance) is omitted, the results are poor, and there will inevitably be some degree of audio distortion unless you build a complex and accurate model of a 'real' transmitter circuit.

In this respect, the circuit shown in Figure 1 is somewhat better (actually a lot better) than you might imagine, but it adds complexity to the simulation.
All modulators are imperfect, some more than others. Using a simulator, you may need to get as close to perfection as possible so detectors can be simulated to determine distortion characteristics (for example). The last thing you need is a modulator that creates so much distortion that the end result is impossible to determine. With this in mind, you can get a perfect amplitude modulated waveform. Any distortion measured is due to the detector, as you can be sure that the RF waveform is blameless.

Of course, actual AM transmitters are also imperfect, but no commercial operator will run a modulator that can't do better than 1% THD, with most (probably) being better. Getting useful information isn't always easy.
There is actually a small clue in the above description of the sidebands as to how you can create a perfect modulated carrier waveform. An ideal AM spectrum shows the carrier, plus an upper and lower sideband, spaced at the audio frequency. So, if you use three voltage sources and simply sum their outputs, will this work? The short answer (and the only one we need worry about) is "yes".
In your simulation, add a signal source, with an amplitude of (say) 2V as shown, set for a sinewave output at 1MHz or other frequency of choice (such as 455kHz, the intermediate frequency of most typical AM receivers). If you need 1kHz modulation, add two more generators, each with a voltage of 800mV, with one set to 1kHz below the carrier (i.e. 999kHz) and the other set to 1kHz above the carrier (i.e. 1.001MHz). Sum the 3 generators using 1k resistors as shown. Add a resistor (R4) to allow you to change the overall level without having to modify the values of the 3 generators. This produces an AM waveform with 80% modulation, that is - or should be - perfect in every way (simulator dependent). If you only need 50% modulation, set the sideband generators for 500mV output. Any modulation depth can be produced, and any audio frequency can be synthesised by modifying the frequency spacings of the two sideband generators.
Figure 6 - 'Ideal' Amplitude Modulator And Output Waveform #1
Yes, it really is that simple. The voltages shown for the generators are all the peak value, so divide by 1.414 to get RMS. The three generators are all set for 0° phase - no phase shift on any of the three generators is required. You can get an audio sinewave (after detection) that is almost completely distortion free (for an ideal detector). You can now test any detector that takes your fancy, and can be confident that there is zero distortion from your RF source, so any measured distortion is due to the detector you are experimenting with. This takes the guesswork out of simulations, and is a very easy way to generate AM. As shown above, the RF level is 285mV RMS with R4 set to 390Ω.
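The reason the three-source trick works is the identity cos(a-b) + cos(a+b) = 2·cos(a)·cos(b): the two sidebands combine into exactly the term that multiplies the carrier by the audio waveform. The Python sketch below (function names are mine) confirms numerically that the sum of the three generators is identical to the textbook DSBFC waveform:

```python
import math

FC, FM = 1e6, 1e3   # carrier and modulating frequencies (Hz)
A, M = 2.0, 0.8     # carrier peak (V) and modulation index (sidebands = M*A/2 = 800mV)

def three_generators(t):
    """Carrier plus lower and upper sidebands, all at 0 deg phase, summed."""
    return (A * math.cos(2 * math.pi * FC * t)
            + (M * A / 2) * math.cos(2 * math.pi * (FC - FM) * t)
            + (M * A / 2) * math.cos(2 * math.pi * (FC + FM) * t))

def textbook_am(t):
    """Standard DSBFC definition: (1 + m*cos(wm*t)) * carrier."""
    return A * (1 + M * math.cos(2 * math.pi * FM * t)) * math.cos(2 * math.pi * FC * t)

# Sample one full 1kHz modulation cycle at 10ns steps - the two expressions
# agree to floating-point precision, so the three-source sum really is ideal AM
worst = max(abs(three_generators(n * 1e-8) - textbook_am(n * 1e-8))
            for n in range(100_000))
print(f"worst-case difference: {worst:.1e} V")
```

Any residual difference is purely floating-point rounding, mirroring the 'simulation artifact' floor mentioned below.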
This arrangement should work with any version of Spice, regardless of the type or price. It requires no 'special' techniques, just the three generators and mixing resistors. While some versions of Spice allow you to create various types of modulation, this generally requires that you provide the 'generator' with a suitable formula, and there's no guarantee that the version you use will allow you to insert the formula.

No FFT is included for this modulator, simply because it's rather boring. All that's present (apart from a few simulation artifacts at around 98dB below the carrier) is the carrier, lower sideband and upper sideband, at the exact levels that were used for the three generators. The degree of 'perfection' of the waveform is entirely up to the simulator you use though, and while there are essentially zero distortion components, that doesn't necessarily mean that the simulator you use will provide a perfect audio outcome. This depends on the simulator's resolution and how it's set up.
When you set up a simulation for RF + AF processing, if possible you need to set the maximum 'time-step' to a very small value. For a 1MHz carrier, you need a minimum of 50 to 100 samples for each cycle to get a good result. I suggest a maximum time-step of 10 to 20ns. This makes the simulation rather slow, and in many cases you may prefer to use a lower modulation frequency so the simulation doesn't take too long to complete. This limitation isn't specific to the 'perfect' modulator though - it applies for all simulations that involve RF and audio.
Note that this process is almost identical to using an ideal multiplier (as used for low-level modulation), and negative over-modulation does not cause the carrier to disappear. Instead, it reverses phase and produces a small 'bump' where the carrier would otherwise be reduced to zero. However, it still distorts the audio waveform, so the relative levels of the carrier and sidebands have to be adjusted to ensure that the modulation index never exceeds unity (100% modulation).
The second way you can create a perfect modulator is to use the simulator's 'Arbitrary Source'. That's what it's called in SIMetrix, but other simulators will have something similar that you can use. When it's defined, you only need to specify that the output is derived from 'Input 1' multiplied by 'Input 2'. I don't know the specific name or syntax for other simulators, but for SIMetrix it's ...

V ( In1 ) × V ( In2 )

Note: The spaces are added for clarity - the formula may not work in some simulators if the spaces are included.
This creates two inputs called 'in1' and 'in2', with 'V' specifying that the inputs are voltages. The output is the product of the two inputs, i.e. the two input voltages multiplied together. The bias voltage is essential, as that sets the carrier level. In the case shown, with only the 2V bias and 2V peak carrier wave present (unmodulated carrier) the peak amplitude is 4V (2V DC multiplied by 2V peak carrier wave).
Figure 7 - 'Ideal' Amplitude Modulator And Output Waveform #2
Despite your expectations (and mine I must admit), the waveform is not as pure as that from 'Ideal #1', but it is substantially better than anything you'll get trying to use simple circuitry such as the schemes shown in Figures 1 and 3. The imperfections are simulation artifacts, and (probably) caused by sampling. At more than 90dB below the carrier level, it's quite safe to ignore any artifacts you may see in the output.

It's a great deal easier to experiment with different frequencies or waveforms with this arrangement because the modulating waveform is simply a signal source. There's no need to mess around with sidebands and levels. The peak output level is exactly as specified by the formula, so is 3.6 × 2 = 7.2 volts. (3.6 is the sum of the 2V Bias signal and the peak Modulation amplitude of 1.6 volts.) The minimum peak (maximum negative modulation) is 800mV.
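The envelope arithmetic for the multiplier can be sketched directly. This Python fragment (constant and function names are mine) mimics the arbitrary source and confirms the 7.2V and 800mV figures:

```python
import math

BIAS = 2.0    # DC bias on In1 - sets the unmodulated carrier level
A_MOD = 1.6   # audio peak (V); must stay below BIAS to avoid over-modulation
A_CAR = 2.0   # carrier peak (V)

def arb_source(t, fc=1e6, fm=1e3):
    """Mimics the simulator's V(In1) x V(In2): (bias + audio) times the carrier."""
    in1 = BIAS + A_MOD * math.sin(2 * math.pi * fm * t)  # bias + modulation
    in2 = A_CAR * math.sin(2 * math.pi * fc * t)         # carrier
    return in1 * in2

peak = (BIAS + A_MOD) * A_CAR     # maximum envelope
trough = (BIAS - A_MOD) * A_CAR   # minimum envelope (maximum negative modulation)
print(f"peak = {peak:.1f} V, trough = {trough:.1f} V")   # peak = 7.2 V, trough = 0.8 V
```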
It is important that the modulating waveform never exceeds the bias voltage, as this will cause over-modulation. However, this is not the same as you get with a real AM transmitter, so is not usable to simulate 'splatter' - the wide bandwidth signals created by an overdriven AM transmitter. The multiplier is what's known as a '4-quadrant' type, and can produce negative output voltages, which a transmitter cannot. If the modulation signal is kept below 1.8V peak (1.27V RMS) with the values shown, modulation is very close to ideal (i.e. 'perfect').
There are several ways you can change the output level. One is to use a simulated potentiometer (pot), or the output can be scaled within the formula for the arbitrary function. For example, if you use the following ...

( V ( In1 ) × V ( In2 ) ) / 10
The output is simply the product of the two inputs, but divided by 10. This will give a peak output level of 720mV. For most RF simulations the voltage will usually be fairly low, and it's easier to scale it in the arbitrary function than messing around with the generator levels, although a voltage divider can also be used if preferred. As with most functions in a simulator, input impedance of the arbitrary function generator is infinite, and output impedance is zero.
If you wanted to build an amplitude modulator, you can use one of the methods shown earlier, but it's a great deal simpler to use a dedicated IC that does most of the hard work. The MC1496 is a balanced modulator/demodulator, and the IC has been around almost forever (ok, that may be a small exaggeration). These are available in DIP and SOIC (through-hole and SMD respectively) packages, and are usually under AU$2.00 each from most major suppliers. A suitable modulator is shown below, adapted from the MC1496 datasheet. C3 and C4 will ideally be multilayer ceramic capacitors for good RF performance, and the incoming supplies should also be bypassed with 10-100µF electrolytic caps (not shown).
Figure 8 - MC1496 Amplitude Modulator
The circuit shown is pretty much 'as-is' from the datasheet, and it would need to be optimised to ensure that input levels are within the range you need. There are several application circuits in the datasheet, including one that uses a single 12V supply which may be more convenient. Since the IC is well known and has been in production for many years, you'll be able to find any number of suitable complete circuits that allow you to build a low power AM transmitter that can be used for your own local broadcast. Be aware that in most countries this will be illegal unless the output power is limited to a few milliwatts at most.
The levels of both RF (carrier) and AF (audio modulation) must be well within the maxima that the IC can handle, or the output will be distorted. Note that the modulation input has a very low input impedance, set by R6, and is 51Ω as shown. An input resistor will generally be necessary to reduce the signal level to a few millivolts at most - an initial value of around 1k is suggested. This will provide 100mV at the IC with an input voltage of around 2.5V RMS. The RF level needs to be around 300mV RMS (according to the datasheet). The output level will be very small without additional amplification - expect no more than around 500µV peak between +Out and -Out.

The AF and RF levels need to be set carefully, using an oscilloscope and (ideally) a frequency analyser. The latter is a fairly serious piece of kit, but the FFT function of a digital oscilloscope will probably be sufficient for basic tests. The output is monitored using an AM radio. You'll probably need to include a (very) small 'power amplifier' to feed the aerial, which should include a broadly tuned circuit if you need to tune the carrier frequency, or a high Q filter for a fixed frequency.

Selection of a suitable carrier frequency depends on how crowded the AM band is in your area. You need to find a frequency that's not in use, and ideally that's separated by at least 18kHz from adjacent AM broadcasts. Since few AM radios have much response beyond 5kHz, you may find it useful to limit the top end of the audio input. Anything beyond 9kHz is usually wasted.
Figure 8A - Discrete Amplitude Modulator
A discrete modulator is shown above. This uses a Gilbert cell, which is the basis for analogue multipliers, including the MC1496 shown above. The tuned circuit is designed for a frequency of 1MHz, and with the paralleled 1k resistor, it has a Q of 10. L1 and C3 both have a reactance of 100Ω at 1MHz. It's to be expected that a discrete modulator probably won't be quite as good as a dedicated modulator IC, but (at least in a simulation) it works well.
The main reason to use a simulator to generate an AM waveform is so that one can experiment with detectors (demodulators). It's therefore worthwhile to briefly examine 'detection' - the recovery of the original audio modulating frequency. You will almost certainly have a preferred circuit or have something you want to experiment with, but we can start with a simple example. There are many different types of AM detector, including the 'infinite impedance' detector described in the article High Fidelity AM Reception. For this exercise, only a simple diode detector will be covered.

This type of detector was one of the very first ever used to detect RF, and although there were other, earlier, detectors they weren't linear and were often insensitive. By selecting a point on the surface of a natural semiconductor (commonly a galena (lead sulphide) crystal), it was possible to listen to AM through headphones. Finding the optimum point on the crystal was done using what was known as a 'cat's whisker' - a fine piece of wire in a special holder that allowed the listener to find a point on the crystal surface that gave the best signal. This was known as a 'crystal set', and they work just fine to this very day with some care. 'Crystals' were followed by the valve (vacuum tube) diode, then germanium diodes, and now Schottky diodes. If you can get them, germanium diodes are still a good choice.

The circuit below shows a simple Schottky detector, with 800mV of forward bias applied to improve linearity. The tuned circuit and antenna shown are for the sake of completeness, but would not normally be included in a simulation. Note that C2 is essential if the source (your modulator) is DC coupled. If you leave out C2, the diode detector won't have any forward bias, and that increases distortion dramatically. The anode of D1 must have a DC return path or it won't work at all.
+ +
Figure 9 - Diode Based AM Detector/ Demodulator
All diode detectors have a well known problem, namely the distortion caused by the diode conduction voltage. For conventional small signal silicon diodes, this is 650mV, and around 200mV or less for germanium. Schottky diodes vary from 150mV to 450mV, depending on their intended purpose. At low RF signal levels, the diode may not conduct at all so (almost) nothing will be heard at the output. This can be overcome (at least to an extent) by applying forward bias to cancel the diode's forward voltage. This is shown in the above circuit. It's generally difficult to achieve less than 1% distortion with most common demodulator circuits.
When tested using the output of the ideal modulator (Figure 6) at an RF signal level of 285mV RMS and 80% modulation, the distortion from the circuit shown is 1.6% at a level of 180mV RMS. The diode is a Schottky type, and the bias voltage is 800mV. Not all of the distortion is due to the diode though, as some of the RF carrier is still present. As you can see, there is also a DC voltage, with the average value being proportional to the amplitude of the RF. There is also a fixed offset due to the diode bias voltage.
With any diode detector, the time constant (C3 + C4 and R5 in the above drawing) is important. If there's too much capacitance or the resistance is too high, the cap will be unable to discharge quickly enough to follow the AC (modulation) waveform, leading to greatly increased distortion on the negative-going parts of the audio waveform. There's plenty of info available on this topic, and it's not part of the analysis here. For the record, the values shown will provide reasonable filtering, with acceptably low distortion up to 5kHz.
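As a rough check on the time-constant requirement, the standard textbook criterion for avoiding 'diagonal clipping' is RC ≤ √(1 − m²) / (2π × fm × m), where m is the modulation depth and fm is the highest modulation frequency. The short Python sketch below uses assumed values (80% modulation and a 5kHz audio limit), not figures taken from the circuit shown.

```python
import math

def max_rc(f_mod_hz: float, m: float) -> float:
    """Upper bound on the detector time constant (seconds) before the
    capacitor can no longer follow the envelope (diagonal clipping).
    Standard textbook criterion: RC <= sqrt(1 - m^2) / (2*pi*f_mod*m)."""
    return math.sqrt(1.0 - m * m) / (2.0 * math.pi * f_mod_hz * m)

# Assumed values: 80% modulation, 5 kHz top audio frequency
rc_limit = max_rc(5e3, 0.8)
print(f"RC must be below {rc_limit * 1e6:.1f} us")  # about 23.9 us

# The time constant must also be much longer than one carrier cycle
# (1 us at 1 MHz) so that the RF ripple is adequately filtered.
```

The two requirements pull in opposite directions, which is why the component values are a compromise between RF filtering and audio distortion.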
In most radio receivers, the average DC level is used to activate the circuit's AGC (automatic gain control). This is designed to keep the intermediate frequency amplitude reasonably constant at the detector's input as different stations are tuned in, so that the audio level remains fairly steady. Without AGC, the audio level is entirely dependent on the strength of the received signal. The DC must be removed from the audio signal before being passed to the audio amplifier stage, and this is done simply with a coupling capacitor.
An ideal detector will perfectly half-wave rectify the RF envelope, so that the audio waveform is preserved intact. It makes no difference whether the positive or negative half cycles are demodulated, since the same audio information is present in both. The RF component is then removed with a low pass filter, leaving only the audio and a DC level that depends on the RF amplitude. The DC is easily removed with a capacitor, leaving only the audio, which will hopefully be an exact replica of the signal used to modulate the transmitter. While the concept is simple in theory, it is very difficult to achieve in practice, and there are many different solutions (including applying forward bias as shown above).
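The ideal detection chain just described (half-wave rectify the envelope, low pass filter to remove the RF, then strip the DC) is easy to verify numerically. The Python sketch below is a conceptual model only - an exact multiplier, a perfect rectifier and a simple moving-average filter - not a simulation of any of the circuits shown, and the carrier, modulation and depth values are arbitrary.

```python
import numpy as np

fs = 50e6                  # sample rate - plenty for a 1 MHz carrier
fc, fm, m = 1e6, 1e3, 0.8  # carrier, modulation frequency, depth
t = np.arange(0, 5e-3, 1 / fs)

audio = np.sin(2 * np.pi * fm * t)
am = (1 + m * audio) * np.sin(2 * np.pi * fc * t)  # ideal multiplier AM

rectified = np.maximum(am, 0.0)                    # perfect half-wave rectifier

# Low pass filter: averaging over exactly one carrier period removes the RF
n = int(fs / fc)                                   # 50 samples per carrier cycle
envelope = np.convolve(rectified, np.ones(n) / n, mode='same')

core = slice(n, len(t) - n)                        # discard filter edge effects
env = envelope[core]
recovered = env - env.mean()                       # strip the DC component
recovered = recovered / np.max(np.abs(recovered))  # normalise for comparison

# The recovered waveform should closely track the original audio
err = np.max(np.abs(recovered - audio[core]))
assert err < 0.05
```

With an ideal rectifier the recovered audio matches the original to within the small ripple of the averaging filter, which illustrates why real detectors are judged by how far they depart from this behaviour.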
There are many different types of AM detector, so if you wish to know more a web search will provide endless hours of reading.
The method described here to obtain a 'perfect' AM waveform seems to be virtually unknown. I saw one oblique reference to the method (that told students to "think about it"), but the details were not provided in the text (and I can't find it again, or it would be included in the references). Once you do think about it, it's quite obvious and will almost certainly evoke cries of "why didn't I think of that" from quite a few people who read this. When I saw the brief reference mentioned above, that was certainly my reaction.
The multiplier idea came from messing around with the details for another project. I doubt that SIMetrix is the only simulator to offer an arbitrary function that can be 'user defined', and it's a little puzzling that no mention of this method was found during my original research. Since writing this article and searching a little more specifically, I did come across a few forum posts and some academic work that suggested the use of a simulator's 'special' functions, but found no specific information.
Overall, this is an interesting exercise, even if you're not remotely interested in the rubbish that one normally hears on AM radio. I certainly learned a great deal as I was preparing the article and running simulations so the waveforms could be demonstrated. It's a long time since I did anything serious with AM, and looking at some of the offerings on the Net is rather depressing. In many cases, the student will learn bugger-all about AM, other than running pre-arranged or pre-configured simulations, or delving deep into a mathematical minefield.
This isn't to say that the maths aren't potentially useful, or that messing with analogue multiplier simulations isn't interesting. Both are valuable, but not if all you want to do is test ideas for AM demodulation. If this is the case, you need something that's as close to perfect as you can get so that demodulator flaws are exposed. Having something that will work in almost any simulation package is especially useful, because different versions have differing capabilities and may not allow you to do what you need easily - if at all.
It's important to understand that simulators have limitations, and some may be incapable of resolving the end result without adding artifacts caused by limited resolution. While it's possible in many simulators to specify the maximum 'time step' (and therefore the resolution), doing so can make simulations run very slowly. For example, to properly resolve a 1MHz waveform, the 'sampling rate' or maximum time step has to be no more than a few nanoseconds, and that means the simulation will be very slow. Naturally, this also applies to simulations using other techniques.
You can also use this technique to produce double sideband suppressed carrier AM (simply reduce the carrier level to some suitably small voltage). SSB (single sideband) waveforms can be created by reducing the amplitude of one sideband and the carrier to suitably low voltages (typically they will be around 5-10% of the main sideband voltage). Unfortunately, there does not appear to be an equivalently simple method to produce FM (frequency modulation), but many simulators include that facility for 'advanced' signal sources.
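The identity that makes this work - multiplier AM is exactly a carrier plus two sidebands at fc ± fm, each at half the modulation depth - is easily confirmed numerically. The Python sketch below checks the equivalence and shows how DSB-SC follows by simply omitting the carrier term (frequencies and depth are arbitrary example values).

```python
import numpy as np

fc, fm, m = 1e6, 1e3, 0.8         # carrier, modulation frequency, depth
t = np.linspace(0, 2e-3, 200001)

# Multiplier form of AM ...
am = (1 + m * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fc * t)

# ... is identical to a carrier plus two sidebands at fc +/- fm,
# each at half the modulation depth:
carrier = np.cos(2 * np.pi * fc * t)
lsb = (m / 2) * np.cos(2 * np.pi * (fc - fm) * t)
usb = (m / 2) * np.cos(2 * np.pi * (fc + fm) * t)

assert np.allclose(am, carrier + lsb + usb, atol=1e-9)

# DSB-SC follows by leaving the carrier term out entirely
dsb_sc = lsb + usb
```

This is why summing three ordinary sine sources in a simulator produces a mathematically 'perfect' AM waveform: the sum is the same signal a perfect multiplier would generate.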
Please note that two of the references provided here show a sub-optimal technique as shown in 'Method 1', but this is not intended to denigrate the authors in any way. The circuits are reproduced on many other sites, and the original source is unknown. While many circuits you will find may not be ideal, the authors are still providing an invaluable service by showing beginners (and others) ways to accomplish something that is not as easy as it seems at first.
Elliott Sound Products - High Fidelity AM Reception
Although few would regard the AM/MW broadcast band as being capable of high fidelity reception, the reality is that most good quality AM broadcasters put to air quite a high quality audio signal, covering quite a wide audio bandwidth (to around 8kHz or so). The problem is that the majority of AM receivers in use today use the superheterodyne principle (conversion to an intermediate frequency (IF) prior to detection), which unfortunately results in very severe sideband cutting, leaving audio bandwidth and quality comparable to an ordinary telephone channel.
Those who have heard AM under ideal conditions will know how good it can sound, with a clarity, bandwidth and subjectively superb quality that has to be heard to be believed. I have been tinkering with wideband AM receivers since my late teens, and in fact these were my first dabblings into the wonderful world of high fidelity. I have many fond memories of lying in bed listening to the late night jazz music programs as reproduced through my home made 'crystal set' tuner feeding my old Pioneer audio amp and old AR28 loudspeakers. I remember thinking, "that sounds so good". In those days we had no real FM service in our area, and AM on the medium wave broadcast band was all that was available. That was a very good incentive to see what could be done with wideband AM reception.
Although I had only a very basic knowledge of radio and electronics at the time, I think I did a reasonable job with that very basic set-up. That initial passion has continued to this day! The great advantage of a simple 'crystal set' type of AM tuner, or similar, is that all processing takes place at the (RF) signal frequency, and despite the limitations of this approach, the full modulation impressed on the carrier can be recovered - something not easily done with a superheterodyne receiver without much additional circuit complexity and refinement.
I mentioned the crystal set in the introduction. Apart from the charm of a receiver that requires no power source, a well designed crystal set tuner feeding an audio preamp/ power amp combination can give excellent results under ideal conditions. There is a lot more to diode envelope detection than one might think, especially if one is aiming for high quality detection. The process of diode detection is often explained in overly simplistic terms in text books, in my opinion. In reality, there are lots of factors that affect the ultimate potential quality of a simple RF diode detection system, such as the RF injection level and the loading on the diode (the so-called AC/DC ratio), which has major implications for the overall detector distortion profile, especially at high modulation percentages (very common practice these days in the broadcast industry in the endless quest for a 'loud' signal).
Figure 1 - Amplitude Modulation Waveform
The amplitude modulation waveform is shown in Figure 1. Maximum modulation level occurs when the RF signal falls to zero (maximum negative modulation), or is doubled (maximum positive modulation). Practical limitations mean that the minimum is usually around 5-10% of the static (unmodulated) RF power to prevent 'splatter' - a form of RF distortion that causes massive interference across the radio frequency spectrum. The maximum can be up to 150% of static power, and phase switching is often used to ensure that the highest level signals always increase transmitter power. This is possible because audio waveforms can be highly asymmetrical. The audio signal is always compressed and limited to ensure that over-modulation cannot occur.
Germanium diodes are considered mandatory in this service due to their low turn on voltage and consequent good sensitivity to weak RF signals. Although true germanium diodes are still being made, many electronics parts stores are now stocking germanium diode 'equivalents', diodes that are actually silicon Schottky (hot carrier) diodes, fabricated to electrically resemble germanium diodes as sensitive RF detectors. They are actually pretty good. As with all silicon Schottky diodes, they exhibit very little reverse leakage, very good weak signal sensitivity and low noise. They do possess a higher junction capacitance than most point contact germanium diodes, but this is not a problem in crystal set service. In a moderate to strong signal strength area, a simple diode detector of this type will work well. Combined with optimal diode loading and a selective tuned circuit front end, this kind of approach will indeed work well and provide a high quality audio program source for AM reception.
Figure 2 - Traditional Crystal Set Demodulator
Figure 2 shows the general arrangement of a 'traditional' crystal set. As shown, both the tuning coil and capacitor are variable, although many different arrangements have been used. Some sets used variable capacitance and a tapped coil, others a fixed capacitance and a variable coil. Still others used an RF transformer (so L1 has an additional winding) to create a better impedance match between the antenna (aerial) and headphones. Piezo-electric ear pieces were commonly used because of high sensitivity and impedance. The audio signal can be taken to an amplifier rather than headphones.
The need for a selective front end and optimal diode loading are rather important. The relatively low impedance of the diode detector circuit will invariably heavily load its feeding tuned circuit, something which can cause problems such as station overlap and generally poor station selectivity in tuning. My preferred approach is to use double tuning, where using two separately tuned and coupled networks results in much improved 'nose and skirt' selectivity. This does complicate the design admittedly, but the overall selectivity improvement is definitely worthwhile. With single coil tuned circuit arrangements, a commonly used approach is to use coil taps in order to reduce the loading problems by connecting the diode and antenna to a point on the coil of lower impedance. This approach can work well, but it is not quite as good as two separately tuned networks in practice.
The diode loading aspect is actually a rather complex subject, but basically, the ratio of the AC and DC load on the diode output should be approximately equal for the best handling of high modulation depth with minimal detector distortion. This normally means a relatively low value diode load resistor, followed by minimal capacitive shunting as would invariably be needed for RF filtering purposes. A very high impedance for the following audio preamp helps too. A following audio stage impedance of at least 250k is considered optimum. Actually, it is interesting to note that in these modern days of solid state technology, we are actually at some definite disadvantage as lower circuit impedances are generally the order of the day. Valve audio preamps of the past routinely possessed input impedances of the nominal required value.
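As a rough illustration of the AC/DC ratio: the DC load on the diode is simply its load resistor, while the AC load is that resistor in parallel with the (capacitively coupled) input impedance of the following stage. The Python sketch below uses hypothetical values - a 100k diode load and a 250k preamp input - to show why a high following-stage impedance keeps the ratio close to unity.

```python
def ac_dc_ratio(r_load: float, r_next: float) -> float:
    """AC/DC load ratio seen by the detector diode.
    The DC load is just r_load; the AC load is r_load in parallel
    with the input impedance of the (capacitively coupled) next stage."""
    ac_load = r_load * r_next / (r_load + r_next)
    return ac_load / r_load

# Hypothetical values: 100k diode load into a 250k preamp input
print(round(ac_dc_ratio(100e3, 250e3), 2))   # 0.71 - reasonably close to unity

# A low impedance solid state input makes things much worse
print(round(ac_dc_ratio(100e3, 47e3), 2))    # 0.32 - far from unity
```

The closer the ratio is to 1, the higher the modulation depth the detector can handle before the negative audio peaks are clipped.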
Figure 3 - Demodulation Waveform
The demodulation waveform is shown above. This assumes an ideal (or 'perfect') rectifier - i.e. a diode with no forward voltage drop, no resistance, and no junction capacitance. Because all of these parameters are included free with real components, an alternative diode demodulator is shown below. The RF waveform is shown on the left, but the carrier frequency is reduced to the minimum for clarity.
After passing through the diode, C1 integrates the signal. The cap charges and discharges slightly with each positive-going RF signal, and the remaining signal is the audio. It is always important that the capacitor value is worked out to be correct for the following load resistance (R1). If it is too large, that will create distortion - even if the diode is perfect. This is visible in Figure 3 at the negative-going audio peaks. If too small, excessive RF is applied to the following stage(s).
Note the DC offset in the demodulated audio signal. This is used in superheterodyne (and some early TRF - tuned radio frequency) receivers to activate AGC (automatic gain control) of the RF stage(s) to ensure maximum RF level without excessive distortion.
Listening tests and evaluations show that proper diode buffering makes a very audible difference in a high quality AM tuner application. It is worth noting that diode detectors like to work with a moderate to strong level of RF input, and audible distortion does increase somewhat at lower signal levels. I have developed a novel method of using adjustable voltage bias with a 1N5711 UHF mixer hot carrier diode that does provide improved performance at lower RF signal strengths. This biased diode detector has an incredible ability to produce clean detection under even the weakest signal strength conditions by careful adjustment of the bias potentiometer! As an aside, it must be mentioned that the subject of diode detector distortion is a complex one, and much more involved than one might think. It is gratifying though, that good results can be obtained with simple circuitry.
Figure 4 - Improved Diode Detector
Based on some simulations that ESP did with the circuits, a biased detector can reduce distortion from around 6% to 1% with a strong signal, with potentially greater improvements at lower levels. As noted though, this is a complex issue, and not one that lends itself well to simulation - unless the simulator is optimised for RF analysis. Most are not.
However, one doesn't have to use a diode type of AM detector. There are other ways to achieve high quality AM detection. I have investigated a number of bipolar transistor and field effect transistor circuits that offer excellent general performance and other advantages as well. The 'infinite impedance' detector, using a field effect transistor, is a personal favourite. It is based on a very old triode valve circuit from the early days of 'wireless'. FETs (Field Effect Transistors) are remarkable devices, especially in RF applications, and their close electrical similarity to valves is a very useful characteristic indeed. A FET version of the infinite impedance detector is easily made, and offers very high quality AM detector performance. A particular advantage results from the very high gate impedance of the FET into which the tuned circuit is fed. This effectively eliminates the selectivity problems caused by diode detectors. I have used single tuned arrangements combined with infinite impedance detectors with excellent results in terms of good selectivity. Such is the advantage of a very high input impedance.
Figure 5 - FET Based Infinite Impedance Detector
A basic 'infinite impedance' detector is shown in Figure 5. While this will work very well, there are some refinements that improve performance. In my set-up, I make use of a high input impedance FET source follower modified with another FET constant current source for improved linearity and lower distortion. The second stage acts as a buffer between the detector and audio preamp input.
The version of the infinite impedance detector I have developed was adapted from the generic circuit that has appeared for many years in the ARRL Handbook, a long available and well regarded text, well known to Ham Radio operators all over the world. The circuit I've developed is more suited to tuner applications, and differs from the basic generic circuit in three ways: the source resistor value is much higher (which overcomes a slight tendency to intermittent oscillation), the RF filtering is improved, and a modified FET source follower output stage is added for better audio drive into a following audio preamp stage.
Figure 6 - Improved Infinite Impedance Detector
All round, the field effect transistor based infinite impedance detector is pretty amazing, offering high audio output, low distortion, excellent sensitivity and is quite free of the weak signal diode detector distortion tendency. It offers impressive performance for such a simple circuit. Just connect a tuned circuit and audio system, and one has quite a high quality AM program source.
All of these AM detector circuits offer potentially excellent recovered audio quality, limited only by the quality of the original audio modulation. The infinite impedance detector has the advantage of 'no fuss' operation - nothing needs adjustment apart from tuning - while the biased diode detector, by virtue of precise bias adjustment, can cope with any signal strength (strong or weak). This does require that the bias be individually optimised for each station received, which can be a trifle annoying! However, the audio quality under ideal bias adjustment has to be heard to be believed! As discussed earlier, selectivity considerations make a more complex 'double tuned' input desirable, but another option is to add a FET stage in front of the diode detector to relax potential selectivity concerns. So many possibilities!
Welcome to the wonderful world of building things that work at radio frequencies (RF)! Those who've only dabbled with audio gear will be slightly shocked by the way things are done at RF. It's basically a different world. At times, I've even wondered if Ohm's Law applies at RF! At broadcast band medium wave RF, things aren't too critical, but it still pays to do things the right way. At RF, the effects of stray and 'incidental' inductance and capacitance become an important factor. Even a short piece of wire represents a substantial amount of inductive reactance at RF.
XL = 2π × F × L
XC = 1 / (2π × F × C)
fo = 1 / (2π × √(L × C))
Where XL = Inductive Reactance, XC = Capacitive Reactance, F = Frequency, fo = Resonant Frequency, L = Inductance and C = Capacitance
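These formulas can be checked against the tuned circuit described for the discrete modulator (Figure 8A), where L1 and C3 each have a reactance of 100Ω at 1MHz. A short Python sketch:

```python
import math

def x_l(f, l):
    """Inductive reactance, XL = 2*pi*F*L."""
    return 2 * math.pi * f * l

def x_c(f, c):
    """Capacitive reactance, XC = 1 / (2*pi*F*C)."""
    return 1 / (2 * math.pi * f * c)

def f_o(l, c):
    """Resonant frequency, fo = 1 / (2*pi*sqrt(L*C))."""
    return 1 / (2 * math.pi * math.sqrt(l * c))

# Figure 8A tuned circuit: both reactances are 100 ohms at 1 MHz
f = 1e6
L = 100 / (2 * math.pi * f)      # about 15.9 uH
C = 1 / (2 * math.pi * f * 100)  # about 1.59 nF

print(f"XL = {x_l(f, L):.1f} ohm, XC = {x_c(f, C):.1f} ohm")
print(f"fo = {f_o(L, C) / 1e6:.3f} MHz")   # resonates at exactly 1.000 MHz
```

Equal reactances at the operating frequency guarantee resonance there, since the L and C terms cancel in the fo formula.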
Things like excessively long leads are a no-no at radio frequencies - you will get around 5-6nH of inductance for each 10mm (1cm) of straight wire, depending on lead spacing and many other factors. My favourite method of construction is using a piece of copper clad circuit board material and wiring all components 'point to point' with all earth (ground) connections going directly to the copper surface of the circuit board material which provides a low impedance ground plane. Veroboard or similar is also quite ok at lower radio frequencies if used appropriately.
Pre-wound coils and ferrite rods can be purchased from the usual parts shops, along with suitable variable capacitors with a maximum capacitance of about 260 pF. The two outer leads of the variable capacitor need to be joined together in common to obtain maximum capacitance. I prefer to wind my own coils in the interest of optimising coil Q (or coil quality). This is a very complex subject in itself, and many differing winding approaches are used, including the use of so-called Litz wire. I use ordinary enamelled copper winding wire of about 0.315 mm thickness, along with spaced turns, where a slight gap (about a wire thickness) is placed between each turn. This helps with coil Q. A drop of super glue adhesive at the start and end of a winding is perfect for stopping a coil from unwinding itself. For the MW band, about 50 turns on a ferrite rod will be needed, however the exact number of turns may vary somewhat depending on local conditions and the desired tuning range.
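To estimate the tuning range of such a coil, the resonance formula is all that's needed. The Python sketch below uses assumed values - about 350µH for the ferrite rod coil, and roughly 15-260pF for the variable capacitor (the minimum including stray and circuit capacitance). Actual values will depend on your coil, capacitor and local conditions.

```python
import math

def resonant_freq(l_henry: float, c_farad: float) -> float:
    """fo = 1 / (2*pi*sqrt(L*C))"""
    return 1 / (2 * math.pi * math.sqrt(l_henry * c_farad))

# Assumed ferrite-rod coil for the MW band: ~350 uH, tuned by a
# variable capacitor spanning roughly 15-260 pF (minimum includes strays)
L = 350e-6
f_low = resonant_freq(L, 260e-12)    # maximum capacitance -> lowest frequency
f_high = resonant_freq(L, 15e-12)    # minimum capacitance -> highest frequency
print(f"Tuning range: {f_low / 1e3:.0f} kHz to {f_high / 1e3:.0f} kHz")
```

With these assumed values the range comes out at roughly 530kHz to 2.2MHz, comfortably covering the medium wave broadcast band; fewer turns (less inductance) shifts the whole range upwards.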
For simple crystal set type of AM tuners, unless one lives close to a local transmitter, some kind of external antenna and earth connection will likely be required (just like in the old days). Remember that a simple crystal set tuner is a purely passive detector arrangement without any additional active front end RF gain, and the diode detector needs all the help it can get, hence the need for an external antenna. Depending on the method of connection to the tuned circuit, the antenna can affect the tuning range of the tuned circuit. With simple receivers like these, external factors can have a big impact. Remember too that the audio output level is entirely dependent on the incoming RF input level. A lot of empirical adjustment may be needed, but that's half the fun. An earth connection is also needed, but sometimes existing earthing through the audio gear is sufficient. I have an earth rod driven into the garden outside my window, with a short connecting lead.
Individual construction arrangement techniques are left up to the constructor, but remember the general guidelines when working with RF circuitry and also try to leave a little space around the coil(s) from any metal (enclosure etc.) in the interests of not degrading the coil Q. Very importantly, when working with any type of external antenna, safety is absolutely paramount! I spent nearly four months in hospital after a serious accident when I fell from a roof, and I live with permanent spinal cord injury as a result. Don't repeat my mistake!
Despite the limitations of this kind of approach to wideband AM reception, I personally have found this a very satisfying way to listen to local AM stations with the sort of audio quality that no ordinary receiver can match. In fact, you may realise that your local AM radio station has some transmission problem such as distortion or other defect. Such is the potential resolution of this kind of AM tuner. It is rather sad that many transmission defects will go completely unnoticed when listening on an average receiver with a poor audio bandwidth and lots of inherent circuit noise and distortion, but are immediately noticed when listening on a high resolution system such as these AM detectors provide! In any case, once you listen to true wideband AM audio you will never think of AM in the same old way ever again!
Unfortunately, the range of available JFETs has shrunk alarmingly since this article was published. The JFETs suggested aren't available from most suppliers, and where they are available, no-one knows for how long (particularly in the TO-92 through-hole package). If you are willing to play with SMD parts, the availability might be a little better, but the MPF102 is listed as obsolete.
A few suppliers still offer them, and likewise the 2N5484, but you may need to be prepared to grab a few when you find them, as they aren't common any more. You also need to have a few more than expected, because JFETs have a wide parameter spread, and you usually have to run tests so you can find ones that will work in the circuits. I suggest that anyone interested in the circuits here read the Designing With JFETs article, which has a lot of information that you'll find useful. This includes a simple circuit you can use to determine the two main parameters of interest, the 'pinch-off' voltage (VGS(off)) and the zero gate voltage drain current (IDSS).
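Once the two parameters have been measured, the textbook square-law approximation predicts the drain current at any bias point, which helps when matching devices from a batch. The Python sketch below uses hypothetical measured values (IDSS = 8mA, VGS(off) = -3V); real devices only follow the square law approximately.

```python
def jfet_drain_current(vgs: float, idss: float, vgs_off: float) -> float:
    """Square-law approximation for an n-channel JFET in saturation:
    ID = IDSS * (1 - VGS / VGS(off))^2, for VGS(off) < VGS <= 0.
    This is the textbook model, not a measured curve."""
    if vgs <= vgs_off:
        return 0.0          # pinched off - no drain current
    return idss * (1 - vgs / vgs_off) ** 2

# Hypothetical measured values for one device: IDSS = 8 mA, VGS(off) = -3 V
print(jfet_drain_current(0.0, 8e-3, -3.0))    # 0.008 A (= IDSS at VGS = 0)
print(jfet_drain_current(-1.5, 8e-3, -3.0))   # 0.002 A (one quarter of IDSS)
```

Two devices with similar IDSS and VGS(off) will bias up similarly in the same circuit, which is the practical point of measuring them before use.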
Elliott Sound Products - Amplifier Classes
There are already many articles on the Net that cover this topic, some quite well (but often without enough information), some badly and some that are largely wrong. It's usually not the descriptions that are incorrect, but the comments about alleged sound quality. For example, some Class-A amplifiers are very good indeed, but others are terrible. It's not only the class of operation that makes an amplifier good, bad or indifferent, but how the circuit is designed and how much effort has gone into minimising problems. Many 'boutique' amplifier makers will make outlandish claims for their chosen topology, but advertising hype is not fact and should be ignored.
Many Class-AB amplifiers are far better than the vast majority of Class-A amps, despite being far more efficient and lacking the gravitas of being called 'Class-A'. There are also some obscure classes, some of which are not defined, and others are useable only with (some) radio frequency signals. There are others where there is no 'official' definition, so there is often confusion about whether an amplifier is one or the other (Class-G and Class-H are the main examples of this).
Amplifier classes that are used exclusively with radio frequencies will not be covered here, only classes that are directly related to audio.
While Class-C is generally thought to be purely an RF technique, it was (at least technically, and if taken to the extreme of normal definitions) used by Quad in their 'current dumping' amplifiers. Output transistor conduction was not quite 180° as required for Class-B. The difference is really academic, so the output stage can just as easily be called Class-B because the conduction angle really is very close to the full 180° for each device in normal operation. Close analysis of the Quad system shows that it largely behaves like a more 'traditional' amplifier, but with unexpectedly low distortion - especially considering the relatively poor power transistors available at the time.
All classes of amplifier (except Class-D) can be made using bipolar transistors, MOSFETs or valves (vacuum tubes). If used in a linear circuit, MOSFETs should be 'lateral' types which have lower gain but are more linear than 'vertical' MOSFETs (the most common types are generally known as 'HEXFETs' because of their internal structure). These types are designed for switching applications, and even the manufacturers don't recommend them for linear use. HEXFETs and other switching types are not linear. Although it's possible to make linear amplifiers using HEXFETs, careful device matching is needed and there are some interesting traps that await the unwary. Naturally, vertical MOSFETs are ideally suited to Class-D amplifiers, where they are used exclusively.
Amplifiers can also be hybrids, meaning that they use a combination of valves, transistors and/or MOSFETs. When we talk of hybrid amplifiers, it is usually taken to mean a combination of valves and semiconductors. Hybrid amps can be any class, but are most commonly either Class-A or Class-AB. While there's no real reason that a valve front end can't be used with a Class-D amp, that is a rather unlikely combination and serves no useful purpose. There are many combinations that serve no useful purpose, but that hasn't stopped advertising people from extolling their (alleged) virtues.
In amplifiers where negative feedback is not used to provide correction and increase linearity, the distortion produced will affect the sound. Harmonic and intermodulation distortion products are created that can seriously reduce an amplifier's performance. This applies regardless of the amplifying device, class of operation or topology. Despite claims by some, negative feedback is not evil, and properly applied in a competently designed amplifier using any of the available devices (valve, transistor or MOSFET) it will almost always improve sound quality overall. Very few amplifiers with no negative feedback will qualify as hi-fi. There are exceptions, but the additional complexity is such that there is little or no overall benefit.
Summary
Class-A      Output device(s) conduct for the complete audio cycle (360°)
Class-B      Output devices conduct for 180° of the input cycle
Class-AB     Output devices conduct for more than 180° but less than 360° of the input cycle
Class-C      Output device(s) conduct for less than 180° of the input cycle (RF only)
Class-E, F   Sub-classes of Class-C, RF only
Class-D      Output devices switch at high frequency and use PWM (pulse-width modulation) techniques (note that Class-D does NOT mean 'digital')
Class-G      Makes use of switched power rails, with amplifiers typically having multiple power supply rails
Class-H      Uses modulated power rails, where the supply voltage is maintained at a voltage slightly greater than required for the power delivered
Class-I      A proprietary variant of Class-D (it appears that this is not officially recognised)
Class-T      Another proprietary amp class, and also a variant of Class-D (this is also not officially recognised)
BTL          Bridge-Tied-Load. Not a class of operation, but sometimes thought to be. Can be applied to any class of amplifier
The above is a very basic summary of the different amplifier classes, and all (non RF related) classes are covered below. Note that Classes G and H suffer from great confusion, with the terms regularly used interchangeably. They are quite different techniques, and should be treated as such. No-one appears to have made any effort to categorise them, despite their popularity - especially for high power public address (sound reinforcement) applications.
A new (and disturbing) trend is for many amplifier manufacturers (and especially Class-D ICs) to rate the output power at 10% distortion. The only reason to do so is to inflate the figure. An amplifier that can deliver 120W with less than 1% distortion will produce over 160W at 10%, but that amount of distortion is intolerable. For reasons that escape me, the only thing that seems to matter is power. Audio is not all about power, it's about the accurate reproduction of music. Many people would find (if they ever measured it) that their systems deliver less than 20W per channel with normal programme material at realistic sound levels - a 20W amplifier can produce peaks of almost 100dB SPL (assuming a rather poor sensitivity of 85dB/W/m). The average power is likely to be no more than 1W.
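The peak SPL arithmetic above is easy to verify. This little sketch just applies the standard dB relationship, using the 20W and 85dB/W/m figures from the text (at the nominal 1 metre measuring distance):

```python
import math

def peak_spl(power_w, sensitivity_db):
    """Peak SPL at 1 m for a speaker of the given sensitivity (dB/W/m)."""
    return sensitivity_db + 10 * math.log10(power_w)

# 20 W into an 85 dB/W/m speaker:
print(round(peak_spl(20, 85), 1))   # 98.0 - 'almost 100dB'
```

The same function shows why average power is so low: a comfortable 85dB average level needs only 1W from that speaker.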
Class-A

The term 'Class-A' means that the amplifying device (transistor, MOSFET or valve) conducts for the complete audio cycle (360°). It does not turn off at any output voltage or current below clipping, where the output voltage would otherwise exceed the supply voltage. Since it is not possible for a device to remain linear if the amplifying device is turned off or fully conducting, the output level must be low enough to ensure that neither extreme is reached. In the case of amplifiers that use an output transformer or inductor, the upper limit is actually double the supply voltage, as the inductive element adds an extra voltage that would otherwise not be available. Note that biasing circuitry is not shown in the drawing below. DC flowing in the inductor or transformer winding causes additional problems, and these are related to some of the issues faced by single-ended designs.
By definition, all single ended audio amplifiers are Class-A. They may use inductors, transformers, resistors, active current sources, the loudspeaker itself (bad idea) or even a light bulb as the current source. With all Class-A amplifiers, the amplifying device current must be slightly greater than the peak output current. For example, if the load (loudspeaker) can draw up to 4 amps, the amplifying device requires a quiescent (no signal) current of slightly more than 4A. Where the loudspeaker is used as the 'current source', output power will be limited to a few milliwatts because DC flows in the voicecoil.
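To put the 4A example in perspective, the idle dissipation can be sketched as below. The 35V supply is an assumed, illustrative figure (4A peak into 8 ohms needs about 32V peak, plus some headroom):

```python
def class_a_idle_dissipation(peak_current_a, supply_v):
    """A single-ended Class-A stage must idle at (at least) the peak
    load current, so it dissipates supply voltage x quiescent current
    continuously, even with no signal at all."""
    return supply_v * peak_current_a

# 4 A peak into 8 ohms, assumed 35 V supply:
print(class_a_idle_dissipation(4.0, 35.0))  # 140.0 W dissipated at idle
```

The maximum sinewave output under those conditions is only about 64W (32V and 4A peak into 8 ohms), which shows why single-ended Class-A efficiency is so poor.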
Figure 1 - Single-Ended Inductor And Transformer Output Stages
Note that in the two examples shown, the voltage across the amplifying device approaches double the supply voltage. While this might seem unlikely, it is quite normal and is due to the stored energy in the inductor/ transformer. This is added to and released under the control of the transistor or valve. The DC current flow through the inductive element must be at least as great as the peak current demanded by the loudspeaker load (but reduced due to transformer action for the valve example).
Without feedback, both transformer and inductor output Class-A amps tend to have a higher than normal output impedance, and this may also apply to other designs where feedback has been eliminated or minimised. Where transformers or inductors are used, the amount of feedback that can be used is usually quite modest due to high frequency phase shift in the inductive component. Increased output impedance causes colouration in most speakers, especially an increase in apparent bass and extreme treble. This is not because of Class-A, it happens with any amplifier of any class if the output impedance is greater than (close to) zero. Most amplifiers are designed to have an output impedance of less than 100mΩ (0.1 ohm), but 'low' and 'zero' feedback designs can have an output impedance of up to several ohms. Speaker systems are invariably designed to suit amplifiers with very low output impedance.

Amplifiers can be single-ended as shown above, or push-pull. Single-ended valve Class-A is popular in some circles as the so-called SET (single-ended triode) amp as shown in Figure 1. Despite being Class-A, these amplifiers generally have low power (as expected) and often very high distortion. This distortion (both simple harmonic and intermodulation) is due to the basic non-linearity of all valves, and is also partly due to the output transformer. Push-pull operation improves matters, and is described in more detail below.

There are also single-ended transistor (or MOSFET) amplifiers. Those having an inductor load used to be common in early transistorised car radios (almost always using a PNP germanium transistor), but are very uncommon today. Examples of more conventional single ended amps (by today's standards) are the Zen (by Nelson Pass) and the 'Death of Zen' (DoZ) described on the ESP website. These amplifiers are very inefficient, typically managing a best case of 25% (meaning that 75% of all power supplied to the amp is dissipated as heat).

Push-pull Class-A amplifiers use two amplifying devices, and as one conducts more, the other conducts less (and vice versa of course). At no time does either transistor or valve turn completely off, nor do they saturate (turn fully on). By definition, they must conduct (hopefully but rarely linearly) for the full 360° of each and every cycle of audio they amplify. Efficiency is still poor, but distortion is reduced dramatically because the devices are complementary, and second harmonic distortion in particular is cancelled. In fact, all even-order harmonics are cancelled, leaving only relatively low levels of odd-order harmonics. There is no fundamental difference between push-pull amplifiers of any class, other than the bias current. For Class-A, the current through the amplifying devices never falls to zero at any point during the signal waveform, or at any power level.
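The even-harmonic cancellation can be demonstrated numerically. Each device below is modelled with a simple (assumed) square-law non-linearity; the coefficients are arbitrary, but the cancellation is not:

```python
import numpy as np

# Model each device as mildly quadratic: i = g*v + k*v**2.
# In push-pull, the devices see opposite-polarity drive and their
# outputs are combined differentially, so the even-order terms cancel.
def device(v, g=1.0, k=0.2):
    return g * v + k * v ** 2

t = np.linspace(0, 1, 1024, endpoint=False)
v = np.sin(2 * np.pi * t)               # one cycle of drive

single_ended = device(v)
push_pull = device(v) - device(-v)      # differential (push-pull) output

def harmonic(sig, n):
    """Amplitude of the nth harmonic (bin n of a one-cycle window)."""
    return abs(np.fft.rfft(sig))[n] / len(sig) * 2

# Second harmonic is present single-ended, cancelled in push-pull:
print(harmonic(single_ended, 2), harmonic(push_pull, 2))
```

The single-ended stage shows a clear second harmonic (0.1 of the fundamental, for these coefficients), while in push-pull it vanishes to numerical noise. Real devices are not perfectly matched, of course, so the cancellation is never this complete in practice.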
While it is often claimed that Class-A distortion levels are always lower than those of Class-AB amplifiers, this is not necessarily the case. A well designed Class-AB amp can often achieve lower distortion and better frequency response overall than many Class-A designs - especially those claiming 'low' or 'no' feedback. Despite countless claims to the contrary, there is no intrinsic improvement in sound quality from Class-A in any form. Perceived differences are often due to output impedance, or perhaps the listener preferring the 'wall of sound' created by higher than normal distortion.

Prior to the widespread use of opamps in small-signal applications, low-level stages were always Class-A, and that remains the case for valve preamp designs. Very low distortion is possible in well designed circuits, but as with power amplifiers there is no 'magic'. It's not commonly accepted, but in general any two preamplifiers of equivalent performance (with equally low distortion and noise, and having the same bandwidth) will sound the same, regardless of the technology used - but only if tested using proper double-blind techniques.
Figure 2 - Power Device Operating Current And Typical Device Gain Vs. Current
In the above (left graph), it is obvious that the current never falls to zero, but it is very important to understand that it is not constant. Because the current varies (from 56mA up to 4.7A), so does the gain of the amplifying device, also shown (right). Valves and transistors are capable of very linear output if the current remains constant, but their gain always varies with current, and this leads to distortion. The gain vs. current graph is taken from the datasheet for a 2N3055, but nearly all devices have the same issue. Note that the typical gain of the 2N3055 varies from over 100 at 200mA down to less than 30 at 5A. There are some bipolar transistors that have remarkably flat gain vs. current graphs, and these give higher performance (and lower distortion) over their operating range, but very few have useful gain at only a few milliamps. Note that most valves have far worse behaviour in this respect - claims that they are 'inherently linear' are unfounded.
It might not look like it, but the waveform shown in Figure 2 has over 7% THD. The second harmonic is dominant, but the third isn't far behind. As always, there is a full spectrum of harmonics that diminish smoothly with increasing frequency.

Class-B

In reality, there are very, very few 'true' Class-B amplifiers. The term 'Class-B' dictates that each amplifying device conducts for exactly 180° of the signal waveform, which implies that they will not conduct at all if there is no signal. While this can certainly be done, the penalty is distortion, which will always be worst at low levels. The graph above showing the gain of a 2N3055 demonstrates that it falls with decreasing current. What is not shown is that at very low current (a few milliamps) the gain falls to almost nothing. While some power devices are a little better, it is unrealistic to expect that any device capable of 100-200W dissipation will have acceptable gain at perhaps 20mA. This applies to all known amplifying devices - including valves.

Low gain at low current means that there must be a region of low overall gain through the amplifier, and that means that negative feedback cannot remove the distortion, because the amplifier's open-loop gain is very low and little feedback is actually available. The result is what is commonly known as 'crossover' distortion, because it occurs as the signal crosses from one output device to the other.
Figure 3 - Crossover Distortion With Class-B Amplifier
In the above, the crossover distortion around the zero volt point has been deliberately exaggerated so it's easy to see. In reality it can be quite subtle, but is almost always audible, even if a distortion meter shows that overall distortion is quite low. The total harmonic distortion of the amplifier I used to simulate the above was about 1.4% at full power (120W), but because of the nature of the distortion it would be judged (quite rightly) as "bloody awful" by any passably competent listener. True Class-B is virtually impossible with valves, because their gain is too low at very low current. Almost without exception, valve amps are Class-AB - even if described as Class-B.
Because 'true' Class-B is not generally considered to be a viable option, it will not be discussed further. However, it should be obvious that Class-B can only be used with a push-pull topology.

There is one point that needs to be made though, and it's rarely discussed. In a 'true' Class-B amplifier, when both output transistors are turned off, the amplifier must have no gain. To have gain, transistors (or valves) must be in their active region, neither fully on nor off. If the amplifier has no gain with no signal, then it also has no feedback! Feedback relies on the amplifier's open-loop gain and the feedback ratio set by the feedback network. If any circuit has no gain and no feedback, then it effectively ceases to do anything useful. Only when a signal is applied and the output transistors conduct can the amp (and the feedback network) perform normally. This is why no amount of feedback or open-loop gain can ever remove crossover distortion, because at the vital zero-crossing point, there's zero gain.
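The level-dependence of crossover distortion can be illustrated with a crude dead-zone model of an unbiased output pair. The 0.6V turn-on threshold and the drive levels are illustrative assumptions, not a simulation of any particular amplifier:

```python
import numpy as np

def class_b_output(v, vbe=0.6):
    """Unbiased Class-B pair: each device needs ~0.6 V of drive before
    it conducts, leaving a dead zone around the zero crossing."""
    return np.where(v > vbe, v - vbe, 0.0) + np.where(v < -vbe, v + vbe, 0.0)

def thd(sig):
    """THD from an FFT: RMS of harmonics 2..199 relative to the fundamental
    (the signal occupies exactly one cycle of the window)."""
    spec = np.abs(np.fft.rfft(sig))
    return np.sqrt(np.sum(spec[2:200] ** 2)) / spec[1]

t = np.linspace(0, 1, 4096, endpoint=False)
high = thd(class_b_output(40.0 * np.sin(2 * np.pi * t)))   # near full power
low = thd(class_b_output(2.0 * np.sin(2 * np.pi * t)))     # low level
print(f"high level THD: {100 * high:.2f} %   low level THD: {100 * low:.2f} %")
```

Distortion rises sharply as the level falls - exactly the opposite of 'normal' amplifier distortion, and a large part of why crossover distortion is so objectionable.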
Class-AB

To eliminate the objectionable crossover distortion, almost all amplifiers (whether valve or 'solid state') use Class-AB. A small quiescent current flows in the output devices when there is no signal, and ensures that the output devices always have some overlap, where both conduct part of the signal. Some manufacturers claim that their amp operates as Class-A up to some specified power, and this can certainly be true. However, most amplifiers operate at a very modest quiescent (no signal) current, often as low as 20mA. For an 8 ohm load, that equates to a couple of milliwatts of 'Class-A operation' - hardly worth getting excited about.

It's worth mentioning that with valve amplifiers, there are two sub-categories, Class-AB¹ and Class-AB². It's generally accepted that Class-AB¹ means that output valve control grid current does not flow at any time, while with Class-AB² there is some grid current - typically only at maximum output. This means that the control grid becomes positive with respect to the cathode. As with Class-B, push-pull operation is a requirement for Class-AB, which cannot work linearly in any other mode.
Figure 4 - Basic Push-Pull Output Stages
The above stages are highly simplified, but are equally suited to Class-A, Class-B or Class-AB. The only difference between the operating modes is the quiescent current (Iq), which can vary from zero (Class-B) up to 50% of the maximum peak speaker current (Class-A). Valves have no complement (opposite polarity device), so each must be driven with an opposite polarity signal - as one device is turned on, the other is turned off. The current through each valve must be the same to prevent a net DC from flowing in the transformer windings, because that would cause premature core saturation. With the transistor stage, a single polarity signal is used because the transistors themselves are complementary (NPN and PNP), so as one turns on the other automatically turns off.
Transistors (or MOSFETs) can also be used with a transformer output in the same way as the valves shown, but this is very uncommon today. It may still be used for some specialised applications, but it is far from a mainstream technology. Several early transistorised power amps did use output transformers.

In all cases, and regardless of the class of operation (other than Class-B), the quiescent current must be carefully controlled to account for temperature variations. The bias control networks shown need to be adjustable in most cases, and additional measures taken to prevent a phenomenon called 'thermal runaway'. This happens when the transistors get hot, and draw more current than they should. This causes them to get hotter still, so they draw even more current and get even hotter ... until the output stage fails. Thermal runaway is also possible (but uncommon) with valve stages, especially if the control grid bias resistors (not shown) are a higher value than recommended.
Figure 5 - Idealised Current In Output Devices for Class-AB
The above is typical of the current measured through each output transistor for Class-AB operation. We see the transistor current vary from zero up to the full output for one ½ cycle, then the same happens in the other transistor for the second. Each transistor is turned on for very slightly more than half the waveform, and the load is shared between them. The upper part of the current waveform is provided by the NPN transistor (see Figure 4), and the negative part is provided by the PNP transistor. Any discontinuity as the signal is passed from one device to the other shows up as crossover distortion, so the bias current (Iq) must be high enough to avoid problems, but not so high that it reduces efficiency or causes excessive heat.
It's only at very low levels that we can see that there is a small area where the amplifier operates in Class-A. As noted above, this is typically only a few milliwatts. The current through the output devices still varies, but over a limited range. In a valve stage the same thing happens, but there's a larger area of 'overlap' where they operate in Class-A. This is not because valves are 'better' - in fact it's because they are far less linear than transistors and need more Class-A area or distortion will be intolerable.
Class-C, Class-E And Class-F

Class-C is only used for RF (radio frequency) applications, because it relies on a tuned (inductor/ capacitor (LC) 'tank') circuit to minimise waveform distortion. Operation is only possible over a very limited frequency range where the tank circuit is resonant. Output device conduction time is less than 180°, but the drive signal is (more or less) linear over the conduction range.

Classes E and F are similar to Class-C, and also use RF amplifier topologies that rely on LC tank circuits. Where Class-C amplifiers are common below 100MHz, Class-E amps are more popular in the VHF and microwave frequency ranges. The difference between Class-E and Class-C amplifiers is that the active device is used as a switch with Class-E, rather than operating in the linear portion of its transfer characteristic.

Class-F amplifiers resemble Class-E amplifiers, but typically use a more complex load network. In part, this network improves the impedance match between the load and the switch. Class-F is designed to eliminate the input signal's even harmonics, so the switching signal is close to being a squarewave. This improves efficiency because the switch is either saturated or turned off. [5]
Class-D

First and foremost, Class-D does not mean digital. There are several Class-D amplifiers that accept a digital input (S/PDIF for example), but the class designation was simply the next in line after A, B and C. The first commercial Class-D audio amplifier was produced by Sinclair Radionics Ltd. in the UK in 1964, but it was a failure at that time because of radio frequency interference and the lack of switching devices that were fast enough to work properly. This was before high-speed switching MOSFETs were available, and bipolar transistors of the time were far too slow. Although the MOSFET was invented in 1962, it took some time before they were commercially available, and HEXFETs didn't arrive until 1978. The earliest reference I found to something resembling Class-D was the subject of US Patent 2,821,639 in 1954, but that was a servo system for motor control and was far too slow for audio. There was also a patent taken out in 1967 for what is claimed to be a Class-D amplifier [4], and many others followed.

For more info and a detailed description of Class-D amplifiers, see the ESP article Class-D, which has far more detail than can be included here.

The unfiltered output of a Class-D amp superficially resembles a digital (on-off) signal, but it is purely analogue, and requires high speed analogue design techniques to get a design that works well. It's as far from traditional TTL or CMOS logic ICs as a valve amp design! The output of a Class-D amplifier must be filtered (using an inductor and capacitor) to remove the high switching frequency from the speaker leads and (hopefully) eliminate RF interference. Many Class-D amplifier ICs operate in 'full bridge' mode, where neither speaker lead may be earthed. See BTL (bridge tied load) below for a description.

Class-D amplifiers utilise PWM (pulse width modulation), with a perfect squarewave (exactly 50% duty cycle) representing zero output. A representation of the creation of a PWM signal is shown below. A comparator (literally an IC that compares two signals) is used, with one input fed by the desired signal, and the other fed with a high frequency triangle waveform. If the blue trace shown is filtered using a low-pass filter, the original sinewave will be restored.

The output filter is not actually necessary for a Class-D amplifier to produce sound in a speaker, and the high amplitude switching waveform will (usually!) not 'fry' the speaker's voicecoil because the impedance at the switching frequency is very high. However, without the filter, the harmonics of the PWM waveform will create substantial radio frequency interference over a wide frequency range. This is obviously unacceptable, as it could easily swamp broadcast radio (especially AM) and would cause havoc to other radio frequency bands as well.
Figure 6 - Generation Of PWM Waveform For Class-D amplifier
Notice that for a correct representation of the signal, the frequency of the PWM reference waveform must be much higher than that of the maximum input frequency - usually taken to be 20kHz. Following the Nyquist theorem, we need at least twice that frequency, but low distortion designs use higher factors (typically 5 to 30 - 100kHz to 600kHz). The PWM signal must then drive switching power conversion circuitry so that a high-power PWM signal is produced, switching from the +ve to -ve supply rails (assuming a dual supply topology). The use of BTL allows single supply operation without an output coupling capacitor, simplifying the power supply.
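The comparator scheme described above can be sketched numerically. The sample rate and the 200kHz triangle frequency below are arbitrary assumptions, chosen only to keep the arithmetic simple:

```python
import numpy as np

fs = 10_000_000                 # simulation sample rate, 10 MHz
f_sig, f_tri = 1_000, 200_000   # 1 kHz audio, 200 kHz triangle (assumed)
t = np.arange(0, 0.002, 1 / fs)

signal = 0.8 * np.sin(2 * np.pi * f_sig * t)
triangle = 2 * np.abs(2 * ((t * f_tri) % 1) - 1) - 1   # -1..+1 triangle

pwm = np.where(signal > triangle, 1.0, -1.0)           # comparator output

# With zero input, the comparator sits at exactly 50% duty cycle:
duty = np.mean(np.where(0.0 > triangle, 1.0, 0.0))
print(f"duty cycle at zero input: {duty:.2f}")

# A simple low-pass filter (here just a moving average) recovers the audio:
n = fs // 25_000
recovered = np.convolve(pwm, np.ones(n) / n, mode="same")
print(f"correlation with input: {np.corrcoef(recovered, signal)[0, 1]:.3f}")
```

Even this crude 'filter' recovers the 1kHz sinewave almost perfectly; a real amplifier uses an LC filter for the same job, without the losses a resistive filter would incur.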
The spectrum of a PWM signal has a low frequency component that is an amplified copy of the input signal, but it also contains components at the switching frequency and its harmonics, which must be removed to reconstruct the original (but amplified) modulating signal. A high power low-pass filter is necessary to achieve this. Usually, a passive LC filter is used, because it is (almost) lossless. Although there must always be some losses, in practice these are minimal.

Class-D and its derivatives are the most efficient of all amplifier technologies at medium to high power output. Switching losses mean that Class-D may be less efficient than Class-AB at low power. Early efforts had limited frequency response because very fast switching wasn't easy to achieve. The availability of dedicated PWM converters and MOSFET driver ICs has seen a huge increase in the number of products available, ranging from a few watts up to several kilowatts output.

As with all types of amplifier, there are many claims made about Class-D amps. Descriptions range from "like a tube (valve) amp" to "hard and lifeless", and almost anything you can think of in between. Some claim they have wonderful bass, while others complain that the bass is lacking, flat, flabby, etc. Very few of these comparisons have been conducted properly (double blind) and most can be discounted as biased or simply apocryphal.

I have tested and listened to quite a few Class-D amps (as well as 'Class-T' - see below), and most that I've tried are at least acceptable - bass performance in even the cheapest implementations is usually very good indeed, with some able to get to DC easily. There may be cases where the DC resistance of the output filter inductor causes a lower than expected damping factor, but this seems fairly unlikely for most of the better designs.

Some definitely have issues with the extreme top end - I can't hear above 15kHz any more, but I can measure it easily. The output filter has to be designed with a particular load impedance in mind, because this is necessary with passive filters. As a result, if the loudspeaker impedance differs from the design impedance above 10kHz, the response of the filter can never be flat. There is a trend towards using higher modulation frequencies than ever before so the filter can be tuned to a higher frequency, but there will still be some effect.
Figure 7 - Effect Of Output Filter At Different Impedances
All Class-D amplifiers need the output filter - it is essential to prevent radio and TV interference. We know that a passive filter must be designed to suit a particular impedance, but what is the ideal? The problem is that there isn't an ideal, and loudspeaker makers make no attempt to standardise on a designated impedance at (say) 20kHz. A nominal 8 ohm speaker may well measure 32 ohms (or more) at 20kHz, due to the semi-inductance of the speaker's voicecoil (the maximum impedance is usually somewhat less for tweeters and horn compression drivers because the semi-inductance is usually quite low).
In the above graph, I've shown a more-or-less typical filter circuit, along with the response with different load impedances. Should a reviewer's (or customer's) speaker happen to be 16 ohms at 20kHz, there will be a boost of 3dB at 20kHz with the filter shown. The response isn't deliberately done that way to look bad - it's a simple filter that's fairly typical of those used on commercial Class-D amplifiers. Some listeners will report that the amplifier has 'sparkling' high frequencies, while others will complain that it's 'harsh' and/ or 'ear piercing'. It's neither - it's simply an impedance mismatch. Some Class-D amps use a Zobel network at the output in an attempt to provide a predictable load impedance at 20kHz and above.
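This load dependence is easy to demonstrate. The filter values below are assumptions (roughly a 40kHz Butterworth alignment for 8 ohms), not taken from any particular amplifier, but the behaviour is typical:

```python
import math

L = 45e-6     # series inductor, henries (assumed value)
C = 0.35e-6   # shunt capacitor, farads (assumed value)

def response_db(f, r_load):
    """Gain (dB) of the LC low-pass filter driving a resistive load."""
    w = 2 * math.pi * f
    zc = 1 / (1j * w * C)                 # capacitor impedance
    zp = (r_load * zc) / (r_load + zc)    # load in parallel with C
    return 20 * math.log10(abs(zp / (zp + 1j * w * L)))

for r in (4, 8, 16, 32):
    print(f"{r:>3} ohm load: {response_db(20_000, r):+.2f} dB at 20 kHz")
```

The response is close to flat at 20kHz only near the design impedance; higher load impedances give a rising top end ('sparkling' or 'harsh', depending on the listener), and lower impedances roll it off early.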
In the past we have never had to worry too much about impedance. The power amp has a very low impedance, speakers have a variable impedance with a nominal quoted value, and no more needed to be said. Class-D has changed that, but no-one is taking notice. If speaker makers were to add a network that ensured a specific and standardised impedance at 20kHz and above, many of the disparaging claims about Class-D amps might just go away. This would also ensure that some of the (IMO 'lunatic fringe') esoteric speaker cables don't cause amplifiers to oscillate (but that's another story, described in Cable Impedance). Don't hold your breath.

More recently, many Class-D designs have included the output filter in the overall feedback loop, so the output level remains constant regardless of load impedance and (signal) frequency. In some cases, the phase shift of the filter is used to set the oscillation frequency (i.e. essentially a 'self oscillating' design). While this should stop reviewers from griping, it almost certainly will do no such thing. There are many Class-D designs that measure (and sound) every bit as good as Class-AB amps, but it's very difficult to remove prejudice from any sighted (i.e. non-blind) listening test.

Class-G

This type of amplifier is now very common for high-power amplifiers used in sound reinforcement applications. The amps are often very powerful (2kW or more in some cases), but are more efficient than Class-AB. At low power, a Class-G amp operates from relatively low voltage supply rails, minimising output transistor dissipation. When required, the signal draws current from the high voltage supply rails, using a second set of transistors to provide the signal peaks. See the ESP article that describes Class-G amplifiers in detail for more information.

Class-G amplifiers may have from 4 to 8 power supply rails (half used for the positive side and half for the negative). Four rails are quite common, and might provide ±55V and ±110V to the power amplifier as shown below.
Figure 8 - 4-Rail Power Supply Class-G Amplifier Voltages
In the above, you can see that the upper (higher voltage) supplies are used only if the output signal exceeds the lower supply rails (±55V in this example). Lower dissipation means that the heatsinks and transformer can be smaller than for a Class-AB design with the same peak power output. The output signal is shown dashed when it's being provided by the higher voltage supply rails and added output transistors.
Class-G is reasonably easy to implement (less complex than Class-D, but more complex than Class-AB), and because of the increased efficiency, the heatsinks and power transformers needed are somewhat smaller than one might expect for an amp of the quoted power rating.

There are concerns (raised all over the Net) that there will be switching noises as the supplementary supply rails are switched in and out of circuit, but there is no evidence that this is audible with programme material in any competent commercial products. While some noise may be audible (or at least measurable) with sinewave testing, it's doubtful that it will cause any identifiable distortion with speech or music. This is largely because the supplementary supplies are not switched in until the output power is already quite high, and any effects will be insignificant compared to the sound level of the signal. This isn't something I've had the opportunity to test, but major manufacturers would receive many complaints if their amps made 'untoward' noises where otherwise equivalent amps did not.
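The dissipation saving can be estimated with a simplified model. This ignores switching overlap, driver losses and device saturation voltages, and the 30V/8 ohm operating point is an arbitrary assumption, but it shows why the heatsinks can be smaller:

```python
import numpy as np

def avg_dissipation(v_peak, rails, r_load=8.0):
    """Average output-stage dissipation over one sine cycle, using the
    lowest supply rail that exceeds the instantaneous output voltage."""
    t = np.linspace(0, 1, 10_000, endpoint=False)
    v = np.abs(v_peak * np.sin(2 * np.pi * t))
    i = v / r_load
    rail = np.select([v <= r for r in rails], rails, default=rails[-1])
    return float(np.mean((rail - v) * i))

# ~56 W of output (30 V peak) into 8 ohms:
class_g = avg_dissipation(30, [55, 110])   # stays on the ±55 V rails
class_ab = avg_dissipation(30, [110])      # Class-AB from ±110 V rails
print(f"Class-G: {class_g:.0f} W   Class-AB: {class_ab:.0f} W")
```

Even this crude estimate shows the equivalent Class-AB stage dissipating well over twice as much for the same output, because the output devices always see the full supply voltage.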
Class-H

The line between Class-G and Class-H becomes more blurred as more articles are published and more designs are produced. The original Class-H amplifier (which was referred to as Class-G at the time) used a large capacitor that was charged and then switched into the circuit when needed to generate a higher supply voltage to handle transients. Other variants use an external modulated power supply (usually switchmode) that provides a voltage that is just sufficient to avoid clipping, or a supply that's 'hard' switched to a higher voltage when required.

When a Class-H amp uses a switched supply, it doesn't track the input, but is switched to a higher voltage to accommodate signal peaks that exceed the normal (low voltage) supply rails (this is shown in the light green and light blue traces below). There may be situations where the output signal is fairly constant (highly compressed audio for example), and just above the switching threshold. In this case, the amplifier can conceivably dissipate a great deal of power, but it seems that it's not a major problem because thousands are in use and failures are fairly uncommon. Because of the switching, a higher voltage is applied to the output transistors only when needed, so the output devices are subjected to a relatively low voltage for much of the time, and receive the full voltage only if necessary. This reduces the average power dissipation, and increases overall efficiency.

Some external supplies are 'tracking', which is to say that they use the audio signal to modulate the supply voltage in 'real time', so it follows the audio signal closely. Another system uses switching, so the supply voltage is raised (from a low voltage to a high voltage state) when required to reproduce a peak signal. The amplifier stage itself is linear - usually Class-AB. While making use of one or more separate supply rails for each polarity does increase total output stage dissipation at the transition voltage (it may be dramatic with some signals), the theory is that it will only happen occasionally.

When a power supply modulation principle is used, it's often done using switchmode supplies, and there are two - one for each supply voltage polarity. The quiescent supply voltage might be only ±12V, but can increase up to ±110V as needed by the output signal. The tracking supply is shown below in dark green and dark blue.
Figure 9 - Tracking Power Supply Class-H Amplifier Voltages
Do the above qualify as Class-H or Class-G? According to my classification system it's Class-H, but if you prefer to think of it as Class-G then be my guest. Either way, this can be a complex scheme to implement, but can provide the 'sound quality' of Class-AB and close to the efficiency of Class-D. Most switchmode tracking supplies are deliberately slow, so they track the audio envelope rather than individual cycles. This reduces efficiency but makes the supply far easier to implement.
One of the first amps that could be classified as Class-H was the Carver (so-called) 'magnetic field amplifier'. This used switching in the AC mains supply to vary the voltage to the main power transformer. The design was let down by the use of a transformer and heatsinks that were far too small, so sustained high power could cause the 'magic smoke' to escape and the amp wouldn't work any more.
It is commonly accepted by technicians and engineers that all electronic devices rely on 'magic smoke' held within their encapsulation. Should anything cause this smoke to escape, it means that the device can no longer function. Yes, this is facetious, but the principle is sound.
Because the lines that separate Class-G from Class-H are so blurred (they are really non-existent), it's probably fine to use either term for either type of amplifier. However, it would be nice if some convention was applied so we would know exactly what technology is used in any given amp. My preference is to classify Class-H as any design where the power supply voltage is externally modulated, such as with a tracking switchmode power supply. There is little or no agreement anywhere as to the true distinction between them though, so it's really a moot point. Feel free to consider them differently from my description, or consider them to be the same thing with different names.
Proprietary to Crown Audio, the BCA (balanced current amplifier) is a patented form of Class-D [ 2 ]. It uses a BTL (bridge-tied load) output stage, with two PWM signals in phase with each other. With zero signal, the two switching outputs cancel to give zero output, and with signal each is modulated so that one part of the switching circuit handles the positive portion of the signal, and the other handles the negative portion (allegedly!). It's claimed that the output switching signals are 'interleaved' (symmetrical interleaved PWM), hence Class-I.
It has also been claimed that little or no output filtering is used or needed, but that seems rather unlikely because of RF interference problems. Great and glowing (but largely unsubstantiated) claims are made as to how it is superior to 'ordinary' Class-D amplifiers, but the documentation is sparse and quite unhelpful from a technical standpoint. There are also some other claims that don't really stand up to scrutiny, but I don't intend to cover this in any more detail.
Intriguingly, there is also a Class-I amplifier described in a Chinese publication [ 3 ] that is completely different from that used by Crown. It's a Class-AB amplifier with an 'adaptive' power supply, which really makes it Class-H (although that depends on the description of Class-H that you might think is the least inappropriate).
Subject of patents, registered trade mark and much hoo-hah, Class-T is simply a slightly different form of Class-D, and still qualifies as Class-D, regardless of alternate claims. Tripath was the original maker of Class-T amplifiers and dedicated single ICs that usually only needed a few external passive components. Despite all the claimed benefits and a fairly wide customer base, Tripath filed for bankruptcy and was bought by Cirrus Logic in 2007. Where Class-T differs from 'classic' Class-D as described above is that the modulation technique does not use a comparator, and the switching frequency is dependent on the amplitude of the signal. As the amp approaches clipping, the frequency falls. It is claimed to be 'different' from other modulators, but there doesn't appear to be much evidence that the difference is significant - despite claims to the contrary. The modulation scheme is sometimes described as Sigma-Delta (Σ-Δ).
Class-T and several other Class-D amplifier makers share similar modulation methods, which at its simplest means adding positive feedback around an amplifier so it oscillates at between 200kHz and 600kHz or so. Naturally, if you were to apply positive feedback to a conventional Class-AB amplifier, it would fail very quickly. The output devices are not nearly fast enough and the remainder of the circuit is not optimised for switching. This means that the actual circuitry is quite different from a conventional amp, but the principle is the same.
When an amplifier is made to oscillate at 'full power' with no input signal, applying a signal changes the duty cycle of the switching waveform. As the duty cycle changes, the amplifier produces PWM by itself, without the need for a triangle waveform generator or signal comparator. A great many claims are made - especially by the now defunct Tripath and devotees - that this method is supposedly much better than all fixed frequency switching, and glowing reports of sound quality can be found all over the Net.
Despite the fact that Cirrus Logic does not appear to have done anything at all with Tripath technology, Class-T ICs are readily available all over the world, with the source of the ICs being China. Whether they are 'genuine' or otherwise is unknown, but one would think that any existing stock would have been depleted in the years since they stopped manufacture. It seems probable that those ICs currently available are not 'genuine' Tripath devices, but they seem to work well enough (yes, I have tried out a couple).
Overall, I doubt that there is really much real difference between a decent 'traditional' Class-D amp and a Class-T, and most of the comments about high frequency 'sweetness' (for example) are simply the result of the output filter interacting with the loudspeaker load. As always, unless comparisons are made using double-blind methodology and are statistically significant, then the 'results' have no value and are meaningless.
This is not a class of amplifier, but a method of using two amplifiers (of any class) to effectively double the available supply voltage. Almost all automotive sound systems use BTL amplifiers in the head unit, and each amplifier can deliver around 18W into 4 ohms from a nominal 12V supply. A single amplifier is only capable of a little over 4W under the same conditions. The only reason that BTL is included here is to dispel the myth that it's a class of operation.
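The automotive figures quoted above follow directly from the available voltage swing. A short Python sketch (idealised - rail-to-rail swing, no device losses) shows why bridging quadruples the power:

```python
import math

def sine_power(v_peak: float, load_ohms: float) -> float:
    """Average power of an undistorted sinewave of given peak into a resistive load."""
    v_rms = v_peak / math.sqrt(2)
    return v_rms ** 2 / load_ohms

SUPPLY = 12.0   # nominal automotive supply voltage
LOAD = 4.0      # loudspeaker impedance (ohms)

single = sine_power(SUPPLY / 2, LOAD)   # single-ended: swing limited to ±6V → 4.5W
btl = sine_power(SUPPLY, LOAD)          # bridged: effective swing ±12V → 18W
```

Doubling the voltage across the load quadruples the power (+6dB), which is the whole point of the BTL connection.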
Many commercial amplifiers use the BTL connection as normal, while others (particularly professional equipment) offer BTL as a switchable option to get the maximum possible power (often far more than any known loudspeaker can actually handle without eventual, or even immediate, failure). A basic diagram of a BTL amplifier is shown below; in this case it's a pair of the same amps that were shown in Figure 1 - Class-A inductor load. I used this amplifier because it's the most unlikely - solely to prove a point.
Figure 10 - BTL Connection Based On Class-A Amplifiers
As already explained, using an inductor gives you a voltage swing of almost double the supply voltage. The peak-to-peak voltage from each amp is 56V (19.8V RMS), but when connected in bridge the output is 39.6V RMS. Power into an 8 ohm load is 196W, but each amplifier sees an equivalent load impedance of half the speaker impedance. If the individual amps are only rated for 8 ohm loads, then the speaker must be 16 ohms and power will be 98W.
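The Figure 10 arithmetic can be verified in a few lines of Python (idealised sinewave maths, no losses):

```python
import math

v_pp_each = 56.0                              # peak-to-peak swing of each amplifier
v_rms_each = v_pp_each / 2 / math.sqrt(2)     # ≈ 19.8V RMS per amplifier
v_rms_btl = 2 * v_rms_each                    # ≈ 39.6V RMS across the load in bridge

p_8ohm = v_rms_btl ** 2 / 8    # ≈ 196W (but each amp then sees 4 ohms)
p_16ohm = v_rms_btl ** 2 / 16  # ≈ 98W (each amp sees a comfortable 8 ohms)
```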
The main thing to remember here is that BTL is not an amplifier class, it can be used with any class of amp.
+ + +![]() | + + + + + + |
Elliott Sound Products - How Much Power?
The question posed above is a truly vexing one, and there are as many answers as there are people asking the question. The short answer is "it depends", and I readily admit that this probably doesn't qualify as a useful answer for most people. To make matters worse, the long answer is the same as the short one, so we need to examine the dependencies. Unfortunately, there are a great many, and they change with the type of music you like, your loudspeakers, your room, and whether you expect to reproduce 'concert level' SPL (sound pressure level) in your listening space.
I'm not going to look at 'pro audio' as used for sound reinforcement in large (or even small) venues, although the basics apply equally regardless of the specific application. However, there is a short section that might help. It all starts with the loudspeakers, and is heavily influenced by your expectations. Depending on your age group, you will have different needs, taste in music, and tolerance for loud music (or a requirement for louder than normal music to compensate for hearing loss).
The last point is critical. When we were (or are) young, loud music was expected, and in general, the louder the better. Unfortunately, this means that you will suffer from hearing loss and/ or tinnitus (ringing in the ears) when you get older. Naturally, young people are psychologically incapable of projecting themselves into the future to understand that what you do when young can stay with you for life. In case you were wondering, I was no different, and I'm now the unhappy sufferer of tinnitus, which is permanent, never-ending and incurable. While my hearing threshold is raised (so I can't hear very soft sounds), my tolerance for very loud music (or noise) is reduced. I'm by no means alone.
It should be self-evident that amps should be used within their limits. An amplifier that's clipping some (or most) of the time is acceptable only if it's a guitar amp, as the vast majority are used with heavy overdrive. For hi-fi, this is obviously unacceptable, as you literally 'lose' up to half of the music. One of the many (and mostly false) claims that you'll see is that when an amp clips it outputs DC. This is unmitigated drivel! If an amplifier outputs DC, it has failed, and isn't an amplifier any more. Clipped AC is still AC, regardless of how heavily it's clipped. The polarity alternates from positive to negative, so it takes a particularly twisted view of physics to imagine that this somehow equates to DC. Sometimes you'll see claims of "little bits of DC", which is also drivel and again ignores basic physics. What actually happens is that the power is at its maximum possible value on a more-or-less permanent basis (while the amp is being abused by heavy clipping). A 20W amp driven into full clipping may output up to 40W, and much more of that power is delivered to the tweeter than normally will be the case.
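The '20W amp delivering up to 40W' claim is easy to verify: a fully clipped sinewave approaches a square wave, and for the same peak voltage a square wave carries twice the average power. A quick Python check (idealised, resistive load):

```python
import math

def sine_power_from_peak(v_peak: float, r: float) -> float:
    """Average power of a clean sinewave with the given peak voltage."""
    return (v_peak / math.sqrt(2)) ** 2 / r

def square_power_from_peak(v_peak: float, r: float) -> float:
    """Fully clipped (square wave) limit: RMS equals the peak voltage."""
    return v_peak ** 2 / r

R = 8.0
v_peak = math.sqrt(2 * 20 * R)                   # peak voltage of a 20W sinewave
sine = sine_power_from_peak(v_peak, R)           # 20W - the rated power
clipped = square_power_from_peak(v_peak, R)      # 40W - double, as stated above
```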
Without exception, this article concentrates on amplifiers used within their ratings, providing normal programme material with no more than a very occasional transient being clipped. This will generally go un-noticed by the majority of listeners. Provided you listen at 'sensible' levels, which means an average power of only about 1-2W, it doesn't matter if your amp is rated for 50W or 5kW - most of the power will never be used. Naturally, if you do have a 5kW amp and someone turns it way up, your speakers probably won't survive for more than a few seconds.
While it is a most useful form of reference, the dB/W (or just dBW) only enjoyed a very brief spell in the limelight. By definition, it's an amplifier's output power, referenced to 1W. For example, a 50W amplifier would have an output level of 17dBW (8 ohms) or perhaps 19.8dBW into 4 ohms. With this information, you can determine the peak output of any loudspeaker, simply by adding the speaker's sensitivity (dB/W/m) to that of the amplifier. For the 50W case and the 'reference' 8Ω speaker I've used in this article (86dB/W/m), you get 103dB - the peak level at 1 metre at the onset of clipping. This is useful, but you must still contend with the room, listening distance and a myriad of other minutiae that all influence the sound level at the listening position.
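The dBW arithmetic is a one-liner, and can be checked with this Python sketch using the article's 86dB/W/m reference speaker:

```python
import math

def dbw(power_w: float) -> float:
    """Amplifier output power expressed in dB relative to 1W."""
    return 10 * math.log10(power_w)

SENSITIVITY = 86.0                     # dB/W/m, the 'reference' speaker used here
peak_spl_1m = SENSITIVITY + dbw(50)    # ≈ 103dB SPL at 1m at the onset of clipping
```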
An orchestra has also been used as a reference, because it's more predictable than a rock concert. Of course, not everyone is 'into' orchestral music, but it remains one of the most demanding in terms of reproduction in the home. With 'pop/ rock' and other genres, live performance levels depend on the venue size and where you sit/ stand, the type of PA (public address/ sound reinforcement system) and how much power is available (which often exceeds 50,000W - 50kW!), and there is no way to predict the level. An orchestra at full crescendo will produce around 100dB SPL (average, 110dB SPL peak) at the third row [ 1 ], representing an acoustic power of about 400mW (0.4W, with 4W peak). From this, we can extrapolate the electrical power required (based on 100dB SPL), provided we have enough information. This still doesn't mean that there's an exact answer, because there most certainly is not.
Project 191 is specifically designed to let you monitor the peak voltage and current delivered by your amplifier, under normal listening conditions. It's not something that is common, but if you really want to know how much peak power you are using, then it's well worth building. It's far easier than trying to use an oscilloscope to monitor the voltage, as it is too easy to miss a transient that causes the amp to clip momentarily.
There are many charts and guidelines, but the following is a pretty good estimation of the likelihood of hearing damage (from any source - not just loud music). Many hi-fi systems (and especially headphones) are able to create sound levels capable of causing permanent hearing loss, so you must be very careful to avoid damaging levels. While we do have two ears, one is not a spare!
Continuous dB SPL     Maximum Exposure Time
       85             8 hours
       88             4 hours
       91             2 hours
       94             1 hour
       97             30 minutes
      100             15 minutes
      103             7.5 minutes
      106             < 4 minutes
      109             < 2 minutes
      112             ~ 1 minute
      115             ~ 30 seconds
Note that the exposure time is for any 24 hour period, and is halved for each 3dB SPL above 85dB. The above shows the accepted standards for recommended permissible exposure time for continuous time weighted average noise, according to NIOSH (National Institute for Occupational Safety and Health) and CDC (Centres for Disease Control) [ 2 ]. Although these standards are US based, they apply pretty much equally in most countries - hearing loss is not affected by national boundaries.
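The table follows a simple rule - 8 hours at 85dB SPL, halved for every 3dB above that - which can be expressed directly. A Python sketch of the NIOSH halving rule:

```python
def max_exposure_hours(spl_db: float) -> float:
    """NIOSH-style limit: 8 hours at 85dB SPL, halved for every 3dB above it."""
    return 8.0 / 2 ** ((spl_db - 85.0) / 3.0)

# 103dB → 0.125 hours (7.5 minutes), matching the table above
```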
You need to be aware that if your ears 'ring' after a concert or even a loud listening session at home, that indicates you have done permanent, irreparable damage to your hearing. Yes, it will pass after a few hours or days, but if you keep doing it, it eventually becomes permanent, and is called tinnitus [ 3 ]. As a sufferer, I can assure readers that it's not something to aspire to. Home listening will rarely be loud enough to cause problems, especially for people who live in apartments - the neighbours will let you know (in no uncertain terms) when you've reached their limits. Concerts (and of course, industrial (work related or otherwise) noise) are prime causes of hearing damage.
This is not intended to scare anyone - sensible levels are ... sensible, but it's often easier than you imagine to drive a hi-fi system loud enough for long enough to cause problems. This information is provided because it's relevant to how we listen, and therefore how much power we really need for a home hi-fi system. The 'reference' level (used to calibrate sound level meters) is 1 Pascal, which is 94dB SPL. The calibration process involves producing exactly 94dB SPL at 1kHz, usually in a small chamber into which the meter's microphone is inserted. You don't need to know this, but it's provided as 'background' information that might come in handy one day.
Sensitivity (or efficiency) is the first thing that needs to be assessed. The range is fairly wide, depending on the driver(s) themselves, and how they are configured (e.g. horn loaded, direct radiating, etc.). The listening space also plays a significant role, in particular whether it is reverberant or highly damped. Most home listening spaces are somewhere in between, and the level of room treatment (if used), furnishings, floor covering, etc., influences the amount of reverberation experienced. Some people go to great lengths to treat their listening space for the best reproduction, while others allow fashion to dictate the space. While these things all matter when it comes to the quality of reproduction, the effects on SPL are less certain.
The distance between your listening position and the loudspeakers makes a big difference, so 'near field' listening (where the speakers are no more than around 1 metre from you) requires less power than if the speakers are at one end of a large room and you listen from the other end. This is generally not considered to be a good idea, but this article is about how much power you need, rather than optimum speaker and listener placement. The latter is a topic unto itself, and there are probably as many opinions as there are listening spaces, although there really are many sensible guidelines.
If a typical domestic hi-fi (or even 'lo-fi') speaker system has a sensitivity of (say) 86dB/W/m, that means that with an input of 1W, the SPL will be 86dB at a distance of 1 metre from the speaker. Sensitivity isn't always specified correctly, with some manufacturers using a 'reference' voltage of 2.83V RMS regardless of the actual impedance (2.83V gives 1W into 8 ohms). If the speakers are 4 ohms, that's 2W, not 1W, a difference of +3dB.
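The 2.83V 'trick' is easy to quantify (Python, idealised resistive loads):

```python
import math

def power_at_2v83(impedance_ohms: float) -> float:
    """Power delivered by the 2.83V RMS 'reference' voltage into a given impedance."""
    return 2.83 ** 2 / impedance_ohms

p8 = power_at_2v83(8)                  # ≈ 1W, so dB/2.83V/m equals dB/W/m at 8 ohms
p4 = power_at_2v83(4)                  # ≈ 2W for a 4 ohm speaker
diff_db = 10 * math.log10(p4 / p8)     # ≈ +3dB flattering the 4 ohm specification
```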
For much of this discussion, 86dB/W/m will be assumed, as it's representative of many systems. Higher sensitivity is better, but that often compromises other parameters (such as box size for the required low frequency cutoff point). The design of loudspeaker drivers involves many compromises, and efficiency is one of the first parameters that suffers in order to get good low frequency response. Before we go too much further, I suggest that you read Power Vs. Efficiency, which examines the power handling capacity of loudspeaker drivers. Since even a high efficiency loudspeaker will be lucky to exceed 5% efficiency, the vast majority of the remaining power is dissipated as heat in the voicecoil. There are losses in the suspension and even the magnetic circuit, but these don't amount to very much in the majority of drivers.
The hypothetical speaker described here (86dB/W/m) has an efficiency of around 0.2%, meaning that for every watt of input, only 0.2W emerges as acoustic energy (sound). The remaining 99.8% of the input power is converted into heat. If one were to find a driver that measured 112.1dB/W/m, it would be 100% efficient, with no losses at all. Needless to say, this driver does not exist. Even horn compression drivers have a theoretical maximum efficiency of 50% (measured using a plane-wave tube), and most will only manage around 24-30% in real life. These are the most efficient (conventional) drivers that exist, and expecting anything to be 100% efficient is unrealistic. You can calculate the efficiency easily with the following formula ...
Efficiency (%) = 10^((dB SPL - 112.1) / 10) × 100
For our speaker, that calculates to an efficiency of only 0.245% - hardly something to crow about. While efficiency is important for a large auditorium or other spacious venue, it's pretty much a non-issue for home listening. Certainly, there are people who love their high-efficiency horn loaded systems driven by a couple of watts from a tiny valve (vacuum tube) amplifier, but that's not something most listeners want. Horn loading is a wonderful concept, but for low frequency performance, the horn has to be large (both in length and mouth area). This rarely sits well in most home environments.
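The formula translates directly to code (Python; the function name is mine):

```python
def efficiency_percent(sensitivity_db: float) -> float:
    """Driver efficiency from sensitivity (dB/W/m); 112.1dB/W/m represents 100%."""
    return 10 ** ((sensitivity_db - 112.1) / 10) * 100

# efficiency_percent(86) ≈ 0.245% - the 86dB/W/m reference speaker
# efficiency_percent(112.1) == 100% - the (non-existent) lossless driver
```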
The speaker sensitivity itself doesn't really tell us very much, other than how loud it will be at a distance of one metre and with one Watt of input power. This isn't the average listener's primary criterion, because we want to know how loud it will be at the listening position. Almost all systems will be stereo, so there are two sources, driven by two amplifiers. If each is driven with 1W, the acoustic power into the listening space is doubled (+3dB). This falls by 6dB for each doubling of distance in free field (the inverse square law). This is almost never the case in reality, because few (if any) home speakers are operated in free field (i.e. open space without 'significant' boundaries). At some point in the room, one enters the 'reverberant field', where the level is relatively unchanged by distance. The point where this occurs is known as the 'critical distance'. It's highly dependent on the room geometry, room treatment and directionality of the loudspeakers.
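Putting the free-field pieces together - sensitivity, power, distance and the number of sources - gives a rough estimator. This Python sketch is a free-field approximation only; real rooms add a reverberant field, so indoors it should be treated as a lower bound:

```python
import math

def spl_at_distance(sensitivity_db: float, power_w: float,
                    distance_m: float, sources: int = 1) -> float:
    """Free-field SPL estimate at the listening position.

    Inverse square law (-6dB per doubling of distance) plus +3dB per
    doubling of sources. Ignores the room's reverberant field entirely.
    """
    spl = sensitivity_db + 10 * math.log10(power_w)
    spl += 10 * math.log10(sources)       # two speakers at 1W each: +3dB
    spl -= 20 * math.log10(distance_m)    # -6dB per doubling of distance
    return spl

# Two 86dB/W/m speakers at 1W each, 1 metre away → ≈ 89dB SPL
```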
Part of the difficulty of analysing and understanding loudspeaker efficiency ratings is that we are rarely told how the measurement was taken. The microphone will usually be on the axis of the speaker (or the mid-point where multiple drivers are used), and its location can be any distance from the source, 'normalised' to the level at 1 metre. What we generally are not told is whether the measurement was in 'free space' (no significant boundaries), half space (on an infinite baffle, with no other significant boundaries), or using some other (perhaps proprietary) method. A speaker's directionality also comes into play, so a driver with a horn or a waveguide may show a significant improvement in sensitivity, because it's directional. Even if a speaker appears to indicate that its sensitivity is greater than the theoretical maximum of 112.1dB/W/m, that doesn't mean the supplier is lying (not that this would ever happen in audio, of course), because if it's highly directional that improves the apparent efficiency.
Omnidirectional speakers (equal radiation in all directions) are preferred by some, along with other arrangements such as dipole (figure-8 radiation pattern). These can generally be expected to show reduced sensitivity compared to a 'normal' forward radiating design, but the way the sensitivity is measured can sometimes be misleading. Not too many people have access to calibrated microphones or sound level meters, but you can often get a passable estimate using nothing more than a smartphone app. These are far from precision devices (even the best of them), largely due to the microphone and the acoustically large (at high frequencies) case of the phone itself. However, it's better than nothing, and great precision isn't necessary because music is so varied.
The listening room plays a very significant role in the reproduction of music of any genre. Many hi-fi enthusiasts will have a well damped listening room, with absorption panels to limit the amount of reverberation, and diffusers to ensure that the remaining reverb is scattered. Bass traps (absorbers) may also be used to limit standing waves. Soft furnishings, rugs or carpet, heavy curtains and bookshelves (preferably filled with books) all help to create a diffuse sound field, so the sound directly from the loudspeakers is by far the most dominant. This diffuse field with a minimum of reverberation (and especially so-called 'slap' echoes, because they are very distinct) ultimately reduces the overall level you hear.
You can almost always get a good idea of a very reverberant space in a bathroom, which will have tiled floor and walls, and minimum (if any) sound absorbing material except maybe a bath mat and a couple of towels. If your listening room is similar, then it's unrealistic to expect good clear sound, because the reverb will muddy everything. In some cases, the listening environment is dictated by 'fashion', which at the moment seems to mean minimalist, with hard floors, lots of glass (but no heavy drapes/ curtains), and few (if any) rugs. In many cases, a hi-fi or home theatre layout and furnishings may be dictated by 'SWMBO' (s/he who must be obeyed), and there's little that can be done to appease one's partner and get a satisfactory listening space.
A reverberant room usually needs very little power before the entire performance turns into mush - with all the details obscured by echoes of varying durations. While this is far from ideal, for many people there is simply no choice, other than to use headphones for personal listening, or to set up a 'man/ woman cave' where things can be arranged to get a reasonably acoustically dead listening environment. The latter may not be possible either, unless one's domicile has an extra room that can be dedicated and shut off from the rest of the house.
With apartment living now becoming much more common than it once was, there will be definite limits to the level you can produce, lest the wrath of the neighbours descend upon you with great force. Bass is especially troublesome, because it can penetrate concrete walls, floors and ceilings if loud enough. Bass also has the ability to travel great distances without significant attenuation by the atmosphere. Most people have to deal with what they have, and when you have close neighbours it's quite surprising just how little power can be used before someone starts hammering on your door.
There are countless articles on the Net about room treatment, and if that's something you wish to examine in detail I suggest a search. Beware of snake oil - many vendors sell 'products' that achieve exactly nothing (other than in the mind), and these are usually easily identified. Bags of coloured rocks (no, I'm not joking), small stick-on patches, 'holographic' images and the like cannot change a room's acoustic properties, and are fraudulent. Many people claim that a room can be 'equalised', but this is also false. The effects of reverberation and/ or echoes are time related, and you cannot correct time with amplitude (which is all an equaliser can alter). Unfortunately, the home theatre industry seems to have convinced people otherwise, which is shameful IMO. However, in some instances the use of EQ can compensate for speakers that are 'inadequate' at frequency extremes, or have pronounced peaks at some frequencies (for example). However, it must always be understood that ...
You Cannot Correct Time With Amplitude
Equalisers affect only amplitude and phase (the latter is 'incidental', and occurs when any filter is applied), and there is no amount of equalisation that can genuinely 'fix' a bad room. The frequency response at the listening position can be 'corrected' to some degree, but that only means that it will be far worse elsewhere in the listening space. This has become one of the greatest myths around, with respected manufacturers providing (often 'automatic') 'room correction' features on equipment, because that's what the buyers want. Such 'room correction' uses a microphone, and these do not (and cannot) 'hear' the same way that we do. A great deal (most probably the majority) of our hearing is in the brain, not our ears, something that cannot be replicated by current systems.
The next issue is dynamic range. An orchestra has a (theoretical) dynamic range of perhaps 70-80dB, with up to 85dB being possible, but not necessarily achievable in real life. Much depends on the venue, how much audience noise is present, and the ambient noise in the venue itself. With other music genres, the range is from perhaps 60-70dB down to almost zero (i.e. the music starts loud, and is loud throughout the performance). The only break may be between songs, so the effective dynamic range can be as low as 6dB (a power ratio of 4:1).
Recordings are often much the same. Some have good dynamics, and include soft bits and loud bits as required, but many 'post-production' facilities have engaged in the 'loudness war' [ 6, 7 ] that has been with us in earnest since the 1970s or so (it actually started in the 1940s!), where every recording made tries to be louder than anything that came before it. That this is a travesty is not disputed by many; others don't seem to mind that a solo acoustic instrument is just as loud as the whole band/ orchestra (etc.) playing at full crescendo. Unfortunately, the idea of pp (pianissimo - very soft) and ff (fortissimo - very loud) [ 4 ] seems to have been lost in many recordings. ppp (softer than very soft) and fff (louder than very loud!) are gone - to some producers, everything has to be fff or people will presumably not like it.
Almost without exception, recordings use varying amounts of compression to limit the dynamic range. Some take it to extremes, so there is little or no variation of loudness, leading to flat, lifeless recordings that might be alright in the noisy interior of a car, but that sound dreadful when heard on a good system in the home. The available dynamic range also depends on the ambient noise in the listening space, and unless you are blessed with a rural location far from the madding crowd, you can generally consider yourself lucky if the background noise level is below around 30dB SPL. Traffic noise, planes, trains and neighbours all conspire against getting much better than this, other than late at night. Few of us have the luxury of a dedicated soundproofed listening room. Note that in almost all cases, ambient noise is measured using A-Weighting (see A-Weighting (Sound Level Measurements & Reality) for my take on this).
Although we can hear things that are below the noise floor, we can't expect to hear them clearly. In general, that means that we'll probably have the TV set for a level approximately the same as 'normal speech' (typically around 60dB SPL), perhaps a little more depending on the programme material (and to compensate for hearing loss - especially for older people). This is also a realistic level for background music, but if we are having a listening 'session' that may increase somewhat.
Remember that our reference speaker has a sensitivity of 86dB/W/m, so to get an average (and comfortable) level of 75dB SPL, we probably won't need more than 100mW/channel (average, and assuming a reasonable listening space). However, audio isn't about averages, because there are dynamics involved. We must also consider the peak to average ratio, i.e. how much power is needed to reproduce peak levels for the average output level of interest. Consensus is hard to find on this, and it can vary from 20dB down to as little as 6dB, depending on the material itself, and how much post-processing (predominantly compression) has been applied. An average figure of 10dB is, well, average. There are also some potentially confusing references to 'crest factor', which is ostensibly the same thing, but is sometimes used in unexpected ways. For example, a pure sinewave has a crest factor of 3dB, meaning that the ratio of the peak to RMS voltage is 1.414 (√2).
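The power figures above follow directly from the sensitivity. A Python sketch (at 1 metre, free field, so indicative only):

```python
def power_needed(target_spl: float, sensitivity_db: float) -> float:
    """Amplifier power (watts) needed for a target SPL at 1m, free field."""
    return 10 ** ((target_spl - sensitivity_db) / 10)

avg = power_needed(75, 86)    # ≈ 80mW average for a comfortable 75dB SPL
peak = avg * 10               # +10dB 'average' peak-to-average ratio → ≈ 0.8W
```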
Some amplifiers (e.g. B&O Icepower modules, based on the datasheet for the ASX series) are designed for a peak to average ratio of 8:1 (18dB), and if operated at close to full power with material having a lower dynamic range, the amp will shut down due to over-temperature. Rest assured that many other manufacturers will take a similar approach. Ultimately, it's all about heatsinking, and keeping it to a minimum consistent with normal programme material. Heatsinks are bulky and expensive, so minimising them reduces costs. If you build your own gear there's no limit to the heatsink that can be used, but ultimately cost has to be considered.
Essentially, the goal is to determine just how much power is needed from an amplifier to provide a satisfactory level in the listening space, without clipping, and without causing the amplifier to overheat. If we take 85dB SPL as our basic target level, this is not unreasonable, and as shown in the table above, our ears can handle that for 8 hours without causing damage. If we now assume a peak to average ratio of 10dB, that means that we need 10W to reproduce the peaks. In theory then, a 10W/ channel amplifier will do just fine. Or will it?
A mere 10W/ channel actually will be alright, but it's very limiting. You'll be able to listen to (most) music at up to 85dB SPL, but if you try turning it up for a track you particularly like (or because there is more background noise than normal), you'll run out of power and the amplifier will clip the transients and higher level peaks. Equally, some music has a wider peak/average ratio, with anything up to 20dB being common with well engineered material. Some (small) amount of clipping can go un-noticed, and to save you the trouble, some CDs are produced with the material pre-clipped, saving you all the bother of over-driving your amplifier. This is (of course) an appalling state of affairs and should never happen, but it does.
We mustn't forget about the well reported claims that underpowered amplifiers cause speaker failure (tweeters in particular). While it can be argued that this is a myth, there is a modicum of truth behind it, which makes it harder to dispel. The problem is not the underpowered amplifier, it's the user pushing up the volume until the amp is in heavy clipping. What this does is limit the dynamic range, which may fall below 3dB. Imagine a 50W power amp, which can deliver up to 5W average power before the peaks start to clip. Now increase the volume until it's clipping badly (around 50% of the time). The 'normal' peak to average ratio is reduced from around 10dB to as little as 3dB, so the average power is now closer to 50W, and not the 5W normally expected. In particular, the HF energy is increased disproportionately, due to a combination of extreme compression and additional harmonics.
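The numbers in that example can be checked with a simple peak-to-average calculation. This Python sketch assumes the fully clipped peak approaches the square-wave limit of twice the rated power (the worst case described earlier):

```python
def avg_power(peak_w: float, peak_to_avg_db: float) -> float:
    """Average power given the peak power and the peak-to-average ratio in dB."""
    return peak_w / 10 ** (peak_to_avg_db / 10)

normal_avg = avg_power(50, 10)     # clean 50W amp, 10dB ratio → 5W average
clipped_avg = avg_power(100, 3)    # heavy clipping: peak approaches the 100W
                                   # square-wave limit, ratio shrinks to ~3dB
                                   # → around 50W average into the speaker
```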
+ +To take this idea of 'underpowered amps kill speakers' to its extremes, by that reasoning, a 10W amplifier should be able to destroy any speaker ever made. This is quite clearly nonsensical, but not by as much as you may think. There remain people who insist that underpowered amplifiers are the #1 cause of speaker failure, but never provide proof, analysis or bench-test data to support their claim. The reader should always remain vigilant when reading forum posts or articles churned out as 'click-bait' (something that's become depressingly common). Without a detailed analysis, most claims made are worthless!
+ +The idea of using a larger amplifier to provide 'headroom' can only work if the additional power remains unused. In other words, you can avoid speaker damage by using a bigger amplifier, but not if the user increases the volume so that all the power is used again. When a speaker (driver or system) is rated for (say) 100W, that doesn't mean that it can handle the full power at any frequency, it means that if used sensibly (without excessive clipping) it can handle the output from a 100W amplifier with normal programme material. As a specific example, a tweeter rated for 100W will die almost instantly if you actually apply 100W to it - the rating is for system power only. In normal use, that same tweeter will only get around 10W, and that's close to its maximum power handling capacity. In general, it's not unreasonable to use a 150-200W amplifier with speakers rated for 100W, but only if the amp is never driven into clipping!
Speakers also need headroom. If a driver is operated at (or near) its maximum rating for long periods, the voicecoil will get hot, and its resistance increases. This raises the speaker's impedance, so less power can be delivered for a given voltage. Power compression isn't a common problem with home hi-fi, but it's of considerable concern for high power systems. If your drivers have no headroom, they will not be able to respond to crescendos in the music. The power may increase by (say) 15dB, but the speaker might only manage 10dB, so you lose dynamics. At worst, this turns the music to mush - detail is lost, and the music sounds compressed and lifeless.
+ +All of this brings us back to the original question ... How much power do you need? As should now be apparent, the question might seem simple, but the answers are not.
For most systems these days, it seems to be the accepted norm that somewhere between 50W and 150W/ channel is 'about right'. Even with low efficiency drivers, this lets you get to an average level of between 5 and 15W, with peaks taking up the remainder. That corresponds to a listening level of perhaps 92dB SPL (room and distance dependent of course), and up to 97dB SPL with a 150W amp - assuming the speakers can handle the power of course.
+ +If we assume that the peak to average ratio is somewhere around 4:1 (12dB), then our 50W amplifier can deliver peak levels (transients) of around 102dB, with the average level being about 90dB SPL. However, nothing is 'cast in stone', because the type of music you listen to makes a very big difference to the overall experience. Some music may present much greater peak to average ratios than the 4:1 quoted, and it's extremely difficult to get reliable information on this unless you perform your own measurements. Material with a wide dynamic range (say 40dB or more) may leave soft passages down around 50dB SPL, which is fairly soft. It's unrealistic to expect that a home hi-fi can handle the full dynamic range of an orchestra, while keeping the softest passages above the ('typical') noise floor of 30dB SPL. That would require that the system reproduce up to 100dB SPL average, meaning that peaks may reach 112dB SPL.
+ +To examine this issue, we need to look at amp power again. We know that as a rough guide we'll get around 86dB SPL with 1W (stereo of course), so to get another 26dB above that to accommodate peaks/ transients, we need a bit over 400W of amplifier power for each speaker. The average power during loud passages will be a little over 31W. While these may not look too outlandish at first glance, very, very few home speaker systems will tolerate 500W peak power without serious distortion. If this power is maintained for any length of time, driver failure will be the inevitable result.
+ +One respected manufacturer (Klipsch [ 5 ]) used a fully horn loaded system to get around this. The systems were primarily designed with wide dynamic range material in mind, and used a folded corner horn with a horn loaded compression driver for the top end. This was never an approach taken lightly, and while a great many found their way into domestic environments, the actual number would be tiny compared to more conventional (or perhaps less unconventional) loudspeakers.
For the vast majority of listeners, an amplifier capable of delivering around 50 - 150W will be more than sufficient, but another approach helps squeeze every last Watt from a system - biamping or triamping. This topic is covered in some detail in the article that started the ESP website - Biamping - Not Quite Magic (But Close). By splitting the audio signal prior to the main power amplifiers there are some real gains to be had in terms of SPL, but there are other benefits as well. Of these, the flexibility of an electronic crossover can't be matched by any passive design, but as with everything, there are limits. One of the biggest is that it becomes extremely difficult to swap speakers around: with an external crossover network the speakers can't just be connected to any amplifier you like, and you need two (or three) stereo amplifiers rather than just one.
Average dB SPL | Average Power (W) | Peak Power (W) | Peak Power (W) ¹
      56       |        1m         |      10m       |      32m
      66       |       10m         |     100m       |     320m
      76       |      100m         |       1        |      3.2
      86       |  1 (reference)    |      10        |       32
      96       |       10          |     100        |      320
     106       |      100          |    1,000       |    3,200
     116       |   1,000 (1k)      |    10k (!)     |    32k (!!)
Every 10dB requires that the power is multiplied (or divided) by a factor of ten. If we allow for 15dB peaks (indicated in the last column ¹), this is increased to a factor of 32. From this you can see that at more-or-less typical listening levels (which I'd place at around 75dB SPL, ±10dB) you only average about 100mW, and peaks won't go much beyond 1W (3.2W 'worst case'). If we were to choose 86dB (which is actually quite loud), that increases to 1W and 10W for peaks. Note however that the 10:1 ratio is far from an exact science, and it can vary by up to +10dB. Some instruments are also capable of large amounts of asymmetry (muted trumpet is one of the most extreme I've seen), so the peaks are predominantly of one polarity. In some cases, this may increase the required voltage swing further than expected.
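The whole table follows directly from the ten-times-per-10dB rule. A quick sketch that regenerates it (86dB SPL for 1W is the assumed reference, with 10dB 'normal' and 15dB 'worst case' peak allowances):

```python
def table_row(avg_spl, ref_spl=86.0):
    """Average, 10 dB peak and 15 dB peak power (watts) for a given
    average SPL, referred to 86 dB SPL at 1 W."""
    avg = 10 ** ((avg_spl - ref_spl) / 10)
    return avg, avg * 10, avg * 10 ** 1.5   # 10**1.5 is ~31.6, rounded to 32 in the table

for spl in range(56, 117, 10):
    avg, peak, worst = table_row(spl)
    print(f"{spl} dB SPL: {avg:g} W avg, {peak:g} W peak, {worst:g} W worst case")
```

Note that the '32' factor in the last column is really 10^1.5 (about 31.6), rounded for convenience.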
+ +It may not seem right, but doubling the power (a 3dB increase) is not 'twice as loud'. To obtain the subjective effect of 'twice as loud' means that the power must be increased by 10dB - 10 times as much. In fact, an increase of 3dB is audible, but not readily so, and the smallest increase (or decrease) that we are able to discern is 1dB. Indeed, the very definition of the decibel is based on the smallest increment that's normally audible, but in a subjective test, listening levels should be matched to within 0.1dB. Our hearing tricks us in many unexpected ways, and a system that's 1dB louder than another will usually be declared as sounding 'better' (assuming both have equivalent frequency response, directivity, etc.).
+ +Bass heavy material can be far more demanding than other programme material, with a much higher than expected peak to average ratio. This is an area where biamping (and up to full 4-way active) systems will outperform almost any system using passive crossovers. The subwoofer can require far more power than expected, depending on the topology. For example, mine uses an equalised sealed box, and while it can get to 20Hz easily enough, it needs 400W to do so at 'realistic' sound levels. That's more power just for the sub than I have available in the rest of the system combined (which is 3-way active, not counting the sub). I seriously doubt that I have ever clipped the main amplifiers.
+ +At this point, we've come close to a full circle. All of the aspects have been examined, and the answer is still "it depends". However, if you know the speaker sensitivity and your preferred listening level, you can get a reasonable estimate. It also turns out that somewhere between 50 and 150W really is 'about right', with the higher power generally needed only if you listen at higher levels than normal, have particularly inefficient speakers, or listen to material with a much greater dynamic range than most 'modern' recordings (whether analogue [vinyl] or digital).
+ +This doesn't mean that a higher power amp is necessarily 'wasted'. If your passion is classical music, it can have a large dynamic range (but only if well recorded of course). Other recordings can also have a greater than 'normal' dynamic range, and if your speakers can handle short bursts of up to 200W or more, then a bigger than average amplifier may well be warranted. Bear in mind that when most 'typical' home hi-fi speakers are driven to their limits, loudspeaker driver distortion can quickly become a serious problem. Consider that a mid-bass driver may normally show an excursion of perhaps 2-5mm with bass, and an excessively powerful amp may push/ pull the voicecoil well outside the magnetic gap. This leads to high levels of amplitude modulation of higher frequencies, which in turn causes excessive intermodulation distortion products and greatly reduced performance overall. Everything has its limits!
+ +One of the easiest ways to get more SPL is to use more efficient speakers. I've used 86dB/W/m for the examples here, but if you have speakers that are rated for (say) 89dB/W/m, the power needed for a given SPL is halved. It's unusual for hi-fi speakers to exceed 90dB/W/m, because the requirement for 'decent' bass response demands a low resonance driver, and this is not possible while maintaining high efficiency.
Small cabinet, High efficiency, Bass response ... Pick any two! This is often called 'Hoffman's Iron Law'.
Basically, it means that you can have good bass in a small enclosure, but efficiency will be low. Likewise, high efficiency and good bass are possible, but perhaps not in an enclosure that will fit through the doorway. For most people, this means modestly sized enclosures (often bookshelf or small 'free standing' types), and expecting lots of dB/ watt is unrealistic. Also, consider that smaller drivers have less thermal mass, and since up to 99% of the power delivered to the speaker ends up as heat, power handling is always a compromise. You can't expect a 200mm diameter driver to handle a continuous average power of more than 50W or so, although the peak power can be considerably higher.
Figure 1 - Level Captured Over 750 Seconds
The above image is a direct capture from my oscilloscope, showing the level captured from FM radio over a 750 second (12½ minute) period. It includes a variety of music styles, and speech between songs. The peak level is generally ±1.5V, but it occasionally goes above this. In the interests of 'headroom', we can assume that the maximum is ±2V. The average RMS value is 380mV. The long-term peak to average ratio is therefore 11.4dB, which isn't too far off the estimates provided earlier.
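The 'long-term peak to average ratio' quoted here is simply the crest factor, 20·log10(peak/RMS). As a sketch, a helper that computes it from sampled data (a pure sine wave is used as a sanity check, since its crest factor is a known 3.01dB):

```python
import math

def crest_factor_db(samples):
    """Peak-to-RMS ('peak to average') ratio of a sampled waveform, in dB."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

# Sanity check: a pure sine wave has a crest factor of 3.01 dB.
sine = [math.sin(2 * math.pi * n / 1000) for n in range(1000)]
print(round(crest_factor_db(sine), 2))   # ~3.01
```

Fed with samples captured from real programme material, the same calculation yields figures in the 10-12dB region discussed above.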
+ +FM radio is not the ideal medium, but it's the most accessible in my workshop. Like all broadcasts, limiters are employed to ensure that the transmitted signal is kept reasonably constant. If done properly, these have a fast attack to prevent over-modulation of the transmitter, and a slow decay to preserve (at least some) dynamic range.
+ + +While the primary focus of this article is home hi-fi, similar principles apply to live sound systems and instrument amplifiers (e.g. guitar amps and the like). The differences are considerable, usually because large spaces have to be filled with enough sound to keep the punters happy. For instrument amps, the 'traditional' 100W guitar amplifier is generally huge overkill, because most guitar speakers are far more efficient than any home system. 100dB/W/m is not uncommon, so with a fairly typical amount of 'overdrive' (i.e. distortion) a 100W amp can easily deliver over 125W on a fairly continuous basis. Even with a single driver, that will produce up to 120 dB SPL at 1 metre. That's seriously loud, with a maximum exposure time of about 10 seconds in any 24 hour period!
Most modern sound reinforcement systems use line arrays (which I dislike intensely, but that's another story). Many are not particularly efficient, but that's more than compensated by using multiple amplifiers, each capable of up to 2kW, and sometimes more. Interestingly, one brochure I looked at claimed that the mid-high section was capable of 114dB/W/m - almost 2dB greater than the theoretical maximum of 112.1dB/W/m (100% efficient). While this might seem impossible (or at least highly improbable), it comes down to directionality. If all of the acoustic radiation is concentrated in one direction (rather than 'free field' - omnidirectional, radiating in all directions at once), then the figures can indeed appear to be greater than those indicated by the formula shown in Section 2. By their very nature, line arrays are directional, and the longer the line (physically), the more directional it becomes. All directional speaker systems will show (sometimes unexpectedly) higher sensitivity than a normal home hi-fi speaker, but that doesn't always mean that they are actually as efficient as claimed.
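The 112.1dB/W/m ceiling follows from 1W of acoustic power radiated uniformly into half space (2π steradians), referenced to the standard 1pW/m² threshold intensity. Concentrating the same power into a smaller solid angle raises the on-axis figure, which is how a directional array can appear to 'beat' the limit. A sketch under those assumptions:

```python
import math

I_REF = 1e-12  # W/m² - reference intensity for 0 dB SPL

def sensitivity_db(efficiency, solid_angle_sr=2 * math.pi):
    """SPL at 1 m for 1 W input, radiating uniformly into the given
    solid angle (2*pi steradians = half space). efficiency is 0..1."""
    intensity = efficiency * 1.0 / solid_angle_sr  # W/m² at 1 m
    return 10 * math.log10(intensity / I_REF)

print(round(sensitivity_db(1.0), 1))            # ~112.0 dB - the theoretical ceiling
print(round(sensitivity_db(1.0, math.pi), 1))   # narrower coverage raises the on-axis figure
```

The second call shows that halving the coverage angle adds 3dB on axis without the driver being one whit more efficient.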
+ +Large venues need a lot of amplification, and unfortunately, few venues are designed for optimum sound quality. The 'modern' approach seems to be that the sound contractor is tasked with providing sound at a reasonable level to every seat in the house. No-one seems to care much if the sound is crap, provided everyone gets more-or-less the same crap and at more-or-less the same SPL. It helps if the sound is intelligible, but that may (or may not) be a requirement. No-one expects high fidelity, and that is certainly not what they get (despite a multiplicity of 'specifications' that say otherwise). Modern concerts (especially of the rock/ pop genre) are about 'bums on seats' - the more people you can assault with high-level sound, the greater the profits. Call me cynical (which is fair and reasonable), but I'd much rather get good sound, where I can actually hear the nuances of the performance, rather than making my tinnitus worse than it is already.
+ +It is possible to get good sound, but not when it has to be served up to 60,000 people in a stadium designed for watching football matches. I used to run PA systems for a living, mixing the band, and ensuring that the punters got the best sound I could provide. Despite all the technological advances, it seems that many systems today can't manage to do as well as we achieved 30-odd years ago. This is another topic altogether of course, and no more electrons will be used up to even try to cover it here. There is more info in the article Public Address Systems for Music Applications.
+ +One example does warrant a paragraph though. Guitar speakers are normally very efficient, with around 100dB/W/m being typical. That means that when driven at full power (usually with some clipping), the SPL can easily reach a long term average of well over 100dB SPL. Given that guitarists are often very close to the speakers (within 2 metres or so), it should come as no surprise that a great many of the 'old rockers' are as deaf as posts. Audience members who get as close to the stage (or the PA system) as possible are not much better off, and many of them will be similarly afflicted in later years, if not already.
+ + +As noted earlier, this is a topic well known to professional audio people, but it gets nary a mention (not even in passing) for hi-fi. It's a very real phenomenon, and is caused by the loudspeaker driver's voicecoil getting hot under sustained power. Voicecoils are usually made from copper or aluminium, and like all metals they have what's called a 'thermal coefficient of resistance'. This means that if the voicecoil gets hot, its resistance rises, and the speaker's overall impedance increases.
Without getting into too much detail here, if the temperature of an 8Ω voicecoil (typically having ~6Ω DC resistance) rises by 100°C, its resistance will increase by around 2.4Ω, so the '8 ohm' speaker now has an impedance of about 11.2Ω. This does two things - it reduces the power delivered to the speaker for a given voltage, and therefore makes the speaker less efficient. It also shifts the crossover frequency and response, because the crossover is no longer loaded with its design impedance.
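Copper's temperature coefficient of resistance is about 0.393%/°C, which is where the ~2.4Ω figure comes from. A minimal sketch (a copper voicecoil is assumed; aluminium has a similar coefficient):

```python
ALPHA_CU = 0.00393  # per °C - temperature coefficient of resistance for copper

def hot_resistance(r_cold, delta_t, alpha=ALPHA_CU):
    """Resistance after a temperature rise of delta_t °C above the
    temperature at which r_cold was measured."""
    return r_cold * (1 + alpha * delta_t)

r = hot_resistance(6.0, 100)   # ~8.36 ohms - an increase of roughly 2.4 ohms
print(round(r, 2))
```

The speaker's overall impedance rises in roughly the same proportion, taking the nominal 8Ω figure to a little over 11Ω.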
Because the vast majority of hi-fi speakers use a passive crossover, that means that the tonal balance is affected, and invariably not in a good way. The more power you put into a speaker, the worse the problem becomes, and if you go beyond the limits of the speaker it will be damaged. With professional loudspeakers, many manufacturers go to extreme lengths to minimise power compression, but a figure of 6dB is considered about average. When driven at full rated power (actually voltage), the output level is 6dB below the 'nominal' value - a 98dB/W/m driver has been reduced to 92dB/W/m, equivalent to reducing the amp power by a factor of four. Note that the doubled impedance alone only halves the delivered power (a 1kW amp delivers about 500W into a doubled impedance); the remainder of the compression comes from other thermal effects within the driver.
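The effect of the impedance rise on delivered power is easy to show for a fixed amplifier output voltage. The sketch below assumes a '1kW into 8Ω' drive level (about 89V RMS) and a purely resistive load:

```python
def delivered_power(v_rms, impedance):
    """Electrical power into a (resistive) load at a fixed amplifier voltage."""
    return v_rms ** 2 / impedance

v = (1000 * 8) ** 0.5            # ~89.4 V RMS corresponds to 1 kW into 8 ohms
print(round(delivered_power(v, 8)))    # 1000 W into a cold driver
print(round(delivered_power(v, 16)))   # 500 W once the impedance has doubled
```

That 3dB electrical loss is only part of the total power compression figure; the rest is acoustic.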
+ +This is a very good reason not to push your speakers too hard, so if you need more SPL, then you'll be better off using more efficient speakers than a bigger amplifier. Power compression affects all drivers at all power levels, but provided you stay with the maker's recommendations it's not likely to be noticed by most listeners. Naturally, the less power you use, the smaller the effects (to the point where they are virtually inaudible). Speaker efficiency is always quoted when you purchase individual drivers, but often not for complete systems. It is an important parameter, but the ramifications aren't something that many people understand fully (if at all).
+ +I don't recommend that you ask any hi-fi salesperson about power compression, because 99% of them will have no idea what you're talking about.
+ + +This article was prompted by tests I did on a small bench amplifier I'd just built, which can deliver about 25W into 8 ohms. When I cranked it up (with the oscilloscope monitoring the output), just below the onset of clipping I thought that it was surprisingly loud, certainly louder than I expected, and all this through a fairly average 2-way vented box with a 125mm (5" in the old measurements) woofer. Admittedly, the programme material was from FM radio, so the comparison (to a hi-fi system) isn't exactly fair, but so much modern material already has similar amounts of compression that it's not too far off the mark either.
+ +Since then, I've used some fairly dynamic material from a demo CD a friend put together some years ago, and for the majority of the time I thought that at just below clipping, it was too loud to be comfortable for any length of time. The system is also only mono (stereo in my workshop is impractical for a variety of reasons), so it didn't have the benefit of another channel with the same power helping things along. While listening, I also monitored the peak (via the oscilloscope) and RMS voltages, and the ratio of 4:1 was passably consistent (statistically speaking). Some material (such as a drum solo) exceeded that ratio by a good margin, but the peak level is still set by the CD player, so keeping the amp below clipping wasn't difficult. This is amply demonstrated in Figure 1.
+ +I found that around 80dB SPL was about as loud as I was comfortable with. This was achieved quite easily at my workbench, which is around 2.5 metres from the speaker. The difference between speaker level at 1 metre and 2.5 metres was only 2dB, largely due to the construction of that part of the workshop. My main workshop speaker system is 3-way active, horn loaded mids and highs, and a dual 300mm vented box for the bottom end. That easily outclasses the little speaker and amp I was testing, but I still tend to listen at no more than 70dB SPL (and usually much less).
To assist people who really want to know the peak voltage and current they use during listening sessions, a project has been published (see Project 191) that describes a peak detector. It can capture and hold the peak levels of both voltage and current, and you can read the maximum voltage and current with a DMM (digital multimeter) after the event. If either reaches your amplifier's maximum capabilities, you know there is a problem. Another project that's a bit more advanced calculates the actual power in real time (if used with an oscilloscope). This is described in Project 192. For most purposes, Project 191 is the better choice, as it tells you the peak voltage and current, the two parameters that let you decide easily whether your amp is big enough or not. Usage is described in the article.
References
1   Amplifier Power: How Much is Enough? - Stereophile
2   Noise Dose - NoiseHelp
3   Hearing Loss, Tinnitus - EarScience
4   Dynamics (Music) - Wikipedia
5   Klipschorn History
6   Loudness War - Wikipedia
7   The Loudness Wars - Why Music Sounds Worse - NPR
Elliott Sound Products - Analogue vs Digital
The title may not make sense to you at first, because it's obvious that digital exists. You are reading this on a (digital) computer, and the contents of the page were sent via a worldwide digital data network. You can buy digital logic ICs, microprocessors and the like, and they are obviously digital ... or are they? For some time, it's been common to treat logic circuits as being digital, and no knowledge of analogue electronics as we know it was needed. Logic diagrams, truth tables and other tools allowed people to design a digital circuit with no knowledge of 'electronics' at all. Indeed, in some circles this was touted as being a distinct advantage - the subtle nuances of analogue could be ignored because the logic took care of everything. Once you understood Boolean algebra, anything was possible.
+ +The closest that many people come (to this day) to the idea of analogue is when they have to connect a power supply to their latest micro-controller based creation. When it comes to the basics like making the micro interact with the real world, if the examples in the user guide don't cover it, then the user is stopped in his/ her tracks. Even Ohm's law is an unknown to many of those who have only ever known the digital aspect of electronics. Analogue may be eschewed as 'old-fashioned' electronics, and no longer valid with a 'modern' design.
+ +It's common to hear of Class-D amplifiers being 'digital', something that is patently untrue. All of the common digital systems are based on analogue principles. To design the actual circuit for a 'digital' logic gate requires analogue design skills, although much of it is now done by computers that have software designed to optimise the physical geometry of the IC's individual parts.
The simple fact is that everything is analogue, and 'digital' is simply a construct that is used to differentiate the real world from the somewhat illusory concept that we now call 'digital electronics'. Whether we like it or not, all signals within a microprocessor or other digital device are analogue, and subject to voltage, current, frequency and phase. Inductance, resistance and capacitance affect the signal, and the speed of the digital system or subsystem is determined by how quickly transistors can turn on and off, and the physical distance that a signal may have to travel between one subsystem and another. All of this should be starting to sound very much like analogue by now.
It must be noted that the world and all its life forms are analogue. Nothing in nature is 'black' or 'white', but all can be represented by continuously variable hues and colours (or even shades of grey), different sound pressures in various media or varying temperatures. In analogue electronics these are simply converted to voltage levels. Digital systems have to represent an individual datum point as 'true' or 'false', 'on' or 'off', or zero and one. This isn't how nature works, but it's possible to represent analogue conditions with digital 'words' - a number of on/ off 'bits'. The more bits we use, the closer we come to being able to represent a continuous analogue signal as a usable digital representation of the original data. However, it doesn't matter how many bits are used, some analogue values will be omitted because analogue is (supposedly) infinite, and digital is not.
+ +In reality, an analogue signal is not (and can never be) infinite, and cannot even have an infinite number of useful voltages between two points. The reason is physics, and specifically noise. However, the dynamic range from softest to loudest, darkest to lightest (etc.) is well within the abilities of an analogue system (such as a human) to work with. A 24 bit digital system is generally enough to represent all analogue values encountered in nature, because there are more than enough sampling points to ensure that the digital resolution includes the analogue noise floor. (Note that a greater bit depth is often needed when DSP (digital signal processing) is performed, because the processing can otherwise cause signal degradation.)
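The usual rule of thumb is about 6dB of dynamic range per bit (6.02N + 1.76dB for an ideal linear PCM converter), which is why 24 bits is generally enough to span any analogue noise floor found in nature:

```python
def dynamic_range_db(bits):
    """Theoretical dynamic range of an ideal linear PCM converter:
    6.02 dB per bit, plus 1.76 dB."""
    return 6.02 * bits + 1.76

print(round(dynamic_range_db(16), 1))   # 98.1 dB - CD quality
print(round(dynamic_range_db(24), 1))   # 146.2 dB - comfortably spans any analogue noise floor
```

A practical 24-bit converter never achieves the full theoretical figure, but even the real-world result comfortably exceeds the analogue dynamic range of any source.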
+ +Every digital system that does useful work will involve sensors, interfaces and supervisory circuits. Sensors translate analogue mechanical functions into an electrical signal, interfaces convert analogue to digital and vice versa (amongst other possibilities that aren't covered here), and supervisory circuits ensure that everything is within range. A simple supervisor circuit may do nothing more than provide a power-on reset (POR) to a digital subsystem, or it may be a complete system within itself, using both analogue and digital processing. The important thing to take from this is that analogue is not dead, and in fact is more relevant than ever before.
+ +The development of very fast logic (e.g. PC processors, memory, graphics cards, etc.) has been possible only because of very close attention to analogue principles. Small devices (transistors, MOSFETs) in these applications are faster than larger 'equivalents' that occupy more silicon. Reduced supply voltages reduce power dissipation (a very analogue concept) and improve speed due to lower parasitic capacitance and/or inductance. If IC designers were to work only in the 'digital' domain, none of what we have today would be possible, because the very principles of electronics would not have been considered. It's instructive to read a book (or even a web page) written by professional IC designers. The vast majority of their work to perfect the final design is analogue, not digital.
+ +It must be said that if you don't even understand the concept of 'R = V / I' then you are not involved in electronics. You might think that programming an Arduino gives you some credibility, but that is only for your ability to program - it has nothing to do with electronics. If this is the case for you, I suggest that you read some of the Beginner's Guides on the ESP website, as they will be very helpful to improve your 'real' electronics skills.
+ +There should be a 'special' place reserved to house the cretins who advertise such gems as a "digital tyre ('tire') inflator" (and no, I'm not joking) and other equally idiotic concepts. As expected, the device is (and can only be) electromechanical because it's a small air compressor, and it might incidentally (or perhaps 'accidentally') have a digital pressure gauge. This does not make it digital, and ultimately the only thing that's digital is the readout - the pressure sensor is pure analogue. Unfortunately, the average person has no idea of the difference between analogue and digital, other than to mistakenly assume that 'digital' must somehow be 'better' due to a constant barrage of advertising for the latest digital products.
+ + +The introduction should be understood by anyone who knows analogue design well, but it may not mean much to those who know only digital systems from a programming perspective. This is part of the 'disconnect' between analogue and digital, and unless it is bridged, some people will continue to imagine that impossible things can be done because a computer is being used. In some cases, you may find people who think that 'digital' is the be-all and end-all of electronics, and that analogue is dead.
+ +A great deal of the design of analogue circuitry is based on the time domain. This is where we have concepts (mentioned above) of voltage, current, frequency and phase. These would not exist in an 'ideal' digital system, because it is interested only in logic states. However, it quickly becomes apparent that time, voltage, current and frequency must play a part. A particular logic state can't exist for an infinite or infinitesimal period, nor can it occur only when it feels like it.
+ +All logic systems have defined time periods where data must be present at (or above) a minimum voltage level, and for some minimum time before it can be registered by the receiving device. The voltage (logic level) and current needed to trigger a logic circuit are both analogue parameters, and that circuit needs to be able to source or sink current at its output. This is called 'logic loading' or 'fan-out' - how many external gates can be driven from a single logic output. Datasheets often tabulate this in terms of voltage vs. current - decidedly non-digital specifications.
+ +The digital world is usually dependent on a 'clock' - an event that occurs at regular intervals and signals the processor to move to the next instruction. This may mean that a calculation (or part thereof) should be performed, a reading taken from an input or data should be sent somewhere else. The timing can (in a few cases) be arbitrary, and simply related to an event in the analogue circuitry, such as pressing a button or having a steady state signal interrupted.
+ +The power supply for a digital circuit needs to have the right voltage, and be able to supply enough current to run the logic circuits. Most microcontroller modules provide this info in the data sheet, but many users won't actually understand this. They will buy a power supply that provides the voltage and current recommended in the datasheet. This might be 5V at 2A for a typical microcontroller project board. If someone were to offer them a power supply that could supply 5V at 20A, it may be refused, on the basis that the extra current may 'fry' their board.
Lest the reader think I'm just making stuff up, I can assure you that I have seen forum posts where this exact scenario played out. Some users will listen to reason, but others will refuse to accept that their board (or a peripheral) will only draw the current it needs, and not the full current the supply can provide. This happens because people don't think they need to understand Ohm's law when they are working with software on a pre-assembled project board. Ohm's law cannot be avoided in any area of electronics, as it explains and quantifies so many common problems. Imagine for a moment that you have no idea how much current will flow through a resistance of 100 ohms when you supply it with 12V - you are unable to understand anything about how the circuit may or may not work!
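The 100Ω example from the text, expressed as the one-line calculation that every electronics user should be able to do:

```python
def current_a(volts, ohms):
    """Ohm's law: I = V / R."""
    return volts / ohms

# The example from the text: 12 V across 100 ohms.
print(current_a(12, 100))   # 0.12 A (120 mA)
```

If that result isn't obvious at a glance, the Beginner's Guides mentioned above are the place to start.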
There will be profound confusion if someone is told that LED lighting (for example) requires a constant current power supply, but that it also must have a voltage greater than 'X' volts. If you don't understand analogue, it's impossible to make sense of these requirements. I've even seen one person post that they planned to power a 100W LED from standard 9V batteries, having worked out that s/he needed 3 in series. It's patently clear that basic analogue knowledge is missing, but the person in question argued with every suggestion made. S/he flatly refused to accept that 9V batteries (around 500mAh capacity) would be unable to provide useful light for more than a few seconds (perhaps up to one minute) and the batteries would be destroyed - they are not designed to supply over 3A - ever. It goes without saying that the intended switching circuit was also woefully inadequate.
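Even the most optimistic back-of-envelope arithmetic shows why the 9V battery scheme is hopeless, and the real result is far worse, because a 9V battery's rated capacity assumes a discharge current of tens of milliamps, not amps. A sketch using nominal figures (100W at a nominal 27V for three batteries in series):

```python
def naive_runtime_minutes(capacity_mah, load_current_a):
    """Runtime from rated capacity alone - wildly optimistic at high
    discharge rates, since capacity collapses under heavy load."""
    return capacity_mah / 1000.0 / load_current_a * 60

led_current = 100 / 27   # ~3.7 A for a 100 W LED across three 9 V batteries
print(round(naive_runtime_minutes(500, led_current), 1))   # ~8 minutes, on paper only
```

Eight minutes is the fantasy figure; in practice the terminal voltage collapses within seconds under a 3.7A load.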
Almost without exception, beginner users of Arduino, Raspberry Pi, Beaglebone, and other microcontroller platforms imagine that they only need to understand the programming of the device. As soon as they try to interface to a motor, proportional air valve, or even a lamp (usually LED) or a relay, the wheels fall off the project. The instruction sheets and user guides only go so far, and the instant something is needed that is not specifically explained in the instructions or help files, the user is stopped in his/ her tracks. This is because the basics of analogue electronics are not only unknown, but are considered irrelevant. Some of the questions I've seen on forum sites show an astonishing lack of understanding of the most basic principles, and (to me) incredibly naive questions are common. The questioner usually provides almost no information - not to hide something, but because they don't know what information they need to provide to get useful assistance.
Anyone who is reasonably aware of analogue basics will at least know to search for information when their project doesn't work as expected, but if your experience is only with digital circuits, you will be unable to understand the basic analogue concepts, and it's all deeply mysterious. All 'electronics' training needs to include analogue principles, because without that the real world of electronics simply makes no sense.
Digital systems use switching, so linearity is not an issue - until you have to work with analogue to digital converters (ADCs) or digital to analogue converters (DACs). In analogue systems, linearity is usually essential (think audio for example), but digital is either 'on' or 'off', one or zero. The ADC translates a voltage at a particular point in time into a number that can be manipulated by software. The number is (of course) represented in binary (two states), because the digital system as we use it can only represent two states, although some logic circuits include a third 'open circuit' state to allow multiple devices to access a single data bus.
While linearity is not normally a factor other than in input/ output systems, the accuracy of a computer is limited by the number of bits it can process. For example, CD music is encoded into a 16 bit format, sampled 44,100 times each second (44.1kHz sampling rate). This provides 65,536 discrete levels - actually -32768 to +32767, because each audio sample is a signed 16 bit two's complement integer [ 1 ]. The levels are theoretically between 0V and 5V, but actually somewhat less because zero and 5V are at the limits of the ADCs used. These limits are due to the analogue operation of the digitisation process, and linearity must be very good or the digital samples will be inaccurate, causing distortion.
If we assume 3V peak to peak for the audio input (1V to 4V), each digital sample represents an analogue 'step' of 45.776µV, both for input and output. While it may seem that the processes involved are digital, they're not. Almost every aspect of an ADC is analogue, and we think of it as being digital because it spits out digital data when the conversion is complete. When you examine the timing diagram for any digital IC, there are limits imposed for timing. Data (analogue voltage signals) must be present for at least the minimum time specified before the system is clocked (setup time) and the data are accepted as being valid. The voltages, rise and fall times, and setup times are all analogue parameters, although many people involved in high level digital design don't see it that way.
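The quoted figures are easy to verify. A quick Python check (variable names are mine), using the 3V peak-to-peak span from the text:

```python
# 16-bit quantisation arithmetic for the CD example above.

BITS = 16
SPAN_V = 3.0                       # usable analogue range, volts (1V to 4V)

levels = 2 ** BITS                 # number of discrete levels
step_uv = SPAN_V / levels * 1e6    # size of one step, in microvolts

print(levels)                      # 65536
print(-(levels // 2), levels // 2 - 1)   # -32768 32767 (signed two's complement)
print(round(step_uv, 3))           # 45.776 µV per step
```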
Many of these timing parameters stay hidden because the common communication protocols are pre-programmed in microcontrollers and/ or processors (or in subroutine libraries) to ensure that the timings are correct, so the programmer doesn't need to worry about the finer details. This doesn't mean they aren't there, nor does it mean that everything will always work as intended. A PCB layout error which affects the signal can make communication unreliable, or stop it working at all. The real reasons can only be found by looking at the analogue waveforms, unless you consider 'blind' trial and error to be a valid faultfinding technique. You may eventually get a result, but you won't really know why, and it will be a costly exercise.
Figure 1 - Analogue or Digital?
The above circuit is quite obviously analogue. There are diodes, transistors and resistors that make up the circuit, and the voltage and current analysis to determine operating conditions is all performed using analogue techniques. The circuit's end purpose? It's a two input NAND gate - digital logic! When 'In1' and 'In2' are both above the switching threshold (> 2V), the output will be low ('zero'), and if either input is low (less than ~1.4V), the output remains high ('one') [ 2 ]. The circuit cannot be analysed using digital techniques - these will allow you to verify that it does what it claims, but not how it does it!
The threshold voltages are decidedly analogue, because while the input transistors may start to conduct with a voltage below 1.2V or so, the operation will be unreliable and subject to noise. This is why the datasheet tells you that a 'high' should be greater than 2V, and a 'low' should be below 0.7V. Ignore these at your peril, but without proper analysis you may imagine that a 'high' is 5V and a 'low' is zero volts. That doesn't happen in any 'real world' circuit (close perhaps, but not perfect!). It should be apparent that the output of the NAND gate can never reach 5V, because there's a resistor, transistor and a diode in series with the positive output circuit. Can this be analysed using only digital techniques? I expect the answer is obvious.
How does it work? The two inputs are actually a single transistor with two emitters in the IC, but the two transistors behave identically. If either 'In1' or 'In2' is low, Q2 has no base current and remains off, because the base current (via R1) is 'stolen' by either Q1A or Q1B. The totem pole output stage is therefore pulled high (to about 4.7V) by the current through R2. When 'In1' and 'In2' are both high, the base current for Q2 is no longer being stolen, and it turns on with the current provided by R1. This turns on Q3 and turns off the upper transistor of the totem pole, so the output is low. You will have to look at the circuit and analyse these actions yourself to understand it. The main point to take away from all of this is that a 'digital' gate is an analogue circuit!
The above is only a single example of how the lines between analogue and digital are blurred. If you are using the IC, you are interested in its digital 'truth table', but if you were asked to design the internals of the IC, you can only do that using analogue design techniques. Note that in the actual IC, Q1A and Q1B are a single transistor having two emitters - something that's difficult using discrete parts. The truth table for a 2-input NAND gate is shown next - this is what you use when designing a digital circuit.
In 1         | In 2         | Out
0 (< 0.7V)   | 0 (< 0.7V)   | 1 (> 3V - load dependent)
1 (> 2V)     | 0 (< 0.7V)   | 1 (> 3V)
0 (< 0.7V)   | 1 (> 2V)     | 1 (> 3V)
1 (> 2V)     | 1 (> 2V)     | 0 (< 0.1V)
The truth table for a simple NAND gate is hardly inspiring, but it describes the output state that exists with the inputs at the possible logical states. It does not show or explain the potential unwanted states that may occur if the input switching speed is too low, and this may cause glitches (very narrow transitions that occur due to finite switching times, power supply instability or noise). The voltage levels shown in the truth table are typical levels for TTL (transistor/ transistor logic).
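At the logic level, the whole circuit collapses to one Boolean expression. A Python sketch (illustrative only - the voltage thresholds in the table belong to the analogue domain, and don't appear here at all):

```python
# The digital abstraction of the gate: a 2-input NAND is simply
# "not (a and b)". Everything analogue has been abstracted away.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

for in1 in (0, 1):
    for in2 in (0, 1):
        print(in1, in2, nand(in1, in2))
# Matches the truth table: only inputs 1,1 give an output of 0.
```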
It should be pretty clear that while the circuit is 'digital', almost every aspect of its design relies on analogue techniques. If you were to build the circuit shown, it would work in the same way as its IC counterpart, but naturally takes up a great deal more PCB real estate than the entire IC, which contains four independent 2-input NAND gates. The IC version will also be faster, because the integrated components are much smaller and there is less stray capacitance.
It is educational to examine the circuit carefully, either as a simulation or in physical form. The exact circuit shown has been simulated, and it performs as expected in all respects. All other digital logic gates can be analysed in the same way, but the number of devices needed quickly gets out of hand. In the early days of digital logic, simpler circuits were used and they were discrete. ICs as we know them now were not common before 1960, when the first devices started to appear at affordable prices. It's also important to understand that when the IC is designed, the engineer(s) have to consider the analogue properties.
This is basically a nuisance, but if the analogue behaviour is ignored, the device will almost certainly fail to live up to expectations. No designer can avoid the embedded analogue processes in any circuit, and high-speed digital is usually more demanding of analogue understanding than many 'true analogue' circuits! This is largely due to the parasitic inductance and capacitance of the IC, PCB or assembly, and the engineering is often in the RF (radio frequency) domain.
The earliest digital computers used valves (vacuum tubes), and were slow, power hungry, and very limited compared to anything available today. The oldest surviving valve computer is the Australian CSIRAC (first run in 1949), which has around 2,000 valves and was the first digital computer programmed to play music. It used 30kW in operation! It's difficult to imagine a 40 square metre (floor size) computer that had far less capacity than a mobile (cell) phone today. Check the Museums Victoria website too. It should be readily apparent that a valve circuit is inherently analogue, even when its final purpose is to be a digital computer.
Before digital computers, analogue computers were common, many being electromechanical or even purely mechanical. Electronic analogue computers were the stimulus to develop the operational amplifier. The opamp (or op-amp) is literally an amplifier capable of mathematical operations such as addition and subtraction (and even multiplication/ division using logarithmic amplifiers). Needless to say, the first opamps used valves.
All digital signals are limited to a finite number of discrete levels, defined by the number of bits used to represent a numerical value. An 8-bit system is limited to 256 discrete values (0-255), and intermediate voltages are not available. Conversion routines can perform interpolation (which can include up-sampling), increasing the effective number of bits or sampling rate, and calculating the intermediate values by averaging the previous and next binary number. These 'new' values are not necessarily accurate, because the actual (analogue) signal that was used to obtain the original 8 bits is not available, so the value is a guess. It's generally a reasonable guess if the sampling rate is high enough compared to the original signal frequency, but it's still only a calculated probability, and is not the actual value that existed.
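The 'averaging' interpolation described above can be sketched in a few lines of Python. This is a deliberately naive illustration (real up-sampling uses proper reconstruction filtering), but it shows why the new values are guesses:

```python
# Insert one interpolated value between each pair of samples, using the
# average of the previous and next sample - a calculated guess, not data.

def upsample_2x(samples):
    """Return the sample list with one interpolated value between each pair."""
    out = []
    for prev, nxt in zip(samples, samples[1:]):
        out.append(prev)
        out.append((prev + nxt) / 2)   # guessed value; the real signal is gone
    out.append(samples[-1])
    return out

print(upsample_2x([0, 100, 200]))   # [0, 50.0, 100, 150.0, 200]
```

If the original analogue signal actually peaked between two samples, the interpolated value will be wrong, and there is no way to know.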
An analogue signal varies from its minimum to maximum value as a continuous and effectively infinite number of levels. There are no discontinuities, steps or other artificial limits placed on the signal, other than unavoidable thermal and 1/f (flicker) noise, and the fact that the peak values are limited by the device itself or the power supply voltages. There's no sampling, and there's no reason that a 1µs pulse can't coexist with a 1kHz sinewave. A sampling system operating at 44.1kHz might pick up the 1µs pulse occasionally (if it coincides with a sample interval), but an analogue signal chain designed for the full frequency range involved can reproduce the composite waveform quite happily, regardless of the repetition rate of the 1µs pulse.
There is also a perceived accuracy with any digital readout [ 3 ]. Because we see the number displayed, we assume that it must be more accurate than the readout seen on an analogue dial. Multimeters are a case in point. Before the digital meter, we measured voltages and currents on a moving coil meter, with a pointer that moved across the dial until it showed the value. Mostly, we just made a mental note of the value shown, rounding it up or down as needed. For example, if we saw the meter showing 5.1V we would usually think "ok, just a tad over 5V".
A digital meter may show the same voltage as 5.16V, and we actually have to read the numbers. Is the digital meter more accurate? We tend to think it is, but in reality it may be reading high, and the analogue meter may have been right all along. There is a general perception that if we see a number represented by normal digits, it is 'precise', whereas a number represented by a pointer or a pair of hands (a clock) is 'less precise'. Part of the reason is that we get a 'sense' of an analogue display without actually decoding the value shown. We may glance at a clock (with hands) and know if we are running late, but if someone else were to ask us what time it is, we'll usually have to look again to decode the display into spoken numbers.
Figure 2 - Analogue And Digital Meter Readouts
In the above, if the meter range is set for 10V full scale, the analogue reading is 7.3 volts (near enough). Provided the meter is calibrated, that's usually as accurate as you ever need. The extra precision (real or imagined) of the digital display showing 7.3012 is of dubious value in real life. It's a different matter if the reading is varying - the average reading on a moving coil meter is easily read despite the pointer moving a little. A digital display will show changing numbers, and it's close to impossible to guess the average reading. The pointer above could be moving by ±0.5V and you'll still be able to get a reading within 100mV fairly easily. Mostly, you'd look at the analogue meter and see that the voltage shown was within the range you'd accept as reasonable. This is more difficult when you have to decode numbers.
The same is true of the speedo (speedometer) in a car - we can glance at it and know we are just under (or over) the speed limit, but without consciously reading our exact speed. A digital display requires that we read the numbers. Aircraft (and many car) instruments show both an analogue pointer and a digital display, so the instant reference of the pointer is available. While we will imagine that the digital readout is more accurate, the simple reality is that both can be equally accurate or inaccurate, depending (for example) on something as basic as tyre inflation. An under-inflated tyre has a slightly smaller rolling radius than one that's correctly inflated, so it will not travel as far for one rotation and the speedo will read high.
Digital readouts require more cognitive resources (in our brains) than simple pointer displays, and the perception of accuracy can be very misleading. A few car makers have tried purely digital speedos and most customers hated them. The moving pointer is still by far the preferred option because it can be read instantly, with no requirement to process the numbers to decide if we are speeding or not. There's a surprisingly large amount of info on this topic on the Net, and the analogue 'readout' is almost universally preferred.
Most of the parameters that we read as numbers (e.g. temperature, voltage, speed, etc.) are analogue. They do not occur in discrete intervals, but vary continuously over time. In order to provide a digital display, the continuously varying input must be digitised into a range of 'steps' at the selected sampling rate. Then the numbers for each sample can be manipulated if necessary, and finally converted into a format suitable for the display being used (LED, LCD, plasma, etc.). If the number of steps and the sample rate are both high enough to represent the original signal accurately, we can then read the numbers off the display (or listen to the result) and be suitably impressed - or not.
Within any digital system, it's possible to bypass the laws of physics. Consider circuit simulation software for example. A mathematically perfect sinewave can be created that has zero distortion, meaning it is perfectly linear. You can look at signals that are measured in picovolts, and the simulator will let you calculate the RMS value to many decimal places. There's no noise (unless you tell the simulator that you want to perform a noise analysis). While this is all well and good, if you don't understand that a real circuit with real resistance will have (real) noise, then the results of the simulation are likely to be useless. The simulator may lead you to believe that you can get a signal to noise ratio of 200dB, but nature (the laws of physics) will ensure failure.
To put the above into perspective, the noise generated by a single 200 ohm resistor at room temperature is 257nV measured from 20Hz to 20kHz. This is a passive part, and it generates a noise voltage and current dependent on the value (in ohms), the temperature and bandwidth. If the bandwidth is increased to 100kHz, the noise increases to 575nV. Digital systems are not usually subject to noise constraints until they interface to the analogue domain via an ADC or DAC, but other forms of noise can affect a digital data stream. Digital radio, TV and internet connections (via ADSL, cable or satellite) use analogue front end circuitry, so noise can cause problems.
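The figures above come from the Johnson-Nyquist thermal noise formula, v = √(4·k·T·R·B). A quick Python check (assuming ~300K for 'room temperature'; the helper name is mine):

```python
# Thermal (Johnson-Nyquist) noise of a resistor: v_rms = sqrt(4*k*T*R*B).
import math

K_BOLTZMANN = 1.380649e-23   # Boltzmann constant, J/K

def thermal_noise_nv(r_ohms, bandwidth_hz, temp_k=300.0):
    """RMS thermal noise voltage, in nanovolts."""
    return math.sqrt(4 * K_BOLTZMANN * temp_k * r_ohms * bandwidth_hz) * 1e9

print(round(thermal_noise_nv(200, 20e3 - 20)))   # ~257 nV for 20Hz-20kHz
print(round(thermal_noise_nv(200, 100e3)))       # ~576 nV for 100kHz
```

The results agree with the 257nV and 575nV quoted (the last digit depends on the exact temperature and bandwidth assumed).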
In all cases, if you have an outdoor antenna with a booster amplifier, the booster is 100% analogue, and so is most of the 'digital' tuner in a TV or radio. The signal remains analogue through the tuner, IF (intermediate frequency) stages and the detector. It's only after detection and demodulation that the digital data stream becomes available. If you want to see the details, a good example is the SN761668 digital tuner IC from Texas Instruments. There are others that you can look at as well, and it should be apparent that the vast majority of all signal processing is analogue - despite the title 'digital tuner'.
The same applies to digital phones, whether mobile (cell) phones or cordless home phones. Transmitters and receivers are analogue, and the digitised speech is encoded and decoded before transmission and after reception respectively.
The process of digitisation creates noise, due to quantisation (the act of digitally quantifying an analogue signal). Unsurprisingly, this is called quantisation noise, and the distortion artifacts created in the process are usually minimised by adding 'dither' during digitisation or when the digital signal is returned to analogue format. Dither is simply a fancy name for random noise! Dither is used with all digital audio and most digital imaging, as well as many other applications where cyclic data will create harmonic interference patterns (otherwise known as distortion). A small amount of random noise is usually preferable to quantisation distortion.
Few digital systems are useful unless they can talk to humans in one way or another, so the 'ills' of the analogue domain cannot be avoided. It's all due to the laws of physics, and despite many attempts, no-one has managed to circumvent them. If you wish to understand more about noise, see Noise In Audio Amplifiers.
It's essential to understand what a simulator (or indeed any computer system) can and cannot do, when working purely in the digital domain. Simulated passive components are usually 'ideal', meaning that they have no parasitic inductance, capacitance or resistance. Semiconductor models try to emulate the actual component, but the degree of accuracy (especially imperfections) may not match real parts. If you place two transistors of the same type into the schematic editor, they will be identical in all respects. You need to intervene to be able to simulate variations that are found in the physical parts. Some simulators allow this to be done easily, others may not. 'Generic' digital simulator models often only let you play with propagation time, and all other functions are 'ideal'. Most simulators won't let you examine power supply glitches (caused by digital switching) unless the track inductance and capacitance(s) are inserted - as analogue parts.
The 'ideal' condition applies to all forms of software. If calculations are made using a pocket scientific calculator or a computer, the results will usually be far more precise than you can ever measure. Calculating a voltage of 5.03476924 volts is all well and good, but if your meter can only display 3 decimal places the extra precision is an unwelcome distraction. If that same meter has a quoted accuracy of 1%, then you should be aware that exactly 5V may be displayed as anything between 4.950V and 5.050V (5V ±50mV). You also need to know that the display is ±1 digit as well, so the reading could now range from 4.949 to 5.051. The calculated voltage is nearly an order of magnitude more precise than we can measure. No allowance has even been made for component tolerance yet, and this could make the calculated value way off before we even consider a measurement!
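The worked example above (±1% accuracy plus ±1 digit) can be expressed as a tiny Python helper (hypothetical, for illustration only):

```python
# The range of readings a meter could legitimately display for a given
# true voltage, allowing for percentage accuracy plus the ±1 digit of
# the last displayed decimal place.

def reading_range(true_v, accuracy_pct=1.0, digit_step=0.001):
    """Return (low, high) bounds for the displayed reading."""
    err = true_v * accuracy_pct / 100
    return (round(true_v - err - digit_step, 3),
            round(true_v + err + digit_step, 3))

print(reading_range(5.0))   # (4.949, 5.051) - the span worked out in the text
```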
If the meter hasn't been calibrated recently, it might be off by several percent and you'd never know unless another meter tells you something different. Then you have to decide which one is right. In reality, both could be within tolerance, but with their error in opposite directions. Bring on a third meter to check the other two, and the fun can really start. Now, measure the same voltage with an old analogue (moving coil) meter. It tells you that the voltage is about 5V, and there's no reason to question an approximation - especially if the variation makes no difference to the circuit's operation.
Someone trained in analogue knows that "about 5V" is perfectly ok, but someone who only knows the basics of digital systems will be nonplussed. Because they imagine that digital logic is precise, the variation is cause for concern. I get emails regularly asking if it's alright that nominal ±15V supply voltages (from the P05 power supply board) measure +14.6V and -15.2V (for example). The answer is "yes, that's fine". This happens because the circuits are analogue, and people ask because they expect precision. Very precise voltages can be created, but most circuits don't need them. To an 'old analogue man', "about 5V" means that the meter's pointer will be within 1 pointer width of the 5V mark on the scale - probably between 4.9 and 5.1 volts. Likewise, "about ±15V" is perfectly alright.
Ultimately, everything we do (or can do) in electronics is limited by the laws of physics. In the early days, the amplifying devices were large vacuum tubes (aka valves), and there were definite limits as to their physical size. Miniature valves were made, but they were very large indeed compared to a surface mount transistor, and positively enormous compared to an integrated circuit containing several thousand (or million) transistors. None of this means that the laws of physics have been altered; what has changed is our understanding of what can be achieved, and working out better/ alternative ways to reach the end result. Tiny switching transistors in computer chips get smaller (and faster) all the time, allowing you to perform meaningless tasks faster than ever before.
An area where the laws of physics really hurt analogue systems is the copying of recordings. Any quantity can be recorded by many different methods, but there are two stand-out examples - music and film. When an analogue recording is copied, it inevitably suffers from some degradation. Each successive copy is degraded a little more. The same thing happens with film, and it also used to be an issue with analogue video recordings. Each generation of copy reduces the quality until it no longer represents the original, and it becomes noisy and loses detail.
Now consider a digital copy. The music or picture is represented by a string of ones and zeros (usually with error correction), which can be copied exactly. Copies of copies of copies will be identical to the original. There is no degradation at all. There are countless different ways the data can be stored, but unlike an engraved piece of metal, ink on paper (or papyrus) or a physical photograph, there is an equally countless range of issues that can cause the data to disappear completely. The physical (analogue) items can disappear too, but consider that we have museums with physical artifacts that are thousands of years old. Will a flash drive achieve the same longevity?
Will anyone look after our digital data with the same diligence? If all of your photos are on your smartphone or a flash drive and it fails, is lost or stolen, they are probably gone forever. The long forgotten stash of old grainy 'black and white' photos found in the back of an old dresser or writing desk can bring untold delight, but I expect that finding a flash drive in 50 years time will not have the same outcome. It's probable that someone might recognise it as an 'ancient storage device', but will they be able to extract the data - assuming the data even survived?
By way of contrast, think of vinyl recordings and floppy discs. I have vinyl that's 50 years old, and not only can I still play it, the music thereon is perfectly recognisable in all respects. The quality may not be as good as other vinyl that's perhaps only 30 years old, but the information is still available to me and countless others. Earlier records may be over 80 years old, and are still enjoyed. How many people reading this can still access the contents of a floppy disc? Not just the 'newer' 3½" ones, but earlier 8" or 5¼" floppies? Very few indeed, yet the original floppy is only 47 years old (at the time of writing). Almost all data recorded on them is lost because few people can access it, and even if it could be accessed, is there a computer that could still extract the information? How much digital data recorded on any current device will be accessible in 50 years time?
Before anyone starts to get complacent, let's look at a pair of circuits. Both are CMOS (complementary metal oxide semiconductor), so they both have N-Channel and P-Channel transistors. Circuit 2 has some resistors that are not present in Circuit 1, and that will (or should) be a clue as to what each circuit might achieve. I quite deliberately didn't show any possible feedback path in either circuit though.
Figure 3 - Two Very Different CMOS Circuits
Look at the circuits carefully, and it should be apparent that one is designed for linear (analogue) applications, and the other is not. What you may not realise is that the non-linear circuit (Circuit 1) actually can be run in linear mode, and the linear circuit (Circuit 2) can be run in non-linear mode. "Oh, bollocks!" said Pooh, as he put away his soldering iron and gave up on electronics for good.
Circuit 1 is a CMOS NAND gate, and Circuit 2 is a (highly simplified) CMOS opamp, with 'In 1' being the non-inverting, and 'In 2' the inverting input. Early CMOS ICs (those without an output buffer, not the 4011B shown in Circuit 1) were often used in linear mode. While performance was somewhat shy of being stellar (to put it mildly), it worked, and was an easy way to get some (more or less) linear amplification into an otherwise all-digital circuit, without having to add an opamp. When buffered outputs became standard (4011B, which includes Q5-Q8), linear operation caused excessive dissipation.
The lines become even more blurred when we look at a high speed data bus. When the tracks on a PCB or wires in a high speed digital cable become a significant fraction of the signal's wavelength, the tracks or wires no longer act as simple conductors, but behave like an RF transmission line. No-one should imagine that transmission lines are digital, because this is a very analogue concept. Transmission lines have a characteristic impedance, and the far end must be properly terminated. If the terminating impedance is incorrect, there are reflections and standing waves within the transmission line, and these can seriously affect the integrity of the signal waveform, as discussed next.
In the Coax article, there's quite a bit of info on coaxial cables and how they behave as a transmission line, but many people will be unaware that tracks on a PCB can behave the same way. The same applies to twisted pair and ribbon cables. If PCB tracks are parallel and don't meander around too much, the transmission line will be fairly well behaved and is easily terminated to ensure reliable data transfer. There is considerable design effort needed to ensure that high speed data transmission is handled properly [ 4 ]. This is not a trivial undertaking, and the misguided soul who runs a shielded cable carrying fast serial data may wonder why the communication link is so flakey. If s/he works out the characteristic impedance and terminates the cable properly, the problem simply goes away [ 5 ]. This is pure analogue design. It might be a digital data stream, but moving it from one place to another requires analogue design experience.
This used to be an area that was only important for radio frequency (RF) engineering, and in telephony where cables are many kilometres in length. The speed of modern digital electronics has meant that even signal paths of a few 10s of millimetres need attention, or digital data may be corrupted. A pulse waveform may only have a repetition rate of (say) 2MHz, but it is rectangular, so it has harmonics that extend to well over 100MHz. Even if the rise and fall times are limited (8.7ns is shown in the following figures), there is significant harmonic energy at 30MHz, a bit less at 50MHz, and so on. These harmonic frequencies can exacerbate problems if a transmission line is not terminated properly.
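Two rules of thumb underpin the numbers in this paragraph: the bandwidth implied by a finite rise time (≈ 0.35/t_rise), and the slow 1/n fall-off of a squarewave's odd harmonics. A Python sketch (helper names are mine):

```python
# Rule of thumb: the -3dB bandwidth of an edge is roughly 0.35 / rise time.
# An ideal squarewave's odd harmonics fall off only as 1/n (Fourier series).
import math

def bandwidth_mhz(rise_time_ns):
    """Approximate bandwidth (MHz) implied by a given 10-90% rise time."""
    return 0.35 / (rise_time_ns * 1e-9) / 1e6

print(round(bandwidth_mhz(8.7)))     # ~40 MHz for the 8.7ns edges quoted

f0 = 2e6                             # 2MHz repetition rate
for n in (1, 3, 5, 15, 25):          # odd harmonics only
    amp = 4 / (math.pi * n)          # amplitude relative to a unit squarewave
    print(f"{n * f0 / 1e6:5.0f} MHz: {amp:.3f}")
```

The 15th harmonic (30MHz) is still at about 8.5% of the fundamental's amplitude, and the 25th (50MHz) at about 5% - consistent with "significant harmonic energy at 30MHz, a bit less at 50MHz".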
It's not only the cables that matter when a signal goes 'off-board', either to another PCB in the same equipment or to the outside world. Connectors become critical as well, and higher speeds place greater constraints on the construction techniques needed for connectors so they don't seriously impact on the overall impedance. There are countless connector types, and while some are suited to high speed communications, others are not. While an 'ordinary' connector might be ok for low speed data, you need to use matched cables and connectors (having the same characteristic impedance) if you need to move a lot of data at high speeds. This is one of the reasons that there are so many different connectors in common use.
Figure 4 - Transmission Line Test Circuit
This test circuit is used for the simulations shown below. Yes, it's a simulation, but this is something that simulators do rather well. Testing a physical circuit will show less effect, because the real piece of cable (or PCB tracks) will be lossy, and this reduces the bad behaviour - at least to a degree. The simulations shown are therefore worst case - reality may not be quite as cruel. Note too that all simulations used a zero ohm source to drive the transmission line. When driven from the characteristic impedance, the effects are greatly reduced. Most dedicated line driver ICs have close to zero ohms output impedance, so that's what was used. This is a situation where everything matters! [ 6 ]
In each of the following traces, the red trace is the signal as it should be at the end of the line (with R2 set for 120 ohms - the actual line impedance). For the tests, R2 was arbitrarily set for 10k, as this is the sort of impedance you might expect from other circuitry. Note that if the source has an impedance of 120 ohms, there is little waveform distortion, but the signal level is halved if the transmission line is terminated at both ends.
+ +The input signal is a 'perfect' 10MHz squarewave, and that is filtered with R1 and C1 (a 100MHz low pass filter) to simulate the performance you might expect from a fairly fast line driver IC. The green trace shows what happens when the line is terminated with 10k - it should be identical to the red trace!
+ +
Figure 5 - Mismatched Transmission Line Behaviour
In the above drawing, you can see the effect of failing to terminate a transmission line properly. The transmission line itself has a delay time of just 2ns (a distance of about 300mm using twisted pair cable or PCB tracks) and a characteristic impedance of 120 ohms. There is not much of a problem if the termination impedance is within ±50%, but beyond that everything falls apart rather badly [ 7 ]. Just in case you think I might be exaggerating the potential problems, see the following oscilloscope capture.
Figure 5A - Oscilloscope Capture Of Mismatched Transmission Line
The above is a direct capture of a 2MHz squarewave, feeding a 1 metre length of un-terminated 50 ohm coaxial cable. The source impedance is 10 ohms, ensuring a fairly severe mismatch, but not as bad as when the source impedance is much lower. This has been included to show that the simulations are not something dreamed up, but are very real and easily replicated. It is quite obvious that this cannot be viewed as a 'digital' waveform, regardless of signal levels. The 'ripples' are not harmonically related to the input frequency, but are due to the delay in the cable itself. If the input frequency is changed, the ripples remain the same (frequency, amplitude and duration). Each cycle of the reflection waveform takes about 65ns, so the frequency is a little over 15MHz. Note that the peak amplitude is considerably higher than the nominal signal level (as shown in Figure 5).
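The severity of a mismatch can be expressed as the voltage reflection coefficient, Γ = (ZL − Z0) / (ZL + Z0). A minimal sketch using the impedances from the simulations (the function name is mine, not from the article):

```python
def reflection_coefficient(z_load, z0=120.0):
    """Voltage reflection coefficient at a termination: (ZL - Z0) / (ZL + Z0)."""
    return (z_load - z0) / (z_load + z0)

print(reflection_coefficient(120.0))  # 0.0   - matched line, no reflection
print(reflection_coefficient(10e3))   # ~0.976 - a 10k load reflects almost everything
print(reflection_coefficient(0.0))    # -1.0  - a zero-ohm source re-reflects, inverted
```

With a 10k load and a near-zero-ohm source, each round trip re-reflects almost the full wave (with inversion at the source), which is why the ringing decays so slowly in the traces above.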
An electrical signal in a vacuum travels at 300 metres/µs, or around 240m/µs in typical cables (0.8 velocity factor). This works out to roughly 1ns for each 240mm or so of cable (PCB traces are slower again, closer to 150mm/ns). A PCB track that wanders around for any appreciable distance delays the signal it's carrying, and if not terminated properly the signal can become unusable. It should be apparent that things can rapidly go from bad to worse if twisted pair or coaxial cables are used over any appreciable distance and with an impedance mismatch (high speed USB for example). Proper termination is essential.
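The delay for a given length follows directly from the velocity factor. A quick sanity check (the 2ns/300mm figure quoted with Figure 5 corresponds to a velocity factor of about 0.5, typical of PCB traces on FR4 - an assumption on my part):

```python
C_VACUUM = 3e8  # propagation speed in a vacuum, m/s (300 m/us)

def delay_ns(length_mm, velocity_factor=0.8):
    """One-way propagation delay (ns) for a conductor of the given length."""
    speed = C_VACUUM * velocity_factor            # m/s
    return (length_mm / 1000.0) / speed * 1e9     # seconds -> ns

print(delay_ns(240))       # ~1 ns for 240 mm of cable at vf = 0.8
print(delay_ns(300, 0.5))  # ~2 ns for 300 mm at vf = 0.5 (the Figure 5 case)
```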
It is clearly wrong to say that any of this is digital. While the signal itself may start out as a string of ones and zeros (e.g. +Ve and GND respectively), the way it behaves in conductors is dictated solely by analogue principles. The term 'digital' applies to the decoded data that can be manipulated by gates, microprocessors, or other logic circuits. The transmission of signals requires (analogue) RF design principles to be adopted.

You may think that the protection diodes in most CMOS logic ICs will help. Sadly, they can easily make things a great deal worse, as shown next. However, this will only happen if the source signal level is from GND to Vcc, where Vcc is the logic circuit supply voltage. At lower signal levels (such as 1-4V in a 5V system for example) the diodes may not conduct and mayhem might be avoided.
Figure 6 - Mismatched Transmission Line With Protection Diodes
If ringing causes the input to exceed an IC's maximum input voltage limits (as shown above), the protection diodes will conduct. The 0-5V signal is now close to unusable - it does not accurately show 'ones and zeros' as they were transmitted. When the diodes conduct, the transmission line is effectively terminated with close to zero ohms. This creates reflections that corrupt the signal so badly that the chance of recovering the original digital data is rendered very poor. If this happened to a video signal, the image would be badly pixellated. It may be possible to recover the original data with (hopefully) minimal bit errors by using a filter to remove the high frequency glitches, but proper termination solves the problem completely.
It is a mistake to imagine that because digital is 'ones and zeros', it is not affected by the analogue systems that transport it from 'A' to 'B'. There is a concept in digital transmission called the BER (bit error rate), and this is an indicator of how many bits are likely to be corrupted in a given time (usually 1 second, or per number of bits). For example, HDMI is expected to have a BER of 10⁻⁹ - one error every billion bits, which is around one error every 8 seconds at normal HDMI transmission speed for 24 bit colour and 1080p.
Unlike TCP/IP (as used for network and internet traffic), HDMI is a one-way protocol, so the receiver can't tell the transmitter that an error has occurred (although error correction is used at the receiving end). The bit-rate is sufficient to ensure that a pixel with incorrect data (an error) will be displayed for no more than 16ms, and will not be visible. Poor quality cable may not meet the impedance requirements (thus causing a mismatch), and may show visible errors due to an excessive BER. Cables can (and do) matter, but they require proper engineering, not expensive snake oil.
Another widely used transmission line system is the SATA (serial AT attachment) bus used for disc drives in personal (and industrial) computers. This is a low voltage (around 200-600mV, nominally ±500mV p-p) balanced transmission line, which is terminated with 50 ohms. Because of the high data rate (1.5Gb/s for SATA I, 3.0Gb/s for SATA II), the driver and receiver circuitry must match the transmission line impedance, and be capable of driving up to 1 metre of cable (2 metres for eSATA). If you want to know more, a Web search will tell you (almost) everything you need to know. The important part (that most websites will not point out) is that the whole process is analogue, and there is nothing digital involved in the cable interfaces. Yes, the data to and from the SATA cable starts and ends up as digital, but transmission is a fully analogue function.
Commercial SATA line driver/ receiver ICs such as the MAX4951 imply that the circuitry is digital, but the multiplicity of 'eye diagrams' and the 50Ω terminating resistors on all inputs and outputs (shown in the datasheet) tell us that the IC really is analogue. 'True' digital signals are shown with timing diagrams (which are themselves analogue if one wants to be pedantic), but not with eye diagrams, and terminating resistors are not used with most logic ICs except in very unusual circumstances (I can't actually think of any at the moment). While there is no doubt at all that you need to be a programmer to enable the operating system to use a SATA driver IC in a computer system, the internal design of the cable drivers and receivers is analogue.
The 'eye diagram' is so called because it resembles eyes (or perhaps spectacles). The diagram is what you will see on an oscilloscope if the triggering is set up in such a way as to provide a 'double image', where positive and negative going pulses are overlaid. An ideal diagram would have nice clean lines to differentiate the transitions, but noise, jitter (amplitude or time) and other factors can produce a diagram showing that the signal may be difficult to decode. The eye diagram below is based on single snapshots of a signal with noise, but the 'real thing' will show an accumulation of samples.
Figure 7 - 'Eye Diagram' For Digital Signal Transmission
The central part of each 'eye' (green bordered area in the first eye) shows a space that is clear of noise or ringing created by poor termination. The larger this open section is compared to the rest of the signal, the better, as that means there is less interference to the signal, and a clean digital output (with a low BER) is more likely. Faster rise and fall times improve things, but to be able to transmit a passable rectangular (pulse) waveform, the entire transmission path needs a bandwidth at least 3 to 5 times the pulse frequency. The blue line shows the signal as it would be with no noise or other disturbance. Note that jitter (unstable pulse widths) is usually the result of noise that makes it difficult to determine the zero-crossing points of the waveform (where the blue lines cross).
Some form of filtering and/ or equalisation may be used prior to the signal being sampled to re-convert the analogue electrical signal back to digital for further processing, display, etc. It is clear that the signals shown above aren't digital. They may well be carrying digitally encoded data, but the signal itself is analogue in all respects. Fibre optic transmission usually has fewer errors than cable, but the transmission and reception mechanisms are still analogue.

To give you an idea of the signal levels that are typical of a cable internet connection, I used the diagnostics of my cable modem to look at the signal levels and SNR (signal to noise ratio). Channel 1 operates at 249MHz, and has a SNR of 43.2dB with a level of 7.3dBmV (2.31mV). Even though the system is supposedly digital, the levels involved are low, and it would be unwise to consider it as a 'digital' signal path. The modulation scheme used for cable internet is QAM (quadrature amplitude modulation) which is ... analogue (but you already guessed that). QAM is referred to as a digital encoding technique, but that just means that digital signal streams are accommodated - it does not mean that the process is digital (other than superficially).
Some may find all of this confronting. It's not every day that you are told that what you think you know is largely an illusion, and that everything is ultimately brought back to basic physics, which is analogue through and through. The reality is that you can cheerfully design microcontroller applications and expect them to work. Provided you are willing to learn about simple analogue design, you'll be able to interface your project to anything you like. The main thing is that you must accept that analogue is not 'dead', and it's not even a little bit ill. It is the foundation of everything in electronics, and deserves the greatest respect.
If you search for "analogue vs digital", some of the explanations will claim that analogue is about 'measuring', while digital is 'counting'. It's implied (and stated) that measuring is less precise than counting, so by extension, digital is more 'accurate' than analogue. While it's an interesting analogy in some ways, it's also seriously flawed. It may not be complete nonsense, but it comes close.

In the majority of real-world cases, the quantity to be processed will be an analogue function. Time, weight (or pressure), voltage, temperature and many other things to be processed are analogue, and can only be represented as a digital 'count' after being converted from the analogue output of a vibrating crystal, pressure sensor, thermal sensor or other purely analogue device. Digital thermometers do not measure temperature digitally, they digitise the output of an analogue thermal sensor. The same applies for many other supposedly digital measurements.

If physical items interrupt a light beam, the (analogue) output from the light sensor can be used to increment a digital counter, and that will be exact - provided there are no reflections that cause a mis-count. So, even the most basic 'true' digital process (counting) often relies on an analogue process that's working properly to get an accurate result. Fortunately, it's usually easy enough to ensure that a simple 'on-off' sensor only reacts to the items it's meant to detect. However, this shows that the 'measuring vs. counting' analogy is flawed, so we should discount that definition because it's simply not true in the majority of cases.

As noted earlier, much of the accuracy of digital products is assumed because we are shown a set of numbers that tell us the magnitude of the quantity being displayed. Whether it's a voltmeter showing 5.021V or a digital scale showing 47.8 grams, we assume it's accurate, because we see a precise figure. Everyone who reads the numbers will get the same result, but everyone reading the position of a pointer on a graduated scale will not get the same result. This might be because they are unaware of parallax error (look it up), or their estimation of an intermediate value (between graduations) is different from ours.

One way of differentiating analogue and digital is to deem analogue to be a system that includes irrational numbers (such as π - Pi), while digital is integers only. This isn't strictly true when you look at the output of a calculation performed digitally, but internally the fractional part of the number is not infinite. It stops when the processing system can handle no more bits. A typical 'digital' thermometer may only be able to show one decimal place, so it can show 25.2° or 25.3°, but nothing in between (this is quantisation error). The number displayed is still based on an analogue temperature sensing element, and it can only be accurate if calibrated properly.

You also need to consider an additional fact. In order for any ADC or DAC to provide an accurate representation of the original signal, it requires a stable reference voltage. If there is an error (such as a 5V reference being 5.1V for example), the digital signal will have a 2% error. With a DAC having the exact same reference voltage, the end result will be accurate, but if not - there is an error. The digital signal is clearly dependent on an analogue reference voltage being exactly the voltage it's meant to be. This fact alone makes nonsense of the idea that digital is 'more accurate' than analogue.
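The reference-error argument is easy to demonstrate numerically. This sketch models an ideal ADC whose reference has drifted while the firmware still assumes exactly 5.000V (the 12-bit resolution and the function names are illustrative, not from the article):

```python
def adc_code(v_in, v_ref, bits=12):
    """Ideal ADC: quantise v_in against the *actual* reference voltage."""
    code = int(v_in / v_ref * (2 ** bits - 1))
    return max(0, min(code, 2 ** bits - 1))

def reported_volts(v_in, v_ref_actual, v_ref_assumed=5.0, bits=12):
    """What the firmware reports, believing the reference is exactly 5 V."""
    return adc_code(v_in, v_ref_actual, bits) * v_ref_assumed / (2 ** bits - 1)

print(reported_volts(2.500, v_ref_actual=5.0))  # ~2.500 V - reference correct
print(reported_volts(2.500, v_ref_actual=5.1))  # ~2.451 V - reads ~2% low
```

The displayed number is as precise as ever, but a 2% reference error makes every reading 2% wrong - precision without accuracy.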
While this article may look like it is 'anti digital', that is neither the intention nor the purpose. Digital systems have provided us with so much that we can't live without any more, and it offers techniques that were impossible before the computer became a common household item. CDs, digital TV and radio, and other 'modern marvels' are not always accepted by some people, so (for example) there are countless people who are convinced that vinyl sounds better than CD. One should ask if that's due to the medium or recording techniques, especially since there are now so many people who seem perfectly content with MP3, which literally throws away a great deal of the audio information.

Few would argue that analogue TV was better than 1080p digital TV, and we now have UHD (ultra high-definition) sets that are capable of higher resolution than ever before. It wasn't long ago that films in cinemas were shown using actual 35mm (or occasionally 70mm) film, but the digital versions have now (mostly) surpassed what was possible before. Our ability to store thousands of photographs, songs and videos on computer discs has all but eliminated the separate physical media we once used. Whether the digital version is 'as good', 'better' or 'worse' largely depends on who's telling the story - some film makers love digital, others don't, and it's the same with music.

The microphone (and loudspeaker) are perfect examples of analogue transducers. There is no such thing as a 'true' digital microphone, and although it's theoretically possible, the diaphragm itself will still produce analogue changes that have to be converted to digital - this will involve analogue circuitry! Likewise, there's no such thing as a digital loudspeaker, although that too is theoretically possible (although most must still be electromechanical - analogue). Ultimately, the performance will still be dictated by physics, which is (quite obviously) not in the digital domain.

The important thing to understand is that all digital systems rely extensively on analogue techniques to achieve the results we take for granted. The act of reading the digital data from a CD or Blu-Ray disc is analogue, as is the connection between the PC motherboard and any disc drives. The process of converting the extracted data back into 'true' digital data streams relies on a thorough understanding of the analogue circuitry. Those who work more-or-less exclusively in the digital domain (e.g. programmers) often have little or no understanding of the interfaces between their sensors, processors and outputs. This can lead to sub-optimal designs unless an analogue designer has the opportunity to verify that the system integrity is not compromised.

Analogue engineering is indispensable, and it does no harm at all if a programmer learns the basic skills needed to create these interfaces (indeed, it's essential IMO). An interface can be as complex as an expensive oversampling DAC chip or as simple as a transistor turning a relay on and off. If the designer knows only digital techniques, the project probably won't end well. Any attempt at interfacing to a motor or other complex load is doomed, because there is no understanding of the physics principles involved. "But there's a chip for that" some may cry, but without understanding analogue principles, it's a 'black box', and if something doesn't work the programmer is stopped in his/ her tracks. Countless forum posts prove this to be true.

This article came about (at least in part) from seeing some of the most basic analogue principles totally misunderstood in many forum posts. At times I had to refrain from exclaiming out loud (to my monitor, which wouldn't hear me) that I couldn't believe the lack of knowledge - even of Ohm's law. Questions that are answered in the many articles for beginners on my site and elsewhere were never consulted. The first action when stuck is often to post a question (that in many cases doesn't even make any sense) on a forum.
1   Compact Disc Digital Audio - Wikipedia
2   Chapter 6, Gate Characteristics - McMaster University
3   Analog And Digital - ExplainThatStuff
4   High-Speed Layout Guidelines - Texas Instruments
5   Twisted Pair Cable Impedance Calculator - EEWeb
6a  AN-991 Line Driving and System Design Literature - Texas Instruments
6b  Transmission Line Effects Influence High Speed CMOS - AN-393, ON-Semi/ Fairchild
7   High Speed Layout Guidelines - SCAA082, Texas Instruments
8   As edge speeds increase, wires become transmission lines - EDN
Elliott Sound Products - Arc Mitigation & Prevention
A great deal of what you need to know about arc prevention and/ or mitigation is shown in the second relays article - Relays (Part 2), Contact Protection Schemes. However, there are many other techniques that were only mentioned briefly, largely because they are either little-known or are still covered by patent protection. While this prevents the circuits from being used commercially without infringing, the information is available from the patent documents, so the techniques can still be discussed.
Where circuits are provided, they will show the general scheme, but with only representative component values unless these were also made available in the patent documents. Since most circuits of this nature have to be designed for a particular set of conditions, component values only apply for a limited range of voltages and currents, and there is no 'one size fits all' solution. Arcing contacts have been the bane of industrial systems for as long as they have existed, but today systems run faster than ever before, so contact erosion becomes critical.

Every time a set of contacts arc, material is removed from one contact and re-deposited on the other. With AC, one might imagine that this balances out, but surface erosion causes higher resistance and greater losses. DC systems are particularly hard on contacts, because DC creates bigger and better arcs than AC, even at lower voltage and current. A standard 'miniature' style relay can withstand no more than 30V at rated current, and with typical contact clearance of only 0.4mm, higher voltage will cause a sustained arc that can (and does) totally destroy the relay contacts. A photo can be seen in 'Relays - Part 2', linked above. This can happen with only a slight overvoltage - the voltage and current ratings for relays are not described as such, but they represent 'absolute maximum' values to obtain the rated life.

This article is intended to provide information and ideas - it is not intended to be a construction guide. The basic DC arc prevention scheme shown in Figure 4.2 has been bench-tested, and it works exactly as described. Even with an 80V supply and a 4Ω load, there was zero arcing when the relay contacts opened. This demonstrates that relays can be operated safely at well beyond their voltage rating, but it comes with some risk. Electronic parts can fail, and the result may be catastrophic. Considerable testing is necessary to ensure that whatever you choose to do will be safe and reliable, and always include a fuse or other safety device to guard against severe overloads that may cause additional damage or fire.

The basics of a relay are fairly simple, but there are many styles, and countless variations. Multiple contact sets are common, and most are available with different configurations. The contacts are almost always mounted on phosphor-bronze or similar material that has the ability to flex many thousands of times before breaking (mechanical failures are remarkably uncommon). When the coil is energised (AC or DC, depending on the intended usage), the armature is attracted to the pole-piece, and an actuator pushes the moving (common) contact arm to open the normally closed contacts, and close the normally open contacts. Relays with only normally open contacts are common, but it's not often that you'll come across one having only normally closed contacts. They do exist, but changeover contacts are probably the most common of all.
Figure 1.1 - Relay General Principle
While most people won't necessarily come across contactors very often (if at all), they are really just a large relay. The internals are far more robust, and most are designed for 3-phase mains. While single-phase and 2-phase versions also exist, they are less common. The basic internals are shown below. The most notable difference is that most contactors have two contacts in series, and a wider separation. Many relays (particularly miniature types) have a contact separation of between 0.4mm and 0.8mm, where a typical contactor may have a total separation of 5-10mm.
Figure 1.2 - Contactor General Principle
The primary differentiator between relays and contactors is that the contactor is far more rugged, and contacts are generally spring-loaded to ensure very good contact. Most use an AC coil, with a laminated steel core (yoke) and armature. The magnetic pull is designed to be a great deal higher than any relay, and most use two sets of contacts in series. Due to the size and complexity, they are generally far more expensive than most relays. There's a wide variation in styles, but that's usually only the cosmetics - the principles are unchanged. When the coil is powered, the armature pulls in and closes the contacts. To allow for contact wear and erosion, the moving contacts are usually spring-loaded. The fixed contacts are generally rigid, and power connections are generally bolted onto the accessible terminal.
Most control systems use relays for turning equipment on and off. These remain the dominant control switch, as they are low-cost, reliable and are used in the millions. The cousin of the relay is the contactor - its principle of operation is identical, as described above. The majority (but by no means all) are activated with AC at the nominal mains voltage, although 24V AC is also common as it qualifies as 'SELV' (safety extra-low voltage). Contactors are used for motor control, and are commonly 3-phase, so have three sets of (usually) normally open contacts to switch the power. Some may include auxiliary contacts that may be used to indicate that the contactor is activated or otherwise, or to signal the contactor state to a system controller.

I don't intend to cover contactors further here, as they are in a different league from the relays that most people will use.
Figure 2.1 - Automotive Relay Insides
It's almost universal that people use a diode in parallel with DC relay coils to absorb the potentially damaging back-EMF. It won't damage the relay, but if not suppressed it will usually kill the relay driver transistor or IC. For example, if a relay coil draws 50mA at (say) 12V, when turned off, the magnetically stored charge has to go ... somewhere. If we assume a 1MΩ impedance, the voltage will theoretically rise to 50kV. This never happens in reality, but in excess of 1kV is common. The diode reduces that to a mere 0.7V, but it has an unexpected side effect. Relay activation and release times are usually shown in the datasheet, but the release (drop-out) time is invariably quoted with no protective diode.
Figure 2.2 - Relay Test Circuit
We can use an automotive relay with a 218mA, 12V coil as an example, which has a resistance of 55Ω, operated at 13.5V. Like most relays, the datasheet will say that the drop-out voltage is 10% of the nominal operating voltage (1.2V). At 1.2V, the coil current is only 22mA. With 280mH of coil inductance (not provided in the app. note, but I measured it on a similar relay), it takes 9.6ms for the coil current to fall to 22mA with a diode, and the relay can't even start to release until the current has fallen below that. This does two things. Firstly, it delays the relay drop-out time, so the figure quoted in the datasheet can't be achieved. Secondly (and more importantly), it reduces the armature's speed because there is still some magnetic energy left in the poles.
The answer to this is described in the Relays (Part 1) article, but is repeated here because it's important. If the simple diode is replaced by a diode in series with a 24V zener, the current falls to 22mA in less than 1.9ms. More importantly, the armature can accelerate at close to its maximum, back towards the rest position. This is because the current derived from the back-EMF decays much faster. You can use a higher voltage zener diode to get even faster response, at the expense of a higher back-EMF. The transistor driving the relay must be rated for at least 20% higher voltage than the back-EMF that will be generated. Some examples are shown in the following table, adapted from a TE-Connectivity application note [ 4 ].
Suppression           Release Time (ms)   Theoretical Back-EMF (V)   Measured Back-EMF (V)
Unsuppressed                1.5                    ∞                       -750
Diode & 24V Zener           1.9                  -24.8                     -25
680Ω Resistor               2.3                 -167                      -120
470Ω Resistor               2.8                 -115                       -74
330Ω Resistor               3.2                  -81                       -61
220Ω Resistor               3.7                  -54                       -41
100Ω Resistor               5.5                  -24.6                     -22
82Ω Resistor                6.1                  -20.1                     -17
Diode                       9.8                   -0.8                     -0.7
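The diode and zener entries in the table can be estimated from the coil's electrical behaviour. With the back-EMF clamped at a voltage Vc, the current decays according to di/dt = −(i·R + Vc)/L, which integrates to a closed form. A sketch using the figures quoted in the text (280mH, 55Ω, 13.5V supply, 22mA drop-out current); the results land close to the table's 9.8ms and 1.9ms:

```python
import math

L = 0.280        # coil inductance, H (measured on a similar relay, per the text)
R = 55.0         # coil resistance, ohms
I0 = 13.5 / R    # initial coil current at 13.5 V (~245 mA)
I_DROP = 0.022   # coil current at the 1.2 V drop-out voltage (22 mA)

def release_delay_ms(v_clamp):
    """Time (ms) for the coil current to decay to I_DROP when the back-EMF is
    clamped at v_clamp volts (0.7 V for a plain diode, ~24.7 V for diode + 24 V
    zener). Closed-form solution of di/dt = -(i*R + v_clamp)/L."""
    tau = L / R
    return 1e3 * tau * math.log((I0 * R + v_clamp) / (I_DROP * R + v_clamp))

print(f"plain diode (0.7 V clamp):   {release_delay_ms(0.7):.1f} ms")   # ~10 ms
print(f"diode + 24 V zener (24.7 V): {release_delay_ms(24.7):.1f} ms")  # ~2 ms
```

The small differences from the tabulated figures are expected - the inductance was measured on a similar (not identical) relay, and the model ignores eddy-current losses.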
A resistor in parallel with the relay coil (with or without the diode) is an old technique that was common in very early systems - before diodes were readily available. I first saw this used with electric clocks, operating from 1.5V. The de-facto standard was to use a resistor with 10 times the resistance of the coil (allowing a 15V back-EMF pulse). Clearly, allowing a higher back-EMF means faster release times, but at the expense of an added zener diode (or a resistor) and a higher voltage requirement for the drive transistor. However, allowing higher back-EMF has distinct advantages, in that the relay releases faster, which reduces arcing at the contact faces. This leads to less contact erosion and longer relay life. It's a trade-off, but in some applications it can be very important. Control systems are often especially vulnerable, because they have a high 'work load', and down time is very expensive.
While the figures shown are from the referenced application note, they are easily measured with an oscilloscope, and can be simulated. The 'unknown' is the relay's inductance, which is almost never published. It's essential for simulations, but generally irrelevant otherwise. It can't be measured directly, but can be determined by measuring the resonant frequency with a paralleled capacitor. The armature must be closed manually to obtain the inductance value that determines the drop-out time. Measuring the inductance is difficult, because it's a very low Q circuit due to eddy-current losses in the solid core and armature. It's not strictly necessary, but a series resonant circuit (at very low voltage and impedance) gives the best result.
    L = 1 / (( 2π × f )² × C )
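The formula rearranged into a small helper. The 950Hz/100nF measurement below is hypothetical, chosen to land near the 280mH coil discussed earlier:

```python
import math

def inductance_from_resonance(f_hz, c_farads):
    """Coil inductance from measured resonant frequency with a known capacitor:
    L = 1 / ((2 * pi * f)^2 * C)."""
    return 1.0 / ((2 * math.pi * f_hz) ** 2 * c_farads)

# Hypothetical measurement: 100 nF in parallel, resonance found at 950 Hz
print(inductance_from_resonance(950.0, 100e-9))  # ~0.28 H (280 mH)
```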
While the diode by itself is by far the most common approach taken by the DIY (and audio) fraternity, as seen in the table it's far from ideal. Fortunately, it's rarely necessary to ensure the fastest possible release time, although it is useful for DC protection relays. The extra few milliseconds doesn't usually cause any problems - the problems arise from the relay trying to interrupt a high DC voltage at considerable current. This is not a simple task! It can be made marginally easier by ensuring that the contacts separate as quickly as possible.
An arc is formed when ionised air particles bridge the gap between the contacts. Once the voltage exceeds the critical potential (which depends on the contact materials and many other factors), the ionised air particles allow conduction, and the air (or other gas) and vapourised contact material turns into plasma (the fourth state of matter). The temperature of the arc can be over 5,000°C, depending on available current. No known contact material can withstand that - even tungsten, which melts at less than 3,500°C.

Snubber circuits are one way to help extinguish an arc, as the initial energy is absorbed by the capacitor, and the stored charge is dissipated by the resistor. This arrangement does not mean that you can exceed the relay's voltage rating, but it does reduce arcing to the point where contact damage is minimised, allowing reasonable (or at least acceptable) contact life. Like so much in electronics, it's a compromise.
Figure 3.1 - Basic Snubber Circuit
The values for resistance and capacitance are not overly critical. The capacitor needs to be large enough to absorb the energy, but not so large that it can allow significant current flow with an AC supply. The resistor needs to be small enough to let the capacitor absorb the initial energy, but not so small that a high current flows when the contacts close. A reasonable starting point is as follows ...
    R1 - 0.5 to 1 ohm per contact volt
    C1 - 500nF to 1µF per contact amp
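The rules of thumb above can be wrapped in a small helper. The function name and the 230V/2A example are illustrative only - as the text stresses, final values are a compromise that must be verified against the actual load:

```python
def snubber_starting_values(contact_volts, contact_amps):
    """Starting-point ranges from the rules of thumb above:
    R1: 0.5-1 ohm per contact volt; C1: 500 nF-1 uF per contact amp."""
    r_ohms = (0.5 * contact_volts, 1.0 * contact_volts)
    c_farads = (500e-9 * contact_amps, 1e-6 * contact_amps)
    return r_ohms, c_farads

# Illustrative only: a contact set switching 230 V at 2 A
r, c = snubber_starting_values(230, 2)
print(f"R1: {r[0]:.0f}-{r[1]:.0f} ohms, C1: {c[0] * 1e6:.1f}-{c[1] * 1e6:.1f} uF")
```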
There are more 'advanced' snubbers, typically including a diode to allow maximum capacitive arc damping, but these are only suitable for DC circuits. AC is less troublesome than DC, because the voltage and current pass through zero every 10ms (50Hz) or 8.33ms (60Hz), although the two may not happen at the same time (due to phase shift with reactive loads). Any arc that forms usually can't last longer than one half-cycle, but if ionised particles are still present the arc may re-strike if the contacts are pushed to their limits. Relay specifications take this into account.
Because the plasma forming an arc is both fluid and conductive, it can be manipulated by, and creates, a magnetic field. If a magnet is positioned where it will interact with the arc, it can be stretched until it extinguishes (at least that's the theory). This technique is used in some industrial contactors, but it requires experimentation to get the magnet position right. While it certainly works, it's not something I'd recommend because most relays are fully sealed. You can't see inside them, so you have no way of knowing if the magnet is in the right place, or is strong enough to extinguish the arc. Magnetic arc 'extinguishing' systems are a fairly specialised field, and while experimentation is always encouraged, don't expect miracles.

Most magnetic systems are a part of a more complex overall solution, that often includes specially fabricated arc chutes or arc splitters. These guide the arc away from the contacts, and divide it into smaller segments that are cooled by the chute until the arc extinguishes (shown in Figure 3.1). The contacts are usually provided with arc 'horns' (aka arc runners) that rely on the fact that an arc will tend to rise. The effect known as a 'Jacob's ladder' also relies on this - the arc moves up a pair of wires due to convection - the air around the arc is (super) heated, so it rises.

None of these are available in common relays, because they are not designed to interrupt fault currents. Relays are used for control, and are not considered to be safety devices. However, arcs will occur every time the relay is activated with any voltage present and interrupting current flow. Often, the arc is not visible, but it's there anyway, even with surprisingly low loads. Every time the contacts arc, a little bit of damage is done to the contact surface, meaning that contact resistance rises as the relay is used.
You will find arc chutes in circuit breakers (CBs), as these are designed to be a safety cutout. Most use a thermal system for prolonged (but minor) over-current (up to 200% of rated current, where the CB should disconnect within a few seconds to a few minutes), and a magnetic trip circuit to protect the wiring against short circuits. The magnetic section is designed not to operate unless the fault current exceeds a specific value, and most circuit breakers are designed to be able to break a fault current of at least 4.5kA (4,500A). The absolute magnetic trip value is rarely specified in datasheets, but is covered in the relevant standards for the country where the CB will be used.
I tested a 16A thermal-magnetic circuit breaker, and the internal resistance was 23mΩ. At rated current, it will dissipate 5.9W, rising to 9.2W at 20A, and almost 21W at 30A. With 50A, the breaker could be heard to buzz (the magnetic circuit was on the verge of tripping), but with a dissipation of 57W, the thermal cutout operated in less than 1 second. At 100A, it cut out within 10ms, checked over multiple test cycles. Mostly, about 1 half-cycle was enough to trip the magnetic cutout. The passive arc mitigation system used in circuit breakers is complex, and it needs a photo ...
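The dissipation figures quoted above follow directly from P = I²R with the measured 23mΩ internal resistance. The sketch below simply reproduces that arithmetic:

```python
# Dissipation of the 16A breaker from its measured internal resistance,
# using P = I^2 * R.  R_CB is the 23 milliohm figure measured in the text.
R_CB = 0.023  # ohms

def dissipation(current_a, r=R_CB):
    """I^2 * R power in watts."""
    return current_a ** 2 * r

for amps in (16, 20, 30, 50):
    print(amps, "A ->", round(dissipation(amps), 1), "W")
# 16 A -> 5.9 W, 20 A -> 9.2 W, 30 A -> 20.7 W, 50 A -> 57.5 W
```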
Figure 3.2 - Circuit Breaker Internal Mechanism
Perhaps the most remarkable thing about circuit breakers is the very low cost (the one pictured was under AU$5.00) compared to the number of precision parts involved. The actuator mechanism is quite complex, as it must provide positive contact closure, but can be tripped with very little force. The bi-metallic strip gets hot at high current and bends upwards. If deflected sufficiently, it will touch the trip mechanism, and only very light pressure is needed to release the contacts.
Should normal mains voltage be applied under fault conditions, a large arc will be created. This is stretched by the 'arc horn', and is then split and cooled by the arc chute. The latter is a series of metal plates (9 in the unit shown) that are insulated from each other. The breaker shown has a fault current rating of 6,000A (6kA). Such a high fault current is normally not possible because the mains impedance will usually be somewhere between 0.5Ω and 1Ω (230V mains). This means that the worst-case current will usually be less than 460A. Higher current breakers are used in 120V countries because appliances draw more current for the same power.
It's almost never mentioned, but AC circuit breakers can also be used with DC. I wouldn't exceed 100V or so, but the wide contact separation and arc mitigation elements should be more than capable of breaking a DC arc quite easily. Naturally, if this is something you want to try, you must test it thoroughly before installation. Should testing indicate that the CB cannot break the voltage and current you are using, then you have to use something different.
Figure 3.3 - Circuit Breaker Cutout Curves
The family of curves shown is adapted from an 'Engineering Talks' article [ 5 ], and shows the expected range where the breaker will activate. The 'B-Curve' is not common, and most systems use the 'C-Curve'. 'D-Curve' (delay) breakers are used when high inrush current is expected, and they allow higher peak current without tripping. The CB shown above is a C-Curve type.
The current scale is normalised to unity. For a 16A C-curve breaker, the magnetic cutout will activate at currents between 5.5 × 16A (88A) and 9 × 16A (144A). Based on my tests, 100A provided reliable tripping, although the open-circuit voltage of my test transformer is only 4V. Circuit breakers don't care about the voltage (other than for arc mitigation), and are operated only by current flow. Below the magnetic cutout current, disconnection is due only to current through the bi-metallic strip. The load should disconnect within 20 seconds at 3 times the rated current (48A), and this was confirmed by testing.
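The magnetic trip window can be calculated for any rating from the curve multiples. The sketch below uses the 5.5× to 9× C-curve multiples quoted above; the exact limits come from the applicable standard, so treat these as illustrative.

```python
# Magnetic trip window for a C-curve breaker, using the 5.5x to 9x
# multiples quoted in the text.  The legally binding limits come from
# the relevant standard, so these multiples are illustrative only.

def magnetic_trip_window(rated_a, low_mult=5.5, high_mult=9.0):
    """Return (min, max) current in amps for guaranteed magnetic trip."""
    return rated_a * low_mult, rated_a * high_mult

low, high = magnetic_trip_window(16)  # the 16A breaker tested above
print(low, high)  # 88.0 144.0 - the 100A test current falls inside this band
```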
Note: any experiments you perform are entirely at your own risk. You will be dealing with fairly high DC voltages, and looking at a sustained arc can cause irreparable eye damage due to the intense ultraviolet light emitted. There is also a risk of serious burns and fire. No experiments should be carried out if you have little experience with high voltage, high current and arcs in general. This is a fairly specialised field, and extreme care is required. If you wish to run this kind of test, I suggest Project 207 - High Current AC Source. This allows very high current at a safe low voltage (around 4V RMS open circuit).
Contact arcing is such a problem for industrial systems that countless patents have been lodged for new, not-so-new, exciting and mundane ways to reduce contact damage. Some should never have been granted because they are 'common knowledge', while others are quite innovative. If you want to know about the various systems that have been devised, a patent search will provide many results.
The passive designs you'll come across are not intended to allow the use of any relay contacts beyond their rated voltage and current limits. When contacts arc, damage is done to the contacts, and the idea is to minimise this damage, not to let you operate the relay beyond its rated current limits. However, active arc prevention (and/ or hybrid relays) do allow you to operate a standard relay at a higher DC voltage than recommended. Active circuit failure has to be considered, because any semiconductor can fail for any number of reasons.
Figure 3.4 - Arc Voltage; 60V DC Supply, 8Ω Load
The arc in the screen capture lasts for a little over 350ms, and this test was done with a relay having 0.8mm contact separation. No suppression was used, but the armature's movement was damped by the external supply used. With a smaller gap, this arc would be sustained and would have a lower impedance, thus hastening the demise of the contacts. How it's dealt with depends on the application, and in many cases the recommended solution is to use two sets of contacts in series. This increases the overall separation distance, and also provides more contact area to help cool the arc, which will cause it to extinguish. Operating contacts at above their rated DC voltage is never recommended, which is why there are so many products made that are designed to quench (or prevent) arcs from forming in the first place. The same setup as described was tested with two sets of contacts in series, and while there was an arc, it was small and extinguished quickly.
As noted above, a snubber network can be used in parallel with the contacts. This will not allow operation above the maximum rated voltage or current, but if properly designed for the application, a snubber will reduce arcing. This can help to reduce contact erosion, but it's not as effective as active techniques. However, it's cheap to implement and can extend the life of a relay. Snubbers cannot be relied upon to allow operation at voltages and currents above the rated maxima.
Passive systems can only suppress an arc, but cannot prevent one from forming. Where it's important to ensure that there is no arc at all, an active system is required. These can eliminate the arc completely, by ensuring that the EMR contacts only carry the load current, with the current interruption function handled by semiconductor switches.
Active arc suppression involves semiconductors and other support components. Unlike the suppression system shown for a circuit breaker, there are definite limits to the current that can be interrupted, and they are not intended for use as a safety cutout. Should a major fault develop that trips the CB, there's a good chance that the controller (which can use active systems) may be damaged. Active systems are intended for use where high loading is expected, and/ or rapid cycling which will lead to early contact failure.
One thing that an active system allows (and this includes hybrid relays), is that the full contact current can be used with either AC or DC! Normally, DC operation is limited to around 30V for most relays, but if the contacts only have to carry current and never break an arc, then the only limitations are those imposed by the relay's insulation and the contact gap. Even 0.4mm will withstand 500V or more under static (no current) conditions, so if an arc can never eventuate, then the relay's only limitation is contact resistance and insulation ratings. Dry air will not break down below a field strength of ~30kV/ centimetre (3kV/ mm), so even 0.4mm separation can (theoretically) withstand 1.2kV before a spontaneous arc will develop.
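The static withstand estimate above is a one-line calculation from the ~3kV/mm breakdown figure. The sketch below reproduces it; remember that contamination, humidity and contact shape all lower the real-world figure, so this is an optimistic theoretical ceiling.

```python
# Static withstand estimate for an open contact gap in dry air, using the
# ~3kV/mm breakdown figure quoted in the text.  Contamination, humidity
# and sharp contact edges all reduce the practical figure.
BREAKDOWN_V_PER_MM = 3000.0  # ~3kV/mm for dry air

def static_withstand_volts(gap_mm):
    """Theoretical static (no current) withstand voltage of a gap."""
    return gap_mm * BREAKDOWN_V_PER_MM

print(static_withstand_volts(0.4))  # 1200.0 - the 1.2kV quoted in the text
```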
The following circuit has been simulated and workbench tested, and it does exactly what's claimed for it in the patent. Although I came across the patent drawings more-or-less by accident, I was partway there with some other experiments I was playing with. It may appear simple, but the component values require optimisation for best performance. Like many other arc interrupter/ suppression techniques (which includes capacitive snubber networks), the circuit does allow some leakage across the contacts when they are open. This can be hazardous if used in an industrial system, and it would breach regulations if used on an emergency stop system, or a safety isolator.
Figure 4.2 - MOSFET Arc Extinguisher (DC Version)
The above circuit is shown primarily to demonstrate the circuitry necessary to ensure that an arc is quenched (or in this case, not started at all). The circuit is based on a patent taken out by International Rectifier (one of the pioneers of MOSFETs). The patent (US7145758) is still current, so I have only included indicative component values, being those I used for my test. A more recent patent uses additional parts to switch off the MOSFETs much faster than the simplified version shown. The MOSFET will conduct for around 300-500µs (depending on component values used), while the 'enhanced' version turns off in 100µs or less. In this (and the next) circuit, the arc is not 'mitigated', it is prevented from happening at all. The MOSFET will conduct when there's around 12V or so across the contacts, so an arc doesn't get a chance to form.
Workshop tests show that it works extremely well. There is a small arc when the contacts close, which is caused by contact bounce. When the contacts are opened, there wasn't even a hint of an arc, even with a test voltage of 80V DC and a nominal 4Ω load. While that suggests 20A DC, in reality it's less because the power supply isn't regulated. It's still a very severe test, and was done with the same relay used to produce Figure 3.4. With the higher voltage and current, the relay would sustain a continuous arc without the MOSFET circuit - I know this because I tested it (and a mighty arc it was, too!). This is a case where reality and simulation were in 100% agreement. Even after a number of switching 'events' in fairly rapid succession, the MOSFET I used didn't even get warm. Instantaneous power would be about 400W, but the duration is very short (less than 1ms).
It's an elegant solution, and the added cost and complexity is such that it will pay for itself fairly quickly, thanks to reduced 'down-time' of critical equipment. Circuits such as that shown are used where contacts are constantly opening and closing under load, so arc suppression means far longer life for the relay/ contactor. This class of circuit is intended for industrial applications, where contact operation is in the hundreds (or thousands) of cycles a day, and failures are very costly. There are quite a few companies whose livelihoods depend on arc suppression technology, either as users or sellers.
DC is by far the worst for contact arcing. Most miniature relays only have a contact separation of around 0.4mm, and a continuous (and destructive) arc can be created remarkably easily. The DC rating for most relays is 30V DC at rated current, but that's the figure provided to obtain rated life. Most will be able to sustain an arc with a voltage of around 40V DC at rated current. By using a circuit such as that shown, there is no arc as the relay opens - none at all!
Figure 4.3 - MOSFET Arc Extinguisher (AC Version)
In the AC version, two identical but inverted circuits are used in parallel with the contacts, because the polarity is unknown. One or the other circuit will conduct, depending on the polarity at the time the contacts open. The total active device dissipation can be very high with either circuit, depending on the voltage and current. However, it has a brief duration (generally less than one millisecond), so unless operated with a short duty-cycle (with many switching events per minute), the average dissipation is low enough that it won't cause problems. However, if you happen to be switching 10A at 230V, the peak dissipation may exceed 2kW, and the MOSFET(s) used must be able to handle that.
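A crude upper bound on that peak dissipation assumes the MOSFET momentarily supports close to the full peak mains voltage while carrying the load current. The sketch below uses that assumption purely for illustration; it is no substitute for proper SOA analysis against the device datasheet.

```python
# Crude upper bound on peak MOSFET dissipation in the AC clamp: assume
# the device momentarily supports close to the full peak mains voltage
# while carrying the load current.  Illustration only - not SOA analysis.
import math

def peak_dissipation_watts(v_rms, i_load_a):
    """Worst-case estimate: peak voltage (sqrt(2) * V_rms) times current."""
    return math.sqrt(2) * v_rms * i_load_a

print(round(peak_dissipation_watts(230, 10)))  # ~3253W, consistent with 'may exceed 2kW'
```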
A MOSFET such as the IRFP450 is rated for 500V, 56A pulsed drain current, and a dissipation of 190W (at 25°C). The safe operating area graph indicates that 300V at 10A (3kW) is permissible, provided the duration is less than 200µs. This is not a recommendation, but is an example of a device that may be suitable.
There is a place in audio for the AC version shown - loudspeaker DC protection. A normal relay cannot break the DC output from a failed amplifier if the voltage is much more than ±30V. The Figure 4.3 circuit will break almost any DC voltage of either polarity reliably, something that's simply not possible with any standard relay. Normally, the relay should always be connected so it shorts the speaker (not the amplifier!) when the relay opens due to a fault. In this role, the relay is considered 'sacrificial' - it will almost certainly be destroyed (but your speakers are saved). Don't use any circuit that doesn't short the speaker, as it won't save anything from destruction with more than 30V.
These are the ultimate for the prevention of arcs, including those created by contact bounce. They are covered in detail in the article Hybrid Relays using MOSFETs, TRIACs and SCRs, so will only be discussed briefly here. Because the 'solid state' switch is activated first, the electromechanical relay's contacts never have to switch much current, and the EMR reduces power dissipation to the lowest level possible. When released, the solid state switch remains on until the EMR has released, so there can be no arc.
However, this comes with some complexity, including the requirement for an isolated driver for the electronic switching. This is easy enough with TRIACs and SCRs, but is more difficult with MOSFETs. However, there are solutions for this, and example circuits are shown in the article. An electronic timer is also needed, which can be a simple comparator, a 555 timer, or it can all be controlled by a microprocessor. There are tangible benefits, especially with high current (particularly DC at more than 30V), if the switching cycle is short, or if very precise timing is required.
It's now possible to arrange a switching system for almost any imaginable load, over a wide range of voltages and currents. Switching DC need not be the problem it's always been, but there is an inevitable increase in complexity. Semiconductors used in conjunction with EMRs provide capabilities that were not possible in the early days of switching systems, but careful design is essential to ensure that the electronic parts run as cool as possible. This often means adding a heatsink.
While heatsinks are a nuisance and add cost and bulk to the end product, operating any electronic parts at high temperatures reduces their allowable dissipation, and failures become more likely. Whenever a hybrid solution is used, it's essential that there's a safety cutout in the system, so that a faulty semiconductor doesn't wreak havoc on a machine or an entire production line (and yes, that can happen easily if you miss something that causes a switching system to become a short circuit).
No system can ever be ideal in all respects, and the art of design is to work through the compromises needed (and compromises are always needed in any design) to arrive at an end result that does what's needed. Amateurs who have a good understanding of the risk/ reward equation will usually over-engineer the solution, since they may only be building a small number of switches. The industrial designer is forced to push everything to its limits to keep costs down. There's not much point having the 'best' system available if it's so expensive that no-one will buy it.
This article is intended to show principles, and is not a construction or design guide. However, it should help if you find yourself with a seemingly intractable problem where arc mitigation or prevention is required. It's unlikely that many DIY builders will need more than 'moderate' power - up to perhaps 1kW or so, and the parts needed are not especially expensive. For those who simply want to experiment with ideas, this should give you a head-start.
Elliott Sound Products - Audio Signal Mixing
The mixing of a number of audio signals is such a common thing to do that one would expect the Net to be riddled with articles on how and why signals are mixed. There are plenty of circuits that show how it can be done, but very little that explains the benefits or drawbacks of any particular scheme.
In the early days, there was little or no requirement for mixing. In most cases, the band or (small) orchestra used one microphone, and the amplified output went straight to air for broadcasts or direct to the cutting lathe for recordings. This was before tape or wire recording was used. Because there was so little need for mixing, very simple schemes could be used. Peoples' expectations were low too - at the time it was sufficiently amazing that recordings or 'wireless' were even possible, so no-one was listening for any of the issues discussed below.
Even though there were issues, there were also ways to ensure that they did not impinge in any way on the listeners' enjoyment of the programme material. If audio circuits had to be switched, master level controls would be reduced momentarily to minimise switching noises for example. As audio broadcasts and recordings became more complex, simple manual techniques were no longer suitable because of the number of channels.
Many of the earliest mixers may have had perhaps 4 channels at most. Even such a small mixer started to become problematical though. As channels were switched in or out there would be level changes on the remaining channels. Likewise, even adjusting a level control (fader) could cause the overall programme level from other channels to change.
To understand the reasons, we need to look at the circuits that were used (and still are in many applications).
For the uninitiated, it may seem a little silly that we need to use a whole bunch of circuitry to mix signals. Surely if we just connect the outputs of the various sources together they will mix just fine, no? No!
In fact, many people have done just this and managed to get away with it, but it's purely good luck rather than good management. Consider that most modern equipment uses opamps or other 'solid state' output circuits, and these generally have a very low output impedance. 100 Ohms is typical, but some are a little more, others less.
A 1V signal fed from a 100 ohm output (A) into another 100 ohm output (B) will do two things ...
At a peak voltage of 5V (a perfectly normal transient for example), the driving equipment will be expected to provide 25mA. This exceeds the ability of most opamps, so the signal will distort. Needless to say, equipment 'B' sees the same problem. Worst case is when 'A' has a positive-going transient and 'B' has a negative-going transient. The maximum expected current flow can be very high, and nearly all opamps will distort badly with very low load impedances. The issues are shown simplified in Figure 1, with 3 pieces of equipment simply joined (perhaps using a couple of Y-Splitters in reverse).
Figure 1 - How Not To Mix Signals
Interestingly, direct mixing may work with some older valve (vacuum tube) equipment, but in general the same issues apply despite the relatively high output impedance. While the impedance is high and the expected current is low, valve equipment simply cannot provide much current at all, and even light loading by today's standards (say 10k) can easily cause a significant increase of distortion and premature clipping.
It would be possible to make the output impedance of all equipment much higher, so direct mixing would not cause any circuit stress. The problem would then be that we are back to the position we had when valve gear ruled ... high impedance causes relatively high noise and high frequency rolloff with long cables. Cables can also become microphonic, and this is why so many pieces of valve kit used output transformers - to provide a low impedance (optionally balanced) output to prevent the very problems described. Low output impedance is here to stay, as are mixers, so now we can examine the methods in more detail.
The simplest passive mixer known is two (or more) resistors - one for each input signal. This is shown in Figure 2, and I have used 3 inputs to enable a full understanding of the possible issues. Indeed, all following examples will use 3 channels, because it's a good number to show the effects properly.
A simple resistive mixer as shown below is a voltage mixer. All inputs are assumed to come from low impedance voltage sources. If the source impedance is higher than expected the signal loss for that channel will be higher than for the other channels. The external resistance (which is assumed to exist inside the source equipment as part of its output impedance) is in series with the mixing resistor, so there is more attenuation. The amplifier stage that follows is commonly referred to as the mix recovery amplifier. It is shown for completeness, but plays no part in the mixing process itself - the mixer is passive, despite the opamp. Note the possibility for crosstalk between channels as shown between channels 1 and 2.
Figure 2 - Simple Passive Mixer Circuit
Each source is assumed to generate an open circuit output signal voltage of 1V RMS. Because of the mixing resistors, the output from each is 333mV when any one signal is present. When all signals are present at once, the output voltage depends entirely on the instantaneous voltage and phase of each input signal. With typical 1V RMS music signals present at each input (e.g. vocals and a couple of guitars) the output will be between 300mV and 600mV RMS. This relationship is unpredictable though, because it depends on the instantaneous voltage and phase of the signal at each input. Note that the peak output voltage at the mix point cannot - ever - exceed the peak value of the input signals, even when peaks align perfectly for phase and amplitude. For example, mixing 3 in-phase sinewaves of 1V RMS will provide an output of 1V RMS (1.414V peak).
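With equal mixing resistors and ideal (zero impedance) sources, the unloaded mix point sits at the average of the instantaneous source voltages, which is why the mixed peak can never exceed the input peak. A quick numerical check (the helper function is mine, for illustration only):

```python
# With equal mixing resistors and ideal (zero impedance) sources, the
# unloaded mix point sits at the average of the instantaneous source
# voltages - so the mixed peak can never exceed the input peak.
import math

def mix_point(source_volts):
    """Average of the instantaneous source voltages (equal resistors)."""
    return sum(source_volts) / len(source_volts)

peak = math.sqrt(2)                            # peak of a 1V RMS sinewave
print(round(mix_point([peak, 0, 0]), 3))       # 0.471 - one source gives 1/3
print(round(mix_point([peak, peak, peak]), 3)) # 1.414 - three in-phase inputs
```

Even with all three inputs perfectly aligned in phase and amplitude, the output peak equals (and never exceeds) the input peak, exactly as stated above.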
To understand the limitations, we need to look at what happens if an input is disconnected using the switch in Channel 2 (but ignoring the switch in Channel 3 for the time being). When 3 inputs are connected to low impedance external sources, the circuit acts as a voltage divider for each input. Just looking at input #1, it is apparent that R1 forms a voltage divider with R2 in parallel with R3. Since each resistor is 10k, we have a voltage divider consisting of 10k and 5k. Voltage division is ...
VD = ( R1 / ( R2 || R3 )) + 1 ... where VD is voltage division, and || means "in parallel with".
VD = ( 10k / 5k ) + 1 = 3
The output is therefore 333mV for a 1V input (as noted above). If input #2 is simply disconnected from the source by unplugging it, or by using a simple switch as shown in Channel 2, the voltage division ratio changes ...
VD = ( R1 / R3 ) + 1
VD = ( 10k / 10k ) + 1 = 2
... so the output level is now 500mV instead of 333mV - a 3.5dB increase. This is one of the problems with passive mixing - any change of inputs (the number or impedance) changes the output level. The change becomes smaller as more channels are added, but so does the signal level from each individual input. As shown for Channel 3, a switch can be used that doesn't simply disconnect an input from the source. With this switching, the unused mixing input is shorted to earth when the source is disconnected. This maintains the normal voltage division ratio, so inputs can be connected or disconnected at will. Doing so may cause momentary level 'surges', clicks or pops if the switch is operated while programme material is being mixed.
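The divider arithmetic above is easy to verify numerically. The helper functions below are illustrative, not from the original article:

```python
# Verifying the divider arithmetic above: division ratio with all three
# sources connected, versus one channel simply unplugged.
import math

def parallel(*resistors):
    """Parallel combination of resistances."""
    return 1.0 / sum(1.0 / r for r in resistors)

def division_ratio(r_in, *r_others):
    """VD = (R_in / (others in parallel)) + 1."""
    return r_in / parallel(*r_others) + 1.0

vd_all = division_ratio(10e3, 10e3, 10e3)  # all sources connected: 3.0
vd_open = division_ratio(10e3, 10e3)       # channel 2 unplugged: 2.0
print(round(20 * math.log10(vd_all / vd_open), 2))  # 3.52 - the '3.5dB' increase
```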
If there are 16 channels (not a large mixer by today's standards), with 1V of input the output from each individual input would only be 59mV. More channels means less signal. The relationship described above still tends to hold though, so provided all channels are used the output will still tend to be between 300mV and 600mV (assuming a 1V input to each channel). Disconnecting an input as shown with the Channel 3 switch provides no noise advantage, and the mix recovery amp operates at normal gain at all times.
Figure 3 - Passive Mixer With Channel Level Controls
Things become even more irksome when we add level controls (or faders) for the input channels. Unless buffer amplifiers are used, changing one fader affects the level of the final mix. This is clearly unacceptable. Even with a large number of inputs, there will still be a small change just by operating one fader, and there may be a problem with crosstalk when stereo channels are used.
In Figure 3 you can see that the faders change the impedance seen by the mixing resistors. The effective source impedance is maximum when the fader is (electrically) centred, and will have a value of one quarter of the fader's total resistance. Needless to say, there are ways around all of the issues faced, but passive mixing is rarely used in any professional equipment. If all mix sends are buffered, there is no longer a limitation, but the controlled sends for every send on every channel must be buffered. This could easily add 4 or more opamps to each channel of a mixer - 64 extra opamps just for a 16 channel mixer with 4 sub-mix buses! Quite clearly, this is unacceptable.
The passive mixing technique is still useful though, for example to sum the bass outputs of an electronic crossover to allow the use of a single subwoofer. There are also a few simple mixing tasks for which a pair of resistors is ideal, and it would be silly to add a whole bunch of extra circuitry for such a simple task.
The biggest issues with passive mixing are interaction between channels, crosstalk (important for stereo mixers) and noise. The voltage from each channel is attenuated by the number of channels plus one - so a 24 channel mixer has a signal attenuation of 25 for each individual channel. 1V input gives a 40mV output for a single channel. While this is not a major issue because a low noise amplifier can recover the signal easily, interaction and crosstalk cannot be tolerated in a professional mixer.
Imagine the result of using a passive mixer, but rather than simply connecting the mix resistors together, they are all connected to earth (ground). Crosstalk is no longer possible because the mix output voltage is zero. Likewise, interaction is equally impossible because all mixed signals are shorted to earth. Shorting out the mix bus (to earth/ ground) is not useful, but what if we could use a virtual earth that could make use of the current flowing from each mix resistor?
Active mixing relies on exactly that principle. The idea is to use a current-input amplifier (aka transimpedance amplifier), with an input impedance of close to zero ohms. The amplifier relies only on the current through the mixing resistors, and because the mixer amplifier is a virtual earth (hence the name 'virtual earth mixer'), there can be no crosstalk or interaction between channels. Each input resistor connects to the virtual earth, so there is almost no voltage present at the mixing point.
Figure 4 - Active Mixing Circuit ('Virtual Earth')
The general scheme is shown in Figure 4, and the opamp is an inverting stage, with all mixed signals connected to the inverting input. While it's not commonly described as such, an opamp connected this way is a current amplifier (inverting). Whatever current flows into (or out of) the input is balanced by the current flowing through the feedback resistor (R4), such that the difference between the two inputs is close to zero. In essence, the opamp causes the instantaneous current I4 to be exactly equal and opposite to the sum of instantaneous currents I1, I2 and I3.
Since the non-inverting input is connected to earth (aka ground), the inverting input therefore becomes a 'virtual earth'. In reality, it will have measurable impedance - for it to be a true virtual ground would require that the opamp has infinite gain over the audio frequency range. If one uses a TL072 (for example), you can expect the impedance at the inverting input to remain below 100 ohms up to around 32kHz. The signal voltage at the virtual earth will be well below 1mV up to 1kHz, and remain below 30mV at 30kHz. Better opamps will obviously provide better performance. Capacitance between the mix bus and ground must be minimised, or the mixer may become unstable at very high frequencies.
It is not uncommon to use external transistors to increase the performance of the opamp. This was a common trick many years ago, where transistors were added to the front end of µA741 opamps to obtain more gain and (more to the point) much lower noise. These days there's no need, because there are many exceptionally low noise opamps available. These will almost always beat a discrete circuit in all respects - especially input impedance (which must be as low as possible) and distortion. Discrete (or hybrid) circuits may be better for noise, but should not be necessary unless you exceed 16 channels or so.
Virtual earth mixers have an interesting characteristic that will seem strange at first. Even though the gain for a signal from each individual channel may be unity (a common approach), the circuit has a far greater gain for noise. This 'noise gain' is created because all of the input (mixing) resistors are effectively in parallel. So while the signal gain for one channel may be unity, the noise gain is ...
An = Rfb / ( Rmix / N ) ... where An is noise gain, Rfb is the value of the feedback resistor, Rmix is the value of the mixing resistors and N is the number of channels.
For the 3 channel mixer shown, the noise gain is therefore 3 (at least when all pots are at maximum or minimum), and this applies whenever the inputs are connected to a source. Noise gain is minimised by disconnecting all mixing resistors that are not being used. The signal gain is not affected when channels are connected or disconnected because of the virtual earth mixing scheme, and there are no clicks or pops provided there is no DC in any of the channels.
+ +However ... while signal gain at mid frequencies may not be affected as channels are switched in and out, the frequency response of the mixing amplifier is what you would normally expect with it operating at the noise gain obtained using the above formula.
For example, using 10k mixing resistors and a 10 channel mixer, the individual channel gain is -1 (unity, but inverted), the noise gain is 10, and the mix amp will have the frequency response you'd expect of the same opamp operating with a gain of 10. If more channels are added, the high frequency -3dB point will reduce, exactly as it does if you try to operate any opamp with high gain. While this is rarely a limitation in practice, it needs to be considered as part of the process.
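As a quick sanity check on the numbers above, the noise gain and the resulting mix-amp bandwidth can be sketched in a few lines of Python. This follows the article's noise gain formula; the 3MHz gain-bandwidth product is an assumed, typical TL072-class figure, not a value taken from the article.

```python
def noise_gain(r_fb, r_mix, n_channels):
    """Noise gain of a virtual earth mixer: An = Rfb / (Rmix / N).
    All N mixing resistors appear in parallel at the inverting input."""
    return r_fb / (r_mix / n_channels)

def mix_amp_bandwidth(gbw_hz, an):
    """Approximate closed-loop -3dB frequency of the mix amplifier,
    which operates at the noise gain, not the unity signal gain."""
    return gbw_hz / an

# 10 channel mixer, 10k mixing and feedback resistors (as in the text)
an = noise_gain(10e3, 10e3, 10)
print(an)                           # 10.0
print(mix_amp_bandwidth(3e6, an))   # 300000.0 -> ~300 kHz assuming a 3 MHz GBW opamp
```

The point of the second function is simply that bandwidth shrinks in proportion to the number of connected channels, exactly as the paragraph above describes.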
Thermal noise is created by the mixing resistors themselves, and it becomes significant because of the sheer number of them in a large mixer. Low values are best, but there is a practical minimum - generally considered to be between around 2.2kΩ and 5.6kΩ. However, values below 3.9k are usually not practical due to excessive opamp loading with multiple buses, and much above 5.6k means more noise. For a 3 channel mixer, 10k is perfectly reasonable, although I used 10k here more for convenience than anything else.
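The thermal noise contribution of a single mixing resistor is easy to estimate from the Johnson noise formula. A minimal sketch, assuming room temperature (300 K) and a 20 kHz audio bandwidth - both illustrative assumptions, not figures from the text:

```python
import math

def johnson_noise(r_ohms, bandwidth_hz, temp_k=300):
    """RMS thermal (Johnson) noise voltage of a resistor: sqrt(4*k*T*R*B)."""
    k = 1.380649e-23  # Boltzmann constant, J/K
    return math.sqrt(4 * k * temp_k * r_ohms * bandwidth_hz)

# One 10k mixing resistor over a 20 kHz bandwidth
print(johnson_noise(10e3, 20e3))  # about 1.8e-6 V, i.e. ~1.8 uV RMS
```

Each channel adds another such resistor in parallel at the virtual earth, which is why large mixers favour the lowest practical resistor values.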
Use of virtual earth (or virtual ground if you prefer) mixing is almost universal now. I know of no commercial mixers that use a passive mix bus, because they just don't work very well. Even before opamps (or transistors, for that matter) existed, amplifier stages with the extremely low input impedance needed for current-input mixing were available. While these circuits were more common in RF designs, they were actually well suited to virtual earth mixing.
Figure 5 - Common-Base & Common-Grid Current Amplifier Stages
The most common discrete low input impedance stage is a common-base/ common-grid (aka grounded base/ grid) stage. These are shown above. Simulation of the grounded grid stage shows some interaction, as does the grounded base circuit. As shown, the grounded base circuit has an input impedance of less than 14 ohms across the full frequency range (2.2k input resistors), and the grounded grid circuit has an input impedance of about 660 ohms (10k input resistors).
The signal voltage at the mix point will typically be around 4mV with the values shown, and total gain is dependent on the values of the input resistors. The transistor stage is designed for a single 30V DC supply. The gain is 0.95 (grounded base) and 0.3 (grounded grid) with the suggested values. The gain of both can be increased by reducing the input resistors, but at the expense of greater interaction between separate inputs.
Unlike an opamp, these circuits are non-inverting. While interesting and potentially useful, as far as I'm aware these circuits were not commonly used. I used the common base circuit in some PA amp heads I built many years ago, but most of the others around at the time used a passive (voltage) mixer. As a mixing technique, they leave a lot to be desired compared to an opamp stage.
Note that the valve (vacuum tube) grounded grid circuit is not the only way to achieve this result, nor is the grounded base transistor circuit. There are several options for both solid and vacuum state mixers, however these are (for the most part) outside the scope of this article. Those shown are for information only. Figure 7 shows two alternatives, which use negative feedback to create the 'virtual earth' required. Opamps are preferred though, due to lower noise and distortion.
Note that I refer to the circuit below as 'common collector' because the collector circuits of each section are common to the output. This is not a common collector circuit in the normal sense (i.e. emitter follower), but I couldn't think of a better name for it.
Figure 6 - 'Common Collector' Transistor Mixer
The circuit shown above was used by a few manufacturers in the early days of transistor circuits. A similar arrangement can be used with valves as well. There is some interaction from external pots or if signal sources are connected/ disconnected, and this arrangement is limited to a small number of channels. While it doesn't look like it's the case, the circuit shown is in fact a passive mixer. The transistor stages make it appear to be a true active mixer, but it's not. Each transistor acts as a current modulator, and the total current from all transistors (both signal and DC) is summed in R11.
Like all simple transistor circuits, the noise and distortion contributions are quite high compared to even rather pedestrian opamps. You can expect the distortion to be around 5% with an output level of 2V RMS - an atrocious result, and quite unacceptable. While distortion will be reduced as the level decreases, noise will become intrusive at low levels.
Figure 7 - Alternative Early Current Amplifier Stages
It would be remiss of me not to include the stages shown above. These are reasonable equivalents to the standard opamp stage, and work in a similar manner (they are inverting, too). The input impedance is lower than the stages shown in Figure 5, and there is (almost) no interaction between the inputs. Low frequency impedance can be improved with higher value caps for C1 and C2 if necessary. The valve stage input impedance will be higher still because there is far less open loop gain. I was able to simulate this, and the result was a little surprising - it's far better than you'd expect. As a virtual earth mixer, the valve stage works well, but all impedances will be a great deal higher than with any transistor circuits. High impedances inevitably lead to increased noise.
The mixing (input) resistors for the transistor stage would normally be 10k (unity gain for a single channel). With the values as shown, the input mixing resistors for the valve stage need to be no less than 100k. This provides a gain of two for each channel, and higher values may be needed to reduce the gain if there are more than four channels. Using a pentode for V1 will improve performance due to its higher gain, although it will still be lower than the gain of a transistor circuit.
Because the virtual earth mixing system is extremely low impedance, it is very susceptible to induced noise current. The mix buses usually run the full width of the mixer, and become extremely sensitive simply because of the length of the bus itself. Anything that generates a magnetic field close to the mixer will cause noise at the output. The most common types of noise are hum (from nearby transformers) and/or buzz (from transformers that supply rectifiers). If the mixer chassis is not completely shielded, RF noise may also be a problem in some cases. The standard way that this type of noise is eliminated is to use a balanced mix bus. Any external noise will be injected into both the +ve and -ve mix bus, and is cancelled out by the balanced mixing amplifier. If RF is a problem, a balanced mix bus is unlikely to be very helpful, because RF will affect many other parts of the circuitry as well.
Figure 8 - Balanced Mix Bus & Mixing Amp
The switching shown is naturally optional. One of the nice things about virtual earth mixing (whether balanced or unbalanced) is that channels can be connected and disconnected at will. Provided there is absolutely no DC in the audio circuit, switching is generally silent. If a signal is switched while it is at maximum level, there may be a slight click. Some mixers use 'soft switching' to ensure that there are no clicks or pops no matter when the signal is switched - this usually involves using FETs as switching devices.
Although there is a great deal more circuitry involved to create a balanced mix bus for a large mixer, there is also a useful reduction of noise. The mixed output has 6dB more signal because of the balanced bus, but noise is only increased by 3dB, for a net 3dB improvement in signal to noise ratio. While 3dB may not seem like much, it is still a worthwhile improvement. The extra parts are of little consequence in a top-of-the-line mixer - these are often of the "if you have to ask the price you can't afford one" category.
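The 6dB/3dB figures follow from the way coherent signals and uncorrelated noise add: the balanced bus doubles the signal voltage, while uncorrelated noise from the two halves adds by power (√2 in voltage). A small sketch of the arithmetic:

```python
import math

def db(ratio):
    """Voltage ratio expressed in decibels."""
    return 20 * math.log10(ratio)

signal_gain_db = db(2)             # signal doubles: ~6.02 dB
noise_gain_db = db(math.sqrt(2))   # uncorrelated noise adds by power: ~3.01 dB
snr_improvement = signal_gain_db - noise_gain_db
print(round(snr_improvement, 2))   # 3.01
```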
The virtual earth (or virtual ground) mixer stage is almost universal, and has fewer limitations than the apparently simpler passive mixing technique. Like most things in electronics, both methods are compromises. As is probably obvious by now, the benefits of the virtual earth mixer generally outweigh any disadvantages. This is shown by the fact that it is almost universal for any mixer with more than two channels.
For very simple mixers, passive resistor mixing stages are common and well suited to the task. A typical use for such simple circuits is to convert a stereo signal into mono - either full range or only the bass frequencies. For any application where the separate channels need individual gain controls, the virtual earth stage is preferred, even for the simplest of mixers.
There is no advantage in using valve (vacuum tube) circuits for mixing - although interesting from a nostalgic perspective, they don't work very well and can't be recommended. For most hobbyist applications, a simple unbalanced virtual earth mixer will do everything that is needed, and performance will be very good indeed if a reasonably good opamp is used (OPA134 or NE5534 for example).
+ +![]() | + + + + + + + |
Elliott Sound Products | +Transformers For Small Signal Audio |
Using a transformer in a small signal audio circuit is a simple process, and at first glance there is nothing that can go wrong. The term 'small signal' is used to differentiate between transformers used for so-called line level applications and those used to drive speakers (for example). Indeed, in many cases everything works as it should, but there are some traps for the unwary. Common issues may include high distortion at low frequencies, grossly accentuated bass response, little or no bass, or a combination of problems.
The casual experimenter may think that audio transformers are a thing of the past, but this is definitely not the case. Transformers have unique attributes that cannot be matched by active circuitry. Even though there are some extremely good electronic interfaces, none provide the exceptional characteristics that come with a transformer. These include ...
Obviously, transformers have their fair share of negative attributes too. The imperfections are primarily due to the fact that real world materials are used in their construction, and like all real materials they are imperfect. The main issues are ...
Despite the limitations, transformers can do a very credible job of isolating signals from hostile environments, impedance conversion and converting unbalanced signals to balanced and vice versa. Nearly all valve (vacuum tube) amplifiers use a transformer to reduce the relatively high impedance of the valve plate to something useful for driving speakers, and high quality microphone preamps and mixing desks either use mic (and/or line) transformers as a matter of course or offer them as an option.
However, there are various ways (exciting or otherwise) in which you can run into trouble. There are equally novel ways to bypass the limitations of even cheap transformers, and it may even be possible to make a silk purse from a sow's ear (no, not really - just kidding).
As noted earlier, this article is focussed on 'small signal' audio transformers, such as those used for microphone preamps, line level input and output applications, and as signal isolators. By definition, 'small signal' refers to transformers that are typically rated for impedances between 50Ω and perhaps 10kΩ, used with voltages from a few millivolts up to about 10V (RMS), and covering the normal audio bandwidth. This is usually taken to be from 20Hz up to 20kHz, but it's not uncommon to extend this in both directions - say from 10Hz to 40kHz - allowing an extra octave either side of the audio band. They are not used at significant power levels, and the maximum will be a few milliwatts.
One of the 'reference' levels is dBm, defined as 1mW at an impedance of 600Ω. This equates to a voltage of 774.6mV, which is normally rounded to 775mV. dBu is a reference level based on 775mV RMS, but without specifying the impedance. dBV is referenced to 1V RMS, and again no impedance is stated.
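These reference levels are easy to convert programmatically. A minimal sketch - the 0.775V figure is derived from 1mW into 600Ω exactly as stated above:

```python
import math

def dbu_to_volts(dbu):
    """dBu is referenced to 0.775 V RMS (the voltage of 1 mW into 600 ohms)."""
    v_ref = math.sqrt(0.001 * 600)  # sqrt(P * R) = 0.7746 V
    return v_ref * 10 ** (dbu / 20)

def dbv_to_volts(dbv):
    """dBV is referenced to 1 V RMS."""
    return 10 ** (dbv / 20)

print(round(dbu_to_volts(0) * 1000, 1))  # 774.6 (mV)
print(dbv_to_volts(0))                   # 1.0
```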
A transformer is defined first and foremost by its turns ratio, which is equal to the voltage ratio from input to output. In some cases, a transformer that's classified as 1:1 may actually be around 1:1.1 to account for insertion loss - the energy that doesn't get through the transformer due to losses. The main source of loss is the resistance of the windings, and the insertion loss will usually be specified (assuming that it is specified at all) at 1kHz.
Low frequency performance is determined by the transformer's inductance, signal level and the source impedance. A transformer intended for high impedance use requires a much greater inductance than one intended to be driven by a low impedance source. The core material and size are also very important, as these determine the voltage and frequency at which the core will start to saturate. Core saturation causes distortion, and for this reason it is usually extremely important to ensure that there is little or no DC flowing in the primary or secondary, as this will cause asymmetrical core saturation.
As the core approaches saturation, distortion rises dramatically. At low frequencies in particular, maximum signal level, source impedance and distortion are interdependent, and cannot be categorised separately. Because of this, they must always be examined together. Should a data sheet discuss any one (or two) of these parameters in isolation, the data are meaningless and the actual performance must be determined by measurement. However, there are still traps for the unwary, although some can even be used to advantage.
Interestingly, if you drive an ideal transformer (i.e. one with zero winding resistance) from a zero impedance source, the distortion will be close to zero at any frequency. However, real transformers always have winding resistance, but it is possible to use a driver circuit that has negative impedance. If the negative drive impedance exactly matches the winding resistance, the result is a zero ohm source. Unfortunately, negative impedance amplifiers are inherently unstable, and can create far more problems than they will ever cure. Nonetheless, we'll look at this option later in this article.
Another interesting point about transformers is that distortion is highly frequency dependent, so is far less intrusive than an equal amount of distortion from an amplifier [ 1 ]. The frequency dependent distortion is unique to transformers, especially since it is worse at low frequencies. Amplifiers will often have higher distortion at high frequencies, where it can create far more problems. In particular, intermodulation distortion will usually be much lower than expected, based on the low frequency harmonic distortion figure.
A transformer's high frequency response is limited by leakage inductance. This is caused by magnetic flux that manages to 'escape' from the core, and it appears as a separate inductance in series with the primary. There are ways to minimise leakage inductance, and these must be applied in any transformer that has a significant step-up or step-down ratio. 1:2 or 2:1 ratios (or less) are usually easy enough to make with acceptable leakage inductance. While measuring leakage inductance might seem to be a rather esoteric test, in reality it's quite simple - short circuit the secondary and measure the primary inductance. A perfect transformer would show zero inductance.
Figure 1 - Transformer Equivalent Circuit
Figure 1 shows the equivalent circuit of a transformer. It is greatly simplified, but serves to illustrate the points. Since the windings are usually layered, there must be capacitance (CW) between each layer and indeed, each turn. This causes phase shifts at high frequencies, and at some (high) frequency, the transformer will be 'self resonant'. This is not a problem with power transformers (for example), but does cause grief when a wide bandwidth audio transformer is needed.
The leakage inductance (LL) is effectively in series with the transformer. Although small, it tends to affect the high frequencies in particular, and is especially troublesome for audio output transformers. This is typically measured with an inductance meter, with the output winding short circuited. Any inductance that appears is the direct result of leakage flux. RL is a resistance in parallel with the leakage inductance, and indicates that it is not perfect. Self-resonance occurs when LL resonates with CW.
RS is the source resistance, which may range from a few milliohms up to perhaps 1kΩ for typical audio coupling applications. High primary source impedance means that the primary inductance must also be high.
LP is the primary inductance, and as you can see, there is a resistor in parallel (RP). This represents the actual impedance (at no load) presented to the input voltage source, and simulates the iron losses. Iron loss and saturation are frequency dependent, but are difficult to model. The series resistance (RW) is simply the winding resistance, and represents the copper losses (insertion loss). The required inductance is directly proportional to impedance and inversely proportional to frequency (low frequency - high inductance).
CP-S is the inter-winding capacitance, and for many transformers it can be a major contributor to noise at the output. This is often overcome by using an electrostatic shield (aka Faraday shield) between primary and secondary, which is connected to chassis earth and shunts the capacitively coupled noise to earth so it cannot pass between primary and secondary.
There is another issue with audio transformers, especially those that are used at low levels. Hum fields from nearby power transformers can often be very high, and many high quality audio transformers are fitted with one or more magnetic shields to minimise hum pickup. This is not an easy task, and good magnetic screening is difficult to obtain. The materials used must have very high permeability or screening will not be effective. It's not uncommon for very high quality transformers to be fitted with two or even three magnetic shields, with each providing perhaps 30dB or so attenuation of 50-60Hz hum. If the external field is especially strong, it may saturate a high permeability shield. In extreme cases, it may be necessary to enclose the transformer in a steel outer enclosure - steel has a relatively low permeability and is much harder to saturate.
Be very careful with any audio transformer with Mu-Metal or similar magnetic shielding. If the transformer is dropped (for example), the properties of Mu-Metal and other high permeability materials can be reduced quite dramatically. When the shielding cases are manufactured, they must be annealed after bending and other machining operations or the magnetic shield's properties will be decidedly sub-optimal.
In many cases, you can simply use a transformer as purchased, and hope that it will do what you need (and/or what the brochure or datasheet claimed). If it does, then you don't need to do anything, but with many 'reasonably priced' audio transformers you may find that the specifications don't match reality, important information is omitted, or both. One important detail is nearly always missing - primary inductance. If you want to use the transformer in any way other than as described in the datasheet, you need this information.
One of the most misleading (and generally useless) specifications is the transformer's impedance. A transformer doesn't have an impedance by itself - the impedance at the primary (input) terminals is entirely dependent on the impedance at the secondary and vice versa. Datasheets commonly state the impedance, but as a specification it's not useful. For anything other than top of the line audio transformers, you usually won't find any information on the maximum low frequency voltage, distortion at that voltage and frequency, or any Zobel network that might be needed to tame high frequency self-resonance effects.
To allow you to get the most from a transformer, it's necessary to know the primary inductance. It may also be helpful to know the frequency that was used to measure it. Inductance is omitted in almost all datasheets! However, you can get an estimate by looking at the bass -3dB frequency and the claimed impedance. For example, a transformer might claim to be -3dB at 60Hz with an impedance of 600Ω. That means that the inductive reactance is equal to 600Ω at 60Hz, so ...

L = XL / ( 2π × f ) where XL is inductive reactance and f is frequency
L = 600 / ( 2π × 60 )
L = 1.59 H
Will this be accurate enough? In some cases, probably not, so you'll have to measure it. Some LCR (inductance, capacitance, resistance) meters will give a fairly accurate result, and some will give a reading that's way off. All is not lost though, because you can measure the impedance quite easily, and then use the above formula to calculate the inductance. To get an accurate result, you will need to ensure that the measurement frequency is high enough (or the level low enough) to avoid core saturation, because even very slight saturation will cause a large error. The applied signal must be a sinewave (but you knew that already).
You also need to ensure that the impedance measured is at least 10 times the winding resistance, and preferably more to get higher accuracy. All rather tedious really, but there's a better way.
The easiest technique is to supply the transformer with a sinewave via a capacitor, and measure the resonant frequency of the circuit. A 100nF cap (with a tolerance at least as tight as the accuracy you expect from the measurement) should give a good result. With this method, the winding resistance is (almost) immaterial, but the final answer depends on the tolerance of the capacitor and how accurately you can measure the frequency. In general, getting within 5 or 10% will usually be sufficient.
Connect the generator, capacitor and transformer primary winding in series, and monitor the voltage across the transformer with an oscilloscope. At some frequency it will rise to a maximum, which can actually be much higher than the audio generator output level! Reduce the level from the audio generator to keep the voltage across the transformer below the claimed maximum level, and measure the frequency carefully. Let's assume that you measure a frequency of 399Hz at resonance. Now you can use the formula below to determine the inductance ...
L = 1 / (( 2π × f )² × C )
L = 1 / (( 2π × 399 )² × 100n )
L = 1.59 H
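The resonance method translates directly into code. A small sketch using the same example figures (399 Hz resonance with a 100 nF series capacitor):

```python
import math

def inductance_from_resonance(f_hz, c_farads):
    """Primary inductance from the series resonant frequency measured
    with a known series capacitor: L = 1 / ((2*pi*f)^2 * C)."""
    return 1 / ((2 * math.pi * f_hz) ** 2 * c_farads)

print(round(inductance_from_resonance(399, 100e-9), 2))  # 1.59 (henries)
```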
Now, in this case the results are the same, but in reality they can be very different. If you have an inductance meter, I recommend that you use that to measure the inductance too - not because it's useful, but to demonstrate that inductance meters often give readings that are wildly inaccurate. There are several reasons that meters will get the wrong answer, including winding resistance and core losses. Although generally quite small, core losses appear as a resistance in parallel with the winding, and combined with the winding resistance cause most meters to give you an answer that looks plausible, but is quite wrong.
In some cases you may find that the above method doesn't work as well as you may have hoped, often due to a very low Q resonance. It's worst when measuring very high inductances (100H or more) as this causes the resonance Q to fall dramatically. In part, this is also due to the core loss (modelled as RP) which increases the measured resonant frequency slightly. If this is the case, you can measure the frequency where the output on the secondary has a phase shift of 90° with respect to the input (primary). You still use the series capacitor, but the frequency used in the above formula is that where the phase shift is 90°, and not where the amplitude is greatest. The secondary must be open-circuit, so use a 10MΩ scope probe. While this will give a more accurate reading, it's not usually necessary to be too precise because the inductance is (hopefully) great enough to ensure good low frequency performance. The ease (or otherwise) of measuring a precise phase shift depends on the test gear you have available - it can be done with an oscilloscope, but it's fairly irksome.
Nearly all audio transformers need a Zobel network in parallel with the secondary winding. This is used to terminate the transformer at high frequencies, where it becomes self-resonant. There's no easy way to determine the values needed, but a reasonably good start is to select a Zobel resistor value that's roughly equal to the claimed impedance. The Zobel capacitor is usually best selected by trial and error (aka empirically).
The optimum Zobel network may be specified in the datasheet for high-quality transformers, which saves you the trouble of selecting the values yourself. Most transformers won't need a Zobel network if the secondary is loaded with the stated impedance (e.g. 600Ω, 10kΩ, etc). However, this arrangement is usually sub-optimal, as it implies impedance matching, which is usually not needed (or desirable) for audio. Most sources are low impedance (typically less than 100Ω), and most inputs are comparatively high impedance (10k or more). This ensures minimal signal loss (technically known as 'insertion loss') so you get the highest signal level possible.
Without the Zobel network, you will usually find that the output level increases with frequency, with the starting point determined by the transformer's characteristics. You'll generally see a gradual increase beyond 10kHz or so, and the peak level can easily reach 6dB above the 'mid-band' output level. The frequency and amplitude of the peak are determined by leakage inductance and inter-winding capacitance.
I measured the self resonant frequency of test transformer #1 (described next). The peak was at 500kHz (almost exactly), and the level rose from 187mV to 407mV (6.75dB). This is easily tamed with a Zobel network, and the datasheets for quality transformers will often include suitable values. At 20kHz, the level had also increased slightly, rising to 194mV (+0.32dB). The Zobel network shown in Figure 3 (CZ and RZ) almost completely eliminated the self resonance peak. Zobel networks are totally transformer dependent, and any transformer will require a network designed for that specific component.
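The measured levels quoted above can be converted to dB to confirm the figures. A minimal sketch:

```python
import math

def db_ratio(v_out, v_ref):
    """Voltage ratio expressed in decibels."""
    return 20 * math.log10(v_out / v_ref)

# Self-resonance measurements of test transformer #1 (from the text)
peak = db_ratio(0.407, 0.187)  # rise at the ~500 kHz peak, about 6.75 dB
hf = db_ratio(0.194, 0.187)    # rise at 20 kHz, about 0.32 dB
print(peak, hf)
```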
To test and demonstrate the use of the above info, I used a small 600Ω transformer that was originally intended for telephony. Despite the rather humble nature of such trannies, it can be made to work down to 30Hz, and is somewhat better than the 'telephone' transformers you can get from various suppliers. It uses a ferrite core, and has acceptably low winding resistance and a reasonable inductance. Because the ferrite core is quite small and has a very high permeability, the maximum level at low frequencies is very limited. This applies to all audio transformers - if you need to handle a reasonably high level, the core has to be much larger than you might expect.
Figure 2 - Test Transformer #1
As you can see, this transformer has no external magnetic shielding, and it measures 24mm (long) x 20mm (wide) x 13mm (high, excluding pins). The values obtained are a combination of measured and calculated, especially for the primary inductance. The transformer is 1:1 - 600Ω in and 600Ω out. The electrical parameters I obtained are as follows ...
+ +++ ++
+Parameter Value Measured With ... + Primary Resistance 55Ω Ohmmeter + Secondary Resistance 67Ω Ohmmeter + Primary Inductance 1.83 H Inductance Meter (and obviously wrong) + Primary Inductance 2.21 H Calculated + Leakage Inductance 364 µH Inductance Meter +
To calculate the primary inductance, I used a 100nF series capacitor (measured at 94nF), and resonance was at 349Hz. Input voltage was 89mV for a voltage across the transformer of 1V RMS at resonance. That represents a voltage peak of 21dB, an interesting observation, but not actually useful. Using the above formula, inductance was calculated to be ...
L = 1 / (( 2π × f )² × C )
L = 1 / (( 2π × 349 )² × 94n )
L = 2.21 H
The leakage inductance was obtained with an inductance meter, by measuring the primary inductance with the secondary shorted. This is the standard way that leakage inductance is measured, but it can still provide an inaccurate reading depending on the meter you use. With a load impedance of 600Ω, 364µH means that the output will be less than 1dB down at 100kHz. This is more than adequate for even hi-fi applications, but this transformer is let down by its low frequency performance.
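The sub-1dB claim can be checked by treating the leakage inductance as a series element feeding the 600Ω load. This is a deliberately simplified model that ignores winding and source resistance:

```python
import math

def hf_loss_db(l_leak, f_hz, r_load):
    """Level drop (in dB) caused by leakage inductance in series with a
    resistive load, ignoring winding and source resistance (an assumption)."""
    xl = 2 * math.pi * f_hz * l_leak            # reactance of the leakage inductance
    return 20 * math.log10(r_load / math.hypot(r_load, xl))

# 364 uH leakage, 600 ohm load, 100 kHz
print(hf_loss_db(364e-6, 100e3, 600))  # about -0.59 dB, i.e. less than 1 dB down
```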
I measured around 1.1% THD with 300mV input at 30Hz, driving the transformer from a 50Ω source. This isn't a great result, but it was a little surprising nonetheless, since the transformer I'm using here was never intended to work much below 300Hz. Expecting a low frequency extension of a decade (roughly 3.3 octaves) is unrealistic, but it works fine provided the level is kept low.
When I tested this transformer, the response was commendably flat - even down to 20Hz. Response was down by less than 0.7dB, which isn't bad for such a lowly component. The Zobel network shown in Figure 3 maintained flat response.
If you need some bass boost, it's easily achieved by using a series capacitor. Using a 100µF cap gave a small but useful 1dB boost at 20Hz, and smaller values will provide boost at higher frequencies. I'm not entirely sure why anyone would want to introduce a deliberate low frequency boost, but it can easily happen by accident. If a user decided that adding the cap was necessary to remove any DC (which is entirely true), it's important to ensure that the cap is large enough so that you don't create a series resonant circuit that is within the audio band.
A series resonant circuit is rarely an advantage, and Figure 3 shows what happens if the cap is too small - in this case 22µF. Although the resonance can be tamed quite easily by adding series resistance, that's at the expense of output impedance. The demonstration circuit is shown below, along with the modified frequency response graph.
Figure 3 - Test Transformer #1 Circuit And 'Accidental' Response
By adding the capacitor in series with the transformer, a series resonant circuit is created that boosts the output at low frequencies. You can add series resistance to tame the large peak, but a far better solution is to use a much bigger capacitor. There can be no doubt that the distortion caused by DC flowing in the transformer will be far worse than anything the capacitor contributes, even if you believe that caps are somehow 'evil'. In this instance, a 220µF cap will cause no bass boost of any consequence at any frequency, and this is the optimum value for this particular transformer. Provided you know (or have calculated) the primary inductance, the resonant frequency is determined by ...
fo = 1 / ( 2π × √ ( L × C )), so for the example given
fo = 1 / ( 2π × √ ( 2.2 × 220µ ))
fo = 7.2 Hz
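The same formula makes it easy to check any proposed coupling capacitor before committing to it. A small sketch, using the 2.2 H primary inductance from the example:

```python
import math

def resonant_freq(l_henries, c_farads):
    """Series resonant frequency: fo = 1 / (2*pi*sqrt(L*C))."""
    return 1 / (2 * math.pi * math.sqrt(l_henries * c_farads))

print(round(resonant_freq(2.2, 220e-6), 1))  # 7.2 Hz - safely below the audio band
print(round(resonant_freq(2.2, 10e-6), 1))   # 33.9 Hz - well inside the audio band
```

A quick loop over candidate capacitor values is a practical way to find the smallest cap that keeps resonance below (say) 10 Hz.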
I also tested a couple of other transformers, and the bass response was boosted by resonance in all cases. It's very important to ensure that any capacitor used in series with the transformer primary is large enough to prevent unwanted boost. If you use a cap that's too small (possibly because you didn't think about resonant circuits), then you'll get a response like that shown below.
Figure 4 - Output Boost Due To Capacitance
In the above, let's assume that a 10µF cap was used, perhaps because it seemed like a good idea at the time. However there is no series resistance, so the cap and the transformer's inductance will combine to create a resonant circuit (just as shown in Figure 3). If one fails to realise that a series resonant circuit is created there will be a very large output boost at the resonant frequency, and (this is the killer!) the circuit will appear to be close to a short circuit across the opamp's output! The opamp may not be able to supply enough current, and will distort horribly.
You can see that the amount of boost depends on the load impedance, so this transformer output will sound quite different depending on the input impedance of the load. A high impedance load (10k) causes a boost of over 10dB at 34Hz, and that will be audible with almost all programme material. Because a series resonant circuit has a very low impedance at resonance, the drive circuit may be overloaded, so you can get both a clipping drive amplifier and transformer core saturation! Two serious problems for the price of one.
Transformers must always be used with care, and it's essential to test the circuit with the selected transformer. Failure to do so can cause issues as described, and while it might be declared as having 'better bass' than a system that's been optimised, it will be neither accurate nor predictable.
Fortuitously, a 'real' 600Ω 1:1 line transformer arrived not long before I was about to publish this article, so I was able to run tests on a more representative sample. The full details are explained below. This is a far more substantial unit than #1, with the core alone measuring 35 x 30 x 13mm. The basics of the transformer are ...

Parameter               Value     Measured With ...
Primary Resistance      34.8Ω     Ohmmeter
Secondary Resistance    35.5Ω     Ohmmeter
Primary Inductance      5.09 H    Calculated
Leakage Inductance      153 µH    Inductance Meter
To calculate the primary inductance, I again used a 100nF series capacitor, and resonance was at 223Hz. Using the formula shown earlier I was able to calculate the inductance. An inductance meter gave me a very wrong answer (around 700mH on the 20H range, but over-range on the 2H setting). Inconsistencies like that demonstrate clearly that the measurement is wrong, and you have to resort to calculating the inductance.
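The same resonance formula can be rearranged to recover the inductance from a measured resonant frequency. A minimal sketch using the figures quoted above (100nF series cap, resonance at 223Hz):

```python
from math import pi

def inductance_from_resonance(f_hz, c_farad):
    """Primary inductance from the measured series-resonant frequency with a
    known capacitor. Rearranged from fo = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / ((2.0 * pi * f_hz) ** 2 * c_farad)

# Transformer #2: 100 nF series cap resonated at 223 Hz
L = inductance_from_resonance(223.0, 100e-9)
print(f"Primary inductance = {L:.2f} H")  # Primary inductance = 5.09 H
```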
A photo of the transformer is shown below for reference. I have no qualms about displaying the manufacturer (Harbuch Electronics in Hornsby, Australia) as they are one of the very few transformer manufacturers left in Australia and richly deserve some promotion. The transformer itself is intended for high quality audio use, and is somewhat cheaper than most of the major (and better known) overseas makers. Overall performance is excellent, and it can easily handle +10dBV at any frequency in the audio range. High frequency performance is extraordinarily good, extending to well beyond 100kHz with no evidence of ringing - even with zero load.
Figure 5 - Test Transformer #2
At 20Hz, distortion was only 0.15%, and with a 100µF capacitor in circuit, there is a very slight boost at very low frequencies, with the output level rising to 3.13V (at 20Hz) from the 400Hz level of 3.09V (input was 3.16V RMS). This represents a change of 0.11dB which can safely be ignored.
I tested this transformer with the negative impedance circuits shown below, even though it's not really necessary. Distortion is commendably low at 20Hz, and further attempts at improvement end up making little difference. The added 'improvements' can easily make things worse by creating a potentially unstable condition with infrasonic signals.

Before describing negative impedance drivers, it's necessary to explain the concept. A physical resistor has normal, positive resistance. If a voltage is applied to one end and a load to the other, current will flow that's directly related to the voltage and resistance ... Ohm's law. As the load resistance is reduced, the load voltage is reduced too, because more current is drawn from the source and more voltage is 'lost' across the resistor. This is the principle behind a voltage divider.
A negative resistance (negative impedance is more accurate) cannot exist in nature, and must be built using active and passive parts (Note 1). If a load resistor is connected to the output of a negative impedance circuit, again, current will flow. However, if the load resistance is reduced, the output voltage increases. In fact (and very much in theory), if the 'real' resistance (impedance) is exactly the same value as the negative impedance, the two cancel and the result is zero ohms. The circuit will attempt to provide infinite current at an infinite voltage (real circuits will prevent this of course). This is not an easy concept to grasp, but hopefully the following will make sense regardless.
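The inverted behaviour falls straight out of the ordinary voltage divider equation if the source impedance is simply made negative. The values below are illustrative only (not from the measurements in this article):

```python
def load_voltage(v_source, z_source, r_load):
    """Voltage across the load for a source with a given output impedance.
    Ordinary voltage divider; z_source may be negative for a NIC.
    Diverges (division by zero) when r_load == -z_source."""
    return v_source * r_load / (r_load + z_source)

# Positive source impedance: reducing the load reduces the output.
print(load_voltage(1.0, 50.0, 600.0))   # ~0.923 V
print(load_voltage(1.0, 50.0, 100.0))   # ~0.667 V

# Negative source impedance: reducing the load *increases* the output.
# As r_load approaches 50 ohms the divider cancels to zero ohms and the
# 'gain' heads towards infinity.
print(load_voltage(1.0, -50.0, 600.0))  # ~1.091 V
print(load_voltage(1.0, -50.0, 100.0))  # 2.0 V
```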
If the drive amplifier is configured to have negative impedance, and that exactly equals (but is opposite to) the winding resistance, the inherent limitations of the transformer are all but eliminated. However, as noted earlier, negative impedance circuits are inherently unstable, so it is necessary to ensure that the circuit used cannot oscillate or do anything else that you wouldn't like - regardless of what the load might do. The circuit is known as a negative impedance converter (NIC).

Using a NIC to drive the transformer means that the primary inductance doesn't cause low frequency rolloff, so at low levels the output from transformer #1 can be flat down to as low as 8Hz. At realistic levels (around 0.5V RMS) the minimum useable frequency is 20Hz.
This isn't a new idea by any means; it is the subject of several patents [ 3 ] and has been described by some transformer manufacturers. No-one seems to have discussed any problems though, which is unfortunate. In particular, negative impedance is treated as if it were the most natural thing in the world, something it most definitely is not. Over the years, I have experimented with negative impedance amplifiers on many occasions, and while the idea always seems good, the unstable nature of these amplifiers (in particular with non-linear loads) is often their downfall. The situation may be a little different when driving small signal transformers, but there are still some issues that you have to deal with.
Figure 6 - Negative Impedance Driver #1
The circuit shown above is one way to achieve the required result, and it provides an output signal that cancels the winding resistance and the distortion generated as the core saturates. I was unable to simulate the effects of saturation realistically, so the circuits were built and the waveforms are shown below. Both circuits I tested work very well (but with caveats).
It is important to understand that when a negative impedance converter is loaded with a positive resistance that exactly equals its negative resistance, the gain is - theoretically - infinite! That means that the above circuit will have problems with DC offset. Capacitor coupling is not really an option unless you are prepared to use a very large capacitance (at least 2,200µF and preferably more).

The above circuit must be driven from a low impedance source, using at least a 12dB/octave (preferably 24dB/octave) high pass filter (not shown). The filter is needed to remove infrasonic frequencies. The negative output impedance is determined by the value used for R4. In this case, the transformer's winding resistance is 56Ω, so R4 would in theory also be 56Ω. To prevent the possibility of infinite gain, this was reduced to 51Ω. If a series coupling capacitor is used it creates resonance, and the circuit will be unstable at the resonant frequency - any transient will generate a possibly large infrasonic disturbance at the input of the transformer. With no coupling cap, the DC Offset control is used to ensure that there is no DC across the transformer (DC gain is very high!).
The inclusion of an input filter is essential, as it reduces the input level at frequencies where the NIC will attempt infinite (or at least extremely high) gain. The only sensible way to tame the circuit is to ensure that the negative impedance is somewhat less than the worst case positive impedance. While you sacrifice some of the benefit of the NIC by making its impedance less than the optimum, the circuit will be far less troublesome if R4 is made 51 or 47Ω instead of 56Ω.
Figure 7 - Negative Impedance Driver #2
Figure 7 shows an alternative NIC circuit. This version has the advantage that it can never have infinite gain at DC, and doesn't need a very high value capacitor. However, it is still not without problems. The input capacitor (C1) must be chosen carefully to roll off the output at a sensible frequency, and C2 must be at least 10 times the value expected (based on the standard capacitive reactance formula). The -3dB frequency with 10k and 22µF is 0.7Hz. Depending on the opamp, this circuit might still require a DC offset control because the transformer itself is DC coupled, although measured offset was very low during tests. The transformer primary is wired in reverse, because the circuit is inverting.
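The 0.7Hz figure quoted for the 10k/22µF combination follows from the standard first-order RC corner frequency formula:

```python
from math import pi

def highpass_corner(r_ohm, c_farad):
    """-3 dB frequency of a first-order RC high-pass: f = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * pi * r_ohm * c_farad)

print(f"{highpass_corner(10e3, 22e-6):.2f} Hz")  # 0.72 Hz
```

As the text notes, C2 needs to be much larger than this simple calculation suggests, because the NIC feedback path does not see a plain resistive load.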
In most respects this circuit is an improvement over that shown in Figure 6, and while the difference is largely academic it can handle 'real world' variations. This is the circuit that I would use in any real application, because it has unity gain at DC. C1 is absolutely essential, and the value is dependent on the transformer. C2 isn't quite so important, but it does need to be much bigger than you might think. With the other values as shown, there is no benefit to be gained by making it larger than 22µF as shown. C1 and C2 are shown as bipolar (non-polarised) electrolytics, but standard electros can generally be used with no ill effects. Using this driver with transformer #1, I was able to get 500mV at 20Hz with only about 0.8% distortion - very similar to what was seen with the Figure 6 circuit.

While this version needs two extra capacitors, a DC offset control is not necessary (it's mandatory with Driver #1). Performance is otherwise equivalent, although after testing both circuits, I recommend this one. A major benefit is that it can never have high gain at DC or very low infrasonic frequencies, which means the high pass filter isn't as critical. On the negative side, the transformer's primary is floating, with neither end of the winding connected to earth/ground.
Figure 8 - Negative Impedance Driver Output & Transformer Output
The above is a direct capture from my oscilloscope, and used the circuit shown in Figure 6. The NIC output is shown on the left, and is quite obviously very distorted. The right-hand capture shows the output and an FFT showing the harmonics. Distortion at the transformer output is 3.8%, vs. 14% if the transformer is driven directly from the audio generator. The frequency is 20Hz, and the input level is 600mV. With 500mV input, distortion is around 0.8% at 20Hz.
The input waveform is distorted, and the distortion (almost) exactly compensates for the saturation effects in the transformer that would otherwise cause the output to be distorted. Low frequency response is theoretically flat to only a few Hertz, but in reality the opamp will clip should the transformer magnetising current become too high. This happened with my test transformer (#1) at about 13Hz with a signal level of 600mV RMS. At 30Hz, output distortion is reduced from 5% to only 0.25%, which is pretty good for a cheap and nasty little transformer.
Figure 9 - Negative Impedance Driver Output & Error Signal
The above was captured using test transformer #1, and with the Figure 7 driver circuit. At a frequency of 18Hz (500mV), the transformer core is saturating, but is just below the limits where the drive circuit cannot compensate. The drive waveform is highly distorted (around 16% THD), and the error signal is developed across R4. Transformer output distortion is 1.6%, but it would also be 16% or more if the transformer were driven from a normal voltage source. While this all looks very promising, intermodulation distortion can be expected to be much higher than you would get from a better transformer.
With any NIC, should you be tempted to make the output impedance exactly the same as the winding resistance, you will get the effect of close to infinite gain. In all cases, I recommend that the negative output impedance should be about 10% less than the transformer's primary winding resistance. If the negative impedance is greater (more negative) than the winding resistance, the circuit may oscillate at some infrasonic frequency, and will be very unstable when subjected to transients (such as tone burst signals, which I tested).

I ran the same test with transformer #2, but with the output impedance reduced to -33Ω to suit the transformer's winding resistance. With an input voltage of 7V RMS at 20Hz and a 50Ω source (my generator), the transformer had 0.45% distortion. That was reduced to only 0.064% using negative impedance drive. So, while negative impedance certainly works exactly as expected, the level and minimum frequency must be tightly controlled or bad things will happen. This means a very good high pass filter, as well as 'de-tuning' the circuit to prevent excessive gain at very low frequencies where the inductance has little effect.

Another thing that you need to be aware of ... the drive circuit has to be able to provide the current required by the load, plus the non-linear current required by the transformer as it enters saturation. Many opamps will be unable to manage without creating considerable distortion themselves. Some small IC power amps can be used, but most are not unity gain stable so are eliminated. Output stage protection is a must, or the drive circuit may be destroyed by the first infrasonic 'event' it encounters.
In most cases, it will be far less risky to use a traditional low impedance drive circuit with a high-quality transformer than to attempt negative impedance. Performance can be very good, but the transformer will be expensive - expect to pay AU$50 - AU$100 or more for a transformer from a reputable supplier. Even then, performance may not be quite as good at low frequencies, but you are also far less likely to create ongoing problems. For test transformer #2 the distortion reduction was theoretically worthwhile, but reducing distortion from 0.45% to 0.064% at 20Hz (7V RMS or +17dBV) may not really be worth the effort - especially since there is almost no energy at that frequency with most programme material. At sensible operating voltage at low frequencies, the distortion will be negligible.
While a decent audio transformer will have very little influence on the overall sound quality with normal programme material, this is definitely not the case if the core saturates. For this reason, no audio transformer should ever be allowed to have any drive voltage at a frequency where the core enters saturation. If this is allowed to happen, there will be severe intermodulation of all signals. While you might be quite sure that you'll never have any infrasonic energy in your programme material, it's usually present anyway. Whether you are using negative impedance drive or not, it's always wise to include a good high-pass filter. The frequency should normally be set so that the transformer core cannot saturate at any signal level within the capability of the equipment, and the rolloff should be a minimum of 12dB/octave (second order). Better protection is obtained with a higher order (24dB/octave is usually enough).
Figure 10 - Infrasonic Disturbance Created By Negative Impedance
For example, the above waveforms were captured using a 30Hz tone-burst signal. My oscillator certainly doesn't create any disturbance, so the low frequency effects you can see are the direct result of using a negative impedance drive circuit without an effective infrasonic filter. Given that the frequency is so low, you might imagine that it cannot get through the transformer, but you would be mistaken. C1 was 10µF rather than the suggested 2.2µF so the result is exaggerated, but it's still something you need to be aware of. The simple act of turning a signal on and off creates some infrasonic energy, and the same applies to music signals that start and stop or have an asymmetrical waveform.
Figure 11 - Simulated Infrasonic Disturbance With Negative Impedance
The above is a simulation (again using a 10µF cap for C1), and exactly the same issue is apparent. The waveform shown is at the output of the drive opamp. When C1 is reduced to 2.2µF the effect is reduced, but is not eliminated. This proves conclusively that the effect is both real and easily demonstrated by either method. So, the requirement for an infrasonic filter is very real indeed, and it also provides additional benefits.
Once a transformer core saturates, it's much like a clipping amplifier - parts of the input signal are removed, including other frequencies that are presented along with the one causing saturation. As demonstrated above, the frequency that causes saturation may not even be audible, and there are many potential sources of such low frequencies. Many will be transient and/or intermittent, and can cause problems that can be very hard to track down if you are unaware of the possibilities. Because the transformer has a finite inductance, the load on the drive amplifier increases rapidly when the core saturates, so the drive amp will either go into current limiting (clipping) or it may fail if it is not protected against short circuits.

It's obvious that DC is also capable of causing saturation, and even a hundred millivolts of DC will usually cause gross distortion at low frequencies. As the frequency is reduced, the amplitude needed to cause partial saturation is reduced as well. If a 5V signal at 40Hz causes (say) 10% distortion, then only 2.5V is needed at 20Hz or 1.25V at 10Hz to do the same. As noted earlier, it's a good idea to include a high pass filter in front of the amplifier that drives any audio transformer, with the cutoff frequency selected to prevent significant distortion at any level below drive amplifier clipping. For example, if a transformer shows 5% distortion with 4V input at 35Hz, then you might want to restrict the maximum input level to 4V and add a 12dB/octave filter tuned to no lower than 35Hz.

This arrangement will ensure that the transformer cannot be driven beyond the amplitude or frequency where significant saturation occurs. It is quite true (as discussed in Section 1) that the frequency-dependent nature of transformer distortion creates fewer problems than a similar amount of distortion from an electronic circuit. However, this assumes that the transformer is never driven into saturation. The distortion produced is extremely unpleasant and can be very audible indeed.
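The halving-with-frequency rule described above follows because peak core flux is proportional to V/f, so the level that produces a given degree of saturation scales directly with frequency. A small sketch using the example figures from the text (5V at 40Hz):

```python
def equal_flux_level(v_ref, f_ref, f):
    """Signal level at frequency f that produces the same peak core flux as
    v_ref at f_ref. Flux is proportional to V/f, so the allowable level
    scales linearly with frequency."""
    return v_ref * f / f_ref

for f in (40.0, 20.0, 10.0):
    print(f"{f:>4.0f} Hz -> {equal_flux_level(5.0, 40.0, f):.2f} V")
# 40 Hz -> 5.00 V, 20 Hz -> 2.50 V, 10 Hz -> 1.25 V
```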
In general, impedance matching is neither recommended nor required. If you plan to send your balanced audio feed into a kilometre (or more) long line then yes, match the impedances, but otherwise don't even think about it. This is one of the oldest myths with audio equipment, and has created confusion for people for a very long time. Part of the problem is that many equipment manufacturers claim that the inputs are '600Ω'. In the majority of cases that's simply the nominal source impedance, not the input impedance.

If the source and load are both 600Ω (or any other equal impedance), there will be a loss of 6dB because a simple voltage divider is created. Matched impedances are necessary for very long lines (such as telephone systems) and for maximum power transfer as found with RF (radio frequency) installations. In the vast majority of audio installations, we are only concerned with transferring a signal voltage from one piece of equipment to another. The load impedance is almost always at least 10 times as great as the source impedance - a connection known as a 'bridging' load. Several such loads can be connected across the line without significant signal loss.

Even with microphone preamps, it's standard practice to make the input impedance of the preamp at least 2.2k, and often higher. Failure to observe this rule will result in a significant loss of signal level and an increase of noise because more gain is needed to account for the reduced signal. The same applies to 'line' outputs and inputs. While a transformer may be classified as being '600Ω', its actual output impedance is likely to be less than 200Ω, with much of that being the winding resistance of the transformer itself. Line inputs will normally be expected to have an impedance of 10k or more.
That means that even if the output impedance from the transformer is 600Ω (it will usually be a lot less), a 10k load only means a loss of 0.5dB. If the load were 600Ω, the loss is a full 6dB (half the voltage). With a more typical 200Ω output impedance, the signal loss is only 0.17dB (980mV from a 1V source voltage). This is a linear relationship, and it applies for any signal voltage below saturation, regardless of whether the signal is at -20 or +20dBV.
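These losses are just the voltage divider expressed in dB, so the figures quoted above are easy to verify:

```python
from math import log10

def bridging_loss_db(z_source, z_load):
    """Level loss in dB when a source of impedance z_source drives z_load:
    20*log10(z_load / (z_load + z_source))."""
    return 20.0 * log10(z_load / (z_load + z_source))

print(f"{bridging_loss_db(600.0, 600.0):.2f} dB")  # -6.02 dB (matched)
print(f"{bridging_loss_db(600.0, 10e3):.2f} dB")   # -0.51 dB (bridging)
print(f"{bridging_loss_db(200.0, 10e3):.2f} dB")   # -0.17 dB
```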
If you were to use the negative impedance option to drive the transformer, its output impedance is mainly the resistance of the secondary winding. This reduces the loss to almost nothing. For example, a 10k load on test transformer #1 causes a loss of 0.06dB when it's driven from a negative impedance source, because the primary resistance is cancelled by the -50Ω drive circuit shown in Figure 6.

Impedance matching does have one very minor advantage - the input voltage before saturation is increased slightly, or you can get down to a slightly lower frequency before saturation occurs. This is due to the resistance of the copper winding in the transformer's primary. The effect is small, and it is not recommended that you try to use this method to do anything useful. At 600mV/30Hz input (xfmr #1) I measured an output voltage of 557mV at 2.9% THD (almost completely 3rd harmonic). With a 560Ω load, the output fell 2dB (434mV) and the distortion was reduced to 2.0%. So, for a level reduction of 2dB, there was a 3.2dB improvement in distortion. While interesting, this isn't worth the effort. Simply reducing the input level to 500mV and operating the transformer with a high impedance on the secondary provided more voltage (463mV) and roughly the same distortion (2.2%).
A question that's sure to cross your mind is "can I use a mains transformer for audio?". The answer is "yes ... but", as it comes with many caveats. I tested a small (about 10VA) mains transformer, and with a 7V input it gave about 650mV output (nominally 230V to 22V output). Distortion performance was generally pretty awful, even at relatively high frequencies (400Hz and 1kHz). The distortion should have been negligible, but it wasn't, and it showed about 0.2% at both frequencies. A decent audio transformer would show less than a tenth of that (< 0.02%). Despite running the transformer with a fraction of its rated voltage, low frequency performance was poor, with easily visible distortion on my scope at less than 40Hz.

This is less than inspiring, and is largely due to the nature of the laminations used (copper wire doesn't cause distortion), and the expectation that there will always be a fairly significant amount of core saturation in use (as a mains transformer). Audio transformers use low-loss cores, but that's not the case for a mains transformer. So, while you can use a mains transformer for line-level audio, it's not recommended. Toroidal transformers are better in this respect, but that's an expensive way to get a signal transformer that will be far bigger than an equivalent 'audio transformer'.

Of course, transformers are used in valve (vacuum tube) amplifiers, occasionally for inter-stage (coupling) but almost always for the speaker output. Even when every care is taken with the design, the transformer will nearly always contribute some distortion, but it's usually not noticed because the valves create more distortion than the transformer. Negative feedback reduces both distortions, and is used in most (but not all) valve amps. The art of designing a good output transformer is becoming lost, which is a shame.

Just for interest's sake, I tested a 200VA toroidal transformer as well. It's quite apparent that it's not sensible to consider such a thing in an installation, but its performance was very good. Distortion at 30Hz with 1V output was 0.05%, and it's unlikely that many transformers would beat that. However, it's big and weighs almost two kilograms. A pair of those would work nicely for stereo, but somehow I expect that to be an unlikely solution.
Transformers are always interesting, despite their apparent simplicity. For anyone who hasn't already done so, I recommend that you read the Beginners' Guide to Transformers. There are three sections, mainly dealing with power transformers but also covering general principles. Most people don't really think deeply about transformers in any of their applications, but they are by far the most fascinating of all the passive components. There are also a lot of myths and misunderstandings, many of which show that the writers completely fail to understand the basic principles. It appears that part of my job is to dispel as many myths as I can.
Using resonance to obtain a bit of low frequency boost is not something I've seen discussed, and this is a technique that can be used if necessary. You also won't find much info about the interaction of capacitors and transformers (unwanted resonance), and the required capacitance and optional series resistance can only be determined after careful measurement and some experimentation.
The circuit you use to drive the transformer must be capable of supplying enough current to feed the load and the transformer's magnetising current. This becomes more critical at low frequencies. If you need an output level of (say) +24dBV (just under 16V RMS), you will need either a small power amp IC or a step-up transformer because most opamps can't be operated at ±25V or more. Don't expect to get more than 10mA peak from most opamps (although some can provide up to 25mA peaks), and be prepared to engineer the drive circuit carefully or you won't get the performance you expect.

Using a negative impedance driver can improve performance dramatically, but it does come with serious caveats. It will be necessary to test the complete system very carefully to ensure that it can never become unstable with any frequency, amplitude or load. It's worthwhile doing a Web search on the topic if you are interested, as there is some information available. However, nothing I have seen mentions the unstable regions or gives any warning at all that bad things can happen once the drive circuit cannot handle the combination of the load and the transformer's saturation current. Most articles seem to assume that the negative impedance circuit will be directly connected to the transformer (no series capacitor), but don't offer any info on how to remove DC created by the NIC itself, nor do they warn you that a NIC can have very high gain at DC.

In general, I recommend that you select a transformer that is designed for your application, and use a low impedance (not negative impedance) source. While the use of a series capacitor is usually a good idea to prevent any DC in the windings, make sure that you test it thoroughly to ensure that resonance is well below the lowest frequency of interest. Ideally, the drive circuit will include a high pass filter to prevent any infrasonic frequencies from reaching the transformer.

As a final check, I did a listening test with transformer #1 and the Figure 7 negative impedance driver. The average voltage was around 1V RMS from an FM tuner, and the error signal (across R4) showed a surprising amount of activity with most of the music that was playing at the time. While I thought I could hear a difference between 'traditional' voltage drive and the NIC, I couldn't be certain. If there was a difference it was rather subtle, but the sound did seem a little cleaner with the NIC, especially with bass-heavy material.

However, it's far easier to reduce the level a little to reduce distortion to be within acceptable limits. That is a simpler method, and doesn't require messing around with negative impedance and the subtle problems it can create. Remember that it's always a good idea to include a capacitor in series with the primary, but make sure it's large enough so it doesn't create a series resonant circuit at any frequency above ~10Hz or so, and that there is no bass boost as a result. This needs to be measured so you can be sure. I consider the inclusion of a high-pass filter before the transformer drive circuit to be mandatory, whether a NIC is used or not.
The rest is up to you - experiment further as you see fit.
1. Audio Transformers - Bill Whitlock (Jensen Transformers)
2. A262A2E Audio Transformer - Walters Group Holdings Ltd.
3. Low-Distortion Transformer-Coupled Circuit - US Patent US4614914A
4. Audio Transformer Design Manual - Robert G Wolpert
Elliott Sound Products - Balanced Interfaces
Firstly, I'd like to thank Bill Whitlock for giving permission to re-publish this material. There is a great deal of confusion, disinformation and unmitigated nonsense on the Net when it comes to any discussion of balanced systems. The following material unashamedly recommends Jensen transformers and the THAT Corporation's InGenius® IC that was patented by Bill, and provides far better performance in critical applications than any of the standard active balanced receivers.
The remainder of the material (which is copious) covers the principles involved in great detail. It is important to understand that one of the biggest issues with any balanced connection is the so-called 'Pin 1 Problem', where noise is injected into the equipment circuitry from the cable shield. As noted within the article, randomly disconnecting one end or the other of a cable's shield is almost always a bad idea - the problem must be solved within the equipment. Disconnecting the mains safety earth is always a bad idea. It is provided to ensure safety, and is especially important with 230V (50Hz) mains.

While Bill's material was originally dedicated to 120V 60Hz systems, where necessary I have included the relevant information for 230V 50Hz countries. Bear in mind that UL certification has no meaning outside the US and Canada, and local regulations can be quite variable. Many countries (including Australia) now follow the European (IEC) regulations fairly closely, so if in doubt, you must verify that what you intend to do is legal where you live.

As noted within the text, the 600Ω line came from the early telephone systems - as did a vast amount of the technology that we now take for granted. Telephone engineering was at the very forefront of early electronics, and much as we love to hate telephone companies, we owe the early pioneers a great deal for their contributions to audio. While not relevant to this article, it's worth noting that modern 'phone lines are no longer considered to be a nice resistive 600 ohms in most countries. Various 'complex' impedance models are now used instead, because they resemble an actual line more accurately than a simple resistive impedance. Impedance matching (to the 'new' complex impedance) and longitudinal balance are just as important as ever for analogue telephone lines that are extended to millions of households from local exchanges (central offices) throughout the world.

One point needs to be made, and that is the correct wiring for an XLR or stereo phone plug (TRS - tip, ring and sleeve). The proper connections are shown below, and while there have been deviations they are essentially an abomination. The 'Pin 3 Hot' technique was used by some US manufacturers for unbalanced inputs, simply to save time! A single bus was used to bridge pins 1 and 2 along the length of unbalanced inputs, because it was easy (therefore fast and cheap). It was a bad idea then, and it's still a bad idea. Pin 2 is 'Hot' - end of story!
+ +
XLR And TRS (Tip, Ring & Sleeve) Connections
I haven't shown the 'TRRS' (tip, ring, ring & sleeve) because it's generally only used for mobile (cell) phones and some tablets and/or other consumer devices. While the sleeve is supposed to be earth/ ground, a certain company (that makes iThings) managed to stuff that up, by deciding that the sleeve would be for the mic connection. A seriously bad idea, but others had to follow suit so headsets and the like would be compatible. Others are also guilty of making changes that were neither necessary nor desirable (some video cameras for example). Unfortunately, when 'marketing' gets a say in design, the result is very often an abomination!
The text below is close to verbatim - metric measurements have been added where necessary. All diagrams have been re-drawn to reflect normal ESP styles and to reduce image size, but are otherwise unchanged. Where additional comments have been made, these are indented, in italics, and marked with a small ESP logo at the end.
A copy of Bill's original PDF, from which this material was taken, can be downloaded from the Jensen Transformers website, along with details of the transformer range and other material.

Please note: the earth (ground) symbol used in the diagrams below is different from that shown in Bill's original PDF, but it has exactly the same meaning. There is no consensus on the 'correct' symbols for earth/ ground/ chassis, and several different interpretations are to be found with only a cursory search. In all drawings, the earth symbol used indicates the common or 'zero voltage' point of the circuit. This may or may not be connected to protective earth (known as 'earth ground' in the US) and may or may not be connected to the chassis. Bill's original drawings are no different in this respect. I have had one (yes, only one) complaint that the symbols I used are wrong, which I dispute. The 'triangles' used in the original drawings are used to indicate the common, and are also commonly reserved for distinction between analogue and digital earth/ ground points, and often have an 'A' or 'D' within to indicate the difference. However, there are no 'universally' accepted symbols - with the possible exception of the earth symbol shown in the drawings herein, but surrounded by a circle. This means 'protective earth' - i.e. the earth pin on a mains plug or receptacle.
High signal-to-noise ratio is an important goal for most audio systems. However, AC power connections unavoidably create ground voltage differences, magnetic fields, and electric fields. Balanced interfaces, in theory, are totally immune to such interference. For 50 years, virtually all audio equipment used transformers at its balanced inputs and outputs. Their high noise rejection was taken for granted and the reason for it all but forgotten. The transformer's extremely high common-mode impedance - about a thousand times that of its solid-state 'equivalents' - is the reason. Traditional input stages will be discussed and compared. A novel IC that compares favourably to the best transformers will be described. Widespread misunderstanding of the meaning of 'balance', as well as the underlying theory, has resulted in all-too-common design mistakes in modern equipment and seriously flawed testing methods. Therefore, noise rejection in today's real-world systems is often inadequate or marginal. Other topics will include tradeoffs in output stage design, effects of non-ideal cables, and the 'pin 1 problem'.

The task of transferring an analogue audio signal from one system component to another while avoiding audible contamination is anything but trivial. The dynamic range of a system is the ratio, generally measured in dB, of its maximum undistorted output signal to its residual output noise or noise floor. Fielder has shown that up to 120dB of dynamic range may be required in high-performance sound systems in typical homes [1]. The trend in professional audio systems is toward increasing dynamic range, fueled largely by increasing resolution in available digital converters. Analogue signals accumulate noise as they flow through system equipment and cables. Once noise is added to a signal, it's essentially impossible to remove it without altering or degrading the original signal.

Therefore, noise and interference must be prevented along the entire signal path. Of course, a predictable amount of random or 'white' noise, sometimes called 'the eternal hiss', is inherent in all electronic devices and must be expected. Excess random noise is generally a gain structure problem, a topic beyond the scope of this paper.

Ground noise, usually heard as hum, buzz, clicks or pops in audio signals, is generally the most noticeable and irritating - in fact, even if its level is significantly lower than the background hiss, it can still be heard by listeners. Ground noise is caused by ground voltage differences between the system components. Most systems consist of at least two devices which operate on utility AC power. Although hum, buzz, clicks, and pops are often blamed on 'improper grounding', in most cases there is actually nothing improper about the system grounding. To assure safety, all user accessible connections and the equipment enclosure must be connected to the safety ground conductor of the AC power system. A properly installed, fully code-compliant AC power distribution system will develop small, entirely safe voltage differences between the safety grounds of all outlets. In general, the lowest voltage differences (generally under 10 millivolts) will exist between physically close outlets on the same branch circuit, and the highest (up to several volts) will exist between physically distant outlets on different branch circuits. These normally insignificant voltages cause problems only when they exist between vulnerable points in a system - a situation more accurately described as unfortunate than improper. Users who don't understand its purpose will often defeat equipment safety grounding - a practice that is both illegal and extremely dangerous.
Safety must take precedence over all other considerations!
Although UL-approved (or other country specific approval) equipment supplied with a 2-prong power cord is safe, its normal leakage current can also create troublesome ground voltage differences. This topic, as well as unbalanced interfaces, is also beyond the scope of this paper.

Ground noise is very often the most serious problem in an audio system. As Bruce Hofer wrote: "Many engineers and contractors have learned from experience that there are far more audible problems in the real world than failing to achieve 0.001% residual distortion specs or DC-to-light frequency response." [2]. Carefully designed and executed system grounding schemes can reduce ground voltage differences somewhat, but cannot totally eliminate them. The use of 'balanced' line drivers, shielded 'balanced' twisted-pair cables, and 'balanced' line receivers is a long standing practice in professional audio systems. It is tantalising to assume that the use of 'balanced' outputs, cables, and inputs can be relied upon to virtually eliminate such noise contamination. In theory, it is a perfect solution to the ground noise problem, but very important details of reducing the theory to practice are widely misunderstood by most equipment designers. Therefore, the equipment they design may work perfectly on the test bench, but become an annoying headache when connected into a system. Many designers, as well as installers and users, believe grounding and interfacing is a 'black art'. College electrical engineering courses rarely even mention practical issues of grounding.

It's no wonder that myth and misinformation have become epidemic!

The purpose of a balanced audio interface is to efficiently transfer signal voltage from driver to receiver while rejecting ground noise. Used with suitable cables, the interface can also reject interference caused by external electric and magnetic fields acting on the cable. The true nature of balanced interfaces is widely misunderstood. For example: "Each conductor is always equal in voltage but opposite in polarity to the other. The circuit that receives this signal in the mixer is called a differential amplifier and this opposing polarity of the conductors is essential for its operation." [3]. This, like many explanations in print (some in otherwise respectable books), describes signal symmetry - "equal in voltage but opposite in polarity" - but fails to even mention the single most important feature of a balanced interface.

SIGNAL SYMMETRY HAS ABSOLUTELY NOTHING TO DO WITH NOISE REJECTION - IMPEDANCE IS WHAT MATTERS!

A good, accurate definition is "A balanced circuit is a two-conductor circuit in which both conductors and all circuits connected to them have the same impedance with respect to ground and to all other conductors. The purpose of balancing is to make the noise pickup equal in both conductors, in which case it will be a common-mode signal which can be made to cancel out in the load." [4].

The impedances, with respect to ground, of the two lines are what define an interface as balanced or unbalanced.

In an unbalanced interface, one line is grounded, making its impedance zero. In a balanced interface, the two lines have equal impedance. It's also important to understand that line impedances are affected by everything connected to them. This includes the line driver, the line or cable itself, and the line receiver. The line receiver uses a differential amplifier to reject common-mode voltages. The IEEE Dictionary defines a differential amplifier as "an amplifier that produces an output only in response to a potential difference between its input terminals (differential-mode signal) and in which output due to common-mode interference voltages on both its input terminals is suppressed" [5]. Since transformers have intrinsic differential response, any amplifier preceded by an appropriate transformer becomes a differential amplifier.
Figure 1 - Basic Differential Interface Circuit
The basic theory of the balanced interface is straightforward. (For purposes of this discussion, assume that the ground reference of Device A has a noise voltage, which we will call 'ground noise', with respect to the Device B ground reference.) If we look at the HI and LO inputs of Device B with respect to its ground reference, we see audio signals (if present) plus the ground noise. If the voltage dividers consisting of ZO/2 and ZCM on each of the lines have identical ratios, we'll see identical noise voltages at the two inputs of Device B.

Since there is no difference in the two noise voltages, the differential amplifier has no output and the noise is said to be rejected. Since the audio signal from Device A generates a voltage difference between the Device B inputs, it appears at the output of the differential amplifier. Therefore, we can completely reject the ground noise if the voltage divider ratios are perfectly matched. In the real world, we can't perfectly match the voltage dividers to get infinite rejection. But if we want 120dB of rejection, for example, we must match them to within 0.0001% or 1 part per million!
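The matching requirement quoted above can be sanity-checked numerically. In this deliberately simplified sketch (my own, not from the paper), the interface CMRR is limited only by the fractional mismatch between the two ZO/2 + ZCM voltage dividers, with unity differential gain assumed:

```python
import math

def cmrr_db(ratio_mismatch):
    # Rejection limited purely by the fractional mismatch between
    # the two voltage dividers (receiver differential gain = 1).
    return 20 * math.log10(1.0 / ratio_mismatch)

print(cmrr_db(1e-6))  # 1 part per million -> 120 dB
print(cmrr_db(1e-4))  # 0.01% -> 80 dB
```

As the text says, 1 ppm of divider mismatch corresponds to 120dB of rejection; even a 0.01% mismatch limits the interface to 80dB.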
Figure 2 - Equivalent Circuit of Basic Differential Interface Circuit
The ground noise received from Device A, since it exists on or is common to both wires, is called the common-mode voltage, and the differential amplifier provides common-mode rejection. The ratio of the differential or normal-mode (signal) gain to the common-mode (ground noise) gain of the interface is called the common-mode rejection ratio or CMRR (called 'longitudinal balance' by telephone engineers) and is usually expressed in dB. There is an excellent treatment of this subject in Morrison's book [6].

If we re-draw the interface as shown in Figure 2, it takes the familiar form of a Wheatstone bridge. The ground noise is the 'excitation' for the bridge, and is represented as VCM (common-mode voltage). The common-mode impedances of the line driver and receiver are represented by RCM+ and RCM-.

When the + and - arms have identical ratios, the bridge is 'nulled' and zero voltage difference exists between the lines - infinite common-mode rejection. If the impedance ratios of the two arms are imperfectly matched, mode conversion occurs. Some of the ground noise now appears across the line as noise.
The bridge is most sensitive to small fractional impedance changes in one of its arms when all arms have the same impedance [7]. It is least sensitive when upper and lower arms have widely differing impedances. For example, if the lower arms have infinite impedance, no voltage difference can be developed across the line, regardless of the mismatch severity in the upper arm impedances. A similar scenario occurs if the upper arms have zero impedance. Therefore, we can minimise CMRR degradation due to normal component tolerances by making common-mode impedances very low at one end of the line and very high at the other [8]. The output impedances of virtually all real line drivers are determined by series resistors (and often coupling capacitors) that typically have ±5% tolerances. Therefore, typical line drivers can have output impedance imbalances in the vicinity of 10 ohms. The common-mode input impedances of conventional line receivers are in the 10k to 50k ohm range, making their CMRR exquisitely sensitive to normal component tolerances in line drivers. For example, the CMRR of the widely used SSM-2141 will degrade some 25 dB with only a 1 ohm imbalance in the line driver.
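A simple divider model (my own illustration, with an assumed 50 ohm nominal source resistance per leg) shows why a 20 k ohm receiver is so sensitive to small imbalances, while a very high common-mode input impedance is not:

```python
import math

def interface_cmrr_db(z_in, z_s, delta_z):
    # Noise divider on each leg: z_in / (z_in + z_s +/- delta_z/2).
    # Rejection is set by the difference between the two dividers.
    hi = z_in / (z_in + z_s + delta_z / 2)
    lo = z_in / (z_in + z_s - delta_z / 2)
    return 20 * math.log10(1.0 / abs(hi - lo))

# conventional active receiver (20 k ohm CM input Z), 1 ohm imbalance
print(round(interface_cmrr_db(20e3, 50.0, 1.0)))   # ~ 86 dB
# very high-Z receiver (~50 M ohm CM input Z), even with 10 ohm imbalance
print(round(interface_cmrr_db(50e6, 50.0, 10.0)))  # ~ 134 dB
```

The first-order model is only a sketch, but it captures the thousand-fold difference in sensitivity between the two approaches.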
Line receivers using input transformers (or the InGenius® IC discussed later) are essentially unaffected by imbalances as high as several hundred ohms, because their common-mode input impedances are around 50 M ohms - over 1000 times that of conventional 'active' receivers.

Note that this discussion has barely mentioned the audio signal itself. The mechanism that allows noise to enter the signal path works whether an audio signal is present or not. Only balanced impedances of the lines stop it - signal symmetry is irrelevant. When subtracted (in the differential amplifier), asymmetrical signals: +1 minus 0 or 0 minus -1 produce exactly the same output as symmetrical signals: +0.5 minus -0.5. This issue was neatly summarised in the following excerpt from the informative annex of IEC Standard 60268-3:

    "Therefore, only the common-mode impedance balance of the driver, line, and receiver play a role in noise or interference rejection. This noise or interference rejection property is independent of the presence of a desired differential signal. Therefore, it can make no difference whether the desired signal exists entirely on one line, as a greater voltage on one line than the other, or as equal voltages on both of them. Symmetry of the desired signal has advantages, but they concern headroom and crosstalk, not noise or interference rejection."
The first widespread users of balanced circuits were the early telephone companies. Their earliest systems had no amplifiers, yet needed to deliver maximum audio power from one telephone to another up to 32km (20 miles) away. It's well known that, with a signal source of a given impedance, maximum power will be delivered to a load with the same, or matched, impedance. It is also well known that 'reflections' and 'standing waves' will occur in a transmission line unless both ends are terminated in the line's characteristic impedance. Because signal propagation time through over 30km of line is a significant fraction of a signal cycle at the highest signal frequency, equipment at each end needed to match the line's characteristic impedance to avoid frequency response errors due to reflections. Telegraph companies used a vast network with a huge installed base of open wire pair transmission lines strung along wooden poles. Early telephone companies arranged to use these lines rather than install their own. Typical lines used #6 AWG wire at 12 inch (305mm) spacing, and the characteristic impedance was about 600 ohms, varying by about ±10% for commonly used variations in wire size and spacing [9]. Therefore 600 ohms became the standard impedance for these balanced duplex (bi-directional) wire pairs, and subsequently most telephone equipment in general.
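The 600 ohm figure can be cross-checked with the standard approximation for the characteristic impedance of an open-wire pair, Z0 ≈ 276 × log10(2D/d). The wire diameter (4.115mm for #6 AWG) is my assumption; the 12 inch spacing is from the text:

```python
import math

def open_wire_z0(spacing_mm, dia_mm):
    # Z0 ~= 276 * log10(2D/d), valid when the spacing D is much
    # greater than the wire diameter d.
    return 276 * math.log10(2 * spacing_mm / dia_mm)

# 12 inch (304.8 mm) spacing, #6 AWG (4.115 mm diameter) wire
print(round(open_wire_z0(304.8, 4.115)))  # ~ 599 ohms
```

The result lands almost exactly on the 600 ohms quoted, which is presumably no coincidence.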
Not only did these lines need to reject ground voltage differences, but the lines also needed to reject electric and magnetic field interference created by AC power lines, which frequently ran parallel to the phone lines for miles. Balanced and impedance matched transmission lines were clearly necessary for acceptable operation of the early telephone system. Later, to make 'long distance' calls possible, it was necessary to separate the duplexed send/receive signals for unidirectional amplification. The passive 'telephone hybrid' was used for the purpose, and its proper operation depends critically on matched 600 ohm source and load impedances. Telephone equipment and practices eventually found their way into radio broadcasting and, later, into recording and professional audio - hence, the pervasive 600 ohm impedance specification.
Figure 3 - Impedance Matched Source and Destination Circuits
In professional audio, however, the goal of the signal transmission system is to deliver maximum voltage, not maximum power. To do this, devices need low differential (signal) output impedances and high differential (signal) input impedances. This practice is the subject of a 1978 IEC standard requiring output impedances to be 50 ohms or less and input impedances to be 10k or more [10].

Sometimes called 'voltage matching', it minimises the effects of cable capacitance and also allows an output to simultaneously drive multiple inputs with minimal level losses. With rare exceptions, such as telephone equipment interfaces, the use of matched 600 ohm sources and loads in modern audio systems is simply unnecessary and compromises performance.

    Where voltage matching techniques are used in (analogue) telecommunications systems, it is referred to as 'bridging'. The high impedance balanced load is bridged across the 'phone line, allowing signal capture only. This technique is not especially common, but is used for line monitoring. It is expected that the telephone line is properly terminated at both ends when bridging is used.
Since the performance of the differential line receiver is the most important determinant of system CMRR performance, and can, in fact, reduce the effects of other degradation mechanisms, we'll discuss it first. There are two basic types of differential amplifiers: active circuits and transformers. Active circuits are made of op-amps and precision resistor networks to perform algebraic subtraction of the two input signals. The transformer is an inherently differential device which provides electrical isolation of input and output signals.

The active differential amplifier, sometimes called an 'actively balanced input', is realisable in several circuit topologies. These circuits are well known and have been analysed and compared in some detail by others [11, 12, 13, 14].

In our discussion here, we will assume that op-amps, resistors, and resistor ratios are ideal and not a source of error. The following schematics are four popular versions in their most basic form, stripped of AC coupling, RFI filtering, etc. Because the common-mode input impedances, from either input to ground [15], are all 20 k ohms, these four circuits have identical CMRR performance. Even when perfectly matched, these impedances are the downfall of this approach. To quote Morrison: "many devices may be differential in character but not all are applicable in solving the basic instrumentation problem" [16].
Figure 4 - Common Actively Balanced Receiver Circuits
The following graph shows the extreme sensitivity of 60 Hz CMRR vs source impedance imbalance for these circuits. These circuits are almost always tested and specified with either perfectly balanced sources or shorted inputs. In real equipment, imbalances commonly range from 0.2 ohm to 20 ohms, resulting in real-world interface CMRR that's far less than that advertised for the line receiver.
Figure 5 - CMRR vs. Input Source Imbalance in Percent and Ohms
There are other problems:
Figure 6 - 'Traditional' Balanced Input Circuit Analysis
An audio transformer couples a signal magnetically while maintaining electrical (aka galvanic) isolation between input and output. It is an inherently differential device, requiring no trimming, and its differential properties are stable for life. The next figure shows a circuit simulation model for a Jensen JT-10KB-D line input transformer.

Figure 7 - Jensen JT-10KB-D Circuit Model (Resistance in Ohms, Capacitance in Farads, Inductance in Henrys)
Its common-mode input impedances are determined by the 50 pF capacitances of the primary to the Faraday shield, which is grounded, and small parasitic capacitances to the secondary, one end of which is usually grounded. These high common-mode input impedances, about 50 M ohms at 60 Hz and 1 M ohms at 3 kHz, are responsible for its relative insensitivity to large source impedance imbalances, as shown in the previous graph. There are other advantages, too:
Noise rejection in a real-world balanced interface is often far less than that touted for the receiving input. That's because the performance of balanced inputs has traditionally been measured in ways that ignore the effects of line driver and cable impedance imbalances. For example, the old IEC method essentially 'tweaked' the driving source impedance until it had zero imbalance. Another method, which simply ties the two inputs together and is still used by many engineers, is equally unrealistic, and its results are essentially meaningless. This author is pleased to have convinced the IEC, with the help of John Woodgate, to adopt a new CMRR test that inserts realistic impedance imbalances in the driving source. The new test is part of the third edition of IEC Standard 60268-3, Sound System Equipment - Part 3: Amplifiers, issued in August 2000. A schematic of the old and new test methods is shown below. It's very important to understand that noise rejection in a balanced interface isn't just a function of the receiver - actual performance in a real system depends on how the driver, cable, and receiver interact.
Figure 8 - IEC Normal-Mode and Common-Mode Test Circuits
The new circuit uses a technique known as 'bootstrapping' to raise the AC common-mode input impedance of the receiver to over 10 M ohms at audio frequencies. The schematic below shows the basic technique. By driving the lower end of R2 to nearly the same AC voltage as the upper end, current flow through R2 is greatly reduced, effectively increasing its value. At DC, of course, Z is simply R1 + R2. If gain G is unity, for frequencies within the passband of the high-pass filter formed by C and R1, the effective value of R2 is increased, and will approach infinity at sufficiently high frequencies. For example, if R1 and R2 are 10k each, the input impedance at DC is 20k. This resistance provides a DC path for amplifier bias current, as well as leakage current that might flow from a signal source. At higher frequencies, the bootstrap greatly increases the input impedance, limited ultimately by the gain and bandwidth of amplifier G. Impedances greater than 10 M ohms across the audio spectrum can be achieved.

Another widely used balanced input circuit is called an instrumentation amplifier. The circuit shown below is a standard instrumentation amplifier modified to have its input bias resistors, R1 and R2, bootstrapped. Note that its common-mode gain, from inputs to outputs of A1 and A2, is unity regardless of any differential gain that may be set by RF and RG. The common-mode voltage appearing at the junction of R3 and R4 is buffered by unity gain buffer A4 which, through capacitor C, AC bootstraps input resistors R1 and R2. To AC common-mode voltages, the circuit's input impedances are 1000 or more times the values of R1 and R2, but to differential signals, R1 and R2 have their normal values, making the signal input impedance R1 + R2. Note that capacitor C is not part of the differential signal path, so signal response extends to DC. The bootstrapping does not become part of the (differential) signal path.
Figure 9 - Input Bootstrap Circuit Raises Impedance
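Once the coupling capacitor behaves as a short, the bootstrap multiplies R2 by 1/(1-G). A minimal sketch of the arithmetic (the buffer gain of 0.999 is an assumed value for illustration, not a figure from the text):

```python
def bootstrapped_z(r1, r2, g):
    # Current through R2 is (1 - g) * V / R2, so R2 appears
    # as R2 / (1 - g); with no bootstrap (g = 0), Z = R1 + R2.
    return r1 + r2 / (1.0 - g)

print(bootstrapped_z(10e3, 10e3, 0.0))    # DC case: 20 k ohms
print(bootstrapped_z(10e3, 10e3, 0.999))  # ~ 10 M ohms
```

With two 10k resistors, a buffer that tracks to within 0.1% turns 20k of bias resistance into roughly 10 M ohms of AC common-mode input impedance, consistent with the figures quoted in the text.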
The new circuit also has advantages in suppressing RF interference. Audio transformers inherently contain passive low-pass filters, removing most RF energy before it reaches the first amplifier. In well-designed equipment, RF suppressing low-pass filters must precede the active input stages. A widely-used circuit is shown below. At 10 kHz, these capacitors alone will lower common-mode input impedances to about 16 k ohms. This seriously degrades high frequency CMRR with real-world sources, even if the capacitors are perfectly matched. A tradeoff exists because shunt capacitors must have values large enough to make an effective low-pass filter, but small enough to keep the common-mode input impedances high. The new circuit eases this tradeoff.
Figure 10 - Input Low-Pass Filter for RF Suppression
The circuit above also shows how bootstrapping can make the effective value of these capacitors small within the audio band, yet become their full value at RF. By forcing the lower end of C2 to the same AC voltage as the upper, current flow through C2 is greatly reduced, effectively decreasing its value. If gain G is unity, at frequencies below the cutoff frequency of the low-pass filter formed by R and C1, the effective value of C2 will approach zero. At very high frequencies, of course, the effective capacitance is simply that of C1 and C2 in series (C1 is generally much larger than C2). For example, if R = 2 k ohms, C1 = 1 nF, C2 = 100 pF, and G = 0.99, the effective capacitance is only 15 pF at 10 kHz, but increases to 91 pF at 100 kHz or higher. The schematic below shows a complete input stage with bootstrapping of input resistors R1/ R2 and RF filter capacitors C1/ C2. Series filter elements X1 and X2 can be resistors or inductors, which provide additional RFI suppression. A paper by Whitlock describes these circuits in much greater detail [8].
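Two of the figures quoted above can be cross-checked with basic reactance arithmetic (a quick sketch using the component values given in the example):

```python
import math

def series_c(c1, c2):
    # two capacitors in series
    return c1 * c2 / (c1 + c2)

def reactance(c, f):
    # magnitude of a capacitor's impedance: Xc = 1 / (2*pi*f*C)
    return 1.0 / (2 * math.pi * f * c)

# high-frequency limit: C1 (1 nF) in series with C2 (100 pF)
print(round(series_c(1e-9, 100e-12) * 1e12))  # ~ 91 pF

# what a plain, un-bootstrapped 1 nF shunt does at 10 kHz -
# the 'about 16 k ohms' common-mode impedance mentioned earlier
print(round(reactance(1e-9, 10e3)))  # ~ 15915 ohms
```

Both results agree with the figures in the text: 91 pF for the series combination, and just under 16 k ohms for the un-bootstrapped shunt at 10 kHz.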
The InGenius® circuit, covered by US Patent 5,568,561, is licensed to THAT Corporation. The silicon implementation differs from the discrete solution in many respects. Since all critical components are integrated, a well controlled interaction between resistor values and metal traces can be duplicated with similar performance from die to die. But the integration of certain components creates new challenges.
Figure 11 - InGenius® Differential Amplifier
The process used by THAT Corporation for this device is 40-volt Complementary Bipolar Dielectric Isolation (DI) with Thin Film (TF). The DI process has remarkable advantages. Truly high performance PNP and NPN transistors, as good as their discrete counterparts, can be made on the same piece of silicon. Each device is placed in a tub that's isolated from the substrate by a thick layer of oxide. This, unlike more conventional Junction Isolated (JI) processes, makes it possible to achieve hundreds of volts of isolation between individual transistors and the substrate. The lack of substrate connection has several advantages. It minimises stray capacitance to the substrate (usually connected to the negative rail), therefore wider bandwidths can be achieved with a simpler, fully complementary circuit design. Also, it makes possible stable operational amplifier designs with high slew rates. In fact, the typical slew rate of the InGenius® line receiver is better than 10 V/µs.
The op-amp design topology used is a folded cascode with a PNP front end, chosen for better noise performance. The folded cascode achieves high gain in one stage and requires only a simple stability compensation network. Moreover, the input voltage range of a cascode structure is greater than most other front ends. The output driver has a novel output stage that is the subject of US patent 6,160,451. The new topology achieves the same drive current and overall performance as a more traditional output stage but uses less silicon area.
The InGenius® design requires very high performance resistors. Most of the available diffused resistors in a traditional silicon process have relatively high distortion and poor matching. The solution is to use thin film (TF) resistors. The family of thin film resistors includes compounds such as Nichrome (NiCr), Tantalum Nitride (TaN) and Sichrome (SiCr). Each compound is suitable for a certain range of resistor values. In InGenius®, SiCr thin film is used due to its stability over time and temperature, and a sheet resistance that minimises the total die area. Thin-film on-chip resistors offer amazing accuracy and matching via laser trimming, but are more fragile than regular resistors, especially when subjected to Electrostatic Discharge (ESD). Careful layout design was required to ensure that the resistors can withstand the stress of ESD events.

The CMRR and gain accuracy performance depend critically on the matching of resistors. The integrated environment makes it possible to achieve matching that would be practically impossible in a discrete implementation. Typical resistor matching, achieved by laser trimming, in the InGenius® IC is 0.005%, which delivers about 90 dB of CMRR. In absolute numbers, this means the typical resistor and metal error across all resistors is no greater than 0.35 ohms! Discrete implementations with such performance are very difficult to achieve and would be extremely expensive.

Real-world environments for input and output stages require ESD protection. Putting it on the chip, especially for an IC that can accept input voltages higher than the supply rails, posed interesting challenges. The conventional solution is to connect reverse-biased protection diodes from all pins to the power pins. In the InGenius® IC, this works for all pins except the input pins, because they can swing to voltages higher than the power supply rails. For the input pins, THAT's designers developed a lateral protection diode with a breakdown voltage of about 70 volts that could be fabricated using the same diffusion and implant sequences used for the rest of the IC.
There are three basic types of line drivers: ground referenced, active floating, and transformer floating. Figure 12 shows simplified schematics of each type connected to an ideal line receiver having a common-mode input impedance of exactly 20 k ohms per input. (Differential or signal voltage generators are shown in each diagram for clarity, but for common-mode noise analysis the generators are considered short circuits. The receiver ground is considered the zero signal reference, and the driver ground is at the common-mode voltage with respect to the receiver ground.)
Figure 12 - Balanced Line Driver Topologies
The following graph in Figure 13 compares simulated CMRR performance of the three sources with this receiver setup. The ground referenced source has two anti-phase voltage sources, each referenced to driver ground. The resistive common-mode output impedances are RS1 and RS2. The differential output impedance ROD is simply RS1 + RS2. The common-mode voltage VCM is fed into both line branches through RS1 and RS2. VCM appears at the line receiver attenuated by two voltage dividers, formed by RS1 and 20 k ohms in one branch and RS2 and 20 k ohms in the other.
Figure 13 - Balanced Line Driver Performance
As discussed previously, ratio matching errors in these two voltage dividers will degrade CMRR. (It could be argued that RS1 need not equal RS2, and that the common-mode input impedances need not match, because this condition is not necessary for ratio matching. However, equality is necessary if we wish to allow interchange of system devices.)
Since typical values for RS1 and RS2 are 20 ohms to 100 ohms each, with independent tolerances of ±1% to ±10%, worst case source impedance imbalance could range from 0.4 ohms to 20 ohms. With these imbalances, system CMRR will degrade to 94 dB for 0.4 ohms, or to 60 dB for 20 ohms. Since the imbalances are resistive, CMRR is constant over the audio frequency range. The active floating source is built around a basic circuit consisting of two opamps cross-coupled with both negative and positive feedback to emulate a floating voltage source. The resistive common-mode output impedances are RCM1 and RCM2. The differential output impedance is ROD. The common-mode voltage VCM is fed into both line branches through RCM1 and RCM2. VCM appears at the line receiver attenuated by two voltage dividers formed by RCM1 and 20 k ohms in one branch and RCM2 and 20 k ohms in the other, with ROD across the line. ROD is typically 50 to 100 ohms. Since the common-mode output impedances of this circuit are increased by precise balancing of resistor ratios, which also interact with output signal balance (symmetry), adjustment is difficult and values for RCM1 and RCM2 are not specified directly. One manufacturer of this circuit specifies output common mode rejection (OCMR) by the BBC test method [ 21 ]. The results of this test can be used to determine the effective values of RCM1 and RCM2 using computer-aided circuit analysis. Values of 5.3 k ohms and 58.5 k ohms were found for a simulated part having OCMR and SBR (signal balance ratio) performance slightly better than the 'typical' specification. For this simulated part, system CMRR was degraded to 57 dB. Since the imbalances are resistive, CMRR is constant over the audio frequency range.
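The divider arithmetic behind these figures is easy to verify. A short Python sketch (the 50 ohm nominal source impedance is an assumed mid-range value from the 20 to 100 ohm span quoted above) reproduces the 94 dB and 60 dB results:

```python
import math

def cmrr_db(r_cm_in, rs1, rs2):
    """CMRR limit (dB) set by the two voltage dividers formed by the
    source common-mode output impedances and the receiver's common-mode
    input impedance (per input).  The difference between the two divider
    ratios is the common-mode to differential-mode conversion."""
    a1 = r_cm_in / (r_cm_in + rs1)
    a2 = r_cm_in / (r_cm_in + rs2)
    return 20 * math.log10(1 / abs(a1 - a2))

# 20k per-input receiver; 0.4 ohm and 20 ohm worst-case imbalances
print(round(cmrr_db(20e3, 50.0, 50.4)))   # 94 (dB)
print(round(cmrr_db(20e3, 50.0, 70.0)))   # 60 (dB)
```

Because the expression only depends on the *difference* between the two divider ratios, the absolute source impedance matters far less than how well the two branches match.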
The transformer floating source consists of a transformer whose primary is driven by an amplifier whose output impedance is effectively zero by virtue of conventional negative feedback. The common-mode output impedances CCM1 and CCM2 consist of the interwinding capacitance for multi-filar wound types, or the secondary to shield capacitance for Faraday shielded types. Differential output impedance ROD is the sum of secondary and reflected primary winding resistances. For typical bi-filar transformers, CCM1 and CCM2 range from 7 nF to 20 nF each, matching to within 2%. Typical ROD range is 35 to 100 ohms. System CMRR will be 110 dB to 120 dB at 20 Hz, decreasing at 6 dB per octave (since the unbalances are capacitive) to 85 dB to 95 dB at 500 Hz, above which it becomes frequency independent. If, instead of the active receiver, a Jensen JT-10KB-D input transformer is used, its full CMRR capability of about 125 dB at 60 Hz and 85 dB at 3 kHz is realised with any of the sources and conditions described above.
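The capacitive case can be approximated numerically too. The sketch below uses the simplification that the mode-converted current difference develops its differential voltage across the low ROD (which shunts the far larger 20 k ohm receiver inputs); the 10 nF nominal capacitance, 2% mismatch and 50 ohm ROD are assumed example values within the ranges quoted above:

```python
import math

def xfmr_cmrr_db(freq, delta_c, r_od):
    """Approximate CMRR limit for a transformer floating source: the
    common-mode current difference through the mismatched CCM1/CCM2
    develops a differential voltage across the low differential
    output impedance ROD."""
    return -20 * math.log10(2 * math.pi * freq * delta_c * r_od)

# 10 nF nominal with 2% mismatch (0.2 nF), ROD = 50 ohms (assumed)
print(round(xfmr_cmrr_db(20, 0.2e-9, 50)))    # 118 (dB) at 20 Hz
print(round(xfmr_cmrr_db(500, 0.2e-9, 50)))   # 90 (dB) at 500 Hz
```

The 28 dB drop between 20 Hz and 500 Hz is exactly the 6 dB per octave slope described (500/20 is about 4.6 octaves).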
The GROUNDED LOAD behavior of these three sources is an important consideration if unbalanced inputs are to be driven. Of course, for any line driver, either output should be capable of withstanding an accidental short to ground or to the other line indefinitely without damage or component failure. This is best accomplished with current limiting and thermal shutdown features.
The GROUND REFERENCED source will output abnormally high currents into a grounded line. Hopefully, it will current limit, overheat, and shut down. If not, at the system level, it will be forcing high, and probably distorted, currents to a remote ground. These currents, as they return to the driver, will circulate through the grounding network and become common-mode voltages to other devices in the system. The usual symptom is described as 'crosstalk'.
The ACTIVE FLOATING source compromises CMRR, output magnitude balance, and high frequency stability in quest of a 'transformer-like' ability to drive a grounded or 'single-ended' input. However, to remain stable, the grounded output must be carefully grounded at the driver [ 22, 23 ]. Since this makes the system completely unbalanced, it is a serious disadvantage.
The TRANSFORMER FLOATING source breaks the ground connection between the driver and the unbalanced input. Because the transformer secondary is able to 'reference' its output to the unbalanced input ground, power line hum is reduced by more than 70 dB in the typical situation shown in the schematic in Figure 14. Because the ground noise is capacitively coupled through CCM1, reduction decreases linearly with frequency to about 40 dB at 3 kHz.
With the transformer floating source, if it is known that an output line will be grounded, an appropriate step can be taken to optimise performance. With a differentially driven transformer, drive should be removed from the corresponding end of the primary to reduce signal current in the remotely grounded output line. In the case of a single-ended driven transformer, simply choose the secondary line corresponding to the grounded end of the primary for grounding.
Figure 14 - Transformer Output Driving Unbalanced Input
Grounding one output line at the driver, which is required to guarantee stability of most 'active floating' circuits, degenerates the interface to a completely unbalanced one having no ground noise rejection at all.
The primary effect of the interconnecting shielded twisted-pair (STP) cable on system behavior is caused by the capacitance of its inner conductors to the shield. The two inner conductors of widely used 22 gauge foil-shielded twisted-pair cable, when driven 'common-mode', exhibit a capacitance to the shield of about 220 pF per metre (67 pF per foot). But the capacitance unbalance can be considerable. Measurements on samples of two popular brands of this cable showed capacitance unbalances of 3.83% and 3.89%, with the black wire having the highest capacitance in both cases. On one sample, insulation thickness was calculated from outside diameter measurements, assuming that the stranded conductors in both wires were identical. The black insulation was 2.1% thinner and, since capacitance varies as the inverse square of the thickness, this would seem to explain the unbalance.
Figure 15 - Effects Of Cable Shield Terminations
Perhaps this topic needs some attention from cable manufacturers. This is important because, if the cable shield is grounded at the receive end, these capacitances and the output impedances of the driver form two low-pass filters. Unless these two filters match exactly, requiring an exact match of both driver output impedances and cable capacitances, mode conversion will take place. Such conversion is aggravated by long cables and unbalanced driver impedances. Because of its high common-mode output impedances, the active floating driver is very vulnerable to this conversion mechanism. Its cable shields must be grounded only at the driver end.
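To see why the active floating driver is so vulnerable, consider the corner frequencies of the two low-pass filters formed when the shield is grounded at the receive end. A rough Python sketch, using the effective RCM1/RCM2 values found earlier for the simulated part and an assumed 30 metre cable run:

```python
import math

def corner_hz(r_out, c_cable):
    """-3 dB corner of the low-pass filter formed by one driver
    common-mode output impedance and that conductor's capacitance
    to a shield grounded at the receive end."""
    return 1 / (2 * math.pi * r_out * c_cable)

c = 220e-12 * 30   # ~220 pF/m foil STP, 30 m run (assumed length)
# Effective RCM1/RCM2 of the simulated active floating driver:
print(round(corner_hz(5.3e3, c)))    # ~4550 Hz on one line ...
print(round(corner_hz(58.5e3, c)))   # ... ~412 Hz on the other
```

With corners an order of magnitude apart, the two branches attenuate the common-mode voltage very differently across the audio band, which is exactly the mode conversion described above.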
But this conversion CAN be avoided. The upper schematic shows how the common-mode noise is low-pass filtered. Remember that our reference point is the receiver 'ground'. If we simply ground the cable shield at the driver end instead, as shown in the lower schematic, the shield is at the common-mode voltage and so are both driver outputs, so no common-mode voltage appears across the cable capacitances and no filters are formed! The capacitances effectively 'disappear'. As far as the common-mode voltage on the signal conductors is concerned, the cable capacitances are now in parallel with the source impedances, virtually eliminating the unbalancing effects of the capacitances.
Grounding of shields at both driver and receiver creates an interesting tradeoff. The cable effects will, predictably, fall between the two schemes described above. The 'advantage' is that, because it connects the two chassis, it can reduce the common-mode voltage itself even though it may degrade the receiver's rejection of it, especially as we approach 20 kHz. It would be far better, of course, to use some other means, such as a dedicated grounding system or even the utility safety ground (power cord 3rd prong), to restrain common-mode voltage. Devices with no safety ground (two prong power cords) are the most offensive in this regard, with their chassis voltage often well over 50 volts AC with respect to system safety ground. The current available is very small, posing no safety hazard, but it creates a very large common-mode voltage unless somehow restrained. As we mentioned earlier, it is NOT necessary to have symmetrical signals on the balanced line in order to reject common-mode noise.
Signal symmetry is a practical consideration to cancel capacitively coupled signal currents which would otherwise flow in the cable shield. In a real system, there will be some signal currents flowing in the shield because of either signal asymmetry or capacitance imbalance in the cable. If the cable shield is grounded only at the driver, these currents will harmlessly flow back to the driver and have no system-level effects. But if the shield is grounded only at the receiver, these currents will return to the driver only after circulating through remote portions of the grounding system. Because the currents rise with frequency, they can cause very strange symptoms or even ultrasonic oscillations at the system level.
Sometimes, especially with very long cables, leaving the shield 'floating' at the receive end may result in increased RF common-mode voltage at the receiver because of antenna effects and high RF fields. To minimise this potential problem, a 'hybrid' scheme can be used to effectively ground the receive-end shield only for RF [ 24 ].
There is yet another reason not to solidly ground the shield at the receive end of the cable. When interference currents flow in their shield, certain cables induce normal-mode noise in the balanced pair. Details on this subject are covered in AES papers by Neil Muncy and Brown-Whitlock. Both conclude that cables utilising a drain wire with the shield are far worse than those using a braided shield without drain wire [ 25, 26 ].
The 'Pin 1 Problem'
Dubbed the 'pin 1 problem' (pin 1 is shield in XLR connectors) by Neil Muncy, common-impedance coupling has been inadvertently designed into a surprising number of products with balanced interfaces. As Neil says, "Balancing is thus acquiring a tarnished reputation, which it does not deserve. This is indeed a curious situation. Balanced line-level interconnections are supposed to ensure noise-free system performance, but often they do not" [ 26 ].
Figure 16 - Pin 1 Problem Allows Shield Currents to Flow in Signal Circuitry
The pin 1 problem effectively turns the SHIELD connection into a very low-impedance SIGNAL input! As shown in Figure 16, shield current, consisting mainly of power-line noise, is allowed to flow in internal wiring or circuit board traces shared by amplifier circuitry. The tiny voltage drops created are amplified and appear at the device output. When this problem exists in systems, it can interact with other noise coupling mechanisms to make noise problems seem nonsensical and unpredictable. The problem afflicts equipment with unbalanced interfaces, too. Fortunately, there is a simple test to reveal the pin 1 problem. The 'hummer' is an idea suggested by John Windt [ 27 ]. This simple device, which might consist of only a 'wall-wart' transformer and a resistor, forces an AC current of about 50 mA through suspect shield connections in the device under test. In properly designed equipment, this causes no additional noise at the equipment output.
The following steps will ensure that your equipment doesn't create noise problems in real-world systems ...
Figure 17 - Avoid Pin 1 Problem with Separate Pathways for Shield Currents
This work is based in part on a 1994 AES paper by this author [ 28 ].
There is a lot of info on the Net about shielding and the so-called 'Pin 1' problem. In particular, Rane has produced some technical notes that will be useful (see [ 29 ] and [ 30 ]), but manufacturers and home builders don't always get it right. In some cases you may find that RF (radio frequency) energy manages to get through despite your very best efforts. Mobile (cell) phones are (or were, depending on the technology used) a potentially useful source of RF for testing, and most people in the industry will be acquainted with the noise made by mobiles. VHF analogue TV transmitters were the bane of many recording studios and live performances, but digital broadcasting seems to have minimised that source of interference. However, many areas will still have analogue TV, so grounding is still a very important part of the set-up.
If the information in this article seems to be more complex than you expected, that's because few explanations have ever gone into the level of detail that's needed to understand balanced interfaces properly. Many people have considered balanced lines to be a panacea, but unless the equipment is designed properly there are many opportunities to mess up the entire process. Properly set up balanced interfaces can ensure trouble-free signal transfer for long distances in very (electrically) noisy environments, but if the proper precautions aren't taken the end result can be just as bad as using completely unbalanced interconnections.
As Bill has pointed out very clearly, the expectation that a balanced connection will have equal but opposite signals on each line is not required. Many very expensive microphones use a scheme where only one signal line is driven. Provided the impedances are matched as described, this method works perfectly. I have previously described this method of obtaining a balanced connection as "Hey, that's cheating" - be that as it may, it works just as well as the 'real thing'. The only downside is that only half the level is available.
For many applications, the use of balanced interconnects is simply not needed at all. In general, a home hi-fi needs balanced interconnects like a fish needs a bicycle, but someone, somewhere, decided that balanced connections 'sound better'. They don't - balanced connections are not used because they sound better or even different from any other. They are used where mains earth (ground) noise causes (or may cause) interference to the signal.
There is also an all-pervading myth that only a balanced connection can be truly noise-free when run for long distances. Many very expensive and highly specified sound measurement microphones use a simple coaxial cable and a BNC connector, with a special 'current loop' power supply. The cables can be run for 50 metres or more in virtually any environment without any concern for noise (or hum) pickup. This is equipment at the forefront of both technology and cost, and an unbalanced connection is not used to save a few dollars ... quite the reverse.
Where balanced connections are used (from different pieces of powered equipment), one useful trick is to connect pin 1 of the input XLR connector to chassis via a parallel resistor and capacitor. The resistor prevents high current loops but maintains the electrical connection, and the capacitor shorts any RF noise to chassis. Typical values are 10 ohms in parallel with 100nF. For equipment that will be used for live work (concerts etc.), the resistor should be rated for at least 5W, because a simple connection error during setup can easily burn out lesser resistors. It's not unknown for even metal-clad 20W resistors used in this way to fail (sometimes catastrophically) given a worst case connection mistake. The use of XLR connectors used to be quite common for speaker connections (a very poor choice, but phone/ jack plugs and sockets are much worse!), and a speaker lead plugged into an amplifier input can cause some serious damage.
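The behaviour of the 10 ohm / 100nF network is easy to confirm: it looks essentially resistive at mains frequencies (limiting shield-current loops while keeping the chassis bond), and the capacitor takes over at RF. A quick sketch:

```python
import math

def z_rc_parallel(r, c, freq):
    """Impedance magnitude of a resistor in parallel with a capacitor."""
    zc = 1 / (2j * math.pi * freq * c)
    return abs(r * zc / (r + zc))

# 10 ohm || 100 nF
print(round(z_rc_parallel(10, 100e-9, 50), 2))    # ~10 ohms at 50 Hz
print(round(z_rc_parallel(10, 100e-9, 10e6), 2))  # ~0.16 ohms at 10 MHz
```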
I urge the reader to re-read this article as many times as necessary to ensure that everything is thoroughly understood. Despite having been with us for a very long time, the 'simple' balanced interface is still the source of more myth and misinformation than any other. A good understanding of the principles, methods and most importantly the reasons for using balanced interfaces will help dispel many long-held but often false beliefs.
Finally, when in doubt or for a 'mission critical' application - USE A TRANSFORMER. The transformer's most attractive and endearing characteristics are that it provides true galvanic isolation (no electrical connection between source and destination electronics) and it has an inherently fully differential output and/or input. If available, winding centre taps should not be connected to ground or to anything else. The exception is when the centre tap is used for phantom power ... which must be connected to the P48V supply via a resistor. Never connect the centre tap directly to the P48 supply voltage. While not exactly standard, a 3.3k resistor may be used without any problems.
Elliott Sound Products | Balanced Inputs - Part IV
The single opamp balanced input stage (aka differential amplifier) has created a great deal of controversy during its life, and some people remain baffled by its apparently odd behaviour. Indeed, when one analyses the circuit it is hard to imagine that it can perform properly, because the input impedance changes depending on how it's used. You only need to build one and test it to discover that it works just as claimed, but that's never convinced everyone. Anyone who's used a variety of sources will be aware that the voltages on the two inputs are often widely different under some conditions. This is often used as a reason to avoid it altogether, but that would be a mistake.
I will try to 'de-mystify' the circuit in this short article, in the interests of ensuring that its somewhat tarnished reputation is at least partially restored. The circuit is shown in the next section, and most readers will recognise it instantly. The misconceptions are all about input impedance, which is different for each input when it's connected to a source.
In the 'Designing With Opamps' article, I made the point that an opamp will, via the feedback path, make both input voltages the same (I call this 'The 1st Rule Of Opamps' - see Designing With Opamps). With any linear circuit, this relationship will always be true. Once you understand that one simple rule, you can analyse any linear circuit with confidence. The 2nd rule isn't relevant here, as it only applies when the 1st rule can't be satisfied, meaning that the circuit is non-linear.
In the following drawings and explanations, all resistors are 10k. This is done purely for ease of calculation, and the gain is unity. These circuits are used with gains both greater and less than unity, which simply means that the ratio of the input and feedback/ ground resistors is changed. The requirement for less than unity gain is uncommon, but there are situations where it's needed. Most readers won't have an immediately obvious application for less than unity gain, but it's worth remembering that it can be done.
Where the lowest possible noise is required, the resistor values should be reduced. Be aware that many opamps cannot drive very low impedances, so if you reduce the resistor values too far, you'll get higher distortion or even premature clipping as the opamp runs out of current. For most 'ordinary' opamps, the minimum resistance is around 2.2k, but you can use less with devices designed to drive 600Ω loads.
There's a persistent myth that the shield has to be impervious to RF (radio frequency) signals. The reality is that even a rudimentary shield will usually suffice, because the RF signal is impressed onto the shield, not the conductors. Poor grounding practices can lead to induced shield current being injected into circuitry, causing noise. This was particularly prevalent with early mobile (cell) phones. Few (if any) audio circuits can amplify the frequencies involved, but they can (and do) form rudimentary RF detectors. The noise heard is not the RF itself, but the envelope (the modulation 'pattern') of the RF waveform.
The circuit for the differential amplifier is found almost anywhere on the Net. It's also used for subtraction in analogue computer (and other) systems, and an example of it in this role can be seen in the article Subtractive/ 'Derived' Crossover Networks. The circuit is a useful tool, and is used in a wide range of different applications. As with any differential amplifier, the low-frequency common mode rejection ratio (CMRR) is almost completely determined by the accuracy of the resistors used. The theoretical worst case CMRR is 40dB with 1% resistors.
The most basic differential stage is shown below. This circuit (albeit in more advanced form) is the front-end of most opamps, and although it's shown using BJTs (bipolar junction transistors) it can also use JFETs (junction field-effect transistors), MOSFETs (typically in CMOS ICs) and valves (vacuum tubes). Because it operates without feedback it has limited use as shown, but a fully developed version can be seen in the Project 66 microphone preamplifier.
Figure 1.1 - Basic Discrete Differential Amplifier
This circuit is not without problems, as the output voltage is restricted (to less than about 1V from each output) before problems arise. With the values (and transistors) shown, the gain is around ×9.4, or 19dB across the two outputs. There's some emitter degeneration, and distortion performance is good with input levels below 100mV. This arrangement is used as the input stage of opamps (and many power amps - the circuit should be very recognisable). The output should ideally be obtained as current, not voltage, and when it's provided with feedback, linearity is a great deal better. CMRR is very high, but only with perfectly matched transistors and resistors (R4, R5, R6 and R7), and when the output is taken from both outputs (requiring another differential amplifier). All parameters are improved once a voltage gain stage is added and feedback is applied. Note that the values shown are only an example, and that overall performance is very limited if the output is single-ended.
When a high-gain opamp is used, everything falls into place, with predictable gain and a high CMRR (assuming close tolerance resistors). If you need a very high CMRR, the PCB traces are important too. Their resistance is (usually) not an issue, but stray capacitance can cause issues at high frequencies if you're not very careful. The opamp also causes a degradation of CMRR at high frequencies, as its open-loop gain falls with increasing frequency. This topic is covered in some detail in Balanced Inputs & Outputs - The Things No-One Tells You.
The following drawing is adapted from the Design of High-Performance Balanced Audio Interfaces article, and shows the conditions for various input configurations. The output voltage is indicated with a '+' or '-', meaning it's not inverted or inverted respectively. Note that the input impedance of the 'X' (non-inverting) input is always 20k, as there are two 10k resistors in series. The input impedance of the opamp is so high that it's irrelevant. There is one connection that's missing though, and that's the one that causes people so many problems. So, which one is missing? The condition where the differential inputs are earth/ ground referenced. While you might think this is unusual, it's actually the case for the vast majority of sources. This is covered in the next section.
Figure 1.2 - Four Input Conditions For a Differential Amplifier
With a floating input source (Input XY), the voltages shown might seem impossible. Circuit analysis shows that in this case, the attempt by the source voltage to provide a negative current into Input Y results in the voltage at that point being zero, because 1V must exist across R3, and it's balanced out by the 1V at the opamp's inverting input. You need to examine the direction of current flow, indicated by the arrows. Having Input Y at zero volts is the only way that the opamp can be in its linear region, because both opamp inputs must be (very close to) the same voltage. 100µA flows through R3 and R4, so both resistors must have 1V across them.
By implication and calculation, this means that the input impedance at the 'X' input is 20k, and the 'Y' impedance is zero. While it's easy to assume that this compromises the circuit in some mysterious way, that's not the case. This is one of the reasons the circuit is criticised, with claims that it can't work properly because the impedances are unequal. Note that the conditions shown only apply at low frequencies (below 100Hz) because the gain of the opamp falls at higher frequencies causing the circuit balance to be affected. Performance is still acceptable up to 10kHz (perhaps more, depending on the opamp used).
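For the sceptical, the floating-source case can be solved as a small set of nodal equations rather than by inspection. The sketch below assumes a 2V floating source (consistent with the 1V drops across R3 and R4 described above - the exact figure in Figure 1.2 may differ) and uses the 'both opamp inputs equal' rule as one of the equations:

```python
import numpy as np

# Unity-gain diff amp, all resistors 10k, floating source across X and Y.
# Unknowns: Vx, Vy, V- (inverting input), Vout.
R = 10_000.0
Vs = 2.0   # assumed floating source voltage
A = np.array([
    [1.0,     -1.0,   0.0,  0.0],  # floating source: Vx - Vy = Vs
    [1/(2*R),  1/R,  -1/R,  0.0],  # current into X equals current returning via Y
    [0.0,      1.0,  -2.0,  1.0],  # KCL at the inverting node (scaled by R)
    [-0.5,     0.0,   1.0,  0.0],  # opamp rule: V- = V+ = Vx/2
])
b = np.array([Vs, 0.0, 0.0, 0.0])
Vx, Vy, Vminus, Vout = np.linalg.solve(A, b)
print(Vx, Vy, Vout)        # approximately 2, 0, 2 -> Y sits at 0 V
i = (Vminus - Vy) / R      # 100 uA through R3 and R4, 1 V across each
```

The solver confirms the text: Input Y is forced to zero volts, 100µA flows through R3 and R4, and the overall gain is unity.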
There is no doubt at all that this is difficult to get your head around, but it's real. The current through R3 and R4 is exactly equal, but of opposite polarity. This is easy to simulate, but much harder to prove by measurement unless you have access to a fully floating voltage source. A battery powered audio oscillator is one method, or you can use a transformer. With the latter, keep the frequency below 400Hz to minimise the effects of stray capacitance which will seriously mess up your measurements at higher frequencies.
When you analyse the circuit, it's important that the source impedance is as low as possible. Any resistance/ impedance in the source causes the gain to be reduced, because the external resistance is in series with R1/ R3. In general, the source impedance should be no more than 10% of the resistance of R1/ R3, resulting in a loss of gain of less than 1dB. Impedance matching is never necessary with audio signals (up to 20kHz) unless the cable is more than 2km in length (λ=c/f)¹. This is common with telephony, but not with audio installations.
¹ λ is wavelength, c is velocity and f is frequency. Cables should be less than ¼λ at the highest frequency of interest (20kHz for audio) when impedance matching is not used.
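As a quick check of the quarter-wave rule of thumb (the cable velocity factor of about 0.66 is an assumed typical value):

```python
# Matching only matters when the cable approaches a quarter wavelength
# at the highest frequency of interest.
v = 2e8                 # propagation velocity in typical cable (~0.66 c, assumed)
f = 20e3                # 20 kHz, top of the audio band
wavelength = v / f      # 10 km
print(wavelength / 4)   # 2500 m -> matching is irrelevant for normal audio runs
```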
When the source is balanced, but earth/ ground referenced, things get a little awkward. Almost all electronic balanced line drivers have a ground reference whether you like it or not. This is due to the way they work, and you can't short one output to ground and expect the other to provide double the voltage. There are line drivers that will do just that (see Project 87, Figure 3 for an example), but because these circuits can be unstable, they are not commonly used.
Even a transformer can be grounded, usually with a centre tap. It's almost always a bad idea, but that's never stopped anyone from doing it. The disadvantage of a ground-referenced balanced output is that it almost guarantees a ground loop, and it's up to the receiver circuit to remove the unwanted common-mode signal due to slightly different ground potentials. The connection of the shield is critical, and is the #1 cause of the 'Pin 1 Problem', which has plagued audio installations for as long as balanced connections have been in use.
Figure 2.1 - Input Conditions With Ground-Referenced Source
The source is just shown as a 'centre-tapped' voltage generator, and is assumed to have a zero impedance at each output. This makes analysis easier, because adding external output impedances just causes headaches. Note that the source impedances must be equal! When used normally, the situation will be a little different from that shown, but it doesn't affect the operation of the circuit, provided the two output impedances are the same - the voltages do not have to be equal. As noted above, the 'X' input has an input impedance of 20k, and nothing external will change that. The 'Y' input impedance is less straightforward.
As shown, the 'Y' input impedance is 6.67k (rounded), and this is the only way that the opamp's linear operating conditions can be achieved. While it may all appear somewhat unlikely, the voltages shown can easily be measured or simulated, and the circuit behaves as we expect. While the fact that the input impedances are not equal may seem like it will compromise CMRR, there is an important point that is often missed if the analysis is not performed rigorously.
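The 6.67k figure follows directly from the opamp rule. A minimal sketch, assuming a ±1V ground-referenced source and all resistors 10k:

```python
# Ground-referenced source: +1 V on X, -1 V on Y, unity-gain diff amp
R = 10e3
Vx, Vy = 1.0, -1.0
Vplus = Vx * R / (R + R)   # 0.5 V at the non-inverting input
i_y = (Vplus - Vy) / R     # 150 uA flows into the Y input via R3
Z_y = abs(Vy) / i_y
print(round(Z_y))          # 6667 ohms -> the 6.67k quoted above
print(2 * R)               # the X input is always 20k
```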
The differential and common-mode behaviours are completely independent of each other. If you refer back to Figure 1.2, you can see that a common-mode signal is cancelled, because the same voltage appears on each input, and there is no output. This isn't changed whether a differential signal is present or not. So, a common-mode signal is rejected and a differential-mode signal is amplified, with the two functions remaining independent, regardless of any misconceptions that abound.
An easy way to get an effective increase in signal-to-noise ratio is to use a higher level. However, care is needed, because you have to allow enough headroom to ensure that peaks aren't clipped. In public address and studio work, it's common to use a reference level of +4dBu, which is around 1.23V RMS. If a peak to average ratio of 10dB is assumed (which is usually obtained only by using compression), the peak voltage will be 5.5V. Allowing a more realistic 20dB peak to average ratio, the peaks will be at ±17.4V. This is the reason that many professional mixers use ±18V supplies.
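The level arithmetic above is easily reproduced from the dBu definition:

```python
import math

def dbu_to_vrms(dbu):
    """0 dBu = 0.7746 V RMS (1 mW into 600 ohms)."""
    return 0.774597 * 10 ** (dbu / 20)

ref = dbu_to_vrms(4)                            # ~1.23 V RMS reference
peak_10db = dbu_to_vrms(4 + 10) * math.sqrt(2)  # ~5.5 V peak
peak_20db = dbu_to_vrms(4 + 20) * math.sqrt(2)  # ~17.4 V peak
print(round(ref, 2), round(peak_10db, 1), round(peak_20db, 1))
```

With 20dB of headroom the peaks sit just inside what a ±18V supply can deliver, which is exactly the point being made.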
By default, most balanced line drivers double the level, since the same voltage is present on each conductor, but one is inverted. This improves the signal-noise ratio, but if the level is too high, the balanced receiver will clip. However, this is easily solved, simply by making the values of R1 and R3 higher than the values of R2 and R4.
If R1 and R3 are increased to 20k, the gain is exactly one, referred to the input of the line driver. However, in the interests of lower noise, it's better to reduce the values of R2 and R4 as shown below. The total voltage between the twisted-pair conductors is still 2V, but this arrangement lets you operate with higher gain without overdriving the receiver opamp. Remember that each wire in the pair only has 1V with respect to ground, but because one is 180° out-of-phase, the total voltage is 2V.
Figure 3.1 - Input Conditions With Reduced Gain
The voltages and currents are as you would expect, and this arrangement can be scaled for any input attenuation desired. The general operating parameters aren't changed, so its performance is pretty much unchanged. Along with the reduction of level, you also get a better CMRR. With the values shown, it's improved by 6dB, which isn't spectacular, but it's worthwhile (and comes free!). If building this version, you'd use 5.1k resistors rather than 5k, and it will show a tiny gain (172mdB, or 0.17dB). This is inconsequential.
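The 172mdB figure can be checked in a couple of lines (R1/R3 = 10k and R2/R4 = 5.1k as described, with the driver producing equal and opposite signals):

```python
import math

R1 = 10e3            # input resistors (R1, R3)
R2 = 5.1e3           # feedback/ground resistors, nearest standard value to 5k
v_in = 1.0           # at the line driver's input
v_diff = 2 * v_in    # the driver doubles the level across the pair
v_out = v_diff * R2 / R1
gain_db = 20 * math.log10(v_out / v_in)
print(round(gain_db, 2))   # 0.17 dB overall gain, driver input to receiver output
```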
Naturally, the gain of the circuit can also be increased, but doing so will reduce the CMRR. The circuit is not really suitable for a microphone preamp, although many manufacturers have done so. One of the problems is that mic preamps need variable gain, and that's difficult to achieve with this particular circuit. A modest gain range can be implemented, but it requires positive feedback (which is rarely a good idea), and it can be improved with an additional opamp. This isn't covered here. Of course, R2 and R4 can be replaced by a dual-gang potentiometer, but that will seriously affect the CMRR because they never track perfectly, and have a wide tolerance.
With 10k resistors, an imbalance of just 10Ω between any two resistors will cause the 'ideal' CMRR to fall from 92dB to 66dB (simulated using a TL071 opamp). In reality, it's almost impossible to achieve the 92dB figure, as that requires better than 0.1% tolerance resistors (10Ω in 10k is 0.1%). However, countless differential amps have been made using 1% resistors, and that will typically allow a CMRR of better than 40dB. While that's a long way off the theoretical 92dB, in a typical application it's usually sufficient.
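The 66dB figure for a 10Ω error can be confirmed with the standard ideal-opamp expression (an ideal opamp is assumed here, so the author's TL071 simulation will differ fractionally). The resistor numbering follows the circuit as described above: R1/R2 form the divider on the non-inverting (X) input, R3 is the inverting input resistor and R4 the feedback resistor:

```python
import math

def diff_amp_cmrr_db(r1, r2, r3, r4):
    """CMRR of the 4-resistor diff amp with an ideal opamp.
    R1/R2: non-inverting divider; R3: inverting input; R4: feedback."""
    a_cm = (r2 * r3 - r1 * r4) / (r3 * (r1 + r2))  # common-mode gain
    a_dm = r4 / r3                                  # differential gain
    return 20 * math.log10(abs(a_dm / a_cm))

# one 10k resistor high by 10 ohms (0.1% error)
print(round(diff_amp_cmrr_db(10e3, 10e3, 10e3, 10e3 + 10)))  # 66 (dB)
```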
In reality, there's an external factor that often causes far more interference than a 'sub-optimal' balanced receiver's CMRR ...
Unfortunately, there is something between the sending circuit and the receiving circuit - the cable! If the common-mode performance is inadequate, the cable should be the first suspect. To ensure that CMRR is as high as possible, the shielded cable conductors must be twisted together to ensure that any noise injected into the wires is always equal. When used in what's laughingly called the 'real world', cables will be mistreated, and are regularly forcibly rolled into a 'convenient' format for storage. If this causes the twist to be deformed (and it will!), expect common-mode noise to be a problem because poorly twisted wires will not 'talk' to each other properly. They can then carry different currents from common-mode sources (mains leads and power transformers being the main sources).
In theory (always a wonderful thing), the shield isn't necessary at all. Data is most commonly sent between point 'A' and point 'B' via UTP (unshielded twisted-pair), with Ethernet being the most common data connection. The primary difference between the different categories (CAT3, CAT5, CAT5e, etc.) is the amount of twisting employed. There's also a requirement for the cable's impedance to be matched to the receiver, something that is not necessary for audio. These data connections are balanced, and there are specifications that state the minimum bending radius that can be used without (significant) loss of performance due to misalignment of the individual twisted pairs.
Unfortunately, some 'roadies' don't seem to understand how important it is to roll signal cables in such a way to ensure that the cable naturally 'falls' into place when being rolled up. Failure to do this can cause the twist to be disturbed, so parts of the cable don't have the correct internal geometry, allowing noise to be injected. It's not a problem with the balanced line drivers or receivers, but with the cable itself.
Despite any negative reactions you may read about this common circuit, the point that's often missed is that it does exactly what it says on the tin (as it were). While it doesn't seem 'right' in some respects, that's largely due to a misunderstanding of how it functions. The fact that it has unequal input impedances for the two input signals is immaterial, because its input impedance for common-mode signals is identical. This is the only thing that really matters, and the way it works with a signal vs. common-mode 'noise' is perfectly alright. There is an expectation that the input impedance and signal level should be equal with a balanced line, and this holds perfectly true for common-mode noise. It doesn't matter at all for the wanted signal, and some signal sources do not have equal but opposite signals on both connections. This does not affect their performance.
I have (quite deliberately) avoided using the formulae that have been developed to analyse the circuit, because for 99% of cases they don't really help. The only thing that's important is to ensure that the resistors are accurate. R1 is always the same value as R3, and R2 is always the same value as R4. The circuit can be configured for gain or attenuation, but is not easily made variable. If you happen to need variable gain, there are far better circuits, which are described elsewhere on the ESP site.
Hopefully, this article has removed some of the doubts you may have had about this simple circuit, and has helped to explain it in a way that makes sense.
Elliott Sound Products - Balanced I/O (Part 3)
This is the third article on the topic of balanced interfaces, and it covers things that don't appear elsewhere. Balanced inputs and outputs are considered essential for many applications, but the common circuits can seriously degrade the CMRR (common mode rejection ratio) without you realising it. The degradation is almost always at the top end of the frequency range, and the primary cause is phase shift within the driving circuits. This article looks at the reasons for degradation, and what can be done to prevent the CMRR from falling at high frequencies.
In reality, some degradation is almost impossible to prevent without the use of precision (and generally high speed) circuitry, where every part of the circuit is optimised carefully. This is needed to ensure that both inputs or outputs have exactly the same propagation delay at all frequencies. If this sounds like it may be hard to achieve, you'd be absolutely correct. For the purposes of this article, the 'audio range' is defined as being from DC to 100kHz. Above 100kHz things get a great deal harder, especially if the response is expected to extend down to low frequencies (DC to a few hundred Hertz).
Despite their generally excellent performance, opamps and other circuitry (such as FDAs) are the primary cause of the issues seen. The CMRR of all opamps is a frequency dependent parameter, and some datasheets specify it at 60Hz, sometimes with a graph showing CMRR vs. frequency. In other cases CMRR is provided as a minimum and 'typical' specification with a defined source impedance, but with no frequency information. This almost always means that it refers to DC or low frequency performance only.
It's important to understand that while the problems described here are very real, most of the time they won't create any issues. This is because the majority of the issues faced are caused by hum loops (aka earth/ ground loops), so the predominant frequencies to be 'eliminated' are mains (50/60Hz) and their harmonics. Even allowing for up to the 17th harmonic, this remains under 1kHz for 50 and 60Hz mains (the 17th harmonic of 60Hz is 1.02kHz).
The issues described affect nearly all balanced input and output circuits, including those described on the ESP website. This shows clearly that the issues described here are not normally a problem at all, but in the interests of providing the most complete information, the problems and their solutions are described in detail. A separate article looks at balanced input circuits based on instrumentation amplifiers (INAs), so only limited info on those is provided here.
For the purposes of simulation and demonstration, TL072 opamps are used throughout this article. This is because they are very common, low cost, high performance devices (although they really don't qualify as 'hi-fi' compared to many far superior devices available today). The main reason they were used is simply for consistency. To be able to show that one circuit is 'better' or 'worse' than another, the number of variables has to be kept to a minimum. It also helps that the simulator I use has a fairly good model for the TL07x series, but does not include many other well known (and especially audiophile) types.
The common mode rejection ratio (CMRR) for the TL07x series of opamps is quoted as a minimum of 75dB with a 'typical' figure of 100dB. By way of comparison, the LM4562 (a 'premium' opamp) has a minimum CMRR of 110dB, with a typical value of 110dB, and the AD797B (very expensive) has a minimum CMRR of 120dB and a typical value of 130dB.
Also, except where noted otherwise, resistor values are considered to be exact. This is unrealistic in the real world, but it helps to highlight issues that are related to the opamp or circuit topology, while ignoring those that are due to component tolerance. At the very least, 1% resistors are essential, but higher precision is necessary if a particularly high CMRR is required. Component tolerance is just as important for balanced drivers (transmitters) or receivers.
It is not the intent of this article to produce circuits (or circuit ideas) that have unlimited common mode rejection, because it's not possible with real-world parts. The idea is to alert the reader to the limitations of common circuits, so that the effects can be mitigated where a design really does need the maximum possible rejection. All circuits are imperfect, but with careful design the imperfections can be minimised. At some point, one has to decide whether the added PCB real estate and/or cost is worth it for the gains realised.
The problems investigated here are based on the real world limitations of opamps. In a simulator we have access to 'ideal' devices, and if these are used everything works perfectly at all frequencies from DC to daylight. Since we can't actually buy an ideal opamp (sad but true), we have to deal with the limitations as best we can. It's usually not too difficult, simply because the frequency range occupied by audio is (not entirely accidentally) the very same range for which most opamps are designed. Note that use of high priced discrete opamps will rarely (if ever) improve anything, and in many cases they will be worse, not better.
You will see here that CMRR diagrams appear 'upside down', and show the CMRR as a negative dB figure. This is due to the way I ran the simulations, but the end result still shows what happens at the frequencies between 10Hz and 100kHz (the range I used for the simulations).
Microphones are mentioned first because they are a 'special' case. It's traditional that mic preamps are balanced, and many people think that it's essential. However, it's not (and never has been) a requirement for microphones, because they are a fully floating source. This is significant, because it means that a mic can be connected to a preamplifier using only a shielded lead (coaxial), and there is no noise penalty. There are caveats - a cheap lead without a full braided shield will pick up noise, not because it's unbalanced, but because the gaps in the shield mean that the inner and outer conductors cannot 'talk' to each other properly, and high frequency (e.g. radio frequency) noise may penetrate the gaps in the shield.
Contrary to popular belief, an unbalanced mic cable can be just as quiet as a balanced cable, for the simple reason that the mic has no earth/ ground reference. There are situations where an earth reference may exist - typically involving a human touching both a guitar and a microphone. However, this connection is a relatively high impedance and almost never causes a problem. Despite this, it's traditional that mic preamps feature balanced inputs, but the reason is not to get 'higher performance'.
Back in the 1970s, most PA systems used unbalanced mic inputs. While many had some significant problems, microphone lead induced noise was not one of them. Microphones were overwhelmingly dynamic types, with very few 'exotic' mics in use. High impedance microphones fell from favour quite early - they were common during the 1960s because most gear was valve-based, and gain was expensive. These mics were always noisy, because high impedance circuits are susceptible to electromagnetic interference, triboelectric noise from the cable itself, and circuit noise in the preamp.
A mic is not only floating, but well shielded from external noise, other than hum induced from nearby transformers directly into the voicecoil. A few mics include a 'hum bucking' coil to cancel out noise from magnetic fields. The mic body was then (and still is) earthed via the cable shield, which minimises electrostatic noise pickup. This is the noise you hear if you touch the pin at the end of a guitar or RCA lead. Another noise source was lighting dimmers, but provided no-one did anything silly like bundling lighting and mic cables together, even this usually produced little noise.
Over the years, people started using 'proper' mixers, and even many early versions still had unbalanced inputs. This started to change as performers used more and more equipment, much of which was mains powered. With that came earth/ ground loops, and much hum and noise ensued. Balanced inputs became the standard, to accommodate 'line level' equipment, microphones and phantom power. The latter forced the change, because phantom power is applied between the two conductors and the shield. Maintaining the balanced (and floating) mic capsule means that if a dynamic mic is fed with phantom power (accidentally or otherwise) it won't be damaged, because both ends of the voicecoil have the same DC voltage (+48V).
Because mixer inputs were balanced, it became standard for mics to be connected with (now standard) shielded twisted-pair cable, fitted with XLR connectors. Using the balanced connection also allowed the use of phantom powered mics, which is a common requirement with modern stage and studio setups. This change made absolutely no difference to floating signal sources (like dynamic microphones), but meant that all XLR leads were the same and there was no chance of mix-ups when setting up for a show. Despite anything else you may come across describing balanced mic leads, they are not necessary. In fact, if the internal conductors aren't twisted properly, a balanced mic lead can result in more hum pickup (via the cable) than a good quality unbalanced lead. This is exactly the opposite of almost everything you'll read elsewhere.
Balanced line drivers and receivers (as covered below) are needed when mains powered equipment is connected to a mixer (or other mains powered equipment), and the intent is to prevent (or at least minimise) the creation of earth loops. Since this condition cannot exist when mics are used normally, using balanced cables with XLR connectors is a convenience, not a requirement. A low impedance microphone with no connection to ground has little or no common mode noise, as that requires a loop, which is not possible with a floating source.
Consider that many very expensive laboratory microphones use an unbalanced connection. 4mA current-loop (aka ICP, IEP, IEPE or CCP) mics are made by PCB Piezotronics, ROGA Instruments, BSWA, G.R.A.S, Brüel & Kjær, etc. Most are fully calibrated, and rated as Class-I or Class-II, and they are used primarily for sound level monitoring and high-precision measurement work. These are at the very top of the 'tree' when it comes to microphones, and they can use up to 100 metres of unbalanced coaxial cable, terminated with BNC connectors. It should be obvious that if a balanced connection were necessary for low noise, then these mics would be balanced. It's worth noting that the mic and preamp are separate, and there are different mics that can be used depending on usage. Prices are never provided, so you know straight away that they cost more than most of us can afford (think in terms of $many hundreds$). I've worked with these, and they are virtually dead silent, even in the presence of workshop interference.
One of the most common balanced output stages is shown below. This is basically the same as the one used in the PCB version of Project 87B. While it has flaws as described below, its performance is normally more than acceptable for general purpose use, and that was the design intent.
CMRR at DC is almost perfect, reaching better than 100dB, but even by the time the frequency has risen to 1kHz, CMRR is down to 70dB, and reduces at 6dB/ octave as the frequency increases. At 20kHz, CMRR is only 44dB - not complete rubbish, but certainly not wonderful. This is based on the resistor values being perfect, and even 1% tolerance will degrade things further. If there is a 1% difference between R3 and R4, the best CMRR obtainable is 46dB, but at 20kHz it's still 44dB, so things are not quite as dire as they may otherwise appear.
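A 6dB/octave (20dB/decade) fall is the signature of a single-pole response, so the quoted figures can be extrapolated with one line of arithmetic. A small sketch (assuming the simple single-pole rolloff holds above 1kHz, as the quoted 70dB/44dB figures suggest):

```python
import math

def cmrr_at(f_hz, f_ref_hz, cmrr_ref_db):
    """CMRR falling at 6dB/octave (20dB/decade) above a reference
    frequency, as for a single-pole response."""
    return cmrr_ref_db - 20 * math.log10(f_hz / f_ref_hz)

# 70dB at 1kHz extrapolates to ~44dB at 20kHz, matching the text.
print(f"{cmrr_at(20e3, 1e3, 70):.0f} dB")
```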
Figure 1 - Basic Balanced Driver Circuit
This is a common circuit, and can be found in hi-fi equipment, commercial (live sound) gear, and almost anywhere that a balanced output is needed. As discussed above, it's not perfect because one signal is delayed by only U1A (which is immaterial in this arrangement), but the other (inverted) signal has the extra delay of U1B. When combined, they are not (and never can be) perfectly matched at all frequencies. The small time delay (aka propagation delay) of U1B plus its high frequency phase shift means that the two signals are not exact but inverted replicas of each other. There will always be small amplitude and phase errors that mean the summed output is non-zero.
The boxed network creates a phase lead for U1B, which improves the CMRR vs. frequency quite dramatically (at least in a simulation - reality may be different), but it may not be easy to get it right and the advantage is (perhaps surprisingly) probably not worth the effort. However, if it is included, the measured CMRR is improved by about 25dB (assuming that all resistor values are exact of course). One thing that is not immediately apparent is the fact that an inverting opamp stage operates with a 'noise gain' of two. While the signal gain is (-) unity, the actual internal gain is x2, so inverting and non-inverting buffers can never be exactly equivalent. The value of R4 is (relatively) unimportant, and is selected to ensure that excessive high frequency boost is not created.
Figure 2 - CMRR Of Output Voltage
The reduction of CMRR with increasing frequency is obvious. It's measured simply by using exactly equal value resistors from each output, shown in Figure 1. If amplitude and phase are equal (but opposite), the result is zero signal at the mid-point of the two resistors. Any deviation of amplitude or phase between the two results in degraded cancellation at the 'CMRR' output.
The 'phase lead' circuit helps to counteract the lagging phase response of U1B, and the resistor and capacitor values depend on the opamp. 10pF is right for the FET input TL072, but opamps with bipolar inputs will require a value to suit (usually larger than 10pF, but unlikely to be more than 47pF). U1A also has a lagging phase response, but correcting that would make matters worse, not better.
You may also imagine that providing the input to U1B directly from the input along with U1A would mean that the two circuits would be closer to being identical, but it doesn't help. The best case CMRR (at DC) is reduced to just under 100dB, and at 20kHz it's only 5dB better than the uncompensated response (green trace above), but is 20dB worse than the compensated version.
If the circuit is re-configured so that both opamps are used with a noise (or internal) gain of 2, the performance can be improved. To give you some idea of how little phase shift can create a problem, consider two perfect sinewave generators, producing sinewaves 180° apart (i.e. one is inverted). If the amplitudes are the same, the output is zero. As in really zero - nothing at all. This applies whether you have 1V or 1kV sinewaves - they cancel perfectly.
If one output is shifted by a mere 1° (181 or 179° phase shift between the two), with 1V inputs, the sum is 8.72mV, or -41dB. You can calculate this for any phase angle with the following formula ...
    Output = VIN × sin( θ ) / 2
where VIN is the input voltage and θ is the phase angle between the two voltages.
Unfortunately, it's not at all difficult to accumulate a 1° phase difference between two seemingly similar circuits, especially at higher frequencies, and doubly so if they are cascaded (one following another). For most things it doesn't matter (and doesn't even happen between two hi-fi signal channels), but when you are trying to cancel a wide band signal it matters plenty. In similar manner, if two equal and opposite voltages (assume 1V) are summed with 10k resistors, a difference of just 10Ω (1 in 1,000 or 0.1%) will cause an output of 500µV at the summing point. In general, expecting better than 60dB of CMRR is unrealistic unless you are willing to use 0.01% tolerance components. These are not readily available, and are expensive.
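Both residuals quoted above are easy to verify numerically. A short sketch (variable names are mine; the summing-junction voltage is obtained by superposition):

```python
import math

V_IN = 1.0

# Residual from summing two 'equal and opposite' 1V signals that are
# 1 degree away from perfect inversion:  Output = Vin * sin(theta) / 2
theta = math.radians(1.0)
residual = V_IN * math.sin(theta) / 2           # ~8.7mV
residual_db = 20 * math.log10(residual / V_IN)  # ~ -41dB

# Residual from a 10R mismatch in a pair of 10k summing resistors
# fed with +1V and -1V (junction voltage by superposition):
g1, g2 = 1 / 10e3, 1 / 10.01e3
v_sum = V_IN * (g1 - g2) / (g1 + g2)            # ~500uV

print(f"{residual * 1e3:.2f} mV ({residual_db:.0f} dB), {v_sum * 1e6:.0f} uV")
```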
Another fairly popular circuit is known by a few names, such as 'earth/ ground cancelling output' or 'ground compensated output'. It does provide a balanced output, but it is impedance balanced only, and there is no signal on the second ('cold') line.
Figure 3 - Ground Compensated Circuit
It should be obvious that the circuit shown has no output common mode rejection as such. What it does instead is to use the noise signal to cancel any noise that would otherwise appear across the load. Noise will appear on both signal lines, and the 'cold' (-Out) lead couples that back into the opamp in such a way as to cause the noise signal to be cancelled. Cancellation can never be total of course, due to normal opamp limitations, but a significant part of the 'ground noise' can be effectively removed. This arrangement works whether the remote load is balanced or unbalanced.
The next circuit shows how two opamps can be forced to operate in an almost identical manner, so any inherent phase shift difference is minimised. Although they seem to be operating more-or-less identically, in reality that's not the case. U1B is operated as a unity gain inverter, so has (almost) zero common mode signal at its input terminals, as both remain at (nominally) zero volts. However, U1A does have a common mode input voltage, namely half the input voltage. This means that the two are not identical, but they are much closer than obtained by the Figure 1 circuit.
Figure 4 - Optimised Balanced Output Driver Circuit
You should recognise the circuit based around U1A - it's the standard single opamp balanced input circuit (but it is drawn differently). The circuit has a gain of two, so the input voltage is divided by two at the non-inverting opamp input, ensuring that the output signal is the same amplitude (and phase) as the input signal. You can use the complete balanced circuitry for both opamps if you wish. Then both are identical, but one is driven via the +ve input and the other via the -ve input. However that provides no benefit, and only increases noise and component count.
The CMRR at the output is greater than 96dB up to 3.7kHz, and is still better than 87dB at 20kHz. This is a fairly dramatic improvement over the Figure 1 design, but it adds 4 more resistors - all of which must be close tolerance. In most cases it's not necessary, but if you are after the best possible result it works well. The input impedance is 6.67k with the values shown, and it expects to be driven from a low impedance (such as the output from an opamp). Noise performance can be improved by using lower value resistors throughout, but at 'line' levels it's unlikely to cause a problem.
Figure 5 - CMRR Of Output Voltage
This is the CMRR response, plotted again from 10Hz to 100kHz. The difference is immediately obvious. While the (in reality unlikely) maximum CMRR of better than 100dB at low frequencies is limited to a 'mere' 97dB below 1kHz, it remains at that level, where the previous circuit was rising steadily from a few Hertz. Note that the vertical scale is compressed, and even at 100kHz the CMRR is greater than 70dB. Whether you can achieve results this good in a real circuit is doubtful, but the potential clearly exists. Substituting different opamp models in the simulation does change it a little (some are better, others worse), but the results generally follow the trend shown. Regardless of opamp, it will always outperform the basic circuit (Fig. 1).
In all circuits that follow, CMRR is measured by tying the two inputs together and applying a 1V signal to the two inputs at the same time. An ideal circuit would mean that the output would be zero, implying infinite common mode rejection. As should be apparent by now, the ideal opamp does not exist - and that includes expensive discrete designs that are often no better than decent integrated circuit types.
Make sure that you have a look at the article on Instrumentation Amplifiers (INAs), because that describes them in greater detail than you'll find here. In reality, most of the balanced receiver circuits shown below are also considered to be 'INAs', even when they are quite obviously not the full implementation of the 'true' INA circuitry. A comparatively new (at the time of writing) device is the INA1650 [ 9 ], which includes just about everything that is needed for a balanced input. It's claimed in the datasheet that CMRR is better than 70dB to well over 100kHz. Like so many of the latest devices, it's only available in a surface-mount package.
The standard single opamp balanced input circuit generally gets a bad rap for performance, largely because it has unequal input impedances on each of its inputs. However, this is largely a distraction, because much of the time it doesn't matter. If it's fed from a true balanced source (i.e. not earth/ ground referenced) it's immaterial. The source sees the total impedance, and the common mode performance is usually much better than most people give it credit for.
However, the unequal impedances may cause problems in some installations, in particular where the source signal is earth referenced (i.e. a symmetrical signal about zero volts, with an actual or inferred earthed centre tap). Both balanced driver circuits shown above have just that - there is no output 'earth' as such, but both signals are directly referred to the zero volt line (earth) because they are driven by opamps. While it is possible to create a 'pseudo-floating' output using opamps, the circuit relies on some positive feedback and it may become unstable under some conditions. It's the closest electronic equivalent to a transformer, but it's still not as good because there is no galvanic isolation.
Figure 6 - 'Conventional' Balanced Input Circuit
The traditional balanced input stage is shown above. The input impedance of each individual input depends on the source - it's not a fixed value, and is different for common mode and differential inputs. This has convinced many people that it can't work properly, but that is not true at all. It is a compromise, but it's not as bad as it appears at first look. With a fully floating input (a microphone capsule for example), you'll actually measure very different voltages at each input pin. With a 1V source, at the non-inverting input you'll measure 1V, and almost nothing on the inverting input.
While this is somewhat confronting, it actually doesn't matter. You still get the output voltage you expect (1V), and common mode noise is rejected just as effectively as any other arrangement. With a common mode signal, the current into each input is identical, and therefore, the common mode impedance must also be identical. Signal balance is not a requirement for a balanced line (although many people expect it). The thing that makes a balanced line balanced is its common mode impedance - if the impedance is equal, then the line is truly balanced.
Look closely at the Figure 6 circuit, and assume that the output of U1A is zero (which it will be for a common mode signal). We shall ignore any output DC offset, as well as the output impedance of the opamp for this exercise. With the common mode signal (i.e. applied to both inputs simultaneously), each input 'sees' a voltage divider and two 10k resistors in series. The input impedance for each input is therefore 20k (10k + 10k, and ignoring the input impedance of the opamp), so the impedances are perfectly balanced. They are different for a differential mode signal, but that doesn't matter!
For example, if a 1V common mode signal is applied to each input, the current into each is 50µA - an impedance of 20k. If the common mode signal is applied to each input via two external impedances (as may be the case with a shielded mic cable), the input current remains identical. For example, an external 10k on each input reduces the current to 33.33µA, so the impedances are quite obviously the same, provided the impedance of the source (including cable) is also balanced.
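The equal common-mode currents can be confirmed with simple Ohm's-law arithmetic. A small sketch (values follow the text; variable names are illustrative):

```python
# Common-mode input impedance check for the conventional receiver: each
# input 'sees' 10k + 10k = 20k for a common-mode signal (the opamp
# output and the grounded end of the divider are both at 0V).
v_cm = 1.0
z_leg = 10e3 + 10e3                  # 20k per input

i_direct = v_cm / z_leg              # 50uA into each input
i_with_ext = v_cm / (10e3 + z_leg)   # ~33.33uA with an external 10k in series

print(f"{i_direct * 1e6:.2f} uA, {i_with_ext * 1e6:.2f} uA")
```

Because both legs carry identical current for any common-mode source impedance (provided it is itself balanced), the common-mode impedances are equal, which is the property that matters.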
Adding input filters to remove signals much above 20kHz means that it's easy enough to ensure at least 90dB of CMRR up to any sensible frequency. A suitable filter might be 1k in series with each input, with 3.3nF to ground. The two 3.3nF caps must be very carefully matched or the CMRR will be seriously degraded. This arrangement is shown next.
Figure 7 - Conventional Balanced Input Circuit With Input Filters
The filters use R1/2 and C1/2 to create a low pass filter, tuned to 48kHz. The source must be low impedance, or the filter frequencies will be reduced, potentially leading to a loss of high frequencies. R3 and R4 have been reduced from the nominal 10k to 9k to ensure that the gain is maintained at unity, but in reality you can easily use 10k instead and accept the small gain reduction.
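The quoted corner frequency follows from the standard first-order RC formula, f = 1 / (2πRC). A quick check:

```python
import math

# First-order RC low-pass corner for the input filters described above:
# 1k series resistance with 3.3nF to ground.
r, c = 1e3, 3.3e-9
f_c = 1 / (2 * math.pi * r * c)   # ~48kHz

print(f"{f_c / 1e3:.1f} kHz")
```

Any source impedance adds directly to R, which is why a low impedance drive is needed to keep the corner where intended.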
The two caps must be exact - the absolute value isn't especially important, but the balance between them is critical if a high CMRR is expected. There are some tricks that can be used to make the filters less critical [ 2 ]. There are several integrated versions of the basic balanced input circuit that, while basically complete within the IC itself, offer no real advantage other than a smaller PCB area.
Figure 8 - Input CMRR Of Figures 6 And 7 Circuits
It's hard to argue that the result shown in Figure 8 is poor, because it isn't. The result is primarily determined by the opamp, but even with a lowly TL072 it's a good result, with a CMRR of better than 90dB up to just under 10kHz. When the input filters are added, the CMRR is better than 90dB up to any sensible frequency.
Figure 9 - Balanced Input Stage (Project 87A)
The circuit shown above is the same as that used for Project 87A. It has the advantage that the input impedance can be as high as you like, based only on the opamps' input current (which is negligible for FET input types). Common mode rejection is acceptable generally, but the optional 'phase lead' network (which must be adjusted to suit the opamp being used) improves matters. Without it, the CMRR is still 30dB at 20kHz, but the phase network can improve that to about 58dB at 20kHz. Use of input filters as shown in Figure 7 improves CMRR further if high frequency common mode noise is an issue.
R7 can be installed to increase gain. If R7 is omitted, the circuit has a gain of 6dB (x2), and it cannot be reduced without adding voltage dividers to the inputs. If R7 is 10k, the circuit has a gain of 12dB (x4). Reducing the value of R7 increases the gain further (e.g. 1k gives a gain of 26.8dB, or x22), but CMRR is reduced as the gain is increased. CMRR is reduced by (roughly) the same amount as the gain is increased, so 20dB gain means 20dB worse CMRR. If R7 is used, the phase lead circuit (if used) must be adjusted to compensate.
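The three quoted gains (x2 with R7 omitted, x4 with 10k, x22 with 1k) are all consistent with a gain law of 2 + 20k/R7. Note that the 20k figure is inferred here purely from those quoted gains, not read from the P87A schematic, so treat this sketch as illustrative only:

```python
import math

def stage_gain(r7_ohms=None, r_fb=20e3):
    """Gain of the two-opamp balanced input stage as a function of R7.
    The 20k effective feedback resistance is INFERRED from the gains
    quoted in the text (x2, x4, x22), not taken from the schematic."""
    if r7_ohms is None:            # R7 omitted
        return 2.0
    return 2.0 + r_fb / r7_ohms

for r7 in (None, 10e3, 1e3):
    g = stage_gain(r7)
    print(f"R7 = {r7}: x{g:g} ({20 * math.log10(g):.1f} dB)")
```

The same relationship shows why CMRR tracks gain: raising the differential gain does nothing for the (fixed) common-mode residual, so the ratio degrades by roughly the amount the gain is increased.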
Figure 10 - 'Super Balanced' Input Stage [ 1, 10 ]
This circuit has been described by Douglas Self (he calls it the 'Superbal'), and it was invented by Ted Fletcher [ 10 ]. It ensures that the impedance at both inputs is the same as the input resistors (10k in this case). For the most part this doesn't give quite as much benefit as you might imagine, but the equal impedances certainly help to ensure that the CMRR isn't compromised by the unequal load on each signal line. The CMRR is very slightly better than the Figure 6 circuit at low frequencies, but by 20kHz there's no difference. One thing that is not mentioned is that the output voltage is half that of the Figure 6 circuit (-6dB) because of the feedback via U1B. This is of no account for line inputs operating at +4dBu or so. Total input impedance is 20k as shown, with each input having an impedance of 10k.
Figure 11 - 'Super Balanced' Input Stage CMRR
As before, adding input filters will improve the CMRR at high frequencies. As is obvious, the CMRR is very similar to the 'conventional' version, with the only real difference being that both inputs now have the same impedance. If you happen to think that's important, then it's well worth using, but mostly it doesn't matter a great deal.
+ + +FDAs are a convenient way to make a circuit that can accept balanced or single-ended inputs, and provide balanced or single-ended outputs. This means an FDA can convert unbalanced to balanced, balanced to unbalanced, or buffer a balanced connection (balanced in/ out). Some are designed for low voltage operation (e.g. 5V) which is useful for interfacing with ADCs (analogue to digital converters) or balanced DACs (digital to analogue converters), but their input and output voltages are too limited for professional audio or anywhere that a decent signal level is expected.
+ +One example of a 'full supply' (up to ±16.5V maximum) is the OPA1632 - but it is only available in SMD packages. Many others are also unavailable in standard DIP packages, making them less suitable for DIY projects. Unlike an opamp, an FDA has two inputs (inverting and non-inverting) and two outputs (also inverting and non-inverting), and two separate feedback paths are used. The same caveats regarding resistor tolerances apply with the FDA, so if maximum CMRR is expected, close tolerance parts are essential. Most also provide the ability to have DC offset correction, or to create a fixed DC offset to match the requirements of ADCs.
+ +While these devices appear to be the answer to all your balanced/ unbalanced conversion woes, they have similar limitations to maximum CMRR at high frequencies as you find with opamp circuits. Some are designed to handle video, so have a much wider bandwidth than most opamps (up to 200MHz for unity gain), but sadly there are usually no graphs that show input and output CMRR with respect to frequency. In use, I doubt that any will be found wanting, other than those using 5V supplies. These are not suitable for general purpose audio line-in or line-out applications.
+ +The equivalent circuit is fairly convoluted, and while I show the (claimed) equivalent circuit for the OPA1632 below, I do not propose to go into great detail by way of explanations. The equivalent circuit is 'functional', in that it shows the internal functionality, but the reality is somewhat different. The circuit shown has been simulated and it works, but the output voltage is limited to around ± 3.7V rather than ±12V (with 15V supplies) that you'd normally expect. Note that this is for the simulation - the 'real' OPA1632 extends to at least ±12V with 15V supplies, depending on the load impedance.
Figure 12 - OPA1632 Functional Equivalent Circuit [ 6 ]
Essentially, the circuit is similar to that for an opamp. The difference is that rather than providing a single output, there are two, with one for each input. When two independent feedback paths are added, it allows very flexible input and output options. The VOCM input allows the designer to set a specific DC common mode voltage where this is needed. If not used (and assuming dual supply voltages), the common mode voltage will be set to zero by grounding the VOCM input.
Figure 13 - General Usage Of FDA (OPA1632 Pinouts Shown, PSU Pins Not Included)
An unbalanced input can be applied to either input pin, and the other is grounded. A balanced input is applied to both input pins, and no ground is needed, although a DC path to ground must be provided. If an unbalanced output is needed, the output can be taken from either output pin, and the unused one is left floating. This means that the one IC can be used for balanced in to unbalanced out, unbalanced in to balanced out, or balanced in to balanced out. Whether the output is 'true' unity gain depends on your expectations, despite the equal value input and feedback resistors. As shown, the gain is 'unity' only in that a 1V unbalanced input provides a 1V balanced output, and a 1V balanced input gives a 1V balanced output. The actual voltage on each output pin is 500mV, and since the two outputs are 180° out of phase, the differential output is 1V.
+ +The gain is changed in the same way as with an inverting opamp circuit. If the feedback resistors (R3 and R4) are made twice the value of the input resistors (R1 and R2), the gain is two. Input impedance at each input is 10k as shown. If a higher input impedance is needed, you'll have to add input buffers or increase the value of the input and feedback resistors, which will increase resistor thermal noise. The two 100 ohm resistors at the outputs serve the same purpose as they do with an opamp circuit, and isolate the output pins from capacitive or resonant loads (such as cables).
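The gain relationships above can be sketched with an idealised model (resistor values assumed from Figure 13; real devices add offsets, noise and bandwidth limits):

```python
def fda_outputs(v_in, r_in=10_000, r_fb=10_000):
    """Idealised FDA: differential gain is Rf/Rin, with each output pin
    carrying half the differential swing, in antiphase."""
    v_diff = v_in * (r_fb / r_in)
    return v_diff / 2, -v_diff / 2, v_diff

# 'Unity' gain: 1V in gives 500mV on each output pin, 1V differential
pos, neg, diff = fda_outputs(1.0)
# Feedback resistors at twice the input value give a gain of two
_, _, diff_x2 = fda_outputs(1.0, r_fb=20_000)
print(pos, neg, diff, diff_x2)
```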
+ +In the case of FDAs, there is no substitute for the datasheet and/ or application notes. I could ramble on for many paragraphs trying to explain the things you can (and can't) do with an FDA, but most of it would have to come from the datasheet anyway. It may take a while to understand the many options that may be available, but it's worth persevering if you need a single-chip solution to balanced and unbalanced conversions. Also, bear in mind that FDAs are usually fairly costly, and it will usually be cheaper to use opamps for 'line level' applications.
Figure 14 - Fully Differential Amplifier Using Opamps
You can create an FDA using a pair of opamps as shown above. Quite a few resistors are needed, and as before they must all be close tolerance. The circuit is simply a pair of differential input amps, with the inputs cross-coupled. The input can be applied to either the '+In' or '-In' terminals, with the unused terminal grounded. A balanced input can be applied between the two inputs in exactly the same way as an integrated FDA. No additional ground reference is needed because of R2 and R6, so a balanced source can be fully floating.
Figure 15 - Output/ Input CMRR Of FDA Using Opamps (Exact Values)
With the values shown, the input impedance is 6.67k to each input (13.34k for a floating balanced input), and the circuit has unity gain at each output (so it has an overall gain of x2). Simulated output CMRR is better than 80dB up to 40kHz, but of course that's using resistors of exact values. Input CMRR is better than 70dB up to 40kHz (again with exact values). You will never achieve the best case performance even with 0.1% resistors, but input and output CMRR can be expected to be better than 40dB with 1% resistors throughout. (A simulated 50-step Monte Carlo analysis with all 10k resistors varied by ±1% shows worst case input and output CMRR to be 40dB.)
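The sensitivity of CMRR to resistor tolerance can be illustrated with a quick Monte Carlo sketch. Note that this models a simple single-opamp difference amplifier with an ideal opamp (not the exact Figure 14 circuit), so the numbers are indicative only:

```python
import math
import random

def diff_amp_cmrr_db(r1, r2, r3, r4):
    """CMRR of an ideal-opamp difference amp: R1/R2 form the inverting leg,
    R3/R4 the non-inverting divider. Common-mode gain exists only because
    the resistor ratios don't match exactly."""
    a_dm = r2 / r1                                        # nominal differential gain
    a_cm = (r4 * (r1 + r2)) / (r1 * (r3 + r4)) - r2 / r1  # residual common-mode gain
    return 20 * math.log10(a_dm / abs(a_cm)) if a_cm else float("inf")

random.seed(1)
TOL = 0.01   # 1% resistors
worst = min(
    diff_amp_cmrr_db(*(10_000 * (1 + random.uniform(-TOL, TOL)) for _ in range(4)))
    for _ in range(1000)
)
print(f"Worst-case CMRR over 1000 trials: {worst:.1f}dB")
```

The theoretical worst case for 1% parts in this simple circuit is about 34dB, which is broadly consistent with the ~40dB quoted above for the full FDA arrangement.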
+ +This is a versatile circuit, and will work well for either balanced to unbalanced or unbalanced to balanced conversion. However, it does have a relatively low input impedance because the source has to drive the inputs of both opamps. In most respects, it should work at least as well as an integrated FDA, but of course it doesn't have provision for DC offset. While this could be added, there's no point for a normal line driver or receiver.
While a transformer may (theoretically) provide infinite CMRR for inputs or outputs, the reality is different. By necessity, transformers have at least two windings, which are coupled by magnetic induction. However, there is also some capacitive coupling between the windings, and this degrades the common mode rejection, especially at high frequencies. Transformers are also only usable over a relatively limited frequency range, with perhaps 3 decades (say from 20Hz to 20kHz) being readily achievable (with a little to spare for a well designed component). An electrostatic screen between primary and secondary helps minimise capacitive coupling.
If exceptional common mode performance is needed, the transformer almost certainly needs to be driven by a balanced driver (or followed by a balanced receiver for an input circuit). If there is significant capacitive coupling between the windings, using balanced drivers or receivers won't help a great deal - if at all.
+ +One thing a transformer (even a cheap one) does provide is galvanic isolation. This means that there is no ohmic connection between the windings, and this isolation barrier may be used for electrical safety and/or to isolate sensitive circuitry from a hostile external environment. Naturally the transformer must be rated for the degree of isolation required, so using a cheap 1:1 10k transformer (around $2 to $3 on-line) for 230V mains isolation is not an option.
Unfortunately, decent transformers are expensive, and this limits their usage in many cases. It's also unfortunate that CMRR is usually very good (even for cheap types) at low frequencies, but falls at high frequencies due to inter-winding capacitance. This is also where opamp circuits are limited, so the benefits might not be as great as hoped for. It's very difficult (mainly time consuming) to build a simulation model for a real transformer, so I measured one that I have to hand. It's nothing special (quite the reverse in fact), and is nominally 10k 1:1 ratio. Inductance measured 200mH, but is actually higher because inductance meters usually don't work well with transformers. Winding resistance is 130 ohms, and there's 3.6nF capacitance between primary and secondary.
+ +It's the capacitance that ruins everything. CMRR at 100Hz is excellent (as expected), but at 10kHz it's only 30dB, falling to 21dB at 30kHz. Adding a balanced opamp stage at the transformer's secondary is not as helpful as you would hope, because the opamp's CMRR is poor at high frequencies too. The only way to get good results at high frequencies is to use a transformer with an electrostatic shield between the windings.
In general, transformers should be driven from the lowest practicable impedance. There are advantages to using negative impedance [ 8 ], but this isn't always practical or even possible. The primary winding resistance means that even a transformer driven from a zero ohm source still has a defined source impedance (the primary winding resistance). Negative impedance can be used to cancel most of the winding resistance, allowing closer to zero ohm source impedance.
It's no accident that when valve circuitry was the standard, balanced inputs and outputs were transformer based. Valves simply don't have the gain to allow much feedback, and the matching between them isn't good enough to rely on without trimming (which may be needed several times during the life of a set of valves). Their output impedance is too high to drive a nominal 600 ohm line without a transformer, and they are best avoided in this role.
Of course, it is possible to use valves, but the performance will never even approach what you can get with ICs - dedicated or otherwise. Even a 'solid-state' discrete design will be vastly superior to any attempt at an 'equivalent' valve circuit without a transformer. The cost will also be a great deal higher and power consumption much greater. There are no sensible reasons to even try to use valves for direct-coupled balanced drivers or receivers.
We often expect perfection (or something close) with electronic circuits, but as shown this is an impossible dream. What we can achieve is results that are 'good enough', which doesn't mean they are inadequate - it means that even if they were significantly better we would (probably) not hear any difference. Many of the common circuits have been in use for years, and there's no evidence that the 'limitations' cause problems in a well set up system.
+ +Ultimately, circuit design (as with most engineering) is an exercise in compromise. This can even be classified as an 'art form', because the designer has to trade many limitations with many others. The 'art' comes into play to decide those parameters that have the least effect on the desired outcome, all the while ensuring that the circuit remains within budget and is practical. Even ignoring the budgetary constraints for commercial products doesn't mean that the outcome is 'better' than it would be if all parts used were of the highest specification possible. If the end-user can't hear a difference (in a double-blind test of course), then the extra cost and complexity is wasted.
In particular, it's pointless designing any circuit that requires parts that are difficult to obtain (especially obsolete components). In essence, using 'unobtainium' parts is equivalent to basing a design on an IC that hasn't been invented yet, and probably never will be. Compromise is essential for both manufacturers and hobbyists, or the project is doomed to failure because no-one can get the part(s) for it.
+ +If you are happy to use 0.1% resistors (they do tend to be rather expensive - most are over AU$1 each), then your overall CMRR can be improved further. For those with an unlimited budget (there aren't too many), you can get 0.01% tolerance, but for those you pay very dearly indeed (typically over AU$20.00 each!). Of course you can select resistors from the 1% range, but thermal stability may not be as good as true precision components.
+ +Where noise (hum loops in particular) is especially troublesome, often a transformer is the only option. One positive is that only one end (either the line driver or receiver) needs a transformer, and the other end can be 'electronically balanced' using one of the circuits shown here or in the other referenced pages. This arrangement maintains a balanced connection, and includes the galvanic isolation of the transformer. If noise persists, it's far better to find out where it's coming from and fix the source of the noise, rather than trying to keep it out of cables, preamps, etc.
+ +A transformer is the only option if there is a significant voltage differential between the source and destination circuits. This may be due to earth (ground) current, different mains circuits wending their way back to the switchboard, and possibly with the mains being derived from different phases of a three-phase installation. Some circuits require extreme isolation (medical instruments being a case in point), and even a 'conventional' audio transformer will probably be incapable of providing the safety rating (and pass all relevant standards) required. This is another topic altogether of course.
+ +Dedicated ICs can provide very good results, but most of the time they aren't necessary. With signal levels of around 1V, even a troublesome system may only have a few millivolts of common mode noise. If this can be reduced by 40dB (generally fairly easy to achieve even with unmatched 1% resistors), then the noise voltage is reduced 100-fold. Even 10mV of noise is reduced to 100µV, and the 40dB signal to noise ratio (1V signal, 10mV noise) is increased to 80dB. By means of careful circuit layout and well made (and sensibly run) cables, background noise can be all but eliminated.
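The arithmetic is easily checked with ideal dB relationships (a sketch only - real-world rejection varies with frequency, as discussed above):

```python
import math

def attenuate(v_noise, rejection_db):
    """Noise voltage remaining after the stated common-mode rejection."""
    return v_noise / 10 ** (rejection_db / 20)

def snr_db(v_signal, v_noise):
    return 20 * math.log10(v_signal / v_noise)

residual = attenuate(10e-3, 40)                    # 10mV noise, 40dB rejection -> 100uV
print(residual)
print(snr_db(1.0, 10e-3), snr_db(1.0, residual))   # 40dB SNR improves to 80dB
```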
+ +There isn't always a choice of course. Some systems are used for outside broadcasts and similar, where the environment can be particularly hostile. When this is the case you may have no alternative to a transformer. Not only does a quality part provide good common mode noise reduction, but the galvanic isolation protects the electronics from the evils of the outside world. It's rare that transformers are needed in a domestic installation, but if all else fails this may be your only solution to intractable noise problems.
References
1 Balanced Interfaces - Douglas Self
2 Balanced Interfaces - Bill Whitlock
3 Balanced Line Driver with Floating Output - Uwe Beis, Rod Elliott
4 Projects 87A and 87B - Balanced Line Drivers & Receivers
5 Instrumentation Amplifiers Vs. Opamps - Rod Elliott
6 OPA1632 FDA Datasheet - Texas Instruments
7 Fully Differential Amplifiers - Texas Instruments
8 Negative Impedance - What It Is, What It Does, And How It Can Be Useful
9 INA1650 - Texas Instruments
10 Ted Fletcher's Website - (Inventor of the 'Superbal' circuit)
Elliott Sound Products - Bench Power Supplies
A bench supply is one of the most useful pieces of test gear you will ever own. Building one intended for testing preamps and other low voltage, low current equipment is one thing, but making one that's suitable for testing power amps is another matter altogether. In reality, it's so difficult to get right that the likes of the late Bob Pease recommended to his fellow engineers and others that they don't even try. His advice was to buy one from a reputable supplier, and not put yourself through the grief of spending many hours building one, only for it to blow up the many expensive parts used to build it [ 1 ].
+ +In many ways, it's hard to disagree, and doubly so if you want to get voltages of more than 20V at a couple of amps. These days, the problem is doubled, because to be truly useful, the supply needs to be dual tracking, with both positive and negative supplies, with an output voltage that can be varied from zero to perhaps 25V or so. It ideally needs to be capable of at least 3A output, and with current limiting so you don't kill the supply the first time the output leads are shorted together (and that will happen!).
+ +In essence, there's actually not that much difference between a power supply and a power amplifier, except that a power amp has to source and sink current, while a power supply only has to source current to the load. However, where a power amp will be subjected to fairly high dissipation every so often, a power supply has to be capable of providing perhaps 3-5A output into a short circuit, and not fail. This is a great deal harder than it seems.
+ +Consider a supply that can provide 40V at 5A, but is set for an output voltage of perhaps 1-2V and a current of 5A. The internal voltage will be around 50V, so there's nearly 50V across the regulator transistors, 5A of current, resulting in a dissipation of 250W. This might continue for hours at a time or only a few minutes, but that doesn't mean that you only have to allow for a few minutes, because one day you will need 1-2V at 5A for an hour or more.
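The arithmetic behind that worst case is trivial but worth keeping in mind (a sketch that ignores rectifier and transformer losses):

```python
def pass_dissipation(v_internal, v_out, i_out):
    """Power dissipated in the series-pass devices of a linear regulator:
    everything not delivered to the load is burned as heat."""
    return (v_internal - v_out) * i_out

# ~50V internal rail, output wound down to ~0V at 5A: the pass devices
# must survive this continuously
print(pass_dissipation(50.0, 0.0, 5.0))   # 250.0 (watts)
```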
+ +No-one ever knows exactly what they'll do with a decent power supply until they have one, and it will end up being used to power amplifiers during testing, charging batteries, measuring very low resistances, or any number of other possibilities. I know this because that's what I do with mine (which I built many, many years ago, but it only provides ±25V at up to 2.5A). I've lost count of the number of times the thermal overload circuit disconnected my load, even with a fan for forced air cooling.
+ +It's commonly accepted that bench supplies should be regulated, and herein lies the problem. Regulation adds complexity and can create stability issues that vary from merely vexing to intractable. No-one wants a power supply that oscillates, nor does anyone want a power supply that kills the device being tested (or charged, measured, etc.). In reality, regulation (or at least 'perfect' regulation) isn't essential. Most power amplifiers don't use regulated supplies, and nor do many other high-current loads. You need to be able to adjust the voltage, and it should be reasonably stable, but ensuring that the output voltage only changes by a few millivolts under load is not needed for most applications. It might make you feel better if the supply has perfect regulation, but your circuits mostly won't care.
+ +Current limiting is another matter though. Ideally, when first powered, your latest project needs to be protected in case there's a fault. Like voltage regulation, the current limiting function needs to be adjustable, but it's rarely necessary for it to require extremely accurate current regulation. If we accept that very accurate voltage or current regulation is not essential, that simplifies the design and makes it a great deal easier to build and get working with the minimum of fuss.
+ +Few people want to mess around for ages trying to perfect a regulator that wants to oscillate, and this will be the case if 'perfection' is the goal. If that's what you really do need, then I must agree with Bob Pease absolutely - buy a commercial supply from a reputable manufacturer. However, you'll likely be up for some serious money if you need dual tracking, high voltage (over 30V) and high current (5A or more).
+ +A generally useful supply will have dual outputs, variable from 0 to 25V or so, with adjustable current limiting. Ideally, it will let you use the two outputs in series, allowing a single supply variable from 0 to 50V. 5A output is useful, but not essential. If you use it for testing DIY audio equipment (preamps, active crossovers, power amplifiers, etc.), then you can verify that the DUT (device under test) functions as expected, has no shorts or other major faults, after which it can be confidently connected to the intended power supply. It's uncommon for any competent design to fail with its 'real' power supply if it's been tested at a lower voltage, using a supply with current limiting that protects against damage if there is a problem.
+ +An expansion of the 'basic' power supply is something called an SMU (source-measure unit). These are usually high accuracy, microprocessor controlled supplies, and are able to source and sink current of either polarity. Most supplies only source current to the load, but an SMU can also be used as an 'active load', typically for power supplies or other equipment being tested. These are also known as '4-quadrant' power supplies, meaning they are designed to source or sink current of either polarity. Fortunately, this is not a requirement for basic testing, and is mentioned only in the interests of completeness. I do not propose to cover these supplies in this article.
Please note that this is not a construction article. Although it does show schematics, these are primarily for demonstration purposes, and there is no guarantee that they will function properly as shown. While they have been simulated, this only indicates that the underlying principles are sound - it does not mean that the circuit will perform as expected in 'real life'. While the circuits described do look as though they will function well, this has not been verified by building and testing them!
It's not an accident that there aren't that many DIY projects for bench power supplies. Most people come to the realisation fairly quickly that it's a very expensive exercise, and that getting a fully working, reliable supply that does exactly what you need is not a trivial undertaking. The circuits shown here are for inspiration, and are provided mainly to give you an idea of the complexities involved - even for apparently simple circuits.
The primary function of any bench power supply is voltage regulation, but current regulation is also very useful. Both are described below.
The first regulated power supplies used valves (vacuum tubes), with a gas discharge regulator as the reference voltage. Predictably, they weren't very good because of the limited gain available. A few basic examples are shown below, with the opamp version being a fairly good analogue to the modern 3-terminal regulator ICs. These all suffer from a problem that makes them (generally) unsuitable for a bench supply - they can't get down to zero volts output.
+ +When testing something that's just been built, it's important to be able to start with a very low (preferably zero) voltage, and monitor the current as the voltage is increased. If you see the current climbing rapidly with a supply voltage of only a volt or so, you know there's a problem. Including current limiting (covered a little later) means that fault current can be kept to a value where it's unlikely to cause damage.
+ +
Figure 1.1 - Basic Voltage Regulator Topologies
The series pass device is V1/ Q1, and the controlling element is V2, Q2 or U1 (valve, transistor and opamp respectively). The voltage reference for the valve circuit is a gas discharge tube, and these typically had a voltage of around 90 volts (depending on the device, voltages from 70V to 150V were available [ 5 ]). The transistor circuit uses a zener diode, and the opamp circuit is shown with an external reference. Feedback is used in each case, and VR1 lets you set the voltage to the desired value. These are the basic versions of a regulator in each case, and there are many variations in practice.
+ +The feedback is arranged so that if the output voltage falls (due to a load being connected for example), the controlling device ensures that the series pass element can pass the extra current needed to supply the load at the desired voltage. The ability of any of the circuits to maintain the desired voltage is called the 'regulation', expressed in percent. For example, if the voltage falls by 1% when the load is connected, that forms the specification for the regulator. Higher gain in the control and series pass devices means better regulation.
There's an extra transistor and resistor in the opamp version. 'Rs' is a current sense resistor, and Q2 is the current regulator transistor. If the current is such that the voltage across Rs is greater than 0.6V, Q2 turns on and 'steals' the base current from Q1 (provided via R1). This is the most basic form of current regulation, and it works surprisingly well in practice. If Rs is 1Ω, the output current is limited to about 650mA if the output is shorted (limiting starts as the load tries to draw more than about 600mA). While basic, this arrangement has been used in countless discrete regulator designs over the years.
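The limit current for this scheme is simply Vbe/Rs. A sketch (the ~0.65V 'reference' is an approximation, and drifts with temperature and transistor type):

```python
def limit_current(r_sense, v_be=0.65):
    """Current at which the limiter transistor turns on and starts
    stealing base drive from the pass transistor."""
    return v_be / r_sense

print(f"Rs = 1 ohm: limit ~{limit_current(1.0) * 1000:.0f}mA")
print(f"Rs = 0.1 ohm: limit ~{limit_current(0.1):.1f}A")
```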
+ +Predictably, the opamp version will have far better regulation than the other two, because it has extremely high gain. Most modern 3-terminal regulator ICs use a similar (but optimised) topology, and the reference voltage is generally a 'band-gap' arrangement with very high stability. Two values are provided for regulation - 'line' and 'load'. Line regulation is a measure of how much the output changes as the input voltage is varied, and load regulation is a measure of the change of output voltage as the load current is changed. If you look at the data sheet for any 3-terminal regulator, this info is provided, but not always as a percentage - sometimes it's shown as ΔV (change of voltage), usually in millivolts. Most are better than 1% (line and load).
+ +There are many factors that need to be considered in any voltage regulator circuit. One of the hardest to get right is stability, to ensure that the circuit has a fast reaction time, but without oscillation. Using an opamp driving a current amplifier (typically an emitter follower) will usually be stable, but if any additional gain circuits are used within the feedback loop, it will almost certainly oscillate. This means additional components have to be added (usually low-value capacitors), and their optimum location isn't usually immediately apparent. Examples can be seen in Figure 6.1 (single supply, opamp with emitter follower output) and Figure 7.1 (dual supply), where the opamp is followed by a gain stage. Given that most 'ordinary' opamps are limited to a supply voltage of less than 36V, this limits the available output voltage when a gain stage is not included.
+ +In some respects, a power supply is not unlike an audio power amplifier. The only real difference is that amplifiers can source and sink (absorb) current, whereas a power supply only has to source current to the load. Indeed, a perfectly capable regulator circuit can be built using the common power amplifier building blocks. However, power amplifiers aren't expected to drive capacitive loads, where voltage regulators must be capable of driving any load, whether capacitive, resistive or inductive. Of course, a power supply also needs to protect itself from damage (shorted outputs or very low impedance loads), and it must be able to deliver its rated current into any load at any voltage. Series pass transistor dissipation can be extreme, but the supply must carry on regardless. Compared to power supplies, power amps are simple!
A supply with current regulation used to be fairly uncommon, but it's very useful for a number of reasons. If the maximum current is set just above the expected current drain of the circuit being tested, then if there's anything wrong the current regulator will limit the maximum current (as its name suggests), and hopefully prevent damage to the DUT. There are only a few techniques that are commonly used for current regulation, almost always using a sense resistor. The voltage across the resistor is monitored, and if it exceeds a preset value, the output voltage is reduced to maintain the preset current.
+ +One of the most common tricks is to use the base-emitter voltage of an ordinary BJT (bipolar junction transistor) as the 'reference', being roughly 0.65V. If the voltage across the sense resistor increases beyond that, the transistor turns on, and (by one means or another) reduces the voltage. Any attempt to draw more than the preset current results in the voltage falling further. This isn't a precision approach, but it's usually 'good enough'. The voltage can be amplified by an opamp (either IC or discrete) if necessary.
Figure 1.2 - Basic Current Regulator
Q3 and Q4 perform current limiting. The voltage across RS (the current sense resistor) causes Q3 to conduct, thus turning on Q4. This pulls down the reference voltage, reducing the output voltage, and therefore the current. A number of different sensing methods are shown further below, but this is one of the simplest. Because there are two transistors involved, the gain is quite high, so the current limit is 'brick wall'.
One way to make a very robust power supply is to use a high-power transformer based supply, and control the voltage using a Variac (see Figure 4.1). This is unregulated, but it's the simplest way to create a high power supply that can be used with almost any amplifier (or other projects, including power supplies). There's no over-current protection (other than fuses), but I have a couple of supplies that use this exact configuration. When I need lots of voltage and current, these supplies are invaluable. However, one needs to be certain that the unit under test has no inherent fault(s) first. This ideally requires current limiting. While 'safety' resistors can be used in series with the positive and negative supplies for initial tests, this is a nuisance.
+ +Most (nearly all in fact) of my initial tests are done using a zero to ±25V, 2A dual tracking supply that I designed and built about 35 years ago (at the time of writing, and it's still working). It has current limiting down to about 100mA, and has a fan for the heatsink, along with an over-temperature shut-down. These are needed because it does get used for 'strange' applications, and yes, the output(s) have been shorted many times - usually by accident, but sometimes because there's a fault in the item being tested. Something as simple as a small solder bridge can spell doom for a power supply that can't protect itself.
+ +The dissipation problem was discussed briefly above, and this is the Achilles heel (as it were) of all high current linear supplies. The answer (of course) is to use a switchmode design, but that is so far outside the scope of normal DIY that it doesn't warrant consideration. Every issue faced by a linear regulator is raised to the 'nth' power for a switchmode supply. Those you can buy have undergone considerable development, and use specialised parts that are not suited to a DIY approach. Unless you are capable of designing and building switchmode transformers, then it's out of the question altogether.
+ +If you have a linear supply that can provide up to (say) 50V at 5A, the best case dissipation at full current with a shorted (or low voltage) output is 250W, but in reality it may be a great deal more. If you think that's fairly easy (there are transistors rated for 250W dissipation after all), think again. The SOA (safe operating area) and thermal limits come into play very quickly, and a transistor with (for example) 56V across it may only be capable of 3A or so, based on a case temperature of 25°C. Ultimately, you will need to provide enough transistors to be capable of handling at least twice the power dissipated, and preferably more. My suggestion would be to use a minimum of 5 × 125W transistors, and while that sounds like overkill, in most cases it will suffice - there's some reserve, but not very much! A lower voltage reduces stresses, and I know from many years of experience that ±25V is usually sufficient for most tests.
At higher voltages, if you used 5 × TIP35C (NPN, 125W at 25°C), they can each pass 1A with 50V across the transistor (50W), but only at 25°C. At elevated temperatures, that is reduced, falling by 2W/ °C above 25°C. At a case temperature of 75°C, total dissipation is limited to only 25W for each transistor. That rules them out of contention with a simple scheme, because the dissipation will exceed the maximum allowable as the heatsink becomes hotter. Of course, you can use far more robust transistors, but they will be commensurately more expensive. The TIP35C (125W) is around AU$3.00, vs. over AU$5.00 for the MJL3281 (200W) and more than AU$6.00 for the MJL21194 (200W).
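A sketch of that derating arithmetic, using the figures quoted above (125W at a 25°C case, derated at 2W/°C):

```python
def pd_max(t_case, p_25=125.0, derate_w_per_c=2.0):
    """Allowable dissipation at a given case temperature, per the linear
    derating quoted in the text; clamped at zero."""
    return max(0.0, p_25 - derate_w_per_c * (t_case - 25.0))

for t in (25, 50, 75):
    print(f"{t}C case: {pd_max(t):.0f}W per transistor")
```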
+ +All of the available devices have the same limitations - SOA and temperature always mean that you can get far less power from any transistor than you expect. Forced air cooling is mandatory unless you have access to an infinite heatsink, which in my experience are hard to come by. Even using insulating washers may become impractical, because the additional thermal resistance means that the transistors have to be de-rated even further. In turn, that means a 'live' heatsink, sitting at the full supply voltage. Should it come into contact with an earthed chassis, the result will be a very loud Bang ! As you should now be aware, there are so many things that can go wrong that the advice to buy a commercial supply starts to look very sensible indeed.
+ +Then (of course) there's the transformer. After that there's the high current bridge rectifier, followed by filter capacitors. All of these need to be very substantial, with a 500VA transformer, 35A bridge, and at least 10,000µF of capacitance. Just the hardware (transformer, bridge rectifiers, filter caps, heatsinks and power transistors) will probably cost at least AU$200 - or more. You still don't have a chassis/ case, pots, knobs and ancillary parts, including mains and DC connectors, meters, etc. Remember that for a dual supply (the only kind that's really useful), everything is doubled. You'll be up for at least AU$400 just for the basics, and closer to AU$600 by the time everything is included. If this hasn't convinced you that a commercial supply is worthwhile, then nothing will.
+ +If you were to look at a major supplier (such as RS Components, Element14, etc.) you'll find dual supplies that can do 0 to ±30V at 5A, or 0-60V if the two outputs are wired in series. These may not be in the same league as Tektronix, Keysight or other 'laboratory' equipment makers, but the cost is less than for the major parts alone if you were to try to build your own. While the maximum voltage is less than ideal, I know from years of experience that up to ±30V is quite sufficient for basic testing, and all power amps shown in the projects section were tested with my ±25V supply before being connected to my 'monster' Variac controlled supply (which can deliver up to ±70V at around 10A or more).
+ + +Many of the true lab supplies use digital (via a keypad) entry for the essential parameters. For general use, this is an absolute pain in the bum! Using ordinary knobs and pots is a far better option most of the time, because the effect is immediate. It's common for lab supplies to use a rotary encoder to control the current or voltage, but you have to select the function first, and it may take several full turns to cover the full range.
+ +If something starts to get hot in your test circuit, the last thing you need is to have to press a multitude of buttons or turn a knob ten times to reduce the voltage. With a standard pot, one twist anti-clockwise and the voltage is back to zero. You will never know just how frustrating keypad entry really is until you need to change something quickly. Ideally, there would be a 'ZERO' button to turn off the output, but I've not seen a digital supply that has one. Reading rapidly changing current on a digital display is simply not possible unless it features an averaging function (which will be buried three levels down in a menu - somewhere).
+ +Having used bench supplies all of my working life, I can say with certainty that 'ordinary' pots are more than satisfactory for normal test purposes. Extreme accuracy is rarely essential for most testing, and if by some chance you do need a very accurate voltage or current, it's easy enough to build a separate regulator. Mostly, you won't need it, and if the supply is accurate to within a volt or so, that's almost always good enough. Obviously you need to be careful if you need 3.3V or 5V for logic circuits, but they will often have their own regulator, and will work with 7-12V quite happily.
+ +Digital displays and controls can also give a false sense of security, because we tend to believe meters that display voltage and current down to a couple of decimal places. However, unless they are properly calibrated (against a known accurate meter), they could easily tell you that the voltage is 5V when it's really 5.5V or 4.5V. Because all digital systems ultimately rely on DACs and ADCs (digital to analogue and analogue to digital converters), they require an accurate reference voltage. If that goes awry for some reason, then all readings are meaningless.
+ +For this reason, I do not cover digital control systems here. Control of voltage and current remain in the analogue domain - they are analogue functions, and adding another complication is not necessary. Fairly obviously, at least some of the ideas shown can be adapted to digital control, but I don't show any examples.
+ + +The simple arrangement shown in Figure 1.2 belies the likely difficulty of implementing a good current limiter, and this is where things can get difficult. There are two choices - 'high-side' and 'low-side' sensing. 'High-side' means monitoring the current in the positive and negative outputs, and is complicated by the fact that this voltage is not only variable, but also at a voltage that's usually incompatible with opamps. You can't expect an opamp to have its inputs at perhaps 30V or more, since that's generally the maximum operating voltage. This isn't a trivial issue to get around, and it's generally better to monitor the current before the series pass transistor(s) so the voltage doesn't vary so much. However, this makes the voltage problem worse, because the unregulated supply will typically be around 35V or more - well over the range for any low cost opamp.
+ +A simple 'high-side' current limiter is shown in Figure 1.1 ('Opamp' version), but it's not as simple as it looks. It's difficult to make it variable without using an unrealistically large sensing resistor, and accepting that you will lose significant output voltage across the resistor, which will also get hot. A switched scheme is shown in Figure 7.1, and while this certainly works, it's not particularly accurate and nor is it the most practical.
+ +'Low-side' sensing gets around that problem, but it can only be used for a single supply. Sharing a low-side sensing circuit between the positive and negative supplies won't work, because most of the supply current flows between the +ve and -ve outputs, often with little flow in the common connection. It can be done, but it's far from ideal, especially if a single pot is to be used for setting the voltage (a dual tracking power supply). The Figure 6.1 circuit uses low side sensing, and it will still work on both polarities of a dual supply because the outputs have their common point after all regulation.
+ +There are specialised ICs available to get around the high-side current sensing problem. Three 'demonstration' high-side current sensing circuits are shown below. However, these are all shown with a positive supply only. The first two can be used in the negative supply (assuming a complementary design such as Figure 7.1), but the IC version cannot. There doesn't appear to be a solution for that particular problem.
+ +
Figure 3.1 - High-Side Current Sensing Circuit
A current mirror (Q1 and Q2) is used to sense the current through the sense resistor (R1, 100mΩ), and the output is level-shifted by the resistor network. The output is monitored by opamp U1, which is set up as a differential amplifier. VR1 is included so that the zero point can be set (i.e. zero output voltage with zero current through R1). The opamp is deliberately set up with a bit more gain than it needs, and the output is scaled with VR2. As shown, the circuit will provide an output of 1V/A, so at 2A current, the output is 2V. The arrangement shown is fine for up to 5A, and for higher currents, the values of R2 and R3 need to be increased.
+ +While this circuit is capable of high accuracy, it's also very susceptible to temperature variations between Q1 and Q2. Ideally, these would be a 'super-matched pair' in a single package, but these can be difficult to find and while inexpensive, most are now available only in an SMD package. Naturally enough, a similar arrangement can be used without the current mirror, but sensitivity is reduced and the maximum allowable voltage is also lower. The current mirror can handle an input voltage of 50V easily, but the simple differential opamp circuit is limited to about 40V. Higher voltage is possible by increasing the value of R2 and R3, but that reduces the sensitivity even more.
+ +If you were to use the Differential Amplifier circuit, the output voltage varies between zero and 250mV for a current between zero and 2.5A. Sensing current below 100mA (10mV output) is difficult. Of course, you can increase the value of the sense resistor, but at the expense of power dissipation. At 2.5A, a 100mΩ resistor dissipates 625mW, but to get the same sensitivity from the differential amplifier you'd need to use a 1Ω resistor, which will drop 2.5V and dissipate 6.25W. This is clearly a fairly serious compromise. There's also the ever-present issue of opamp DC offset, which may also need to be addressed if you need to regulate to low current (anything below about 100mA is a challenge).
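The trade-off described above is just Ohm's law and I²R, and is easy to tabulate. The short sketch below reproduces the 100mΩ and 1Ω cases from the text:

```python
# Trade-off between sense resistor value, sensitivity and dissipation.
def sense_resistor(r_ohms, i_amps):
    """Return (voltage drop in V, power dissipated in W) for a shunt."""
    v_drop = i_amps * r_ohms           # V = I * R
    p_diss = i_amps ** 2 * r_ohms      # P = I^2 * R
    return v_drop, p_diss

print(sense_resistor(0.1, 2.5))   # 100 mohm: 250 mV drop, 625 mW
print(sense_resistor(1.0, 2.5))   # 1 ohm:    2.5 V drop,  6.25 W
```

Raising the resistor tenfold buys ten times the sensitivity, but the dissipation (and lost output voltage) grows with it, which is exactly the compromise the text describes.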
+ +In case you are curious as to the use of a -1.2V supply for the opamps, this ensures they can get to zero volts at the output. The LM358 can (allegedly) get its output to almost zero, but in reality it doesn't quite make it. The small negative voltage allows it to get to zero easily. Most other opamps will not allow such a small negative supply, and will require around -5V to work properly. This will take many above their recommended operating voltage if a 30V supply is used as shown.
+ +In all cases, it's imperative that the input voltage remains within the specified range for any opamp used in this role. With a 30V supply, the inputs should always be at least 4V above the minimum supply voltage, and 4V below the maximum. Whenever possible, the input voltage should be close to 15V (assuming a 30V supply).
+ +A simple solution that can be applied to the simple (one opamp) high-side sensor is to use switched resistors instead of a single fixed value. For example, 100mΩ is fine for higher currents, and you can switch to a 1Ω resistor to allow accurate setting for lower currents (less than 1A for example). This adds another switch, but it also simplifies the design, and opamp DC offset is much less of a problem when you need a low current limit.
+ +There are several special purpose ICs available for high-side current sensing, with one shown in Figure 3.1. These include the LT6100, INA282 and several others, but they are only available in SMD packages, making them rather unfriendly for DIY applications where a PCB is not available. These are very accurate, and allow the voltage of the current monitored supply line to be much higher than the IC's supply voltage. In common with most SMD ICs, they are often only available in packs of five or more, and they aren't exactly inexpensive. If you wanted a dual supply (±25V for example), there is no negative version of these current shunt amplifiers, and this creates additional complexity. The INA282 can (apparently) sense a negative voltage, but it can't exceed -14V. The gain is 50V/V, so a much smaller shunt resistor can be used (0.02Ω shown). That means the output changes by 1V/A, so for 2.5A output, the output voltage will be 2.5V. Because it's an active circuit, it will introduce phase shift, which might make the current regulator unstable. This has not been tested.
+ +The current sense IC datasheets also contain useful information about the proper connection to a current sense resistor. You must ensure that there is effectively zero PCB, Veroboard or hard wiring included in the sensing circuit. The sensing leads must come directly from the current shunt, avoiding any other wiring. This is known as a 'Kelvin' connection, which ensures that track or wiring resistance is not included in series with the current sense resistor.
+ +
Figure 3.2 - Low-Side Current Sensing Circuit
Low-side sensing is a far simpler option, but there are circumstances where it can't be used. For example, you can't use low-side sensing in the Figure 7.1 circuit, because the common is literally common to both the positive and negative supply. In a balanced circuit or if you only draw current from between the two outputs, nothing will register regardless of the current drawn. This method is used in the Figure 6.1 circuit, and there it's not a problem because each supply is a separate entity until the two are connected by the series/ parallel switching.
+ +I haven't shown any of the options that can be used. For example, if you use a very low value sensing resistor, the small voltage across it can be amplified with an opamp to get more voltage. 100mV/ A as shown is fine for loads up to around 5A or so, but with more current the losses become too high. For example, even at 5A, a 0.1Ω resistor will dissipate 2.5W and you lose 0.5V across the resistor. With higher currents this quickly gets out of hand. At 7A, the resistor dissipates almost 5W, and it will get extremely hot. These caveats also apply to high-side sensing of course, as the physics are identical.
+ +The current sense resistor (whether high or low side) must be inside the voltage regulator's feedback loop, or the regulator can't compensate for the voltage drop across the sense resistor. In reality, it usually doesn't matter, because very few circuits that you will test will care if the voltage 'sags' a little under load. For an amplifier that uses a conventional (unregulated) power supply, the actual voltage will change far more than it will with a bench supply, even if the current sense resistor is outside the feedback loop.
+ + +If you have the bits and pieces needed to build a robust power amplifier supply, then with the addition of a Variac (see Transformers - The Variac if you don't know what that is) you can build a 'monster' supply that will suit high power testing with almost any load. You don't get regulation, nor is there any current limiting (not even short circuit protection), but with the right parts it's a formidable piece of test gear.
+ +I have a couple, one of which really does qualify as a monster. The circuit is shown below, and it's literally what I use for high power tests. Any piece of equipment that's connected to it has already been verified to be functional, and that's essential because it can destroy almost anything given the opportunity. It's an extremely useful piece of kit, and all project amplifiers published on the ESP site have had their final test with this very supply.
+ +
Figure 4.1 - Variac Based Power Supply
The supply is just a 1kVA transformer, two bridge rectifiers (35A each), and a bank of capacitors salvaged many years ago from a very ancient hard disk drive (the drives that were as big as a washing machine!). It's set to the desired voltage with the Variac that I have on my workbench as a matter of course. The supply isn't regulated, but can supply enough current for any amplifier that I have ever tested with it. Long ago, a Variac was a very expensive piece of kit, but Chinese variable auto-transformers are now surprisingly affordable.
+ +This also means that the applied DC is very similar to that normally provided by a linear supply, but with better regulation due to the oversized transformer and filter capacitors. This is obviously not a cheap option, but it cost me almost nothing because I had everything I needed in my 'junk box'. The 10,000µF caps shown should be considered a minimum - mine uses around 20,000µF on each supply. If you have them available or can afford them, use as much capacitance as you can! Note the inclusion of 'bleeder' resistors - without them, the voltage can remain at a dangerous level for many hours. I normally don't use them because the connected amplifier (or other circuitry) discharges the caps, but that's not necessarily true with test equipment.
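The bleeder value is a compromise between discharge time and standing dissipation. As a rough sketch (the 2.2k bleeder and the 5V 'safe' threshold below are assumptions for illustration, not values from the circuit shown), an RC discharge follows t = RC·ln(V0/V):

```python
import math

# How long a filter bank stays at a dangerous voltage with a bleeder.
# Hypothetical figures: 20,000 uF per rail (as in the author's supply),
# a 2.2k bleeder, discharging from 70V down to an assumed 'safe' 5V.
C = 0.020            # F
R_bleed = 2200.0     # ohms
V0, V_safe = 70.0, 5.0
t = R_bleed * C * math.log(V0 / V_safe)
print(round(t, 1))   # seconds for the rail to fall to 5V
```

The same resistor dissipates about 2.2W continuously at 70V (70²/2200), so the bleeder needs a suitable power rating; without it the caps can hold charge for many hours, as the text warns.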
+ +The continuous output current is around 7A, but with an amplifier load it can handle 25A peaks (and more) with ease. Do you need something similar? Only you can answer that, but it doesn't need to be as big as the one I use. Of course, there's no current limiting, so you need to be sure that the circuit works before using the 'monster' supply! The output fuses protect against shorted outputs, but will not save your project from damage if it's faulty. A supply such as this is applicable for final tests, not for initial testing or fault finding. There is no current limiting, so a fault can cause significant damage (the fuses only protect the supply, not the load!). Shorted outputs are obviously a cause for some concern, so care is required.
+ + +One approach that has been used in many supplies is a simple transformer 'tap switching' scheme. If you only need (say) 15V or less, the transformer's output is switched with a relay so the AC output is only 15V AC, rather than the full 30V AC needed to get a clean 30V DC output. If the output is run at a low voltage but high current, the dissipation is reduced because there's less voltage across the regulator. When a voltage of 16V DC or more is selected, the relay switches to the full output (30V AC). This can be extended with more taps of course, but that would require a custom transformer, dramatically increasing the cost.
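The dissipation saving from tap switching is easy to quantify. The sketch below assumes a hypothetical 5V, 3A load fed from either roughly 40V DC (the full 30V AC winding) or roughly 20V DC (the 15V AC tap); both DC figures are illustrative estimates, not values from the text:

```python
# Series-pass dissipation with and without transformer tap switching.
def pass_dissipation(v_unreg, v_out, i_load):
    """Power dissipated in the pass transistor(s), P = (Vin - Vout) * I."""
    return (v_unreg - v_out) * i_load

print(pass_dissipation(40.0, 5.0, 3.0))  # full winding: 105 W
print(pass_dissipation(20.0, 5.0, 3.0))  # low tap selected: 45 W
```

Halving the unregulated rail at low output voltages cuts the pass-device dissipation by more than half, which is the entire point of the scheme.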
+ +Tap switching supplies have been around for almost as long as I can remember. The most impressive I've seen used a motorised Variac to maintain the AC input at just enough to prevent any ripple breakthrough on the DC side. These were very large, extremely high current, and would have cost a fortune when they were made (sometime in the mid 1970s). This isn't something I'd suggest anyone try to build, as the cost and difficulty of setting it up would be well beyond the budget of even a well-heeled DIY fanatic.
+ +Simple tap switching supplies use two AC voltages, so for a dual supply you need two tapped windings, plus an auxiliary winding to provide the normal ±12V or so for the control circuits. Finding a suitable transformer will be next to impossible, so you'd need to have a transformer custom made. This isn't a problem for manufacturers because they will build many supplies and the cost can be amortised over a complete production run. Hobbyists don't have that luxury.
+ +The use of tap switching reduces the demands on the series pass transistor(s). For a dual supply, you'd need at least two power transformers (and realistically you'd also need a third transformer to provide the control circuit supply voltages). This would increase the already significant cost of building a dual power supply. There are also additional components needed to sense the output voltage and switch from the low to high voltage tap automatically (and vice versa) using relays. While building any power supply is a challenge, adding tap switching just adds another layer of complexity. I don't propose to go any further with this, as it makes an already complex and difficult job that much harder and more expensive.
+ +There are some savings too of course, particularly in the number of series pass transistors needed and the amount of heatsinking. However, these are not sufficient to offset the cost of the transformers, and the power transistor(s) can still be subjected to short-term conditions that push them outside of their safe operating area. Such excursions may be brief, but a transistor can fail in a millisecond if the SOA is exceeded - especially if it's already at an elevated temperature. I recall a friend who built a fairly basic tap-switching power supply from a kit many years ago, and he had nothing but trouble from it. This was a semi-commercial product, complete with case and everything needed to put it together. It failed so many times that he eventually gave up in disgust. No-one wants to go through that!
+ +There's another method that's worth a bit more than a passing mention, even though it does have some serious challenges. Using 'phase cut' circuitry (similar to that used in lamp dimmers), it's possible to vary the input voltage prior to regulation, simply by adopting fairly simple low frequency switching. However, it also imposes far greater than normal stresses on the transformer and the filter cap, but these are not insurmountable problems.
+ +The switching element can be a MOSFET, IGBT (insulated gate bipolar transistor) or an SCR (silicon controlled rectifier), with the switching synchronised to the mains with a simple zero-crossing detector. The idea is to impose a delay, starting from the zero crossing (time zero). It's usually easier (and adds fewer additional challenges) to wait until the input voltage has fallen to the desired voltage, so a 'leading edge' configuration is used. When the input voltage has fallen to just below the threshold voltage, the switch is turned on, charging the main filter capacitor. A simplified block diagram is shown below.
+ +
Figure 5.1 - Phase-Cut Pre-Regulator Block Diagram
The challenges mentioned earlier include extremely high peak currents, especially with a low output voltage at a high current. These can be mitigated by adding an inductor and flyback diode (shown as 'Optional'), with the greatest issue being that the inductor has to carry a large DC component without saturation. This means a low-permeability core has to be used, so more turns are necessary for a given inductance. This adds resistance and increases losses (meaning more heat is generated). However, including the inductor will give better results than you'll get otherwise, and it reduces the high current stresses otherwise imposed on the transformer, bridge rectifier and filter capacitor. The diode (D1) must be a high-speed type, rated for the maximum output current.
+ +This technique has been used in several commercial products, and while it does do exactly what's intended, it makes poor use of the transformer's VA rating if the inductor and diode aren't used. Without these, you can expect the transformer's output current to be up to four times the DC current. That means that for 3A DC output (and using a 25V transformer), the transformer needs to be 300VA, where normally a 150VA transformer would be sufficient. To make matters worse, the inductor has to be fairly large - around 10mH is needed, a large and expensive component.
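The transformer sizing penalty mentioned above works out as follows. The factor of four for the phase-cut case comes from the text; the factor of two for a conventional capacitor-input supply is a common rule of thumb, and with it the text's 150VA figure falls out directly:

```python
# Required transformer VA rating as a multiple of the DC output current.
def transformer_va(v_secondary, i_dc, current_ratio):
    """VA = secondary voltage * DC current * (RMS/DC current ratio)."""
    return v_secondary * i_dc * current_ratio

print(transformer_va(25.0, 3.0, 4.0))  # phase-cut, no inductor: 300 VA
print(transformer_va(25.0, 3.0, 2.0))  # conventional supply:    150 VA
```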
+ +The circuit works by comparing the input control voltage to the ramp, created by the ramp generator and synchronised to the mains frequency with a zero-crossing detector. When the AC voltage reaches the required amplitude, the switch turns off, preventing the capacitor from charging any further. The 'idealised' waveform is shown (assuming no inductor or storage/ filter capacitor), and it's apparent that the voltage and current supplied to the output is reduced depending on the phase angle. This process and waveforms can be seen in more detail in the Project 157 - 3-Wire Trailing-Edge Dimmer project article. It's a different application, but the process itself is pretty much identical.
+ +I actually have a power supply that uses this arrangement, but its 120V AC input makes it pretty much useless unless I power it from a Variac. At no load, the voltage jumps up then slowly falls until it's below the threshold, when it jumps up again and the process repeats (in a somewhat random pattern). Under load it's not too bad, but this is not a technique I'd recommend. Apart from the fact that the one I have is rated for 150V at 5A, it also weighs in at around 40kg, and has one very large main transformer, a smaller auxiliary transformer to power the electronics, and a large filter choke (inductor). It is very 'old school' in terms of layout and construction, and it never gets any use. I don't even recall how I came to own it! If I need that sort of voltage and current, I use my Variac controlled 'monster' supply.
+ +Yet another approach is to use a switchmode step-down (buck) converter as a tracking pre-regulator. You can think of this as a 'high tech' version of the phase-cut pre-regulator described above, which provides the advantages, but fewer disadvantages (in terms of transformer utilisation at least). Some fairly high powered modules are available surprisingly cheaply, and the idea is to ensure that the voltage provided to the series-pass transistors is only a couple of volts greater than the output voltage. This can improve efficiency so you can get away with much smaller heatsinks, and thermal management isn't such a challenge. A suitable feedback mechanism has to be provided that controls the output of the switchmode converter, such that it is always just great enough to ensure regulation.
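The benefit of a tracking pre-regulator can be put into numbers. The sketch below assumes a 35V unregulated rail and the 'couple of volts' of headroom described above; the 5V, 2.5A load is a hypothetical example:

```python
# Series-pass dissipation with and without a tracking buck pre-regulator.
def pass_dissipation(v_in, v_out, i_load):
    """Power in the pass transistor(s), P = (Vin - Vout) * I."""
    return (v_in - v_out) * i_load

v_out, i_load = 5.0, 2.5
headroom = 2.0   # pre-regulator tracks the output at Vout + 2V
print(pass_dissipation(35.0, v_out, i_load))              # fixed rail: 75 W
print(pass_dissipation(v_out + headroom, v_out, i_load))  # tracking:    5 W
```

Holding the headroom constant reduces the pass dissipation from 75W to 5W in this example, regardless of the output voltage setting - but as noted, keeping the converter's switching noise out of the output is the hard part.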
+ +The pre-regulator reduces the series-pass dissipation to only a few watts, even at full current. It should go without saying that this approach requires some serious development, and while it's probably the best all-round solution, it's far harder to get right than any of the other options examined so far. This is the electronic equivalent of using a motorised Variac (as mentioned above), but is cheaper to make and easier to control. The design challenges can be extreme if you try to build your own, and keeping switching noise out of the final output can also be difficult. If you need very low noise (for performing noise or distortion measurements for example), the switching noise will almost always intrude on the measurements. This is an option that won't be covered further here.
+ + +A single supply might be attractive for some people, and it's certainly simpler than a dual tracking version. Of course, if you only have one polarity that limits your options as to what you can test, but they are commonly available from any number of suppliers. The circuit shown below is adapted from one that's shown on a number of different websites [2, 3, 4]. As such, it's difficult to know which one was 'first', and there have been many improvements (or at least changes, which aren't always the same thing!) made to it over the years. The basics haven't changed much, and the one shown below dispenses with one voltage regulator in favour of a simple diode regulated negative supply. Because I used LM358 opamps, the negative supply only needs to be around -1.2V at fairly low current.
+ +When the supply is in current limit mode, the LED will come on, indicating 'constant current' operation. It's normally off, so you can tell at a glance if the load is drawing the preset current with a reduced output voltage. Constant current operation is particularly useful for testing high power LEDs or LED arrays, as that's the way they are meant to be driven. You also need an 'on/ off' switch, which reduces the output voltage to zero when in the 'off' position. This is an essential feature (IMO) as it lets you make changes without having to disconnect the supply. The best arrangement is to provide the switching at the output of the supply, as that lets you set the voltage while the DC is turned off. Consider using a relay (or two) for the switching, otherwise you need a heavy duty switch. While the voltage can be reduced to (near) zero by pulling the non-inverting input of U2B to ground, there may be 'disturbances' when AC power is first applied. This is avoided by switching the output.
+ +The supply shown below is fairly basic, and you'd need to add meters for voltage and current, and thermal management (a fan and over-temperature cutoff) at the very least. There are countless improvements that can be made, but they would make the circuit more complex, more expensive, and provide more 'exciting' ways to make a seemingly minor error and cause the supply to blow up the first time it's switched on.
+ +
Figure 6.1 - Single Supply Schematic
U1 is a 7815 regulator, but with a 15V zener from the 'ground' pin to raise the voltage to 30V. D10 ensures that there can never be more than 15V input/ output differential across the regulator. Additional reference zener current is provided by R3 to ensure a stable output. U2A is the current regulator. When the voltage at the inverting input (U2A, Pin 2) is greater than that on the non-inverting input (Pin 3), the output goes low, pulling down the reference voltage provided to U2B (the voltage regulator). The voltage is reduced by just the amount required to ensure that the preset current is provided to the load.
+ +The current limit is variable from (theoretically) zero to 2.5A. VR4 allows adjustment to ensure the reference voltage for U2A (TP2) is as close as possible to 825mV (825mV across R18 (0.33Ω) is 2.5A output current). It may be possible to increase the output current to 3A (990mV reference voltage), but you would need to add another series pass transistor to keep the transistors within their SOA at minimum voltage and maximum current. Some ripple breakthrough at maximum output (voltage and current) is likely unless you add more capacitance (C1).
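The current limit scaling is simply V = I × R18. A quick check of the figures quoted above:

```python
# Reference voltage at TP2 needed for a given current limit (Figure 6.1).
R18 = 0.33   # ohms, low-side current sense resistor

def ref_voltage(i_limit):
    """Voltage across R18 (and hence the TP2 reference) at the limit."""
    return i_limit * R18

print(round(ref_voltage(2.5) * 1000))  # 825 mV for a 2.5 A limit
print(round(ref_voltage(3.0) * 1000))  # 990 mV for a 3 A limit
```

Both results match the values given in the text, and the linear relationship means VR4 only needs to be trimmed at one current.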
+ +When in voltage mode, U2B compares the reference voltage from VR2 with the voltage at the output, reduced by R16, R11 and VR3 (voltage preset). If the output falls due to loading, U2B increases the drive to the output series-pass combination (Q3, Q4 and Q5) to maintain the desired voltage. The upper output voltage limit is imposed by the opamp (U2), which can't force its output to much above 25V with the typical output current of around 2mA (this depends on the gain of the output section, Q3, Q4 & Q5). Note that the reference voltage is itself referred to the negative output terminal - this ensures that the regulator will correct for any voltage drop across R18. If it were otherwise, regulation would be badly affected, especially at maximum current.
+ +Note that the heavy tracks are critical, and any significant resistance in these sections will upset the current sensing. Also, be aware that the points indicated with a 'ground' symbol are marked 'Com' (Common). They are not connected to chassis or any other ground. The 'Com' designation means only that all points so marked are joined together. Also note the diodes with an asterisk (*), which must be 1N5404 (3A continuous) or better. All other diodes are 1N4004 or equivalent (other than the 25A bridge rectifier of course). Bench power supplies often get connected to 'hostile' loads, and the high current diodes (D8 and D9) are to protect the supply.
+ +The supply uses 'low side' current sensing, so it needs some tricks to use it as a dual tracking supply with both positive and negative outputs. The current sense resistor (R18) is a compromise between voltage drop and dissipation. At maximum current (2.5A), R18 will dissipate a little over 2 watts, which is easily manageable using a 5W wirewound resistor. Both voltage and current regulation are very good (at least according to the simulator), and there's no sign of instability. In theory (always a wonderful thing), the current can be regulated down to a couple of milliamps, but in reality it will not get that low. Expect around 50mA or so, but it might be a bit lower than that (depending on the opamp's own DC offset). Another trimpot can be added to correct for opamp DC offset, but it shouldn't be necessary (and adds something else that needs adjustment).
+ +All of the alternative versions specify a single 2N3055 for the output, but with a shorted output and maximum current, the dissipation will be about 80W, and maintaining the series pass transistor(s) at 25°C will be impossible. The TIP35 devices have a higher power rating (125W) and a good SOA (safe operating area), but there is still a case to be made for using three, rather than the two shown. The BD139 also needs a heatsink, but a simple 'flag' type will normally suffice. In common with any transistor that dissipates significant power, excellent thermal bonding to the heatsink is essential, and you will need to use a fan. This can be thermostatically controlled, and can use PWM (pulse width modulation) for speed control, or it can just turn on and off. Figure 8.1 shows a suitable circuit for both operating the fan and shutting down the supply if it gets too hot (which in this context is no more than 50°C heatsink temperature).
If you did want to use the Figure 6.1 circuit for a dual supply, the transformer needs two separate windings. The second supply (#2) is identical to that shown above, and the positive output is connected to the GND (or to be more accurate, 'Common') connection of supply #1. Most of the time, power supplies are used with the outputs floating, with no connection to the mains protective earth. This lets you use the supply as a normal positive and negative supply, or the outputs can be used in series, which will give an output of 50V at up to 2.5A. This way, you can ground any terminal you wish to get the supply configuration you need.
To build it as a dual supply, the 'Voltage Set' and 'Current Set' pots will be dual-gang linear pots, with one section of each for the separate supplies. Tracking will not be perfect, but dual-gang linear pots are usually fairly good in this respect. Using two supplies also lets you connect them in series or parallel. The latter is handy if you have a single supply load that draws more current than one supply can provide. Many commercial dual supplies use this scheme, and it can be very useful. While 'proper' dual tracking would only use a single gang pot with electronic coupling to ensure the voltages are identical, this makes the circuit more complex.
Figure 6.2 - 'Dual Single' Supply Connections
When the switch or relay (double-pole, double-throw or DPDT) is in the series position, the negative of the upper supply is connected to the positive of the lower supply, and both connect to the common terminal. You can have 0 to 50V output, and the common is the centre tap for ±25V. In the parallel configuration, the two positives are joined, along with the two negatives (the common terminal is disconnected). This allows for 0-25V at up to 5A output. Note that the negative terminal is the negative output of the lower regulator. Because the outputs are floating, either the positive or negative terminal can become the system earth/ground if this is required.
One advantage of using 'dual single' supplies is that they can be used independently (with different voltage and current limit settings), connected in series (usually with tracking) or in parallel for more output current. Unfortunately, if you wanted to use the two supplies independently, you can't use dual-gang pots, and each supply must be set individually. This is a serious nuisance, and fortunately it's not a common requirement.
The arrangement shown lets you connect the supplies in series (0 to ±25V or 50V at 2.5A) or in parallel (0 to 25V at 5A). The 'common' terminal should normally not be earthed, so the supplies are floating. This lets you operate the supply without creating ground loops. When in parallel, one supply will usually be at a slightly different voltage from the other, but the current limiter ensures that the current from each supply can't be above the limit (2.5A). There may be a small change in voltage as the current is varied, but this shouldn't create any problems in normal use.
This design means that there is no common circuitry - both regulators are completely independent, and no parts are shared - other than the dual-gang pots used to set voltage and current. This increases the overall cost, but allows greater flexibility. The circuit above doesn't allow for independent supplies, but that is unlikely to be a limitation. A well-equipped workshop will have at least two supplies (for example, I also have a separate independent ±12V supply, plus an independent 5V supply). None of these supplies share a common ground - all are fully floating.
The 'on/off' switching is at the final output (just before the output terminals). This lets you set the voltage with no output (meters will be connected before the output switch). A relay (or a pair of relays) lets you use a mini-toggle switch rather than a heavy-duty toggle switch, and is recommended for maximum performance. The relay(s) can be mounted on the front panel, right next to the outputs.
Now we can look at another 'sensible' option. Again, that means an output of around ±25V DC, at a maximum current of no more than 3A or so. Believe it or not, it's still cheaper to buy one! I know that this isn't the 'DIY way', but it's more practical than building it yourself. I've looked at countless different designs over the years, but few are worth the parts it would take to make them. There remain issues with stability (i.e. not oscillating at any output voltage or current, or with 'odd' loads). This might not sound like a problem, but the interactions between voltage and current regulators can make an otherwise well behaved supply suddenly think it's an oscillator. It goes without saying that this is undesirable (to put it mildly).
Project 44 has been around for quite some time (since 2000), and although the maximum output is only ±25V, it's a fairly good option for running initial tests. It doesn't have adjustable current limiting, so output current is set by the LM317/337 regulators, at around 1.5A. Its usefulness has never diminished since publication, but you must use 'safety' resistors in series with the outputs so that nothing is damaged if there's an error in the wiring of the DUT. The value for any given ESP project is generally specified in the project article or construction notes (available when you buy one or more PCBs).
One of the things that's expected is that a bench supply needs very good regulation. In reality, this isn't actually the case. Power amplifiers usually don't have regulated supplies, and preamps (and similar low current projects) draw a fairly consistent current, so regulation within the allowable range is easy. If a power supply's voltage falls by (say) 0.5V when heavily loaded, it really doesn't matter, because that's a great deal less than it will have to cope with when connected to a 'normal' power supply. The thing that is critical is current limiting, and while this might appear to be simple enough, it's actually difficult to get it to operate reliably. The current limiting circuitry introduces additional gain into the circuit, and maintaining stability can be irksome at best, and next to impossible at worst.
Often, the critical aspect of any current limited supply is at the transition between voltage and current regulation, where the two different forms of regulation interact. At the onset of current limiting, you have the voltage regulator trying to maintain the preset voltage, and at the same time, the current regulator is trying to reduce the voltage to maintain the preset current. For those who really want to build a power supply, John Linsley-Hood presented a design way back in 1975. An updated version is shown below, but modern transistors have been substituted for the originals, and two series pass transistors are included. Adding a third series-pass transistor on each supply makes cooling easier and imposes less stress on the transistors. In the original circuit, the opamps were µA741s, but if you have them to hand, the 1458 (essentially a dual 741) is a better choice. You can also use an LM358 in this circuit.
Figure 7.1 - Bench Power Supply (After JLH, 1975) [ 6 ]
The above is adapted from the original, which used a single 2N3055 and MJ2955 TO-3 power transistor (one for each rail). Not only were they subject to excessive dissipation in the original (up to 93W at maximum current into a shorted output), but TO-3 devices are rather expensive today. They are also a pain to mount, whereas flat-pack devices are far simpler in this respect. The TIP35/36 devices specified have a higher power rating (125W vs. 115W each) and a higher collector current, but I've modified the circuit so that it provides a maximum of ±25V and uses a lower voltage transformer. This keeps the series pass transistors to a manageable power level, at no more than 40W each. Feel free to add another series pass transistor for each polarity, reducing the thermal load even further. Q3 (a and b) must have a reasonably good heatsink, as the power dissipation is much higher than it may appear at full output current (and at any output voltage).
The current limit switch is less than ideal, since the switch contacts need to be able to handle the maximum output current (about 2.4A), and it's less convenient than a pot that allows continuously variable current limiting. The 0.27Ω resistors need to be rated for at least 3W, with 1W for the 1.5Ω resistors. The remaining current limiting resistors are 0.5W. While the switch is not as versatile as a pot, the limiting thresholds are designed to protect your circuitry. When first testing, you'd normally use a low current to ensure that nothing is drawing more than it should. The 5mA setting is too low for most circuits, but it can be useful. It can be omitted if you don't think you'll need it.
The output needs either a heavy-duty toggle switch or a relay to turn the DC on and off, and this disconnects the supply completely when you don't need any output (such as re-soldering a missed joint etc.). Metering isn't shown - see below for details of adding a voltmeter and optionally an ammeter as well. The two 20k trimpots let you set the maximum voltage (nominally ±25V). They should be roughly centred to obtain the correct voltages. Although not shown on the circuit, you may need to add resistors in series with C4a/b if the supply oscillates when in current limit mode. They weren't included in the original, but the simulated circuit oscillates if they aren't there. A value of around 100 ohms should be sufficient.
The circuit is far from 'perfect' (and nor was the original), but it should work well in practice. The voltage set pots will ideally be a dual-gang pot, so both supplies are varied at the same time. Likewise, the switch (Sw1a/b) will be a 2-pole, 5-position switch. Note that I have not built and tested this circuit, but it has been simulated and it performs as expected. The benefit of a simple arrangement as shown is that it can most likely be built for less than a commercial supply.
The series pass transistors (Q1a/b and Q2a/b) need a very good heatsink, and optimal thermal coupling. If used at low output voltages and high current, you will need a fan to keep the transistors cool enough to ensure they don't fail due to over temperature. The driver transistors (Q3a/b) will also need small heatsinks. The circuit is symmetrical, so while it may appear complex, it's largely repetition. I cannot guarantee that it will be completely stable when in current-limit mode - the simulator tells me it is, but that may just be the simulator itself - reality is often very different from a simulation.
While there is an expectation that a power supply shouldn't ever oscillate, in reality it takes serious engineering to maintain stability along with good transient response. A small amount of oscillation usually won't do any harm, and the current limiting is there to ensure that your latest creation doesn't self-destruct if there's a wiring fault. It can also be handy for battery charging (amongst other things), and the limiter's primary purpose is to protect your circuit and the power supply against 'mishaps'. Many supplies will show signs of high frequency instability, rarely when in 'constant voltage' mode, and most often when in constant current mode.
In case you have started thinking that building your own supply doesn't look too daunting, there are some other things needed as well. The transistor temperature is critical, so it's important to include a thermal shutdown mechanism. This can be a simple thermal switch that disconnects the mains if the heatsink gets too hot - simple but not very sophisticated. It's usually better to include an 'over temperature' indicator, and a fan that turns on if the heatsink goes above a predetermined temperature. 'Store bought' supplies may have a variable speed fan, with a final shutdown if the heatsink doesn't cool down. This can happen if there's a sustained high current into a short, a blocked fan filter, or placement on your workbench restricts airflow.
This is a critical part of any power supply. Ideally, if the thermal limit is reached, the supply should turn off, but this is easier with some circuits than it is with others. For example, the Figure 6.1 circuit is easy, as it's simply a matter of pulling the voltage reference to zero (essentially in parallel with the 'on/off' switch). This can be done with a transistor, relay contacts or it can even be made 'proportional' so the maximum output current is reduced as the heatsinks become hot. Thermal limiting is a bit more difficult with the Figure 7.1 circuit, as the 'set voltage' pots are not referenced to ground, but to the output supply rails. Due to the need for complete isolation, a relay is the best choice, and it simply shorts the set voltage pots. You need a double-pole relay because the two pots are separate from each other (electrically).
The next thing is to decide how best to sense the heatsink temperature. The obvious choice is an NTC (negative temperature coefficient) thermistor, and these are readily available in a range of different values (the value is usually specified at 25°C). Unfortunately, thermistors are a nuisance to mount to the heatsink, unless you can get one with an integral mounting assembly. You can make your own, using a miniature bead thermistor and using epoxy to attach it to a wire lug. Naturally, you need to be careful to ensure that there is no electrical connection from the thermistor to its mounting. You can also use diodes or transistors for thermal sensing, but they are less sensitive than thermistors (only -2mV/°C) and more irksome to set up. A transistor can be configured to provide greater sensitivity (because it has gain), and you can get up to -100mV/°C easily. However, the transistor needs a trimpot (preferably mounted as close to the sensor as possible to minimise noise pickup), and the sensor requires three wires instead of two. They are also fiddly to set properly. A more-or-less typical 10k (at 25°C) NTC thermistor will show a change of roughly -250 ohms/°C.
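The behaviour of an NTC thermistor can be sketched with the standard beta equation. This is a minimal illustration, assuming a common B = 3950 K part (real devices vary, so the datasheet for your thermistor is the reference); it reproduces both the ~-250 ohms/°C average figure and the 30-40% resistance at 50°C mentioned in the text.

```python
import math

# Beta-model sketch of a 10k NTC thermistor.
# B = 3950 K is an ASSUMED, commonly quoted value - check your datasheet.
def ntc_resistance(t_celsius: float, r25: float = 10e3, beta: float = 3950.0) -> float:
    """R(T) = R25 * exp(B * (1/T - 1/T25)), temperatures in kelvin."""
    t_k = t_celsius + 273.15
    return r25 * math.exp(beta * (1.0 / t_k - 1.0 / 298.15))

r50 = ntc_resistance(50.0)
avg_slope = (ntc_resistance(25.0) - r50) / 25.0  # average ohms/degree, 25-50 C
print(f"R at 50 C: {r50:.0f} ohms ({100 * r50 / 10e3:.0f}% of the 25 C value)")
print(f"Average change, 25-50 C: -{avg_slope:.0f} ohms/degree")
```

Note that the slope is not constant - it's steepest near 25°C and flattens as the part heats up, which is one reason the trimpot adjustment described below is essential.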
Because thermistors vary widely in terms of their value change with temperature, it's essential that a method of adjustment is provided. Ideally, you need an accurate thermocouple thermometer to measure the heatsink temperature, as close as possible to one of the series-pass output transistors. You'll need to use thermal 'grease' to get an accurate reading. Typically, the resistance of a thermistor will have fallen to around 30-40% of the 25°C value at 50°C, but this depends on the material used. The datasheet for the thermistor you buy will usually provide the exact details. Make sure that the thermistor(s) are not installed too close to the fan. If they are, the fan will cool the thermistors easily, but may be unable to keep the heatsink to a safe temperature. This can cause failure.
A cheap opamp is the easiest way to get reliable detection of an over-temperature 'event', and several thermistors can be used, with the hottest one triggering the cooling fan(s) or shutting down the supply. You can use a two stage system as shown below, where a mild over-temperature starts the fans, but if the temperature continues to increase then the supply is disconnected from the load altogether. The two trimpots are used to ensure that the initial voltage across each thermistor is around 5.8V at 25°C, which means the thermistor is approximately 65% of the total resistance of the divider it forms with VR1 (or VR2). Should the voltage across either thermistor fall to about 5.4V, the fan will turn on. The fan turns off again when the voltage has risen back above the threshold. If the supply cuts out because the temperature keeps rising, the fan will keep running.
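The two-stage logic can be summarised as a small behavioural sketch. The 5.4V fan-on threshold is from the text; the fan-off (hysteresis), cutout and restore voltages used here are assumptions for illustration only, since the actual values depend on the comparator and hysteresis resistors chosen.

```python
# Behavioural sketch of the two-stage thermal protection described above.
# Falling thermistor voltage = rising temperature. Only FAN_ON (5.4 V)
# comes from the text; FAN_OFF, CUTOUT and RESTORE are ASSUMED values.
class ThermalProtect:
    FAN_ON = 5.4    # fan starts when thermistor voltage falls to this
    FAN_OFF = 5.6   # fan stops only after voltage rises past this (hysteresis)
    CUTOUT = 5.0    # output relay drops out
    RESTORE = 5.2   # output relay re-closes once cooled (hysteresis)

    def __init__(self) -> None:
        self.fan_running = False
        self.output_enabled = True

    def update(self, v_thermistor: float) -> None:
        if v_thermistor <= self.CUTOUT:
            self.output_enabled = False          # too hot: disconnect the load
        elif v_thermistor >= self.RESTORE:
            self.output_enabled = True
        if v_thermistor <= self.FAN_ON or not self.output_enabled:
            self.fan_running = True              # fan keeps running after a cutout
        elif v_thermistor >= self.FAN_OFF:
            self.fan_running = False

tp = ThermalProtect()
for v in (5.8, 5.3, 5.5, 4.9, 5.5):
    tp.update(v)
    print(f"{v:.1f} V -> fan={tp.fan_running}, output={tp.output_enabled}")
```

The hysteresis gaps are what stop the fan (and the relay) from chattering on and off around the thresholds, as noted in the description of Figure 8.1.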
Figure 8.1 - Thermal Sensing, Fan and Relay Cutout
U1A is a buffer, included to ensure that the hysteresis resistor on U2B doesn't disturb the first comparator. At low temperatures, the comparator U1B has its output low, and U2A is high, so the fan doesn't run and the relay contacts are closed (provided the DC switch is closed). As the temperature rises, one or both thermistors will drop to a lower resistance. When the thermistor voltage falls to ~5.2V, the fan will start, and if the temperature continues to rise, the supply output relay will be turned off when the thermistor voltage falls further. This arrangement ensures that the temperature should never reach a dangerous level. It will be necessary to adjust the trimpots to preset the initial thermistor voltage to an appropriate level to ensure that the fan comes on when the heatsink temperature reaches about 35°C. The LED is there to let you know why everything has suddenly stopped working (the output transistors are too hot !). The last trimpot (VR3) should be set for a cutout temperature of around 45°C. Both comparators have hysteresis, so the fan won't turn on and off rapidly, and nor will the cutout relay. (Note that U2B is not used.)
Thermistors are not precision devices, so you will need to run your own tests with those you can get. It may be necessary to experiment with resistor values to obtain sensible (and safe) temperature thresholds. You may be wondering why I suggest such a low heatsink temperature (45°C). Bear in mind that the thermal resistance from transistor case to heatsink may be around 0.5°C/W, so if the transistors are operating at 35W, the case temperature will be 17.5°C hotter than the heatsink. That means a case temperature of over 60°C. If your mounting techniques aren't good enough, the difference may be greater, leading to a risk of failure. If you can't place a finger on the transistor and keep it there, then it's probably too hot.
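The case-temperature arithmetic above is worth having as a one-liner when you experiment with thresholds; this sketch just restates the figures from the paragraph (0.5°C/W case-to-heatsink, 35W per transistor).

```python
# Case temperature from heatsink temperature, per-device power, and the
# case-to-heatsink thermal resistance quoted in the text (~0.5 C/W).
def case_temperature(t_heatsink_c: float, power_w: float,
                     rth_case_sink: float = 0.5) -> float:
    """T_case = T_heatsink + P * Rth(case-to-heatsink)."""
    return t_heatsink_c + power_w * rth_case_sink

print(f"{case_temperature(45.0, 35.0):.1f} C")  # 62.5 C at the 45 C cutout point
```

A poor mounting job raises the effective thermal resistance, so the real case temperature can be worse than this estimate suggests.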
Maintaining a safe operating temperature and shutting down the supply (or disconnecting the load) if the power transistors get too hot is a critical part of any power supply. It's the nature of any variable supply that you never know what you'll eventually use it for when it's first built, and every eventuality needs to be catered for. It's far better for the supply to shut down prematurely than to allow the transistors to get so hot that they fail. Transistors fail short circuit (at least initially), which will put the full unregulated supply voltage across the DUT. The damage that can cause may be catastrophic.
All power supplies need meters. These are normally included for voltage and current, and the most common now is digital. However, 'traditional' analogue moving coil meters are not only cost effective (you can get them surprisingly cheaply), but are also easy to read at a glance. Many digital meters don't provide sensible supply and metering connections (for example, some require a floating supply). This makes the circuitry more complex, and the accuracy that's implied by digital meters is often an illusion. With analogue meters, 'FSD' means full scale deflection.
My preference has always been for analogue meters. If you can get a meter with a dial that's calibrated from 0-30V (for example), one can be used for voltage, and the other for current (0-3.0A). The required shunts and multipliers can be determined easily enough - see the article Meters, Multipliers & Shunts for all the details. It might be possible to use the current-sense resistor as the meter shunt, depending on the sense resistor value and the sensitivity and internal resistance of the meter. In most cases, a 1mA meter movement is a good compromise, and that will let you use the current sense resistor shown in Figure 6.1. Yes, connecting the meter and external resistor will affect the shunt ever so slightly, but the error will be very small (to the point of being infinitesimal).
Figure 9.1 - Current And Voltage Metering
Basic stand-alone metering circuits are shown above. The current meter is a pain, because the polarity has to be reversed depending on whether it's monitoring the positive or negative shunt. It looks convoluted, but it will work exactly as intended if wired as shown. The total meter resistance assumes the use of a 1mA meter movement, calibrated for 30V (voltmeter) or 3A (ammeter), and assuming an internal coil resistance of 200Ω. If the meter used is more sensitive (or its resistance is different), the resistances will need to be calculated. It is almost always easier to use trimpots to set the range than fixed resistors, and suitable values are shown. For a voltmeter (calibrated for 30V FSD) ...
Rm = ( V / FSD ) - Rinternal
Rm = ( 30 / 1mA ) - 200 = 29.8k
If the shunt resistors for an ammeter are different from the values shown the calibration will be different. The 'total resistance' shown includes the meter's internal resistance (typically around 200Ω for a 1mA movement). Note that if you use a 1mA movement, the shunt resistor will need to be no less than 0.1Ω. The calculated shunt is 67mΩ, but this assumes that the meter's resistance is exactly 200 ohms, and there is no provision for adjustment if the reading is off (a slightly larger shunt leaves room for a series trimpot to calibrate the meter). Whether the same shunt can be used for both current sensing and the ammeter depends on the final topology of the design. It's not always practical, but does reduce voltage losses slightly.
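The multiplier and shunt calculations above can be sketched directly, using the 1mA/200Ω movement from the text:

```python
# Meter scaling per the formulas above, for a 1 mA / 200 ohm movement.
def multiplier_ohms(v_fsd: float, i_fsd: float, r_meter: float) -> float:
    """Voltmeter series multiplier: Rm = V / FSD - Rinternal."""
    return v_fsd / i_fsd - r_meter

def shunt_ohms(i_fsd: float, i_meter: float, r_meter: float) -> float:
    """Ammeter parallel shunt: drops the movement's FSD voltage at full current."""
    return (i_meter * r_meter) / (i_fsd - i_meter)

print(f"30 V multiplier: {multiplier_ohms(30.0, 1e-3, 200.0):.0f} ohms")
print(f"3 A shunt: {shunt_ohms(3.0, 1e-3, 200.0) * 1000:.1f} milliohms")  # ~66.7
```

The shunt result (~66.7mΩ) is the 67mΩ figure quoted above, and it shows why the calculation is so sensitive to the movement's actual coil resistance.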
Note that if you use the Figure 6.1 circuit, the two shunts have the same voltage polarity so the reversal shown above isn't necessary. To look at positive or negative output current, the meter is simply switched from one shunt to the other, and the polarity is unchanged. That takes away the crossed wiring shown to the negative shunt in the above drawing.
While a switched ammeter is shown (and that's what my old supply uses), it's better to use a separate ammeter for each output. Provided you have enough front panel space, this removes the tedium of switching the meter. If you forget to switch (and that will happen), you may be monitoring the negative supply, but using the positive supply. Needless to say, that means that you can't see the current and the DUT may be damaged before you realise your mistake. The use of current limiting can mitigate that of course, provided it's set for a non-destructive (low) current when you start testing.
The voltmeter can be switched to measure either positive or negative voltage, or it can simply be wired across the dual supplies (50V for the circuits shown here), and calibrated to show 30V FSD ('Voltage Meter (Alt.)'). The implication is that the voltage will be ±25V, or another lower voltage as selected. There may be some small error if the supplies don't track perfectly, but this is usually not a major issue unless you are expecting a precise voltage for some reason. If that's the case, it's better to use an external meter - those on the supply are 'utility' meters - they show the value of voltage and current, but expecting better than around 5% accuracy is unrealistic.
Digital meters are either the best thing since sliced bread, or a blight on the landscape, depending on your viewpoint. Personally, I prefer analogue (mechanical) meters, but they are usually fairly large and unwieldy, taking up more panel space than digital readouts. The greatest benefit of analogue meters is that you can watch the pointer moving, so an increasing (possibly runaway) current is seen quickly, and varying currents can be averaged by eye quite easily. Digital meters are particularly useless if the current varies quickly, because the display just becomes a blur of digits, and you cannot average a digital readout by eye.
However, digital meters are usually cheaper than analogue movements now, and most are fairly accurate. Because they take up less panel space, they are a good option, provided a few simple precautions are taken. In particular, and especially for the current meter, you need to include averaging circuitry that stops the display from showing a bunch of seemingly random digits when the supply current varies rapidly. This can be as simple as a resistor (1k is always a good starting point) and a capacitor to average the reading. With a 1k resistor, a 100µF capacitor means that you have a 1.59Hz low frequency -3dB point, so most rapid variations will be smoothed out so you can read the current. Failure to include this will provide readings that you can't decipher. It's fast enough to ensure that a trend is easily visible.
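The quoted 1.59Hz figure is just the single-pole R-C corner frequency; a quick check:

```python
import math

# -3 dB point of the suggested R-C averaging network (1k + 100 uF).
def rc_cutoff_hz(r_ohm: float, c_farad: float) -> float:
    """f = 1 / (2 * pi * R * C)."""
    return 1.0 / (2.0 * math.pi * r_ohm * c_farad)

print(f"{rc_cutoff_hz(1000.0, 100e-6):.2f} Hz")  # ~1.59 Hz
```

A larger capacitor slows the display further; the values suggested are a compromise between a stable reading and being able to see a current trend as it develops.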
No details for digital meters are shown here because they depend on the meter itself. Some are auto-ranging, others use switchable ranges, and the simpler ones just give a reading from '000' to '199', with the option to select a decimal point at the desired position (often via a jumper or link on the meter's PCB). For current measurements, it will often be necessary to use an opamp to boost the small voltage across the current shunt. For example, if you have a 0.33Ω shunt, you'll need to amplify or attenuate the voltage across that to suit the range. With 2.5A full scale, you only get 825mV across the shunt, and that needs to be amplified so the meter shows '2.50' (2.5V into the meter). The amount of amplification or attenuation depends on the meter's sensitivity. For example, a 200mV meter will need to have the shunt voltage reduced by a factor of 33 with a voltage divider. It will read 2.5 (25mV) with the decimal point selected by whatever means are provided. Resolution is only 100mA (±2%, ± the meter's final digit 'uncertainty factor', which can be up to two 'counts'). This (IMO) is not good enough.
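The scaling in that worked example is easy to verify:

```python
# Scaling the 0.33 ohm shunt for a 200 mV digital panel meter,
# following the worked example in the text.
r_shunt = 0.33                     # ohms
i_full = 2.5                       # amps, full scale
v_shunt = r_shunt * i_full         # volts across the shunt at full scale
divider = 33                       # attenuation for a 200 mV meter
v_meter = v_shunt / divider        # volts presented to the meter
amps_per_count = 1e-3 * divider / r_shunt  # one display count = 1 mV at the meter

print(f"{v_shunt * 1000:.0f} mV at the shunt, {v_meter * 1000:.0f} mV at the meter, "
      f"{amps_per_count * 1000:.0f} mA per count")
```

One display count therefore represents 100mA, which is the coarse resolution the text objects to - and the motivation for preferring a three-full-digit ('999') meter.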
Ideally, if you decide to use digital metering, use a meter that offers three full digits (up to '999' rather than '199'), and if possible with auto-ranging. There are many choices, so it's up to you to decide how much you want to spend and what accuracy you need. Again, Meters, Multipliers & Shunts gives some worked examples that you may find helpful.
This is where things can get ugly. The front panel is the most important part of the supply, because it has voltage and current controls, on/off switches (mains and DC), maybe a series-parallel switch, meters, and of course the output connectors (typically combination banana sockets/binding posts). Of course, you'll also add LEDs for power on, current limit and thermal overload. Everything on the front panel has to be accessible for construction or maintenance, and that invariably means a maze of wiring. The front panel has leads for AC mains, DC outputs, all LEDs and pots, and this all adds up (surprisingly quickly). Maintaining a common supply for all LEDs (e.g. anode to the positive auxiliary supply) means that many of the LEDs can share the same anode voltage, which can save wiring. However, this does not apply to the current limit LEDs in a dual version of the Figure 6.1 circuit, because the two supplies must be kept fully independent until the series-parallel switching.
The internals must contain your power transformer(s), rectifier(s) and filter caps, along with the main heatsink(s) for the output transistors. The latter will have input, output and control wiring, as well as connections for the thermistors and fan(s). At the very least, each output module (assuming a dual supply) will have at least six wires. Then there's the regulator control board(s). You'll have one for each supply (assuming the Figure 7.1 dual supply arrangement), plus a thermal controller board to monitor the heatsink temperature.
It's all too easy to get wiring wrong, and you need a very disciplined approach to ensure that you don't make any wiring errors. Avoid the temptation to try to fit all the control boards to the front panel. It may reduce the wiring needed, but makes servicing a nightmare if the various parts of the supply can't be accessed and tested without having to disconnect wires from boards. Whatever size enclosure you were thinking of using, if it doesn't have lots of free space then it's too small.
Make sure that all connections can be accessed without having to remove boards to get to the underside. Use pins, wire loops, or any other suitable technique so that all wires can be disconnected from the top (or visible) side of the boards. Avoid plugs and sockets - all connections (especially the really important ones) should be soldered, with the wiring arranged so that if you ever need to remove a board to replace something, the wiring is bound with cable ties so that each wire lines up with the appropriate connection point. Along similar lines, if at all possible, when building the boards (most commonly on Veroboard), keep connections along one edge of the board. This will mean adding jumpers on the Veroboard, but that's far better than having wires all over the board itself. Not only does it mean that wiring is simpler, but it also makes mistakes less likely.
Trimpots are a fact of life for any power supply. Voltages and currents need to be set, and meters calibrated. Thermal sensing also has to be calibrated, so nearly all power supplies will have numerous trimpots - you simply cannot rely on fixed value resistors to provide the proper conditions for anything. If you were to build the Figure 6.1 circuit as a dual supply, with the thermal protection and metering, you'll have at least nine trimpots to set everything up correctly. This is pretty much normal for power supplies, but some may have more!
Make sure that important parts of the supply are easily separated from the rest (and the chassis). For example, the heatsink assembly should be made so it can be removed, and all transistors can be accessed without having to dismantle the entire module. One design I've seen has the main filter caps directly in front of the output transistors, so they cannot be removed without removing the filter caps (or the transistors) from the circuit board. The location of the caps is such that you simply cannot access the transistor mounting screws once the assembly is completed. I strongly recommend that you avoid any similar errors. Having to remove (and/or desolder) components or boards to gain access to any part of the supply makes it a nightmare to work on later. Consider that it may be in operation for 20 years or more before it needs servicing, and by then you will probably have forgotten many of the 'finer points' of the circuit. After that long, you may not even have the schematic any more, so make sure that you include one inside the case!
While the basics of a power supply aren't overly complicated, there will always be far more wiring than with any typical audio project. This is unavoidable unless you increase the overall cost even further by making your own PCBs. While doing so means a more professional product, there's no guarantee that you'll get the design right first time, and having to make modifications can be very time consuming. If an error has been made on a PCB layout, it can be difficult to diagnose and locate the error so it can be fixed. In general, it's likely to be much easier to hard-wire the final output section. Because of the high currents involved (which may be present for hours at a time), a normal PCB doesn't offer low enough resistance or high enough current capacity unless you use very wide tracks (I'd suggest a minimum of 5mm tracks for 5A, but even that is marginal for continuous duty).
While it may seem like a minor quibble, I strongly recommend that you use an IEC socket for the mains. In my long experience with test equipment and other gear, there's not much that's quite as annoying as a fixed mains lead. Rather than just unplugging the IEC plug from the back if it needs to be moved, you may have to trace a fixed lead back to its mains outlet, then disentangle it from other leads for the rest of your test bench gear. Depending on just how much gear you have, that can actually be a bigger challenge (and pain in the backside) than you think when it's first installed and plugged in. A minor point, but one worth remembering. Very few test fixtures that I've built have fixed mains leads, and I maintain a good collection of IEC mains leads!
There's one remaining challenge. To test the various parts of your supply before it's fully wired, you need ... a power supply. The chances of getting everything right first time aren't good, so if you don't have a power supply, you will have to devise a way to check that the various sections work properly without the risk of smoke if something isn't right. You may be able to use 'safety' resistors in series with the main supply to limit the damage if there's a wiring error, or (if you have one) use a Variac and a current monitor (see Project 139 or Project 139A) so you can test for excessive current as the voltage is increased. Many parts of the supply won't work properly at reduced voltage, so there is always a risk. Testing and calibrating power supplies is not a trivial task, so you'll have a lot to do to get it completed.
While I've only described the basic supply here, many commercial supplies include a 5V output (usually rated for around 3A), and a few include a ±12V supply as well. Because you never know just how the supply will be configured in future use, these should both be fully isolated. Once you tie the ground (or common) connections together internally, that limits what you can do with the supplies. As already noted, you can never anticipate what you'll use a supply for when it's first built, and it would be unwise to assume anything in advance.

This means at least one, but possibly two additional transformers, plus the rectifiers, filters and regulators. You also need more space on the front panel for the connections. Most commercial supplies do not provide metering for any auxiliary supplies, and the circuitry doesn't need to be anything especially fancy. A couple of P05-Mini boards can be used, one for a single +5V output, and the other for ±12V.

Compared to the cost of the rest of the supply, these can be added for (almost) peanuts, with the possible exception of the transformers. Alternatively, they can be built as a separate unit, which does have some distinct advantages. Predictably, I have one of these on my workbench as well, and while it doesn't get used a lot, it's invaluable when I do need an extra supply that's isolated from all the others. It's also small enough that I can take it from the workshop to my office, where I also perform some testing and development work. Indeed, that's where it is at the moment.
There are precautions that should be followed with any variable power supply. Unless there is a switch that disconnects the DC (or reduces the output to zero), the supply should never be powered on with your load connected. Most circuits have to go through 'startup' phases (capacitors charging, zener voltages stabilising, etc.) before the output will be stable. If your load is connected, it may be subjected to a dangerous voltage, and current limiting may not be enough to prevent damage. Indeed, until all internal circuitry has the required operating voltages, there may not even be any current limiting!

With the Figure 7.1 circuit, once the supply is powered on and working, reducing the voltage to zero with the switch will work. However, during 'startup' (after mains power is applied), this may not be the case! Nothing should be connected to the output when the mains switch is turned on, because the output can be unpredictable. This has been confirmed by simulation - even with the switch turned off, the output rises to over 4V momentarily when power is applied. The Figure 6.1 circuit should be better in this respect, but it's still best not to have your load connected when the mains is turned on.
The power should be turned on, voltage reduced to zero while you make connections, and then the voltage can be set for the desired level. If testing something for the first time, use a low current limiting threshold to minimise damage if there's a fault in the DUT. If you need a current-limited supply, the voltage should be set such that the current limit is reached, but not beyond. For example, if you wanted to ensure a current of 1A through a 10 ohm load, the voltage only needs to be set for an open circuit voltage of around 12V. A higher voltage setting only increases the risk to your load if something goes wrong.
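The arithmetic behind that example takes only a couple of lines. A minimal sketch, where the 2V of headroom above I × R is my assumption for illustration, not a figure from the text:

```python
# Worked version of the example above: to current-limit at 1A into a 10 ohm
# load, the open-circuit voltage only needs to be a little above I x R.
i_limit = 1.0      # desired current limit (A)
r_load = 10.0      # load resistance (ohms)
headroom = 2.0     # assumed margin so the current limit engages cleanly

v_set = i_limit * r_load + headroom
print(f"Open-circuit voltage setting: about {v_set:.0f}V")
```

Setting much more than this buys nothing, and simply raises the stored energy available to damage the load.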
Setting a low voltage (just sufficient for the task) does not reduce the dissipation in the series pass transistors. The only reason for doing so is to ensure that the output capacitor(s) can't charge to 25V and then be discharged through the load. This would almost certainly guarantee that the instantaneous current will be much higher than the threshold set. This isn't only advice for the circuits shown here - it applies to all power supplies unless the operating instructions indicate otherwise. Most will advise against connecting anything until the voltage and maximum current are set before you connect the load.
There are several power supply designs that use a microcontroller to manage the functions, but be very wary of anything (DIY or commercial) that requires you to 'program' the voltage or current using a keypad. The use of low-tech conventional pots means that you can increase the voltage (or current) with the twist of a knob, and quickly reduce the voltage if any anomalies are seen. Trying to do this using push buttons is usually impossible, and much damage may be caused simply because you couldn't reduce the voltage quickly enough at the first sign of trouble. The 'high-tech' look and feel of a programmable power supply may be appealing, but it's impractical for anything other than laboratory tests, where the equipment being powered is a known quantity from the outset.
If all of the above hasn't frightened you away from the idea of building your own supply, I strongly suggest that you start with something fairly simple (such as Project 44). I know that DIY is about doing it yourself, but that should hold true only when it makes sense. As discussed earlier, I did build a ±0 to 25V, 2A supply with fully variable current limiting, thermal cutout and a dual speed fan. It's been in fairly consistent use for around 30 years (at the time of writing), and has never let me down. However, it's a complex circuit, and isn't really suitable for amateur construction. Rather annoyingly, the circuit diagram cannot be found, and it's not an easy circuit to 'reverse engineer'. With seventeen transistors, five opamps, two 12V regulator ICs, five trimpots as well as the expected bunch of resistors, diodes, filter caps, switches, meters and voltage/current setting pots, it's not something I would recommend - even if I did have a complete circuit for it. The cost would be considered unacceptable to most constructors, who may not need it all that often anyway.
The simple circuit shown above (Figure 7.1) is not bad. It's not as good as the one I built, but it's certainly acceptable for normal test-bench work. It does have the advantage that it can limit at a lower current than mine (~50mA is my minimum), and that's useful for sensitive circuitry. More importantly, it's simple enough to build even on Veroboard, with the current limiting circuits wired directly to the switch and voltage setting pots. This leaves only the basic circuit on Veroboard, which should be fairly straightforward. Overall, the Figure 6.1 circuit is better, but the switching for series/parallel operation needs to be done with great care.
Perhaps surprisingly (or perhaps not), current sensing is generally far more difficult than it seems at first. It's pretty easy if you use a simple switched resistor scheme, but making it adjustable is not so straightforward. There are specialised ICs that are designed for this exact application, but most are SMD only, and they're not inexpensive - especially if they are only available in a pack of five. This is very common with SMD parts. Of course, this is just the sensing part - it's still necessary to get current regulation. As already noted, at the transition point (from voltage to current regulation), there are two separate regulators, both trying to impose their will on the output. Without a great deal of design time, the result is often oscillation (either transient or continuous).
The main idea of this article is to show you some of the options available. Ideally, most DIY constructors want something that does the job, is reliable, and doesn't cost a small fortune to build. If it can use parts you already have available, then that's even better. If you do have to buy the parts, you want to be reasonably sure that the circuit you choose is up to the task. As already noted, the circuits I've shown had to be adapted to ensure reliability (especially with low output voltage and high current). Failure to provide protective measures (current limiting, fan and over-temperature cutoff) will result in a circuit that not only lets you down, but may blow up the circuit you're testing as well.

When you look at the cost of the components needed, you'll discover very quickly that they add up to a fairly scary figure. Just the transformer(s) will be expensive, and while many of the parts are cheap enough, that doesn't apply to the filter capacitors or the heatsinks. You also have to provide a case and other hardware, and that will require significant machining to accommodate meters, fans, connectors, etc. It's very doubtful that you'll spend less than the equivalent of AU$400 in your currency of choice, even if you have many of the smaller parts in stock. I've seen a 0-30V, 3A dual supply for as little as AU$325 on-line, and it's highly doubtful that you can build one for less unless you have almost everything needed in your 'junk box'.

This should not under any circumstances be seen as a construction article! It is intended only to demonstrate that building even a modest bench supply is not a trivial exercise, and that there are considerations that you may not have thought much about. Some of the designs you'll find elsewhere on the Net are not well designed, and fail to provide adequate safety margins for the series-pass transistor (in particular), and most have no warnings about transistor SOA, thermal failure or any of the things that can go awry. As this article has shown, there are many things that can go wrong, especially if any part of the supply is underrated for the abuse it will get in normal use.
Elliott Sound Products - Bipolar Junction Transistor Parameters
There are many things about transistors that confuse the beginner and not-so-beginner alike. Some circuits are easy and don't require much more than Ohm's law, while others seem a great deal harder. Paradoxically, it's often the circuits that appear to be the simplest that cause the most problems. A perfect example is a BJT amplifier circuit, using only a single transistor and a pair of resistors (as shown in Figure 1). While this topology is easily beaten by even the most pedestrian opamp for most things, it offers a fairly easy way to determine the transistor's parameters. There are even applications where it's useful, particularly where there are no opamps in the circuit and you need a gain stage.
Only a few simple calculations are needed to let you determine the DC current gain (aka β / hFE), with the benefit that you can set the transistor's actual operating conditions when setting up the test. This is a useful tool to let you understand how the transistor functions, and is easily adapted to the task of matching devices if that's something you need to do. While most circuits don't need matched devices, in some cases doing so improves performance.
In the circuits shown below, the input coupling capacitor has been selected to give a -3dB frequency of around 10Hz. This is not part of the process for determining the DC characteristics, and is only necessary to measure AC performance. While it's not required, I expect that most readers will want to run AC tests, and they are informative (even if not actually very useful). If nothing else, an AC test that includes distortion measurements is useful to determine the overall linearity - a truly linear circuit contributes no distortion.
A transistor can be in one of three possible states: cut-off (little or no collector current flows), active (or 'linear') and saturated (collector voltage at the minimum possible). For amplification, we need to be in the active region. The cutoff and saturated regions are only of importance in switching circuits. In these cases, it's generally accepted that the base current should be around 1/10 of the collector current, regardless of the transistor's β. That means that almost any transistor will work, provided it's rated for the current and voltage used in the circuit. While it's common to see questions asked about substitutions, if you know these basic facts you can work out for yourself what will (or will not) work.
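As a concrete illustration of the 1/10 rule for saturated switching, here's a minimal sketch. The 5V drive voltage and 100mA load current are assumed example values, not figures from the article:

```python
# Base resistor for a saturated BJT switch, using the 'base current equal to
# 1/10 of collector current' rule of thumb described above.
v_drive = 5.0   # assumed logic-level drive voltage (V)
v_be = 0.7      # base-emitter drop of a saturated transistor (V)
i_c = 0.1       # assumed collector (load) current (A)

i_b = i_c / 10.0                # forced 'gain' of 10, regardless of beta
r_b = (v_drive - v_be) / i_b    # series base resistor by Ohm's law
print(f"Rb = {r_b:.0f} ohms")   # 430 ohms
```

Because the forced gain is far below any real transistor's β, the switch saturates reliably whatever device you fit.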
Beta (β): This is the basic notation for the forward current gain of a transistor.

hfe: This is the current gain for a transistor expressed as an h (hybrid) parameter. The letter 'f' indicates that it is a forward transfer characteristic, and the 'e' indicates it is for a common emitter configuration. The small letter 'h' indicates it is a small signal gain. hfe and small signal Beta are the same.

hFE: The hFE parameter describes the DC or large signal steady state forward current gain. It is always less than hfe.
The terminology can be different, depending on what source material you're looking at. Not all agree that the terms as shown represent the characteristics, and hfe and hFE are often used interchangeably. Ultimately, the terminology doesn't matter all that much, provided you understand the concept of current gain. Transistors are essentially current-to-current converters, so a small base current controls a larger collector current. Emitter current is always equal to the sum of the base and collector currents.
Note: This article is not intended to show the way to build a simple transistor amplifier, but to allow you to determine the parameters of a transistor. The circuit shown in Figure 1 will definitely work as an amplifier, but it needs input and output capacitors, and it has a very low input impedance. As shown (and perhaps surprisingly), the input impedance is around 660Ω - much lower than you'd expect. This is due to the feedback provided by R2, which acts for both AC and DC. The DC feedback stabilises the operating conditions, and the AC feedback causes the input impedance to be reduced. If the transistor had infinite gain, the input impedance would be zero!

For the time being, we'll ignore the AC performance, and just examine the biasing requirements. The circuit is shown below, and it would appear to be fairly easy to analyse because it is so simple. However, looks are deceiving. It doesn't take much prior knowledge to determine that the Figure 1 circuit will be in the active region. You only need to look at the resistor values in the collector and base circuits. Since R2 is 240 times the value of R1, it follows that the base current will be in more-or-less the same ratio. If the transistor has a β of around 250 (not at all uncommon), the circuit should bias itself towards the centre of the supply range (i.e. somewhere between 5V and 7V).
Figure 1 - Collector-Base Feedback Biasing
The analysis problem lies in the word 'feedback'. Whatever happens at the collector is reflected back to the base, so the collector voltage is dependent on the base current, which in turn is dependent on the ... collector voltage! The transistor's hFE changes the relationship between collector and base, and without knowing one of the parameters in advance, it's simply not possible to predict exactly what the circuit will do.
Will the collector voltage be at or near the supply voltage (cut off), ground (saturated), or somewhere in between (active)? The only thing we know for certain is that it will be somewhere in between the two extremes. Provided the transistor is functional (that much should be a given), it's not possible for the collector voltage to fall to zero, nor can it reach the supply voltage. In the first case, the base always needs some current for the transistor to conduct, and in the second case, if the transistor has base current, it must be drawing collector current. Therefore, there must always be some voltage (however small) across the collector resistor.

Even knowing the transistor's gain doesn't help a great deal, because the process is iterative. You'd need to make a guess at the collector voltage, and run a few calculations to see if that gave you a sensible answer, then adjust your guess up or down as appropriate until you arrive at a final figure. It's far easier to build (or simulate) the circuit than to try to second-guess a (somewhat non-linear) feedback network.
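The guess-and-adjust process just described can be automated in a few lines. This is only a sketch of the idea, using an idealised model: the fixed Vbe of 0.65V and β of 250 are assumptions, and each new guess is damped (averaged with the old one) so the loop settles instead of oscillating:

```python
# Damped fixed-point iteration for the Figure 1 bias point.
# Circuit values from the article: Vcc = 12V, R1 = 1k, R2 = 240k.
# Vbe and beta are assumed values, not measurements.
VCC, R1, R2, VBE, BETA = 12.0, 1e3, 240e3, 0.65, 250.0

vc = VCC / 2                        # first guess: half the supply
for _ in range(100):
    ib = (vc - VBE) / R2            # base current through feedback resistor R2
    ic = BETA * ib                  # collector current from the current gain
    vc_next = VCC - (ic + ib) * R1  # R1 carries collector plus base current
    if abs(vc_next - vc) < 1e-6:
        break
    vc = (vc + vc_next) / 2         # damp the correction so the loop converges

print(f"Vc converges to about {vc:.2f}V")
```

With these assumptions the loop settles near 6.2V, in line with the rough estimate of "somewhere between 5V and 7V" above.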
It's generally safe to assume that the collector voltage will be roughly half the supply voltage for a transistor circuit that is intended as a linear amplifier. There may be exceptions of course, and the actual collector voltage may be quite different from your first guess. Look at Figure 1 again, and assume a β of 240 for Q1 (based on the relationship between R1 and R2). That means its base current is 1/240 of the collector current. Since there's around 6V across R1 (collector resistor), the current must be around 6mA. That means that the base current can be estimated at 25µA. The voltage across R2 (collector to base) can be calculated using Ohm's law (but we'll ignore base-emitter voltage) ...
V = I × R = 25µA × 240k = 6V
If that were your first guess, you'd be very close! Your initial estimate may not be possible if you underestimate the gain, because we know that the voltage across R2 can be no more than Vce - Vbe (around 5.3V). For example, if your first guess at gain was 150, the voltage across R2 would be way too high (around 9.6V at 40µA). Unless you're after an accurate determination (which is neither needed nor useful), that's actually close enough! I know that it may not appear so at first, but consider that in production, transistors of the same basic type have a gain 'spread' that means no two transistors are guaranteed to give the exact same results. The base to emitter voltage also varies - it's typically taken to be 650mV (0.65V), but that depends on the specific transistor, base current and temperature.
The important thing is that great accuracy doesn't matter. If the circuit is designed properly (and it's actually hard to do it 'improperly' with this particular circuit topology), it will work as intended almost regardless of the transistor used. A circuit such as that shown should never be expected to have an AC output of more than around 500mV to 1V RMS, where its distortion should remain below 1%.

It may not be apparent that the circuit shown in Figure 1 can actually be extremely useful. It won't be as an amplifier though, but it allows you to match transistors very closely. The thing that needs to be determined first off is the expected collector current, and knowing the collector-base voltage that will apply in the circuit needing matched devices may also help. For example, a power amplifier may use ±35V supply rails, and the input stage may run with a total current of 4mA (set by the long-tailed-pair 'tail' current). However, you don't really need to provide the full collector-base voltage that will ultimately be used.
You now know that current through each transistor should be 2mA. A 20V supply will do just fine for most tests, and good results can still be obtained with a lower voltage. Based on the transistor datasheet, you can get a reasonable initial estimate of the hFE, and use a collector resistor that will drop about 2V at 2mA (1k). Then select an appropriate collector-base resistor, or cheat and use a 1MΩ resistor in series with a 1MΩ pot. The transistor should be installed into three receptacles of an IC socket, or use a solderless breadboard. For example, if the transistors you are using have an hFE of 200, then you know the resistor should be around 1.72MΩ.
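That resistor estimate follows directly from Ohm's law. A quick sketch of the calculation (the 0.65V Vbe is an assumed typical value):

```python
# Estimating the collector-base resistor for the matching setup described
# above: 20V supply, 1k collector resistor, 2mA target current, hFE ~ 200.
vcc, r1, ic, hfe, vbe = 20.0, 1e3, 2e-3, 200.0, 0.65

vc = vcc - ic * r1       # collector voltage: 20 - 2 = 18V
ib = ic / hfe            # base current: 2mA / 200 = 10uA
r2 = (vc - vbe) / ib     # collector-base resistor drops Vc - Vbe at Ib
print(f"R2 is roughly {r2/1e6:.1f} megohms")
```

The result lands close to the ~1.72MΩ quoted above, which is why the 1MΩ fixed resistor plus 1MΩ pot covers the likely range comfortably.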
Once you have a pot setting that drops 2V across the 1k resistor, the current is 2mA. Then just install transistors until you find a pair that have the same voltage drop across the collector resistance, and the same base-emitter voltage. There will inevitably be a small discrepancy because finding two that are identical is unlikely, but if they are within (say) 5% of each other then that's perfectly acceptable. When installed in the PCB, the two transistors should be thermally bonded, and that ensures that thermal changes affect both devices equally.
In a simulation with three different transistor types (2N2222, BC547 and 2N3904), the AC output voltage is 161mV, 170mV and 132mV (RMS) for an input of 1mV from a 50Ω source. The variation from the highest to lowest gain is only a fraction over 2dB, and these are very different devices. It's educational to look at their datasheets to see just how different they are, yet each works nearly as well as the others without changing the circuit. The 2N3904 has less gain, but the other two perform almost equally. The distortion is nothing to crow about, but that's expected of a high gain stage with no feedback.

Note that a single stage amplifier such as this is inverting, and it makes no difference if you use a valve (vacuum tube), BJT, JFET or MOSFET. When operated with the emitter, cathode or source grounded, all devices are inverting. A positive-going input causes a negative-going output and vice versa.
Figure 2 - Collector-Base Feedback Biasing (AC Measurements)
It's tempting to think that the AC gain of a transistor stage is determined by the DC current gain (β or hFE). This isn't the case at all, although the two are related. A transistor functions as a current-to-current converter, where a small current at the base controls a larger current in the collector (and emitter). While this does describe the actions that occur within the device itself, we tend to apply most of our efforts towards voltage amplifiers. However, one does not exist without the other.
For example, we can easily calculate that the β of the 2N3904 is around 200, yet if the collector is fed from a very high impedance we can obtain an AC voltage gain of over 3,300 quite easily. This technique is surprisingly common, and it's used in nearly all power amplifiers as the 'Class-A amplifier' (aka VAS - 'voltage amplifier') stage. The collector is supplied via a constant current source. This provides the desired current, but at an exceptionally high impedance. (An 'ideal' current source has an infinite output impedance.)

I stated above that the circuits shown here include feedback. It may not be immediately obvious, but R2 (collector to base) is a feedback resistor. The feedback is negative, so if the collector voltage attempts to rise, more base current is available (via R2) and the transistor is turned on a little harder, trying to keep the collector voltage stable. This feedback acts on both AC and DC signals, and the input impedance is very low. In fact, the input impedance with the circuit shown is less than 1k (ranging from around 650 to 750Ω), which makes it useful only for low impedance sources. This is one of many different reasons that the circuit shown is not common - very low input impedance, high distortion circuits aren't generally considered useful for most audio applications.

Not that this prevented it from being used back when transistors were expensive and were still in the process of being understood by most designers. However, even then, it was used only for 'non-demanding' applications where its limitations wouldn't be noticed. Today most people wouldn't bother, because there are opamps that are so cheap, flexible and accurate that it makes no sense to use an unpredictable circuit with so many limitations.

Just for the fun of it, I set up the above circuit, exactly as shown. The supply was 12V DC, and I used a number of transistors. Most were BC546 types (only 4 test results are shown), but from two different makers, and I also tested a few BC550C devices as well. I even tested a BC550C with emitter and collector reversed (after all, they are bipolar transistors). The measured results are shown in the table (I didn't measure distortion). The base-emitter voltage (Vbe) was around 680mV for the BC546 tests, but it wasn't measured for the BC550s.
| VCE (DC) | hFE (Calculated) | AC Output (RMS) | AC Gain |
|----------|------------------|-----------------|---------|
| **BC546** | | | |
| 5.94 V | 276 | 1.52 V | 152 |
| 5.80 V | 291 | 1.52 V | 152 |
| 5.77 V | 293 | 1.52 V | 152 |
| 7.20 V | 177 | 1.24 V | 124 (-1.8 dB) |
| **BC550C** | | | |
| 4.36 V | 498 | 2.04 V | 204 |
| 4.31 V | 508 | 2.04 V | 204 |
| 3.65 V | 675 | 2.12 V | 212 (+0.3 dB) |
| 4.83 V | 414 | 1.92 V | 192 (-0.5 dB) |
| **BC550C Reversed!** | | | |
| 8.39 V | 112 | 255 mV | 25.5 |
The results are interesting. Quite obviously, when a low collector voltage is measured, the transistor has a high (DC) gain and vice versa. What is not so apparent is the reason for the variation of AC output voltage, with an input of 10mV RMS from a 50Ω generator. Equally, several transistors show identical voltage gain, even when it's apparent that their hFE is different. You can expect the AC voltage gain to be related to the transistor's hFE, but there is obviously more involved.
Part of the reason is the intrinsic emitter resistance 're' (commonly known as 'little r e'), which is roughly 26/Ie (in milliamps). If the emitter current is 2.6mA then re is 10Ω. This is not a precise figure, but it's generally close enough for rough calculations. Because it changes with emitter current, it follows that the voltage gain also changes along with emitter current, so the gain is different for a positive input signal (which increases Ie) and a negative input voltage (which decreases Ie). The result is that re changes with signal level, causing distortion. It's also worth noting that a test with 'real' transistors and a simulation give answers that are surprisingly close.
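The re ≈ 26/Ie relationship can be used to see how the gain shifts over a signal cycle. A minimal sketch, where the 1k collector load is from Figure 1 but the 6mA quiescent current and ±20% signal swing are arbitrary assumptions for illustration:

```python
# Intrinsic emitter resistance re = 26 / Ie (Ie in mA), per the text, and the
# resulting voltage gain into a 1k collector load as the signal swings Ie.
def re_ohms(ie_ma):
    return 26.0 / ie_ma

for ie_ma in (4.8, 6.0, 7.2):   # assumed 6mA quiescent with a +/-20% swing
    gain = 1000.0 / re_ohms(ie_ma)
    print(f"Ie = {ie_ma:.1f}mA  re = {re_ohms(ie_ma):.2f} ohms  gain = {gain:.0f}")
```

The gain is noticeably higher on the half-cycle that increases Ie than on the one that decreases it, and that asymmetry is exactly what shows up as distortion.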
Many early audio designs used comparatively high supply voltages to minimise the change of re by reducing the current variation for a given voltage output. Most of this was rendered unnecessary when more refined circuits with high open loop gain and negative feedback replaced simple transistor stages. These are covered in some detail in the article Opamp Alternatives.
The end result of all of this is that you can determine the parameters of a transistor by installing it in a circuit such as that shown here. You don't need a transistor tester, and the results you obtain will be as accurate as you'll ever need. This is basic circuit analysis, and it helps you to be able to understand more complex circuits, and to appreciate the value of basic maths functions. Mostly, you need little more than Ohm's law to work out the transistor's characteristics.

The main parameter (and the one that most people seem to be interested in) is DC current gain - hFE or β. You only need two voltage readings to be able to determine the gain (assuming that the supply voltage is fixed and a known value, such as 12V). Measure the voltage at the collector and base, with the common point being the emitter (this is a common-emitter stage after all). Now you have all you need to work out the gain.

First, determine the collector current, Ic. This is set by the voltage across R1, which is Vcc - Vce (assume Vcc to be 12V for this example). Then work out the collector current. I'll use a Vce of 6V, but it will rarely be exactly half the supply voltage.
Ic = ( Vcc - Vce ) / R1
Ic = ( 12 - 6 ) / 1k = 6mA
Now you measure the base voltage, and determine the current through R2 (240k). We'll assume 0.68V for the example. The current in R2 is the base current.
Ib = ( Vce - Vb ) / R2
Ib = ( 6 - 0.68 ) / 240k = 22.17µA
Gain is simply Ic / Ib, so is 6mA / 22.17µA, which comes to 270. That's the transistor's DC current gain. Yes, it is more tedious than reading it from a transistor tester, but it's the exact figure obtained in the circuit being tested. It will change with temperature and collector current, so it only applies in this particular instance. Ultimately though, the exact figure isn't particularly useful. It's not even really useful as a 'figure of merit', because the AC voltage gain of the circuit doesn't change a great deal even if hFE is different.
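The whole measurement reduces to a few lines of arithmetic. A small helper like the following, using the Figure 1 values (Vcc = 12V, R1 = 1k, R2 = 240k), reproduces the worked example:

```python
# hFE from the two voltage readings described above, for the Figure 1 circuit.
def hfe_from_voltages(vce, vb, vcc=12.0, r1=1e3, r2=240e3):
    ic = (vcc - vce) / r1   # collector current from the drop across R1
    ib = (vce - vb) / r2    # base current through the feedback resistor R2
    return ic / ib

print(f"hFE = {hfe_from_voltages(6.0, 0.68):.0f}")  # about 270, as worked above
```

Only the two measured voltages vary from test to test; the supply and resistor values are fixed by the jig.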
You can use a circuit such as this to match transistors as described in Section 2 if that's essential for the circuit you are building. Note that Vbe is still a variable, and that needs to be matched independently of hFE.
In most cases, a defined gain is required, and that's achieved with the addition of another resistor. In the drawing below, I added a 100Ω emitter resistor. The gain is now determined by the ratio of R1 to R3 plus re (internal emitter resistance). With 100Ω as shown, the theoretical gain is about 9.57 but it doesn't quite make it, because the transistor has finite gain so the feedback cannot produce an accurate result. However, it's not too bad, and far more predictable than you would expect otherwise.
Figure 3 - Emitter Resistor Stabilises Gain
As you can see from the figures, ideally the circuits would have been re-biased to get a collector voltage close to 6.5V (there's a small voltage dropped across the emitter resistor). However, even with the same batch of very different transistors, the gain variation between the highest and lowest gain is now a mere 0.15dB. Distortion is also reduced, but not by the same ratio as the gain reduction. The addition of an emitter resistor is called emitter degeneration, and it is not the same thing as negative feedback. It's effective for stabilising the gain (for example), but does not reduce distortion as well as 'true' negative feedback. The noise from R3 is actually amplified by this circuit and all similar arrangements, so despite the reduction of gain, the noise will not be reduced in proportion.
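The approximate gain formula noted earlier, R1 / (R3 + re), can be checked numerically. A minimal sketch, where the emitter current used to estimate re is an assumption:

```python
# Stage gain with emitter degeneration: roughly R1 / (R3 + re).
r1, r3 = 1000.0, 100.0       # collector and emitter resistors from Figure 3
ie_ma = 5.78                 # assumed quiescent emitter current (mA)
re = 26.0 / ie_ma            # intrinsic emitter resistance, about 4.5 ohms
gain = r1 / (r3 + re)
print(f"gain is about {gain:.2f}")  # about 9.57 with these assumptions
```

Because R3 swamps re, the signal-dependent variation of re now moves the gain by only a few percent, which is why the spread between transistors collapses to a fraction of a dB.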
Not immediately apparent is that the input impedance is much higher, being over 11k for each transistor simulated. The input impedance is (very roughly) determined by the emitter resistance (both internal and external) multiplied by the DC current gain. However, it's also affected by the negative feedback via R2, so it's not a straightforward calculation.

The gain is reduced further when an external load is added, because that is effectively in parallel with the collector resistor (R1). Output impedance is (almost) equal to the value of R1. It's actually a tiny bit less because of the negative feedback via R2 (about 990Ω as simulated). Emitter degeneration does not affect output impedance, unlike negative feedback which reduces it in proportion to the feedback ratio.
I measured the distortion both with and without the emitter resistor. At a signal level of only about 230mV without R3, distortion measured 2.5%. When R3 was included, the gain fell to 9, and even with 900mV of output the distortion was 'only' 0.25%. While this looks like a fairly dramatic improvement, consider that no opamp ever made has that much distortion at any output level. It's also worth noting that the simulator estimates the distortion surprisingly well - for the same conditions the simulator claimed around 0.24%, which is very close to the measured value.
The Early effect is named after its discoverer, James Early. It is caused by the variation in the effective width of the base in a BJT, due to a change of the applied base to collector voltage. Remember that in normal operation, the base-collector junction is reverse biased, so a greater reverse bias across this junction increases the collector-base depletion width. This decreases the width of the charge carrier portion of the base, and the gain of the transistor is increased.
+ +The transistor's Early effect has some influence over the performance (for AC and DC). At a collector voltage of 5V, the gain is almost exactly 200 (as simulated, Ib=20µA), and that rises to 215 at 10V and at 50V it increases to 317. As you can see from the graph, the slope is quite linear. It follows that as the collector voltage changes, so does the effective hfe. Graphs are shown for three different base currents - 15µA, 20µA and 25µA (the circuit shown only the 20µA current source). The collector current below 2mA (with a collector voltage of less than 500mV) is not shown as it's irrelevant here. The AC waveform is not included in the test circuit or the graph. It's notable that even at a collector voltage of 500mV, the transistor is functioning normally.
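The gain figures quoted above can be used to back out an approximate Early voltage. This is a sketch only, assuming the simple first-order model hFE ∝ (1 + VCE/VA), which the real device follows only approximately:

```python
# Rough estimate of the Early voltage (VA) from two of the simulated gain
# figures quoted above, assuming the first-order model hFE ∝ (1 + VCE / VA).
h1, v1 = 200.0, 5.0      # hFE = 200 at VCE = 5 V
h2, v2 = 317.0, 50.0     # hFE = 317 at VCE = 50 V

# From h1 / h2 = (VA + v1) / (VA + v2), solve for VA:
va = (h2 * v1 - h1 * v2) / (h1 - h2)
print(round(va, 1))      # ~72 V

# Sanity check: the same model predicts hFE ≈ 213 at VCE = 10 V,
# close to the 215 obtained in the simulation.
h10 = round((h1 / (va + v1)) * (va + 10))
print(h10)
```

The close agreement at the 10V point suggests the linear model is a reasonable description over this range, which is consistent with the near-linear slope seen in the graph.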
Figure 4 - Early Effect Test Circuit (2N2222)
There is also a change of re as the collector current varies, but I did not try to quantify that in the tests shown (it becomes relevant only when voltage gain is expected). No collector load resistor is used because the base current is maintained at a constant (and very low) value. Over the full range shown below, the AC gain changes by a factor of about 1.6:1 for the current range seen in Figure 5, and with a collector voltage between 1 and 50V. The AC voltage gain is almost directly proportional to the collector current. Although not shown in the test circuit or graph, the AC gain was measured. With a 1µA (peak) signal injected into the base, the AC current gain changes from a low of about 110 at 4mA collector current, to 165 at 6.5mA collector current. Voltage gain is not relevant to this test because only current is monitored.
Figure 5 - Early Effect (2N2222)
While the Early effect is interesting to observe, it's not especially useful knowledge for simple gain stages. In more complex circuits (especially linear ICs), it's common to keep the transistor's collector voltage as constant as possible. This can be seen in the input stage of most power amplifiers for example, where a significant amount of the complete circuit's gain is produced in the input stage. When a long-tailed-pair is used for the input, the collector voltage of the input transistors doesn't change by very much (if at all), so gain variations due to the collector-base voltage are minimised - but only when used in the inverting configuration.
This does not apply when an opamp is operated in non-inverting mode. Consequently, for a unity-gain amplifier, the collector to base voltage can vary from around 28V (peak negative input) down to as little as 2V (peak positive input). This voltage modulation can cause the gain of the input transistors to change by ±10% or more, due to the Early effect (although this is probably not the only reason for the increased distortion). Higher distortion in the non-inverting configuration is a well known phenomenon with opamps, although with competent devices any distortion that is added remains well below the threshold of audibility. Some devices have distortion so low that it's almost impossible to measure it, regardless of topology.

It's also worth noting that if a transistor is used for switching, you need to supply much more base current than you might think is sufficient. This is because at very low collector voltages, the current gain of a transistor is much lower than the datasheet figure. 'Common wisdom' is to ensure that the base current for a switching circuit is roughly 1/10th of the collector current, although you can often get away with less at low current. For the 2N2222 shown, if the switched collector current is 50mA you'd provide a base current of around 5mA to ensure that the 'on' state collector voltage is no more than 100mV. The datasheet claims that the saturation voltage (transistor fully on) is 300mV, with a collector current of 150mA and base current of 15mA. This indicates an hFE of only 10 to obtain full saturation. The datasheet only takes you so far, and you have to run your own tests to obtain realistic figures. It's essential to check a number of devices - a test based on a single transistor doesn't show you the likely results with different devices, even when they are all from the same batch.

It used to be that BJTs were the predominant technology for switching in digital systems (TTL - transistor-transistor logic). While CMOS (complementary metal oxide semiconductor) devices have now taken the lion's share in digital circuits, transistor switches remain very common. For high power we tend to think of MOSFETs as the most common switch, but IGBTs (insulated gate bipolar transistors) are now a better option for high voltage and high current applications.

Transistors used in switching applications don't operate in linear mode - that's for amplifiers. The transistor is either off (no collector current other than a tiny leakage current which can almost always be ignored), or it's fully on, in a condition known as saturation. The beta (or hFE) is only important to allow the designer to determine how much base current is necessary to force saturation. All switching systems will be subjected to higher than expected dissipation at the instant they are turned on or off. This is because the transitions are not instantaneous. Mostly, this isn't a concern, but it can become important if the switching signal (driving the transistor) has slow transitions. If too much time is spent in the active region (between 'on' and 'off'), peak dissipation can be much higher than expected.

Transistor switches are very common for turning on LEDs and relays, and for many other simple switching applications. NPN or PNP transistors can be used depending on the polarity, and many simple circuits rely heavily on BJTs as switches. There are few surprises, and the circuits are usually easy to calculate to get the appropriate base current to suit the load. The following circuit is common in projects from countless sources, and is also used to switch relays from microcontroller outputs (often only 3.3V at fairly low current).
Figure 6 - Basic Switching Circuit
The load is shown as a relay, but it can just as easily be a DC fan, LED or a small incandescent lamp. We will know the supply voltage, and (usually) the load current. Using the relay example, if the coil measures 250Ω and it's rated for 12V, we can determine the current with Ohm's law (48mA). If we are using a microcontroller with 3.3V outputs, we only need to know the 'worst-case' gain (hFE) for the transistor to determine the value of Rb. If Q1 is a BC546, we can look at the datasheet and see that it can handle 65V (VCEO) at up to 100mA. The minimum hFE is 110, so to drive the relay, the base current should be at least twice the calculated minimum (most designers aim for between 5 and 10 times the calculated base current). With a 48mA load, base current will not exceed 436µA, so we'll allow 2mA. This is a little under the ×5 suggested, but it's still quite ok.
Because the base-emitter voltage will be 0.7V and we have a 3.3V base 'supply' voltage from the micro, Ohm's law tells us that the value of Rb must be 1.3k (2.6V at 2mA). We would use the closest standard value of 1.2k (or 1k) for convenience. This simple exercise demonstrates how easy it is to determine the values required for 100% reliable operation. Any other switching application is just as simple.
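The resistor selection described above can be sketched as a short calculation. The `base_resistor` helper is purely illustrative (not from any library), with the overdrive factor as a parameter:

```python
def base_resistor(v_drive, v_be, i_load, hfe_min, overdrive=5):
    """Base resistor for a saturated BJT switch (illustrative helper).
    overdrive is the forced-beta safety factor (5-10 is typical)."""
    i_base = i_load / hfe_min * overdrive   # calculated minimum x overdrive
    return (v_drive - v_be) / i_base        # Ohm's law across Rb

# Example from the text: 12 V / 250 ohm relay, BC546 (min hFE 110), 3.3 V drive
i_relay = 12 / 250                          # 48 mA coil current
rb = base_resistor(3.3, 0.7, i_relay, 110)
print(round(rb))                            # ~1.2k -> use the 1.2k standard value
```

With a full ×5 overdrive the result comes out just under 1.2k, consistent with the 1.2k standard value suggested in the text.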
One of the interesting things about simple transistor switches (as opposed to Darlington or Sziklai pairs) is that the collector-emitter voltage will fall to a very low value - often just a few tens of millivolts. You may expect that the collector voltage would be based on the base-emitter voltage, but it's not. With the values described, VCE will be about 110mV, but with more base current it falls further. Even as shown, the power dissipated in Q1 is only 5.28mW, which is negligible.

This won't always be the case of course, because the transistor has a finite switching time, and the worst-case is when it's 'half-on' (i.e. collector voltage of 6V as it goes high or low). In the circuit shown, there will be a 24mA load current with 6V collector voltage, so peak dissipation is 144mW. This is very comfortably less than the maximum continuous dissipation (500mW), and we don't need to change anything. The 144mW is a transient condition, and will typically last for less than 100µs if the input switches quickly enough.
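The worst-case transition figure quoted above is easy to verify:

```python
# Worst-case dissipation mid-transition: collector at half the 12 V supply.
v_cc, r_load = 12.0, 250.0
v_ce = v_cc / 2                # 6 V across the transistor
i_c = (v_cc - v_ce) / r_load   # 24 mA through the relay coil
p_peak = v_ce * i_c            # 144 mW - transient only, well under 500 mW
print(round(p_peak * 1000))    # 144
```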
Exactly the same set of simple calculations can be used for any transistor switching circuit. These circuits are very easy to design, but all of the steps need to be followed to guarantee reliability. If the relay were to be replaced with a fan drawing 200mA, the datasheet tells us that a BC546 can't be used (100mA maximum), and the selected transistor will need more base current. A BC639 can handle the current and worst case power dissipation. However, the minimum gain (as per the datasheet) is only 40, so you'd need at least 5mA base current, but preferably 10mA. This may be more than the microcontroller (or other source) can supply, and I leave it as an exercise for the reader to work out a way to achieve the desired results.

Remember that for switching, you need to supply at least twice the expected base current, and it's common to provide up to ten times as much to force full saturation of the transistor switch. BJT switching circuits become less attractive with very high current, because the base current is effectively 'wasted'. It doesn't contribute to the load current, and is simply another part of the circuit that has to be powered by the supply. Using a Darlington transistor is (or used to be) common, because the hFE is very high (up to 1k), so far less base current is required for saturation. However, a Darlington can't reduce its collector voltage to below 700mV, and at high current it may be as much as 3V.

For example, a TIP141 is rated for a collector current of 10A, and a gain of 1,000 at 5A. The saturation voltage with 5A collector current and 10mA base current is 2V, so it will dissipate 10W, even when driven into saturation. This is wasted power that has to be provided by the supply, but cannot be used by the load. Switching times are also rather slow, so high speed operation isn't recommended. The transistor must be mounted on a heatsink to maintain a safe operating temperature.

This is one of many reasons that MOSFETs are preferred for high current switching. A modern MOSFET may have an on resistance (RDS-on) of perhaps 40mΩ, and with a 5A load the voltage across the device will be only 200mV, dissipating 1W. The gate current is zero in steady-state conditions, but needs to be fairly high during switching (up to 2A or so, depending on the switching speed). However, this high current only lasts for a very short period, typically well below 100µs. Peak dissipation (during switching) may be up to 15W with the circuit described, but the average will be less than 600mW. Compare this with 10W dissipation for a Darlington transistor, and it's easy to see why MOSFETs have become the #1 choice for switching. With such a low total dissipation, a small section of PCB plane will usually suffice as a heatsink!
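The comparison between the Darlington and MOSFET figures quoted above works out as follows:

```python
# Wasted power: TIP141 Darlington vs a modern MOSFET, both switching 5 A.
i_load = 5.0
v_sat = 2.0                       # Darlington saturation voltage at 5 A (from text)
p_darlington = v_sat * i_load     # 10 W - needs a heatsink

rds_on = 0.040                    # 40 milliohm on-resistance (figure used above)
p_mosfet = i_load ** 2 * rds_on   # 1 W - a patch of PCB copper will do
print(p_darlington, p_mosfet)
```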
The main point here is to demonstrate the fundamentals of very basic transistor biasing, and to discover just how much one can learn from some simple observations. While I strongly recommend building and testing it, I recommend against using it for anything. It can be used for matching, but the main goal is to learn how a transistor works in a circuit. The actual topology doesn't matter as far as the transistor is concerned. It can only perform the one task - convert a small base current into a much larger collector current. By building it, you learn what it does at the most fundamental level.

It's also instructive to look at the AC performance. In particular, note that emitter degeneration (aka 'local feedback') is not as effective at reducing distortion as 'true' negative feedback. While the two tests shown indicate that the AC gain is reduced by a factor of around 17 (voltage gain reduced from 160 to 9.3), distortion is reduced by a factor of less than 6. With negative feedback, the improvement is roughly proportional to the reduction of open loop gain. Of equal importance, negative feedback also reduces noise, while emitter degeneration often makes it worse.
Not included in any of the above is any attempt to quantify the power supply rejection ratio (PSRR) of the circuits. This is a measure of how well the circuit can attenuate power supply noise, ripple, etc. It wasn't included for one simple reason - it's so poor that it means that a regulated (or very well smoothed) supply is essential. The power rail voltage must be completely free of any noise, because a full 50% of all supply noise ends up at the output.

Transistors are much more linear than generally believed if the collector voltage and/or current are not varied. This isn't possible in a real circuit, but most power amplifier and opamp input stages operate with an almost constant voltage and only the current is changed. The situation changes in the Class-A amplifier stage (aka VAS - voltage amplifier stage), but that is always operated with (close to) a constant current, and this time only the voltage varies. Most power amplifier and opamp input stages contribute a significant amount of gain, and operate with only small (often negligible) voltage changes due to the signal, and very small current changes as well. When forced to operate over a wide voltage range, the common mode input voltage changes significantly, leading to higher distortion (common mode distortion).

Running tests such as those described here is essential, not only for your own understanding, but to ensure that results will be consistent if a circuit is to be built by others (as a project perhaps). For example, all of the projects published on the ESP website take the normal variations of transistors into account. Because we know that no two components will ever be identical, a designer must consider the typical parameter spread of parts obtained by constructors. If this were not the case, many of the ESP projects would not work!

Note that cries of "I knew it - JFETs (or valves/ vacuum tubes) sound better!" are misplaced, because their distortion is generally higher than BJTs and there are different non-linear effects involved. There is no doubt that JFETs (and to a lesser extent IMO, valves) have their place in circuit design (including within opamps), but 'superior' sound quality is not amongst their virtues. This isn't to say that JFET input opamps sound 'bad' by any stretch - there are several such opamps that have excellent specifications (and sound quality). Every amplifying device known is non-linear, and only the causes (and remedies) are different. The use of valves in very low distortion circuitry generally provides performance that doesn't even come close to a decent opamp.

Switching circuits remain very common, and for low current operation a BJT is hard to beat. Base current is low, and it can be sourced from a low voltage. If you have more than ~1.5V available, it's easy to build a reliable switch that can handle up to 100mA easily. The design process is simple, and the result is usually very reliable if the design is optimised. They are also both readily available and cheap, two factors that are usually desirable (especially for high-volume production). In most cases, substitution is easy if the original part is obscure or out of production.
This article was inspired in part by Harry Powell (Associate Professor and Associate Chair for Undergraduate Programs) from UVA (University of Virginia), and is based (in part) on a 'Fundamentals 2' lab in Electrical and Computer Engineering. The original is entitled 'ECE 2660 Labs for Module 6'. The material forwarded was due to Harry seeing the article describing a Constant Collector Current hFE Tester for Transistors - Project 177.
There are no other references, because the techniques shown are quite common, and the data presented were the results of simulations and workbench experiments to verify results.
Elliott Sound Products | PSU Capacitor Bleeders |
As most readers will be aware, none of the power amplifier PSUs (power supply units) on the ESP website use bleeder resistors to discharge the caps when power is removed. This is a deliberate omission, because most amplifiers will discharge the filter capacitors fairly quickly, depending on quiescent current. Adding resistors to make the discharge faster dissipates power, and this is converted to heat. The extra power can increase temperatures inside an un-ventilated case surprisingly quickly.
For example, if you have a power amplifier that draws a quiescent current of 28mA (fairly low by most standards), ±56V supplies will collapse to around 10V within five seconds (assuming 4,700µF capacitors). Mostly, this is quite fast enough to let you work on the amp without having to wait forever for the caps to discharge. However, some people do like the idea of using bleeders, and adding 2k (2 x 1k, 1W in series) will speed this up. However, the bleeder resistors will get quite warm (dissipation is over 1.5W), and it's still rather slow.
The alternative is to use an active bleeder, configured so that it draws close to zero power as long as the mains is present, and it is designed to discharge the caps very quickly when mains power is turned off. Naturally, this requires some circuitry, but it doesn't have to be too complex. It's not difficult to discharge a 56V supply to less than 5V within one second. This can be achieved with any capacitance you like (and any voltage as well).
Note that while you may see references to using a screwdriver to short charged capacitors - Don't! The very high discharge current can damage the capacitor, and it's a risky procedure anyway. If you do need to reduce the stored charge to some low (safe) value, use a high-power resistor with proper insulated probes. Ideally, the resistor will be a value that discharges the cap quickly, but (if you want to be ultra-safe) keeps the current below the capacitor's ripple current rating. Otherwise, a 150Ω 5W resistor will suit most situations and will not damage the cap. Using a screwdriver (or other similar implement) is never recommended by anyone who knows what they are doing.
Probably the simplest way to implement an active discharge system is to use a relay, powered from the 230V (or 120V) mains. When mains power is interrupted, the relay's normally closed contacts connect a discharge resistor. When power is resumed, the relay opens and disconnects the discharge resistor. It's crude, but it can certainly do the job. There are two caveats with this, in that the relay must have a 230V or 120V AC coil, and the contacts have to be rated for the DC voltage in use. This will work well if the DC is less than 30V, but it gets troublesome at higher voltages. DC can cause contact arcing, but provided the current is less than ~250mA (set by the discharge resistor) you should be ok. Have a look at the Relays (Part II) article to see what you can get away with. There's also the issue that you have mains on the relay coil, and supposedly 'safe' DC at the contacts. This makes it a risky proposition unless you are very careful with your wiring. I've been doing mains wiring for most of my life, but this is not the method I'd choose.
The relay coil can also be powered from the transformer secondary, which is a lot safer, as there's no interaction with mains voltages. Finding a relay with a suitable coil voltage may be tricky, as they only come with a limited range of voltages. 12V, 24V and 48V are common, so a series limiting resistor would be needed if the secondary AC is more than 10% higher than the coil's rated voltage. AC coil relays are usually more expensive than DC types, and the relay may cost as much as the parts for an electronic discharge circuit. The relay will have a limited life (especially when switching DC), unlike an electronic circuit.
Note that in all circuits described here, the MOSFET must not be a logic level type. The circuits all rely on the MOSFET needing at least 2V on the gate to turn on, and if it's less, the MOSFET may turn on and off in normal use. The suggested MOSFETs have a minimum threshold voltage of 2V, which ensures that they will remain off when mains power is provided. The MOSFETs shown are only suggestions, you can use anything you wish, provided they have a suitable voltage rating (and aren't logic level). Power dissipation is low, and a heatsink is unlikely to be needed unless you have very high capacitance.
There is little or no consensus as to how quickly filter capacitors should be discharged. It's always a trade-off between speed and dissipation, and with energy costs worldwide increasing all the time, it seems a bit silly to deliberately increase the power consumption of an amplifier or other equipment. It's usually acceptable if the voltage has fallen to about 10% of the maximum within 10 seconds or so, but this isn't always achievable. Some amplifiers will create a large 'thud' through the speakers when the supply collapses, and this has to be considered.
Some power amps (in particular) may use 100,000µF capacitors (or paralleled caps to achieve the same result). Even with 10,000µF charged to 56V, a 330Ω resistor will cause the cap(s) to fall to below 5V in 10 seconds, but it will dissipate close to 10W (x2 for a dual supply), so there's nearly 20W of wasted power. That power is converted directly to heat, and serves no useful purpose. With more capacitance, you either have to accept even more wasted power, or wait longer for the caps to discharge. If you were to use 100,000µF at 56V with a 2k discharge resistor, the voltage will be over 40V for one minute after power is removed, and is still over 30V two minutes after power is turned off.
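The discharge figures quoted above follow from the standard RC decay, V(t) = V0·e^(−t/RC). A quick check:

```python
import math

def v_discharge(v0, r, c, t):
    """Capacitor voltage after t seconds of discharge through r (simple RC decay)."""
    return v0 * math.exp(-t / (r * c))

# 10,000 uF charged to 56 V through a 330 ohm bleeder: below 5 V inside 10 s
v_10s = v_discharge(56, 330, 0.010, 10)

# 100,000 uF at 56 V through 2k: still over 40 V a full minute after power-off
v_60s = v_discharge(56, 2000, 0.100, 60)
print(round(v_10s, 2), round(v_60s, 1))   # ~2.7 V and ~41.5 V
```

The second case makes the point of the paragraph: with very large capacitance, a fixed bleeder is either wasteful while running, or uselessly slow after switch-off.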
It should be fairly obvious why I never add the discharge resistors. If you need to keep wasted power to the minimum, the amplifier will almost certainly pull the voltage down faster than (say) a 2k resistor, which will still dissipate over 1.5W for as long as the amplifier is turned on. Discharge resistors were nearly always used with valve (vacuum tube) equipment, because the voltages were much higher than we use now, and valves quickly lose emission as the heater cools. This could easily leave a dangerous voltage across the filter capacitors for several minutes (or longer in some cases).
It's very important to understand that single-supply amplifiers with a speaker coupling capacitor need special attention. If the supply voltage collapses too quickly, the speaker capacitor can force current back through the amplifier, and this can damage output transistors. The amplifier's output must have a diode between the output (before the output capacitor) and the supply rail. This provides a discharge path for the capacitor that doesn't involve reverse biased transistors. Fortunately, such amplifiers are now uncommon, and it should not be an issue.
With modern equipment there's really no need to use discharge resistors, but there will always be constructors who, for one reason or another, prefer to reduce the supply voltage as quickly as possible. It's obvious that using a resistor is not the answer, so we need to add some electronics. The idea is to keep the circuitry as simple as possible, but of course it has to work reliably. Fortunately, this isn't difficult to achieve.
Figure 1 - Relay Bleeder Circuit
The above is an example of a relay based discharge circuit. Bear in mind that some AC coil relays have a slight buzz, which will likely be audible, and if so will be very annoying. This is not the recommended way to make a discharge circuit, but some constructors may find it suits their needs. If you have a dual supply, the relay needs DPDT (double-pole, double-throw) contacts, with the discharge resistors using the NC (normally closed) contacts. When AC is applied, these contacts will open, disconnecting the discharge resistor.
A relay version looks simple, but contact erosion from DC will eventually cause it to become intermittent, or fail permanently. You probably won't know that this has happened until you monitor voltages. If one side of a relay based dual supply discharge fails, you will most likely be rewarded by a loud 'thump' from speakers as one rail falls to zero while the other is still at a higher voltage for a short period. This circuit will work, but it's not recommended.
The essential 'ingredient' is an AC sensing circuit, which detects AC and keeps the bleeder disconnected until the mains is turned off. A simple arrangement using this idea is incorporated into the Project 05 preamp power supply, and is used to activate a muting relay when power is removed. Project 33 uses much the same arrangement, and both are known to work very well.
Once the circuit senses that mains power is no longer available, a bleeder resistor can then be switched into the circuit. Because it's turned off as long as power is available, there's no wasted power, and the bleeder can be a low value to ensure a rapid discharge. While the instantaneous power will be high, it's fairly short-lived, so a 5W resistor will usually be more than sufficient to handle the peak power (which may be 25W or more, depending on the design choices made).
Throwing electronics at the 'problem' isn't quite as bizarre as you may imagine. Some equipment uses mains filters, and the capacitors within can (under some conditions) remain charged. Several manufacturers make ICs designed specifically to discharge the capacitors. The TEA1078 (made by NXP) is one example, but it's by no means alone. In case you were wondering, no, you can't use this IC to discharge big filter capacitors - it's designed to reduce the voltage across a 330nF X2 capacitor to less than 60V in under 300ms. It has minimal current capability.
The AC detector simply uses the AC from the transformer to turn on a transistor (Q1) 50 or 60 times per second, maintaining a low voltage across a capacitor as long as AC is present. A simplified version of the circuit for a single supply is shown below, so that the various parts can be examined. Some of the component values will be changed, depending on how quickly you want the capacitor to discharge, but the circuit can be used with no changes with DC voltages from 22V up to 100V. The only reason for the 15V zener diode is to protect the gate of the MOSFET, which is vulnerable to ESD (electrostatic discharge) and any voltage above 20V may cause the insulation to fail. The result is a dead MOSFET.
Figure 2 - Basic Active Bleeder Circuit
The discharge switching device is a MOSFET, because they require almost no current to turn on, and they provide excellent switching capabilities. A BJT (bipolar junction transistor) can be used, but it's nowhere near as good, will dissipate more power, and may require a heatsink. The MOSFET will have to handle up to 15W, but it's only for a few milliseconds. Any MOSFET with a suitable voltage rating can be used, provided you leave a 10-20% safety margin. The IRF520 (N-Channel) and IRF9520 (P-Channel) are suitable for supply voltages up to ±80V. This will be enough for the vast majority of applications.
Q1 is the AC detector, and it will keep C1 discharged (typically below 1V) so the MOSFET can't conduct. When the mains is interrupted, the voltage across C1 rises and the MOSFET turns on. This discharges the filter capacitor (Cfilt, shown as 10mF - 10,000µF) via the discharge resistor. With 150Ω as shown, the voltage will drop below 5V in about 2.5 seconds. There is no need to make it any faster, and the 150Ω discharge resistor can be used with any DC voltage. At 80V DC, it will dissipate a peak power of 40W, but that will drop below 5W in less than 1.5 seconds. A 5W resistor should be able to handle that without difficulty. The MOSFET will dissipate up to 10W at 80V, but typically only for less than 10ms, and it will not need a heatsink. D2 ensures that the voltage across C1 isn't discharged by R2 as the supply voltage collapses.
Because the MOSFET's gate has voltage for a considerable time, it can continue to conduct. D2 prevents C1 from discharging through R2, and enough gate voltage is present to ensure conduction until the output voltage has fallen to zero. C1 will discharge via R3 (2.2MΩ), but that will take a while, because R3 is deliberately a high value. This does not affect the circuit's ability to be re-started, as the first AC cycle will cause Q1 to discharge the capacitor so normal operation resumes immediately.
R1 should normally pass a peak current of around 500µA to the base of Q1. It's not critical, and it will work fine with anything from 200µA up to 1mA. The value is determined using Ohm's law, using the DC voltage as the reference. For example, a transformer with a 25V RMS secondary will provide 35V DC, so R1 is determined by ...
R1 = 35 / 500µA = 70k (Use 68k)
Apart from the MOSFET and Rdis (the discharge resistor), the only other value that changes is R2. It should normally pass around 1mA. If the DC voltage is (say) 80V, R2 will be 82k (and R1 should be 150k). With a nominal 1mA charge current, C1 will charge at a rate of 0.1V/ms, so it takes 10ms for C1 to charge to 1V, or 100ms to 10V. The circuit can also be used with high-voltage supplies (see 'High Voltage' below). Just make sure that the MOSFET(s) are rated for at least 20% more voltage than you'll be using. Compared to a resistive bleeder, this circuit will provide a much faster discharge, and will dissipate almost no power when the equipment is in use - about 33mW with the values in Figure 2.
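The component sizing described above can be sketched for the 25V RMS / 35V DC example. The 10µF value for C1 is an assumption consistent with the quoted 0.1V/ms charge rate at 1mA:

```python
# Sizing the Figure 2 detector parts for a 25 V RMS secondary (35 V DC rail).
# R1 sets ~500 uA peak base current for Q1; R2 charges C1 at ~1 mA.
v_dc = 35.0
r1 = v_dc / 500e-6        # 70k -> use the standard 68k value
r2 = v_dc / 1e-3          # 35k (becomes 82k for an 80 V rail)

c1 = 10e-6                # assumed 10 uF, consistent with the 0.1 V/ms figure
dv_dt = 1e-3 / c1         # charge rate: 100 V/s, i.e. 0.1 V per millisecond
print(r1, r2, dv_dt)
```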
The discharge time is based on a simple time constant, the filter cap (Cfilt) and Rdis (150Ω). The time constant of 10mF and 150Ω is 1.5 seconds, at which time the voltage will be 37% of the original voltage. After two time constants (3 seconds), the voltage falls to 37% of that again - down to about 4.7V (for a 35V supply). This process continues, with the voltage falling to 37% of its previous value for each additional time constant. The decay is asymptotic, so in theory the voltage never reaches zero. In practice, it's generally considered that 10 time constants is close enough for both a full charge or discharge. After 15 seconds (10 time constants) the voltage is only around 1.6mV (when starting from 35V).
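The time-constant arithmetic can be checked directly:

```python
import math

# RC discharge in time-constant steps: 10 mF through 150 ohms, from 35 V.
v0, r, c = 35.0, 150.0, 0.010
tau = r * c                      # one time constant = 1.5 s

# Voltage remaining after n time constants is v0 * e^-n
v_2tau = v0 * math.exp(-2)       # after 3 s: ~4.7 V
v_10tau = v0 * math.exp(-10)     # after 15 s: ~1.6 mV
print(tau, round(v_2tau, 1), round(v_10tau * 1000, 1))
```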
Note that there is a small delay before the MOSFET conducts, because C1 has to charge after power is removed. The delay is about 130ms with the values suggested. This is not an issue, and although it can be reduced, there's no reason to try to do so.
The dual version uses a mirror-image for the negative supply. Q3 and Q4 are PNP and P-Channel devices respectively, and there's no longer a requirement for D1 shown in Figure 2, because the PNP transistor clamps the negative voltage for Q1 and vice versa. One could try to be clever and make the negative discharge circuit a slave to the positive version, but that would end up needing more parts. Everything involved is cheap, and the two circuits will be complementary. Small differences are inevitable, but they should not cause any problems with a sensibly designed circuit.
Figure 3 - Dual Supply Active Bleeder Circuit
Components are calculated in the same way as for the Figure 2 circuit, and nothing is particularly critical. Naturally, all parts need to be rated for the voltage being used, and if you don't need a fast discharge, the value of Rdis can be increased. The only down-side of the dual version is that P-Channel MOSFETs are usually a bit more expensive than their N-Channel counterparts, but the difference should be very small in practice (a few cents at the most).
A useful change would allow the circuit to discharge the supply of a valve amp. This may be 450V or more, and using a bleeder is highly recommended. They are often incorporated into the power supply anyway, because they also act as 'balancing' resistors to ensure the same voltage across each cap. While not strictly necessary (for reasons I won't go into here), 100µF electros will typically use a 220k resistor, with two such pairs in series as shown below. This will discharge to 37% of the original voltage in 22 seconds, not including any current drawn by the valves (which is usually the case, unless they have been removed during testing!). Without valves, the voltage can remain hazardous for much longer than we'd like.
Figure 4 - Active Bleeder Circuit For High Voltage
Using a high voltage MOSFET and with the guidelines shown in section 2, the discharge time can be reduced to under one second, with almost zero wasted power. The discharge resistor should be increased to around 4.7k, and even though the instantaneous power is over 40W, a 5W resistor should be able to handle this with ease (peak current with a 450V supply is just over 96mA). R1 should be around 820k, and R2 should be 470k. Ideally, both will be 1W, not because of power dissipation, but to ensure they can handle the voltage. The voltage across C1 cannot exceed 15V, but 10µF, 63V electros are so common that you wouldn't use anything else.
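As a quick check of those numbers, the sketch below works through the 450V/4.7k case. The 50µF effective capacitance is my assumption (two series pairs of 100µF caps, as described earlier); adjust it to suit the actual supply:

```python
import math

V0 = 450.0      # supply voltage (V)
R_DIS = 4.7e3   # discharge resistor (ohms)
C_EFF = 50e-6   # assumed effective capacitance: two series pairs of 100uF

i_peak = V0 / R_DIS       # peak discharge current: just over 96 mA, as quoted
p_peak = V0**2 / R_DIS    # instantaneous power: ~43 W, but only briefly,
                          # so a 5 W resistor copes with ease

# Time for the voltage to fall to 10% (45 V):
t_safe = R_DIS * C_EFF * math.log(V0 / 45.0)

print(f"{i_peak*1e3:.1f} mA peak, {p_peak:.0f} W peak, ~{t_safe:.2f} s to 45 V")
```

The exponential nature of the discharge is why the instantaneous power rating can be so relaxed: the average dissipation over the whole discharge is far below the peak.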
While further improvements are possible, there appears to be no good reason to add any more parts, because it works just fine as it is. If it were for a military system I'm sure that the extra parts count would be of no consequence, but for 'normal' usage by hobbyists and others who need a discharge system, it's already more than acceptable.
There is one other circuit I found, which was patented in 1996 [ 1 ]. It appears that it will work (at least in the simulator). I have reservations about the original though, for a number of reasons. Only one diode was used as originally patented, and it also required C1 to be a reasonably large electrolytic capacitor (which is undesirable for many reasons). Using two diodes as shown reduces the ripple voltage across C1 to about 620mV peak (vs. 1.3V with one diode), which is a better option. The major change is from a BJT to a MOSFET, and this allows C1 to be much smaller, which means you can use a film cap.
Figure 5 - Active Bleeder Circuit (Based on Patent by Fluke Corporation)
The diodes keep C1 discharged (to within a few hundred millivolts of the supply voltage), biasing off Q1. When the mains is interrupted, C1 rapidly charges via R1, Q1 turns on, and the supply is discharged. The patent drawing showed C1 as an electrolytic cap that was subjected to a small reverse polarity when AC is present, which is not optimal. As shown above, C1 has +100mV (relative to the main supply) during normal operation. The peak charge current is beyond what I'd like to see if only one diode is used (it's a cheap addition, and a single diode isn't recommended).
The original design used a BJT as the discharge 'switch', and that required C1 to be much larger than the value shown. Using a P-Channel MOSFET means far lower dissipation in the operating state, because R1 is a much higher value than can be used with a BJT. If you want to see the original, look up the patent document. While the circuit is clever and uses the absolute minimum number of parts, it's not the one I'd recommend. I like the simplicity, but not the compromises. The original used only four parts, but has many more likely problems than the modified version shown, which saves only two parts over my suggested versions. Still, it's the only viable alternative circuit I could find, an indication that active capacitor discharge circuits probably fall into the 'esoteric' category.
When servicing equipment, and especially valve guitar amps and SMPS, high and possibly lethal voltages may be stored in filter caps. Making contact with 400V or so isn't fun, and it's something that is ... shall we say 'best avoided'. This final circuit is manual - its leads are attached to the filter cap with clips or soldered on, and it needs a well insulated case. The wiring must be rated for the likely voltages you'll encounter.
The choices for the SCR are many, and the BT151-800 or BT152-800 are common and reasonably priced. There are many others (too many to list), so a search of your local supplier's website will turn up something suitable. Mostly you won't need the 800V rating, but it's better to have it and not need it than to need it and not have it. Naturally, you can use a lower voltage rating if you don't expect to encounter more than (say) 600V.
Figure 6 - Manual Discharge Circuit
Make sure that the 'Discharge' button is either recessed or needs some force to activate. An accidental press could damage the power supply if it's still working, and will also cause the discharge resistor to get very hot, very quickly. With a 1kΩ resistor as shown and a 400V supply, the resistor will try to dissipate a little over 160W. You may choose to use a higher value; 2.2k will dissipate around 73W.
The SCR (S1) is normally off, and the neon lamp (NE1) indicates that the voltage is above ~90V. R1 and R2 should be 1W resistors, or use two 220k resistors in series. This isn't for power handling, but ensures that high voltages don't cause the resistors to fail. C1 will charge to 15V, and is discharged into the gate of the SCR to turn it on. The current is limited by R3 to prevent gate damage. When the current through the SCR falls below its holding current, S1 turns off again.
While the drawing shows test clips, the leads can be soldered in place if preferred. Make absolutely sure that they are connected with the right polarity. The circuit will not work if they are the wrong way around, so care is needed. Make sure that the button is never pressed while power is applied, as the connected circuitry may be damaged. If you are lucky, all that will happen is the fuse will blow, but if it's used with a valve rectifier it may be damaged.
With 400V DC and a 220µF filter capacitor (for example), the voltage will be reduced to around 20V in well under a second. It's unlikely that it needs to be any faster than that for normal use.
Personally, I'd rather use the Figure 4 circuit, as it only needs three leads, but is automatic - the capacitor will be discharged as soon as mains power is interrupted. However, I'm not entirely sure I'd be happy using it on a SMPS, because everything is at mains potential. If carefully made, the manual discharge circuit will be safe to use, but as noted above, the push-button must be protected against accidental activation.
Because I don't consider a discharge/ bleeder circuit essential (or even necessary), it's hard for me to recommend using the circuits shown here within an amplifier chassis. However, there may be occasions where you find that, for whatever reason, a rapid voltage reduction is needed. Should that be the case, the circuits shown will do the job, and you can select the discharge speed based on your needs.
The high voltage version is recommended for valve amps and other circuits that use a high voltage but can't discharge the filter caps quickly. Leaving high voltages lurking within a chassis is always somewhat dangerous (particularly for service technicians), and ensuring a rapid discharge means that you are far less likely to get a nasty surprise when working on it. MOSFETs are readily available for most voltages encountered, and the circuit is so simple that it will add little to the build cost, nor will it occupy much space. It can even be made in a small box, with three leads - chassis, transformer and DC, allowing it to be attached while working on an amplifier.
The circuits shown are by no means the only way that an active discharge circuit can be made. There are other possibilities, but most will be more complex. The principles don't change, as it's still essential to detect that the AC has been turned off, and use the detector to turn on the discharge transistor (BJT or MOSFET). The circuits shown here are about as simple as they can be, consistent with good, reliable performance.
As it turns out, I have just the place for the dual supply version of this project. For most high-power amplifier tests, I use a supply that I call the 'monster'. It uses a 1kVA power transformer, and has around 20mF (20,000µF) filter caps for each rail. It's always powered via a Variac so I can set the voltage to whatever is needed. The maximum voltage is around ±90V, and that can do some serious damage. Provided it's powered off with an amplifier connected, it will discharge fairly quickly (typically in about 20-30 seconds or so), but without an amplifier or other load, it holds the voltage for a considerable time. It can be very embarrassing to connect an amplifier to a 'live' power supply, and the dual supply discharge circuit is an ideal addition.
There are no other references, as the circuit I developed appears to be unique. There are a few attempts shown on-line, but none (other than the reference above) that I saw will work very well (some won't work at all, or are poorly executed).
Elliott Sound Products - Bootstrap
It's unclear when (or by whom) the term 'bootstrap' originated, but a web search will (as always) provide numerous answers, most of which are likely to be wrong. It's often described as the rather unlikely situation where a person lifts him/herself off the ground by pulling on his/her bootstraps (a loop at the back of boots to help pull them on). This is not a scientific description by any means, but it does paint an amusing mental picture.
Bootstrap circuits are often misunderstood, partly because they are a bit weird, and partly because there are several different types with very different functions. They are unique to electronic circuits, and while it may be possible to create a mechanical bootstrap 'machine', I can't think of a use for it. There is some mention on the Net about a 'bootstrap' system for air-conditioning systems used in aviation, but that's not relevant here.
'Bootstrap' also refers to an open-source web development framework [ 1 ]. It's intended to make the development of 'mobile-first' websites easier, by providing a collection of syntax for template designs. It (apparently) helps web developers build websites faster, as they don't need to worry about basic commands and functions. It consists of HTML, CSS, and JS-based scripts. This article does not cover it.
In the field of electronics, bootstrap circuits are used to increase input impedance, create 'constant current' sources (particularly [but not restricted to] audio power amplifiers), and to provide a voltage above the main supply rail (Class-D amplifiers, switchmode PSU controllers). A bootstrap system can also be used to allow an opamp to function over a wider than normal supply voltage range, effectively eliminate input protection diode capacitance, or even to make the capacitance of a cable 'disappear'. Unfortunately, the same term is used for all (bootstrap) and it can be difficult to know which is which unless you understand how each one works.
There's even a form of bootstrap circuit used for PCB design, to prevent leakage across the board from upsetting high-impedance circuits. In this mode, it's called 'guarding', and uses a small length of PCB track to encircle a high impedance point. The guard is hooked up to a low-impedance point with the same potential as the input. This is the one application of bootstrapping circuits where it can be used for DC. The guard ring prevents leakage currents from upsetting the circuit's operation, but it doesn't change the input impedance. It's arguable if this really qualifies, but it is a form of bootstrapping IMO.
There are many examples of bootstrap circuits on the ESP website, especially for creating 'constant current' to linearise an amplifier. One thing that's a bit limiting is that all common bootstrap circuits only work with AC. DC operation is not possible because the bootstrap device is a capacitor. Always. If you need a constant current source that works to DC then it must be active (i.e. using an IC, transistor, JFET, etc.). For linear AC, adding bootstrapping usually involves the addition of one resistor and one capacitor. With switchmode converters you add a diode and a capacitor.
There is information on the Net covering bootstrap circuits, but a great deal of it is overly simplistic, explained poorly, or just wrong. Some is seriously wrong, despite lengthy descriptions and scope displays. I don't link to material that's not accurate. There's also a fair bit of other information that's correct, although in some cases it's highly specific to a particular application. In some cases, it looks like authors have made assumptions, but never verified that what they describe is real. This isn't helpful to anyone.
To understand the two most basic (and earliest) forms of bootstrapping, we don't have to delve into complex maths or anything else that's 'challenging'. A simple mental exercise is pretty much all that's required. This can be augmented with a simulation or a bench test (if you have the necessary equipment). At its heart, bootstrapping ensures that the AC voltage appearing across a resistor remains constant. If the voltage across a resistor doesn't change, then the current through it doesn't change either.
This applies whether the circuit is configured for boosting the input impedance or providing a constant current. The two are fundamentally equal in all respects. The goal is to make a resistor appear to have a value that's many times its actual value. As shown below, we can make a 5k resistor behave as if it were over 100MΩ, by ensuring that the AC voltage across the resistor is constant. As noted above, if the voltage across a resistor is constant, then the current through the resistor must also be constant. Ohm's law shows this to be true.
In the examples shown, the voltage source can swing below ground, as would be the case with a transistor (for example) with both positive and negative supply voltages. We need only consider positive transitions for basic analysis. The only part of this that may be a bit confronting is that the impedance (apparent resistance) is different for AC and DC. However, inductors and capacitors are similar, in that AC and DC conditions are very different.
Bootstrapping is an AC process, and while it can (in some cases) be adapted for DC, there are other topologies that achieve a result that's superior and easier to implement. Consequently, only AC applications are considered throughout this article and in (most of) the examples that follow. The two primary applications are increasing the input impedance of a circuit, or providing a constant current to obtain higher gain and linearity from an amplifying device. These are explained in detail below. These two processes are essentially identical, as both rely on making a resistor appear to have a much greater value than its physical resistance (technically, this is impedance, not resistance). In the drawing I've shown a simple voltage generator, but in practice it will be a transistor (BJT, JFET or MOSFET) or even a valve (vacuum tube). The principles are unchanged, but the effect can never be as good in reality as it can with 'perfect' parts.
With an ideal current source, only the voltage changes, but the current remains the same. Predictably, the 'ideal' doesn't exist, but we can get fairly close. In the drawing ('A'), a voltage source is shown, with a resistor supplying the DC current needed for operation. This can range from nA to mA in 'typical' circuitry, but in this case it's 1mA. If the source voltage varies by ±1V, the current through the source must vary by ±100µA (Ohm's law). If the voltage is increased to 5V peak, the current varies from 0.5mA to 1.5mA. A calculation will result in an answer of 10k at any voltage.
In the second circuit ('B'), a buffer is used to isolate the source (the buffer cannot be left out!). C1 couples the output of the buffer to the junction of R1 and R2. This forces the voltage across R2 to remain constant. In all cases, Vout is assumed to be a high impedance (1TΩ was used for the simulations - that's the input impedance of a TL07x JFET opamp). The buffer stage can be an opamp, a BJT emitter follower or even a MOSFET source follower. The gain is expected to be unity, but it will never be exactly 1 - somewhere between 0.999 and 0.98 is generally normal. It must be less than unity - if it exceeds unity you'll be applying regeneration (positive feedback) that boosts the gain, and it may oscillate.
Ohm's law lets us determine the effective resistance (impedance) for each condition shown. The AC values shown are peak. Predictably, the resistor in 'A' can be calculated to be 10k. In 'B', R2 is bootstrapped, and with a unity gain buffer (and allowing for the impedance/ reactance of C1), the effective value of R2 becomes 154MΩ. The current through the source varies by only ±6.5nA (ΔI means change of current). If all parts were 'perfect' (and assuming C1 to be infinitely large), the effective value of R2 becomes 'infinite'. This does not (and cannot) apply to any real circuit.
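The role of the buffer's gain can be reduced to one line of arithmetic: with a buffer of gain A driving the 'bottom' end of resistor R, the signal current through R is reduced by a factor of (1 - A), so the effective value is R / (1 - A). A minimal sketch (which ignores the finite reactance of the coupling cap, and uses the typical follower gains mentioned later in this article):

```python
def bootstrapped_z(r, gain):
    """Apparent AC value of resistor r when bootstrapped by a buffer of
    the given gain. Valid only for gain < 1; at or above unity the circuit
    has net positive feedback and may oscillate."""
    if gain >= 1.0:
        raise ValueError("buffer gain must stay below unity")
    return r / (1.0 - gain)

# Typical follower gains: ~0.9 (JFET), ~0.98 (BJT), ~0.999 (opamp)
for a in (0.9, 0.98, 0.999):
    print(f"A = {a}: 5k looks like {bootstrapped_z(5e3, a)/1e6:.2f} Mohm")
```

Even at A = 0.999 the 5k resistor only reaches 5MΩ; the simulated opamp buffer's gain is closer still to unity at low frequencies, which is how the 154MΩ figure quoted above comes about.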
Circuit 'C' shows an active current source, configured for (close to) 1mA DC as with the others. The output impedance measured at the collector of Q1 is 2.44MΩ (close enough). A bootstrap circuit using real parts can be slightly better, but the difference is usually negligible. However, the active version does something that a bootstrap circuit cannot - it works to DC. This is important for some applications, but not for others.
You may initially wonder why the buffer is so critical, so consider condition 'D' (perhaps that should have been 'F' for 'fail'). I (almost) never show something that's so obviously wrong, but it's a circuit you'll see on many websites and it's claimed to be 'bootstrapped' (it's not). It doesn't work, it can't work, and even the most rudimentary analysis proves this to be true. The capacitor simply shorts R2 for AC, at a frequency determined by R2 and C1. The source 'sees' 10k for DC and 5k for AC at some frequency. This is the opposite of bootstrapping! Although I generally avoid showing things that don't do what the 'author' claims, this had to be included because it's repeated so often, and it requires debunking!
Remember that bullshit is still bullshit regardless of the number of times it's repeated. Mindless copying on the Net is the source of more dis/mis-information than most mortals can handle.
The bootstrap process is possibly best known in a mode where it boosts input impedance. This can be very useful, because it's possible to get very high Z-in (input impedance) even with low-value resistors. Normally, the input impedance of an amplifier stage is determined primarily by the biasing resistor(s), but if high values are used this will increase noise. Any resistor at a temperature of greater than 0K (-273°C) makes noise, as covered in the article Noise In Audio Amplifiers. This can be reduced by using lower values, and applying bootstrapping to obtain the desired input impedance.
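The noise penalty of a high-value bias resistor is easy to quantify with the Johnson (thermal) noise formula, e&#8345; = √(4kTRB). A minimal sketch, assuming room temperature and a 20kHz audio bandwidth:

```python
import math

K_BOLTZMANN = 1.380649e-23  # J/K

def noise_voltage(r_ohms, bandwidth_hz, temp_k=298.0):
    """RMS thermal noise voltage of a resistor: sqrt(4*k*T*R*B)."""
    return math.sqrt(4 * K_BOLTZMANN * temp_k * r_ohms * bandwidth_hz)

# A 1 Mohm bias resistor vs a bootstrapped 22k, over a 20 kHz bandwidth:
print(f"{noise_voltage(1e6, 20e3)*1e6:.1f} uV vs "
      f"{noise_voltage(22e3, 20e3)*1e6:.1f} uV")
```

Noise voltage scales with the square root of resistance, so a 22k resistor bootstrapped up to an apparent 10MΩ is far quieter than a physical 1MΩ (let alone 10MΩ) part.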
One thing to be aware of ... bootstrapping involves positive feedback. Most people know that negative feedback improves the bandwidth of an amplifier and reduces distortion. Positive feedback can do the opposite. There may be a small increase in distortion (only for input bootstrapping), and the bandwidth may be reduced. Neither is guaranteed though, as it depends on the overall topology of the circuit. In normal use it's rarely a problem.
The general idea is described in detail in the article/ project High Impedance Input Stages / Project 161, and that covers the bootstrap circuit in great detail. This is a very common application, and it can work very well. A small miscalculation can have unexpected ramifications though, so a full understanding is essential. When used to boost input impedance, positive feedback is involved, and that can lead to instability. The gain of a bootstrapped input stage must be less than unity. A JFET follower will typically have a gain of ~0.9, a BJT follower will be around 0.98, and an opamp buffer has a gain of 0.999... The gain of an opamp buffer is highest (closest to unity) at low frequencies, and it falls ever so slightly with increasing frequency (within the device's bandwidth).
Common circuits such as valve cathode followers or JFET source followers may end up with a bootstrapped input without you realising it. Two circuits are shown next, and both feature bootstrapped input resistors (R1 in each circuit). Most people will assume the input impedance is 1MΩ, but it's not. The 'lower' end of R1 in each case isn't at 0V AC, but is at an AC voltage of about VIn / 1.1 (i.e. about 0.9V for a 1V input).
The two circuits shown are more-or-less equivalent (inasmuch as a valve and FET can ever be), and both have an 'accidental' or perhaps 'incidental' bootstrapped input. They were simulated, and I made no real attempt to optimise either circuit. If the cathode/ source resistor (R2) is bypassed with a capacitor (C3, optional), the effect is improved somewhat. Without bypassing, the input impedances are 25MΩ (12AX7) and 5.9MΩ (JFET). The valve circuit benefits the most from a cathode bypass, with the input impedance increased to 54MΩ. There's a more modest increase with the 2N4584, to 22MΩ. The input impedance falls at high frequencies because the grid/ gate capacitance becomes dominant.
These aren't usually thought of as being bootstrapped, but they are whether you want it or not. There is no form of unwanted interaction - the input resistor is simply buffered by the cathode/ source. The application of 'bootstrapping' occurs whenever you have a buffered version of the input signal applied to the 'other end' of the input resistor. In an ideal case, the voltage at both ends of R1 would be equal, meaning that there can be no current, and the resistor no longer exists (for AC) as seen by the source. It still passes DC bias current though.
Everything changes when you use an opamp, because when configured as a unity gain buffer, the output level is almost identical to the input. With valves, JFETs or transistors this is not the case (their gain is always slightly less than unity). Provided there are no added filter poles, simple bootstrap circuits as shown in Fig. 2.1 are completely benign, despite the use of positive feedback.
Unlike positive feedback that may be used to boost gain (as was common in very early 'regenerative' radio ['wireless'] receivers), the positive feedback doesn't increase the gain or become unstable at high frequencies. Instead, it causes low-frequency problems, where the circuit may end up functioning a little like a high-pass filter (of obscure lineage). This is explained in detail in the referenced ESP article, but is generally not discussed elsewhere. The 'filter' action can create low-frequency boost, and it's important to ensure that it doesn't cause problems within the frequency range of interest.
The bootstrap circuit can be added to a follower or a gain stage. Both are shown below, and the relative responses are almost identical. The gain stage simply elevates everything by 20dB, since it's configured with 20dB of gain. The added bootstrap components (R2, R3 and C2) increase the gain very slightly (0.06dB) as the network is effectively in parallel with R6 (200Ω).
The circuits shown above have an input impedance of more than 10MΩ from 20Hz to just under 20kHz, and the upper frequency can be raised by using a faster opamp. The 1µF cap (C3) is deliberately selected to roll off frequencies below 7Hz to reduce the amplitude of the low-frequency peak. Without that, there's a peak of 3dB at 0.7Hz, caused by the interaction of the source impedance (100k), and the value of C1 and C2. These form a complex relationship, with R1, R2 and R3 forming a peaking filter. If the source impedance is changed, so too is the peak frequency, but fortunately not by a great deal - provided it remains high. If the source impedance is reduced to (say) 10k, the peak increases to 7dB! There's an impedance dip at the 'resonant' frequency, and with the circuits shown it falls to 124kΩ at 0.7Hz.
R3 is included to suppress the peak, and it's only ever needed when you bootstrap an opamp. It's 'implied' with a BJT (bipolar junction transistor) or JFET (junction field-effect transistor), because they have a gain that's less than unity. The Fig. 2.2 circuit still works without R3, and the input impedance is increased a little - except at the peak frequency of 0.7Hz. The amplitude of the peak is greater without R3, as you'd expect.
One thing you discover quite quickly is that simulations and 'real life' can be quite different. The circuit of Fig. 2.2 (Unity Gain) may simulate perfectly, but in my opamp test board (using NE5532 opamps), there is noticeable rolloff at high frequencies when the source is 1MΩ. A -3dB frequency of 25kHz isn't much good for audio, but very few signal sources have such a high impedance, so it's usually not going to cause any problems. The rolloff is caused by stray capacitance and the input impedance of the opamp. It's generally better to use a high input impedance opamp (e.g. TL072, OPA2134, etc.) if you need to cater for high impedance sources.
With the Fig. 2.2 circuits, you could be excused for thinking that the rolloff (before C3) would only be 6dB/octave, since it's controlled by the value of C1 and the effective value of R1 (say 10MΩ). However, the rolloff below the cutoff frequency is 12dB/octave (second-order) because of C2. Knowing this tells us that we have created a filter, accidentally or otherwise. The response eventually levels out to 6dB/octave, but only at unrealistically low frequencies (~100mHz or 0.1Hz). This may be covered in books on analogue electronics, but it's rarely mentioned. As a result, a bootstrap circuit that uses (what appear to be) sensible component values (e.g. 10nF for C1 and 10µF for C2) causes low-frequency boost that may be most unwelcome. With the combination of 10nF and 10µF, there would be a peak of 20dB at ~6Hz. This can be mitigated to a degree by including the high-pass filter at the output.
Understanding the implications of each value is important, otherwise you can face problems, and not know why it's happening. Since we now know that we've created a filter, it should also be apparent that like all filter circuits, the source impedance will affect its performance. The filter may not be a common type (e.g. Sallen-Key, Multiple Feedback, Fliege, etc.), but it most certainly is a filter, and as such should be designed for the expected source impedance. If the source is capacitive, that complicates the process. Most capacitive sensors need a high impedance preamplifier, but the total capacitance has to be used for the design - this includes the capacitance of the cable.
When a capacitive sensor is used (typically a piezoelectric device), its output is reduced by cable capacitance. With a 1nF sensor and a 1nF shielded cable (perhaps 5 metres of cable at 200pF/ metre), the output level will be half that expected. The piezo/ cable circuit is a capacitive voltage divider, which works in the same way as a resistive divider. In this case, the capacitive load 'seen' by the input bootstrap circuit is 2nF, as the two are effectively in parallel. You can design for flat response with this arrangement, but if the sensor or cable is changed, the circuit has to be changed to suit.
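The capacitive divider is simple enough to check directly. Note that the ratio uses the sensor's capacitance over the total, because a capacitor's impedance falls as its value rises (the opposite of a resistive divider). The 5 metres at 200pF/metre is the example from the text:

```python
def capacitive_divider(v_sensor, c_sensor, c_cable):
    """Output of a capacitive voltage divider: the sensor drives its own
    capacitance in series with the cable capacitance to ground."""
    return v_sensor * c_sensor / (c_sensor + c_cable)

c_cable = 5 * 200e-12                            # 5 m of cable at 200 pF/m = 1 nF
print(capacitive_divider(1.0, 1e-9, c_cable))    # half the signal is lost
```

Doubling the cable length to 10 metres would drop the output to one third, which is why the design must be redone whenever the sensor or cable changes.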
The graphs were made using the circuits in Fig. 2.2, and only C1 was altered. It makes very little difference if the circuit has gain or not, as the effect is almost identical. Four response curves are shown, using four different values for C1. As the value is made larger, the peak moves to a lower frequency and becomes better damped. If R3 (22Ω) is omitted, the 6Hz peak is increased by about 1.5dB, with less effect with higher capacitance. Once the input cap is large enough, there is no peak, as shown for 10µF. Of these, 1µF is the optimum choice, but only for a 100k source impedance. If that changes, so does everything else. The response shown is without the final high-pass filter (C3, R4).
All amplifying devices can use a bootstrapped input. It works with valves (vacuum tubes), BJTs, JFETs, MOSFETs and opamps. With any device that doesn't have almost perfect unity gain (i.e. anything that's not an opamp), the input impedance isn't increased as much, and the chance of a high-Q bandpass filter (the red trace for example) is minimised. R3 tames the peak to some extent, but it also reduces the effective input impedance.
The theory behind this is quite simple. C2 passes the buffered input signal to the junction of R1 and R2. R2 is only present to ensure there's a DC path to ground so the opamp will function. The voltage across R1 does not vary with the input signal because of C2, and if the same (AC) voltage appears at both ends of a resistor, there can be no current flow. If no signal current can flow through R1, then it must appear to have a very high value (many, many times the actual value). In this case, it becomes equivalent to at least 10MΩ, despite being only 22k. The bandpass filter is created by complex phase relationships, and I'm not going to try to analyse it because it's an unwanted complication. You only need to be aware of it so that your design isn't compromised.
I've left the simple emitter-follower circuit until last, because it's the least useful. This is due to the requirement for base current to bias the transistor. The input impedance of a transistor emitter follower circuit is far lower than that of a valve, JFET or opamp. The base requires current, and that has to be provided by the biasing circuitry and the signal source. The input impedance is also directly related to the output impedance. This includes the emitter resistor and the external load.
The simple circuit shown above has an input impedance of about 620k without C2, due largely to the input impedance of Q1 itself. The impedance at the base of Q1 is roughly equal to the load impedance (4.7k) multiplied by the hFE of Q1, in this case about 350 (200 to 800 is 'typical' for a BC549). We have ~1.6MΩ for Q1 and 1MΩ for R1, and as they are in parallel, the result is ~620k. If R1 is bootstrapped with C2, the effective resistance of R1 is very high (> 100MΩ above 10Hz), but the transistor's input impedance is dominant. As there's also a small additional load on the emitter circuit, the transistor's input impedance is reduced a bit. The simulator tells me that ZIN is about 1.4MΩ for the circuit shown.
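The ~620k figure follows directly from the parallel combination described above. The hFE of 350 and the 4.7k load are the values quoted in the text; the 150MΩ bootstrapped value for R1 is illustrative:

```python
def parallel(*rs):
    """Parallel combination of any number of resistances."""
    return 1.0 / sum(1.0 / r for r in rs)

hfe, z_load, r1 = 350, 4.7e3, 1e6
z_base = hfe * z_load                  # ~1.6 Mohm looking into the base

print(parallel(z_base, r1) / 1e3)      # ~622 k: close to the ~620k quoted
print(parallel(z_base, 150e6) / 1e6)   # ~1.6 M: with R1 bootstrapped, the
                                       # transistor itself now sets Zin
```

This makes the limitation obvious: once R1 is bootstrapped out of the picture, the base impedance (hFE × load) is the ceiling, and only a Darlington or a different device can raise it further.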
While bootstrapping certainly works with BJTs, it's not possible to get extremely high input impedances with a simple circuit. This limits the usefulness of the technique unless you're willing to add more complex circuitry. A Darlington transistor can increase ZIN to over 10MΩ, but it's still not ideal. If you need ultra-high ZIN, then an opamp or a discrete JFET will almost always be a better proposition. This will be most of the time, since valves are very expensive (and may have limited availability), but suitable JFETs can be found (they are rapidly becoming difficult to find though).
The same circuit as Fig. 2.4 can be used with a small-signal MOSFET (e.g. 2N7000). While it will definitely work, MOSFETs are fairly noisy, so it's not a good solution for low-level signals. Because of the high transconductance of MOSFETs (at least when compared to JFETs), a resistor will almost certainly be needed in series with C2 to prevent peaking (as occurs with an opamp). If you simply substitute a 2N7000 for the BC549, use a 2.2k - 10k resistor which will minimise problems within the audio band. A higher resistance means a lower input impedance.
One thing to be aware of is the frequency dependence of the bootstrapped input. All circuits will show a reduction of input impedance as the frequency gets above (about) 5kHz or so (it depends on the device and the bootstrap component values). This may or may not be a problem, depending on the application. It's worth noting that most instrumentation systems that require a high input impedance use a high-value resistor (e.g. 1GΩ or more), and do not use bootstrapping. Frequency response issues and variable input impedance are unwanted effects for measuring instruments.
In Project 13, I showed a simple 2-transistor circuit that is quite extraordinary, despite its simplicity. It uses a bootstrapped current source (R2). The bootstrap capacitor (C2) ensures that almost the same AC voltage is present at both ends of R2, and logically, if the voltage across a resistor is constant, so too is the current through it. The nominal current through R2 is ~160µA, and with C2 it varies by only 74nA. With the values shown, the voltage across R2 is about 6.3V DC, but only 1.5mV AC (with a 1V output). The effective impedance of R2 can exceed 10MΩ, simply due to the bootstrap capacitor. The result is that the load impedance seen by Q1 is raised by more than two orders of magnitude, increasing its gain substantially, and because the current barely changes, the linearity is much better than you'd expect from a simple transistor stage. The effective resistance of R2 is greater than 4MΩ at 10Hz, rising to 12MΩ above 100Hz.
Without C2, the open-loop gain ('RG' open circuit) was simulated to be about ×260. When C2 is connected, the gain increases to over ×600. Not only is the gain more than doubled, but distortion is reduced by a factor of 1.5. That really is a win-win - more gain and lower distortion, with the addition of one resistor and one capacitor. At maximum output, the voltage at the junction of R2, R3 and C2 can exceed the supply voltage - this is true bootstrapping!
+ +An active version is shown next. In theory, this should be 'better', but the difference is academic. There will always be differences between the two, but the differences between the transistor parameters used in any two 'equivalent' circuits will usually be far greater than any difference due to the topology. The results shown here have all been simulated, and simulators have identical transistors of a given type, and exact value resistors.
+ +An active CCS (constant current source) is shown above for reference. Both bench tests and simulations show the two different versions to be virtually identical. There are differences of course, but nothing that will be audible, and even measurements may not reveal any change. There is a small difference due to the active CCS having slightly lower current and a higher impedance at low frequencies. This small advantage is only true up to ~2.5kHz as simulated. Real life will be very similar. Component parameter spread will cause greater differences than anything else.
One circuit that's quite common on the Net is supposed to be a linear 'time-base' sweep circuit. It uses bootstrapping to create a current source that charges a capacitor for a linear sweep. Whoever published it first neglected to point out its many failings, which make it pretty much useless (well ok, it's utterly useless). Predictably, it's not shown here because there is no point. It could be improved quite easily, but I don't expect that it would be of much interest. If you're building a time base, you won't be cutting corners with sub-optimal circuitry that's been fudged to provide a not-quite-barely-acceptable result. I have no idea how or why such flawed ideas get so much coverage on the interwebs.
+ +The same bootstrap arrangement seen in Fig. 3.1 is used for power amplifiers, and it's used in P12A (El Cheapo), P3A, P68, P101, P127 (The TDA7293 IC uses a bootstrap circuit internally), and P217 (low power 'practice' amp). In short, nearly all ESP power amp circuits. In these roles, the bootstrap circuit works in exactly the same way as described above, except the resistor values are lower because the Class-A amplifier stage (aka VAS - voltage amplifier stage) needs more current. The measured difference between an active current source and a bootstrapped current source is generally tiny (assuming an optimised design).
+ +Using bootstrapping seems to have fallen from favour with most designers, and I don't know why. There's no doubt that an active constant current source works very well, but so does bootstrapping. There are (slightly) fewer parts, and the average current through the VAS will vary slightly as the supply voltage changes, but that usually makes little to no difference to the amp's operation (and it often happens with both methods anyway). Certainly, no-one has ever complained about the sound quality of any of my designs. Some early single supply amps made cunning use of the output coupling capacitor to provide bootstrapping - an example is Project 12.
The bootstrap circuit uses R9, R10 and C5, forcing the voltage across R9 to be essentially constant. As already noted, if the voltage across a resistor is constant, so too is the current through it. Just like the previous example, the gain of Q4 is boosted due to the high impedance load, and linearity is greatly improved. An active current source makes surprisingly little difference to the gain or linearity of the VAS transistor (Q4), but there is a marginal increase in circuit complexity. I doubt that the performance difference would be audible (even tiny differences can be measured), but an extra active device (a current source transistor) might affect high-frequency stability. The transistor will always have a finite frequency limit, where the bootstrap circuit will work at almost any frequency. The only 'trap' is making the value of C5 too low - it must be large enough to ensure good linearity at the lowest frequency of interest (typically 20Hz, but most amps are expected to operate down to lower frequencies).
+ +The effective impedance of R9 is around 170k. That's an increase of more than ×50, without upsetting the DC operating conditions. With the 100µF cap (C5), the effective impedance is greater than 50k at 10Hz, and is >100k at 20Hz. For an electrolytic capacitor in this role the ESR (equivalent series resistance) is irrelevant, as it's too low to cause a problem. The cap will generally last for 20 years or more, as the ripple current is very low. I don't recall ever having to replace a bootstrap capacitor.
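The frequency dependence can be estimated with a simple model: the R9/R10 junction is driven by the output (follower gain A, slightly less than unity) through C5, with R10 returning the junction to the AC-grounded supply rail, so the effective impedance of R9 is R9 / (1 − A·R10/(R10 + ZC5)). The component values below (R9 = R10 = 3.3k, A = 0.98) are assumptions chosen to reproduce the figures quoted in the text, not values from any specific ESP schematic:

```python
import math

def z_boot(f, r9=3.3e3, r10=3.3e3, c5=100e-6, a=0.98):
    """Effective AC impedance of a bootstrapped collector load R9.

    The junction of R9/R10 is driven via C5 by the output signal
    (follower gain 'a'); R10 returns the junction to the AC-grounded
    supply.  Zeff = R9 / (1 - a * R10 / (R10 + Zc)).
    """
    zc = 1.0 / (2j * math.pi * f * c5)   # capacitor impedance (complex)
    k = a * r10 / (r10 + zc)             # junction voltage / output voltage
    return abs(r9 / (1.0 - k))

for f in (10, 20, 1000):
    print(f"{f:5.0f} Hz : {z_boot(f)/1e3:6.1f} k")
```

With these assumed values the model gives >50k at 10Hz, >100k at 20Hz, and a high-frequency limit of R9/(1−A) = 165k, in line with the ~170k quoted above.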
+ +For anyone who accepts the bogeyman stories about how 'bad' capacitors are, then the bootstrap circuit can't possibly be any good. More rational people understand that capacitors are perfectly fine when used appropriately, and there's no reason to lose sleep just because there's one extra cap in a circuit. I don't subscribe to these silly claims, as regular readers will know.
+ +The bootstrap circuit can be replaced by an active current source as shown in Fig 2.2, but reversed polarity (it's referenced to the negative supply). That will add one or two transistors, two resistors, and perhaps a couple of diodes. The performance difference is generally small, so the bootstrap circuit wins for component count and overall cost. It's also an opportunity to use a very clever circuit. Despite the constant current supply to the VAS, its current still varies because it has to provide current into the driver and power transistors. This happens regardless of the type of current source - active or bootstrapped.
+ +There is a limit though, and bootstrapping only works with AC. If you like listening to DC, then use an active current source. This will affect a very small number of listeners.
The 'amplification factor' (aka mu or µ) for a 12AX7 is generally taken to be 100. This is the maximum gain you can get from the valve, and it is rarely achieved in practice as it requires an impossibly high plate load resistance. However, if the plate resistor (R3) is bootstrapped as shown above, it becomes (effectively) much higher than its physical value (over 4MΩ as simulated). The circuit has a gain of 39dB, just 1dB shy of the maximum (40dB). The distortion is also reduced dramatically, and with a 10V peak output (7V RMS) it's only 0.16%. This is a circuit that I've built and tested, although I used a 12AY7 in my original version, developed over 40 years ago.
+ +Without bootstrapping (R2, R3 replaced with a single 200k resistor), the gain is reduced to 36dB, and distortion rises to 0.48% with only 4.7V RMS output. Predictably, the distortion falls with reduced level. The stage has enough gain to allow the application of feedback to get a reduced gain with even lower distortion. With a gain of 20dB (×10), the distortion is 0.0036% with 1.4V RMS output. This is almost unheard of for a valve stage.
You're unlikely to see this often. The technique (as near as I can tell) was first published in August 1947 in Wireless World (British publication), and was applied to a pentode. The general idea was published on the ESP site in 2009 (see Valve (Vacuum Tube) Preamps), but it existed on my site well before that. I constructed my first prototype in around 1980, independently of the original design (which I had not seen at the time). While it remains an interesting circuit that works very well, it's no longer viable due to the cost and comparatively poor performance of valves you can get today (and they are inordinately expensive).
+ + +Many people will have seen this final version of bootstrapping, but were unable to work out what it does. This is no surprise, as explanations in datasheets are generally lacking. You'll find info on the capacitor size needed and the requirements for any external diode (some are internal to the IC), but not much on how it works. This arrangement is most common in Class-D amplifiers, where the cap is indicated as 'CBOOT' or similar. You can probably work out how it functions if you read the datasheet thoroughly, but some are many, many, pages long, with bits of info scattered throughout.
+ +It's debatable if this is a 'real' bootstrap circuit or a simplified charge-pump, but that's immaterial if you want to know how it works. A simplified diagram is shown next, with the essential parts being the output switching MOSFETs, CBOOT, DBOOT and VCC (a 12V supply rail). VDC (the input voltage that's switched by the MOSFETs) can be anywhere from 50V to 200V, sometimes more. Around 400V DC is common for a high-power SMPS (switchmode power supply) that operates from rectified mains voltage.
+ +Unlike the previous forms of bootstrapping that worked continuously in the time domain ('pure' analogue), this type is periodic. CBOOT is charged only when Q2 turns on, pulling the output low or to ground. Current flows from VCC to CBOOT via DBOOT - the bootstrap diode. When Q2 turns off, Q1 turns on, and the gate driver uses the charge stored in CBOOT to provide the voltage and instantaneous current demanded by the MOSFET's gate. Without this bootstrap circuit, it would be necessary to provide an additional floating power supply to enable the gate voltage to exceed VDC, typically by 12V or so. This process is repeated at the switching rate, and can be anywhere from 50kHz to 500kHz.
+ +I've kept the circuit as simple as possible to eliminate any confusion. The PWM (pulse-width modulated) input drives Q2 directly, via the 'low-side' gate driver. The 'high-side' consists of a level-shifter and the high-side gate drive. Both of these are supplied by Vboot, which will be 12V greater than the incoming DC (400V in this example). When Q2 turns on, the output is close to ground, so current flows from VCC (12V), through Dboot and charges Cboot to 12V. When Q2 turns off, the inverted input signal provides gate voltage to Q1 via the level shifter, using the 12V stored on Cboot, referred to the output. The current path when Q2 is turned on is shown by the green arrow.
+ +Vboot is not a steady DC voltage - it varies from 12V (Q2 on) to 412V (Q1 on). There is additional circuitry (not shown) that prevents Q1 and Q2 from turning on at the same time (that would short-circuit the 400V supply). Every time Q2 turns on, the charge in Cboot is replenished. Without the extra voltage, Q1 would be unable to turn on at all, as there's no gate voltage available. The points marked as 'HO' and 'LO' simply mean 'high out' and 'low out' (gate drive signals).
+ +Higher frequencies require less capacitance, but a faster diode. The bootstrap diode will always be a fast switching type - standard diodes (e.g. 1N4004 etc.) are too slow, and cannot turn off quickly. This arrangement is ubiquitous in switching supplies and Class-D amplifiers. There are quite a few circuits where the bootstrap system relies on some additional internal circuitry, and it's not always clear that it is bootstrapped.
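A common first-pass sizing rule for the bootstrap cap is to work from the MOSFET's total gate charge and the droop you can tolerate on the bootstrap rail per cycle, then apply a generous margin to cover leakage and the gate driver's quiescent current. The figures below (100nC gate charge, 0.5V allowed droop, ×10 margin) are illustrative assumptions, not values from any particular datasheet:

```python
q_gate = 100e-9   # total gate charge Qg (assumed, hypothetical MOSFET)
v_droop = 0.5     # allowable sag on the bootstrap rail per cycle (assumed)
margin = 10       # safety factor for leakage and driver quiescent current

c_min = q_gate / v_droop    # bare minimum: C = Qg / dV  ->  200 nF
c_boot = margin * c_min     # practical choice            ->  2 uF
print(f"C(min) = {c_min*1e9:.0f} nF, choose C(boot) ~ {c_boot*1e6:.0f} uF")
```

Note that the capacitor is refreshed every switching cycle, so higher switching frequencies allow a smaller CBOOT for the same droop (but demand a faster diode, as noted above).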
+ +Note that I haven't tried to explain the start-up process. Q2 must turn on first, otherwise there is no opportunity for Cboot to obtain a charge. ICs using this scheme are very common, and can be found in countless push-pull switching circuits. There are complete ICs that include signal conditioning, level-shifters, bootstrap connections and everything needed to switch a pair of MOSFETs.
+ +It may seem like there's at least a bit of 'black magic' involved, but it's actually quite straightforward once you understand the concept. There are similarities between this and the power amplifier bootstrapping described above, but the difference is that the audio circuit has to operate over a wide range of output voltages (it's analogue audio), whereas for an SMPS the output is either 'high' or 'low' - 400V or zero for this example. The 400V supply can be replaced with any other (positive) voltage - even as low as 12V!
For this to make complete sense, it's useful to examine a typical MOSFET gate driver IC, in this case the IR2110/ 2113. The high-voltage supply is not connected to the IC. The 'VDD' pin is the logic supply for the IC. 'VB' is the bootstrap voltage. The entire high-voltage section (everything after the 'HV Level Shifter') has its reference voltage (Vs) switching between 0V (GND) and +400V for this example. The level shifter is really the heart of the circuit, and it's another clever circuit (but outside the scope of this article). The upper MOSFET driver circuits (UV [under-voltage] detect, pulse filter and other logic) have their supply voltage (VB) switching from 12V to 412V.
Another class of IC that can use bootstrapping is the charge-pump. These often use a system that's very similar to that shown for the SMPS controller/ IR2110, etc. There is only one voltage applied, and the charge-pump doubles the input voltage. This allows you to get a +10V supply when the only voltage available is +5V (or ~24V from 12V). An example is the Microchip TC7660, with the only real difference between that and the conceptual circuit being that it uses a synchronous rectifier (aka 'ideal diode') to improve efficiency. These are not high-current devices, and are generally limited to around 200mA or so maximum, with <50mA being more typical. There are variations designed to generate a negative voltage, and they're available from all the major IC manufacturers.
+ +Q1 gets the high gate voltage needed (+24V) so it can turn on, and some of the charge on Cboot is passed to Cout each time the output switches high (Q1 on). One of the main reasons that the bootstrap circuits are so common is that N-Channel MOSFETs are more readily available and have better performance than P-Channel devices. The small extra effort of adding the bootstrap means that the entire IC can be built using N-Channel MOSFETs. While the circuit shown in Fig. 4.3 is conceptual (it has been simulated), if you were to build it, it would work as expected (use 2N7000 MOSFETs). The switching frequency will typically be at least 100kHz. The output voltage can be expected to be about 22-23V (perfect doubling isn't possible due to diode and MOSFET losses).
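The 22-23V figure can be estimated from a simple loss budget: the flying capacitor charges to the supply less a diode drop, and each MOSFET switch drops a little voltage at the load current. The diode drop and on-resistance below are assumed typical values (a small-signal diode and a 2N7000), not measured figures:

```python
v_in = 12.0      # supply voltage (V)
v_diode = 0.7    # charge diode forward drop (assumed)
i_load = 20e-3   # load current (A)
r_on = 5.0       # 2N7000 on-resistance, ohms (assumed typical)

v_cap = v_in - v_diode - i_load * r_on   # flying-cap charge voltage
v_out = v_in + v_cap - i_load * r_on     # cap stacked on supply, less switch drop
print(f"Vout ~ {v_out:.1f} V (ideal doubler would give {2*v_in:.0f} V)")
```

This sketch ignores ripple and switching losses, but it shows why perfect doubling isn't possible, and why the synchronous-rectifier ('ideal diode') versions do better - they recover most of the 0.7V diode loss.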
+ +Like any switching circuit, a charge-pump generates EMI (electromagnetic interference), and there is no regulation. As you approach the maximum allowable output current, there may be substantial ripple superimposed on the DC. With the version shown (and a 100kHz oscillator) the ripple is 12mV p-p (about 3.5mV RMS) at 20mA output. This can be reduced with extra filtering, but then you lose the main advantages (low cost and small PCB real estate).
Most explanations of charge-pump circuits show a number of switches that are used to a) charge the bootstrap cap, and b) connect it in series with the supply. These are quite valid, but they don't explain how the circuit actually works. Long before charge-pumps, there was the Cockcroft-Walton voltage multiplier - if you don't know about it, look it up. Generally using multiple stages, these can generate very high voltages (kilovolts), but at low current. This isn't particularly relevant here, but it shows that voltage multiplication is not new.
Bootstrapping is one of those things that doesn't appear to make sense until you look into exactly how each version works. The techniques have been used for many years, and the term 'bootstrap' has even been applied to computer software (e.g. a 'bootstrap loader') - a small piece of code that is executed when a PC or similar is powered on, and which loads the operating system. The details have changed over the years, but the term survives in 'booting' a computer.
I've not been able to determine when bootstrapping was first used or in what form. No early valve equipment that I've come across used it (at least not that I can recall), and it seems to have appeared along with transistorised audio amplifiers. The Mullard 10-10 (published in the early 1960s) is one example, but many other amps of that era used the same idea. The most common usage is to create a constant current source, but bootstrapped inputs to obtain very high input impedance are also widely used.
+ +I suspect that one of the reasons that bootstrapping has fallen from favour is due to many amps being designed to operate to DC. Since bootstrap circuits are AC only, that means that DC performance will be (marginally) degraded. I've never seen DC operation as a genuine requirement, and in general it's a very bad idea. This has been discussed at length elsewhere on the ESP site, but suffice to say that no music contains DC, no loudspeaker system can reproduce it, and we can't hear it anyway. To abolish a perfectly viable circuit technique to obtain amplifier operation to DC is pointless at best.
+ +Unfortunately, searching for 'bootstrap circuit' (or pretty much any other similar search term) provides thousands of 'hits', but most are related to the HTML/ CSS application. Including '-html' (to remove references to HTML) gives mostly hits on gate drivers, without many references to the other forms. There are several references to one circuit that claims to be bootstrapped but is completely wrong. In all, it's not fun to try to get information on linear bootstrapping circuits.
+ +This short article hopefully explains the three main types of bootstrapping in a way that's easily understood. The circuits shown have all been simulated to ensure they function as described, and several (particularly bootstrap drivers for audio power amps) have been with us for a long time. The 'accidental' bootstrap obtained with cathode/ source biased valves/ JFETs is one that you probably won't see described as such, but it's real.
+ +Another ESP article I suggest that you read is Using Current Sources, Sinks & Mirrors In Audio, as that goes into greater detail on the improved linearity you can get with a current source/ sink as a load for most amplifying devices (although valves aren't covered).
Elliott Sound Products - Volume Filling a Reflex Box
Attenuating rear output (removing 'boxiness') has become increasingly important, because the satellite plus subwoofer speaker system is now almost universal for home theatre applications. The article about QB5 alignments illustrates a way of overcoming a basic limitation of this scheme: if a simple two-way satellite is used, getting satisfactory output down to 80-100Hz with a typical 125-165mm (5-6.5 inch) driver can be a problem without a filter-assisted reflex alignment.
Another problem is that since the low-frequency driver is required to operate over an approximately 80Hz-3kHz band, rear radiation reflecting back through the cone becomes an issue, and the sealed-box solution of volume filling presents potential problems with reflex loading.
+ +What follows is a discussion about this problem, and some measurements of a box stuffing scheme intended specifically for bass reflex (i.e. vented) loudspeakers.
+ + +The chart of Figure 1 indicates the sort of attenuation we can expect from the usual 25mm lining, as recommended for most bass reflex projects.
+ +
Figure 1 - Attenuation of Back Radiation
The above shows reflection (red) and absorption (green) of 50mm polyester, approximately 25kg/m³. The peaks at 1kHz and 2kHz are due to standing waves in the test fixture. As can be seen, the polyester fibre does not start to provide any really useful attenuation until around 1kHz, and over the range we want, we can expect no more than 10-15dB with a typical 25mm covering at the standard density.
Figure 2 shows the attenuation per metre we can expect from glass fibre at two packing densities (from Bradbury [ 1 ], p. 407).
+ +
Figure 2 - Variation of Attenuation with Packing Density
Volume filling of a reflex enclosure can present a potential problem because the filling acts to decrease the effective port Q and reduce its output by increasing the loss resistance [ 3 ]. As an example using the boxnotes download [ 4 ], the resonances inside a 10 litre 167 x 267 x 200mm box are ...
With an average path length reflected back to the driver cone of 0.802 metres, and using 150dB per metre, we can (on average) attenuate the rear radiation by about 120dB by this calculation - a compelling reason to volume fill the box. A difficulty we face is that this requires a high packing density (100+ kg/m³) of filling, and the vent resistance might well be high enough to severely attenuate the port output.
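These attenuation figures are simple products of path length and attenuation per metre; a quick check of the arithmetic (0.802m at 150dB/m gives ~120dB), with the 50-60dB/m 'aim point' figures from later in the article included for comparison:

```python
def fill_attenuation(path_m, db_per_m):
    """One-way attenuation (dB) through a fibrous fill."""
    return path_m * db_per_m

# High-density fill (100+ kg/m3): ~150 dB/m over a 0.802 m average path
dense = fill_attenuation(0.802, 150)     # ~120 dB
print(f"{dense:.0f} dB")

# Lower-density 'aim point': 50-60 dB/m over a 0.66 m path, plus an
# additional 12-15 dB of absorption at the surface
lo = fill_attenuation(0.66, 50) + 12
hi = fill_attenuation(0.66, 60) + 15
print(f"{lo:.0f} - {hi:.0f} dB total")
```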
Filling attenuation characteristics are greatly affected by the diameter of the constituent fibres: smaller-diameter fibres have attenuation that rises more steeply at the high-frequency end than larger-diameter fibres (from Bradbury [ 1 ], p. 408).
+ +
Figure 3 - Variation in Attenuation with Fibre Diameter
+(Wool, d = 0.028mm P = 35kg/m³, Glass Fibre, d = 0.005mm, P = 21kg/m³)
Bradbury's data indicate that in the range 50-150Hz the resistive part of the impedance is the same for the packing density of 9 for glass fibre with a diameter of 0.005mm, and wool with a diameter of 0.028mm, but at higher frequencies the attenuation provided by the smaller fibre diameter increases more rapidly and the attenuation for the same packing density is more than twice as much. (From Bradbury, [ 1 ], p. 410).
+ +
Figures 4, 5 - Wool Filled (left), and Fibreglass Filled Pipes
From the above, if we keep the fibre diameter small, the low-frequency attenuation can be kept small enough that Qb is potentially not unduly affected, while still providing useful attenuation at higher frequencies.
+ +If we provide a filling with an average of 50 - 60dB per metre in the region of the most common resonances, then the path length is 0.66m and the attenuation = 33 - 40dB plus a potential 12 - 15dB at the surface, and this is the aim point for what follows.
+ +The product sold at local electronics retailers as a substance for stuffing boxes, (tested in Figure 1), is a polyester fibre material with a round fibre of around 0.01mm diameter and has a density of around 25kg/m³. From Bradbury's data, teasing this out to half its standard density should be about what is needed for volume filling the reflex box.
The enclosure losses caused by volume filling show up as a reduction in the Q of the vent output. Tables such as those by Bywater and Wiebl use a Ql value of 7; the QB5 tables use 5. The leakage Q is the largest loss in the system, and it is usual to assume that Qb = Ql. In the following experiments, the vent Q is measured with no box stuffing, and then with a low-density volume filling scheme; Ql = 5 is considered acceptable.
The following tests were carried out on an 8 litre box tuned with an exaggerated peak, and will investigate the internal reflections present in various cases.
+ +
Figure 6 - Combined Vent and Driver Output
+No stuffing, (green), 25mm wall covering, (red), plus 12kg./m³ volume filling, (yellow)
+
+
Figure 7 - Vent Output
+No stuffing, (green), 25mm wall covering, (red), plus 12kg./m³ volume filling, (yellow)
From this, the port output is attenuated by 3-4dB. The efficacy of a sound absorbing material can be defined by the ratio of its reflection to its absorption.
4 × z1 × z2 / ( z1 + z2 )² is the transmitted sound
(( z1 - z2 ) / ( z1 + z2 ))² is the reflected sound
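These are the standard normal-incidence power coefficients for a boundary between media of acoustic impedances z1 and z2, and they must sum to one (energy conservation), which makes a convenient sanity check. The fill impedance used below is an arbitrary illustrative figure, not a measured value:

```python
def transmitted(z1, z2):
    """Fraction of incident power transmitted across a z1/z2 boundary."""
    return 4.0 * z1 * z2 / (z1 + z2) ** 2

def reflected(z1, z2):
    """Fraction of incident power reflected at a z1/z2 boundary."""
    return ((z1 - z2) / (z1 + z2)) ** 2

z_air = 415.0     # characteristic impedance of air, rayl (Pa.s/m)
z_fill = 2000.0   # hypothetical fibrous-fill impedance (illustrative only)

t, r = transmitted(z_air, z_fill), reflected(z_air, z_fill)
print(f"transmitted {t:.2f}, reflected {r:.2f}, sum {t + r:.6f}")
```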
The standard method of measuring absorption and reflection is a Kundt tube [ 5 ] or reverberation chamber [ 3 ]. Since I don't possess either one, I put together a rig that gives some indication of the properties of several materials.
The Speaker Workshop software allows the signal-processing operations of convolution and deconvolution to be performed, which lets us get reasonably good results with very basic equipment. The test device is made from scrap bits of MDF and particle board, sealed with 'Blu-Tack' or similar.
+ +The test cell is fitted over a 140mm (5.5 inch) driver in a speaker box. It is proportioned so that the test piece can be 90mm in diameter and 50mm deep, and the overall performance is reasonably accurate up to 5kHz.
Several data sets are then recorded, with and without the sample in place, and with the cell open and closed. By convolving and deconvolving the various data sets it is possible to measure the absorption and reflection coefficients of the sample in the cell; the chart of Figure 1 was obtained in this way.
The input signal is a 10ms pulse, and four measurements are taken: cell open and closed, each with and without the sample in place. The FFT is then calculated and 'cell open' is deconvolved with 'cell open with sample', giving the reflected signal. The same is done with 'cell closed' / 'cell closed with sample' to give the absorption.
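The deconvolution step can be sketched in Python with NumPy. This is not the Speaker Workshop implementation, just the underlying idea: dividing the spectra (with a small regularisation term to avoid dividing by near-zero bins) recovers the sample's impulse response from a reference pulse and a measured pulse:

```python
import numpy as np

def deconvolve(measured, reference, eps=1e-12):
    """Recover h where measured = reference (*) h, via regularised FFT division."""
    m = np.fft.rfft(measured)
    r = np.fft.rfft(reference)
    h = (m * np.conj(r)) / (np.abs(r) ** 2 + eps)   # ~ M / R, stabilised
    return np.fft.irfft(h, n=len(measured))

# Synthetic check: a decaying reference pulse convolved with a known
# 3-tap 'sample response' should be recovered almost exactly.
n = 256
ref = np.zeros(n)
ref[:32] = np.exp(-np.arange(32) / 8.0)            # reference pulse
true_h = np.zeros(n)
true_h[[0, 5, 12]] = [0.5, 0.3, 0.1]               # known response
meas = np.fft.irfft(np.fft.rfft(ref) * np.fft.rfft(true_h), n=n)
est = deconvolve(meas, ref)
print(np.max(np.abs(est - true_h)))                # tiny residual
```

With real measurements the reference is the 'cell open' (or 'cell closed') capture, and the regularisation keeps noise in weak spectral bins from blowing up the result.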
+ +
Figure 8 - Acoustic Absorption/Reflection Test Fixture
These tests are of a 38mm thick acoustic foam tile with an anechoic wedge pattern, available at local electronics chains (in Australia).
+ +
Figure 9 - Absorption, (green) and Reflection of 38mm Acoustic Foam Tile
The plot shows a very rough trace, with reflection exceeding absorption at a few hundred Hz. From this test the polyester fibre material is to be preferred.
+ +Tests were conducted with a box lined with the acoustic tiles and volume filled with the fibre material.
+ +
Figure 10 - Combined Output
+Foam and Poly (yellow), Foam Only, (red), Empty (green)
The overall output is noticeably smoother with both foam and polyester fibre (or foam only) in place. The box Q seems relatively unaffected.
+ + +
Figure 11 - Port Output
+Foam and poly (yellow), Foam only (red), Empty (green)
Prominent resonances apparent in the no stuffing case are removed by stuffing.
+ +
Figure 12 - Waterfall Plot of Empty Box
Figure 13 - Fibre Fill Only
Figure 14 - Fibre Fill + Wall Covering
The waterfall plots were prepared by Fourier Transforming the near-field data set and restricting the time window to 3ms to avoid room and nearby object interference. The foam wall covering results in a generally faster and smoother decay spectrum especially at the lower frequencies, and less vent output attenuation. The inferior test cell result is clearly due to anomalies in the test apparatus.
+ +
Figure 15 - Un-smoothed Near-Field Frequency Response
+Box empty (green), All fibre (red), Fibre plus foam (yellow)
While these tests are in no way claimed to be definitive, it does appear that a reflex box can be filled with a medium-density fibrous material, and that it is beneficial to cover the inner surfaces with acoustic foam. The effect upon the box Q is minimal, and re-radiation through the cone and the port is reduced significantly.
+ + +The above article covers many of the things that I had actually intended to write about (at some stage, when I had the time). As most longer term readers will know, I'm not very fond of small reflex boxes, but I do have a pair set up in one of my rooms. The boxiness Robert refers to at the beginning of the article was immediately apparent, and was solved by adding fibreglass to the boxes. This wasn't done using any scientific methodology - I simply guessed at what seemed like a reasonable amount and used that.
+ +This cured the worst of the problems, but since the speakers concerned are only a temporary affair (and have been for over a year at the time of writing) I wasn't too concerned about trying to get them perfect. They will be replaced at some point, and the techniques discussed will be used with a great deal more diligence to get the best possible sound quality ... tempered by the fact that they will never be expected to replace my main system.
+ +There is no doubt that many 2-way speakers suffer from similar problems. It is very common to find enclosures that will have obvious resonances - including some designed by respected loudspeaker designers. Exactly as Robert describes here, one design in particular used an open cell foam to line the interior walls. Although the designer swears by it, the foam is completely useless for damping the various resonant frequencies in the box. Adding fibreglass in these enclosures disturbed the bass to some degree, but the midrange was so much cleaner that it was well worth the small sacrifice.
+ +I can think of several additional methods that will help break up internal standing waves - a brace behind the mid-bass driver, suitably angled and wide enough to catch (and deflect) most of the rear radiation is one method. Use of non-parallel sides is often used and will also reduce (perhaps dramatically) the standing waves in the box. Combining these techniques should enable one to build a small vented loudspeaker that has almost no bad habits, depending on driver quality of course.
+ +In summary, there is no reason that a small system should sound boxy - it will lack deep bass, but that's inevitable with a small driver in a small enclosure. If the system can be made to sound as good as possible, one is far more likely to listen to it. Listener fatigue is common with smaller systems because of exactly the effects described. By eliminating the problems that cause listener fatigue in the first place, even small systems can give a very satisfying experience.
+ +Naturally, adding a subwoofer to accommodate the deep bass will fill in the missing bottom octaves. If it is well integrated, its contribution will seem to be coming from the satellite speakers, with nothing to give away the sub's location. This is an eerie sensation at first, and if visitors ask how you can get such deep bass from such small speakers, you know that the integration is a success .
1 - L J S Bradbury, "The Use of Fibrous Materials in Loudspeaker Enclosures", AES Journal, Vol. 24, No. 3 (April 1976)
2 - R H Small, "Vented-Box Loudspeaker Systems Part IV: Appendices", AES Journal, Vol. 21, No. 8 (October 1973)
3 - D A Russell, "Absorption Coefficients and Impedance", G.M.I. Engineering & Management Institute
4 - Bill Collison, Boxnotes
5 - Hyperphysics - Longitudinal Waves - Kundt's Tube
Elliott Sound Products - Bucking Transformers
Before I describe or explain any part of the information on this topic, you must be aware that ...
WARNING: Everything in this article involves working with mains voltages. Do not attempt construction or experimentation unless you are skilled and/or qualified to work with mains voltages. In some jurisdictions, mains wiring must be performed by suitably qualified persons, and it may be an offence to perform such wiring unless so qualified. Severe penalties (including an accidental death penalty) may apply. No ... I'm not kidding.

Note that both the primary and secondary windings are at mains potential, and there is zero isolation. This is not a problem for mains powered equipment, but if you are careless it could take you by a very great and dangerous surprise! The mains earth must be connected between input and output.
There is regularly a need to reduce the mains voltage. In some cases, it's because where you live it's just too high and causes problems with electronic equipment. Sometimes, you might have a great transformer for a project, but the voltage is just a bit higher than recommended. A very common requirement is to be able to use 220V equipment at 240V - while this is within the 'normal' range, it can be bad news for some gear. Valve amplifiers in particular can be fairly fussy, and there's definitely a need to reduce the voltage if the heater voltage is much above the typical nominal value of 6.3V.
Many of the articles on the Net also suggest that a bucking transformer can be used in boost mode. Perfectly true, but there are times when this is a very, very bad idea. Most of the material I've looked at leaves out a great deal of the info you need, so I figured it was time I described the process properly, and ensured that you have all the information needed to build a safe bucking transformer system. A great many of the search results for 'bucking transformers' point to questions being asked on forum sites, so it's obvious that they are not well understood, and often not explained very well.

For the purpose of the exercise here, we'll assume that the mains voltage is 240V and the maximum load is 220V at 10A (2,200VA). This is a large transformer, which will be heavy and expensive. We'll also look at a mains voltage of 120V with a requirement for 110V at 20A - also 2,200VA. Note that transformers are always rated in VA (Volt-Amps) rather than Watts. The two figures are only the same when the load is purely resistive. Most loads are either reactive (contain capacitance or more commonly inductance) or are non-linear. Nearly all electronic circuitry presents a non-linear load.

In each case below, I will only show a basic arrangement, because that will be the most common. There are countless variations that may be provided on some commercial or custom-built products, but including all possibilities is both pointless and impossible.

All transformer windings shown have a dot at one end. This is the traditional way to indicate the start of a winding, so that windings can be connected in series or parallel correctly. If winding polarities are reversed, the transformer will either give a completely different voltage from that expected, or you can even make the transformer look like a short circuit across the mains.

All voltages referred to herein are assumed to be (more or less) exact, but of course the mains voltage is 'nominal' (meaning in name only), and is subject to significant variations from day to day and even at certain times of day. Most equipment is designed to be able to cope with normal variations, but there are small differences that have crept into the mains voltage specifications over time that can place older equipment at risk. Imported equipment intended for a lower voltage (220 vs. 230 vs. 240 for example) can fail because either the voltage really is outside the allowable range or is simply incorrectly specified. In some parts of Australia (especially remote outback areas), it's not uncommon for the '230V' mains to measure 260V!

In the US, a lot of older equipment is designed for 110V (very old), 115V or 117V, but the 'correct' nominal voltage is 120V. If you use equipment that really was designed for 110V, but the mains at your house measures 120V and sometimes goes a little high (125V perhaps), the vintage gear will likely have a short life if used consistently at the higher voltage.

In some cases, you might simply want to extend the life of incandescent lamps so they last longer and you can keep using them after they are banned (this has already happened in Australia). Whatever your reasons for reducing the mains voltage by (say) 10-15%, the following will be useful and will allow you to do so cheaply and safely.
For this application, the first thing that most people think of is a step-down transformer. Since it will be rated at 2,200VA (2.2kVA) this is a big transformer, and it will be expensive. At around this size, you can expect a toroidal transformer to weigh in at about 12-14kg, not including any housing, connectors or anything else. A conventional E-I laminated core tranny will be larger and heavier - expect as much as 22kg for a 2kVA unit. Figure 1 shows the configuration of the transformer. As shown, there are no taps or adjustments - the ratio is fixed at 1.09:1 which will convert 240V to 220V or 120V to 110V.

You cannot use the same transformer for both applications! Transformers must be designed for the actual voltage and current at which they will be used. In many cases, and especially for trannies that are intended for a particular use in the country of origin, they will also be designed for the frequency used (50 or 60Hz). A 60Hz transformer is smaller than one designed for 50Hz, but may fail if operated at the lower frequency.
Figure 1 - Conventional Step-Down Transformer
There are many variations. Tapped secondary windings may be provided to give more range and a closer match, and the number of taps provided can vary widely. While tapped transformers are useful, they are (or should be) reserved for more critical applications. Adding taps increases the size and cost slightly, but also gives the non-technical user many opportunities to use the wrong tapping and cause damage to equipment.
While the standard step-down transformer is a good solution for our specific goal, it is the least efficient and most costly. It will also introduce a couple of problems. One is that the extra resistance of the windings will reduce the regulation of the mains supply, so the voltage will fall further than normal at full load. Regulation can be expected to be no better than about 4%, meaning that the output voltage will fall by at least 4% when the load is increased from zero to full power.

The second issue is more serious - it renders any safety switch useless for the equipment on the secondary side of the transformer. Safety switches have many names, depending on where you live. They may be called core balance relays, earth leakage circuit breakers, ground fault interrupters, residual current devices (RCDs), etc.

Needless to say, this limitation is by far the most important, although reduced regulation may be a major issue in some cases. There is a very small number of applications where isolation of the mains is required along with a small step down (or up) of voltage. Operation of household equipment - TV sets, hi-fi (valve or transistor), kitchen appliances, etc. - never requires isolation, and because it defeats the safety switch has to be considered a bad idea.

If used for 240-120V step down applications, I recommend very strongly against using an autotransformer, because they can be very dangerous with some equipment (old US made guitar amplifiers for example). In this article, we are looking at only a small reduction of voltage, so there are no issues with electrical safety, and an autotransformer is perfectly alright here (see Importing Equipment From Overseas ... for more on the safety issues). An autotransformer is shown below - there are no longer two separate windings - everything is handled by a single winding with a tapping that gives the same 1.09:1 ratio as before.
Figure 2 - Step-Down Auto-Transformer
In this application, the autotransformer has a number of advantages. Because there is only one winding, thicker wire can be used with a smaller core, regulation will be better, and the transformer can be made smaller and lighter. You can even have both - better regulation and smaller, lighter and cheaper - and not break the laws of physics. Outstanding.
In addition, your electrical safety switch still provides protection, although with a small reduction of sensitivity. Overall, this is a great solution. It would be the ideal solution if auto transformers were always wound properly, since the size can be reduced dramatically. Sadly, this may not be the case unless you have a tame transformer winder who knows what he's doing. You could easily end up with a transformer weighing about half that of an isolation transformer, where it only needs to weigh a couple of kilograms at most. In addition, whatever you need will be a custom job, since I know of no transformer makers that produce a stock range of autotransformers (other than 240/220V - 120V and vice versa).
The ultimate autotransformer is a Variac which allows continuous variation of the mains voltage from zero to (typically) 110% of the applied voltage. For critical applications, Variacs have been fitted with servo systems that automatically adjust the setting to maintain a very stable mains voltage, regardless of variations caused by normal fluctuations. Such a system will not be described here, as it is completely outside the scope of this article. Variacs are also rather expensive, especially in larger sizes.

The bucking transformer has been around for a long time - probably almost as long as transformers themselves. The primary reason for using a bucking transformer instead of a traditional step-down transformer is size and cost, and because of this people often assume that it must be a bit dodgy and not as good as a 'real' transformer.

This is not the case at all - a properly designed bucking transformer will perform just as well (or better) than a step-down. Like the autotransformer, there is no isolation, so the mains voltages are just as dangerous as they always are, however, the safety switch is still functional so you are protected.

A bucking transformer is really a modified version of an autotransformer. The difference is that only a small part of the winding has to carry the full load current. This is a commonly overlooked aspect of an autotransformer - looking back at Figure 2, only the top section of the winding carries the full load current, so the remainder of the winding can use a smaller wire gauge than may otherwise be used.

The bucking transformer works by placing the secondary of a (relatively small) transformer in series with the mains, but wired out-of-phase so the voltage is 'bucked' or reduced by subtraction. Only the secondary winding needs to carry the full mains current. This means that for 240V to 220V we need to 'buck' 20V at 10A - a maximum of 200VA. Likewise, for 120V to 110V, we only need to buck 10V at 20A ... also 200VA.

To understand how the bucking action works, it's just a matter of remembering some basic school maths. If a transformer winding is wired out-of-phase, it can be given a negative sign. In our examples, we will have 240V + ( -20V ), which equals 220V (try that on a calculator if you don't believe me). Likewise, for 120V we end up with 120 + ( -10 ) = 110V. It really is that simple.
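The subtraction is easy to check numerically. The sketch below (Python, mine rather than the article's) treats the out-of-phase secondary as a negative voltage and reproduces both worked examples:

```python
# Bucking arithmetic: a winding wired out-of-phase contributes a negative
# voltage, so the output is the mains voltage plus the (negative) secondary.
# Illustrative sketch only - the function name is mine, not from the article.

def bucked_output(mains_v, secondary_v):
    """Output voltage with the secondary wired out-of-phase (bucking)."""
    return mains_v + (-secondary_v)

print(bucked_output(240, 20))  # 220 - the 240V example
print(bucked_output(120, 10))  # 110 - the 120V example
```

Wiring the same winding in phase instead gives the boost case described later in the article.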
At the end of this exercise, a 200VA transformer wired as a bucking tranny does the same job as a 2,200VA conventional transformer. The 200VA transformer can be expected to weigh less than 2kg (again excluding case, connectors, etc.). This is not only a significant weight and size reduction, but it will also cost far less than a conventional transformer. This seems too good to be true, but it really does work as described. This is probably as close as you can get to the much hoped for (but disallowed by the laws of physics and the taxman) 'something for nothing'.
Figure 3 - Traditional Bucking Transformer
The schematic above shows how it's done. The voltage in the secondary is wired out-of-phase, so removes (by subtraction) the secondary voltage from the mains voltage supplied to your appliance. The maximum current that flows in the secondary is the full load current, so is 10A at 220V or 20A at 110V. Regulation can be improved by using a slightly larger than necessary transformer if it's critical, but it will still be far cheaper and lighter than a conventional transformer. At 240V input, the primary current is a mere 833mA at the maximum load of 2.2kVA. Predictably, this increases to 1.66A for the 120V version. The regulation from your mains supply will be worse than that of the bucking transformer in most cases.
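Putting numbers to the paragraph above: the transformer only needs a VA rating of the bucked voltage times the full load current, and its own primary draws that VA from the mains. A quick sketch (function names are mine, for illustration):

```python
# Sizing a bucking transformer per the Figure 3 discussion: the bucked
# voltage times the load current sets the rating, not the full load VA.
# Illustrative sketch - function names are mine, not from the article.

def buck_transformer_va(v_buck, i_load):
    """VA rating needed for the bucking transformer itself."""
    return v_buck * i_load

def primary_current(va, v_mains):
    """Current the bucking transformer's own primary draws at full load."""
    return va / v_mains

print(buck_transformer_va(20, 10))              # 200 VA (240V -> 220V at 10A)
print(buck_transformer_va(10, 20))              # 200 VA (120V -> 110V at 20A)
print(round(primary_current(200, 240) * 1000))  # 833 mA, as quoted in the text
print(round(primary_current(200, 120) * 1000))  # 1667 mA (the text rounds to 1.66A)
```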
Because your safety switch is not disabled, there is no increased risk of electric shock. If you ever need to reduce the mains voltage by a set amount to improve the longevity of an appliance, this is a cheap and effective way to do it. In many areas, the mains voltage can be significantly higher than the nominal, and this is a simple and cost-effective way to reduce the voltage to something that doesn't cause your expensive equipment to blow up at regular intervals. Take special note of the winding polarities, and make sure that you test your wiring (with a mains voltage filament lamp in series with the mains in case of a serious mistake).

What is commonly overlooked when autotransformers are specified is that the requirements are actually almost the same as for a bucking transformer. As a result, there is no reason not to connect your bucking tranny as an autotransformer. This means the normal thin primary wire for the majority of the winding, and thick secondary wire only for the high current part of the winding. Current in the lower section of the winding is reduced to 775mA for the 240V version. From this, we can do a bit of lateral thinking and reconfigure the bucking transformer so that it works as a true autotransformer. This increases the output voltage by a small amount - it will be of no consequence in most cases. This is just as cheap and effective as the bucking transformer shown above, but is slightly more efficient.
Figure 4 - Proper Way To Wire A Bucking Transformer
You won't see this arrangement described very often (if at all), but it is a far better solution. In Figure 4, I have simply rewired the circuit as an autotransformer, and the equivalent circuit shows that this is indeed the case. The transformer is exactly the same as used in previous examples. The incoming mains connects across the entire winding ... the primary in series with the secondary, wired in phase. The output voltage is taken from the tap - this is identical in every way to a normal autotransformer connection. The output voltage is fractionally higher than with the bucking configuration - the 240V version gives 221.5V RMS output (110.75V RMS for the 120V version). Again, double check all winding polarities before connecting to any equipment.
You can also push this version a little harder than a traditional bucking transformer. The normal output current (based on our initial criteria) is 10A at 220V, but with the arrangement shown in Figure 4 you can have an output current of about 10.8A (a total of 2,400VA) without exceeding the transformer's secondary current rating. That's because the currents are subtracted within the winding itself, because of the transformer action. The main primary runs at a current of about 835mA at the maximum output of 2.4kVA.
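Both the 221.5V tap voltage and the 'about 10.8A' output figure fall out of simple turns-ratio arithmetic, using each winding's rated voltage as a stand-in for its turns. A sketch under ideal-transformer assumptions (no losses; function names are mine):

```python
# Figure 4 autotransformer numbers for an ideal 240:20V transformer.
# Winding voltages stand in for turns counts. Illustrative sketch only.

def auto_tap_voltage(v_mains, v_pri, v_sec):
    """Mains across the whole winding (pri + sec in series); output from the tap."""
    return v_mains * v_pri / (v_pri + v_sec)

def max_output_current(i_sec_rating, v_pri, v_sec):
    """The series (secondary) section carries only the input current, so the
    output current can exceed the secondary's nameplate rating."""
    return i_sec_rating * (v_pri + v_sec) / v_pri

v_out = auto_tap_voltage(240, 240, 20)
i_out = max_output_current(10, 240, 20)
print(round(v_out, 1))       # 221.5 V, as quoted in the text
print(round(i_out, 2))       # 10.83 A - the 'about 10.8A' figure
print(round(v_out * i_out))  # 2400 VA total
```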
A simple reconfiguration of an old technique therefore provides better efficiency and lower losses than the traditional bucking transformer. It is important to understand that we are not getting something for nothing, we are simply minimising losses. In the following drawing, voltage waveforms are shown in red, current in green.
Figure 5 - Bucking Transformer Waveforms
Largely due to a reader who was very puzzled by the claimed primary current (in particular), I thought it would be worthwhile to show the voltage and current waveforms, using an 'ideal' transformer so that iron and copper losses don't confuse the issue. The details are shown above, and I aimed for the simplest case possible. This means a 10:1 transformer, a desired output voltage of 230V, and an actual input voltage of 253V (23V too high). The input voltage and current are in phase because the load is a resistor. The input power (V × I) is 2,300W (2.3kW).
The output voltage is 230V at 10A - again, a power of 2,300W. Note that the transformer's primary current is 909mA, and is 180° out-of-phase. That causes it to be subtracted from the input current, reducing what you might expect to be 10A back to the 9.09A measured. There are small inaccuracies because I rounded the figures to three decimal places, but rest assured that it all adds up perfectly.

Rearranging the circuit to the 'traditional' method shown in Figure 3, the output voltage is 227.7V (a bit lower than the design value) and the transformer's primary current is 990mA (a little higher than the case shown above). The power in and power out are still the same, but reduced to 2,254 watts because the output voltage is lower than expected. Because the transformer primary current is higher, there will be greater losses with a 'real' (as opposed to 'ideal') transformer due to winding resistance.
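Both sets of figures can be reproduced with ideal-transformer arithmetic: ampere-turn balance plus Kirchhoff's current law at the tap. The sketch below (mine, not the article's) uses the same 10:1 ratio, 253V input and a 23 ohm load:

```python
# Reproducing the Figure 5 figures with an ideal 10:1 transformer (no iron
# or copper losses). Illustrative sketch - variable names are mine.

ratio = 10.0    # primary:secondary turns ratio
v_in = 253.0    # actual mains (23V above the desired 230V)
r_load = 23.0   # draws 10A at 230V

# Autotransformer (Figure 4) connection: mains across pri + sec in series.
v_out = v_in * ratio / (ratio + 1)   # output taken from the tap
i_out = v_out / r_load               # load current
i_in = i_out * ratio / (ratio + 1)   # series-section (input) current
i_pri = i_out - i_in                 # KCL at the tap: primary tops up the load
print(round(v_out, 1), round(i_in, 2), round(i_pri, 3))  # 230.0 9.09 0.909
print(round(v_in * i_in))            # 2300 W in = W out

# Traditional bucking (Figure 3) connection: secondary subtracts v_in/ratio.
v_out_b = v_in - v_in / ratio        # 253 - 25.3 = 227.7 V
i_out_b = v_out_b / r_load           # load current into the same 23 ohms
i_pri_b = i_out_b / ratio            # ampere-turn balance
print(round(v_out_b, 1), round(i_pri_b, 2))  # 227.7 0.99
print(round(v_out_b * i_out_b))      # 2254 W
```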
Although I used a 200VA transformer in the above example, if the transformer in the equipment you're using is (say) 300VA, then you only need a 30VA bucking transformer for a voltage reduction of 20V. The rating for the bucking (or boosting) transformer is determined by the voltage and current, so if you need to buck or boost by more than 20V (or the current is higher), the VA rating is increased, and for a lower boost/buck voltage or lower current, it's reduced.
There may also be occasions where the voltage you get is consistently too low. The same technique can be used to give your supply a lift so that it's closer to where it should be. Be very, very careful with this setup though. It should only be used where the mains voltage actually needs to be boosted because it is always too low. You cannot use this trick to get a bit more voltage from a transformer or to get more power from an amp, because you will be operating everything at a voltage that causes additional stress. This could easily prove fatal for your equipment! Transformers can easily be pushed into saturation even with a fairly modest voltage increase.
Figure 6 - Boosting Transformer
The transformer requirements are exactly the same as for a bucking transformer, except that the secondary voltage is now added to the incoming mains voltage, rather than being subtracted. A boost transformer can only be wired to create an autotransformer - there aren't any 'alternative' ways in which it can be connected. As before, your electrical safety switch still works normally.
Note that as shown in the equivalent circuit, the boost configuration is already wired as a 'true' autotransformer, and it can't be improved by any trickery.

Note that great care must be taken with construction and mounting of any transformer used as buck, boost or autotransformer. All parts of all windings are effectively at the full mains voltage, and insulation must be adequate to ensure that the end result is safe under all likely conditions (including faults). If the transformer has additional secondary windings, do not be tempted to use them for anything! The secondary is at mains voltage and the insulation between secondary windings is rarely (if ever) designed to withstand mains voltage, so any remaining secondaries are potentially lethal. Therefore, don't even think about using another secondary to power other equipment (for example). Your bucking (or boost) transformer must be dedicated to one purpose only!

The final assembly should be protected against excess current by a fuse (minimal protection) or a circuit breaker. Depending on the intended purpose, it may also be wise to include a thermal cutout (either self-resetting or a one-time thermal fuse). The case, wiring and input/output mains connectors must also comply with all requirements as determined by the electrical codes where you live, and must always include a mains earth (ground). Ensure adequate ventilation for the transformer, consistent with the need to keep small fingers well away from dangerous voltages or sharp edges.

For a bucking transformer, the circuit shown in Figure 4 is obviously the most sensible. This is a true autotransformer, and the secondary winding no longer bucks (or opposes) the incoming mains. Efficiency is improved, and a given transformer wired this way can provide a little more output current than if it's wired as a bucking transformer. The small extra voltage should not be an issue, and in many cases will not even be apparent because transformers are wound to ensure that you get rated voltage at the maximum permitted current.
Naturally, if you want to use the traditional bucking configuration you may do so, but it will have worse regulation than the Figure 4 alternative. With typical winding resistances and 240V applied, a bucking transformer will provide 216V into a 22 ohm load (1.8% regulation), and the autotransformer configuration will give 218V (1.6% regulation). There isn't a big difference, but it's measurable. These regulation figures may be slightly worse for the 120V version because of the much higher current, however, the autotransformer configuration wins again.
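The regulation percentages quoted can be checked directly, taking the fall from the no-load output voltage as the text does (a sketch only; the function name is mine):

```python
# Regulation = percentage fall from no-load to full-load output voltage,
# using the figures from the paragraph above. Illustrative sketch only.

def regulation_pct(v_noload, v_fullload):
    return 100.0 * (v_noload - v_fullload) / v_noload

print(round(regulation_pct(220.0, 216.0), 1))  # 1.8 - bucking configuration
print(round(regulation_pct(221.5, 218.0), 1))  # 1.6 - autotransformer configuration
```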
Autotransformers (or bucking transformers) are best and most effective where the voltage change is relatively small - typically no more than a 20% change. The maximum ratio is 2:1 or 1:2 (double or half voltage) before any autotransformer becomes uneconomical and/or pointless. Even at this ratio (typically used for 110V equipment used in 220V countries or vice versa), there are risks and dangers that are not immediately obvious - especially when 110-120V US equipment is used in Australia, the UK or Europe etc. (220-240V). In this case, the only safe option is a proper isolated step down transformer. The article Importing Equipment From Overseas ... explains all the reasons.

Finally, consider the use of a larger transformer than theoretically necessary if the load is constantly close to the maximum. Such operation means that the transformer will run at a higher temperature than we might like, and it will be heavily stressed in use. Using a larger transformer gives better regulation and cooler running. Remember that the operating life of many electrical and electronic components is halved for every 10°C rise in temperature. This includes the insulation used in transformers, so cool running means a long trouble-free operating life and lower operational losses. The transformer described above (Figure 4) has a total loss of perhaps 20W at full power. This is power that you have to pay for, and over a period of time a larger transformer with lower losses may work out cheaper.
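The '10°C rule' quoted above amounts to relative life = 2^(-ΔT/10). A quick sketch (this is a rule of thumb only; real insulation ageing depends on the insulation class and material):

```python
# Rule of thumb: component life roughly halves for every 10 deg C rise.
# relative_life(0) is the baseline. Illustrative sketch, not a precise model.

def relative_life(delta_t_c):
    return 2.0 ** (-delta_t_c / 10.0)

print(relative_life(0))   # 1.0  - baseline
print(relative_life(10))  # 0.5  - half the life
print(relative_life(20))  # 0.25 - one quarter of the life
```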
Although I looked at a few websites discussing bucking transformers, none had the level or depth of information that I feel is needed to understand the concept properly. That is one of the main reasons that this page now exists. On the strength of this it should be apparent that there are no references as such.

Even though there were no useful references for bucking transformers, EC&M (Electrical Construction & Maintenance) on-line magazine does describe the proper way to design an autotransformer (which verifies my approach taken in Figure 4), and includes formulae to allow you to calculate the required size for a given VA rating. The article also refers to drawings, but there are none that I could find.

The 'Dry-Type Transformer Study Course' by Square D Company was listed as recommended reading, but the link (and the document) no longer seem to exist. The article had some great info for those who really want to study transformers in more detail. The material describes mainly large trannies, all decidedly US-centric (60Hz, US distribution system, etc.) and concentrates on power distribution types. This does not change the principles though, which are just as valid at 100VA as 100kVA, 50 or 60Hz.
Elliott Sound Products - ESP's Guide to Purchasing Components
Purchasing components is a recurring question both on the ESP forum and via e-mail. It seems that most people buy exactly (or as close as possible to) the number of parts needed for a particular project. While this may seem to be sensible, as a hobbyist, you need parts to be able to experiment - either with the design you are currently working on or to prepare for the next project.
With high-cost parts such as big power transistors, large electrolytic capacitors and transformers, it is normal to purchase what you need and no more. To do otherwise can be very costly, and for the typical hobbyist any left-over parts may never get used.

With small and cheap parts such as small signal transistors, resistors and small capacitors, having a stock of these in your parts drawers is an excellent way to ensure that many projects will not require you to buy any of these small parts at all, while others give you the opportunity to add new values to your stock. In general, it is better to stay with the more common values - obtaining a usable stock of every resistor in the E48 series (48 values for each decade) becomes expensive. The E12 series will cover most projects, but is rather limiting if you need accurate filters for example.

The number of parts you collect will depend on how much experimentation you want to do - if you expect to build many experimental circuits in your quest for knowledge (or audio nirvana) then you will need more parts than someone who only builds the occasional project.

Experimentation in particular is one of the best ways to learn how circuits work. You will learn a great deal by building simple projects then figuring out why they work ... or not. When things don't work, you often learn more than you would if they do, because you have to figure out what is wrong. While building projects that you actually need is rewarding, experimentation is the thing that will teach you far more than any website or book ever can.

The list below is intended as a general guide only - it may not suit some, but it will give you an idea of the parts that are most likely to be useful. For experimentation, you do not need the 'best' parts. Pure snake-oil-filled capacitors may be specified for some projects (none on my site though), but for experimentation they are not needed at all. A few bipolar electrolytics will suffice for all the higher values (1uF and above).

It is worth getting 1% metal film resistors though, if only because it allows you to populate your next project from your own stock. This can save a great deal of aggravation, and many projects can then be completed with the purchase of very few additional parts. It is unrealistic to try to maintain stock of everything though, as it can become very expensive.

My recommendations are shown below. The number of parts can be varied up or down depending on your own expectations and/or specific requirements, but in general the parts and quantities suggested are a good start.

Resistors are the most common electronic part, and there is almost nothing that can be built without them. 0.5W metal film resistors can often be purchased in packs of 8 or 10, and there are some values that are far more common than others. Some you can ignore completely unless a project calls for them, and when that happens, that is your opportunity to obtain some for your own stock. In general, most ESP projects use the E12 range (12 values per decade). Many circuits call for E24 or even E48 values, but this is often just to make the circuit appear more 'accurate' somehow, or to make it appear 'special'. Few circuits really need E24 or E48 values, with the possible exception of filters that require accurate tuning frequencies. There are exceptions of course, but they are less common than you might imagine.

The list below consists only of E12 values. Very low and very high values are not common, and a small few will cover most requirements. The majority are in the middle range ...
+ +Value | Comments | Quantity |
Below 10 Ohms | Not common | 10 of a few values ¹ |
10, 15, 22, 33, 47, 68 Ohms | Small range usually sufficient | 20-50 of each value |
100 Ohms | These are fairly common | 50 |
150, 220, 330, 470, 680 Ohms | Small range usually sufficient | 20 of each value |
1k, 1k2, 1k5, 1k8, 2k2, 2k7, 3k3, 3k9, 4k7, 5k6, 6k8, 8k2 | Full E12 range recommended | 20-50 of each value |
10k, 12k, 15k, 18k, 22k, 27k, 33k, 39k, 47k, 56k, 68k, 82k | Full E12 range recommended | 20-50 of each value |
100k, 150k, 220k, 330k, 470k, 680k, 1Meg | Small range usually sufficient | 20-50 of each value |
Above 1M Ohms | Not common | 10 of a few values ¹ |
1 - "A few values" is not very helpful, but I must leave this to the individual. Some people will find these values very useful, others not at all.
It's usually a good idea to include a few higher power resistors as well. Common values are 0.1, 0.22 and 0.47 Ohms (all 5W) and perhaps a few values between 10 ohms and 1k in 1W. These are just handy to have around, and are by no means necessary for quick test circuits for line level applications. As with any generalised recommendations, it depends a lot on what you are doing.
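Because the E12 values in the table simply repeat each decade, a small helper can generate any decade of the series when planning a stock list. A convenience sketch (mine, not from the original article):

```python
# The twelve E12 base values; multiply by a power of ten for each decade.
# Illustrative sketch for stock-list planning only.
E12 = [1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8, 8.2]

def e12_decade(multiplier):
    """One decade of E12 values, e.g. multiplier=1000 for the 1k-8k2 decade."""
    return [round(v * multiplier, 1) for v in E12]

print(e12_decade(1000))  # 1k decade: 1000.0, 1200.0, 1500.0 ... 8200.0
```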
Caps are the next most common electronic part, and there are few projects that don't use them. Standard MKT style 'boxed' polyester caps are the most useful, because they have standardised pin spacings and have very good performance - despite spurious claims that may be made by the 'magic component' proponents. There are also a few ceramic values that are very common, as well as some electrolytic and bipolar electrolytic types.

As with resistors, very low and very high values are not common, and a small few will cover most requirements. The majority are in the middle range ...

Value | Comments | Quantity |
100, 120, 220pF 50V ceramic (NP0 or C0G) | Power amp Miller caps, etc. | 10 of each |
100nF 50V multilayer ceramic | Opamp Bypass | 20-50 |
1, 1.5, 2.2, 3.3, 4.7 nF MKT Polyester | Fairly common | 20 |
10, 15, 22, 33, 47, 68 nF MKT Polyester | Small range usually sufficient | 10 of each value |
100 nF MKT Polyester | Very common | 20 |
220, 470 nF MKT Polyester | Not very common but useful | 5-10 of each value |
1, 4.7, 10, 22 µF Bipolar electrolytic | Very useful | 10 of each value |
10 µF 63V Electrolytic | Very common | 10-20 |
22, 47, 100, 220, 1000 µF 35V or 63V Electrolytic | Common & useful | 5-10 each value |
A few higher value electros are always useful if you need to experiment with power supply applications. 4,700µF/63V is a good value, and has a high enough voltage for most circuits. If more capacitance is needed, you can simply parallel the 4,700µF caps. As your test and experimentation stock, it doesn't matter if the caps are rated at well over the voltage you are using. NP0 and C0G ceramic capacitors have a zero temperature coefficient, and I've tested 50V versions at 1kV without breakdown. These are very stable, and are totally different from multilayer ceramic caps. The latter should never be used in the signal path!
There is a bewildering array of different types, having different characteristics, voltage, gain, bandwidth, etc. They are the mainstay of many simple circuits, and it is useful to have a few on hand. For many general purpose circuits, the following will allow you to verify that a design works, but you may not be able to use high voltages or currents, or get the best performance. Some devices (like the MC4558) are simply useful to have around to experiment with, even though they may never make it into any project.

The following is my recommendation, and I use them regularly (as shown in many projects) ...
+ +Type | Comments | Quantity |
BC549 NPN Small signal transistor | 30V, low noise, high gain | 10-20 |
BC559 PNP Small signal transistor | 30V, low noise, high gain | 10-20 |
BC546 NPN low power transistor | 80V general purpose | 10-20 |
BC556 PNP low power transistor | 80V general purpose | 10-20 |
BD139 NPN Medium power | 80V GP driver transistor | 10 |
BD140 PNP Medium power | 80V GP driver transistor | 10 |
TIP35C NPN 125W 100V | Rugged, GP high power | 5 |
TIP36C PNP 125W 100V | Rugged, GP high power | 5 |
TL072 GP JFET dual opamp | Good performance in most circuits | 5-10 |
MC4558 GP, Very cheap opamp | High performance dual opamp | 5-10 |
NE5532 Low noise, low cost opamp, can drive 600Ω | Very high performance dual opamp | 5-10 |
1N4148 Small signal diode | 100V, 100mA | 10-20 |
1N4004 400V 1A power diode | Immensely useful | 20-50 |
555 GP timer | Just handy to have | 2-5 |
GP = General Purpose
Depending on your requirements, you may also want to include a few zener diodes (5.1V, 12V and 15V are useful values). These allow you to test opamp circuits without bothering with fully regulated supplies. You may wish to include a few MOSFETs as well, such as MTP3055 or IRF540 - these are cheap, and will work fine for general experiments.
These days, you are more likely to find 'generic' prototype board than the original Veroboard, but this is very useful stuff. Complete circuits can be made using it, although it does take some time to get used to cutting tracks and adding bridges to get power and other connections to where you need them to be. There are other types of prototype boards that don't have any tracks, but IMO these are less suitable for most amateurs as they can be difficult to use.
A 'solderless' breadboard can also come in handy, as parts can be plugged in (no soldering required) and reused when your testing is done. These are not suitable for most high speed circuits (including fast opamps) because the internal stray capacitance is often a limitation. For most general purpose testing they work well enough.
Unfortunately, this is not a question I can answer. In Australia it's easy enough, because we have suppliers that I know and have dealt with. Elsewhere, I can only cite a few companies that I know of, but have probably never dealt with. ESP customers come from all over the world - there are very few locations where ESP boards have never been ordered, especially within the Americas, Europe, the United Kingdom and Asia-Pacific (even parts of the former Soviet Union).
Ultimately, it is up to the individual to find a supplier for the various parts needed. Since I do not have any specific paid advertising on my site, I am reluctant to advertise suppliers anywhere - there are simply too many and they are too diverse. Some specialise in large quantities but will sell in ones and twos, others can sell small quantities but will be unable to supply larger orders. Some sell in large quantities only. On-line sellers come and go, and it's impossible to keep up with who has what and for how much.
As noted in several places on my site, I don't make recommendations for suppliers, nor will I attempt to give cost estimates for projects, experimentation stock or anything else. The prices vary considerably from one supplier to the next, and it's simply impossible to try to maintain any estimates for a worldwide market.
None of these recommendations are absolutes - many hobbyists will decide on more or fewer of any given part. There may also be favourite values/parts that have been omitted. The idea of this short article is to provide a guideline for those starting out, to ensure that they have enough general purpose parts to experiment.
As each project is built and parts are ordered, order a few extra of anything that's cheap. Add these to your collection, and before long it will be possible to make up a good part of many projects using your own stock, ordering only the devices that are particular to the project.
Occasionally, you will see common parts offered by your supplier at bargain prices. Be careful - bargain power transistors may be fakes! For most passive parts, low power semiconductors, diodes and the like, it can be very economical to purchase 100 or so if the price is right. As long as it is something you are likely to use or can adapt to your experiments, bargains can be a great way to build up your basic stock of experimentation parts.
While buying parts in 100 or 1,000 lots is economical for small manufacturers, it rapidly becomes far too expensive for hobby projects, and there's no point having thousands of something if you will never be able to use them all. While I may have 20,000 resistors and perhaps a few thousand transistors, ICs, etc. on hand at any given moment, I do short run production jobs as a result of consultation and design work. It would be very foolish of me to have to purchase a few parts just to be able to test a design. For example, every new project or revised PCB layout is built and tested before sale to ensure there are no mistakes. If a mistake is found, then I am able to offer a solution, rather than scrap an entire shipment of PCBs.
+ + +![]() |
Elliott Sound Products | Output Capacitor (Single-Supply) Power Amplifiers |
In theory, capacitor-coupled output stages are completely straightforward, and there's no uncertainty about how they work. We all know that a capacitor passes AC and blocks DC, but with a single-supply power amplifier (or any other Class-AB single-supply circuit for that matter), current is only drawn from the power supply with positive half-cycles. When 'at rest' (no signal), the amplifier's DC output voltage sits at ½ the supply voltage. During positive half-cycles, current to the load is provided through the upper transistors (typically a Darlington pair). It passes through the capacitor to the load as we would expect.
However, things aren't quite so clear for negative half-cycles. We know that the lower transistors pass current, because we see a negative voltage across the load. However, there's no matching current drawn from the power supply. It's almost like magic, but the only reasonable explanation is that the current is delivered from the output capacitor. But - how does the capacitor charge and discharge when the current through the upper transistors, the output capacitor and load is identical? Surely the current should be greater to 're-charge' the capacitor after it's been partially discharged through the load and lower transistors.
This article came about after a number of emails back and forth with a well-regarded supplier of 'high-end' equipment. Not being one to reject a challenge, I decided to look into this, because it is not immediately apparent. While no-one gives this a second thought (or so it seems), it does require some explanation. It can be proven without too much difficulty, but it remains a little mysterious.
Audio amps require local decoupling to minimise interactions between the power supply wiring and the amp itself. Cables have inductance, and this can cause instability or increased distortion. These caps are shown in all of the drawings, and are assumed to be 4,700µF. In reality they may be more or less, and if the amplifier is located very close to the supply filter cap(s) the amount of decoupling needed is usually minimal.
A simplified version of the 'standard' single supply amplifier is shown below. The output capacitor is 1,000µF for convenience, and the load is 8Ω (resistive). I've used a 30V supply (equivalent to a ±15V dual supply). The performance of each is analysed. The power output is immaterial, as the same principles affect all single-supply amplifiers equally, with the only variable being the peak output voltage and current. The topology of the amplifier is not relevant, since everything 'interesting' happens in the output stage. In all descriptions I've assumed a Class-AB amplifier, and while the behaviour of the output cap is the same for Class-A, DC supply current always flows, so the supply current waveform is completely different.
For testing, I used the Project 217 low-power amplifier, as it's the only one I have that uses a single supply. Testing shows that without a doubt, there is some degree of infrasonic disturbance with an unregulated supply. However, it's only very low-level, showing a shift in the DC operating point of less than 20mV at the amp's input. The DC input filter has a -3dB frequency of less than 0.5Hz, but some of the power supply variations with programme material do get through the filter. The amp has unity gain at DC, so any DC disturbance at the input is not amplified, but simply buffered.
With a 1,000µF output cap, that has a -3dB frequency of 20Hz. This further reduces any infrasonic disturbances prior to the resistive load. A speaker is not resistive though, so at resonance the impedance may be 40Ω or more. Even so, 20mV of infrasonic energy will not cause significant cone movement. Indeed, it's likely to be negligible with even the most sensitive speaker.
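The 20Hz figure comes straight from the standard first-order high-pass formula, f = 1 / (2πRC). A minimal Python check, using the 1,000µF capacitor and load impedances mentioned above:

```python
import math

def f3(c_farads, r_ohms):
    # -3 dB corner of the first-order high-pass formed by the output cap and load
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

C_OUT = 1000e-6                       # 1,000 uF output capacitor
print(round(f3(C_OUT, 8.0), 1))       # 19.9 Hz into the nominal 8 ohm resistive load
print(round(f3(C_OUT, 40.0), 1))      # 4.0 Hz if the impedance rises to ~40 ohms at resonance
```

Note that the rising impedance at speaker resonance *lowers* the -3dB frequency, which is why the infrasonic energy causes so little cone movement.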
Figure 1.1 - Schematic Of The Test Amplifier
It's no accident that many early single-supply amplifiers used a regulated supply. The regulator was pretty crude, but it served two purposes. It all but eliminated ripple, which could easily be reduced to less than 20mV, and it also kept the supply voltage reasonably stable as the load changed. This meant that the relatively poor power supply rejection ratio (PSRR) didn't cause hum and noise at the amp's output, and it all but eliminated the likelihood of infrasonic disturbance. The latter effect is almost certainly 'incidental', as I've never seen a reference to infrasonic disturbances for capacitor-coupled amplifiers.
Note that half of the AC feedback is taken from after the output capacitor (via R12). This connection has a very minor effect on the generation of infrasonic signals, but was a common trick in the days when single-supply amplifiers were common. Because the capacitor is inside a feedback loop, low frequency response is improved, and damping factor is somewhat better than a design that doesn't include the cap in the feedback loop. However, most of the time there will be little audible difference one way or the other.
In Fig 1, it's assumed that the supply voltage will be unregulated. Almost all tests I carried out on the design used an unregulated supply, and the DC voltage must fall when current is drawn. The amount of voltage drop depends on the size of the transformer and filter capacitor, and the signal amplitude and load impedance. Normally, we can expect the voltage to fall by at least 10% at full power, but if the transformer is only just big enough (around 20VA for example) you'll lose somewhat more when the amp is driven hard. This could see the average DC voltage fall from a nominal 30V to perhaps 26-28V under load. This voltage variation will affect the bias point, as it's derived from a voltage divider (R1, R2 and R11).
Figure 2.1 - Infrasonic Disturbances Caused By Supply Voltage Variation
The above graph shows just what I'm talking about. From 'low power' (176mW) to 'high power' (8.56W), the average supply voltage fell by just over 1V. While it's doubtful that the disturbances seen would be audible on most systems, the possibility cannot be discounted. You would need a very revealing set of speakers and an excellent listening environment to hear anything, certainly far better than the speakers I have in my workshop. The peak-peak amplitude of the disturbance is just under 800mV, so it's not going to cause large speaker cone excursions. A power supply with worse regulation will make matters worse of course. The effect can be reduced by increasing the value of C6, which filters the bias voltage, but it can't be eliminated without using a regulator to supply bias.
A tone-burst is a brutal test for capacitor-coupled amplifiers, and fortunately, music is far less demanding. This does not mean that there are no disturbances, but they will generally be comparatively subdued. Very simple amplifiers with only one gain stage (such as the El Cheapo [Project 12A]) may be expected to be affected more than the example used here, although a simulation showed (surprisingly) less effect. Be aware that the output capacitor itself removes at least some of the disturbance, because it's a high-pass filter.
The infrasonic effects seen above are all but eliminated if the supply is regulated. However, this adds extra parts and means a bigger heatsink due to the power dissipated by the regulator. These results can be duplicated easily, either using the test amp described above, or any commercial amp from before ca. 1975. Most of these early designs used an output capacitor, and several used a simple regulated supply. You can see the advancements in power amp designs in the article Power Amp Development Over The Years.
You can also regulate the bias supply. In the case of the Figure 1 amp, a fixed voltage of +25V applied in place of C6 will do just that, assuming a 30V supply. This reduces the amount of disturbance, but it doesn't eliminate it. This is because the remainder of the amplifier still has a supply voltage that varies with load, and that changes the operating conditions. Almost without exception, modern power amps use a dual supply, and the reference is the amplifier's ground connection. This doesn't move around, and infrasonic disturbances are almost unheard of. This is covered in detail below.
This is something that you'll be hard-pressed to find any information about. I suspect that the likely search terms are partly to blame, because the major search-engines will prioritise other material that seems to fit the criteria. Enclosing 'suitable' searches in quotes doesn't appear to be very helpful, because there are thousands of pages that refer to capacitor coupling, but none that I found that describe the process in detail. It's possible that there may be something behind a 'paywall', but it's a risky business to pay for an article based only on a short excerpt. I consider this to be an abuse of the spirit of the internet.
The capacitor acquires a charge when the amp's output is positive (referenced to the quiescent voltage of 15V), equal to I × t (time in seconds) coulombs. By definition, if a current of 1A flows for 1 second, the charge is 1C. The charge with 1A for 0.5ms (e.g. a 1kHz squarewave) is 0.5mC. When the amplifier's output is below the quiescent voltage, this charge is reversed, and will provide (for example) 1A for 0.5ms, leaving the net charge across the output capacitor the same as it was after the amplifier stabilised at power-on. The quiescent charge for Cc is about 15mC, obtained during power-on. A 1,000µF (1mF) cap with 15V across it has a charge (Q) of ...
Q = C × V
Q = 1m × 15 = 15mC (milli coulombs)
This initial charge is reached in (for example) 150ms with a constant current of 100mA. In reality, the charge curve is less well defined because there's a series resistance (the loudspeaker) and an uncontrolled charge current. For the case with a signal present, we can look at a 1kHz (1ms period) sinewave. We need to include the sinewave average constant of 0.637 to obtain the average current over time. We'll assume a peak output of 8V and an 8Ω load (1A peak). With a sinewave, the output cap will gain a charge (Q) of ...
Q = I × t
Q = 1A × 0.637 × 0.5ms = 0.3185mC = 318.5µC
The charge acquired/ released is obviously greater at lower frequencies and smaller at higher frequencies. On the negative half-cycle, this charge becomes a discharge. The charge on the capacitor increases and decreases by about ±0.5mC with a squarewave. Provided the charge/ discharge cycle is small compared to the total stored charge in the output capacitor, the frequency response is relatively unaffected. Using the same capacitor and load, the -3dB frequency is close enough to 20Hz, and the on/ off periods at that frequency are each 25ms. Under these conditions, the capacitor gains/ loses 14.6mC for each cycle, almost the total stored charge. This isn't easily calculated because the current waveform is differentiated due to the capacitor and load creating a high-pass filter. When Xc (capacitive reactance) is equal to the load impedance, the output level is reduced by 3dB. For a sinewave, we use the average value, which is 0.637 ...
Iavg = 2 / π
Iavg = 0.6366 (0.637)
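These charge figures are easy to sanity-check numerically. A minimal Python sketch using the values given above (1,000µF cap, 15V quiescent voltage, 1A peak at 1kHz):

```python
import math

C = 1000e-6                 # output capacitor (1,000 uF)
VQ = 15.0                   # quiescent voltage across it

Q_quiescent = C * VQ        # Q = C x V  ->  0.015 C (15 mC)

I_AVG_FACTOR = 2.0 / math.pi          # 0.6366..., rounded to 0.637 in the text
I_PEAK = 1.0                          # 8 V peak into 8 ohms
T_HALF = 0.5e-3                       # one half-cycle of a 1 kHz sinewave

Q_half_cycle = I_PEAK * I_AVG_FACTOR * T_HALF   # ~318.3 uC moved per half-cycle
```

The half-cycle figure comes out at 318.3µC with the exact 2/π constant; the article's 318.5µC uses the rounded 0.637.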
The capacitor gains its initial (quiescent) charge during power-up. The charge time is determined by the risetime of the bias network, the size of the output capacitor and the load impedance. If the amp's output voltage jumped to Vq (the quiescent output voltage) of 15V instantly, the initial current would be 1.875A for a 30V supply, tapering off to zero when the cap is fully charged. To measure the stored charge, you have to use the average current and the time period from power-on to where the charge current falls to (almost) zero. Again, this is not easily calculated, but it can be simulated easily enough. Alternately, just use the simple formula shown above.
Although no-one ever thinks about it, the exact same process applies with all capacitor-coupled circuits, from preamps (valve or transistor) to power amps.
Figure 3.1 - Voltage & Current For Symmetrical ±8V Output
In the drawing, I've shown a symmetrical ±8V sinewave output from the amplifier. For the positive half-cycle, current is drawn from the supply, controlled by Q1, through the capacitor (Cc) and then through the load to the ground return. As this is a series circuit, the current is identical at any point of the loop. For a negative half-cycle, current is drawn from the capacitor, controlled by the lower transistor (Q2), and passed through the load. Again, it's a series circuit with identical current at all points in the loop. The average level of a half-sinewave is 0.637 of the peak, so the charge on Cc increases by 318.5µC during the positive half-cycle, and it releases 318.5µC during the negative half-cycle.
In each case, 8V must cause a peak current of 1A. There is a small voltage 'lost' across the capacitor due to ESR and capacitive reactance. With a 1kHz signal, it should be about ±100mV, partly due to the reactance of the cap itself (159mΩ at 1kHz) plus a small loss due to the cap's ESR (equivalent series resistance). ESR should be less than 100mΩ (0.1Ω). These losses are ignored in the following calculations because they have little effect on the outcome.
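The 159mΩ figure is just the capacitive reactance, Xc = 1 / (2πfC). A short sketch, which also shows why low frequencies develop far more voltage across the cap:

```python
import math

def xc(f_hz, c_farads):
    # capacitive reactance: Xc = 1 / (2 * pi * f * C)
    return 1.0 / (2.0 * math.pi * f_hz * c_farads)

print(round(xc(1000.0, 1000e-6), 3))  # 0.159 ohm (159 milliohms) at 1 kHz
print(round(xc(20.0, 1000e-6), 2))    # 7.96 ohms at 20 Hz -- comparable to the 8 ohm load
```

At 20Hz the reactance equals the load impedance (the -3dB point), which is why output capacitor losses and distortion are worst at low frequencies.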
So, during the 'charge' period with a 1kHz sinewave (amp output 8V greater than 15V), the capacitor accumulates a 318.5µC charge described above. For negative outputs (15V - 8V), the cap loses 318.5µC of charge. Equilibrium is established quickly. If there were no state of equilibrium, the capacitor could charge or discharge in one direction until it reached the supply voltage or zero, but this doesn't happen over the long term. The small periods where equilibrium is not maintained perfectly represent the infrasonic disturbances seen in Figure 2.1.
The situation is more complex when a music signal is used, as there are always periods of asymmetry, and music is dynamic. This means that the DC voltage across the capacitor will change, but most of the asymmetry has been eliminated thanks to the input capacitor. This goes through the same process as the output cap, but of course the voltages, currents and amount of charge are all a great deal smaller. Any asymmetrical waveform will cause a DC shift, but most of it is removed by the capacitors throughout the circuit. Asymmetry can be re-created if transients (in particular) are allowed to clip. The clipping will often be inaudible due to the short duration, but the asymmetry created is very real. Capacitively-coupled asymmetrical signals can create a DC offset under some conditions, but a lab experiment and real-life are different.
Note: Fully DC coupled amplifiers might seem like a good idea, but consider the fact that any DC offset will cause speaker cones to shift relative to their rest position. This can cause distortion because the voicecoil is no longer centred within the magnetic circuit. You have a choice - either allow all asymmetrical signals to pass through the amp to the speaker (including any DC component), or use one (or more) capacitors to remove the DC component. If you choose the latter, there will be some infrasonic disturbance, but it's a great deal less than the effective DC component. Everything you listen to has passed through multiple capacitors, so the idea of eliminating 'evil' capacitors is just silly and isn't worth discussion.
It should be obvious from the above that load power is drawn from the supply only during positive (greater than Vq) signal excursions. As there is no negative supply, the negative portion of the output waveform is derived from the charge stored in the output capacitor. For a perfectly symmetrical signal, the two balance out, leaving the net charge on Cc (the output capacitor) unchanged. At first glance it may seem that we are getting something for nothing, as the negative half-cycle is 'free'. Naturally, nothing of the sort happens.
When you look at the current distribution in a single-ended (capacitor coupled) amplifier, it's apparent that current is drawn from the power supply only during positive-going signals, when the output voltage is greater than the quiescent state. That's +15V for the example here, but it can be up to +35V with a +70V supply. You might imagine that this means that the negative-going signals get 'free' power, because it's supplied by the output capacitor. Getting something for nothing is frowned upon by the laws of physics (and the Taxman), so we have to assume that there is no 'free' power involved.
The easiest way to demonstrate the power used is to examine both input and output power. The current drawn by the remainder of the amp is ignored. Using the same waveforms as shown in Figure 3, we can examine the input power, delivered from the power supply. The single supply is 30V, and the average output power is 4W (8V peak is 5.66V RMS). The input current averages 364mA, so with a 30V supply the input power is 10.92W. It's immediately apparent that we don't get that free lunch after all - the input power is 2.73 times greater than the output power with the conditions described.
With a dual supply amplifier (±15V) it's obvious that the speaker current and therefore the supply current for each half-cycle must be equal. Each part is a series circuit, so if 1A peak flows from the supply to the speaker via the transistor, the current in each part of the circuit has to be identical. With each half-cycle, the peak current is again 1A, and the average is also 364mA. With half the voltage, the power delivered from each supply (one positive, one negative) is 5.46W, exactly half that of the single 30V supply. Because there are two supplies, the total is 10.92W.
The laws of physics are satisfied, and the input power is identical for single-supply and dual-supply amplifiers under the same conditions - total supply voltage, signal amplitude and load impedance. It's somewhat counter-intuitive at first, but examination of input vs. output power is by far the easiest way to work out what happens. You can also measure the mains current, but if you were to do that the circuits must be identical other than the power supply configuration. If you're unwilling to build the amps and supplies to take measurements, the results can be simulated.
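The bookkeeping can be verified with a few lines of Python. The 364mA average supply current is taken as given from the article's measurement/ simulation; everything else follows by arithmetic:

```python
# Power balance with the article's figures: 8 V peak into 8 ohms,
# average supply current 364 mA (the article's measured/simulated value).
V_PEAK = 8.0
R_LOAD = 8.0
I_AVG = 0.364

p_out = (V_PEAK / 2 ** 0.5) ** 2 / R_LOAD   # 4.0 W (8 V peak = 5.66 V RMS)
p_in_single = 30.0 * I_AVG                  # 10.92 W from the single 30 V rail
p_in_dual = 2 * (15.0 * I_AVG)              # 5.46 W per rail, 10.92 W total
ratio = p_in_single / p_out                 # ~2.73 -- no free lunch
```

The single-supply and dual-supply input powers are identical, as the text states.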
A dual supply amplifier uses ground as its reference, with a positive and a negative power supply. So that I could use the same amplifier (both for display here and for simulations), a -2.5V bias was used at the input to obtain zero voltage at the output with no signal. Otherwise there's no difference in the circuit, other than changing from a single +30V supply to a ±15V supply.
Figure 5.1 - Dual Supply Test Amplifier
This arrangement doesn't require a great deal of comment, as the dual supply is the de facto standard today. This doesn't mean that capacitor coupling is not used though, and there are a surprisingly large number of amplifiers that still use an output capacitor. These are primarily low-power designs, and they are used in many consumer products because they are cheaper to build than a dual supply.
Figure 5.2 - Voltage & Current For Symmetrical ±8V Output
The current paths are also exactly what you'd expect. Positive output current flows from the positive supply, through Q1, the load and back to the power supply common (ground). Negative half-cycles are provided from the negative supply, through Q2 and the load back to the supply's common. This is all very easy to follow. The load current is controlled by the transistors, which are within a feedback loop to ensure that the output signal is an accurate (but larger) image of the input signal.
A point that's generally missed is that the power supply filter capacitors form part of the audio circuit, both for single and dual supplies. The supply doesn't exist in some fugue state, divorced from the 'real world' and acting as a separate entity with no association with the amplifier. The filter capacitors supply the current for positive transitions (single supply) or both positive and negative half-cycles (dual supply), with the job of the transformer and rectifier being only to maintain the required voltages at the current being drawn. I expect that this may not 'sit well' with some people who claim to abhor capacitors in the audio path, but it should be obvious that they are there whether you like it or not.
There will always be a (small) voltage dropped across the output capacitor. The component you can measure easily is due to the capacitor's ESR, which is in phase with the current but almost always slightly non-linear. This is the reason output capacitors nearly always cause increased distortion, particularly at low frequencies, where the reactance is greater and more voltage is developed across the capacitor. There is an additional voltage component across the capacitor that's harder to measure, due to its reactance. This varies with frequency and is 90° out of phase, and it's this component that's created as the capacitor gains or loses charge (in coulombs). Capacitive reactance and the charge exchanged are directly related, so as the reactance is reduced (e.g. with increasing frequency), so too is the charge gained or lost each cycle (and vice versa of course).
You'll see many capacitor coupled amplifiers (including the one shown here that I used for testing) that derive at least part of their negative feedback signal from after the output capacitor. This helps to minimise distortion created by the capacitor. The other method is to use a capacitor with a higher value, as this reduces both ESR and capacitive reactance. The 1,000µF cap shown is actually too small for very good performance.
In the interests of completeness, I've included the conversion factors so coulombs (charge) can be converted to joules (energy), along with other useful conversions. While these aren't necessary to understand the processes involved with capacitor coupled amplifiers (or other applications using capacitive coupling), they may come in handy some day.
Energy in capacitor = Q × V / 2
= Q² / ( 2 × C )
= C × V² / 2 (also written as ½ × C × V²)
Where the energy is in joules, Q is the charge in coulombs, V is the voltage in volts, and C is the capacitance in farads.
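As a quick demonstration that the three forms are equivalent, here they are evaluated for the 1,000µF cap charged to 15V used throughout this article:

```python
C = 1000e-6               # capacitance in farads (1,000 uF)
V = 15.0                  # volts
Q = C * V                 # charge in coulombs (15 mC, as computed earlier)

e1 = Q * V / 2.0          # Q x V / 2
e2 = Q ** 2 / (2.0 * C)   # Q^2 / (2 x C)
e3 = C * V ** 2 / 2.0     # C x V^2 / 2
# all three give 0.1125 joules
```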
I suggest that if you intend to work a lot with capacitors (something we can't escape with electronics), you should make a note of these formulae. You won't need them for most activities, but there will come a time when you'll want to know, either for interest's sake or because not knowing will leave you in the dark as to what happens within a circuit.
All of the results shown here were simulated, but a bench-test using the P217 amplifier was also performed. This was done both with music and a tone-burst, and the infrasonic disturbance was visible, but not very pronounced. This was because the unregulated supply I used has better regulation than expected (at least at the modest power drawn by the test amplifier). In the majority of cases, any infrasonic disturbance will be quite small, and audibility (or otherwise) isn't something I'm willing to comment upon. The effects are real and easily simulated, but are probably less easily measured.
Nevertheless, the way a coupling capacitor works in a circuit isn't something I've seen described anywhere else. Mostly, we just know it works because we can see it in operation. We know that both power transistors dissipate much the same power, and probably don't give a great deal of thought to the processes involved. As it turns out, there is more to it than we imagined, particularly the charge existing on the coupling capacitor itself.
Most people don't worry about the charge (in coulombs) lurking on a capacitor, or it being increased and decreased with the signal. Indeed, it's not something that I've discussed other than in passing for any of the many articles on the ESP site. Mostly, it's pretty much irrelevant to the majority of audio circuits, and although the principles explained here apply for all coupling capacitors, it's never necessary to go into any detail.
There are no references for the specific topic, but the formula for Coulombs was obtained from Wikipedia and verified elsewhere. The specifics of this topic seem to have escaped attention over the years, largely I suspect because not many people actually care, as long as it works.
One link you may find useful is Capacitor Charging Equation (Hyperphysics), along with Lumen Learning (Energy Stored in Capacitors)
Elliott Sound Products | Capacitance Multipliers |
In the Project 15 page, I have described a number of different approaches to a capacitance multiplier. While this is a useful resource, it doesn't delve into the design criteria, so this article is intended to provide you with enough information to design your own. There is also an article in the 'TCAAS' section of my site (see JLH Capacitance Multiplier), but this doesn't cover the design criteria in much detail either. The original John Linsley-Hood version (see Simple Class A Amplifier, page 9) uses a single-pole filter, which is nowhere near as good as the version described here.
The article Linear Power Supply Design should be considered essential reading before embarking on a capacitance multiplier, as many of the essential elements are discussed in detail. Parts 2 and 3 are also interesting, but don't cover high current supplies.
While a capacitance multiplier is superficially simple, there's actually more to it than you might think. Everyone who uses this type of circuit calls it a 'capacitance multiplier', and while you may think it's also a crude gyrator (simulated inductor), this isn't the case. The behaviour is similar to a very much larger capacitance, but there are some significant differences.
A 'capacitance multiplier' is really just a buffered filter, with the filter response set by the resistance and capacitance at the base circuit. Capacitance is not multiplied by the gain of the transistor(s), only the current flowing through the base resistor. However, there's more to it than that. In particular, there's a great deal to be gained by using two capacitors, separated by a second resistor. This improves ripple rejection because the filter is converted from first-order (6dB/ octave) to second-order (12dB/ octave).
Despite the name 'capacitance multiplier' being a misnomer because nothing of the sort happens, I'll still use the term in this article. Calling it a 'buffered passive filter' is more accurate, but doesn't convey the same idea, as the original term has been used for years and it's something that people are used to. Provided you understand that the original term is inaccurate (or just plain wrong) and understand how it works, it doesn't matter what it's called.
A simple passive filter can't be used with significant current because the voltage drop across the resistors would be prohibitive unless they are very low values (less than 1Ω). This is impractical, because the capacitance needed to obtain a -3dB frequency of less than 1Hz becomes very large. For example, a filter using 1k and 1,000µF has a -3dB frequency of 159mHz. If the resistor is reduced to 1Ω, the capacitance would have to be 1F (that's 1 Farad!). Using a transistor emitter-follower means that we can use higher resistance and lower capacitance, with the transistor providing the load current rather than it being drawn directly from the filter.
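The scaling argument is easy to verify with the first-order filter formula, f = 1 / (2πRC), using the example values from the text:

```python
import math

def corner_hz(r_ohms, c_farads):
    # -3 dB frequency of a single RC section
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

def cap_for(f_hz, r_ohms):
    # capacitance needed for a given corner frequency with a given resistor
    return 1.0 / (2.0 * math.pi * r_ohms * f_hz)

print(round(corner_hz(1000.0, 1000e-6), 3))  # 0.159 Hz (159 mHz) with 1k and 1,000 uF
print(round(cap_for(0.159, 1.0), 2))         # ~1.0 F with a 1 ohm resistor -- impractical
```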
A single transistor doesn't have enough gain to allow the use of comparatively high resistance. The TIP35/36 devices I suggest will have a 'typical' gain (hFE) of around 45, and around 100 for the BD139/140. This gives a total theoretical hFE of 4,500 but it will be less than this in reality. A value of 1,000 is a realistic figure to work with. This means that resistors can be (up to) 1,000 times the value needed for a passive filter, and the capacitance will be 1/1,000th of the value otherwise needed. Because we will adopt a 2nd order filter (12dB/ octave) it's possible to reduce the capacitance further than would be the case with a single resistor and capacitor, with no loss of performance. Indeed, the reverse is true, with faster response and better filtering.
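As a quick sanity check of those gain figures, the composite (Darlington) gain and the impedance scaling it buys can be sketched as below. The derating from 4,500 to 1,000 is the conservative working value suggested above, not a datasheet number:

```python
# Composite (Darlington) current gain from the 'typical' figures in the text.
hfe_driver = 100    # BD139/140, typical
hfe_output = 45     # TIP35/36, typical
theoretical_hfe = hfe_driver * hfe_output
print(theoretical_hfe)        # 4500 in theory; less in reality

working_hfe = 1000            # the realistic design figure used in the text

# A passive filter feeding a 1A load directly would need ~1 ohm resistors;
# buffered by the Darlington, the same corner frequency allows resistors
# up to working_hfe times larger, with capacitance reduced by the same factor.
r_passive = 1.0
r_buffered = r_passive * working_hfe
print(r_buffered)             # 1000 ohms
```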
While it would seem to be ideal, a MOSFET isn't recommended for a number of reasons. Section 6 explains the reasons for not using a MOSFET. Because they have very high input impedance, low values of capacitance and correspondingly high resistor values can be used, but the issues are with the MOSFET itself.

Note that in this article, I have avoided extensive mathematical analysis. A published article [2] that I saw went in the opposite direction (all maths, with little or no practical application) which made it pretty much useless for hobbyist constructors. The engineering is all quite correct, but the application was ... unhelpful (IMO). As seems to be typical, the only filter discussed is first-order, so performance was comparatively poor - despite the extensive maths offered.

While having all the equations to hand may seem like a good idea, mostly you don't need them. A few simple calculations are shown here, and you usually don't need anything else. You need to know how to specify the transformer and main filter cap, decide on the transistors you'll use, and do a rough calculation to determine the filter frequency (it should be less than 1Hz if you expect low ripple). These are all mostly straightforward, but transformer selection is more difficult.

In addition to capacitance multipliers, there are a couple of other techniques shown here. These are provided because they are interesting, but they are not particularly useful for most hobbyists because they show solutions that are better achieved using other techniques such as a 'proper' regulator. However, regulators themselves generate noise, but it's generally low enough that it doesn't cause any problems with most circuits. Regulators are not covered here, because they are explained in detail in the articles Voltage & Current Regulators And How To Use Them, Discrete Voltage Regulators and Low Dropout (LDO) Regulators.
For the sake of the exercise, assume that we want the following specifications:

Output Voltage - 25 Volts (nominal)
Output Current - 2.5 Amps max. (1.25 Amps average)
Mains Voltage - As used in your country
These specifications are typical. Australia, Britain and Europe use nominal 230V mains, with (again nominal) 120V used in the US and Canada. However, the mains voltage is immaterial, and only the secondary voltage is important. The mains voltage is subject to variations, both long and short term. The energy suppliers generally claim ±10%, but it can be more in some circumstances. Australia and the UK used to be 240V, and in many cases that's still what is supplied. The US and Canada used to claim anything from 110V to 120V, with 115V often quoted. Europe used to be 220V, and has now changed to 230V, but as with everywhere else, only the claimed nominal voltage was changed, but in most cases no physical changes were made to the network. All circuitry has to assume 'worst-case' variations, and using the claimed voltage alone will always have a significant error.
We are not all that interested in the mains input voltage, only the possible variations at the output of the transformer/ rectifier/ filter combination. While the two are related, the secondary voltage is also subject to copper losses in the transformer (winding resistance). This is particularly troublesome when continuous high current is expected. If the transformer is operated above its nominal VA rating, it will overheat and may be damaged if the overload lasts for too long.

For a nominal output of 25 Volts (for example), we need a minimum input DC voltage of about 28 Volts, since there will be ripple on the DC voltage (See Fig 1.1). This 'minimum' voltage is the instantaneous minimum, including ripple and the voltage drop caused by the transformer's regulation. Note that for all calculations I am assuming 50Hz mains supply. The results will be slightly different for 60Hz, but the difference is not particularly significant. Capacitor values can be reduced by about 15% to account for 60Hz.
Figure 1.1 - Basic Rectifier, Filter & Load
Your multimeter will show the average voltage, but that's not useful because of the superimposed ripple. Once the amplifier's output voltage increases beyond the 'DC Output' voltage (just below the minimum voltage shown), ripple will appear at the amp's output. This will find its way to our ears as it is the onset of clipping. The combination of Cf and Rsource will always have a -3dB frequency, as it's a simple low-pass filter. Unfortunately, Rsource is not an easy parameter to measure because it's a mixture of mains, transformer and rectifier impedances, complicated by transformer ratios and the dynamic resistance of the diodes in the bridge rectifier.
The only thing we can control easily is the capacitor value. If we use the example above, the DC output voltage is 20V, and the required capacitance is (roughly) determined with a simple formula ...

C = (I_L / ΔV) × k × 1,000 µF ... where
I_L = Load current
ΔV = peak-peak ripple voltage
k = 6 for 120Hz or 7 for 100Hz ripple frequency
To obtain (say) 1V of ripple with 1.25A average current, the capacitance needs to be 8.75mF (i.e. 8,750µF) for 50Hz, or 7.5mF for 60Hz. Note that the DC voltage is (almost) immaterial, and 1V P-P ripple (±10%) will be present with a 1.25A load current at almost any voltage you care to use. I've run a simulation showing that with an AC input of 20V, 30V and 40V (peak), the ripple voltage only changes by 20mV (RMS) or about 80mV peak-peak. Perhaps surprisingly, if the power transformer is larger (higher VA rating, so lower internal resistance), the ripple voltage will be slightly greater than you'd get with a smaller transformer. This is almost certainly the opposite of what you'd expect.
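Plugging the numbers above into the formula reproduces the quoted values. The sketch below simply wraps the text's rule of thumb in a function (the function name is mine):

```python
def filter_cap_uF(i_load: float, delta_v: float, ripple_hz: int) -> float:
    """Main filter capacitance in uF: C = (I_L / delta_V) * k * 1,000.
    k = 7 for 100Hz ripple (50Hz mains), 6 for 120Hz ripple (60Hz mains)."""
    k = 7 if ripple_hz == 100 else 6
    return (i_load / delta_v) * k * 1000

# 1.25A average load, 1V peak-peak ripple target:
print(filter_cap_uF(1.25, 1.0, 100))  # 8750.0 -> 8,750uF for 50Hz mains
print(filter_cap_uF(1.25, 1.0, 120))  # 7500.0 -> 7,500uF for 60Hz mains
```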
When you are drawing a continuous (and relatively high) output current, the DC voltage will be much lower than expected. We nearly always assume that the DC voltage is 1.414 times the AC (RMS) voltage, and at light loading that is true. The transformer's regulation complicates matters, because the output voltage is reduced by the resistance of the primary and secondary windings. The manufacturer's regulation figure (if quoted) is based on a resistive load, and a capacitor input filter is anything but resistive.

Determining the transformer VA rating isn't hard either. Using the values from above, we need a 25V secondary (not 18V as you may have thought), and we'll have an average current of 1.25A DC. The AC (RMS) current in the transformer's secondary is roughly double the DC current (it's often taken as 1.8 for a bridge rectifier, but that leaves no margin for error), so 2.5A. The transformer needs a rating of 62.5 VA as a minimum ...

I_sec = I_DC × 2
I_sec = 1.25 × 2 = 2.5 A
VA = V × I = 25 × 2.5 = 62.5 VA
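The VA calculation can be sketched the same way; the factor of 2 is the conservative secondary-current multiplier used above (1.8 is often quoted, but leaves no margin for error):

```python
def transformer_va(v_sec: float, i_dc: float, factor: float = 2.0) -> float:
    """Minimum transformer VA rating. Secondary RMS current is taken as
    factor * I_DC (2.0 is the conservative figure used in this article)."""
    i_sec = i_dc * factor
    return v_sec * i_sec

print(transformer_va(25, 1.25))   # 62.5 VA minimum for the worked example
```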
Note that this is the absolute minimum, and you'll get better regulation (and better performance overall) if you use a bigger transformer. Running a transformer at its full rating for long periods will cause it to run hot, and small transformers always have worse regulation than larger ones. Some people will recommend that the transformer VA rating (for Class-A amplifiers) should be up to five times the total output power. While that might seem like total overkill, it's probably about right. That means you'd use a 200 VA transformer for a dual 20W Class-A amplifier. Because of better regulation, you can almost certainly use a lower voltage (say 20V rated AC output instead of 25V).
If you intend to draw more current or operate with a higher voltage, you can work out the transformer to suit. One of the things that's quite difficult to know in advance is the transformer's regulation. While it will usually be provided in the datasheet, that's for a resistive load, and it's always much worse with a rectifier and capacitor filter. Unless you know the winding resistance for primary and secondary, it can't easily be calculated. For continuous current, as a first approximation assume that the DC voltage will be the same as the rated AC secondary voltage. The DC voltage will always be higher with no-load or light loading. Capacitance multipliers are best used with circuits that draw fairly constant current, and they don't work so well with dynamic (always changing) load current.

One of the nice things about a capacitance multiplier is that you don't need to change much to use it with a higher or lower voltage. The voltage rating of capacitors needs to be high enough of course, but the value shouldn't need to be changed. If the expected current is a great deal more (or less) than the examples shown here, you may need to adjust resistor values, but mostly you won't need to change anything. If you need higher current, suitable transistors should be used, but the dissipated power remains fairly low.

The only real thing to worry about is the degree of filtering needed! We must assume that up to 2 volts may be lost across the capacitance-multiplier filter, to ensure that the DC input (including ripple component) always exceeds the output voltage by at least 2V. Transient performance may also need to be considered if the load current is not continuous. In general, the minimum differential voltage from input to output should not be less than 1 volt (based on the lowest point of the input ripple).

Because there is no regulation, the power amplifier must be capable of accepting the voltage variations from the mains - every standard power amplifier in existence does this quite happily now, so it is clearly not a problem. Note that the output power is affected, but this happens with all amps, and cannot be avoided because the output voltage is a little lower than for a basic capacitor filter.
We can now design for the nominal transformer secondary voltage, and with very simple circuitry, provide a filter which will dissipate no more than about 4 Watts in normal use - regardless of the mains voltage. Figure 2.1 shows the basic configuration of a capacitance multiplier filter, where the frequency response of the filter is in control of the output DC via the emitter-follower connection of the series-pass transistors. This allows a comparatively high impedance filter to be buffered by the output stage, and allows the use of small capacitors rather than very large ones.
Figure 2.1 - Single (Basic) Capacitance Multiplier
A basic cap multiplier is shown above. The filter is single-pole, and has a rolloff of 6dB/ octave above the -3dB frequency (0.159Hz). This heavily filtered voltage is then buffered by Q1, which is an emitter-follower. D1 prevents transistor damage if a voltage is present at the output but not at the input. This diode is (or should be) used with any regulator or cap multiplier unless there is zero possibility of a reverse voltage being applied. This can happen easily if you use a particularly large output capacitance, but it's never a problem if the diode is included.
Both a 1F (one Farad) filter capacitor and a basic cap multiplier will provide a ripple of well under 10mV RMS at around 3A, but the multiplier has the advantage of removing the triangular waveform - the residual ripple is not a sinewave, but it has a much lower harmonic content than would be the case even with a 1F capacitor.
Figure 2.2 - R/C Filter, Emitter Follower & Load
The basics of operation are split into the sections above (D1 has been omitted in this circuit). R1 and C2 form a simple low-pass filter, and it's obvious that if the load were connected across C2, the available current is very low because of R1. The maximum output current without Q1 is limited to about 2mA for a loss of 2V. This is overcome by adding the transistor, which is an emitter-follower used to boost the low current through R1 to drive the load. R1 only has to provide base current for the emitter-follower transistor. Provided Q1 has high gain, very little voltage is 'lost' across R1. However, there has to be some loss which is 'just right' or ripple at the collector will get through Q1 and to the load. Refer to Figure 4.3 to see the voltage relationships.
The simple capacitance multiplier filter in Figure 2.1 is quite satisfactory as a starting point, but its operating characteristics are too dependent on the gain of the output transistor(s). What is needed is a circuit whose performance is determined by resistors and capacitors, and which is relatively independent of active devices (although these will still have some impact on the degree of filtering provided). The Figure 2.3 circuit accomplishes this by using a Darlington pair, which has much higher gain than a single transistor. The gain is important, because with too little gain, R1 (Fig 2.2) or R1 + R2 (Fig. 2.3) need to be a lower value to minimise the voltage drop of the filter network when supplying base current to Q1.

We need to keep the impedance of the filter fairly high (to minimise the capacitance needed), so that requires an output transistor hFE of at least 1,000, so 1mA of input (base) current becomes 1A of output (emitter) current. To obtain a gain of 1,000 for a power transistor, we need to use a Darlington - either an encapsulated Darlington device, or a pair of 'ordinary' transistors connected in a Darlington pair (See Fig. 2.3). The latter is my preferred option, since it allows greater flexibility selecting suitable devices and it will usually have better performance. Another alternative is to use a complementary feedback (Sziklai) pair, as shown in Figure 3.2.

The way it works is fairly straightforward. The degree of hum filtering (for the simple version) is determined by the filter comprising R1 and C2. With 1k and 1,000µF, it's a low pass filter, rolling off at 6dB/ octave from 0.159Hz (-3dB). That means that by the time you reach 100Hz (or 120Hz) the 100Hz ripple is (at least in theory) attenuated by about 56dB, so (for example) 1V RMS of ripple is reduced to 1.6mV (RMS). This is far greater than you can achieve with a capacitor alone. The next phase is to add another filter as shown in Figure 2.3, so the rolloff is increased to 12dB/ octave. With 2 x 500Ω resistors and 2 x 500µF caps, the output ripple is reduced to about 40µV (almost 90dB). Of course, this is all well and good (again in theory), but the transistor itself will prevent you from achieving this. However, a ripple attenuation of 60dB is easily achieved.
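The 56dB figure follows from the standard single-pole low-pass response, |H(f)| = 1/√(1 + (f/fc)²). A short sketch using the 1k/1,000µF values:

```python
import math

def rc_atten_db(f: float, r: float, c: float) -> float:
    """Attenuation (dB, positive number) of a single-pole RC low-pass at f."""
    fc = 1.0 / (2.0 * math.pi * r * c)
    return 20.0 * math.log10(math.sqrt(1.0 + (f / fc) ** 2))

atten = rc_atten_db(100, 1000, 1000e-6)
print(atten)                        # ~56dB at 100Hz, as quoted
residual = 1.0 / 10 ** (atten / 20)
print(residual)                     # 1V RMS of ripple reduced to ~1.6mV
```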
The final step is to add a resistor to ensure that the output voltage is just below the most negative part of the ripple waveform. In the designs shown below this has been included. If it's left out, the output ripple is over 30 times greater. Many designers have failed to perform this analysis carefully, and the final resistor is omitted. R3 makes almost no difference to the DC output (it's reduced by around 200 millivolts), but that tiny bit of 'headroom' makes a big difference to the ripple appearing at the output.
Figure 2.3 - Single (Final) Capacitance Multiplier
A complete design (single polarity) capacitance multiplier is shown above. Dynamic loads are less than ideal with a capacitance multiplier, but it can be done, and more details are shown below. The 2-pole filter is far and away the better configuration, and it is (more-or-less) suitable for dynamic loads. It's not perfect (nothing is), but the voltage recovery can be made fast enough to get good performance. Increasing the speed means less ripple reduction though, so it's always a compromise.
When a 2-pole (12dB/ octave) filter is created using a passive design, the -3dB frequency is increased by a factor of about 1.56. For example, a single-pole filter with 1k and 1,000µF cap has a -3dB frequency of 159mHz, but when that's split into 2 x 500Ω resistors and 2 x 500µF caps, the -3dB frequency is 248mHz. At 100Hz, the 6dB filter is -56dB down, but the 12dB filter is nearly 74dB down, a significant improvement! With the values used in these examples, the hum is just under 73dB down at 100Hz, reducing the ripple by a factor of more than 4,700. Feel free to increase the value of C2 and C3 if you wish, but you probably won't hear the difference. The resistance of R1 and R2 has been reduced to 220Ω to ensure that there's always enough base current for Q1.
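Because the second RC section loads the first, the passive 2-pole filter isn't simply two independent poles; its transfer function is H(s) = 1/(s²·R1R2C1C2 + s·(R1C1 + R1C2 + R2C2) + 1). The sketch below evaluates this at 100Hz using 220Ω resistors and 470µF capacitors per section. The 470µF value is my assumption (the Figure 2.3 capacitor values aren't reproduced in the text), chosen because it lands at the quoted 'just under 73dB':

```python
import math

def two_section_atten_db(f, r1, c1, r2, c2):
    """Attenuation (dB) of an unbuffered two-section RC ladder at frequency f.
    The second section loads the first, so this is the exact ladder response,
    not the product of two independent single-pole responses."""
    w = 2.0 * math.pi * f
    a = r1 * r2 * c1 * c2              # s^2 coefficient
    b = r1 * c1 + r1 * c2 + r2 * c2    # s coefficient
    denom = abs(complex(1.0 - a * w * w, b * w))
    return 20.0 * math.log10(denom)

# 2 x 220 ohm, 2 x 470uF (assumed): just under 73dB down at 100Hz
atten2 = two_section_atten_db(100, 220, 470e-6, 220, 470e-6)
print(atten2)
```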
The output capacitor (C4) is only needed to provide a small amount of transient current and to ensure that the connected amplifier remains stable. Although I've shown 470µF, it can be increased or reduced if you wish. It makes little difference to the filtering, because the output impedance of the emitter-follower series-pass transistor is very low. At 100Hz, the impedance (capacitive reactance) of a 470µF capacitor is 3.4Ω. The final output impedance from the emitter-follower will be a few milliohms, and C4 doesn't change that. A larger value will help to provide transient current, but it's unlikely to make any audible difference. The transistor's gain will fall at high frequencies, so the output capacitor maintains a low output impedance up to several hundred kHz. Feel free to add a film capacitor in parallel with C4, but don't expect it to have any measurable or audible effect.
Figure 2.4 - Single Multiplier With Current Limiter
Like regulators, capacitance multipliers are utterly intolerant of an output short-circuit. If the output is shorted, Q1 and/ or Q2 will fail almost instantly, and there is no 'safe' short-circuit duration. Adding very basic current limiting as shown above will (hopefully) provide some protection, but it's limited to very brief 'events' and it cannot tolerate a long-term short. The output current limit is set by R4, and Q3 will conduct if the output current exceeds ~3A. A higher value for R4 means lower current and vice versa.
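The limiting action is just Ohm's law across the sense resistor: Q3 turns on when the drop across R4 reaches one base-emitter voltage. The R4 value below (0.22Ω) is an assumption for illustration, since the schematic values aren't reproduced here; it gives a limit close to the ~3A mentioned:

```python
# Sense-resistor current limiting: Q3 conducts (and steals base drive from
# the pass transistors) when the drop across R4 reaches ~0.65V.
v_be = 0.65          # approximate silicon transistor turn-on voltage
r4 = 0.22            # ohms - assumed value, not stated in the text
i_limit = v_be / r4
print(round(i_limit, 2))   # close to 3A; a larger R4 gives a lower limit
```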
This can be applied to any of the circuits shown, but it will generally be an unnecessary complication. You do need to be aware that without protection, a shorted output will cause the demise of Q1 and Q2. There's a remote possibility that a fuse will provide some protection, especially if C4 is a larger value than shown. It's doubtful though, because fuses are generally not fast enough to protect transistors. A chap I used to work with (many, many years ago) called transistors '3-legged fuses' and the term is just as applicable today as it was then.

The final circuit for a dual supply is shown in Figure 3.1. This circuit reduces ripple to less than 1mV with typical devices (about 250µV RMS as simulated), and dissipates less than 4 Watts per output transistor at 1.25A continuous operating current. It is unlikely that you will achieve this low hum level in practice, since real wire has resistance and capacitor ESR also has an influence. However, with careful layout you should easily be able to keep the output hum and noise to less than 5mV, and this level is usually more than acceptable for a power amp.

As noted above, by splitting the capacitance and adding another resistor, we create a second-order filter (12dB/octave rolloff), which reduces the hum more effectively, and also removes more of the higher order harmonics (which tend to make a 'hum' into a 'buzz' - much more audible and objectionable). The resistor to ground (R3) stabilises the circuit against variations in transistor gain, but increases dissipation slightly. This is done deliberately to ensure that there is sufficient voltage across the multiplier to allow for short term variations.

The 12k resistor shown may need to be adjusted to suit your transistors and supply voltage. Reducing the value increases dissipation in the output devices and lowers output voltage. It is unlikely that any benefit will be obtained by increasing this resistor, but you may experience increased hum (hardly a benefit).
Figure 3.1 - Complete Dual Capacitance Multiplier (Darlington Pair)
This is an easy design to build, but requires great care to ensure that ripple currents are not superimposed on the output because of bad grounding or power wiring practices. The schematic is drawn to show how the grounds of the various components should be interconnected, using a 'star' topology. If this is not followed, then excessive hum will be the result. The grounding area needs to be big enough to provide space for all the connections, but not so big that there can be any circulating currents. All capacitor leads must be as short as possible. Wire has both resistance and inductance, and these can combine to provide significant performance degradation.
Normally, a schematic diagram is intended to show the electrical connections, rather than the physical circuit layout. This diagram is an exception, and the physical layout should match the schematic (inasmuch as that is possible, at least). Surprisingly little resistance is needed across a high current connection to produce a measurable performance degradation.

Note that the transformer is centre-tapped, and requires equal voltage on each side - selected to give the voltage you require. It is most important that the centre-tap is connected to the common of the two input filter capacitors (4,700µF), and that this common connection is as short as possible. Use of a solid copper bar to join the caps is recommended. Likewise, a solid copper disk (or square) is suggested for the common ground, tied as closely as possible to the capacitor centre tap. The resistance of the main earth connection is critical to ensure minimum hum at the output, and it cannot be too low.

Because the circuit is so simple, a printed circuit board is not needed, and all components can be connected with simple point-to-point wiring. Keep all leads as short as possible, without compromising the star grounding. For convenience, the driver transistors may be mounted on the heatsink, which does not need to be massive - a heatsink with a thermal resistance of about 5°C per Watt (or better) should be quite adequate (one for each output device). Remember that the lower the thermal resistance, the cooler everything will run, and this improves reliability.

Increasing the capacitance (especially at the input) is recommended, and I would suggest 4,700µF as the absolute minimum. More capacitance will reduce hum even further, and provide greater stability against short term mains voltage changes. Increased output capacitance (C4) will help when powering Class-AB amplifiers to account for their sudden current demands. I do not recommend more than 4,700µF for C4, as the charging current will be very high and may overload the series pass transistors.

Although generic transistor types (such as the 2N3055) can be used, it is better if devices with somewhat more stable characteristics (from one device to the next) are used. Plastic (e.g. TO-218) devices are fine for the output as shown, but if higher voltage or current is needed you might have to use TO-3, TO-3P, TO-264 (etc.) types. While you might get away with using TO-220 packaged transistors, be aware that they have poor thermal properties, and getting heat from the case to the heatsink is always a challenge.

For the components, I would suggest the following as a starting point (or equivalents):
Output Transistors - TIP35 (TIP36 for the -ve supply)
Drivers - BD139 (BD140 for the -ve supply)
Resistors - ½W metal film for all resistors
Diodes - 1N4001 or similar
Electros - No suggestions, but make sure that their operating voltage will not be exceeded, and observe polarity. (Bypassing with polyester is not necessary, but if it makes you feel better, do it.)
Bridge Rectifier - A 20 to 35A bridge is recommended. This is overkill, but peak currents are high, especially with large value capacitors. It also ensures minimum diode losses at normal currents.
Transformer - Ideally, use a toroidal. The power (VA) rating should be 'as required' for the amplifier. A dual 20W Class-A amp will have a preferred transformer rating of 200 VA - 5 times the amplifier power. (Note that VA is sometimes incorrectly quoted in watts.) Primary voltage is naturally dependent upon where you live.
Matching the output and driver transistors is not necessary and will not affect performance to any degree that's audible. Use devices with the highest gain (hFE) possible for best results. Transistor gain must be measured at (or near) the typical operating current or the measured value is not useful. Most hand-held 'all-purpose' component testers are useless for measuring power transistors, because the test current is far too low to give a usable reading.
To use the above circuit in single-ended mode, the transformer will need only a single winding (or paralleled windings). Simply wire the whole circuit as shown in Figure 2.3. See further below for a complete dual single-polarity version. A complementary version of the Figure 3.1 circuit is shown next.
Figure 3.2 - Complete Dual Capacitance Multiplier (Sziklai Pair)
The voltage drop across the series pass transistor can be reduced if a complementary (aka Sziklai) pair is used rather than the Darlington connection shown. For the positive supply, the driver may be a BD139 (NPN), but the output device would be a TIP36 or TIP2955 (PNP). This arrangement has almost the same gain as a Darlington pair, but the lower forward voltage may be considered an advantage as overall dissipation is slightly lower.

However (there's always a 'however'), as unlikely as it may seem, the performance of the Sziklai pair is worse than the Darlington unless the value of R3 is reduced. With it set at 6.8k, the performance of both circuits is virtually identical, and there's nothing to be gained. It's usually easier to wire a Darlington than a Sziklai pair, so the Darlington connection is the 'winner' in this comparison.

In all cases, and regardless of the transistor configuration, beware of the charging current into C4. If you use a very large capacitor in that location, the series-pass transistors will be at risk every time you turn on the power. The saving grace (as it were) is that the voltage comes up comparatively slowly as C2 and C3 charge, but if C4 is too large that may not be enough to save the transistors.
It's worthwhile to look at the difference between single-pole (6dB/ octave) and 2-pole (12dB/ octave) filter networks. This is important, as you might think that a single-pole filter should perform better with a constantly varying load. As it turns out, this isn't the case at all, and a 2-pole filter outperforms a single-pole filter in all respects ... including speed!
These circuits are suitable for Class-AB amplifiers, but since their current requirements vary so widely, adding a larger capacitance to the output is a must. The diode should be a high-current type as it may be subjected to more 'abuse' than is normally the case due to the current variations of Class-AB amplifiers. It's highly debatable if there's any real advantage when the load current changes continuously, but it's not difficult to run the simulations, and 'real life' will be almost identical to the simulated results.

When a capacitance multiplier is suddenly loaded, there will be some ripple 'breakthrough', because the voltage across the circuit is reduced when the load current is increased. If the voltage across the series pass transistor falls, there may not be sufficient reserve to maintain the minimum value of ripple voltage. It is very uncommon to find capacitance multipliers used with Class-AB amplifiers, because their supply current is constantly changing and the benefits are dubious. Consider that millions of Class-AB power amps are in use worldwide, and none that I know of use a capacitance multiplier. A few old designs did use a (very basic) regulated supply, primarily because they were single-supply designs with an output capacitor. Supply voltage modulation could cause some infrasonic disturbances, but even then most just used an 'ordinary' power supply.
Figure 4.1 - Single-Pole Capacitance Multiplier Test Circuit
Most of the articles (and videos) discussing capacitance multipliers only look at the single-pole version shown above. From the description I've already provided, you know that a single-pole filter is inferior to double-pole (2-pole). While it may come as a surprise, a 2-pole filter also recovers slightly faster when a load is applied or removed with the same overall filter values. I expect that at least some of the apparent reticence may simply be due to a lack of interest.
If the load is variable, a 2-pole filter is still preferable, but ripple rejection has to be sacrificed for a faster recovery time. The only change is that the capacitance is reduced. With the values given in Figure 4.1, ripple reduction is about 4 times worse than the Figure 2.3 multiplier, and the recovery speed of the latter (after a high-current load is removed) is still slightly faster. With a load current varying from 360mA to 2.8A, the single pole filter recovers to 35V in 275ms vs. 193ms for the 2-pole version.
Figure 4.2 - Double-Pole Capacitance Multiplier (High Speed)
There can be no doubt that the 2-pole capacitance multiplier outperforms a single-pole version in every respect. I managed to get that part right when I published the first version back in 1999, but almost every other description fails to mention anything other than single-pole filters. I have no idea why this is the case, especially given the superior performance of a 2-pole filter.
Figure 4.3 - Single-Vs Double Pole Capacitance Multiplier Comparisons Test Circuit
For the above, the simulations were set up as shown in Figures 4.1 and 4.2. All voltages shown are AC-coupled RMS values, so show the ripple voltage present with a 100Ω load for the first 3 seconds, then with an additional 10Ω load switched in. It's turned off again at 4.5 seconds so the recovery time can be seen. Next, I've zoomed into the point where the 10Ω load is turned off, and you can see the recovery for the two filters.
Figure 4.4 - Single-Vs Double Pole Close-Up Response
The input voltage (across C1) recovers very quickly, but the multipliers are much slower. Naturally either circuit can be made faster by using smaller capacitors, but ripple rejection suffers. Of course, you may decide that you don't need very high ripple rejection, in which case the capacitance can be reduced further. With C2 and C3 at 33µF, you still reduce ripple by 24dB (from 1.53V to 94mV RMS, or 4.9V p-p down to 240mV p-p). Recovery is almost instant, taking only 48ms to get back to 36V.

The idea of a multi-pole filter can be extended to a 3-pole version. This will give better ripple attenuation and a faster response, but at the expense of more parts (one extra resistor and capacitor). The law of diminishing returns comes into play though, and it's unlikely that the improvement will prove worthwhile. A 2-pole filter is a good compromise, and it has performance that's 'good enough'. Almost any circuit can be improved, but if the improvement isn't audible then it's rather pointless.

Note that the supply voltage to the power amp(s) will be modulated by the instantaneous current drain of the amp, but this happens with 'conventional' supplies too. For any dynamic load, you have to sacrifice ripple rejection for speed, otherwise the results will almost certainly be unsatisfactory. Even if a 2-pole filter is optimised for speed, it's still slower than the recovery of a supply with only a simple filter capacitor.
Figure 4.5 - Pi Filter
An alternative arrangement is a 'pi' (π) filter, with two (usually fairly large) capacitors separated by a resistor of perhaps 0.1Ω. Figure 4.5 shows a main filter cap of 10,000µF, a 0.1Ω resistor and a second 10,000µF cap for the π filter. The cap multiplier shown in Figure 4.2 uses only 4,700µF for the filter cap, yet the multiplier wins hands down. This is despite the smaller filter capacitor and far less overall capacitance. There's less ripple - just over 1V p-p for the π filter, and 246mV p-p for the multiplier. Recovery speed is close to identical, but the cap multiplier drops about 2.7V at 3.6A (the test current in the simulation) and the series-pass transistor will dissipate power (around 10W at 3.6A).
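The dissipation figure is simply the voltage dropped across the series-pass transistor multiplied by the load current. A one-line check in Python, using the simulation values quoted above:

```python
# Dissipation in the series-pass transistor: voltage dropped times load current.
v_drop = 2.7    # volts lost across the cap multiplier (from the simulation)
i_load = 3.6    # test current in amps
p_diss = v_drop * i_load
print(round(p_diss, 1))  # ~9.7W, i.e. 'around 10W'
```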
Of course you can also use an inductor instead of a resistor, and while more effective, it will be large, heavy and expensive. Increasing the resistor value helps a bit, but the multiplier still has lower ripple. Unfortunately, the multiplier has a greater voltage loss and dissipates more power than a simple π filter. As always, there's a trade-off, and the better solution depends on your requirements.

Overall, the capacitance multiplier will be cheaper and smaller, but the need for a heatsink probably negates any cost saving. There are also active semiconductors in the power supply that will always be at risk if there's a short across the power supply. As always, there are compromises, and it's up to the designer to decide which compromise is the 'least worst'. Mostly, there's a great deal to be said for keeping circuitry as simple as possible, provided overall performance isn't compromised.

If a 'traditional' power supply filter is shorted, it's not very good for the capacitor(s) due to the very high instantaneous current, but it will almost certainly survive (and the mains fuse will blow if it's powered on at the time). Do the same with a capacitance multiplier, and it's almost guaranteed that the series-pass transistors will die instantly. Of course, one could add a current limiter, but then there are even more parts, and a PCB would be essential. These are all design considerations that influence a final circuit, and adding parts that make no audible difference to the sound is something manufacturers (and most hobbyists) avoid.

Project 36 (Death of Zen or DoZ) is a simple Class-A amp that can benefit from using a capacitance multiplier, as can many others. To reduce the stress on the series-pass transistor, it's easy (and probably cheaper) to build two capacitance multipliers as shown in Figure 5.1. Each multiplier is designed to provide the required single supply of 30-35V DC. By using separate cap multipliers we also isolate each amplifier, so they are very close to being mono-blocks, with only the power transformer being shared.
Figure 5.1 - Complete Dual Capacitance Multiplier (Single Supply, Darlington Pair)
This scheme is similar to that shown in Figure 3.1, except that both capacitance multipliers are the same. While the earthing/ grounding arrangement has not been shown diagrammatically this time, it's just as important to ensure that there is a single earth point, and care is needed to ensure that no ripple current can be re-injected into the DC via stray earth resistances.
If used with the DoZ amp at higher than normal quiescent currents, you may need to either reduce the 220Ω resistors to around 150Ω or increase (or even remove) the 12k resistors to get 30-35V DC from a 30V transformer. Dissipation in the TIP36 (or whatever you decide to use) will be around 6-7W with a current of 1.7A, so there's not a great deal of heat to dissipate in the heatsink.

Expect the output ripple to be around 1mV RMS or less with a current of 1.7A, with ripple being lower at reduced output currents. With 4,700µF main filter caps as shown, there will be a fairly high ripple voltage on the raw supply, but the output ripple is reduced by more than 50dB when the capacitance multiplier is used.

While it is certainly possible to reduce the ripple even more, it adds cost to the circuit and the benefits are doubtful at best. With a power supply rejection of better than 50dB itself, DoZ should be noise free into even the most sensitive of horns when powered by a capacitance multiplier power supply.

You can even use a pair of positive capacitance multipliers (this also works with regulators) to get both positive and negative outputs. The circuit is shown below.
Figure 5.2 - Positive & Negative Outputs From Two Positive Supplies
This can be useful if you don't have any PNP transistors that are suitable, but want to get your circuit working without having to buy more parts. It falls into the category of 'useful to know', even if you don't use it. It's usually easier to build two identical circuits than to make them complementary. The loads don't see the slightest difference - electrically, it's identical to using a complementary circuit for the negative supply.
A MOSFET-based capacitance multiplier can work very well, but it's not as straightforward as it may seem at first. In the original Project 15 article I showed a more-or-less suitable design, but it's very hard to recommend because of the greater voltage loss. You'll typically have at least 4.5V across the MOSFET, so power dissipation is a great deal higher than with the Darlington configuration. While you should be able to get ripple below 1mV easily enough, the increased power loss makes it far less attractive.

While most implementations I saw still resolutely stick with a first-order (6dB/ octave) filter, a second-order (12dB/ octave) filter still wins for response time and ripple rejection. This is the case regardless of the transistor type used (BJT or MOSFET). Most MOSFETs available now are designed for switching, not linear operation. If you do wish to experiment with a MOSFET version, choose a high-current device with a fairly high RDS-on ('on' resistance), as these are less likely to fail with linear DC operation. Make sure that you check the datasheet, and look at the SOA (safe operating area) for DC operation.

One of the more intractable problems is that the gate draws no current, which would appear to be a good thing. However, because there is no gate current to speak of, transient behaviour as a load is applied is dreadful, with considerable ripple breakthrough. This happens because there is no rapid discharge path for the filter capacitors. When the input voltage falls, the gate remains at a higher than desirable voltage for up to 250ms. There are ways to (at least partially) get around this, but it's not worth the trouble given the higher voltage loss (and dissipation) of a MOSFET vs. BJT circuit.

When this is combined with the greater voltage loss (typically 5V or more depending on the MOSFET and load current) and the correspondingly high dissipation, it is a sub-optimal solution. Using a MOSFET may seem 'high-tech' compared to lowly BJTs, but it cannot perform as well, and the SOA of a MOSFET operated in linear mode is less than ideal.

This is something that hasn't been used as far as I'm aware. There are many caveats of course, mainly due to the high voltages used in valve (vacuum tube) designs. I've not tested or even performed detailed simulations for a high-voltage version, but with the right transistors it should be possible to get a very clean supply. The greatest issue you'll have is finding devices that can handle the voltage and remain within the safe operating area of the series-pass transistor. This is probably an area where a MOSFET is the best choice, provided the SOA is not exceeded.
An IRF840 can provide up to 400mA even with 300V between drain and source, so charging output capacitors shouldn't be a challenge. An output of 1A at 400V is well within its capabilities, which is more than enough for 4 x KT88 output valves. I leave this as something to ponder, as I don't have any high-powered valve amps that are amenable to this type of modification. It could be done of course, but mostly I don't use the valve amps I have, and certainly don't intend to make any serious modifications that will take considerable time for no direct benefit.

The circuit doesn't change much, but all resistor values are increased, capacitor values reduced, and a gate protection zener diode is essential. There is no doubt whatsoever that a well-engineered capacitance multiplier will provide less ripple and better overall performance than the commonly used C-L-C filter, but people who build valve amps generally prefer 'traditional' techniques, and will avoid using transistors (or any other semiconductor) as a matter of principle.

I must say that I find this more than a little depressing. The point of engineering is to use the best solution to a problem, and if using some semiconductors in a valve amplifier improves performance, then that's what should be done. There's nothing 'magic' about valve rectifiers (quite the opposite in fact), and if a capacitance multiplier or regulator gives lower ripple and better performance than an inductor, then that's the optimum solution. It's a different matter if it's a restoration, since originality is a requirement, but for a new build you should use the best circuit for the task. If that means a hybrid of valves and transistors, then so be it (and it will be cheaper as well as performing better).
A capacitance multiplier isn't the only way to get very low ripple. In valve amplifiers, filter chokes (inductors) are still very common, and ripple attenuation is good, but far from perfect. The so-called 'pi' filter (so named because it resembles the Greek letter π) works well, and it suppresses ripple without excessive voltage loss. For the example shown below, the ripple across C1 can be as much as 10V peak-peak, so the available voltage is less than 28V DC at 3.2A output, with the lowest voltage determined by the ripple. Adding a filter choke of only 100mH raises the minimum voltage to 33V with the same current. The only other way to get a higher average DC output is to use a much larger filter capacitor (at least 10mF).

For the example shown below, the transformer has a loaded output voltage of 27.5V AC, with a DC output of 31.8V at 3.18A (10Ω resistor). Without the resonant filter (no inductor and capacitor, but with 2 x 2,200µF filter caps) the output ripple is 1.5V RMS, or 4.74V peak-to-peak (p-p). The DC output is a little higher because there's no series resistance, so the voltage is 32.5V at 3.25A.

At 100Hz (rectified 50Hz), the 100mH inductor has an impedance of 62.8Ω, but (at least for this example) a resistance of 0.1Ω. This was used for simulation, but in reality the resistance will be considerably higher. Without CR (the parallel resonance capacitor) ripple is reduced to 33mV RMS (95mV p-p). Adding CR reduces this to 7mV RMS (23mV p-p, and at 200Hz). You may well ask why the frequency is doubled, and the answer is simple - the 100Hz 'fundamental' is all but eliminated, leaving only the 2nd harmonic. For the particularly fussy, you could add another parallel resonant filter, but tuned to 200Hz (50mH || 12.5µF).

You could be excused for thinking that CR would be subjected to high current, but it's not. Even with the maximum load used in the simulation (3.18A), the capacitor's ripple current is only 51mA. Of more concern is stability over the years, as electrolytic caps aren't known for short or long-term accuracy. For this reason, a film capacitor would be preferred, but this adds more cost and bulk. The inductor will be fairly substantial, as it must carry the full DC without the core saturating. While this approach appears to offer many advantages, they disappear quite quickly when you try to source the components. The values for LR and CR are critical for good performance. The frequency is determined by ...
f = 1 / ( 2π × √( L × C ) )

Alternatively, if you know the frequency and inductance, determine the capacitance ...

C = 1 / ( f² × (2π)² × L )
The tuning needs to be as close as possible to the ripple frequency, and the values shown are correct for 50Hz mains (100Hz ripple). For 60Hz mains, CR must be reduced to 17.6µF. Either value will need to be made up from paralleled caps, and fine-tuned to get resonance as close as possible to 100Hz or 120Hz as required. Don't expect L1 to have exactly the claimed inductance, as tolerance is usually fairly broad (expect ±10% for commercial products). Adding the parallel capacitor should result in a ripple reduction of around 12dB compared to a traditional pi filter (with the same values for all other components).
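The capacitance formula above can be checked with a short Python sketch, using the 100mH choke from the example. It reproduces the quoted CR values for both mains frequencies:

```python
import math

def resonant_cap(f_hz, l_henry):
    """C = 1 / (f^2 x (2*pi)^2 x L), from the formula above."""
    return 1 / (f_hz ** 2 * (2 * math.pi) ** 2 * l_henry)

L1 = 0.1  # the 100mH choke used in the example
c_50hz = resonant_cap(100, L1)   # 50Hz mains -> 100Hz ripple
c_60hz = resonant_cap(120, L1)   # 60Hz mains -> 120Hz ripple
print(round(c_50hz * 1e6, 1), "uF")  # ~25.3uF for 100Hz
print(round(c_60hz * 1e6, 1), "uF")  # ~17.6uF for 120Hz
```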
Based on a simulation, the ripple without the resonant capacitor is 29mV RMS at full load (10Ω). With the resonant cap in circuit, this is reduced to 5mV, with the ripple frequency increased to 200Hz (50Hz mains). That's a reduction of 15dB. The notch filter created removes almost all 100Hz ripple, and the output consists only of the harmonics. The amplitude of these is not increased, but they are what's 'left over' after the 100Hz ripple is removed.
Figure 8.1 - Parallel Resonant Filter PSU
It's likely that many readers will wonder why this arrangement isn't used all the time, since it's so effective. The answer is quite simple, and it's almost certainly not what you want to see. Like capacitance multipliers, filters incorporating inductors are best suited to continuous loads. If the load current varies, the resonances created by the inductor, filter capacitors and/ or the series filter interact to create unwanted peaks and dips as the load changes. With only an inductor, the resonant circuit consists of C1, L1 and C2, with the two capacitors effectively in series. Resonance therefore occurs at 10.7Hz, and the presence (or otherwise) of CR doesn't change this. While it may not seem possible to have two different resonant frequencies from a single inductor, it happens because the overall topology allows it - the 'basic' LC filter and the (deliberate) resonant LC filter tuned to 100Hz act independently of each other.
The step response was simulated by switching the second 20Ω resistor in and out of circuit at a 1Hz rate (500ms on, 500ms off). There's an initial 'spike' which rises to 48V (not shown), and it's expected with any filter using an inductor. This is often a 'deal-breaker' in itself, because the over-voltage can damage circuitry. The fact that the voltage dips to 25V when the load is applied and peaks at 40V when the load is removed is a characteristic of filters using inductors, so they are usually only suited to loads that either change slowly or not at all. Oddly enough, music generally changes slowly enough that problems are averted. The resonant frequency must be lower than the lowest signal frequency of interest to prevent unwanted interactions.
Figure 8.2 - Resonant Filter Step Response
The step response shown above is identical, regardless of whether CR is connected or not. For the simulation, the current was varied by a factor of two. At 'half' load (20Ω, 1.75A), the output voltage is 33V DC, falling to 29.5V with the full 10Ω load. One thing that an inductor or resonant filter can do is provide more DC voltage than a capacitance multiplier with similar performance. Because the inductor is reactive, it stores energy during peaks and releases it during troughs. You must be mindful of the resonance created by the inductor and the two filter caps. With the values shown, resonance is at 15Hz.
With just the two 2.2mF caps in parallel, the minimum voltage is 30.2V at 3A, and a capacitance multiplier would reduce that further. However, one must consider that the filter choke will be large, heavy and costly. This is the main reason they're not used in low-voltage, low-frequency power supplies. High-value capacitors are comparatively cheap and don't take up much space, and that's the approach that's used in almost all power supplies used for audio. Switchmode power supplies are a different matter, and the pi filter is very common to reduce output noise. The switching frequency is high, so the inductance needed is small, and it's common to see powdered-iron toroidal inductors in this role. Inductor-capacitor resonance peaks are (hopefully) dealt with by the feedback circuit.
This last version is an oddball - I tracked down where it came from [ 4 ], but I first saw it when it was sent to me by a reader who erroneously thought he'd seen it on my site. The idea is that the incoming noise is inverted and used to counteract the input noise, with the hope that it will be equal and opposite. This can only happen properly under very limited conditions, but it is an interesting approach. If the transistor and other circuitry is left out, leaving only the 15Ω resistor and 220µF capacitor (R1 and C4), the performance is almost as good.

Very low noise isn't needed for most audio work, as the amplifying device will usually be an opamp, and these have good power supply rejection. For very low-level RF (radio frequency) amplifiers, getting ultra-low noise is often a requirement. There are several very low noise LDO regulators, but most have a limited input voltage (often no more than 5V). A noise-cancelling circuit such as that shown below might be useful, but a conventional RC filter will be sufficient in many (most?) cases. The original circuit does not include C4, which reduces the effectiveness rather dramatically, and may cause misbehaviour in the powered circuitry.
Figure 9.1 - 'Anti-Noise' Circuit
Q1 must be selected to suit the likely current, but for a nominal 15V low-current supply a BC549 should do nicely. The test load was 390Ω, providing a current draw of about 37mA. The output voltage was reduced by about 0.68V, as expected with the load and the current through Q1 (about 5mA). The circuit works by inverting the noise and feeding that back to the output such that the 'anti-noise' cancels the regulator's noise. If noise and 'anti-noise' are equal and opposite they cancel.
The noise reduction (at 1kHz) with the values shown was 23dB (without C4), and with only R1 and C4 that increased to just over 26dB. With both the 'anti-noise' circuit and C4, that was increased to 55dB. It's far simpler to use only a resistor (R1) and a bigger capacitor for C4. Alternatively, use an LM317 regulator, which is much quieter than the 78xx series. In theory, this circuit can achieve impressively low noise, but a simple RC filter is a great deal easier.

It's possible that some people may find this useful, and it's equally possible that no-one will bother. Performance can be improved by changing the values of R1 and R5, and it's also possible to boost the performance by using a more accurate inverter. In the end, you may be able to get a circuit such as this to almost equal a low-noise regulator, but with many more parts. It is possible to get complete cancellation of any ripple or noise, but the end result is far more complex than a cap multiplier or regulator, and it may be considered a waste of components.

A negative version can be made by using a PNP transistor, and reversing the polarity of all capacitors. One thing that this circuit does show is that it's possible to reduce low-frequency noise by inverting and summing it, so it's still an interesting technique. Ideally, the transistor inverter's gain will be made variable to allow complete cancellation. I've not bothered to provide any waveforms as I don't think the idea is particularly useful, but someone may find it suits their application. Then again ...

It's no accident that the suggested values shown here are the same as those recommended in Project 15. That design was used as the 'inspiration' for this article (including some of the original text), with the difference being that this goes into more detail for the design process. This includes proper transformer sizing and more accurate simulations than were available to me when the project was published.
As noted in the introduction, the term 'capacitance multiplier' is a misnomer. If that were the case, the gain (hFE) of the transistors (or MOSFET transconductance) would need to be factored into the equation to determine the filtering effect. That isn't the case at all, because all you have is a voltage follower (the transistors) fed by a passive filter. No part of the circuit qualifies as a 'multiplier'. The only place where the transistor's gain comes into play is when determining the resistor values, as low gain means that the resistors supplying base current have to pass more current (lower resistance). This means that caps have to be larger to get the same ripple reduction.

Capacitance multipliers aren't particularly common now, since most Class-AB amplifiers use dual supplies, and most have very good ripple rejection. However, for a Class-A amp or anywhere else that you need a very clean (ripple and noise free) voltage, they're hard to beat. Because the voltage drop across a cap multiplier circuit is low, there's very little heat to deal with, but of course the output voltage varies along with the input voltage. This includes the effect of transformer regulation, which is always much worse than the datasheet figure because of the rectifier and filter capacitor.
A capacitance multiplier is not something that suits all circuitry, and this is especially true with dynamic loading. Therefore, I don't recommend that you use one with Class-AB amplifiers, because the current varies so much. A Class-A amp may have a 2:1 current variation, but for Class-AB it can be over 100:1. This is not what capacitance multipliers are designed for.

The primary benefit of a capacitance multiplier is that you can almost eliminate ripple, without having to use insanely large amounts of capacitance. They have a relatively low voltage drop, at least compared to a regulator, and the output voltage follows the input voltage with both load current and mains variations. This may be seen as a disadvantage, but it also happens with a simple capacitor filter, along with increased ripple as the current increases. The multiplier provides excellent filtering at any output current within its design range, which is its primary reason to exist. A cap multiplier is not intended to replace a regulator, although the circuitry can appear almost identical to a simple zener diode regulated supply.

These circuits are not for everyone, and most of us don't need to use them. If you have a Class-A amplifier, you probably have a good reason to try the version that meets your needs. The calculations are mostly straightforward, but sizing the transformer will often be an issue, especially if it has poor regulation. The only way to improve this is to use a bigger transformer, as regulation improves as the VA rating increases. A cap multiplier stage can also be added to a regulator (in a bench supply for example) to reduce ripple breakthrough.
1 - JLH Capacitance Multiplier
2 - The Capacitance Multiplier (AudioXpress, February 2021)
3 - BD139/140 and TIP35/36 Transistor Datasheets
4 - Finesse Voltage Regulator Noise! (Wenzel Associates)
Elliott Sound Products - Capacitor Characteristics
It's often said that capacitors provide 'energy storage', but in reality, many used in audio circuits do nothing of the kind. Energy storage is certainly true for caps used in power supplies or to bypass the supply rails of power amps or opamps (for example), but caps that are used for coupling a signal and blocking DC (or simply as a safety measure should DC ever become present) perform no 'energy storage' at all, other than accidentally. The AC presented to one side of the cap is coupled through to the other side, and if the cap is large enough (compared to frequency and circuit resistance), it will never have any appreciable voltage across it. With no voltage, there is no stored energy. There will always be a tiny voltage present, but it's generally small enough to be ignored in an analysis.
In the light of this simple fact, it's very hard to know why such a great deal has been made of the 'sound' of capacitors. In most cases, these debates are centred on coupling caps, which (as noted above) generally have very little signal voltage across them. Dielectric losses (dissipation factor, dielectric absorption) feature heavily, with some fairly outrageous claims made as to the importance of these losses in amplifiers and other audio equipment.

Signal capacitors (as opposed to power supply 'storage' caps) work their hardest when used in filter circuits. This applies for active and passive filters, but caps used in passive loudspeaker crossover networks have to carry high current and often (relatively) high AC voltages as well. These need to be rated accordingly, and although there are bipolar (aka non-polarised) electrolytic caps sold for the purpose, IMO they are suited only for systems where fidelity is not a major concern. It's generally accepted that polypropylene is the optimum dielectric for this (and similar) applications, but for lower powered systems polyester is usually quite alright. Electrolytic capacitors (whether polarised or not) change their value over time, and are simply not suitable for high fidelity systems. In active filters (typically opamp based), the caps generally have very low current (a couple of milliamps at most) and low voltages. There is no need for 'special' caps in this application, but they should still be metallised film types (not high 'K' ceramics - ever!).

There are sites on the Net showing that different caps have different properties, and this is often used as 'proof' by many people that the differences are audible. There are sites that seem to have impeccable credentials, but have managed to create nothing but FUD (fear, uncertainty & doubt) with wild claims of irreparable damage to the signal by using the 'wrong' kind of cap ... even as a supply bypass (yes, it's true - this claim has been made). In some cases you will read things like "listening tests have indicated ... (blah, blah, blah)". But where is the data? Who conducted the test? How was it conducted? Was the test ever really conducted at all? Most claims of this nature indicate that there is a hidden agenda, so beware. Guitarists are one group commonly targeted by snake-oil vendors (this may include famous manufacturers!).
The search for 'tone' often involves esoteric capacitors, with some people imagining that if they could just find the 'right' capacitor they will sound like <insert famous musician of choice>. This is a fool's errand, but is often reinforced by others with the same mindset. The 'right' capacitor simply does not exist. The value of a cap affects what it will do to the 'tone' of a guitar (for example), not its physical appearance or imagined 'magical' characteristics. There is no magic, just physics.
Something that is often missed completely is that capacitors used for signal coupling must have a very low impedance for all frequencies that one expects to pass through the system, and in general, the impedance (capacitive reactance) should normally be less than half the circuit impedance at the lowest frequency of interest. For example, a coupling cap used at the input of an audio amplifier may have a value of 1µF, with a following resistive load of 22k (this is fairly common in ESP designs).
The capacitor has a reactance of 7.9k at 20Hz, and 22k at 7.2Hz (this is the -3dB frequency). At this frequency, if 1V is applied to the input, 707mV will be 'lost' across the cap, and the amplifier will get an input signal of 707mV. The voltages don't add up to 1V because of the phase shift between them. This is quite normal, and causes no problems. A double blind test of any two capacitors of the same value and reasonable construction will not reveal any audible difference - even if the music has significant very low frequency content, and the loudspeakers can reproduce it. At 40Hz, the capacitor has a reactance of just under 4k, and at 1kHz this has fallen to 159Ω. At 10kHz, the reactance is only 15.9Ω! These relationships hold reasonably accurately at all voltages and circuit impedances.
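The figures quoted above all follow from the standard reactance formula, Xc = 1 / (2πfC). A short Python sketch for the 1µF/22k example (illustrative only):

```python
import math

def xc(f_hz, c_farad):
    """Capacitive reactance Xc = 1 / (2*pi*f*C), in ohms."""
    return 1 / (2 * math.pi * f_hz * c_farad)

C = 1e-6    # 1uF coupling cap
R = 22e3    # 22k following load
f3 = 1 / (2 * math.pi * R * C)   # -3dB point, where Xc equals R
print(round(f3, 1))              # ~7.2Hz
for f in (20, 40, 1000, 10000):
    # reactances of ~7.9k, ~4k, 159 ohms and 15.9 ohms respectively
    print(f, "Hz:", round(xc(f, C), 1), "ohms")
```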
Note: Understand that if there is close to zero voltage across any capacitor, then it stands to reason that there will be close to zero distortion 'generated' by the capacitor - including those that are claimed to have high distortion. When used for AC coupling (DC blocking), no properly sized cap will ever have more than a few millivolts per volt of AC across it. This is easily measured or simulated, and the results are quite conclusive. Claims that caps will 'damage the sound' are common, and generally false unless a completely inappropriate part has been used!
Dielectric losses (dielectric absorption and dissipation factor are lumped together for my analysis) are blamed for 'smeared' high frequencies, thus implying that as the frequency increases, the problem gets worse. However, as the frequency increases, the amount of signal across the cap falls, so at the highest frequencies the capacitor is effectively almost a short circuit. The influence of any coupling capacitor diminishes as frequency increases, and is most significant at the lowest frequency of interest.

These effects are examined by a combination of simulation and actual testing - and to alleviate any concerns, no components were harmed in the production of this article (sorry). Simulation features heavily here, simply because most of the effects are extremely difficult (some are almost impossible) to measure. The resolution of the simulator is far greater than any known test instrument, but one has to be careful to ensure the models used act in the same way as real components.

Figure 01 shows the general form of construction for a capacitor. The plates shown may be metal foil, or more commonly for most caps, a metallised film. This is very thin and typically long and narrow; it is then rolled up and encapsulated. In some cases, the cap is made flat, with interleaved plates and dielectric. This allows the maximum capacitance for a given volume.
Figure 02 shows the general construction of a multilayer cap, and is also representative of the cross-section of a traditional wound capacitor. With some capacitors, one end is marked with a band or is otherwise indicated as the outer foil. This can be useful for sensitive circuits, where the outer foil (or plate) end may be connected to earth (ground/ chassis) to shield the capacitor against interference. This is usually only ever needed in very high impedance circuits, or where there is considerable external noise. Note that if the cap is used in series with the signal, the 'polarity' (i.e. outer foil) is usually unimportant. There may be some isolated cases where this is not the case, but they will be few and far between.

Note the way that the edges of the foil are joined. This prevents the signal from having to traverse the length of the plates. Because one edge of each 'plate' is joined in a 'mass termination', only the width of the plates (i.e. between the terminations, plus lead length) is significant for inductance.

The capacitance of a pair of plates is determined by the formula ...
C = 8.85E-12 × k × A / t

where C is capacitance (Farads), k is dielectric constant, A is area (m²) and t is dielectric thickness (metres)
So, for example, a pair of plates of 0.01m² area, separated by 10µm of insulation with a dielectric constant of 3 (e.g. polyester), will have a capacitance of 26.55nF. These plates might typically be a metallised layer of 10mm width and 1m length [ 1 ]. While this is probably not very useful, it may come in handy one day (or perhaps not). The dielectric thickness is mainly determined by the voltage rating and the withstand voltage of the dielectric material.
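The worked example can be verified directly from the formula above. A minimal Python sketch (illustrative only):

```python
EPS0 = 8.85e-12  # permittivity of free space, F/m (the 8.85E-12 in the formula)

def plate_capacitance(k, area_m2, thickness_m):
    """C = eps0 x k x A / t for a single pair of plates."""
    return EPS0 * k * area_m2 / thickness_m

# Worked example from the text: 0.01 m^2 plates, 10um polyester dielectric (k = 3)
c = plate_capacitance(3, 0.01, 10e-6)
print(round(c * 1e9, 2), "nF")  # ~26.55 nF
```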
Typical values for k (dielectric constant) and dielectric strength (withstand voltages) are as follows ...
Material | k (Dielectric Constant) | Dielectric Strength
Vacuum (reference) | 1.00000 | 20 - 40 MV/metre
Air (sea level) | 1.00059 | 3.0 MV/metre
Aluminium Oxide | 7 - 12 | 13.4 MV/metre
Ceramic | 5 - 6,000 | 4 - 12 MV/metre
Mica | 3 - 6 | 160 MV/metre
Polycarbonate | 2.9 - 3.0 | 15 - 34 MV/metre
Polyethylene | 2.25 | 50 MV/metre
Polyester/ Mylar/ PET | 2.8 - 4.5 | 16 MV/metre
Polypropylene | 2.2 | 23 - 25 MV/metre
Polyphenylene Sulfide (PPS) | 3.00 - 5.45 | 11 - 24 MV/metre
Polystyrene | 2.4 - 2.6 | 25 MV/metre
Teflon | 2.0 | 60 - 150 MV/metre
Kapton | 4.0 | 120 - 230 MV/metre
Paper | See Note | See Note
Snake Oil | Unknown/ Variable | Unknown/ Variable
Notes:
Paper is never used by itself, and the dielectric constant and strength depend mainly on the material used for impregnation. Foil + paper + oil caps are used for high current and/or high voltage applications.
The dielectric strength can be determined for any thickness of material by dividing the MV/metre figure by one million to obtain the dielectric strength in V/µm, then multiplying by the thickness in µm. For example, PET has a dielectric strength of 16V/µm, so 400V for 25µm (0.001").
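The conversion in the note above can be sketched as follows (a simple helper of my own, not from the article):

```python
# Withstand voltage of a film from its dielectric strength:
# 1 MV/metre equals 1 V/µm, so multiply by the thickness in µm.

def withstand_voltage(strength_mv_per_m, thickness_um):
    """Approximate breakdown voltage (volts) of a film of given thickness."""
    v_per_um = strength_mv_per_m  # 1 MV/metre = 1 V/µm
    return v_per_um * thickness_um

# PET at 16 MV/metre, 25 µm (0.001") thick
print(withstand_voltage(16, 25))  # → 400 (volts)
```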
This is just a small sample - see references for more. Only a few of the vast number of dielectrics available are useful, and only some of these are listed above. Of the many sites that give this information, there is considerable variation for many materials - this is to be expected because of the range of different material formulations, even within the same chemical compound group. PET (polyethylene terephthalate) is often used or referred to interchangeably with polyester/ Mylar. The term 'PET' is commonly associated with drink bottles, which are made from the same family of (or an identical) thermoplastic. Mylar is a trade name for PET (owned by DuPont). PPS is common for SMD film capacitors, and is normally limited to relatively low voltages (up to 50V is common). It's claimed to be very stable, and to have a low dissipation factor and ESR.
Note that Kapton® (Polyimide) has been included because it's useful to insulate transistors from heatsinks and because it's something of a benchmark for other insulating materials. It is used for capacitors for specialty applications, in particular those intended for very high temperature operation (up to 250°C) [ 11 ].
The dielectric strength column is a bit of a stab-in-the-dark I'm afraid, as it proved to be very difficult to find reliable information (there are no references because the info I could find came from a wide variety of different places). There are many references for some materials, almost nothing for others, and many are conflicting.
The dielectric strength (also shown as breakdown or withstand voltage in some texts) is typically (but not always) rated in MV/m (million volts per metre of thickness), and 1MV/ metre equates to 1V/ µm. The figures are not absolute, and they can vary considerably depending on temperature, frequency and electrode shape (amongst other factors). Different websites have (often very) different interpretations, and the figure is intended as a guide only. For example, in my search I found polyester (aka Mylar) rated at 7,500V/ mil (1/1,000th inch = 25.4µm) which equates to nearly 300MV/ metre, where the figure I've shown is 16MV/m. It's probable that the higher figure is not correct. Interestingly however, it appears that the dielectric strength actually improves as the material is made thinner. It's almost impossible to get conclusive evidence for this, but it is shown in the occasional data sheet for insulating films.
Snake oil has been included in the table, but there is no actual data associated with it and none can be found on-line. Yes, this is in jest, but as you may discover, there is a great deal of snake oil used in the audiophile capacitor industry.
The generalised equivalent circuit of a capacitor is shown in Figure 03. The nominal capacitance is the value of Cnom, with ESR and ESL (equivalent series resistance and inductance) in series. The parasitic capacitances (C1 - Cn) and their series resistances represent the dielectric loss (resistance) and dielectric absorption (capacitance). In principle this network is infinite, with ever-diminishing capacitance and series resistance. Figure 1.3 shows values I used for simulation purposes.
It is important to understand the equivalent circuit of any component, because this allows you to simulate or measure the effects with the 'flaws' greatly accentuated. In many cases, it is not necessary to do either, since the effects will be quite obvious once seen for what they are. This excludes non-linearities, because they can't easily be modelled and are (by definition) non-linear in a variety of different ways, varying with time, temperature and voltage.
Be aware that capacitors can easily be damaged if the current through them is too high. This comes into play when caps are used in snubber networks in switchmode power supplies, where the average current may only be a few milliamps, but the pulse current can be a great deal more. General-purpose caps (e.g. metallised film) will almost certainly fail because the metallised layer is very thin, and it cannot handle high current. The end termination is another point of failure with high current, so caps expected to survive should be selected based on the maximum dV/dt (rate of change of voltage over time, in V/µs - volts per microsecond) they can handle. As an example, a 100nF capacitor with a dV/dt rating of 1,200V/µs will pass 120A during a transition at the maximum rate (assuming a zero-ohm source). Special capacitors are made for this kind of duty, usually with a paper or polypropylene dielectric.
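The pulse current figure follows directly from I = C × dV/dt. A minimal check (function name is mine):

```python
# Peak capacitor current during a fast voltage transition: I = C × dV/dt.
# Reproduces the snubber example above (100 nF at 1,200 V/µs).

def pulse_current(c_farads, dv_dt_v_per_us):
    """Peak current (amps) for a capacitance and a dV/dt given in V/µs."""
    return c_farads * dv_dt_v_per_us * 1e6  # convert V/µs to V/s

print(f"{pulse_current(100e-9, 1200):.0f} A")  # → 120 A
```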
There are a few formulae that you will always need when working with capacitors. They are pretty common, and are shown in many articles and projects on the ESP website. By far the most common is the formula to determine capacitive reactance, written as Xc ...
Xc = 1 / ( 2π × f × C )    where π is approximately 3.1416, f is frequency (Hz) and C is capacitance in Farads
The capacitive reactance of a 1µF capacitor at 50Hz is 3.183k. With reactive components (capacitors and inductors), the frequency is an essential part of the formula, because reactive components are frequency dependent. The simple formulae only work as expected at low frequencies (e.g. audio), because parasitic inductance, capacitance and resistance will affect behaviour at high frequencies - typically 100kHz to 1MHz and above, depending on physical characteristics of the part(s).
One you don't see often lets you calculate the current through a capacitor, knowing the frequency and voltage. This assumes a sinewave, and it does not work with pulse waveforms. There is a great deal more that you need to know about the waveform and the capacitor itself if you need to calculate the current for any non-sinusoidal waveform (i.e. any waveform that is not a sinewave) ...
Ic = 2π × f × C × V
For example, a 1µF capacitor with an applied voltage of 230V RMS at 50Hz will pass 72.26mA. You get the same answer by dividing the voltage (230V) by the reactance (calculated above to be 3.183k).
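Both formulae can be checked in a few lines of Python (function names are mine); the two routes give the same current:

```python
import math

# Capacitive reactance Xc = 1 / (2π f C) and sinewave current Ic = 2π f C V.

def xc(f_hz, c_farads):
    """Capacitive reactance in ohms."""
    return 1 / (2 * math.pi * f_hz * c_farads)

def cap_current(f_hz, c_farads, v_rms):
    """RMS current (amps) through a capacitor for a sinewave voltage."""
    return 2 * math.pi * f_hz * c_farads * v_rms

reactance = xc(50, 1e-6)        # 1 µF at 50 Hz → ~3.183 kΩ
i = cap_current(50, 1e-6, 230)  # 230 V RMS mains → ~72.26 mA
print(f"Xc = {reactance:.0f} ohms, Ic = {i * 1000:.2f} mA")
# Dividing 230 V by the reactance gives the same answer.
```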
If you need all the formulae and the method used to transpose any formula, see Beginners' Guide to Electronics - Part 1. The article is intended for anyone who is starting out and who isn't an expert in (or has forgotten) algebra.
Something else that is potentially revealing is to calculate the worst case rate-of-change (slew rate) of the audio signal. A 150W (8Ω) amplifier has a peak output of a little under ±50V. If we imagine that it must pass full power at 20kHz, the slew-rate is only 6.28V/µs. The slew rate can be determined with the following formula ...
Slew-Rate = 2π × VPeak × f / 10⁶ (V/µs)
Slew-Rate = 2π × 50 × 20k / 10⁶ = 6.2832 V/µs
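The same calculation as a one-line Python helper (mine, not from the article):

```python
import math

# Worst-case slew rate of a sinewave: SR = 2π × Vpeak × f, scaled to V/µs.

def slew_rate_v_per_us(v_peak, f_hz):
    """Maximum rate-of-change (V/µs) of a sinewave of given peak and frequency."""
    return 2 * math.pi * v_peak * f_hz / 1e6

# 150 W / 8 ohm amplifier (~50 V peak) at full power, 20 kHz
print(f"{slew_rate_v_per_us(50, 20e3):.4f} V/us")  # → 6.2832 V/us
```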
This is never achieved in practice with music, and even if it were it's a fairly leisurely rate-of-change. Switching circuits operate at much higher speeds, and the rise/ fall times are often measured in nanoseconds. It's not unusual to measure the rate-of-change in kV/µs, something that no linear audio amplifier will ever achieve. With these very high switching speeds, 'ordinary' capacitors are not suitable, and the dielectric and construction must be considered carefully. Linear audio never comes close, and 'ordinary' capacitors are rarely found wanting.
A Class-D amplifier may have a slew rate of more than 300V/µs, and likewise a switchmode power supply. The only capacitors that can survive that kind of energy are specialised high pulse current types, which may be metallised film or film+foil types. They are selected to have very low dielectric loss, and must be rated to carry the peak current. This is rarely necessary for Class-D amps, but some switching power supplies demand the highest performance or the caps will fail. Audio requires no such thing, but high current capability may be necessary for passive crossover networks in high-power speakers.
When anything 'unusual' is required, read specifications, and select according to the requirements. There should be no guesswork involved, because everything you need to know is available.
The first thing to understand about dielectric loss, residual charge, series resistance and inductance, and all the other ills that afflict capacitors, is that they are quite normal, and appear in all real-world components. What is at issue is whether these cause a problem for normal audio signals at normal levels. There is no point testing capacitors by placing a 70V AC signal across them if this will never happen in the circuit being investigated. There is even less point doing this with multilayer ceramic capacitors that are rated at 50V DC and are designed specifically for supply rail decoupling! (Yes, it's been done as 'proof' of ... something.)
While coupling capacitors are a primary target of the upgrade brigade, these are the most benign because of the very low voltages across them. Capacitors used in filter circuits are deliberately selected so that they cause the signal to roll off at the selected frequency, and this will be examined later in the article.
First, let's look at the voltage across a 1µF coupling cap connected to a 22k input impedance amplifier. At 40Hz, this is only 177mV for a 1V input, and by the time we get to 10kHz the voltage across the cap is down to 723µV. This is shown in Figure 1.1, and Figure 1.2 shows the circuit used for the measurement.
The test circuit is shown below. It is simply a matter of measuring the voltage across the capacitor and the resistor. With a 1V RMS applied signal, each will measure 0.707V when the capacitive reactance is equal to the resistance. This is the low frequency -3dB point.
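The voltages quoted above follow from treating the cap and resistor as a simple divider. A numeric sketch (ideal cap, function name mine) reproduces them within rounding:

```python
import math

# Voltage across a series coupling cap feeding a resistive input impedance.
# The cap and resistor form a divider: Vc = Vin × Xc / sqrt(Xc² + R²).

def v_across_cap(f_hz, c_farads, r_ohms, v_in=1.0):
    """RMS voltage across an ideal series cap driving a resistive load."""
    xc = 1 / (2 * math.pi * f_hz * c_farads)
    return v_in * xc / math.hypot(xc, r_ohms)

# 1 µF into 22 k, 1 V RMS applied
for f in (7.23, 40, 10e3):
    print(f"{f:>8} Hz: {v_across_cap(f, 1e-6, 22e3) * 1000:.3f} mV")
# Near 7.23 Hz (the -3 dB point) each element carries ~707 mV;
# at 40 Hz ~178 mV and at 10 kHz ~0.72 mV, in line with the figures quoted.
```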
Now, the caps used in a simulator are 'ideal' in that they do not have dielectric loss, series resistance, insulation resistance (leakage) or any other undesirable parameters of a real capacitor. A simulated cap with these real parameters included is shown in Figure 1.3. The ESR (equivalent series resistance) is much higher than an actual cap, ESL (equivalent series inductance) is about typical, leakage resistance (via the parallel resistor) is much lower than reality at 100MΩ, and the dielectric loss components (the string of smaller caps with series resistance) deliberately exceed those of most normal capacitors. This sub-circuit behaves like a capacitor with quite high losses - certainly it would be completely unacceptable as a tuning cap in a circuit operating at very high frequencies. In short, this is a dreadful capacitor. Perhaps these shortcomings might make it 'sound better', but it would need to be very expensive and perhaps unreliable to gain true acceptance. Just do a Web search for 'Black Beauty' capacitors - these are notoriously unreliable (especially early 'NOS'), sometimes unbelievably over-priced, and should be avoided for anything more technologically advanced than land-fill (and yes, I do have personal experience with them).
This lossy capacitor (which is worse than any typical real-world component) is next used in the same circuit shown in Figure 1.2. When we look at the amplifier signal (across the 22k resistor) the frequency shifts up by 11mHz (milliHertz) and there is a loss of 3.3mdB (yes, milli decibel) at 10Hz, with a loss of 4mdB right through the audio band and up to 1MHz. This can be considered utterly insignificant. The vast majority of the loss is caused by the series resistance (which is exaggerated for clarity). Lest anyone think that the dielectric loss or leakage resistance may cause a phase variation, that too is insignificant. The phase angle at 10Hz is just under 36°, with the lossy capacitor being 0.047° different. Again, at higher frequencies there is no significant difference.
Ok, so there is very little change in overall performance when the lossy cap is used for coupling, but the losses should really mess up a filter circuit, right? Wrong! There is virtually no difference at all. Although the difference can be seen with the simulator, most affordable real instruments don't have sufficient resolution to be able to see it. The difference between a 24dB/octave crossover filter built using ideal and lossy capacitors is so small as to be insignificant. The frequency changes by 1Hz, and the voltage difference at the crossover frequency is 0.044dB (44mdB). Variations many times this size will result from normal component tolerances, and even stray capacitance on the PCB itself could easily exceed the variation seen by the simulator. There is little to be gained by showing graphs with perfectly overlaid curves, but should anyone want to do their own simulations there is more than enough information here to allow that.
It is important to understand that the lossy capacitor appears (electrically) as an infinite number of small capacitors, each with its own series resistance. This can be built using real capacitors, with a lumped parasitic capacitance of perhaps one tenth of the value of the actual capacitance. Use a 1 megohm resistor in series with the 'parasitic' cap, using the general scheme shown in Figures 03 and 1.3. The 'losses' in this capacitor are far greater than any metallised film cap, yet using it in a circuit will not degrade the performance one iota. Dielectric absorption simply does not affect the way a capacitor passes the signal. Dielectric loss becomes a problem when significant (high frequency signal) voltage appears across the capacitor, but is rarely even measurable as a loss at audio frequencies and at levels typical of audio systems.
Dielectric loss/ absorption becomes very important for capacitors used in RF (radio frequency) circuits, and likewise for switchmode circuits. The losses accumulate and can easily reach the point where the cap gets very hot and fails. These issues are of no concern for audio because the frequencies are too low (and the amplitudes too small) to cause problems.
So, we can conclude from this that the dielectric losses do not cause massive variations - in fact the variations are infinitesimal. But ... what of the charge storage of the dielectric? This is the phenomenon that allows a cap to recover some of its original charge due to 'dielectric absorption' (also known as 'soakage' [ 6 ]). This is part of exactly the same phenomenon that creates capacitor 'losses'. The lossy cap shown above has that effect too, and this is shown in Figure 1.1.1.
The test circuit is shown below. This is a fairly standard test, but unless you are building a very low frequency filter or high accuracy sample and hold circuit, the effect is rather meaningless. It is interesting though. The simulated capacitor is the same as the lossy version shown in Figure 1.3. The official military specification test circuit for MIL-C-19978 (the test for dielectric absorption) uses an opamp wired to give almost infinite input impedance, because standard digital multimeters will not allow a useful measurement. The typical input impedance of a digital meter is 10 or 20MΩ, and normal audio circuit impedances are much less than that - consequently any 'problems' caused by dielectric absorption will also be much lower than specifications indicate.
The capacitor is charged for 500 seconds using SW1, then discharged (for one second) by SW2. After the discharge, the voltage is seen to rise again, even though it was obviously zero for the duration of the short. Real caps do exactly the same thing, and if they were used in circuits having close to infinite impedance, it would be a problem. In long period sample and hold circuits, dielectric absorption is a problem, but in audio circuits it causes an almost immeasurably small loss of signal. Nothing more.
Once the cap is loaded with normal circuit impedances, the effect goes away almost completely. This assumes that caps will be charged then discharged in an audio system, but as covered above, that does not happen in normal audio circuits. Even in filter circuits, the effect is negligible - dielectric absorption does not magically create reverberation, sub-harmonics, background 'glare', 'whiteness' during silent passages, image smearing, ingrown toenails or cardio-vascular disease. Again, all this particular 'audio nightmare' (as some might have you believe) achieves is a tiny loss of signal level (at all frequencies).
With a 22k load resistor, the maximum 'recovered' voltage is 4.45mV, at 1.2ms after the short is removed (-81dB). Remember that this was after charging the cap to 50V for 500 seconds, then shorting it for one second. This is not a normal audio circuit, and no audio circuit will subject a capacitor to anything even approaching the conditions used here.
Caps in audio circuits are simply not charged and discharged in this manner. To do so would cause signals to be generated that, after amplification, would mean instantaneous speaker disintegration. These tests are silly - they prove nothing, but are regularly hailed by some audiophiles as some kind of 'proof' that they can hear a difference because it can be measured. It is forgotten in the excitement that the signals and tests that form such proof will never occur in a real audio system that is not in the process of blowing up.
I have even heard a claim that the voltage recovery characteristic causes distortion similar to reverberation. What complete rubbish! If it were that simple to create reverb, one can be sure that no-one would have ever bothered with reverb springs, plates, or digital delays. Utter nonsense - it simply does not (and cannot) happen.
Electrolytic capacitors are definitely a problem though - there is any amount of proof ... Or is there? Again, often claims are made based on tests that are irrelevant for audio. A popular myth is that electros have considerable inductance because of the way the foil is wound inside the can. This is nonsense - the foils are usually joined at the edges in much the same way as with film caps. High frequency performance usually extends to several MHz [ 2 ], even with standard off-the-shelf electros and bipolar (non-polarised electrolytic) caps.
Electrolytics do have ESR (equivalent series resistance) as do all capacitors, but because of the nature of the internal chemistry of electrolytic caps it is non-linear. What is important here is not the non-linearity itself, but just how much signal is developed across the cap in normal (properly designed) circuits. We would be foolish to use electros in filter circuits, because they change their capacitance, ESR and inductance with varying temperature, frequency and age.
Electrolytics are not usually a problem with audio circuits, provided they are used only for coupling and decoupling applications. Because the AC voltage across the cap is so small (by design), the component's contribution is negligible. If you use electros for coupling, I recommend that you use a value 10 times greater than needed for the design rolloff frequency. For example, if you were to exchange a 1µF film+foil coupling cap for a bipolar or polarised electro, the electro should be 10µF. This keeps the voltage across the cap to the absolute minimum at all frequencies.
A word of warning about electrolytic caps is in order. When soldering, make sure that you don't exceed the maker's recommendations for time and temperature. Likewise, if it's at all possible, never operate an electro at (or near) its maximum operating temperature, unless you accept the manufacturer's rated life at full operating temperature. For most caps, this ranges from 1,000 to 2,000 hours. That's not very long! In reality, most electrolytic caps exceed their claimed lifetime by a wide margin, even if they are operated at close to the maximum rated temperature. For every 10°C reduction of operating temperature, life approximately doubles, so a 2,000 hour 105°C cap operated at 55°C should last for at least 64,000 hours - over seven years of continuous operation. A 'golden rule' is that you never locate electrolytic caps close to a heat source.
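The 10°C doubling rule is easy to apply numerically. This is only a rule-of-thumb sketch (the function is mine); real derating curves come from the manufacturer's data sheet:

```python
# Rule-of-thumb electrolytic lifetime: life roughly doubles for every
# 10°C the operating temperature sits below the rated temperature.

def estimated_life_hours(rated_life_h, rated_temp_c, operating_temp_c):
    """Approximate service life (hours) using the 10°C doubling rule."""
    return rated_life_h * 2 ** ((rated_temp_c - operating_temp_c) / 10)

# A 2,000-hour, 105°C cap run at 55°C
life = estimated_life_hours(2000, 105, 55)
print(f"{life:.0f} hours (~{life / 8760:.1f} years continuous)")
```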
There is an exception though - low value high voltage electros have an appalling failure record. You won't find a lot of definitive info on this, but many service techs know the problem only too well. I have recently been diagnosing a problem with a commercial product that doesn't make it past the warranty period, and every single failed unit tested had a 1µF 400V electro that measured around 1nF. Operating voltage is around 250V DC. The caps are rated for 105°C and are subjected to around 75°C (worst case), yet haven't lasted anything like the 3½ years one would expect. There is evidence of electrolyte leakage in all the failed caps, so either the seal was damaged by an excessive soldering temperature, or the caps are simply showing the standard unreliability expected of all low-value, high-voltage electrolytic capacitors.
Bear in mind that many very expensive and highly regarded loudspeakers use bipolar electrolytics in their crossover networks, because they are considerably smaller (and cheaper) than film/ foil types. This is one place that electros (bipolar or otherwise) should not be used, because distortion can be easily measured when there is a significant voltage across any electrolytic (bipolar or otherwise). No-one would dream of using electros in the filter circuits of an electronic crossover, but they are standard fare in passive crossovers. Strangely, no-one seems to mind that their crossover network uses electrolytic caps, yet there will be much howling and wailing if one is seen in a preamp or power amp. I find this very odd.
Modern polarised aluminium electrolytic capacitors will generally provide many, many years of reliable service with zero polarising voltage. The only thing you need to ensure is that the voltage across the cap never exceeds around 1V (AC or DC), and preferably less than 100mV. If these conditions are met, distortion is close to immeasurable at any frequency, except where the signal voltage across the cap starts to increase. If this frequency is well below the lowest frequency of interest, you will be unable to measure any distortion even at low frequencies, unless you have extraordinarily sensitive equipment.
Then of course we have tantalum electrolytics. While many sing their praises, I don't recommend their use for anything, other than tossing in the (rubbish) bin. There might be the odd occasion where you really need the properties of tantalum based caps, but such needs should be few and far between (for example, some LDO [low dropout] voltage regulators require the odd performance of some tantalum caps). They are unreliable, and have a nasty habit of failing short-circuit. They cannot tolerate high impulse currents and/or rapid charge/ discharge cycles, and especially don't like being shorted when charged. Tantalum caps announce their failure by becoming short-circuited, and it can be extremely difficult to track down a (possibly intermittent) short across a supply bus that powers many ICs. I never use tantalum caps (once bitten, twice shy!), and don't recommend them in any of the published projects. Personally, I suggest that you don't use them either. As noted above, sometimes there is no choice - LDO voltage regulators often need the specific characteristics provided by tantalum capacitors or they will oscillate. I must add that most modern tantalum caps seem to have no issues when used appropriately, but I still don't use them.
You also need to be aware that much of the world's tantalum supplies qualify as a 'conflict' product, where the mining (in particular) is carried out by people who are essentially slaves [ 13 ]. The ore (Coltan) is used for tantalum and niobium, and avoiding materials that exploit workers is important. Sometimes, there's no choice, but if there is an alternative then I will use that instead.
Two of the new 'solid' dielectric materials are niobium metal and niobium oxide, but I don't have any experience with them so can't make any educated comments. I suggest that anyone interested looks up the data for themselves. They are claimed to be (more or less) equivalent to tantalum caps, but I don't have much real information at the time of writing. Niobium oxide apparently has a high resistance to ignition (i.e. catching on fire), so that can't be a bad thing - they are much harder to ignite and don't burn as easily as tantalum or niobium metal.
Electrolytic polymer capacitors are now making serious inroads into areas that were dominated by 'conventional' aluminium electrolytic caps. They use a conductive polymer as their electrolyte material within a layered aluminium design. These capacitors combine unique properties from the polymer material in terms of high conductivity, extended temperature range and no risk of drying out. This makes a capacitor with high capacitance and very low ESR, with high ripple current capability and a longer operating life. They are not available for high voltages (100V seems to be the maximum), and are at a cost premium compared to standard electros.
Because these caps have relatively high leakage current, they are not recommended for timing circuits or anywhere else where low leakage is needed. They are not recommended for AC coupling where a DC voltage is present, because leakage current will disturb the operating conditions of the following circuitry. A 'standard' aluminium electro has a leakage current of perhaps a few microamps, depending on capacitance and rated voltage (calculated as ≤0.01 × CV or 3μA ¹), compared to over 200μA for polymer types. Their low ESR makes them a good choice for low voltage supply bypass applications.
¹ Leakage ≤ 0.01 × C × V or 3μA (whichever is greater), where V is rated voltage and C is capacitance in μF, measured at rated voltage
As an example, a 10μF, 25V cap works out to 2.5μA by the first part of the formula. The expected leakage current is therefore 3μA, as that's greater than the calculated value. If the cap is operated well below its rated voltage, leakage diminishes (although not necessarily in direct proportion). Leakage current rises with temperature, so keep electros well clear of anything that runs hot. This also extends the life of the part. The formula is (deliberately) conservative, and a 10μF 63V cap operated at a few volts (say 12V DC) can be expected to have leakage well below 1μA. I ran a test on a 10μF, 63V cap (at 12V) that's been sitting in my parts drawer with ~100 of its mates for at least 5 years. After 10 minutes (more or less), the leakage current was measured at 420nA (0.42μA). The effective dielectric resistance was therefore over 28MΩ. The current was still falling when I terminated the test and I'd expect it to 'bottom out' at around 250nA (48MΩ). This was an 'ordinary' electro, not a low-leakage type.
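The footnote formula can be sketched in a couple of lines (a helper of my own, reproducing the worked example above):

```python
# Data-sheet leakage limit for a standard aluminium electro:
# 0.01 × C × V (C in µF, V the rated voltage) or 3 µA, whichever is greater.

def max_leakage_ua(c_uf, v_rated):
    """Maximum expected leakage current (µA) at rated voltage."""
    return max(0.01 * c_uf * v_rated, 3.0)

print(max_leakage_ua(10, 25))   # 10 µF, 25 V: 2.5 µA calculated → 3 µA floor applies
print(max_leakage_ua(100, 63))  # 100 µF, 63 V: the formula dominates → 63 µA
```

Remember the formula is deliberately conservative; a cap run well below its rated voltage will usually leak far less, as the bench test above shows.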
Imagine an electro, whose characteristics are so poor that it develops almost 10% distortion internally, with an applied voltage of 1V. This is a particularly bad capacitor, but it is sized so that the AC voltage dropped across the cap is 1% (10mV) of the applied voltage. This means that there is only 10mV AC across the cap, and the distortion across the load will be less than 0.1%. In reality, no electro will be that bad, so provided the voltage across it is kept to the minimum, distortion is not a problem.
Upon testing some 1µF 63V electros (polarised), the readings were interesting. My signal generator has a residual distortion of 0.02%, nearly all third harmonic. Connecting the 1µF electro directly across the output reduced the output voltage from 10V to about 5.5V RMS. This is because the generator has an output impedance of 600Ω, and the cap was acting as a low pass filter. Figure 1.3.1 shows an equivalent circuit of the test setup.
The electro under test was unbiased, and with 5.5V AC at 400Hz across the cap, the distortion rose from 0.02% to 0.022% - a definite increase, but only small. At lower voltages such as 3V open circuit (about 1.6V RMS across the cap), the distortion fell to just over 0.015%. The reason the distortion appeared to fall is simple - the connection shown forms a low pass filter, which helps to remove the harmonics that make up the distortion component of the signal. A first order low pass filter will reduce the third harmonic sufficiently to make reading the difference quite easy. Based on a very similar test done using the simulator, the distortion should be about ½ the generator value, so the cap is still introducing some distortion. As the voltage (across the real capacitor) is reduced, so is distortion, until the noise limit of the distortion meter is reached.
Now, remember that this was using the electro in a way that was never intended. I subjected it to a relatively high applied AC voltage (where, as a coupling cap, it would have a great deal less - millivolts instead of volts), and it was unbiased as well. Even so, the increase in distortion was small, even with 5.5V AC applied, and it is safe to say that distortion was negligible below around 1.5V, and rapidly fell below the threshold that I am able to measure.
Attempting the same test described above with a polyester cap was a dismal failure - I was not able to measure the cap's distortion, only the attenuated distortion of the signal generator. As predicted by the simulation, measured distortion was about half that of the generator alone, using a 1µF cap at 400Hz, with 5.5V AC across the cap itself. I am satisfied that the polyester capacitor's contribution to measured distortion was well below my measurement threshold.
Various ceramics were a completely different matter though. A 0.22µF (220nF) ceramic was tried, as was a 100nF multilayer bypass cap, along with a few others. At any reasonable voltage, distortion was measurable - the worst measured distortion being 3% with 9V RMS across the cap. This was measured across the capacitor, so the actual distortion was far worse than indicated because the capacitor was attenuating its own harmonics. I was unable to measure any distortion contributed by a 220pF 50V ceramic, even with 10V RMS at 100kHz across it.
The graph above shows the distortion measured (using the 'Test #2' circuit in Fig 1.3.1) with the low impedance output of my test set (see Project 232), terminated with a 100Ω resistor. This forms a reference, so that when a capacitor is installed between the generator output and the 100Ω resistor, we can see how much distortion is added by the capacitor. The reference level is actually 0dBV, because I had to engage the 20dB attenuator for the input. The test frequency was set for 110Hz to avoid mains harmonics.
The above shows the distortion with a series 2.2μF polarised electro between the output of the test set and the 100Ω resistor. Admittedly, I'm unable to get accurate levels below about -120dB because the system noise floor won't let me. Unlike the test described above, the distortion is not attenuated by the capacitor, so what you see is the distortion contributed by the capacitor with 980mV across it. This is well beyond what anyone would use for coupling, as 17dB is dropped across the capacitor at 110Hz. Normally, a much higher value would be used, between 100-220μF. Very little voltage appears across the capacitor. As noted earlier, if there's almost no voltage across a capacitor, then it can contribute almost no distortion. Even in this test, the THD is much lower than you'd expect. It's actually hard to see just how the cap has increased the distortion, as the harmonic levels are similar to those measured without it in circuit. However, the harmonic levels may be much the same, but the fundamental (110Hz) is 17dB lower, so the relative harmonics have increased (the distortion measurement shows 0.016% excluding noise).
This is not a definitive test, simply an example of a couple of measurements taken on a randomly selected capacitor. If you need proof for yourself, you'll have to take your own measurements, using a selection of different capacitors and a range of AC signal levels. You may wish to add DC bias, drive the cap(s) with a higher voltage, or conduct other tests based on your requirements. If you want definitive results, read the series of articles by Cyril Bateman (see Reference 3).
Ceramic caps deserve a section to themselves, because they are quite specialised and cannot be treated as 'any old cap'. Low value NP0 (negative/ positive/ zero, aka C0G) types are often used as the Miller capacitor in power amplifiers, and are also common for stabilising uncompensated opamps and for HF rolloff (to prevent RF interference, for example). Values used are almost always below 1nF, with the most common range being from perhaps 10pF up to 220pF. Most are rated for 50V, but I've tested them at 500V and have never seen one break down.
NP0 (and C0G) means that these caps have close to zero temperature coefficient, and they are traditionally very stable because their main purpose in life is for tuned RF circuits. Continuous use with an AC voltage of up to 50V RMS has never caused a 50V NP0 cap to fail in my experience. This class of ceramic cap is very stable with both voltage and temperature, and they can be used anywhere within the signal path. Temperature stability is typically ±30PPM (parts per million).
Multilayer ceramics are now separated into three classes: Class I (NP0, U2J), which are stable but have relatively low capacitance for their size; Class II, with higher capacitance but relatively stable performance; and Class III, with great sensitivity to voltage and temperature. In general, avoid Y5V and Z5U dielectrics if at all possible.
Multilayer ceramics (aka MLCC - multilayer ceramic capacitors) are commonly referred to as 'high-k' types, because the ceramics used have a very high dielectric constant. Unfortunately, this high 'k' value is not stable, and the dielectric constant varies with applied voltage and temperature. A 100nF bypass cap with 15-30V DC across it (very common with opamp circuits) may have an actual capacitance of perhaps 80% of the claimed value, so might only be ~80nF. Fortunately, this is almost always more than sufficient to ensure that opamps don't oscillate due to power supply track inductance. High-k ceramic caps also have a high temperature sensitivity, and all parameters (insulation resistance, capacitance and dissipation factor) are affected.
You could be forgiven for assuming that the voltage sensitivity could be put to good use - a high-k cap could act as a 'varicap' in a tunable filter or oscillator, tuned by changing the voltage across the cap(s). Unfortunately, while this will actually work, the temperature coefficient is such that the tuning frequency will vary too much as the ambient temperature changes.

You also need to consider that if any high-k ceramic capacitor has significant signal voltage across it, the capacitance will change, and will be different for different signal amplitudes. This means that distortion is inevitable! It might not be very much, but it will be easily measurable and will have an effect on the sound. Whether you'll actually hear the distortion is another matter.

Some of the issues with ceramic caps include instability (capacitance varies with applied voltage and temperature) and, of considerable importance to audio, many are also microphonic (see next sub-section). Microphony is not an issue with the very low values used for amp (or opamp) stabilisation, but it could be a disaster for higher values that might be used as coupling caps. Well over 40 years ago, I discovered this problem in an amp that I manufactured, where a 19mm diameter 220nF ceramic was the only economical alternative at the time. The caps had to be glued to the back of a pot to damp the microphony (they were part of the tone control circuit). The extra mass of the metal pot and the resilient contact adhesive was a success.

Microphony is due to the ceramic itself, which becomes (slightly and accidentally) piezoelectric, so the ceramic flexes with signal, and generates a signal when flexed. The sum total of the problems with ceramic caps is such that they are not recommended at all for any audio signal coupling or filtering application.
Class I codes: 1st character - tempco significant figure (ppm/°C), 2nd character - multiplier, 3rd character - tolerance over temperature (ppm/°C).

| Char | Tempco | Char | Multiplier | Char | Tolerance (ppm/°C) |
|---|---|---|---|---|---|
| C | 0.0 | 0 | -1 | G | ±30 |
| B | 0.3 | 1 | -10 | H | ±60 |
| L | 0.8 | 2 | -100 | J | ±120 |
| A | 0.9 | 3 | -1,000 | K | ±250 |
| M | 1.0 | 4 | -10,000 | L | ±500 |
| P | 1.5 | 5 | +1 | M | ±1,000 |
| R | 2.2 | 6 | +10 | N | ±2,500 |
| S | 3.3 | 7 | +100 | | |
| T | 4.7 | 8 | +1,000 | | |
| U | 7.5 | 9 | +10,000 | | |
Class I caps are considered to be stable, although compared to film caps that might be debatable. The dielectric is generally calcium zirconate, with a relatively low dielectric constant and low capacitance per unit volume. These types have a temperature range from -55°C to 125°C. High dielectric constants invariably lead to higher sensitivity to temperature and voltage. Class II ceramics use barium titanate dielectrics, leading to higher capacitance (×1,000 up to ×10,000!). Of these, the Y5V and Z5U give the most capacitance, but show very high temperature sensitivity. These are in a class of their own - Class III.
Class II/III codes: 1st character - low temperature limit (°C), 2nd character - high temperature limit (°C), 3rd character - maximum capacitance change over that range (%).

| Char | Low Temp (°C) | Char | High Temp (°C) | Char | Change (%) |
|---|---|---|---|---|---|
| Z | +10 | 2 | +45 | A | ±1.0 |
| Y | -30 | 4 | +65 | B | ±1.5 |
| X | -55 | 5 | +85 | C | ±2.2 |
| | | 6 | +105 | D | ±3.3 |
| | | 7 | +125 | E | ±4.7 |
| | | 8 | +150 | F | ±7.5 |
| | | 9 | +200 | P | ±10 |
| | | | | R | ±15 |
| | | | | S | ±22 |
| | | | | T | +22, -33 |
| | | | | U | +22, -56 |
| | | | | V | +22, -82 |
The table shows the various dielectric designations for high-k ceramic caps. Not all are available, and the most common are X7R and Z5U. Of these, X7R is preferred as it has a much lower thermal and voltage coefficient than the Z5U, and also works over a wider range of temperatures. Volumetric efficiency isn't as good so they are larger, but have fewer problems. Note that there is no specification for voltage coefficient, and some high-k dielectrics (particularly with the smaller SMD parts) can have such a high voltage coefficient that a 100nF cap may be reduced to less than 10nF at little over half the rated voltage! These high-k (Class III in particular) ceramics also have a problem with ageing - they will lose capacitance as they get older, with most of the loss occurring early in the cap's life [ 7 ].
The above is adapted from a paper by Kemet (Here's What Makes MLCC Dielectrics Different) [ 15 ], and it shows the relative characteristics of the various dielectrics. It's quite apparent that Y5V and Z5U are in a class of their own. These should be avoided when possible, but you may not be aware of the dielectric material unless you read the datasheet. This is always available from reputable suppliers, but if you shop for parts on eBay or the like, be prepared for the worst.
Most of the large capacitance values you see in modern equipment (e.g. 10µF to over 100µF) are X7R, and these are considered to be fairly stable and have low ESR. They are available with up to 500V rated voltage, and can handle reasonable ripple current (up to 4A in some cases). These are smaller than an equivalent electrolytic capacitor, but are more expensive. They've become popular because of ever-increasing power density in switchmode power supplies, where their small size reduces overall volume. They are not recommended for audio coupling, and are definitely not suited to filters, as their distortion becomes easily measurable (and may be audible).

An article recently came to light that looked at capacitance loss when X7R MLCC caps are subjected to a continuous bias voltage [ 16 ]. The material I saw was originally from Vishay, and (as expected) it showed them to be superior to other makes. However, if a 50V cap is subjected to 50V for 1,000 hours, expect its capacitance to have fallen by up to 25%. When the DC is removed, there is some recovery, but it takes time (typically 1,000 hours of operation needs 1,000 hours to recover). This should disabuse people of the idea that leaving equipment powered 24/7 is somehow 'better', as this is clearly false for MLCC caps. Naturally, all other parts that are stressed continuously will have a shorter life as well.

There are special ceramic types that are marked as 'Y' class (in particular, 'Y1'). These are intended to decouple hazardous voltages to SELV for EMI reduction, and are supposed to carry safety certification. All 'Y' class ceramic capacitors must have all the approvals required for countries around the world, and they are usually covered in approval logos. Common values are 1nF or 2.2nF (the range is typically 10pF to 4.7nF), and they are rated for continuous use at 250V AC, and are tested at up to 4kV AC. These capacitors are special purpose, and are not applicable to normal audio circuits, other than in switchmode power supplies. See X And Y Class Capacitors below for some detailed info on specialised EMI capacitors.

Indeed, 'Y' caps were very uncommon until double insulated (no earth/ ground connection) switchmode power supplies became popular, and they can be found in almost every (approved) switchmode plug-pack (wall-wart) or in-line power supply available. Be very careful when buying such supplies (especially very cheap ones from China), and make sure that they carry the proper approvals. Be aware that just because there are approval marks on the supply, that doesn't mean they are legitimate! I have seen such supplies where the 'Y' cap is no such thing - it's a normal high voltage ceramic, and has zero safety certification! Although there is no direct evidence at the time of writing, don't be surprised to find that fake Y1 caps are being used - most likely normal high voltage ceramics with the required logos added later by unscrupulous resellers.
So, ceramic caps have their uses: NP0/ C0G types are perfectly alright as Miller caps in power amps or opamps (and do not add distortion). Multilayer X7R or Z5U dielectric caps are perfectly suited to bypassing/ decoupling opamp supply rails - don't believe anyone who claims otherwise. They were designed for just this purpose!
However, never use any multilayer or other high value (high-k) ceramic capacitor for audio coupling, in filters (such as electronic crossovers, tone controls or infrasonic filters) or anywhere else in the signal path. These caps are designed for supply rail decoupling, and not to replace film caps.

There's quite a bit of information available on this topic, but fortunately (at least for audio applications) it isn't usually a problem. High-k ceramic caps will almost always have a piezoelectric effect, meaning that they will vibrate when subjected to an alternating current and will generate a voltage if vibrated. Of particular concern are high-value ceramic caps (10µF or more), as they are physically larger. The capacitor itself will normally be silent, but the PCB may act as a 'sounding board', amplifying the noise to the point where it can become audible [ 14 ]. As noted in the reference, there are solutions.

No purely analogue solution (using opamps or discrete components) will have a problem, especially if built using through-hole parts. I never specify high capacitance multilayer caps in any design published, with the largest normally suggested being 100nF. Since these are used in parallel with opamp supply pins, they have an electrically quiet environment to start with, and as they are through-hole the opportunity for noise is negligible.

Where this issue becomes problematical is with SMD caps, subjected to a noisy supply, and especially in small 'personal' devices where size and cost preclude the use of 'acoustically silent' capacitors. Many of these may also utilise thinner PCBs than more traditional circuits, allowing the PCB to flex more easily. In general, this isn't something you'll need to worry about with any of the ESP projects, but it is something you need to be aware of.

Electronics World ran an epic series of articles written by Cyril Bateman [ 3 ], where he went to extreme lengths to develop equipment able to measure the distortion of common capacitors. Again, this was done with an AC voltage applied across the cap, so the results are generally of far less importance for a coupling cap. The findings are useful for determining the usefulness of various caps in filter circuits (especially passive crossover networks) though, and he quickly disposes of a number of persistent myths, including (but not limited to) the following:

For anyone who wants to examine these findings in greater detail, I strongly suggest that you get hold of the original series of articles. In general, it was found that the distortion of capacitors was generally very low - well below that contributed by the majority of active circuits. There are very good and valid reasons not to use certain capacitor types in some applications, and equally good reasons to insist on their use in others.
As described above, for bypassing, so-called monolithic (multilayer, high-k, etc.) ceramics are very good, having a low impedance up to hundreds of MHz, assuring good supply bypassing at the highest frequencies. As frequency increases further, standard ceramics are usually preferred. Using them in an audio active crossover network would be a very bad idea though, because their capacitance tolerance is not good, and the value can also change with applied voltage and temperature. Some ceramics (high-k types being the worst) may be microphonic due to the piezoelectric properties of the ceramic substrate. Modern multilayer caps are much smaller than early high-k types and are less likely to suffer from microphony, but it is still worth bearing this potential problem in mind. As with dielectric absorption, microphony is more likely to be a problem in high impedance circuits. In most audio applications it will rarely be an issue, but ceramics in general are not recommended for filters, or as coupling caps in audio circuits, because of wide tolerance and capacitance variations with voltage and temperature.
C0G or NP0 ceramics have very low temperature coefficients, and are generally useful in many areas of audio where small values are needed. In particular, they can be used for RF suppression, or as the Miller cap in power amplifiers. While it is generally thought that polystyrene or silvered mica (for example) would be better, this may be more of an expectation than a reality. This is something I have tested, and I have been unable to measure any difference in distortion between a polystyrene and ceramic cap. Frequency response is essentially unchanged, as is slew rate.
Electrolytics are excellent for power supplies, and most other places where high values of capacitance are needed. They are unsuitable for filters, because they have wide tolerance, should be biased, and may change capacitance depending on applied frequency. Bipolar electros are ok where high values are needed and no polarising voltage is available. Because of wide tolerance, they too are unsuitable for filter circuits. The distortion caused by electrolytic caps used for signal coupling (including their use in feedback networks to ensure unity DC gain) is low to immeasurably low if they are selected to have minimum AC signal voltage developed across them at all frequencies of interest.

While it is a widely held belief, it is not essential to maintain a polarising voltage across an electrolytic cap. However, the capacitor value must be high enough to ensure that no more than ~100mV peak AC voltage (70mV RMS) ever appears across the cap in normal use. Provided you follow this recommendation, polarised electrolytic caps will last for many, many years with no DC voltage across them. Distortion is measurable with very sophisticated equipment, but is generally so low that it can be ignored.
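The '70mV RMS across the cap' rule translates directly into a minimum coupling capacitance for a given load impedance and lowest frequency of interest. A minimal sketch (the 2V RMS signal level and 10k load below are illustrative assumptions, not values from the article):

```python
# Size a coupling cap so that no more than ~70mV RMS appears across it
# at the lowest frequency of interest.  For Xc << Rload, the voltage
# across the cap is approximately Vsig * Xc / Rload.
import math

Vsig = 2.0        # worst-case signal level, V RMS (assumed)
Rload = 10e3      # load impedance, ohms (assumed)
f_low = 20.0      # lowest frequency of interest, Hz
Vcap_max = 0.070  # maximum allowed RMS voltage across the cap

Xc_max = Vcap_max * Rload / Vsig            # maximum allowed reactance, ~350 ohms
C_min = 1 / (2 * math.pi * f_low * Xc_max)  # minimum capacitance, ~23uF

print(f"Xc(max) = {Xc_max:.0f} ohms -> C >= {C_min * 1e6:.0f} uF")
```

With these assumed values the minimum works out to roughly 23µF, so the next standard value up (33µF or 47µF) would be chosen - consistent with the common practice of using far larger coupling caps than the -3dB rolloff alone would suggest.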
When it comes to high current applications (such as passive loudspeaker crossover networks), there will be significant voltage across the cap and current through it. It pays to use high quality capacitors that can withstand the voltage and current that the caps will be subjected to - this generally means polypropylene, polyester, or perhaps paper-in-oil (if you must). This is an area where dielectric loss may cause the caps to heat up with sustained high power, and the devices used need to be stable with time and temperature. Do not necessarily expect to be able to hear any difference between these (high quality) types in a blind test though, as you may well be disappointed.

One thing that may be very important for passive crossover networks is the material used for the 'plates' of the capacitor. Metallised film caps may not be the best choice because of the resistance of the film itself. The film is usually extremely thin, and it may not have a low enough resistance to allow the full current required. I have not experienced any problems with this, but a film and foil type is more suited to high current operation than a metallised film construction. This topic is mentioned on capacitor manufacturers' websites, and I recommend a search if you want more information about current handling capacity.
Bipolar (non-polarised) caps are (IMO) simply unsuitable for use in passive crossovers. They are very small for their capacitance, so self-heating is likely - either because of power lost in ESR or dielectric losses. Wide tolerance also means that the network will probably not be right unless it is tweaked, and it will change with time anyway. Distortion is easily measurable when there is a significant voltage across the capacitor.
The standard (subjectivist) test method with capacitors (indeed, with many electronic components) seems to be to exchange a standard unit with one often costing a great deal more, then to proclaim that ... "Yea indeed, behold the huge difference", and "Lo, hear how great is the improvement". As I have noted many times, this is flawed reasoning, and any such test is utterly invalid. Nothing can be gained from this except a continuation of the 'pure subjectivist' dogma.
In a properly conducted test, the test methodology will force the listener to determine if there is a difference between two pieces of equipment (or even any two components), and do so without knowing in advance which is which, and with statistically significant accuracy. This is usually taken to be around 70% - the listener must pick 'Part A' from 'Part B' correctly at least 70% of the time.
According to the claims one might hear from some people regarding their favourite capacitor (or anything else), any blind test should score 100% accuracy, such is the difference heard. Sadly, it seems that in any blind (or ABX) test, the difference fades to nothing, and test results are nearly always inconclusive - it cannot be said with certainty that a difference was heard or not. I cannot understand how something can be claimed on one hand to be 'chalk and cheese', yet cannot be reliably identified as soon as the visual cues are taken away. This should alert everyone to the fact that experimenter expectancy and/or desirability are the overwhelming factors, and that the components themselves are sonically virtually identical.

The actual testing of components must be done with care. The components must be tested in a manner that reflects the way they will be used in practice, or, if this fails to yield any measurable result, the degree to which the part is pushed beyond its ratings must be explained. The report should then extrapolate (in as far as practicable) the measured results at elevated operating conditions to the expected result at normal levels. That this is rarely (if ever) done is fair warning of the likelihood of erroneous data being propagated.
I have never been able to measure the distortion of a capacitor that is used sensibly in a real circuit. This is partly because the equipment I have does not have the extraordinary resolution needed to measure such low levels of distortion, and partly because the active circuitry and system noise will usually predominate. There is little point trying to measure signals that are 100dB below the 1 watt level, or even worse, at -100dBu (i.e. referred to 775mV).
For example, let's look at distortion at -100dB referred to full power of an amplifier. Assume that the loudspeakers are 90dB/W/m and the amplifier is 100W. The peak SPL is 110dB (unweighted) at 100W, and you might be blessed with an exceptionally quiet listening room - let's say 30dB SPL. If you have distortion artifacts at -100dB, then with a peak SPL of 110dB, the distortion will be at 10dB SPL unweighted (110dB - 100dB). Your very quiet listening room is a full 20dB noisier than the loudest distortion components!
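The SPL arithmetic above can be sketched in a few lines (same values as the paragraph: 90dB/W/m speakers, 100W amplifier, -100dB distortion, 30dB SPL room):

```python
# Work through the listening-room arithmetic: where does -100dB distortion
# sit relative to a very quiet room's noise floor?
import math

sensitivity = 90.0    # speaker sensitivity, dB SPL at 1W/1m
power = 100.0         # amplifier power, W
dist_rel_db = -100.0  # distortion relative to full output, dB
room_noise = 30.0     # exceptionally quiet room, dB SPL

peak_spl = sensitivity + 10 * math.log10(power)  # 110 dB SPL at full power
dist_spl = peak_spl + dist_rel_db                # 10 dB SPL
margin = room_noise - dist_spl                   # room noise exceeds it by 20 dB

print(f"peak = {peak_spl:.0f} dB SPL, distortion = {dist_spl:.0f} dB SPL, "
      f"room noise is {margin:.0f} dB louder")
```

Doubling-checking with 10·log10(100) = 20dB of power gain over the 1W reference gives the same 110dB peak the text quotes, leaving the distortion products 20dB below even an exceptionally quiet room.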
Based on my own observations, as well as those from many others (Bateman, Self, et al.), capacitor distortion in any real circuit will generally be (much) less than 0.001% ... that's a level of -100dB. Testing and obtaining good results at these levels is highly problematical. Circuit noise, residual distortion and even a tiny bit of corrosion on a connector will increase the measured distortion dramatically. Cyril Bateman was forced to build specialised test equipment to measure the distortion, and while anyone can do the same, it is time-consuming and expensive to do so.
Returning to the use of a cap for signal coupling ... let's assume a seriously non-linear capacitor as shown in Figure 2.1. When used with significant current through the cap (A), the simulated distortion is 1.26% at V1. When used for coupling, distortion is zero - there is simply not enough current through the non-linear circuit to cause a problem. Real tests show the same behaviour - the 1µF polarised cap that so happily gave measurable distortion before shows none that I can measure. The simulator also shows zero distortion at V2 when the non-linear cap is connected as a coupling capacitor (B). It is not until the load resistor (22k) is reduced sufficiently to cause significant voltage across the cap that distortion becomes measurable. For example, when the 22k resistor is reduced to 1.5k, distortion rises to 0.0076%; at 600Ω, it is 0.85%. The diode shown is 'ideal', so it will conduct at very low voltages.
Although this is obviously a simulated experiment to show the general principle, reality (including test results on the same electrolytic cap that produced measurable distortion before) is very close. If a capacitor is going to cause measurable distortion, then the signal voltage across it must be significant. If this is not true and there is negligible voltage across the cap, then it is quite reasonable to expect that the contribution of the component at that frequency is also negligible. Any inherent distortion it produces must be considered in combination with the voltage across it. A capacitor (or any other component) with zero volts across it contributes zero distortion, so extrapolate from there, and not from the silly and pointless claims that "capacitors cause distortion".

When used in filter circuits, capacitors no longer have next to no voltage across them, so some distortion is inevitable. However, your test methodology had better be very robust to ensure that any distortion measured is actually from the capacitor, and not due to anything else. It's all too easy to jump to conclusions that don't hold up to scrutiny.

All capacitors have some inductance, but what is often overlooked is that the leads are the primary cause for this. To minimise the inductance, keep the leads as short as possible, and keep them as close together as possible. When two conductors are run in parallel, they form a capacitor. By maximising the (capacitive) coupling, you automatically reduce the inductance. Loudspeaker cables have been produced that have extraordinarily low inductance, despite the fact that they are quite long, and should (in theory) have high inductance as well. Not so. They have high capacitance (sufficient to give many amplifiers severe heartburn), but inductance is low. The closer the conductors are spaced and the greater the physical area, the greater the capacitance and the lower the inductance.

Now, consider a conventional wound film and foil (or metallised foil) capacitor ... even if the plates were not joined at each end to form a (relatively) solid block (see Figure 02, or do a Web search for capacitor construction), the capacitance would be at the required value, and inductance would still be negligible. The mechanism that supposedly causes internal inductance has never been demonstrated for film caps, but a great many measured results have neglected the capacitor lead length, resulting in erroneous figures. The errors can easily exceed an order of magnitude with a poorly set up experiment.

The measurement must be taken from a point as close as possible to the capacitor. If the measurement is taken even a few millimetres away from the capacitor itself, it will include the lead inductance. This is made worse by spreading the legs of the cap to allow convenient connection. The inductance of the leads can be calculated by [ 4 ] ...
LDC = 2 × L × [ ln( 2 × L / r ) - 0.75 ] nH

where LDC is the low-frequency (DC) inductance in nanohenries, L is the wire length in cm, r is the wire radius in cm, and ln is the natural logarithm.
The inductance is not great ... about 5-10nH per cm (centimetre) depending on wire size, but it is still significant at very high frequencies. With a 1µF cap (hardly massive), a mere 6mm of lead length (6nH assumed) creates a series resonant circuit at close to 2MHz. Increase the capacitance to 10,000µF, and it is now 20kHz. This is not capacitor resonance, it is a resonant circuit formed by the capacitor and the external inductance of the capacitor's leads. For bypassing applications, the resonant circuit so formed does not reduce the effectiveness of the bypass capacitor if it is 'too big'. In reality, power supply bypass capacitors will supply the current required by the circuit regardless of the 'self resonant' frequency, so small values of capacitance do not mean better bypassing.
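The formula and the resonance figures quoted above can be checked numerically. The sketch below (the 0.25mm wire radius is an assumed value, not from the article) applies the lead-inductance formula and then the standard series-resonance relation f = 1 / (2π√(LC)):

```python
# Check the lead-inductance formula and the resulting 'self resonant'
# frequencies quoted in the text (6nH of lead inductance with 1uF and
# 10,000uF capacitors).
import math

def lead_inductance_nH(length_cm, radius_cm):
    """Straight round-wire low-frequency inductance: 2*l*(ln(2*l/r) - 0.75) nH."""
    return 2 * length_cm * (math.log(2 * length_cm / radius_cm) - 0.75)

def f_res(L_h, C_f):
    """Series resonant frequency of lead inductance L (H) and capacitance C (F)."""
    return 1 / (2 * math.pi * math.sqrt(L_h * C_f))

# 1cm of 0.25mm-radius lead (assumed wire size) -> within the 5-10nH/cm range
per_cm = lead_inductance_nH(1.0, 0.025)
print(f"inductance per cm: ~{per_cm:.1f} nH")

# Using the article's assumed 6nH of total lead inductance:
f1 = f_res(6e-9, 1e-6)    # with 1uF    -> close to 2MHz
f2 = f_res(6e-9, 10e-3)   # with 10,000uF -> close to 20kHz
print(f"1uF: {f1 / 1e6:.1f} MHz, 10,000uF: {f2 / 1e3:.1f} kHz")
```

Both results agree with the text: roughly 2MHz for the 1µF cap and roughly 20kHz for 10,000µF, with the same few nanohenries of lead inductance in each case.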
Figure 3.0.3 shows a simulated power supply and switching circuit, with inductive leads to the MOSFET and its load. Given the 'self resonant' frequency of the capacitor and lead inductance (about 35kHz with a 1,000µF cap), one would expect that the pulse performance would be woeful, but it is essentially unchanged as C1 is changed from 1µF up to 10,000µF. If the value is reduced (less than 1µF), then performance does suffer. Lower capacitance does not give better bypass performance.

In Figure 3.0.3, you can see the waveform of the switching pulse with (red) and without (green) the series inductance and resistance - a comparison between a real and a perfect (ideal) capacitor. Note that there is very little difference. This, despite the fact that in theory the 'combination' capacitor has a series resonance of 35kHz, and the switching speed is many, many times that. Using a much smaller capacitor (such as 100nF) is a disaster, allowing the circuit to ring, and develop excessive back-EMF. Feel free to perform the test using real components - you will get very similar results!

The bypass capacitor equivalent circuit as shown is rather pessimistic. The 20nH inductor is actually a low Q component because of many system losses, and would normally be shown with some parallel resistance. The following plots were done with the high Q inductor as shown, hence the much sharper than normal impedance dip shown in Figure 3.0.5. In reality, this is a very broad notch because of the low Q of everything involved. For bypass applications, the low Q is a good thing and works in our favour.

An important thing that is often missed is that the resonance formula ( fo = 1 / ( 2π √( L × C ) ) ) only implies that higher capacitance values cause lower 'self-resonance' and worse high frequency performance. This is largely untrue - the ability of the larger capacitor to supply instantaneous current demands is not impaired, so the idea of using a small cap ("well, they have a higher self-resonant frequency don't they?") in parallel with a big cap is essentially nonsense - more capacitance equals more energy storage. The concept of 'self-resonance' in this context is flawed thinking, and leads to silly designs (100nF caps in parallel with 10,000µF electros for example) that generally achieve nothing useful, other than using more components.

In Figure 3.0.4, you can see the difference between using a 1,000µF (red) and 10µF (green) non-ideal bypass capacitor, measured at the positive supply to the switching circuit. The 1,000µF cap should show a sluggish response because its 'self resonant' frequency is so low. As power is demanded (MOSFET switched on), there is no difference at all, although recovery is a tiny bit slower. Not fully visible is the fact that the low value cap causes a damped oscillation, whereas the higher value does not. So, do low value caps 'work better' as bypass? ... No, in general they do not.

Figure 3.0.5 shows the impedance of a simulated 1,000µF capacitor with 20nH series inductance and 10mΩ series resistance. The 'self resonance' frequency is 35kHz, with a minimum impedance equal to the series resistance (ESR). Even at well above the resonant frequency, the cap still provides capacitive energy storage - it is not an inductor, despite appearances. This is commonly claimed, but is generally untrue. The impedance is increasing, but until such time as the inductive reactance becomes significant (with respect to the circuit impedance) the composite circuit is still a capacitor. Even at 1MHz, the total impedance is only 125 milliohms. Although the 125mΩ is almost all inductive reactance, it cannot be considered 'significant' (a somewhat vague term that is usually taken as around an order of magnitude compared to the load). In this case, the load is 10Ω, so 1Ω is 'significant'. This occurs at 8MHz. It is very important to understand the difference between a supply bypass application and a tuned circuit or other electronic function. Note that self resonance in electrolytic caps is very broad because both the internal (large) capacitance and (small) inductance are low Q.
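The impedance figures quoted for the simulated 1,000µF / 20nH / 10mΩ capacitor are easy to verify with the series R-L-C model (a sketch of that model, not the author's simulator):

```python
# Impedance magnitude of a series R-L-C bypass capacitor model:
# 1,000uF with 20nH ESL and 10 milliohm ESR, as in Figure 3.0.5.
import math

C = 1000e-6  # capacitance, F
L = 20e-9    # equivalent series inductance (ESL), H
ESR = 0.010  # equivalent series resistance, ohms

def z_mag(f):
    """Magnitude of the series R-L-C impedance at frequency f (Hz)."""
    x = 2 * math.pi * f * L - 1 / (2 * math.pi * f * C)  # net reactance
    return math.hypot(ESR, x)

f0 = 1 / (2 * math.pi * math.sqrt(L * C))  # series resonance, ~35.6 kHz
print(f"f_res = {f0 / 1e3:.1f} kHz, |Z| at resonance = {z_mag(f0) * 1000:.0f} mOhm")
print(f"|Z| at 1 MHz = {z_mag(1e6) * 1000:.0f} mOhm")   # ~126 mOhm, mostly inductive
print(f"|Z| at 8 MHz = {z_mag(8e6):.2f} Ohm")           # ~1 Ohm, 'significant' vs 10 Ohm
```

The numbers match the text: resonance near 35kHz with the impedance equal to the 10mΩ ESR, about 125mΩ at 1MHz, and roughly 1Ω (one tenth of the 10Ω load) only at about 8MHz.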
At least one person has declared that the above is garbage, but only after taking the material out of context and deciding that I also include RF transmitters as part of 'audio' (strangely, no, I don't). The simulations are accurate, and if the silly claims of self-resonance were true, no-one would be able to use 100,000µF filter caps (for example), because the self resonant frequency would be well within the audio range. Strangely, most amps work perfectly well at all frequencies with very large filter caps, even where the theoretical self resonant frequency of the power supply is within the audio band because of the very large capacitance.
In a normal circuit (such as a series tuned circuit for example), when the applied frequency is the same as the resonant frequency of a capacitor and inductor (including leads, PCB tracks, etc.) the tuned circuit is no longer reactive - it is resistive! The resistance is equal to the sum of all component and lead resistances (including ESR). Below resonance, the circuit is capacitive - above resonance, inductive. Series resonance in a capacitor may result in rather unexpected behaviour in high frequency circuits (including digital), depending on the specific application.
It is obvious that capacitor leads should be kept as short as possible, and it might be an advantage if manufacturers stopped spreading and kinking the leads of monolithic ceramics in particular, as this introduces a (small) additional inductance because of the lead length and reduces the maximum operating frequency. (It's usually done so that soldering doesn't damage the cap.) It is quite obvious that lead (and PCB track) inductance must be considered for very high frequency circuits - or for circuits that are capable of very high frequency operation even though they are used at much lower frequencies (audio amplifiers come to mind). Where very high frequencies are involved, there will be a significant advantage if SMD (surface mount) capacitors are used, as they have zero lead length and are very short, so have extremely low inductance.
Some interesting observations are made by Ivor Catt [5], where it is maintained that the vast majority of capacitor claims are false. His information on bypass caps (in particular) goes against all 'conventional logic', yet the simulation described above validates his theory. A couple of his more notable quotes are ...
Ivor is considered eccentric by many in the electrical/electronics fields (some may say that is an excessively generous description), but his data cannot be dismissed out of hand - particularly when a simulation shows that a capacitor, even with series inductance, can supply the instantaneous demands of a switching circuit. This is despite the fact that the switching occurs at a frequency that is well above the 'self resonant' frequency of the capacitor.
Another of Ivor's contentious claims is that a capacitor is a transmission line. Based on the tests conducted (see ESL & ESR below), there is much to commend this model, even though it has been scoffed at by many who (in my opinion) should have been thinking more clearly. A length (any length) of coaxial cable appears to be capacitive at low frequencies, and at a frequency determined by its length, shows series resonance - it becomes (almost) a short circuit for that frequency. Above the resonant frequency, the cable is inductive. The primary difference is based not on any of the counter-claims that I saw to the suggested model, but because the very construction of a coaxial cable is such that it has vastly lower capacitance than any real capacitor. Because of the low dielectric losses, the resonance is very high Q. In addition, the cable's capacitance and inductance are optimised for the circuit impedance. Capacitors are optimised for capacitance (what a surprise), so generally use a dielectric that is far thinner and has somewhat greater losses than coax. That does not change the basic model though - it simply means that the characteristic impedance of any given capacitor is dramatically lower than that of any 'normal' transmission line.
It is probable that if a capacitor were to be laid out flat rather than rolled up in the normal way, its inductance would not increase by anywhere near as much as the pundits might imagine. This, despite the fact that when it is rolled up, the entire edge of each plate is joined, so the 'length' of this transmission line is the width of the metallised film (or separate foil). This agrees quite well with the measured or calculated internal inductance of almost all capacitors, and this is easily verified by anyone with access to basic RF test equipment.
One claim that is even described as "as everyone knows" is that large caps are 'slow' and small caps are 'fast'. This is (of course) unmitigated drivel. If you want proof (possibly at the expense of the test cap), charge a 100,000µF (100mF) cap to its maximum rated voltage, then short the terminals together with a short piece of wire. If possible, monitor the voltage across the piece of wire with a scope. The reaction is instant, violent and very fast.
The series resonance of an electrolytic (or any other capacitor) has to be considered in conjunction with the circuit impedance. In real life devices, it is actually quite a broad null, often extending over several decades of frequency. This is readily apparent from looking at manufacturers' data, or by measurement. Measurement is actually quite difficult, since a significant current must be applied to be able to see the results. This requires an amplifier with very wide bandwidth indeed, and although some esoteric audio amps may be capable of providing sufficient current to obtain a worthwhile reading, most cannot.
It is notable that the series resonance frequency for large electros is usually quite low, and it's easy to imagine that this is due to the cap's ESL - allegedly massive by some accounts. Not so at all. The answer is simple - the frequency is low because the capacitance is huge. Inductance is generally low, as can be seen in Figure 3.1.1, where adding just 20nH (~20-25mm of straight wire) makes a significant difference. If the internal inductance were huge as claimed, adding 20nH would make no difference at all because it would be negligible by comparison. For the following, I've assumed 10nH for each 10mm (1cm) of wire.
Something that is somewhat easier is to apply a squarewave at (or near) the approximate series resonant frequency. Although the graphs in Figure 3.1.1 are simulated, the simulation is based on actual measurements, using a 15,000µF 35V electrolytic capacitor. This component has a series resonant frequency of around 40kHz; however, this comes with caveats. It is a very broad resonance (as expected), and with 10V RMS applied via a 10Ω resistor, I was able to obtain a readable trace on the oscilloscope.
The following graphs show two traces ... the first (red) is the waveform across the cap with 20mm of lead between the cap and the measurement point. The second (green) is with the frequency analyser probes placed as close to the capacitor as possible. A mere 20mm of lead equates to approximately 20nH of inductance, but as seen in Figure 3.1.2, that is enough to double the amplitude of the spike at the leading edge of the waveform. It is also enough to lower the 'self resonance' quite dramatically - the low frequencies are unchanged, but the high frequency (where the impedance starts to rise) is moved down by nearly 400kHz. The minimum amplitude difference is because of the lead resistance (simulated as 10 milliohms).
These simulations agree quite closely with the measured results, so even though there will be some variance, it will be less than that obtained from different samples of real-world components. As with the pulse response, there was exactly zero measurable difference when a 220nF film cap was added in parallel - either with the extended leads or without. The simulation agrees for the most part (but only if a 'real world' capacitor with losses is used), and I have not included these data.
In a real circuit, there is a possibility that a small film capacitor in parallel with a large electrolytic may cause ringing (damped oscillation) at a frequency determined by the series inductance of the electro (and any leads connecting to it) and the capacitance of the additional film cap. This is more likely to degrade performance than improve matters. It is possible to simulate this easily, but it is not so easy to measure because the frequency will be very high, and the impedance still very low. Because this possibility is rather remote, if it makes you feel better, by all means add a parallel film cap with minimal lead lengths between the film cap and the electro. Don't expect to hear a difference in a blind test though, because you almost certainly will not.
The pulse response is interesting. This is as close as I could get to the actual measured waveform, and contrary to common belief, adding a parallel capacitor (in this case 220nF) did not change the measured pulse waveform one iota. The impedance of the film cap is simply much higher than that of the electro, so it cannot have any significant effect on the end result. There is an effect, but it is immeasurably small. The impedance (capacitive reactance) of an ideal 15,000µF cap at 1MHz is a mere 10.6µΩ, but we must add the ESR and ESL to that result. According to the simulation, the total impedance is 134mΩ at 1MHz - inductive reactance is responsible for most of that. By comparison, an ideal 220nF capacitor will have a reactance of 723mΩ at the same frequency - more than 5 times that of the electro.
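The reactance figures quoted above are easily checked. This sketch (mine, not the author's) uses ideal capacitors, so it gives only the capacitive reactances; the electro's total of ~134mΩ at 1MHz comes from its ESR and ESL, which are deliberately ignored here:

```python
import math

def xc(f, C):
    """Capacitive reactance of an ideal capacitor."""
    return 1.0 / (2 * math.pi * f * C)

f = 1e6  # 1 MHz
print(f"15,000uF: {xc(f, 15000e-6) * 1e6:.1f} uOhm")  # ~10.6 uOhm - vanishingly small
print(f"220nF:    {xc(f, 220e-9) * 1e3:.0f} mOhm")    # ~723 mOhm
# Even the electro's ESR/ESL-dominated total of ~134 mOhm at 1 MHz is
# still more than five times lower than the film cap's 723 mOhm.
```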
Earlier, I made the comment that for resonance to actually work as expected, the circuit impedance must be 'significant' compared to the resonant impedance. It is time to examine exactly what is significant, and what is not. The resonant frequency of a capacitor and inductor is given by the equation ...
fo = 1 / ( 2π × √( L × C ))
This is a general formula, and while it holds true in all cases, the Q (quality factor) of the resonance is dependent on the circuit impedance and component losses. In the case of electrolytic capacitors (especially large ones, where the resonant frequency is comparatively low), the capacitance is massively dominant compared to inductance. For this reason, electros will rarely (if ever) appear as a resonant circuit in any power supply or coupling capacitor application.
The coupling cap is a good one to examine, because this is an area where it is often thought that a parallel capacitor will assist with high frequencies. If we assume a 4,700µF capacitor, having 100mΩ ESR and an inductance of 100nH (this is much, much worse than any real capacitor), its 'resonance' is at 7,341Hz. The test circuit is shown in Figure 3.0.1. As a coupling capacitor, it might be expected to appear inductive above the self resonant frequency. Not so, as Figure 3.1.2 shows. In fact, the frequency response into an 8Ω load remains substantially flat up to 4.5MHz (0.5dB down), and is -3dB at 12.7MHz, despite the gross exaggeration of ESL.
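The two frequencies quoted can be checked with a few lines of Python. This is a sketch of the simple series model, taking the upper -3dB point as the frequency where the 100nH inductive reactance equals the 8Ω load:

```python
import math

# Coupling-cap example from the text: 4,700uF with a (deliberately
# exaggerated) 100nH ESL, feeding an 8 ohm load.
C, ESL, R_LOAD = 4700e-6, 100e-9, 8.0

f0 = 1.0 / (2 * math.pi * math.sqrt(ESL * C))
print(f"'Self resonance': {f0:.0f} Hz")         # ~7,341 Hz

# Above resonance the series impedance is dominated by XL, so the
# response into the load is -3dB where XL equals the load resistance.
f_3db = R_LOAD / (2 * math.pi * ESL)
print(f"-3dB frequency: {f_3db / 1e6:.1f} MHz")  # ~12.7 MHz
```

Note how far apart the two numbers are - the 'resonance' at 7.3kHz has no audible consequence at all, because the circuit impedance doesn't become significant until well into the MHz region.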
In Figure 3.1.3, you see that the 100nH of inductance in the capacitor is totally insignificant compared to the circuit impedance (8Ω) and the capacitance. The speaker and lead will have a great deal more inductance than the capacitor, so the high frequency limit will be lower than calculated. 'Limitations' caused by ESL don't happen ... well, actually they do happen, but the effect is so infinitesimally small that it can only be measured by simulation. In the example given, the voltage across the load resistor at 100kHz is less than 250µdB (micro decibels) below that at the 7.3kHz 'self resonance' frequency. Any effect seen is well outside the audio frequency range, and is completely swamped if a series inductor is included in the circuit to isolate the amplifier from high capacitance (low inductance) speaker leads.
However, should you want to test the capacitor itself (highly recommended if you can), the connection of the measurement instrument (most commonly a scope) is critical. It must be connected to the cap with as close to zero lead-length as you can manage. You can then experiment with the lead length to see for yourself how even a short length of component lead changes everything. You will need a signal generator that can get to at least 10MHz for meaningful results (I used up to 25MHz for bench tests). For a given ESL, a larger cap produces a lower frequency.
For example ...

f = 1 / ( 2π × √( 10nH × 10µF )) = 503kHz
f = 1 / ( 2π × √( 10nH × 100µF )) = 159kHz
f = 1 / ( 2π × √( 10nH × 1mF )) = 50kHz
Despite claims to the contrary you may find, the capacitor does not stop passing current above resonance, because the inductive reactance is so low. As noted further below, 10nH of inductance only has 1.57Ω of reactance at 25MHz. The capacitive reactance is 6.37µΩ at that frequency, and ESR will be between 0.5 and 1Ω for a 1mF (1,000µF), 10V electrolytic cap (I measured 600mΩ).
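The example frequencies above, and the reactances at 25MHz, can be confirmed with a few lines of Python (a sketch assuming ideal components):

```python
import math

def f_res(L, C):
    """Series resonant frequency of an L-C pair."""
    return 1.0 / (2 * math.pi * math.sqrt(L * C))

# The worked examples above, all with 10nH of ESL
for cap in (10e-6, 100e-6, 1e-3):
    print(f"{cap * 1e6:>6.0f} uF -> {f_res(10e-9, cap) / 1e3:.0f} kHz")
# -> 503 kHz, 159 kHz and 50 kHz respectively

# Reactances at 25 MHz for a 1mF cap with 10nH ESL
f = 25e6
xl = 2 * math.pi * f * 10e-9         # ~1.57 ohm
xc = 1.0 / (2 * math.pi * f * 1e-3)  # ~6.37 uOhm
print(f"XL = {xl:.2f} ohm, XC = {xc * 1e6:.2f} uOhm")
```

Even at 25MHz, 10nH of ESL contributes only 1.57Ω, so the ESR (0.5-1Ω for a typical 1mF, 10V electro) remains a large part of the total impedance.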
For self resonance to be noticeable, the circuit impedance needs to be in the same general range as the capacitor's inductive impedance at resonance. As can be seen from the graph, the inductive reactance only reaches 8Ω at 12.7MHz (the upper -3dB frequency). Will a 100nF polypropylene cap (or any other type) in parallel be of any use whatsoever? From the above, we can safely say "No". Its reactance will be equal to the 8Ω load at 198kHz, but at that frequency, the electro has a total impedance of about 200mΩ, making the influence of the small parallel cap insignificant.
A further test was done, using a 220µF 25V electrolytic capacitor, connected directly to the output of a 50Ω digital function generator, with an output level of 6V RMS (open circuit). Oscilloscope leads were right at the base of the capacitor, so that (almost) no series inductance was present to ruin the test. At around 1MHz, the measured voltage across the capacitor was a little under 50mV peak (35mV RMS). I measured the frequency where the output had risen by (roughly) 3dB, and it was at 8MHz (that's not a misprint). When I moved the oscilloscope probe 25mm along the lead, that frequency fell to only 3MHz. The tiny amount of inductance of a 25mm length of straight wire was enough to reduce the upper +3dB frequency by 5MHz - that is significant in anyone's language.

For anyone who has the ability to generate a waveform from 1-10MHz with an output impedance of around 50Ω, this is a test that will demonstrate once and for all that capacitor 'self resonance' is rarely a problem. If your circuit has issues at high frequencies, you'll probably need to look elsewhere. I must admit that I did not expect that result, and the capacitor's overall impedance was such that attenuation was in excess of 45dB, meaning that the capacitor has an ESR of about 0.3Ω, even at 1MHz. In this case, something in the vicinity of 20nH of inductance was added by the lead, and a simulation bears out the measurement. Note too that the calculated ESR is in agreement with the 'typical' values shown in Table 4.1 - 0.3Ω is about right for a 220µF capacitor.
As a test, and to prove (at least to myself) that there's no point adding a film capacitor in parallel with an electro, I set up a small experiment. Using a 6,800µF 50V electro with a 100Ω load, I applied a squarewave to the capacitor and measured the voltage across the load resistor. At 1kHz, there was no discernible difference, other than a small difference in the bandwidth of the oscilloscope between Channel-A and Channel-B. This became more pronounced at 5MHz (yes, 5MHz!), but the capacitor's response was perfect. One would expect such a large capacitor to have a fairly low self-resonance, certainly well below the 50MHz that my oscilloscope can resolve.

Adding a 100nF film cap in parallel with the electro achieved exactly ... nothing. Not even the smallest difference was seen. Swapping scope channels (because Channel-B is a wee bit worse than Channel-A) showed the same. No change whatsoever - film cap on/ film cap off - no difference (this was done in 'real time'). This is what I expected, but it was rather satisfying to actually (not) see it in action. Since the voltage on either side of the 6,800µF capacitor was identical at all test frequencies (from 1kHz up to 5MHz), it's obvious that the film cap can't make it more identical. This is all the proof I need to be able to say that the simulations and calculations shown above are valid, and that the addition of a film cap is simply a wasted component.
However, it's a cheap wasted component, and if adding it makes you feel better then use it by all means. Do not claim to others that you can hear the difference unless the comparison has been made in a properly conducted double-blind test.
Update: Feb 2023
Recently I performed a few additional tests along the same lines as those described above. I tested un-bypassed 1,000µF, 10V electros in the Project 236 AC millivoltmeter, and one of the modules was tested out to 25MHz. The module with the 220µF caps used LM4562 opamps and was verified as dead flat to 4MHz - the upper limit is set by the opamp, not the capacitors. I subsequently ran tests on 1mF caps (1,000µF), extending the test to 25MHz. The scope was connected as close to the cap's body as I could manage, with no more than 1mm of lead between the cap and the probe and its ground lead. The results were pretty much exactly as I expected, with ESR being dominant from 1kHz up, and an (estimated) 10nH of ESL causing the impedance to rise beyond ~6MHz. At 25MHz, moving the scope probe just 20mm down the capacitor's lead (adding another ~20nH of inductance) increased the residual output very noticeably. 20mm of component lead almost tripled the voltage across the capacitor. Remember - the cap's internal inductance is 10 nano-henrys, an almost inconceivably low value. Just 10mm of 0.7mm diameter wire raises that to 20nH.
I suspect that most readers won't have a low impedance generator (50Ω) that can extend to 25MHz or more, so you'll have to take my word for it. These results are perfectly in keeping with all other tests I've performed, but were taken to the maximum I could achieve. Electrolytic caps perform flawlessly - even at frequencies that are quite ridiculous for audio. At 25MHz, a film cap does make a small difference, but it's really hard to keep all lead lengths short enough to prevent the leads from adding so much inductance that the results become hard to quantify. It's not easy to consider 10nH of inductance as a problem, as it's such a low value. The reactance of the extra 20nH is only 3.14Ω at 25MHz!
The ESR of the 1mF cap I tested was 600mΩ, and the reactance of the intrinsic 10nH ESL is 1.57Ω at 25MHz. Theory and practice are perfectly aligned, and a simulation of the test circuit gives almost identical results (but easier to measure because there's no noise to worry about). If you do have the ability to run a similar test, I suggest that you do so. There's nothing quite so satisfying as running an experiment like this and seeing just how sensitive it is to lead placement.
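As a sanity check on the 'almost tripled' observation, the impedance magnitudes can be estimated from the quoted ESR and ESL (a sketch; capacitive reactance is down in the µΩ region at 25MHz, so it is ignored here):

```python
import math

def z_mag(esr, xl):
    """|Z| of a series R-L combination (XC negligible at 25MHz for 1mF)."""
    return math.hypot(esr, xl)

F, ESR = 25e6, 0.6                   # 1mF cap, measured 600 mOhm ESR
xl_10nH = 2 * math.pi * F * 10e-9    # intrinsic ESL -> ~1.57 ohm
xl_30nH = 2 * math.pi * F * 30e-9    # plus 20mm of lead -> ~4.71 ohm

z_short = z_mag(ESR, xl_10nH)        # probe at the cap body
z_long = z_mag(ESR, xl_30nH)         # probe 20mm down the lead
print(f"{z_short:.2f} ohm -> {z_long:.2f} ohm ({z_long / z_short:.1f}x)")
# ~1.68 ohm -> ~4.75 ohm, i.e. almost triple the voltage across the cap
```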
Many tests that have been conducted over the years have come up with fantastical results that 'prove' otherwise, and almost without exception the reason is excessive capacitor lead lengths. If just 10mm of wire makes so much difference, it's not hard to imagine extraordinarily poor results if there's somewhere between 100mm to 1 metre of test leads in the way. This has been missed many times, by many experimenters.
ESL - Equivalent Series Inductance
Claims have been made that most capacitors must be inductive, because they are made from a wound sandwich of film and foil, or metallised film. Because it is usually wound (in a flat coil), logically, this leads to inductance. The problem with that theory is that it assumes that the termination is made to the foil at the end only, but a quick check of manufacturer data will show that this is generally not the case. The vast majority of capacitors are made so that the foil or metallisation projects from each side (one 'plate' on one side, the other 'plate' on the other side). Each end is then connected so that all sections of the plate are joined together. This is shown in Figure 01. There is no longer a 'length' associated with the plate, and only its width becomes significant for inductance. When distortion is measured in a film capacitor, it is almost always the method used to connect to the foil that causes the problem, rather than the dielectric or foil material.
Aluminium is the most common metal for both foil and metallisation, and aluminium is notoriously difficult to attach to anything with good and reliable conductivity. That cap makers have made them as good (and as reliable) as they are is testament to the effort that goes into capacitor manufacture.
The ESL (equivalent series inductance) of any given capacitor is related more to its physical size than anything else. A larger capacitor will almost always have a greater inductance than a smaller version of the same capacity. Usually, the lead length is of far greater importance for high frequency operation.
To check the general principle, I decided to test a roll of telephone jumper wire as a capacitor. This is a fairly large roll of twisted pair (Cat-3), with the diameter of the roll being about 130mm, and 51mm high. The wire is twisted (as you would expect for twisted pair), and the roll contains about 80 metres of wire. Insulation is 0.25mm PVC, and wire diameter is 0.5mm. All in all, this should be an appalling capacitor.
Being a coil of wire, one would expect a high inductance and therefore low self resonance. The measured values were ...
Capacitance          9.67 nF
Dissipation Factor   0.059
Self Resonance       303 kHz
Inductance           28.5 µH
The capacitance was measured at 9.67nF with two different meters, and DF (dissipation factor) was 0.059 ... not especially wonderful, but far better than I expected. Remember, this is a coil of twin wire, with the connection made at one end only. Inductance is calculated based on the self-resonant frequency - it is obviously much greater than a normal capacitor, but that is expected due to the physical size of the 'capacitor'. Needless to say, the fact that the connection was made at only one end doesn't help matters, but remember that this is a physically large coil of wire - one would expect that self resonance (and inductance) would be far worse than was the case.
Joining the (start and finish) ends together gave the following ...
Capacitance          9.73 nF
Dissipation Factor   0.059
Self Resonance       1.0 MHz
Inductance           2.6 µH
This is a significant improvement in the inductance (roughly an order of magnitude), and also gives a small increase in capacitance. As you can now well imagine, by joining the entire edge of each capacitor plate, inductance is reduced to almost nothing, and only the physical size of the cap will influence the inductance. This can't be applied to my 'coil capacitor', because I only have access to each end, rather than the edges of the 'plates'. I think it is safe to assume that the wire coil performs far better than might have been expected, especially with the ends joined together. It also has far thicker insulation and smaller plate area than any real capacitor, both of which increase inductance and reduce capacitance.
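The inductance figures in the two measurement lists above were derived from the measured self-resonant frequency by rearranging the resonance formula, which is easy to reproduce:

```python
import math

def inductance_from_srf(f0, C):
    """Rearranged resonance formula: L = 1 / ((2*pi*f0)^2 * C)."""
    return 1.0 / ((2 * math.pi * f0) ** 2 * C)

# End-connected coil: 9.67nF, self-resonant at 303kHz
print(f"{inductance_from_srf(303e3, 9.67e-9) * 1e6:.1f} uH")  # ~28.5 uH
# Ends joined: 9.73nF, self-resonant at 1.0MHz
print(f"{inductance_from_srf(1.0e6, 9.73e-9) * 1e6:.1f} uH")  # ~2.6 uH
```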
In case anyone was wondering, the inductance of the coil that was used as a capacitor for the test just described is 15.6mH, with a series resistance of 6.4Ω. This was measured with the two wires connected in parallel. Self resonance is at 27MHz. Not useful in the context of this article, but worth including.
ESR - Equivalent Series Resistance
While inductance is not affected by the dielectric material, ESR is - it is dependent on the dissipation factor (DF) of the insulation material, as well as the resistance of the leads, plate material/ metallisation layers and plate terminations. Because DF varies with frequency and/or temperature in most common dielectrics, so too does ESR. However, ESR is rarely a problem in most audio circuits. It may be important in passive crossovers used in high powered systems, or for other applications where capacitor current is high. ESR (like all resistance) creates heat when current is passed, so for high current circuits the ESR is often a limiting factor. High ESR is a major cause of failure with SMPS (switchmode power supplies), because it reduces the damping of high-energy pulses that are characteristic of these circuits. Should transient voltages exceed the breakdown rating of switching MOSFETs, failure is inevitable.
ESR is very difficult to measure with low value capacitors, because the capacitive reactance is usually a great deal higher than the ESR itself. In general, it is safe to ignore ESR in most electrolytic and film caps used in signal level applications (such as electronic crossovers, coupling capacitors and opamp bypass applications). ESR becomes very important in high current power supplies, switching regulators/supplies and Class-D amplifiers, many digital circuits and any other application that demands high instantaneous currents that are supplied by the capacitor.
µF / V    10 V     16 V     25 V     35 V     63 V     160 V    250 V
1.0        -        -       5.0      4.0      6.0      10       20
2.2        -        -       2.5      3.0      4.0      9.0      14
4.7        -        -       2.5      2.0      2.0      6.0      5.0
10         -       1.6      1.5      1.7      2.0      3.0      6.0
22        5.0      3.0      2.0      1.0      0.8      1.6      3.0
47        3.0      2.0      1.0      1.0      0.6      1.0      2.0
100       0.9      0.7      0.5      0.5      0.3      0.5      1.0
220       0.3      0.4      0.4      0.2      0.15     0.25     0.5
470       0.25     0.2      0.12     0.1      0.1      0.2      0.3
1k0       0.1      0.1      0.1      0.04     0.04     0.15     -
4k7       0.06     0.05     0.05     0.05     0.05     -        -
10k       0.04     0.03     0.03     0.03     -        -        -

Table 4.1 - Worst Case ESR (Ω) For New Electrolytic Capacitors ( - indicates no data )
The table above shows the worst case ESR for new (standard, not low ESR) electrolytics for a range of capacitor values and voltages. If any cap with the value/ voltage shown has a measured ESR significantly exceeding that in the table, it is on the way out and should be replaced. The table was compiled using the details printed on my ESR meters, and is representative - some new caps will be much better than shown, some may not be quite as good, and ultimately you need to use your own judgement as to whether the measured ESR will cause a problem or not.
Some people have wondered why ESR is usually tested at 100kHz. The reason is simple - at that frequency, the capacitive reactance of a 1µF cap is only 1.6Ω, and any 'resistance' measured is therefore predominantly the ESR of the capacitor. This is why it's rather pointless to try to measure the ESR of any capacitor below 1µF - it can be done, but the measurement frequency must be much higher than 100kHz. For larger values, the capacitive reactance at 100kHz is so much lower that it is negligible.
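The effect of the 100kHz test frequency is easy to see with a quick calculation (ideal capacitors assumed):

```python
import math

def xc(f, C):
    """Capacitive reactance of an ideal capacitor."""
    return 1.0 / (2 * math.pi * f * C)

# At the standard 100kHz test frequency, reactance is already small
# compared with typical ESR values for caps of 1uF and above.
for C in (1e-6, 10e-6, 100e-6):
    print(f"{C * 1e6:>4.0f} uF: Xc = {xc(100e3, C) * 1e3:.1f} mOhm")
# 1uF -> ~1.6 ohm; 100uF -> ~16 mOhm, so a bridge or ESR meter at
# 100kHz reads essentially pure ESR for larger values.
```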
Another way to measure ESR is to apply a very short pulse with a known current, and measure the voltage across the cap. This can only work with larger capacitors (typically greater than 10µF or so), using a 1µs pulse. The pulse amplitude, duration and current must be insufficient to cause the cap to charge by any appreciable amount. The measurement parameters have to be tightly controlled, because the ESR value is only valid for perhaps 100ns after the pulse is applied, and the measurement must be terminated before the pulse turns off. ESL will mess up the measurement, but greater errors are probable due to the inductance of the test leads.
I suggest that you invest in a dedicated ESR meter. Good ones aren't especially cheap, but if you're doing service work a decent ESR meter will pay for itself in no time.
Much of the information shown above is food for thought. I have had several e-mails from readers (some within a day of the article first being published), and further comments should be made to clarify a couple of important points. Much ado was made above about coupling caps, and these are a favourite of the upgrade brigade. It is not uncommon to see circuit boards where the constructor (or 'upgrader') has used caps for which the PCB was never designed. As an example, look at Figure 5.1. This is not at all uncommon, but what is not understood is the potential for possibly major problems to be introduced.
At first glance the diagram looks alright. Everything is connected where it should be, so where's the problem? Notice that the input signal is connected to the PCB via a shielded lead. The PCB may have a ground plane, but even if not, the run between the shielded input lead and C1 on the board is nice and short. The space allowed is sufficient for a cap as originally designed.
Now, someone comes along with a massive (physically) cap that was sold as polypropylene (but could easily be polyester). It won't fit on the board, so is installed as shown. Look at the length of unshielded lead between the input terminal and the rest of the circuit on the PCB. Remember that the entire capacitor is part of the unshielded circuit, not just its leads. Even if the cap is marked so you know which is the outer foil, that doesn't help either, as any noise picked up will be coupled through regardless (this is what the cap is for!). This arrangement has the potential to pick up considerable noise, and if part of a power amplifier it may even provide sufficient coupling from the output to cause oscillation. It goes without saying that noise or oscillation will not improve the sound, even though the owner may think that it has done so.
The likelihood of noise or oscillation depends on many factors of course, and these may not be an issue (or not at a level that is audible). The mechanical reliability is also highly suspect, especially if the oversized cap has not been fastened such that it cannot move relative to the PCB. Had a cap of the intended size been installed in the on-board position shown, it would be much smaller, and the board would have been tested with it in place - any problems would be immediately obvious.
It doesn't help anyone when (supposedly) reputable outlets make comments along the lines of "capacitors are one of the most destructive electronic components to sound quality" (and yes, that is a direct quote). No qualification was provided, just that blanket statement, presented as a 'fact'. Well, it's not a 'fact' - it might apply to some degree in some specific circumstances, but since these were not disclosed it's simply nonsense sales-speak. Yes, as discussed earlier in this article, there are some instances where capacitors can 'harm' sound quality. However, in most cases this means using a type that's inappropriate for the intended usage, or operating the cap outside of its design capabilities.
Another point made is that series resonance can be used to your advantage. In the presence of a strong RF signal, normal bypassing may be insufficient to prevent the RF signal from getting through an audio system. If you know the frequency, then it is not difficult to tailor a ceramic cap and appropriate lead length to create almost a dead short for the interfering signal.
For example, if there is an AM CB transmitter (these operate in the 27MHz band) nearby that insists upon interfering with your audio system, you need to know exactly where it is getting into the audio path. Once this has been determined (not easy, but certainly possible), you can deduce the necessary capacitance and inductance using the standard formula ...
fo = 1 / ( 2π × √( L × C ))
If we assume (say) a 1.2nF capacitor, then it works out that a series inductance of 28nH is needed. At between 4 and 7nH/cm depending on lead configuration and diameter, the cap needs to have leads about 60mm long, and with two leads that means 30mm each. The leads need to be as widely separated as possible, and some adjustment of the frequency is possible by pulling the leads wider apart or pushing them closer together. In its basic configuration, the combination of cap and leads has an impedance of less than 1Ω between 25MHz and 30MHz, and is resonant (effectively a short circuit) at about 27.5MHz.
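The 28nH figure is straightforward to reproduce. This sketch (mine, not the author's) simply rearranges the resonance formula, and takes 4.7nH/cm as a representative value from the 4-7nH/cm range quoted above:

```python
import math

def l_for_resonance(f0, C):
    """Series inductance needed to resonate C at f0."""
    return 1.0 / ((2 * math.pi * f0) ** 2 * C)

C, F0 = 1.2e-9, 27.5e6       # 1.2nF cap, middle of the 27MHz CB band
L = l_for_resonance(F0, C)
print(f"L = {L * 1e9:.0f} nH")  # ~28 nH

# At ~4.7nH per cm of lead, that's about 6cm of total lead length,
# i.e. roughly 30mm per lead as stated in the text.
print(f"total lead length ~ {L / 4.7e-9:.0f} cm")
```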
Maximum effectiveness is achieved when the circuit is tuned as accurately as possible, but this normally requires specialised RF test equipment. In general, a calculation will get you close enough for the circuit to be effective, and a bit of tweaking should enable you to get almost total rejection of an unwanted signal. Will this arrangement work? Probably, but the difficulty of maintaining the exact lead spacing needed means that it can't be recommended other than as an experiment.
RF is by its very nature sneaky, and deliberately using capacitor+lead resonance to solve a problem is just one of many techniques that need to be tried to solve an RF interference problem. No one method will work in all cases, and serious problems may need a combination of different suppression tricks. This applies both to preventing RF from getting into a circuit, and to preventing it from getting out (and therefore causing interference elsewhere). Unintended tuned circuits caused by stray capacitance and PCB tracks can cause an otherwise well behaved digital circuit to 'transmit' one or more harmonics of the operating frequency, meaning it may fail mandatory RF interference testing.
Another piece of 'food for thought' is the idea that capacitors can be 'slow' (by inference). There is one manufacturer (which shall not be named) that offers something they call a 'fast' capacitor. Although many outlandish claims are made for them, they will perform no better than any other film cap in crossovers and the like, but they will cost you a great deal more. In short, this is unmitigated horse feathers - the so-called 'fast' caps will not improve the efficiency of your loudspeaker drivers, and in a properly conducted double-blind test they will be indistinguishable from any other competent polypropylene capacitor.
There is absolutely no reason that any capacitor used in a loudspeaker crossover network (active or passive) needs to be 'fast' (with or without the quotes). It's most unfortunate that makers resort to such claims, because all they do is confuse the hobbyist (and sometimes the professional as well). As readers will know, I really dislike any company that uses BS to promote its products, and when I detect such BS, I will deliberately use or recommend components from another maker that makes no pretense of 'better' sound at inflated prices.
One point that needs to be made is regarding non-polarised (aka bipolar) electrolytic capacitors in loudspeaker crossover networks. Many speaker makers use them (even in often very expensive loudspeakers), but IMO these generally fall into the 'inappropriate usage' category. They are supposedly designed for just this purpose, but the current demands of a loudspeaker are often well beyond anything that can be tolerated for any length of time. This means that as the speaker system ages, the bipolar caps will lose capacitance and increase their ESR. This ruins the crossover network's frequency response, and changes the overall balance between bass and treble. Film and foil caps are considerably larger and more expensive, but are the only choice if you are building a speaker system. In real terms, the extra cost is not that great. However, a fully active system (with electronic crossover) avoids large and expensive caps altogether.
There are many places where capacitors require specific ratings, either to ensure longevity under adverse operating conditions, or for safety. X-Class capacitors are specified for use across the AC line (active/ live to neutral). Failure is usually open circuit, because if the insulation is punctured these caps 'self-heal': the metallisation layer melts around the puncture and this removes the short. If this happens often enough, the capacitor's value will fall. In the unlikely event that an X-Class capacitor should fail short circuit, it is directly across the mains and it will blow the fuse. Note that you will see the terms 'X-Class' and 'Class X' (likewise for Y-Class), and they are interchangeable.
Y-Class capacitors are safety critical, as they are generally used between the AC terminals and user-accessible parts of the equipment. Failure is likely to cause electric shock, so only fully certified parts from reliable suppliers should ever be used when building or repairing equipment that relies on Y-Class caps. The most common reason for using them is with equipment that is not earthed via a 3-wire mains cable, but where additional protection against radiated EMI (electromagnetic interference) and RFI (radio frequency interference) is required. Switchmode power supplies almost always require one or more Y-Class caps, even if an earth/ ground connection is normally provided.
Safety Rating | Voltage Rating | Insulation Class | Test Voltage
X1 | 275 VAC | n/a | 4,000 V
X2 | 275 VAC | n/a | 2,500 V
X3 | 250 VAC | n/a | None
Y1 | 250, 400, 500 VAC | Double | 8,000 V
Y2 | 250, 300 VAC | Basic | 5,000 V
X-Class capacitors are defined as being suitable for use in situations where failure of the capacitor would not lead to danger of electric shock. Y-Class capacitors are defined as suitable for use in situations where failure of the capacitor could lead to danger of electric shock. All countries have standards that define the test limits and in the case of Y-Class caps, they will almost always have various safety certifications printed on the cap itself. Tests include insulation resistance, pulse testing, endurance and flammability.
Y-Class caps may be ceramic (the most common by far) or metallised paper/ film. Ceramic caps may fail short circuit (a serious safety hazard), but Y-Class types are certified to very high standards, and a short is highly unlikely in practice. They are generally available only in low values, with the minimum around 470pF up to a maximum value of 10nF. This much capacitance is rarely used though - most will be no more than 2.2nF, which will pass a maximum current of 159µA at 230V and 50Hz (100µA at 120V and 60Hz).
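The leakage figures quoted follow directly from the capacitive reactance formula; this is just a sketch of that arithmetic:

```python
import math

def leakage_current(v_rms, f_hz, c_farads):
    """RMS current through a capacitor connected across the mains:
    I = V / Xc = V * 2*pi*f*C."""
    return v_rms * 2 * math.pi * f_hz * c_farads

# A 2.2nF Y-Class cap, as used in the text's example.
print(f"{leakage_current(230, 50, 2.2e-9) * 1e6:.0f} uA")  # ~159 uA at 230V / 50Hz
print(f"{leakage_current(120, 60, 2.2e-9) * 1e6:.0f} uA")  # ~100 uA at 120V / 60Hz
```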
This current is enough to feel (depending on how sensitive you are), but is well below the value that will cause electric shock. However, be warned that the peak current into the input of an audio or digital circuit is sufficient to cause irreparable damage in some cases. Y-Class caps can also be used for antenna coupling, where it's important to ensure that the antenna can never become live.
Safety standard compliance markings include UL, CSA, VDE, SEMKO, FIMKO, NEMKO, DEMKO, SEV, CQC and CE (logos shown in order below [ 8 ]).
It's not at all unusual to see all of the above logos printed onto the capacitor, along with class, capacitance and voltage rating. Some of the standards that are applied to X and Y-Class capacitors include [ 9 ] ...
UL 1414 - USA
UL 1283 - USA
CSA C22.2 No. 1 - Canada
CSA C22.2 No. 8 - Canada
EN 132400 - Europe
IEC 60384-14 - International
GB/T 14472 - China
Depending on where you live, there may be additional standards (as shown above) that apply. It's impractical to list every standard that applies. In general, if a component has been certified to European or International standards, it will be accepted as compliant elsewhere. This is quite obviously most important for Y-Class, but non-compliant (counterfeit or untested) capacitors across the AC line pose a fire risk.
Most Y-Class caps are ceramic, but they are also available with polypropylene (metallised film) or paper dielectric. The latter are often claimed to be safer, but I have been unable to find any credible information that describes a genuine Y-Class cap failing short circuit. X-Class caps generally use either MKT (polyester) or MKP (polypropylene) 'box' format, with close tolerances on size and lead pitch designed for automatic insertion equipment. X1 caps will almost always be polypropylene, as they are rated for high pulse current and polypropylene has a lower dielectric loss than polyester.
Note that Y-Class caps can always be used in place of X-Class, but not the other way around! X-Class caps are safety rated for connection between active (live) and neutral, but must never be connected between active/ neutral and safety earth/ ground. Apart from the capacitor class, X-Class caps are typically available with capacitance values that are far greater than is permitted for Y-Class operation.
It should go without saying that X and Y Class capacitors are not tested for 'sound'. Their role is to minimise EMI and RFI, and discussing their sonic properties would be utterly pointless. However, it is likely that if they are omitted, sound quality could be adversely affected, because high levels of high frequency noise may interfere with digital circuits or even become audible as an interfering signal. This is most likely with AM radio, but it may also affect other equipment as well.
Note: Never use DC capacitors from mains to chassis or across the mains. They may survive with 120V mains, but they will fail when used at 230V AC. The infamous 'death cap' used in many old guitar amps from the US is a perfect example of what you must not do. With AC, a corona discharge will occur in any minuscule air-pocket, and that will eventually cause the capacitor to fail. The failure mode is often short-circuit, so if the chassis is not earthed (grounded) it may become live and kill someone. See Mains Safety, section 8 for the details of this particularly dangerous practice. The only capacitor permitted between either or both mains leads and an un-earthed chassis is Class-Y1.
Capacitors can fail in a variety of ways, not all of which are 'exciting', and there are some particular failures that are 'unexpected'. In rare cases, a manufacturing defect can cause premature failure, but the most common failures are the result of using the wrong type of capacitor in a stressful circuit condition. This is particularly true of switchmode power supplies, but even there most film caps manage to outlast everything else.
I have pointed out a major shortcoming of tantalum caps, but this isn't especially prevalent (but I still won't recommend tantalum in any project). Failure of signal level caps is not common in audio equipment, especially where the caps are only used as supply bypass or for signal-level coupling. Power supply filter caps can (and do) fail occasionally, and in many cases it's simply due to old age. A proper discussion of failures and what causes them is probably worthy of an article in itself, but there is a lot of info available already - in particular from capacitor manufacturers.
Meanwhile, the chart below [ 10 ] shows the statistical failure rate for polyester capacitors. As the operating voltage reaches the maximum rated voltage (voltage ratio of 1 on the chart), it's obvious that failures become more likely. It's also apparent that the likelihood of failure increases with temperature. Any capacitor will have a limited life when operated at maximum permissible voltage and temperature. In general, if you halve the voltage ratio (e.g. use a 100V cap with no more than 50V), its failure rate (or expected lifetime) is improved by a factor of 10.
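As a rough illustration only (an extrapolation of the rule of thumb just quoted, not a manufacturer's formula), the improvement can be modelled as 10× per halving of the voltage ratio:

```python
import math

def film_cap_life_factor(voltage_ratio):
    """Approximate life/failure-rate improvement for a film cap run below its
    rated voltage. Extrapolates the rule of thumb from the text: each halving
    of the voltage ratio improves expected life by roughly 10x."""
    return 10 ** math.log2(1.0 / voltage_ratio)

print(film_cap_life_factor(0.5))   # ~10x  (e.g. a 100V cap run at 50V)
print(film_cap_life_factor(0.25))  # ~100x
```

Real derating curves flatten out at low stress, so treat this as indicative only.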
Failure itself needs to be defined, as it's not necessarily a total failure as such. Rather, the cap may simply drift out of tolerance or become noisy (often due to excessive leakage). In the worst case, it may become short circuit or open circuit, depending on the level of abuse it has to put up with. Vibration can break leads if the cap isn't firmly attached to the PCB, or the vibration may cause the bond between the end connection and metallised foil to separate (permanently or intermittently). Some cleaning solutions can cause accelerated degradation.
If operated in 'normal' equipment where voltages and temperatures are within sensible limits, failure rates are generally exceptionally low. 'Vintage' capacitors may show excessive leakage or reduced capacitance, and high leakage (in particular) is common with waxed paper and other materials that were used before the advent of modern dielectrics and packaging techniques. As a capacitor ages, the dielectric material may show signs of microscopic cracking, which is made worse by elevated temperatures and/ or temperature cycling.
Many plastic dielectrics with vapour metallisation (metallised film) are 'self healing' if subjected to a severe transient overvoltage. When an arc forms, the plastic melts, and the metallised film is vaporised by the arc. Once normal voltage is restored, the cap still functions, but with a little less capacitance than it had before because it's lost some of its area. Should this happen repeatedly, the cap will lose capacitance until such time as it's no longer able to do its job.
Electrolytic capacitors generally have a comparatively short rated life, but it's not at all uncommon for 40 year old electros in valve equipment to be working just fine. The average life for most electros is generally stated to be between 2,000 and 5,000 hours, but this is when it's used at maximum rated voltage and temperature. The expected life is usually much greater, although high ripple (or peak surge) current can cause premature failure. The life of an electro is approximately doubled for each 10°C below rated temperature (but maintained above 0°C), and the same principle applies for voltage. It's harder to pin down the actual scale, but something similar to the trend shown in Figure 7.1 is probably close.
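The temperature rule of thumb is easy to sketch as a calculation; the rated-life and temperature figures below are typical examples rather than a specific part:

```python
def electro_life_hours(rated_life_h, rated_temp_c, operating_temp_c):
    """Rule of thumb from the text: electrolytic capacitor life roughly
    doubles for every 10 deg C below the rated temperature."""
    return rated_life_h * 2 ** ((rated_temp_c - operating_temp_c) / 10.0)

# A typical 2,000 hour / 105 degC part run at 55 degC:
print(f"{electro_life_hours(2000, 105, 55):,.0f} hours")  # 64,000 hours
```

That's over seven years of continuous operation, which helps explain why so many old electros are still working long after their 'rated' life has expired.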
It's notable that the ESR of an electro starts to rise well before the capacitance can be considered out of tolerance. An ESR meter is essential to test electrolytic caps, as the capacitance value can still appear perfectly alright for a cap that's either malfunctioning in circuit or causing circuit malfunction. While it's common to look for electros that are bulging (internal over-pressure), it's not at all unusual for an electro to look and measure (value) as being fine. If measuring the ESR shows an increase from the expected figure, the cap is due for replacement, even if all other signs are 'good'.
This is (by necessity) a fairly brief introduction to cap failure modes. Electrolytic caps are far less reliable than film or ceramic types, and doubly so if they are operated at high temperature, high voltage, or with high ripple current. Plastic film capacitors will usually outlast the equipment they're installed in, and failures of film caps in small signal (e.g. audio coupling) are almost unheard of.
X-Class capacitors (for mains usage) have their own peculiar failure mode. When there's a short mains overvoltage, the cap's insulation can fail. This exposes the metallisation layer, and a tiny arc will cause it to vaporise. These caps are 'self-healing', in that the fault is cleared when the metallisation layer has opened a wide enough gap to quench the small internal arc. Each time the capacitor is subjected to a voltage spike that punctures the dielectric, a small reduction of capacitance is the result. After many years in service, an X-Class cap may have lost 50% or more of its rated capacitance, something that often isn't picked up by service personnel. As the capacitor degrades, the EMI performance of the product becomes a little bit worse, and it may eventually cause interference to nearby radio equipment. In some cases, the failure may lead to product failure, for example if the cap is used to make a small, low current 'off-line' power supply (quite common in mains operated appliances that have minimal electronics). In this role, they reduce the voltage but consume no power (described in detail in the article Small, Low Current Power Supplies).
Y-Class capacitors are supposed to fail open-circuit, because even a momentary short could lead to electric shock or death. Fake Y-Class caps exist, usually in very cheap goods from China.
While most passive crossover capacitors are perfectly alright, many are unsuitable (non-polarised electrolytic) and other 'specialty' types are seriously overpriced. In general, polypropylene dielectric is recommended because it's readily available in the sizes typically needed for a crossover network. There is absolutely no need to use exorbitantly expensive 'name brand' types, and doubly so if they make silly claims (such as saying they are 'fast'). All caps are fast (within the audio band at least), and paying much more than necessary doesn't change anything other than your bank balance. It's worth examining the difference between an ideal capacitor and one that would be considered dreadful.
If we were to assume a particularly 'bad' capacitor with far higher ESL and ESR than any cap you can buy, it's worth seeing the difference between that and an 'ideal' capacitor. We can also see if the most common silly idea (adding a smaller cap in parallel) actually achieves anything useful. For the sake of the exercise, we'll use a 5.6µF cap in series with an 8Ω tweeter, but it has massive dielectric losses (as per Figure 1.3), 2µH of ESL, plus 100mΩ of ESR. This would be a truly woeful capacitor, and I have no idea where (or even if) you'd be able to get one. ESL in particular is extremely high (but achievable if the wiring is way too long). The nominal -3dB frequency is 3.55kHz with an 8Ω load (the tweeter).
If the response of this cap is compared to an 'ideal' (i.e. perfect in every way) 5.6µF cap, the signal across the load is less than 0.11dB down at 50kHz (increasing with higher frequencies), but at the 'crossover' frequency of 3.55kHz, the difference is only 29mdB (0.029dB). If the (very poor) 5.6µF cap is now reduced to 5.5µF and an ideal 100nF cap is used in parallel (this is a very common approach), the woeful capacitor's response is 'improved' by (and wait for it ...) 1.1mdB (0.0011dB!) at 20kHz. Above 50kHz there is an improvement, but it's pointless. The response above 50kHz is due almost entirely to the extremely high ESL. The response graph is shown above (it's hard to see the difference), and I would hope that anyone who thinks this can't be right would test it for themselves.
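For anyone who does want to test it for themselves, a minimal model (series ESR and ESL only; the dielectric losses mentioned above are ignored, so results differ very slightly from the text) reproduces the figures closely:

```python
import math

def level_db(f_hz, c_f, r_load=8.0, esr=0.0, esl=0.0):
    """Level across a resistive load fed through a series cap (with optional
    ESR and ESL), relative to the source, in dB."""
    w = 2 * math.pi * f_hz
    z_series = complex(esr, w * esl - 1.0 / (w * c_f))
    return 20 * math.log10(abs(r_load / (r_load + z_series)))

# 'Woeful' 5.6uF cap (100mOhm ESR, 2uH ESL) vs an ideal 5.6uF, into an 8 ohm tweeter.
for f in (3550, 20000, 50000):
    diff = level_db(f, 5.6e-6) - level_db(f, 5.6e-6, esr=0.1, esl=2e-6)
    print(f"{f:>6} Hz: ideal cap is better by {diff * 1000:.1f} mdB")
```

At the 3.55kHz crossover frequency the difference works out at about 30mdB, in good agreement with the 29mdB quoted (the small discrepancy is the omitted dielectric loss).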
Consider that said 'woeful' cap is vastly worse than anything you can buy, so if adding an 'ideal' 100nF cap makes so little difference to that, it follows that it will make far less difference to a half-way decent capacitor that's intended for use in crossover networks. Note that I specifically exclude bipolar electrolytic caps that are supposedly designed for crossovers. I don't use them, and I suggest that you don't either, because they are simply not stable (or good) enough for the purpose. However, consider that many high-priced commercial designs do use bipolar electros, and may well get high praise from reviewers and users regardless.
It should be apparent that adding yet another smaller cap will have even less influence - I've seen designs using (for example) 10µF, 100nF and 10nF in parallel. The only thing this does is to create a capacitor that's a little greater in value than otherwise (10.11µF in total). The small amount of extra capacitance will change the crossover frequency ever so slightly, assuming that all values are exact, which won't be the case unless they have been measured carefully.
A quick calculation shows that the capacitive reactance of most 'typical' crossover caps remains the dominant impedance up to around 100kHz or more. Even a 10µF cap has a reactance of 159mΩ at 100kHz, so the ESR (a figure that's very hard to find for most film caps) is pretty much immaterial. Likewise, sensible and realistic values of ESL have little effect - even at elevated frequencies. 2µH (the value used for the 'woeful' cap above) has a reactance of 1.25Ω at 100kHz, but at the crossover frequency (3.55kHz) it's only 45mΩ, rising to 250mΩ at 20kHz. Real components will never be that bad, but excessive wire lengths when building the network can add ~1nH/mm of wire length (so 100mm of wire adds 100nH). You'd need 2 metres of 'excess' wire to add 2µH - possible, but unlikely.
Scouring datasheets, I found the ESR and ESL values for a 10µF, 630V DC, TDK metallised 'high power' (14.5A at 10kHz) polypropylene capacitor. With an ESL of 11nH (due entirely to the physical length of the cap between its leads) and an ESR of 6.4mΩ, it's safe to say that adding a 100nF cap in parallel will achieve nothing, and its performance will exceed our hearing abilities by a wide margin. (See 'MKP_B32674_678' PDF datasheet, page 9.)
At least one manufacturer (who shall remain nameless) and a number of hobbyists have used something called 'charge coupling'. The basic arrangement is shown below, and serves one (and only one) purpose - it uses more parts. The supposed reason is to bias the caps away from the (allegedly) 'troublesome' zero crossing point, where the voltage across the dielectric reverses. This is snake oil at its very best (or worst), and no such phenomenon has ever been measured by anyone. The capacitance values shown are an example only, as are the 'bypass' caps which do nothing except increase the total capacitance value as noted above. The crossover inductor value is not specified because it's immaterial to this topic.
I only heard about 'charge coupling' fairly recently, although it's apparently been around for a while. This is best described as a complete crock, and it doesn't stand up to even the most rudimentary scrutiny. Yes, the 9V battery will last for its shelf life (there's no current drawn other than a tiny leakage), but it's simply a waste of parts and a battery. We all know that the battery will eventually leak its essential fluids (which are corrosive). It's not shown, but a single 9V battery can 'charge' multiple different caps within a crossover network via additional 1MΩ resistors.
I must confess that this has to be one of the most pointless exercises I have ever seen, even though there is no end to other pointless exercises in audio. The needless increase of parts (and cost) will never provide an audible 'improvement' in a double-blind test, although there might be a small audible difference because of the extra capacitance (due to the 'bypass' caps) which will change the crossover frequency slightly.
Having said the above, there may be a small benefit if 'standard' polarised electrolytic caps were used back-to-back in order to create a 'bipolar' electro (that's how non-polarised electros are made), but anyone who's serious about building a decent passive crossover will not use electrolytic capacitors of any type anyway. Polarising film+foil caps achieves nothing, but some people appear to have been hoodwinked by a marketing department to think that there's some obscure benefit. It's notable that some speaker cable charlatans have used the same principle, by applying a 'charge' to the cable's insulator and claiming it makes a difference. It doesn't.
I did a fairly extensive search to see if anyone had identified capacitor 'zero crossing distortion' or any effects of a 'polarity reversal' on any plastic dielectric and came up empty. There's no end of waveforms and articles that discuss zero crossing distortion in general, as it's a well known phenomenon with Class-B amplifiers and active power factor correction (PFC) circuits. Capacitors having the same 'problem'? That's simply nonsense. I have no idea why anyone thought that this was a 'real' problem - adding (completely redundant) 'bypass' caps is bad enough, but biasing ... words fail me.
First off, I must say that not all capacitor claims are malicious or motivated by profit. There are many cases where the supposed 'flaws' have been mentioned by someone, then picked up and taken out of context by someone else. Such claims can (and do) end up with a life of their own, and eventually may be accepted as 'fact' by many - especially in the DIY fraternity. In the vast majority of cases, it's believed by the reader, because s/he has no way to verify or refute the claim. Once a particular belief becomes established, it can be very hard to change someone's opinion, and often even concrete proof isn't enough. Of course, some simply defy explanation, and are at best bizarre and at worst verging on insanity. I have no idea what motivates people to be so ... deceptive (that's the best I can come up with without expletives).
If wine, pharmaceuticals or scientific discoveries (for example) were tested the same way as audio, we would be in a very sorry state indeed. To be valid, all tests must be conducted blind, where the tester does not know which product they are using, or preferably double-blind, where neither tester nor controller knows which is which. That sighted tests are not only tolerated but encouraged is testament to the level of disconnection from reality that many 'magic component' believers obviously suffer.
Unfortunately, there are some who will search the Net for 'proof' of their current theory, and will use or misuse any data they happen across to further their argument. That the data quoted may be out of context, flawed, or simply a load of codswallop is immaterial. Once these rumours start they can become 'gospel', and it then becomes almost impossible to get the discussion back into the land of reality. This technique has been shown many times with the 'great cable debate(s)', and much the same has happened with capacitors and other generally benign components. Rarely will anyone who believes the silly theories about caps actually perform measurements to see if anything changes.
A popular piece of disinformation that really irks me is the claim that ceramic caps should not even be used for bypass applications in audio. This is drivel, and is totally unfounded drivel at that. The purpose of bypass caps is to store energy that ICs need on a short term basis, swamp PCB track inductance to ensure that circuits don't oscillate, and to ensure that digital circuits don't generate supply line glitches that produce erroneous data. There is absolutely no 'sound' associated with DC supply rails. Opamps don't care if the DC comes from a battery, solar cell, or rectified and filtered AC (sine or square wave, any frequency). DC is DC - it has no sound, and it contributes nothing to sound unless it is noisy or unstable. Supplies may be completely free of noise, or might be relatively noisy (especially where digital circuitry shares the same supply). Provided all noise (including voltage instability) is at a low enough level that the opamp's (or power amp's) PSRR (power supply rejection ratio) prevents the noise from intruding on the signal, supply noise (in moderation of course) is immaterial. As always, a blind test will reveal any genuine difference, and a sighted (non-blind) test will reveal the expected result, not reality.
To reject ceramic bypass caps, which have the best high frequency performance of nearly all types, is sheer lunacy. This is especially true when discussing simple DC power supply lines. PCBs have capacitance too, and the standard fibreglass material used is fairly lossy - it is certainly useless for very high frequency work at several GHz. Maybe that ruins the sound too - I have heard such tales, and they can be discounted out of hand. Still, these fairy stories circulate, are perpetuated by those with a vested interest in separating people from their money, and will continue for as long as anyone is silly enough to believe it.
Just as misleading are claims that all/most electrolytic caps are resonant within the audio band (or at least below 100kHz). Again, while this may (nominally) be true, it is meaningless without context and simply indicates that the person conducting the test is either failing to keep leads short (essential for such tests), or misinterprets the results by failing to conduct the test under 'real life' conditions. Always remember the influence of lead inductance (including test leads) - that is usually far more important than a few nanohenries of capacitor internal inductance.
Large electrolytic caps may well resonate (if a very broad impedance minimum can be considered resonance) within the audio band, but with impedances of well under 0.1Ω overall, this can hardly be claimed as a problem. Adding a film cap directly in parallel achieves exactly nothing, because its impedance is many times greater than that of the electrolytic for all frequencies of interest.
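A quick calculation makes the point. The electro's capacitance and ESR below are assumed 'typical' values for illustration, not measured data:

```python
import math

def cap_impedance(f_hz, c_f, esr=0.0):
    """Magnitude of a capacitor's impedance, modelled as ESR in series
    with an ideal capacitance (ESL ignored)."""
    w = 2 * math.pi * f_hz
    return abs(complex(esr, -1.0 / (w * c_f)))

# Hypothetical 4,700uF electro with 30mOhm ESR vs a 100nF film cap in parallel.
for f in (100, 1000, 10000, 100000):
    print(f"{f:>6} Hz: electro {cap_impedance(f, 4.7e-3, esr=0.03):8.4f} ohm, "
          f"film {cap_impedance(f, 100e-9):10.3f} ohm")
```

Across the whole range the film cap's impedance is hundreds of times higher than the electro's, so in direct parallel it carries a negligible share of the current.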
However, if there is any distance between the large electro and the film cap (for example leads running from the power supply to an amplifier or preamp PCB), the caps are no longer in parallel. They are separated by an inductance determined by lead length as well as some resistance, and the extra capacitor will help to damp out the effect of the inductance - that is precisely why they are used. Again, the larger the local bypass cap, the better it will perform.
One thing you can count on ... if anyone wants to sell you 'special' capacitors, designed to replace 'inferior' types (such as polyester, PET, Mylar®, or even polypropylene etc.), then you know that there is a problem. These vendors are cashing in on the audio snake-oil bandwagon. Like cables, many of their offerings are likely to be of good quality, but often at many times the genuine value of the part. Others will be perfectly ordinary parts that have been re-badged. For example, there are many capacitors sold as polypropylene that are actually PET/ Mylar/ polyester. It seems that no-one has ever heard the difference, simply believing that it is polypropylene, so therefore sounds 'better'.
So called 'vintage' capacitors are often thought (or advertised) to be 'better' than modern ones. I've even seen it claimed that they were engineered for sound, back in the day. Absolute nonsense, and B.S. of the worst kind. Modern caps are almost always better than true 'vintage' types (paper, foil and wax was common in the early days of electronics), with far lower leakage, longer life (when used within their ratings) and they are completely sealed against the ingress of moisture. Modern caps made to 'vintage specifications' are as often as not just ordinary (or very ordinary) caps with huge price-tags.
There are special caps, designed for specific applications. Photo-flash caps are one type that springs to mind, and these are designed to withstand massive discharge currents over very short periods. There are many others ... power-factor correction requires caps rated for the full mains AC voltage (with zero internal corona discharge or other damaging effects), handling perhaps 20 Amps or more - all day, every day. We can also find caps that are designed specifically for switchmode power supplies, handling very high ripple currents at high frequencies and often also elevated temperatures. There are safety rated caps used for mains interference suppression that are specially designed to prevent corona discharge with 230V mains, extremely high voltage caps, caps designed for low losses at very (or ultra) high frequency operation ... the list goes on and on, and is well beyond the scope of this article. One common capacitor that is rather extreme is the high voltage cap used in microwave ovens!
Suffice to say that there is a great deal of real engineering needed in these cases, but most are not appropriate (or necessary) for normal audio applications. Such engineering (at the extreme levels) simply doesn't affect what we hear. Standard capacitors are perfectly acceptable for audio, and will rarely (if ever) compromise sound quality unless used beyond their ratings, or a completely inappropriate type is selected for the application (such as a high tempco, high-K multilayer ceramic in a filter circuit).
There are countless other tests that can be performed on capacitors, and they will almost always show that there are differences between 'ideal' and 'real' parts (and/or different types). However, if the capacitor is selected wisely the differences are usually small and don't compromise the circuit's performance. There are exceptions of course, but pretty much by definition that involves using a capacitor that is not suitable for the task (meaning that the selection was not wise). No electronic component is perfect, so it's the designer's job to ensure that a part with the fewest compromises is used when specific performance goals are expected. For example, using an electrolytic capacitor for a precision integrator or sample and hold circuit would be unwise, as would be specifying a (very large and expensive) film+foil capacitor for a high current power supply.
I have never seen the specifications for snake oil as a dielectric, but I expect it to have rather poor performance overall. With 'magic' components, the end user loses (but not the sellers - their profits for 'boutique' parts can be substantial). DIY audio is supposed to be fun, not an endless search for the mystery component that will make everything sound wonderful. Sad news ... that component does not exist.
"The best cap is no cap" is claimed by some. I would much prefer to ensure that no DC flows where it is unwelcome by using a cap than allow a fully DC coupled system to try to destroy speakers given the chance. Perform all the blind tests you can with capacitors used in real circuits. Having done this, if you still think there is a difference (and can prove it with statistically significant data obtained by double-blind testing), then you will probably be the first to do so.
If you wish to let me know that I am wrong, feel free to do so ... but only if you have conducted blind A-B tests and can provide some verifiable data to substantiate your claim. I regularly get e-mails from people who claim that they can hear the difference between components, leads or whatever, but in every case thus far, no blind A-B test method was used. I am not the least bit interested in hearing about the results of any sighted (non-blind) test, because such tests are misleading and simply verify existing opinion. In fact, the 'result' of the entire test is only an opinion, as there is never any data to substantiate the claim.
Electronic equipment is designed using facts and mathematics, not opinion and dogma.
References

1. Capacitor ESR Ratings - Transtronics
2. Improved Spice Models of Aluminum Electrolytic Capacitors for Inverter Applications - Sam G. Parler, Jr., Cornell Dubilier
3. Capacitor Sound - Jul, Sep, Oct, Nov, Dec 2002 and Jan 2003 editions of Electronics World (formerly Wireless World), Cyril Bateman
4. Inductance Of A Straight Wire - Resources for Electrochemistry
5. Self Resonant Frequency Of A Capacitor - Ivor Catt
6. Understand Capacitor Soakage To Optimize Analog Systems - Bob Pease, National Semiconductor
7. Mechanism Of Ageing Characteristics In Capacitors - Murata
8. Y1, Y2 Series Safety Recognized Capacitors - Shenzhen DXM Technology Co., Ltd
9. EMI/RFI Suppression Capacitors - Illinois Capacitor Inc.
10. Why Capacitors Fail, Technical Bulletin #3 - Electrocube
11. Types Of Wound Film Capacitors - U.S. Tech
12. General Technical Information - Vishay Roederstein
13. Coltan - ore used for production of tantalum and niobium (Wikipedia)
14. How To Reduce Acoustic Noise Of MLCCs In Power Applications - TI blog (archived as PDF)
15. Here's What Makes MLCC Dielectrics Different - Kemet
16. Time Dependent Capacitance Drift Of X7R MLCCs Under Exposure To A Constant DC Bias Voltage - eletimes
There used to be a selection of very handy industry and manufacturer info here, but the linked pages have all been moved or deleted. This is now very common, unfortunately.
Elliott Sound Products | Magnetic Phono Pickup Cartridges
There is rather a lot of information on the Net regarding vinyl record pickup (phono) cartridges, and while some is very good, there's also a lot of nonsense. Even manufacturers seem to get things badly wrong, and this surprises me. Considering how long people have been making phono pickups, I fully expected that the information provided would be rather more useful than it often seems to be.
Note that this article concentrates on magnetic cartridges. Piezo ('crystal') pickups are not considered, simply because they do not fulfil the requirements of hi-fi. Most are guaranteed to cause irreparable damage to the vinyl disc in as little as one single playing. In addition, I focus on moving magnet/iron cartridges, as these seem to cause the most problems. Moving coil pickups are (generally) better behaved, but many of the issues are the same regardless of the type of cartridge - the impedance might be quite different, but the problems are simply scaled to suit.
The pickup cartridge is a relatively complex piece of electro-mechanical ingenuity, and requires high precision manufacturing techniques. Much of the internal structure is extremely small, and microscopes are needed to see the tiniest parts. However, no matter how one tries to get around it, the laws of physics still apply. The cartridge itself shows an electrical impedance that needs to be loaded properly if the full frequency response is to be obtained in practice.
Often the expected results are not achieved, and lovers of vinyl have vast numbers of websites and forum pages devoted to their cause. One of the difficulties is that there is a multiplicity of separate issues - every part of the system causes something to happen at some frequency. There is the mechanical resonance of the pickup arm itself (with cartridge attached of course), and there is actually scope for several different resonant effects in this part of the system. Most effects will be noticed at the lowest frequencies, and it is desirable that the resonance be as low as possible (and below the lowest frequency to be reproduced).
Then we have to deal with the cantilever - the lightweight tube that carries the stylus and the moving magnet/iron/coil assembly. Being a mechanical device, it has a resonance too, and it will affect the high frequency end of the spectrum. It is preferable if the resonance is well above the audio range, but this cannot always be achieved in practice.
There is not a lot that we can do about the mechanical resonances in any turntable/arm/pickup assembly, other than a careful choice of the various components used. That this is not an exact science is to understate the matter - almost every manufacturer of these components thinks they have the answer, so it's no wonder that different combinations can sound very different from each other. Unlike with most other music sources (CD, SACD, FM, etc.), these differences are often not particularly subtle, and can be glaringly obvious in some cases.
The options for dealing with mechanical resonances are very limited - other than changing often very expensive equipment. Not so though with the electrical resonance(s), as they are much easier to model. Unfortunately, this doesn't mean that there is an easy fix. Some cartridges seem designed to thwart your every attempt to get a satisfactory result, often because of very high inductance. The basic electrical model of a cartridge is shown in Figure 1, and it is essentially a simple resistor, inductor, capacitor (RLC) filter. The inductance is split and one section is damped by a resistor - this simulates the semi-inductance of the cartridge (see below for more on this topic).
As should be obvious, there is an inevitable (and predictable) relationship between the inductive and capacitive elements, and this is moderated by the included resistances. As the value of inductance and capacitance increases, the resonant frequency falls. The ideal outcome is to ensure that there is no gradual rolloff, and no large peak at the high frequency end. Figure 2 shows the response of a (more or less) typical low inductance cartridge, having 230mH of inductance, and 1.2k winding resistance.
The 47k resistor is the terminating impedance of the phono preamp (this is standard), and the 100pF of capacitance is due to the cable between the cartridge and preamp. In some cases, manufacturers recommend (or at least mention) a 'suitable' range of capacitance, but in many cases the upper limit is much too high. Some suggested loadings are such that there can be a peak of 4dB or more at a frequency well below 20kHz. One I modelled peaked at 8kHz - this might be alright for a DJ playing scratch mixes at 110dB SPL in a nightclub, but is hardly hi-fi and is not likely to be well received at home.
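The position of the resonant peak can be estimated from the basic LC relationship, f₀ = 1 / (2π√(LC)). This is only a rough sketch - it ignores the winding resistance and the semi-inductance damping, both of which shift and flatten the real peak - but it shows why cable capacitance matters so much with a high-inductance source:

```python
import math

def resonant_freq(l_henry, c_farad):
    """Undamped resonant frequency of an LC pair, in Hz."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

L = 0.23  # 230mH, the low-inductance example cartridge discussed in the text

# With ~100pF of cable capacitance the resonance sits well above the audio
# band; raising the capacitance to 500pF pulls it down to roughly 15kHz,
# where the resulting response peak is clearly audible.
f_100p = resonant_freq(L, 100e-12)
f_500p = resonant_freq(L, 500e-12)
print(f"100pF: {f_100p/1e3:.1f} kHz, 500pF: {f_500p/1e3:.1f} kHz")
```

A cartridge with 500mH or more moves both figures down by a further 50% or so, which is why high-inductance pickups are so sensitive to cable choice.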
The red trace shows the response with 100pF of capacitance (typical of the average cable run), and the green trace shows what happens if the capacitance is increased to 500pF. It is fairly obvious that with this cartridge (and most others), the capacitance needs to be kept low. I checked the specifications of a large number of cartridges, and the majority of moving magnet and moving iron types have an inductance of 400mH or more. The highest I've seen in specifications is 930mH, although the test cartridge I used initially appeared to be even higher (based on flawed inductance measurements). Great care is needed to ensure that measurement results reflect reality.
The blue trace in Figure 2 shows the response of the cartridge when the standard 47k resistor is increased to 100k, and capacitance maintained at 100pF.
If you do want to measure the cartridge inductance, use the setup shown in Figure 3. You need to measure at a low frequency to gain a reference. 10Hz is a good place to start, but the cartridge must be essentially resistive at the reference frequency - at least 2 octaves below the frequency where the voltage starts to rise. This reference is used to find the +3dB frequency. When the signal level has increased by 3dB (1.414 times the voltage measured at 10Hz), the inductive reactance (XL) is equal to the DC resistance of the cartridge. Now you can calculate the inductance ...
L = XL / ( 2π × f )

From the test I performed, the following values were obtained ...
Voltage at 10Hz = 26.4mV
+3dB voltage = 37.3mV ( 26.4 × 1.414 )
+3dB frequency = 840Hz
Inductance = 232mH
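The arithmetic is easily sketched in code. At the +3dB frequency XL equals the DC resistance, so L = Rdc / (2π·f). Using the 1.2k winding resistance quoted earlier (an assumption - the 232mH result above implies the measured resistance was fractionally higher):

```python
import math

def inductance_from_3db(r_dc_ohm, f_3db_hz):
    """At the +3dB frequency XL = Rdc, so L = Rdc / (2*pi*f)."""
    return r_dc_ohm / (2.0 * math.pi * f_3db_hz)

# 1.2k winding resistance and an 840Hz +3dB point give roughly 227mH,
# close to the 232mH figure obtained from the actual measurement.
L = inductance_from_3db(1200.0, 840.0)
print(f"{L*1e3:.0f} mH")
```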
To verify that this works in practice, I also modelled the response in a simulator, using the measured and calculated values. The result was close to being an exact match (these data were used for Figures 1 & 2) - the process works!
In the two drawings, the 232mH is made up by 77mH as 'pure' inductance, with 155mH (paralleled by 68k) as the semi-inductance. When modelled in the simulator, this combination matched the voltages measured on the physical cartridge to a degree that one can be reasonably sure that the equivalent circuit is correct.
While it may seem a bit drastic to subject a pickup cartridge to such high voltages (compared to the 5mV or so you get from them), there is no reason to expect that any damage will occur. Even if the stylus and cantilever are deflected (I couldn't detect any movement), it will be far less than that caused by using a stylus brush.
Measurement of the cartridge parameters is not an especially easy undertaking, and determining the model from the measured electrical parameters is also somewhat irksome. One thing that is clear (but mentioned in only one reference I could find [ 1 ]), is that the 'inductance' of the cartridge is actually a 'semi-inductance'. It is imperfect, because of eddy current losses within the magnet/coil assemblies. When inductance figures are provided by the maker, they sometimes (but not often enough) specify the frequency. Knowing this is important to be able to characterise the electrical parameters properly, however the best you can expect is a figure for inductance at an unspecified frequency, and DC resistance. This is not enough to allow you to work out the real effects of loading on the cartridge's frequency response.
Using an inductance meter to measure the cartridge's inductance won't work! The DC resistance is high compared to the inductive reactance, so the meter will lie, and indicate that inductance is much higher than it really is. In addition, the test frequency is determined by the meter, and is unlikely to be appropriate for the task. Most meters don't even tell you what frequency is in use, so you don't get the opportunity to decide if it's appropriate or not (most will satisfy the 'not' criterion). A pickup I measured showed 1.55H (1550mH), which is silly - no cartridge will have that much inductance. However, the actual inductance worked out to be 1.15H, which is still silly and makes the cartridge pretty much unusable for anything other than very casual listening.
When the cartridge is measured, the amplitude rise with increasing frequency does not follow the ~6dB/octave one would expect from a 'perfect' inductor. This is partly because of the finite source impedance (I tested using 47k and 100k), but also because of the losses within the cartridge assembly itself that result in the 'semi-inductance' behaviour.
While I'd love to be able to tell you that I devised a simple formula to allow you to separate the inductance and semi-inductance to obtain a reasonably accurate model, I cannot. I figured out the circuit shown in Figure 1 by trial and error using a simulator - a tedious exercise to put it mildly. Another (completely different) cartridge I measured gave me the following data points ...
Freq. | Voltage (mV) | Error
100 | 28.7 | n/a
200 | 38.3 | 0.50 dB
500 | 76.2 | 0.05 dB
1k | 135 | 1.05 dB
2k | 243 | -0.92 dB
5k | 436 | -0.94 dB
10k | 595 | -3.32 dB
20k | 726 | -4.29 dB

Table 1 - Voltage measured across the second test cartridge
The error column referred to in the table is based on the figure that should be obtained if the inductor were a 'true' inductance, supplied from an infinite voltage via an infinite impedance (so don't fret too much that it can't be achieved). The low frequency end is of little consequence - the LF models perfectly due to the series resistance. At the higher frequencies, it is obvious that the effective inductance falls with increasing frequency. For this particular pickup, the inductance is fine up to somewhere between 2kHz and 5kHz, with the losses becoming more pronounced as the frequency increases further. The measured impedance response of the test cartridge is shown below.
The second test unit was subjected to an input signal so I could determine its parameters. The measurement data are shown in Table 1 (above). The red trace shows the simulated response (based on an ideal inductance), and green is the plot based on the measured values (I only tested this between 100Hz and 20kHz). To take these measurements, a signal generator puts a signal into the cartridge, via the normal 47k resistor. The voltages shown were measured across the cartridge (see the methodology shown in Figure 3). You can see that the green trace starts out with a little more level than the simulated (ideal) response, but is equal at ~1.5kHz, and falls below the ideal response at higher frequencies. This is the direct result of eddy current losses, which show that the 'semi-inductance' is a real phenomenon. As noted earlier, this is not the same cartridge shown in Figures 1 to 3 - it's a completely different unit.
Don't expect the slight loss of inductance at high frequencies to cause reduced attenuation at high frequencies - the signal amplitude will also fall as the losses increase. This too can be modelled, but to do so requires a great deal more complexity in the model, and it can't be verified by any sensible (i.e. non-destructive) test methodology that I can think of. Cartridge manufacturers often use cantilever resonance to attempt to get a flat response up to the highest frequencies, but this can add further complications. For example, a cantilever carrying a dirty stylus will be heavier than one where everything is nice and clean, and will have a slightly lower resonant frequency (and perhaps some additional damping as well). A change in HF response is likely, but it will probably be inaudible amongst the greatly increased distortion caused by the dirty stylus (plus, the vinyl will be damaged as well).
On top of everything else, there can be some interesting (but usually not good) phase anomalies created when any form of EQ is applied, and this is especially true of a mechanical resonance. Whether it actually causes audible problems is unknown (to me at least), but some [ 2 ] claim that the results are very poor. I can't confirm this, but I expect that the audibility effects may be overstated - at least to a degree.
The cable is another issue that must be considered. There is the cable that runs from the headshell to the sockets on the back of the turntable, and also the capacitance of the cable between the turntable and phono preamp. I measured a fairly typical cable, and got a figure of 326pF for 1.2 metres of cable - this is not good, and IMO is generally far too high. By comparison, a 1.5m length of miniature RF coax (RG174/U) was only 155pF (close enough to 100pF/metre). Without extensive further research into cable types and their capacitance (outside the scope of this article), I am unable to make any useful comments on this. The issue with cable capacitance is that if it's too high, the only way to reduce it is to change cables - hit and miss at best. This also applies to phono preamps that have a shunt capacitor built in to the preamp - again, if it's too high, there may be no way to disable it - especially for commercial products that are still under warranty, and lack a switched capacitance option.
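The budgeting here is simple arithmetic: all shunt capacitances (tonearm wiring, interconnect, preamp input) are in parallel and simply add. A quick sketch using the figures measured above (the 47pF preamp input capacitance is a hypothetical example value, not from the text):

```python
def pf_per_metre(c_farad, length_m):
    """Cable capacitance per metre, expressed in pF/m."""
    return c_farad / length_m * 1e12

# Figures from the measurements quoted in the text:
typical_lead = pf_per_metre(326e-12, 1.2)   # ~272 pF/m - far too high
rg174_coax = pf_per_metre(155e-12, 1.5)     # ~103 pF/m - much more usable

# Parallel capacitances add directly. Even with the better coax, an assumed
# 47pF of preamp input capacitance pushes the total past 200pF:
total_pf = 155 + 47
print(f"{typical_lead:.0f} pF/m, {rg174_coax:.0f} pF/m, total {total_pf} pF")
```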
As always, beware of the snake oil! There are some utterly outrageous claims made for all cables, and tone arm/phono leads are no different. It matters not a jot if the cable is 6N pure (6 nines, or 99.9999% purity), and anyone who claims otherwise is lying. The use of Litz wire is common and fairly normal for the tone arm cable, because it needs to be very flexible to cope with swinging back and forth and up and down movements for years on end. Low capacitance is also highly desirable - remember, you can always add capacitance, but you can't take it away. The use of precious metals is a benefit for the contact areas, particularly gold because it doesn't tarnish. Silver cables are just a way to separate you from your money - they don't (and can't) sound 'better'. No double blind test has ever shown that anyone can hear the difference between any two cables with similar inductance and capacitance, regardless of price. Nor can any (other) differences be measured, even with the most sophisticated equipment. Cable distortion? Complete nonsense, provided that all connections are well soldered and wiping surfaces are free of oxides! You don't have to spend $1k/metre to get that.
Ultimately, there is only one way you can accurately characterise the response of any cartridge and cable arrangement, and that is to use a reference disc with recorded tones at different frequencies. This takes everything into account ... electrical characteristics, mechanical resonances, and anything else that may influence the cartridge's response. This includes the RIAA equalised preamp.
While this may ultimately allow you to get perfectly flat frequency response, this still does not guarantee that it will sound any good. It's also unfortunate (but true) that vinyl records don't like to be played over and over again, and display their displeasure by losing the high frequencies first. Test discs aren't cheap, nor are they easy to find any more, so the final adjustment may well end up being purely subjective.
It's little wonder that there is a vast discrepancy between the maker's specifications and (amateur) listener reviews with many phono pickups. With few exceptions, I consider commercial (magazine or Web) reviews to be useless, because it's very rare that any product gets the thumbs down, regardless of how badly it may perform. Unfortunately, any subjective assessment is also likely to be flawed unless it has been conducted using double-blind techniques - a very difficult proposition with phono cartridges.
Resistive and capacitive loading can alter the performance of a phono cartridge rather dramatically at high frequencies, and tone arm resonance can have a significant effect at low frequencies (although hopefully not with quality units). Since this is obviously the case, it's very hard to argue that the accuracy of a phono preamp's RIAA equalisation is especially significant. Certainly, it should be as close as possible, but any deviation of a dB or so either way is of little consequence. This is particularly true since no-one knows (and/or those who do won't say) what other equalisation was applied to the master recording or the disc-cutting lathe. What is important is that the two channels should track each other very well to preserve the stereo image.
Now that we can determine the cartridge parameters, we can set about finding the optimal loading for the cartridge. What we don't know is the effect of any HF boost caused by cantilever resonance, so we can only model based on the electrical parameters. In almost all cases, it is reasonably safe to assume that the lowest possible capacitance will give the flattest response, but I wouldn't want to bet on that. What we do know with certainty is that if the capacitance is higher than desirable, the response will peak at some frequency that's within the audio band (see Figure 2). Add cantilever resonance and any other effects that no-one will tell us about, and the results become highly unpredictable.
In some cases, the cartridge might benefit from a higher than normal load impedance. See Figure 2 again, and note the blue trace. In this case, a higher load resistance means that even less capacitance is tolerable. The blue trace was obtained using the model in Figure 1, but with 100k resistance and 100pF. The capacitance has to be reduced to 30pF to get flat response!
It is probable that you will not be able to get capacitance much lower than ~100pF, although mounting the phono preamp within the turntable eliminates the output cable's contribution altogether. Other than using unshielded cables, this is the best way to minimise the capacitance. Unshielded leads of any kind are generally a really bad idea for phono pickups, because such leads will pick up any interference that is present. Hearing a random radio station or other noises rarely adds anything useful to the recorded material (although there may be exceptions with some 'music' genres).
Given that there is a practical lower limit for the capacitance, cartridges with comparatively low inductance are easier to work with, but they will generally have lower output. In general, we expect these cartridges to have an output level of around 2.5mV to 4mV at 1000Hz, with a 5cm/sec recording velocity. Inductance of around 400mH or lower seems to be the most desirable, but this limits the range of available cartridges quite dramatically, and may still not give you optimum results. In very generalised terms, I suggest that anything over 550mH may cause problems with high frequency response, which will then need to be augmented by cantilever resonance with any realistically achievable cable capacitance.
Phono pickup cartridges exhibit many different effects, many of which we are completely unable to model because the information is simply not available. It should be clear that most cartridges are likely to perform at their best with no more than 100pF ('typical' RCA lead capacitance) of shunt capacitance, although there will be exceptions. In some cases, personal preference will guide the decision that either a higher capacitance or higher load resistance gives a subjectively better result. It's also probable that some cartridges never manage to sound quite right in some systems.
At least one thing should be very clear - the common 'wisdom' that higher capacitance makes cartridges sound dull is obviously wrong. If the capacitance is too high, you will get a resonant peak at some frequency within the audio band, and this will often give the illusion of 'brightness', but the highest frequencies are lost. Figure 2 shows this very well - the peak shown with 500pF is +4dB at about 15kHz, and this will sound very bright indeed. If the cartridge has more inductance, the peak frequency is lowered, and can easily fall below 10kHz.
For reasons that I can't quite fathom, I was (when this was written) unable to find inductance data for any moving coil pickup. Not just the low output varieties, but the high output (around 2mV at 5cm/sec) types as well. It's obvious that the inductance will be much lower than a high impedance moving magnet/iron cartridge, but so too is the recommended load (or terminating) impedance for low impedance MC pickups. At 100 ohms, far less inductance will cause HF rolloff than with a higher impedance, but capacitance becomes irrelevant. No sensible cable will ever have enough capacitance to cause a problem.
A reader has since sent me the information for two moving coil cartridges. He has the spec sheet for the AT07 and AT09 moving coil cartridges, and they give figures for resistance (12Ω) and inductance (12µH). The inductive reactance at 1kHz is a mere 0.075Ω, so the total impedance is barely above 12Ω. I have since looked up the specifications for the AT-ART9, and that shows resistance to be 12Ω, with 25µH inductance (a reactance of only 0.16Ω at 1kHz). The AT-ART7 has an inductance of only 8µH (1kHz). Note that these are very expensive cartridges (around US$1,000!).
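For a series R-L source, the impedance magnitude is |Z| = √(R² + XL²), so at these inductance levels |Z| is essentially just the DC resistance. A quick sketch using the spec-sheet figures quoted above:

```python
import math

def series_rl_impedance(r_ohm, l_henry, f_hz):
    """Magnitude of a series R-L impedance at frequency f: sqrt(R^2 + XL^2)."""
    xl = 2.0 * math.pi * f_hz * l_henry
    return math.sqrt(r_ohm**2 + xl**2)

# AT07/AT09 figures: 12 ohms, 12uH. The 1kHz reactance is only ~0.075 ohm,
# so the impedance magnitude barely differs from the DC resistance.
z = series_rl_impedance(12.0, 12e-6, 1000.0)
print(f"{z:.4f} ohms")
```

This is why capacitive loading is a non-issue for low impedance MC cartridges - the electrical resonance is pushed far beyond the audio band.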
This is probably one reason that the (low output) moving coil construction is thought by many to be 'superior' to moving magnet/iron types. Even high output moving coil pickups are likely to be looked down upon by many an audiophile (for example, those who also consider $10k speaker cables to be a bargain are likely candidates). There seems little doubt that many moving coil cartridges are extremely good - but one is still limited by the available source material, as well as the need for a very low noise 'head' amplifier or an expensive transformer.
The final result can really only be measured using a test disc and the cartridge of choice. Subjective 'listening test' evaluations may well give you a result that you like and can live with, but there is no guarantee that this will be accurate or result in an overall flat response. In the end, it doesn't really matter that much - you listen to your system, and if you like the performance then you have achieved your objective.
Regardless of what you do, there will be discs that sound superb, and others that are rubbish. This probably has nothing to do with your system, but can be the result of over-enthusiastic equalisation or compression during mastering or cutting. It is unrealistic to expect that everything will sound good. This doesn't happen with CDs, SACDs, FM, DAB, Blu-Ray or any other medium, and to expect it from vinyl (with all its additional complexities) is ... well, ... unrealistic.
Elliott Sound Products | CFB Vs. VFB |
The vast majority of common opamps and power amps use voltage feedback. Current feedback used to be common for early power amps (most often using a single supply), and was also used in valve (vacuum tube) power amplifiers. In this article, we'll look at the differences, which in many cases are surprisingly subtle. Despite the term 'current feedback' there is always a voltage present at the feedback node, and a number of writers on the topic have disputed the term 'current feedback'. In most cases they're wrong, and current feedback opamps (while uncommon) offer some significant advantages.
Current feedback offers some benefits with power amplifiers as well. However, one thing that CFB amplifiers are not designed for is high DC accuracy. This is rarely a major problem in the applications where CFB opamps are used, but it's a significant disadvantage with an audio power amplifier, and one of the main reasons they fell from favour. Almost all CFB power amps use capacitor coupling from the amp to the load (the speaker), because it's hard to minimise the DC offset.
It's important to understand that there are two completely different definitions for current feedback. The first is where the amp is designed to provide a constant current through the load (a transconductance amplifier), or uses a mixture of voltage and current feedback to obtain a specified output impedance that's significantly greater than the 'ideal' zero ohms. The Project 27 guitar amplifier is a case in point, where the output impedance is deliberately raised to allow the speaker to 'do its own thing' as is expected for a guitar amp. Current drive in this form is also used with reverb tanks, and many other inductive transducers.
The second definition is applicable to Project 37 (DoZ Preamp) and Project 217 'practice' amplifier. In these cases, the feedback is applied as a current into a low-impedance inverting input. Unlike a VFB amplifier which has a pair of high-impedance inputs, in a CFB circuit the inverting and non-inverting inputs have very different input impedances.
The CFB amplifier (or opamp) sacrifices DC offset performance for wide bandwidth and (usually) a much greater phase margin when feedback is applied. The bandwidth of a CFB amplifier is determined by the ft of the transistors (and perhaps a Miller [dominant pole] capacitor), but the design means that the resistance of the feedback resistor is the dominant influence. The feedback resistance is almost always a comparatively low value. Unlike a VFB circuit, using equal value resistors for the two inputs does not improve the DC offset, but makes it worse!
A great deal of the literature you'll find for CFB amps and opamps concentrates on advanced maths, and tends to be analytical, rather than simple explanations for the internal processes. A perfect example follows, including 'equivalent circuits' that are (IMO) not very helpful for beginners, and aren't even much use for anyone other than an academic in the field. That doesn't describe me, and I doubt it describes many of my readers either.
The following is a quote from the TI application Report [ 1 ] ...
The ideal VFB opamp model is a powerful tool that aids in understanding basic VFB opamp operation. There is also an ideal model for the CFB opamp. Figure 1A shows the VFB ideal model and Figure 1B shows the CFB ideal model.
Figure 1.1 - Voltage Feedback vs. Current Feedback Ideal Models
In a VFB opamp ...
Vo = a × Ve
Where [ Ve = Vp - Vn ] is called the error voltage and [ a ] is the open loop voltage gain of the amplifier.
In a CFB opamp ...
Vo = ie × Zt
Where [ ie ] is called the error current and [ Zt ] is the open loop trans-impedance gain of the amplifier. An amplifier where the output is a voltage that depends on the input current is called a trans-impedance amplifier because the transfer function equates to an impedance.
Vo / ie = Zt
The above formulae are also from the TI Application Report, and Figure 1 is adapted from the same document. While it's claimed that the 'Ideal Models' are a 'powerful' way to understand operation, this is probably open to dispute, especially by non-engineers. The formulae are also not generally useful for real-life applications, and while there are many more formulae that can be found in the literature cited in the references, most are not particularly helpful for anything other than a theoretical understanding. As always, I will focus on practical examples, all of which have been simulated to obtain the results claimed for each circuit.
Feedback is applied in the same way (at least externally) for both types of opamp. Either can be used in inverting or non-inverting mode, but CFB opamps generally have much lower values for the feedback resistors. If used in inverting mode, that means that input impedance is far lower than for VFB opamps, with values rarely exceeding 1k. With VFB opamps, the feedback drives (inasmuch as is possible) the error voltage to zero, while a CFB opamp drives the error current to zero.
Obtaining an actual zero error voltage or current is never possible, but it's more convenient to assume zero than to wrestle with formulae to get an answer that's not useful anyway (due largely to resistor tolerances, which are often the dominant error source). With any high-gain circuit, the error terms are very small. For example, if a circuit has an open-loop gain of 60dB (× 1,000), the error is 1mV/V.
Most opamps (including current feedback types) have up to 100dB (× 100,000) open-loop gain, so the error is closer to 10µV/V. Trying to include that in common feedback formulae is pointless, because normal resistors will provide a far greater error unless they are very close tolerance - at least 0.01%, preferably better. When using common 1% resistors, any error introduced due to finite gain is minimal. This assumes that the open-loop gain is sufficiently greater than the closed loop gain to ensure that the gain is determined by the feedback components, not the amplifying stage.
If a stage has an open-loop gain of 100 and is configured for a gain of 10 with feedback, the gain will be 9.1 - a significant error. To get within 1% of the required gain (× 10), the open-loop gain needs to be at least 1,000 (closed loop gain of 9.9, a 1% error). With open-loop gain of 10,000 (80dB), the gain is 9.99, an error of only 0.1%. These criteria apply whether the amplifying stage is configured for voltage or current feedback. The amplifying device(s) are irrelevant.
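The figures above follow directly from the standard closed-loop gain formula, G = A / (1 + A·β), where A is the open-loop gain and β is the feedback fraction (1/10 for a target gain of 10). A quick check reproduces the numbers in the text:

```python
def closed_loop_gain(a_open, target_gain):
    """Closed-loop gain of a feedback amplifier: A / (1 + A*beta)."""
    beta = 1.0 / target_gain
    return a_open / (1.0 + a_open * beta)

# Target gain of 10; watch the error shrink as open-loop gain increases.
for a in (100, 1000, 10000):
    print(f"A = {a:>6}: gain = {closed_loop_gain(a, 10):.2f}")
# A =    100: gain = 9.09
# A =   1000: gain = 9.90
# A =  10000: gain = 9.99
```

As the text notes, the same relationship holds whether the stage uses voltage or current feedback - only the mechanism that generates the open-loop gain differs.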
The circuit below shows a typical VFB opamp, in this case a µA741 (or ½ 1458 dual). The inputs go to Q1 and Q2, which are emitter-followers in cascode with Q3 and Q4 to create the error amplifier. Both inputs are high impedance, and the bandwidth is determined almost completely by the 30pF capacitor, the dominant-pole. The input stage has a current mirror as the collector load (Q5, Q6 and Q7).
Q17, Q18 is the voltage amplifier stage (VAS), which uses Q13 as a constant current collector load for improved linearity. The circuit is very different from most power amplifiers, although the principles are pretty much identical. The output stage is (predictably) designed for much lower current. Feedback is from the output to the 'In-' terminal, and is applied in exactly the same way as any other IC opamp. Rin is 10k, Rfb1 is 10k, Rfb2 is 5k with Cfb as 22µF (a -3dB frequency of 1.45Hz).
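The feedback network values quoted above can be checked with two one-line formulae: the non-inverting gain is 1 + Rfb1/Rfb2, and the low-frequency -3dB point is set by Cfb working against Rfb2. A sketch, using the values from the text:

```python
# Gain and LF corner of the Figure 2.1 feedback network.
# Gain = 1 + Rfb1/Rfb2; the -3dB point is f = 1 / (2*pi*Rfb2*Cfb).
import math

RFB1, RFB2, CFB = 10e3, 5e3, 22e-6   # values quoted in the text

gain = 1 + RFB1 / RFB2
f_3db = 1 / (2 * math.pi * RFB2 * CFB)
print(f"gain = x{gain:.0f}, LF -3dB = {f_3db:.2f} Hz")
```

The result agrees with the ×3 gain and 1.45Hz corner stated above.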
Figure 2.1 - Typical Voltage Feedback Opamp (µA741)
Rather than 'invent' a schematic, I've elected to use the µA741 as an example. It's not a fast device by any stretch of the imagination, with a quoted slew-rate of only 0.5V/ µs, and a unity gain bandwidth of just 1MHz. I used it here simply because it's one of the few opamps with a (more-or-less) complete schematic, and I had already converted it to my 'normal' ESP drawing style. It's also instructive in its own right, and worthy of analysis (by you, not me).
Unlike the CFB opamp circuits shown, you don't need to build a discrete VFB opamp, and the sensible approach is to use something that you have on hand if you wish to experiment. The µA741 might be a bit too pedestrian for any useful high-frequency response, but it's also a good starting point.
The next circuit is for a CFB opamp, using common, readily available transistors. I didn't use a commercially available circuit, but one that's widely referenced on the Net. The input ('In+') stage is a buffer, using complementary emitter followers. Feedback is applied to the emitters of Q3 and Q4, which is a low-impedance point in the circuit. Although feedback is applied in the same way (a feedback resistor from 'Out' to 'In-' (Rfb1), with a second resistor from 'In-' to ground - Rfb2), the resistors used will be much lower values than you'd use for a VFB opamp delivering the same gain.
Figure 3.1 - Typical Current Feedback Opamp
When feedback is applied (Rfb1 as 1k, Rfb2 as 500Ω with 220µF, and Rin as 10k to ground), DC offset is 32mV, and doesn't change appreciably regardless of the resistance from 'In+' to ground. With values from 100Ω to 22k, it remains between 31 and 32mV. It's easy to see why it's claimed that CFB opamps are not recommended where DC offset performance is important. Note that the circuit is fully balanced (inasmuch as NPN and PNP transistors can be considered 'equal but opposite'), but DC offset is still terrible.
The LF -3dB frequency is still 1.45Hz. The HF -3dB frequency is over 10MHz, and a healthy 8V peak signal (5.7V RMS) is available at 5MHz (distortion is under 2%). If Rfb1 is reduced to 500Ω and Rfb2 is 250Ω, the -3dB frequency is increased to 28MHz! Note that that is the only change. The bandwidth is inversely proportional to the feedback resistance, so as Rfb1 is reduced, the bandwidth is increased. Even if the gain is increased from three to six (Rfb2 at 100Ω), the bandwidth still extends to 23.5MHz.
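To a first approximation, CFB bandwidth scales inversely with Rfb1, anchored at one known point. A sketch using the 10MHz-at-1k reference from the text (an assumed first-order model, not a device equation):

```python
# First-order sketch of the CFB bandwidth vs. Rfb1 trade-off: bandwidth
# scales roughly inversely with the feedback resistor. The 10 MHz @ 1k
# reference point is from the text; real parts deviate at the extremes.

REF_RFB1, REF_BW = 1e3, 10e6   # reference: 1k feedback, ~10 MHz

def est_bandwidth(rfb1: float) -> float:
    """Estimated -3dB bandwidth for a given Rfb1 (simple inverse law)."""
    return REF_BW * REF_RFB1 / rfb1

for r in (500, 1e3, 2e3):
    print(f"Rfb1 = {r:>6.0f} ohm  ->  ~{est_bandwidth(r)/1e6:.0f} MHz")
```

Note that the simulated 28MHz at 500Ω quoted above is better than this simple inverse law predicts, so treat the model as a rough guide only; datasheet curves are the real reference.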
This is where the CFB circuit excels. Even with no compensation capacitor (which is essential in a VFB circuit), the Figure 3.1 opamp remains stable at any gain you desire - at least in the simulator. Real life is different of course, but having recently played around with a small CFB power amplifier, I know that extraordinary high-frequency performance is quite easy to achieve. In most cases it is necessary to include a compensation capacitor, if only to prevent the circuit from amplifying RF or oscillating due to stray capacitance from output to input (only a few pF is usually needed).
While balanced CFB circuits have slightly better DC performance than those with a single transistor input, it's still mediocre. The Figure 3.1 circuit is simplified from the version you'll see elsewhere (including two of the references), but it's still somewhat 'over the top' for anyone who wants to play with the idea, so a simpler version is shown below. The Figure 3.2 circuit has actually been on the ESP website for a long time, as the Project 37 DoZ preamplifier, but without the output buffer stage.
Figure 3.2 - P37 (DoZ - Modified) Current Feedback Opamp
As you'd expect, the performance can't match that of the Figure 3.1 circuit. With only a single input transistor, the DC offset is large, and VR1 is essential to reduce it to something 'sensible' (±50mV or so). For the convenience of comparisons, it's set for the same gain as Figures 2.1 & 3.1, at ×3. The simulator claims -3dB at 10MHz (without C3, which is optional), and based on tests I've run that's probably about right, but not with full output level. Getting 5.5V RMS at 100kHz is easy, and that's respectable for such a simple amplifier. Note that I added an output buffer stage so the circuit won't struggle with low-impedance feedback networks. This was not necessary with the original P37.
The frequency response of the two circuits is interesting. The VFB opamp is the µA741 shown in Figure 2.1, and the only thing that was changed was the feedback resistors. The ratio was 2:1 (Rfb1:Rfb2) in each case, and the response was plotted from 10kHz to 100MHz. While there is some variation with the VFB opamp, it's only slight. In fact it's so small that the traces are perfectly overlaid and they can't be separated, but all four are in the graph. Note that in both cases (VFB and CFB), the response is theoretical, and would typically only apply at low signal levels.
Figure 4.1 - Fig 2.1 VFB Opamp Response
In contrast, when the feedback resistors are changed with a CFB opamp (Figure 3.2 circuit, without C3), the response change is very apparent. Nothing else was changed - only the feedback resistors, and always with a ratio of 2:1 (Rfb1 and Rfb2 respectively). As the feedback resistance is reduced, the bandwidth is increased. This is completely normal with all CFB opamps, and it's usually included in the datasheet in graphical form. This makes it easy to control the frequency response simply by selecting the value of Rfb1, with Rfb2 setting the gain.
Figure 4.2 - Fig. 3.2 CFB Opamp Response
I also measured the rise and fall times for both circuits. The VFB opamp (µA741) managed 0.54V/µs rise and fall, while the CFB opamp was much faster, with 52V/µs rise time and a very fast 329V/µs fall time. These were both taken with Rfb1 at 2k, so it's possibly pessimistic for the CFB opamp, but as the frequency response shows, it makes no appreciable difference for the VFB version. With all VFB opamps, the slew rate is determined by the dominant pole capacitor, and feedback doesn't change that. Without C3 (10pF) the CFB opamp shows ringing when a 2k resistor is used for Rfb1. Adding C3 reduces the slew-rate to 12V/µs rise and 55V/µs fall.
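Slew rate also sets the full-power bandwidth: the highest frequency at which a sine wave of a given peak amplitude can be reproduced without slew limiting, f = SR / (2π·Vpeak). A sketch using the slew rates above and the 8V peak level mentioned earlier:

```python
# Full-power bandwidth from slew rate: f_max = SR / (2*pi*Vpeak).
# Slew rates are those quoted in the text; 8 V peak is the example level.
import math

def full_power_bw(slew_v_per_us: float, v_peak: float) -> float:
    """Maximum full-amplitude sinewave frequency in Hz."""
    return slew_v_per_us * 1e6 / (2 * math.pi * v_peak)

print(f"uA741 (0.5 V/us, 8 V pk): {full_power_bw(0.5, 8)/1e3:.1f} kHz")
print(f"CFB   (52 V/us,  8 V pk): {full_power_bw(52, 8)/1e3:.0f} kHz")
```

This makes the difference stark: the µA741 slew-limits below 10kHz at 8V peak, while the CFB circuit manages over 1MHz.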
Figure 4.3 - Fig. 3.2 CFB Opamp Transient Response
The rise and fall times for the Figure 3.2 CFB opamp are shown above. This shows the output response vs. the input, and the ringing is clearly evident. The graph was taken without C3 (10pF) to demonstrate the 'worst-case' behaviour. The ringing can be predicted from the frequency response (brown trace, 2k for Rfb1) which shows a 5dB peak at around 15MHz. Note that the rise and fall time of the input signal is 5ns, which is easy to get in a simulator, but a little harder in real life.
The slew rates are different due to the VAS transistor (Q4). It can turn on very quickly so fall time is short, but it takes longer to turn off again, due to base storage within the transistor itself. This is made worse when the transistor is driven into saturation (minimum collector voltage). A transistor specifically designed for high-frequency operation (such as an RF transistor) will improve this. The wider bandwidth will make the response peak greater, so compensation becomes a requirement rather than an option.
Another major difference is the open-loop gain. This is measured using only the DC feedback resistor (Rfb1), with an 'infinitely large' capacitance from the inverting input to ground. This is easy to do in a simulator, but is somewhat more difficult to do on the workbench (to put it mildly). Typically, the capacitance used will be over 1F (yes, 1 Farad or more).
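The reason the capacitance must be so large is easily seen from the corner frequency it forms with the feedback resistor. A sketch (assuming the 2k Rfb1 used elsewhere in the text):

```python
# Why the open-loop measurement needs an 'infinitely large' capacitor:
# the cap working against Rfb1 (2k assumed, as elsewhere in the text)
# must put the feedback corner far below any frequency of interest.
import math

def corner_hz(r: float, c: float) -> float:
    """-3dB frequency of a simple RC: f = 1 / (2*pi*R*C)."""
    return 1 / (2 * math.pi * r * c)

print(f"2k + 1 F      -> {corner_hz(2e3, 1.0)*1e6:.0f} uHz")
print(f"2k + 1,000 uF -> {corner_hz(2e3, 1e-3):.3f} Hz")
```

With 1F the corner is around 80µHz, so the loop is effectively open at any audio (or sub-audio) frequency; even 1,000µF only gets you to 0.08Hz, which is why the simulator is so much more convenient than the workbench here.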
I also simulated a TL072, which has a slew-rate of 13V/µs. Like the µA741, it showed an almost identical response when the feedback resistors were changed while retaining the same ratios. I varied Rfb1 from 2k to 100k, and Rfb2 from 1k to 50k. The upper -3dB frequency remained at 1.3MHz in the simulator, which agrees with the datasheet. At that frequency, there is almost zero feedback, because the IC runs out of gain (according to the datasheet, unity gain bandwidth is 3MHz). This is what we should expect, as the TL07x series was not designed for high frequency operation.
For the µA741 VFB circuit, open-loop gain reaches 107dB at low frequency, but is 3dB down at only 5Hz, and rolls off at 6dB/ octave. The gain is reduced to 21dB at 100kHz. In contrast, even the simple CFB circuit (Figure 3.2) has an open-loop gain of 80dB at low frequencies, 74dB at 7kHz, and has 57dB of gain at 100kHz. The Figure 3.1 CFB opamp has slightly higher low-frequency gain (87dB), and it rolls off later (-3dB at 14kHz). At 100kHz it still has 70dB of gain. This was simulated using a 2k feedback resistor (Rfb1).
You may well ask why open-loop gain is of any interest. It lets you determine how much feedback is applied at any given frequency. If an audio preamp (or power amp) is expected to have 30dB of gain, should the open loop gain at 20kHz be only (say) 34dB (rolling off at 6dB/ octave from 100Hz to 20kHz), then there's only about 4dB of feedback available, where there may be 80dB at 100Hz (for example). The ability of the feedback to reduce distortion is severely compromised by so little reserve gain. As a result, distortion is increased ... as you would expect.
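To a first approximation, the feedback 'reserve' at any frequency is just the difference between open-loop and closed-loop gain in dB (the approximation is loose when the difference is small). A sketch using illustrative open-loop figures:

```python
# Feedback reserve, to a first approximation:
# feedback (dB) ~ open-loop gain (dB) - closed-loop gain (dB).
# Distortion reduction is roughly proportional to this difference.
# The open-loop figures below are illustrative, not measured.

def feedback_db(a_ol_db: float, a_cl_db: float) -> float:
    """Approximate available feedback in dB."""
    return a_ol_db - a_cl_db

for f, a_ol in ((100, 110), (1_000, 90), (20_000, 34)):
    print(f"{f:>6} Hz: open-loop {a_ol} dB, closed-loop 30 dB "
          f"-> ~{feedback_db(a_ol, 30)} dB of feedback")
```

The point is that distortion performance at 20kHz can be far worse than at 100Hz simply because the reserve gain has evaporated, not because the feedback 'creates' distortion.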
Many people complain that feedback increases the amplitude of high-order harmonics, but fail to understand that this isn't usually the case at all¹. Yes, upper harmonics may appear to rise alarmingly, but that's because there isn't enough gain at those frequencies for the feedback to be effective. It's usually not so much that the harmonics are increased, they aren't suppressed if there's not enough feedback. For feedback to be effective, there needs to be a lot of it, and the circuitry needs to have enough open-loop bandwidth to ensure that the feedback remains effective over the widest frequency range possible. This is harder with voltage feedback because of the requirement for a dominant-pole capacitor. However, with any competent opamp currently available, it's rarely a problem unless you expect a gain of (say) 100 (40dB) from a single stage.
¹ There are some instances where feedback can increase the level of harmonics, and this is covered in the article Distortion & Feedback. In most cases, this is only possible using circuits that are designed to show the effect, which is not helpful when considering 'real-world' circuitry. With most traditional designs, the increase in harmonic levels is due only to reduced feedback at high frequencies.
The poor DC performance of a CFB amplifier (whether small-signal or a power amplifier) can be improved by using a DC servo. This will invariably be a VFB opamp, selected for its low-frequency and DC performance. If properly configured it will have no effect on the audio. DC servo circuits are covered in detail in the article DC Servos - Tips, Traps & Applications. Figure 5 shows a modified version of the Figure 3.2 circuit, with the trimpot removed and replaced by the servo circuit (U1, R9, R10, C2 and C3). No output capacitor is used.
Figure 5 - CFB Opamp With DC Servo
By adding a DC servo, the output DC offset is reduced to (roughly) the worst-case offset voltage of the servo opamp, plus a contribution from its input offset current. For most 'ordinary' opamps the offset should be less than ±2mV, and if you need better than that you can use an 'exotic' opamp. Ideally, the opamp will have JFET inputs so the servo capacitance (C2) is kept to a reasonable value. With the servo, the output DC will be minimised even if the temperature of the circuit changes (especially Q1), something that isn't possible using the trimpot. Servos aren't without their own problems of course, but in this circuit there's nothing that will create any issues.
The servo does have some influence on the very low frequency performance, but since CFB circuits are generally used to get good high frequency performance, this should not cause any degradation. I suggest that the reader also reads the 'DC Servos' article, as that has full details of how it works. A servo can be applied to any CFB amplifier if good DC performance is necessary. The simulator claims that the DC offset will be in the order of 130µV, which is about what I'd normally expect.
All servo systems have a settling time, determined by the filter time constant. With 1MΩ and 1µF as shown, the time constant is 1 second, but it will typically take at least twice that before the output voltage is close to zero. To avoid noise (typically a 'thump') when power is applied, a muting circuit is necessary if the noise will cause problems.
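The settling behaviour follows the usual exponential decay, exp(-t/τ), so a couple of time constants are needed before the offset is effectively gone. A sketch with the 1MΩ/1µF values from the text:

```python
# Servo settling: with R = 1M and C = 1u the time constant is 1 second.
# The residual offset decays as exp(-t/tau), so 2-3 time constants are
# needed before the output is close to zero.
import math

R, C = 1e6, 1e-6
tau = R * C   # 1 second

for n in (1, 2, 3):
    remaining = math.exp(-n) * 100
    print(f"after {n} s ({n} tau): ~{remaining:.0f}% of initial offset remains")
```

After one second about 37% of the initial offset remains, falling to 5% after three seconds, which is why the text suggests allowing at least twice the time constant (and why a muting circuit may be needed at power-on).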
If you're used to looking at the specifications for VFB opamps, you could be forgiven for thinking that the idea of an opamp working up to hundreds of MHz can't be right. Not too many years ago you'd have been correct, as CFB opamps only became readily available in the 1990s. They remain a niche product, and most people will never have used one or experimented with them. As I mentioned above, you may have built one without realising it, based on a couple of ESP projects. However, these were never intended to be used for radio frequencies and the subtleties would have escaped notice.
Figure 6 - THS6012 Frequency Response Vs. Feedback Resistance
The above graph is adapted from one in the datasheet for the THS6012 CFB differential line driver, specifically designed for ADSL (which is now a distant memory as cable or fibre-optic broadband has become available). Back in 2001, TI sent me a selection of CFB opamps to evaluate as headphone drivers. The THS6012 excelled in this role, but was unfortunately only available in an SMD package, and was very difficult to mount on a PCB as it had the heatsink tab on the underside of the package.
I ran many tests on it, and it could not really be faulted. That's a shame, because its performance was exemplary, but the low input impedance, heatsink requirements and the package made it impractical to use in a project.
I have deliberately left out the complex formulae that can be used to analyse these circuits mathematically. Most readers won't be interested, and for the few who do want to perform detailed analysis, the references have everything you need. Like many references, expect to find errors - it's very hard to ensure that all details are exactly right, and you'll quickly discover that the Figure 3.1 circuit has been used as the 'gold standard' for CFB opamps. There are more complex versions as well, but it's already a fairly daunting circuit (I'm certainly not about to build one).
For the most part, CFB opamps will remain a curiosity for most hobbyists, unless working with RF. Radio (and video) frequencies are easily handled by many readily available CFB opamps. Frequency response to over 300MHz is not uncommon, with slew rates of 1kV/µs or more. These are specialised, but even 'ordinary' CFB opamps are capable of 100MHz bandwidth (at low output levels). Power amplifiers are a different matter. A 'practice' amplifier (for learning how power amps work) is published as Project 217, and it uses current feedback. It can't achieve MHz bandwidth, but that's a limitation that was deliberately imposed to ensure stability. There are a few other power amps that use current feedback, including Project 36 (DoZ) and its preamp, Project 37.
As noted in the introduction, one must be careful to differentiate between the two types of current feedback. Project 27 (guitar amplifier) uses current feedback too, but to monitor the output current and adjust the gain accordingly. The amp itself uses a VFB circuit, rearranged to provide current feedback. While this may create a little confusion at first, the two types are very different. To complicate the situation even more, it's quite simple to have a 'true' CFB amplifier reconfigured (via the feedback network) to provide a defined output current, which I suppose would make it a 'current feedback - current feedback' amplifier!
Elliott Sound Products | Cinema Sound
Before you read any of this article, I must stress that the points made are not in isolation, and apply equally to commercial cinemas/ theatres and home theatre. Many home theatre products have 'room equalisation' facilities, and they don't work in exactly the same way that the commercial cinema systems don't work. I strongly suggest that the reader doesn't simply believe (or disbelieve) what's in this article, but does some proper research and reads/ watches presentations from established experts in the field of sound reproduction.
The assertions made here are not intended to 'bash' the industry, but to point out that what they do in cinemas does not work. It doesn't work anywhere else either, but there seem to be factions who not only believe that the processes do work, but will attack anyone who says otherwise. A loudspeaker needs to have a reasonably flat frequency response (with no resonant peaks), and a directivity index (DI) that is consistent across the frequency range. It's inevitable that there will be greater directivity as the frequency increases, but sudden changes in DI cause problems with the reflected sounds that not only affect what we hear, but also what is measured. Adding EQ does not and can not ever correct problems with reflected sounds, but that's exactly what the established practices attempt to do.
I strongly suggest that the interested reader look at Lenard Audio - Cinema Sound before reading this. Much of the material there is a collaboration between John Burnett and myself, and is based on our research and experiences when developing possibly the largest sound system ever created for commercial cinemas, and applying that research to a cinema installation in Sydney.
I have left out most of the history and many other details, so I could concentrate on the one major problem - the sound system and the contentious X-Curve alignment procedure. While many people may consider the sound in their local cinema to be 'good' or even 'very good', in reality this is probably not the case. Because we are watching the movie while listening, we become absorbed in the plot, action and dialogue, and the sound usually has enough dynamics to reinforce the overall experience.
The situation will be found to be very different if the cinema-goer dons a blindfold, and just listens to the sound. Without the image, the deficiencies in the sound quality become very apparent. In this day and age, we have to wonder how this could be - the sound should be superb, and far better than most people can ever hope for with their home cinema system.
The Dolby® CP650 processor (and others of its ilk) is a very versatile piece of equipment, and provides everything one could ever need to set up a high quality cinema system. This being the case, why is it that so many commercial cinemas sound so ... mediocre?
To understand the reasons, we need to examine the setup process in some detail. There is also an absolute requirement that we should understand general acoustics principles. One of these (not often voiced, or at least not in these words) is the key to understanding what goes wrong ...
You cannot correct time with amplitude!
Equalisers affect the amplitude of different frequencies, but cannot do anything to correct for room effects caused by reflected sound. Some background is needed before these concepts become clear. Most of the problems in any enclosed space are the result of reverberation and/or echoes, caused by insufficient acoustic damping, combined with flat, hard and parallel walls. In general, most cinemas have enough absorbent, diffractive and diffusive surfaces to make the sound acceptable for the audience, however taking measurements to 'align' a cinema sound system is another matter.
Any claim that it is (somehow) possible to 'align' a room or other space using EQ is much like the concept of foreign languages. Most people know that speaking loudly makes no difference if the other person doesn't speak your language. Speaking slowly doesn't help either. Despite this, people persist in doing exactly these things - speak slowly and loudly and the other person must understand. Right? WRONG!
We can phrase the above a little differently too ...
You cannot equalise a room!
The above is a very common misconception, and any claim that 'room EQ' is even possible should be treated with suspicion and/or contempt. Any room has reflective, absorbent and diffractive characteristics, depending on furniture, acoustic treatment, wall, floor and ceiling materials, etc., etc. If you try to 'equalise' a room, you are also 'equalising' the coffee table, so if it's moved (or some books are stacked on it), you would need to re-'equalise' the room ... every time something changed. This is a silly concept, but is easily proved with a microphone, simple (PC based) spectrum analyser and a pink noise source. I've done it, and yes, I can see the difference if I move the mic, coffee table or listening chair.
If I were to apply equalisation to 'correct' the response, the result would be simply awful! I've done it as an experiment, and it doesn't work, simply because the microphone 'hears' things that our ear/brain combination knows are irrelevant and ignores. Microphones connected to analysers are dumb, and cannot differentiate between things that are audible and those that are not. Multiple microphones only make matters worse, never better.
There is, however, an exception. Room EQ can be applied at very low frequencies, where the wavelength is large compared to the room dimensions. This is the region where the room is in 'pressure mode', and normal wave propagation cannot apply because the distances are too small. It's also worth mentioning that because of this phenomenon, and contrary to common 'wisdom', you can have deep bass in a small room - claims to the contrary are simply nonsense. After all, headphones can manage extremely deep bass, and their enclosed space is tiny. However, use caution and common sense if you do apply EQ to the lowest octaves.
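A rough guide to where 'pressure mode' behaviour begins is the lowest axial mode of the room, f = c / (2L) for the longest dimension L; below that, normal standing-wave behaviour cannot develop. A sketch (the room sizes are assumed examples, not from the article):

```python
# Lowest axial room mode: f = c / (2 * L) for the longest dimension L.
# Below this frequency the room tends towards 'pressure mode' behaviour.
# Room lengths below are assumed examples for illustration.

C_SOUND = 343.0  # speed of sound in m/s at ~20 degrees C

def lowest_axial_mode(length_m: float) -> float:
    """Frequency (Hz) of the lowest axial standing wave."""
    return C_SOUND / (2 * length_m)

for L in (5.0, 10.0, 25.0):
    print(f"{L:>4.0f} m room: lowest axial mode ~{lowest_axial_mode(L):.1f} Hz")
```

A 5 metre room has its lowest mode around 34Hz, so the bottom octave is largely in the pressure region - which is exactly why deep bass in a small room is not the impossibility it's often claimed to be.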
As explained in this article and as anyone who has tried knows only too well, sound system measurements are fraught with difficulties. A lack of understanding by installers and misleading (or incorrect) instructions ensure that few cinemas will ever sound their best. What we can expect is that many cinema theatres will achieve an acceptable level of mediocrity, but will completely fail to delight any audience.
There are other issues as well - poorly designed and/or implemented speaker systems and underpowered amplifiers being just two of them. Because this article is primarily concerned with the practice of equalisation, these other problems will not be discussed here except in passing. Suffice to say that it is not uncommon for subwoofers to fail with alarming regularity in some theatres, because they aren't really subwoofers at all. Vented enclosures tuned to perhaps 35Hz, with no high pass filter to prevent excessive excursion at frequencies below cutoff, are almost designed to fail - especially if EQ is used to attempt to achieve 25Hz at the low end.
Throughout this article, I shall use the terms 'align' and 'calibrate' in single quotes. This implies that they are just words, and their normal meaning does not apply. True calibration implies that the system meets a proper (and realistic) standard, and that the same calibration of two or more different (but similar) items will result in very close correlation between them. Alignment has a very similar meaning, but neither term applies to the process of 'calibrating' a movie theatre.
Despite my very negative opinion of the X-Curve and the whole process of cinema equalisation, it must be taken in context - this is 2012 (at the time of writing), and no longer 1970. While measurement capabilities have improved, the X-Curve seems to be set in stone, even though it's arguably well past its use-by date. When introduced, the original 'Academy Curve' (which preceded the X-Curve) was an attempt to solve known problems, and provide the cinema goer with some consistency. The X-Curve followed this same principle, but allowed for the fact that many of the previous issues were (more or less) solved. Excessive noise and distortion were no longer dominant issues once magnetic strips were added to the film for sound (rather than the earlier optical sound-track).
In this respect, the whole process must be put into perspective, and the historical reasons considered. However, we now have the ability (but perhaps not the will) to create sound systems that are vastly superior to those that came before. Most of the original issues are now just memories, but there is still a dogged belief that the X-Curve is still relevant, and that far-field measurements are somehow useful. They weren't before, and they aren't now.
There are two types of echo that occur inside a room, large or small, but some effects don't become audible unless the room is large enough. Reverb is known to most people - it is immediately audible in a tiled bathroom, and there is a noticeable 'enrichment' of tonality within that environment. Many people like to sing in the shower for just that reason - the reverb makes them sound better!
In a larger room intended for the reproduction of speech, excess reverb makes the sound difficult to understand. Indeed, towards the back of a long room, the level of reverb commonly exceeds the level of the direct sound (this is known in acoustics as the 'far' or reverberant field).
The other issue is commonly referred to as a 'slap' echo. A sharp noise causes an almost immediate and very distinct echo, often followed by a 'flutter' effect as it dies away. Anyone who has tried to use a simple digital echo/ reverb unit will have heard this effect. The reverb is a series of rapid repeats of the original sound, dying away to nothing. If the walls are non-parallel but reflective, you may hear a 'chasing flutter' that moves from one end of the theatre to the other. Slap echoes occur in small rooms, but our hearing mechanism cannot respond to the very short delay times - typically only around 10ms for a 3.5 metre room.
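The delay figures are simple distance-over-speed arithmetic: a reflection path of d metres arrives d/343 seconds after the direct sound. A sketch:

```python
# Reflection delay: time for sound to traverse a path of d metres.
# The 3.5 m case matches the small-room figure quoted in the text;
# the 30 m case is an assumed large-cinema path for comparison.

C_SOUND = 343.0  # speed of sound in m/s

def echo_delay_ms(path_m: float) -> float:
    """Arrival delay in milliseconds for a given path length."""
    return path_m / C_SOUND * 1e3

print(f"3.5 m path: {echo_delay_ms(3.5):.1f} ms")
print(f"30 m path:  {echo_delay_ms(30):.1f} ms")
```

At 3.5 metres the delay is only about 10ms, well inside the window our hearing discards; at 30 metres it's closer to 90ms, which is heard as a distinct echo.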
These effects are all caused by time - namely the time it takes for a sound from the speakers or other source to hit a hard surface and be reflected. In a typical room, there are many surfaces that will be subjected to the original sound, plus sounds reflected off other surfaces that have in turn reflected yet more sound. This is the essence of reverberation, and while the effect can be pleasant, too much causes a serious loss of intelligibility. However, some reverb is essential or the sound is completely flat and lifeless. There is a balance between too little and too much reverb, and few theatres suffer from an excess. Some of the newer theatres may have very little reverb - not quite anechoic, but very dead.
Figure 1 - Direct Sound vs. Early Reflections
The early reflections are those that bounce off walls, ceiling or floor, and arrive at the listening (or measurement) position a short time (within a few milliseconds) after the direct sound. The time delay is determined by room size, listener position and path lengths, and the relative amplitude at various frequencies is dependent on the surface treatment and its effectiveness at the frequency of interest.
Figure 2 - Reverberation
Reverb is a longer term affair, and is measured in seconds. The standard reference for reverberation time is RT60, being the time in seconds until the SPL (sound pressure level) has decreased by 60dB. Once a room has been excited by a sound (a loud impulse is used to perform reverb measurements), the sound will bounce from one surface to the next until it is finally absorbed or dissipated. While it can be argued that a cinema (for example) should be completely free of reverb, this is not possible - especially at low frequencies. As a result, it is a fact of life that some reverb will be present, and it is essential to ensure that it is well balanced across the frequency range, and is not excessive at any frequency or location within the theatre.
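RT60 can be estimated with Sabine's classic approximation, RT60 = 0.161·V/A, where V is the room volume in m³ and A the total absorption in m² sabins. A sketch (the room dimensions and absorption coefficients below are assumed for illustration, not taken from the article):

```python
# Sabine's approximation for reverberation time: RT60 = 0.161 * V / A.
# V = room volume (m^3), A = total absorption (surface area * avg alpha).
# The room and absorption figures are assumed examples.

def rt60_sabine(volume_m3: float, absorption_sabins: float) -> float:
    """Estimated RT60 in seconds."""
    return 0.161 * volume_m3 / absorption_sabins

# A smallish cinema: 20 x 15 x 8 m
volume = 20 * 15 * 8                    # 2400 m^3
surface = 2 * (20*15 + 20*8 + 15*8)     # 1160 m^2 of boundary surface

for avg_alpha in (0.2, 0.4):
    a = surface * avg_alpha
    print(f"avg absorption {avg_alpha}: RT60 ~ {rt60_sabine(volume, a):.2f} s")
```

Doubling the average absorption halves the reverb time, which shows why surface treatment (and a full audience) has such a large effect on what the measurement microphones see.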
Reverb is important for another reason too, and it has nothing to do with the soundtrack of the film. Humans are not used to total silence or anechoic rooms. We expect to hear certain characteristics that are congruent with our visual surroundings. In an open field, we do not expect to hear reverb because we can see far into the distance. When in a room, we do expect to hear reverb, and it should agree with our visual impression of the room's size. Large cathedrals are expected to have significant reverberation, small carpeted rooms with heavy curtains or drapes are expected to have very little. An interesting article on this topic is entitled If these walls could talk, they would whisper, published by The Guardian in the UK. The reporter describes his experience in an anechoic chamber as "disconcerting" - a hint of understatement I suspect.
However, the sound system must be capable of reproducing sound that is congruent with the picture, and not incongruent with it. This is explained on the Lenard Audio site, but needs to be explained again from my point of view as well. A scene of pirates on the open sea must not be contaminated by the reverberation imposed by the walls and ceiling of the cinema, for example.
Another effect that is disconcerting is when the film action is set in the open, yet you can hear the reverb of the studio sound stage. Again, this is incongruous, and confuses our senses. We see an open field, and we expect the dialogue to sound flat, with no reverb. The same would occur if the action were set in a large cathedral and we heard no reverb at all. Although common in some early films, most modern movies get these things right (but by no means all!).
The following is somewhat contentious, and should be considered as comment rather than established fact. Psychoacoustics is not an exact science, and there are many differing opinions of what is right and what is not. My opinion is that if a theatre is built that confuses our senses by having so little reverb that the patrons are disconcerted, this may be bad for business. Architects and acoustic engineers will generally strive to ensure that the room has the right balance of reverb to ensure patrons feel comfortable. Perhaps surprisingly, this is not difficult to achieve. Most theatres do sound somewhat 'dead', but not so much so that it causes discomfort. Once the film starts the difference is academic, as the soundtrack supplies the ambience needed to place the audience into the film - to become a part of the experience. This is the purpose of the surround speakers - to immerse the audience in a sound field that is congruent with the image.

It would be very easy to add reverb to a completely dead theatre electronically, and the artificial reverb can be faded down with the house lights. While this would give the best of both worlds, it is not really necessary to go to such lengths. This could easily become a subject unto itself, but I expect you have the idea by now.
Desirable though it may be for the comfort of patrons, early reflections and reverb may cause problems with the reproduction of sound. Our hearing mechanism discards most early reflections, but only if they arrive within ~30ms. Of much greater significance, these effects create major problems for measurement systems that are incapable of separating direct and reflected sounds (something that humans do very well). Because of these problems, the cinema industry has a procedure for setting up the sound system in theatres. Installers use multiple microphones, most or all of which are at or towards the back of the theatre, in some cases well within the reverberant field. Naturally, these tests are done with no-one in the theatre, so only the furnishings and fittings provide damping and/or diffraction/diffusion.
At the beginning of the setup process, a vague attempt is made to calibrate the SPL (sound pressure level) in the theatre area against a sound level meter. The CP650 instructions inconveniently neglect to specify that the meter MUST be set for flat response - most people use sound level meters with the A-weighting filter applied. This is completely wrong for making any kind of calibrated measurement. The instructions also fail to specify the type of measurement microphone, so some installers may use directional mics, others omni-directional, and some might use a mixture.
Why does the type of microphone make a difference? Because almost all directional mics have an inherent bass rolloff, the response is unpredictable. If the sub is equalised using a mic with bass rolloff it will end up far louder than it should be. Directional mics will also favour sound coming from directly in front, and the installer is directed to aim the main microphone (mic 1) straight up towards the ceiling. Far more reverb than direct signal may be picked up by this mic, making any measurement pointless at best.
The first step is to set the Dolby processor to '7' (reference level), then adjust the amplifier gain(s) to achieve 85dB SPL in the reverberant field where the microphone(s) are located. Surround amps are adjusted so the level from the surround speakers is 82dB SPL (3dB lower than the main speakers). The sound level meter must be set for C-weighting (more-or-less flat, at least from 100Hz to ~10kHz). No specification for the meter is provided, so presumably it's ok to use a cheap and nasty meter costing less than $100.
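As a sanity check on the level arithmetic, here is a minimal Python sketch (illustrative only - the 85dB and 82dB figures come from the procedure just described):

```python
def spl_power_ratio(db_diff):
    """Acoustic power ratio corresponding to an SPL difference in dB."""
    return 10 ** (db_diff / 10)

# Mains are calibrated to 85dB SPL, surrounds to 82dB SPL (3dB lower),
# so the surrounds need roughly half the acoustic power of the mains:
print(round(spl_power_ratio(85 - 82), 2))  # ~2.0
```

The familiar rule of thumb falls out directly: every 3dB of SPL is (very nearly) a doubling of acoustic power.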
Then the real fun starts. By using equalisation (bass and treble controls, a 27 band graphic equaliser and a parametric equaliser for bass), the system's frequency response is adjusted one channel at a time until each meets the industry standard 'X-Curve'. The system includes a 'real time analyser' (RTA), that displays the amplitude of each 1/3 octave frequency band. This is similar to a spectrum analyser, but the display is divided into separate vertical bars for each frequency.
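The 1/3-octave bands displayed by such an RTA follow a simple geometric series. A short Python sketch (using exact base-2 centres; real analysers label the bars with the nearest preferred values, so ~39.4Hz appears as '40Hz'):

```python
def third_octave_centres(n_lo=-14, n_hi=12):
    """Exact base-2 1/3-octave band centres around 1kHz.
    n = -14 .. 12 gives the 27 bands (~40Hz to 16kHz) of a
    typical 27 band cinema graphic equaliser."""
    return [1000.0 * 2 ** (n / 3) for n in range(n_lo, n_hi + 1)]

bands = third_octave_centres()
print(len(bands))          # 27
print(round(bands[0], 1))  # 39.4 - labelled '40Hz' on the front panel
print(round(bands[-1]))    # 16000
```

Each band centre is simply the previous one multiplied by the cube root of two, which is why three bands span exactly one octave.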
Figure 3 - Cinema X-Curve
The X-Curve takes its name from 'eXtended range' and is defined by ISO Bulletin 2969, although the more correct term is 'eXperimental', since that is exactly what it was in the beginning. It is intended to provide compensation for the fact that sound in a reverberant environment sounds louder than under anechoic (no echoes) conditions. While there are different compensation factors intended to be applied to rooms of different sizes (the standard curve is for a 500 seat auditorium), this doesn't happen. The CP650 processor has no facility to select room size, and the installer is expected to make the response fit the curve regardless of room dimensions. I think that after about 40 years it's fair to say that the experiment has failed and should be terminated.
One major factor that the X-Curve claims to address is that reverb at high frequencies is usually minimal, and that the majority of reverberation will be in the low to mid frequencies. Even the air itself will absorb the highest frequencies, and the typical theatre furnishings will generally reduce the reverb time at high frequencies to comparatively low values. If a speaker system is 'calibrated' to flat response in the far (reverberant) field, it will sound overly bright and harsh. The 'X-curve' allegedly compensates for this, but the process is flawed because you can't 'calibrate' a loudspeaker in the far field! However, this is the 'standard', warts and all.
Figure 4 - Pink Noise SPL Build-Up Over Time
The curves shown above are adapted from Ioan Allen's paper "The X-Curve: Its Origins and History" [7], and show how the SPL increases over time when the room is excited with pink noise. If an attempt were made to equalise the response so it was flat, it is obvious that significant treble boost would be needed. Normal programme material is generally transient by nature, so the reverberation never has enough time to build up as shown, and the treble boost makes the sound shrill and harsh. No-one seems to have noticed that pink noise and programme material are completely different from each other, so we have the X-Curve.
Notice something very interesting in the above curves. The red curve is the first arrival, and is the main stimulus for what we hear with normal programme material (as opposed to pink noise). It's flat! There is no equalisation needed, because there is no reverb yet. Most sounds will have passed before we hear any reverb, and for that reason the theatre must be properly damped to minimise reflections. We don't listen to pink noise as a rule - it's not something that anyone is really hanging out to enjoy - so why on earth should we equalise a system so that pink noise supposedly looks right on a real-time analyser (without actually listening to it)? Apart from anything else, applying EQ won't make pink noise sound better anyway. The EQ certainly doesn't make the programme sound better, because it was performed with a completely inappropriate stimulus, and in the reverberant field.
The X-Curve specifies that the response should be flat to 2kHz, as measured by microphones of unspecified type*, located two thirds of the distance from screen to rear wall, in locations that are also not specified. After 2kHz, the response is supposed to fall at 3dB/octave up to 10kHz, and at ~6dB/octave above that, to the maximum of 20kHz (although 16kHz is commonly the realistic upper limit). As noted elsewhere, there is neither the requirement nor the capability to measure actual reverberation time (RT60). There is supposedly a 'small room' X-Curve, which rolls off at 1.5dB/octave, and a different version again that rolls off at 3dB/octave above 4kHz. These are just as flawed as the full sized version of course, and are therefore just as useless. Other variations exist as well, so as a standard it has rather too many variables with no clear direction.
* Actually, the mics are specified, and should be as per the SMPTE-202M document. This is quite specific about the characteristics of the microphones to be used. They must have omni-directional response up to the highest audio frequency to be used, and be calibrated for random incidence - not free field calibration ... see Brüel & Kjær or similar for a description of the differences. The Dolby instructions are written on the basis that the installer somehow 'knows' this.
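The target response just described can be written down numerically. The Python sketch below (illustrative only) uses the large-room figures: flat to 2kHz, -3dB/octave to 10kHz, then roughly -6dB/octave above that:

```python
import math

def x_curve_db(f):
    """Nominal large-room X-Curve target (dB, 0dB reference below 2kHz)."""
    if f <= 2000:
        return 0.0
    if f <= 10000:
        return -3.0 * math.log2(f / 2000)                   # -3dB/octave
    return x_curve_db(10000) - 6.0 * math.log2(f / 10000)   # ~-6dB/octave

for f in (2000, 4000, 10000, 16000):
    print(f, round(x_curve_db(f), 1))  # 0.0, -3.0, ~-7.0, ~-11.0
```

Note that the target is already about 7dB down at 10kHz - a very substantial treble cut to impose on a system that may have been perfectly flat to begin with.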
If a standard curve is to be applied, then it can only be applied to a standard room. Since there is no facility (or requirement) to measure RT60 before 'calibration', we can be assured that the end result will be far from standard. No two rooms will ever be exactly the same, unless they are truly identical in every respect - right down to the floor covering and light fittings. There is a suggestion in the manual that the system should be checked by ear after calibration, and that small adjustments may be needed for different room sizes. This suggestion is not at all prominent, and some installers might perform a very basic listening test at best.
For the time being we shall ignore these 'minor issues', read the manual for the processor, and follow the instructions as they are written. Having done so, the installer can cheerfully head back to the office with another calibration completed to everyone's (dis)satisfaction. Interestingly, anecdotal evidence (from several independent parties) indicates that even if the same installer returns to re-equalise the same cinema with no changes to the system, the EQ will be different. This will happen as many times as the system is 'aligned'. If the exact same results can't be achieved every time for the same theatre and sound system, then the procedure is (by definition) fatally flawed. No-one seems to have noticed!
The process goes wrong for a number of reasons ...
These points are covered in order below ...
1 - Reverberant Field
A microphone picks up all sounds that impinge on the diaphragm, and gives each variation in sound pressure equal 'weight', regardless of origin or time of arrival. This causes measurement readings to differ markedly from what we hear - especially in the reverberant field. Because we have stereo ears, we are able to focus on a particular direction, ignoring (to a large extent) sounds that arrive from different directions.
Microphones cannot do this, so every sound at a given level is reproduced equally, regardless of its point of origin or how long it has been delayed before reaching the diaphragm. In addition, even small diameter measurement microphones show response anomalies when sound arrives off axis. Unspecified microphones are just that - unspecified, so no-one knows or seems to care about the possibly severe frequency response errors they might introduce.
The installation manual specifies that the measurement mics should be in the reverberant field, as this is claimed to be an important aspect of the procedure. This practice guarantees that the end result will be less than satisfactory.
The pink noise sound source is a relatively constant sound, and reverberation has plenty of time to build up the overall measured SPL. The film sound track will normally consist of speech, music and sound effects, almost none of which are constant. Most are highly transient in nature, so high frequencies in particular will be quieter than expected, reducing clarity. That's exactly what occurs when you roll off the top end of the audio frequency spectrum - is anyone surprised that this happens?
2 - Microphone Placement
Since no-one specifies where the mics should be placed, nor that calibrated test mics are used, it's up to the installer or setup technician to determine which mics should be used and where they are placed. Moving a mic just 300mm in the reverberant field will change the frequency response - dramatically in some cases. It is thought that by using a number of microphones (each giving a bad reading as described in #3 below), a 'good' reading will somehow result.
Not so. An infinite number of averaged bad measurements simply provides another bad measurement - the number of times you do it is immaterial if your methodology is flawed in the first place. Proper loudspeaker measurements are taken in an anechoic environment, using calibrated microphones and a test process that is documented to the smallest detail. No-one in his right mind would even consider that just placing any old mic somewhere in the room and taking that as gospel was a sensible approach. This, however, is the documented procedure for 'calibrating' a movie theatre!
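The point about averaging is easy to demonstrate. In the sketch below (Python, with made-up numbers purely for illustration), every mic position shares the same systematic error from the reverberant field; averaging removes only the position-to-position scatter, never the shared bias:

```python
import random

random.seed(1)  # repeatable for the example

true_level = 0.0   # dB - what an anechoic measurement would show
room_bias = 6.0    # dB - systematic error common to every mic position

# Ten mic positions: truth + shared bias + random positional scatter
readings = [true_level + room_bias + random.gauss(0, 1.0) for _ in range(10)]
average = sum(readings) / len(readings)

print(round(average, 1))  # close to 6dB - the bias survives the averaging
```

No matter how many positions are averaged, the result converges on the biased value, not the true one - exactly the flaw described above.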
3 - Microphones vs. Ears
Our hearing mechanism has evolved over hundreds of thousands of years, and is specifically designed to give a very low priority to reflected sound that arrives within around 30ms after the direct sound (but only if the sounds are the same - otherwise we might ignore an important auditory warning sound while someone is speaking). This enables us to hear clearly, even in the presence of nearby reflective surfaces. This ability is enhanced by our ability to detect direction, so any signal that is essentially the same as the first, arrives within the 30ms window and comes from a different direction barely registers at all.
Microphones cannot (and do not) do this. All sounds are registered equally regardless of direction (for an omni-directional microphone at least), with the microphone responding only to the relative amplitude and phase of any two (or more) signals. A microphone measurement of a loudspeaker in a typical listening room or theatre will respond to each and every reflection, giving a highly unrealistic representation of the true performance of the loudspeaker system.
Figure 5 - 500Hz + 530Hz Example
In addition, consider a microphone and measurement software subjected to two or more frequencies at the same time, separated by perhaps 20 or 30Hz as shown above. They can't tell the difference and will average the SPL to obtain a value that is not matched by what we hear. Even though people's ear/brain will hear the effect easily (we hear it as a modulated tone), the measurement system will give a reading based on the average composite sound which may not match the audible characteristic at all. It has no ability to know that there are two different frequencies involved, each with possibly quite different reverb characteristics. Since the 'calibration' is performed using pink noise, this exact issue can potentially exist in reality, because true pink noise in a reverberant environment effectively contains all frequencies at once.
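This is straightforward to verify. The Python sketch below generates the 500Hz + 530Hz pair: the ear hears a strong 30Hz 'beat' (the envelope swings between zero and twice the single-tone amplitude), yet the long-term RMS level - which is all an RTA band can report - is exactly the same as for two steady, unrelated tones:

```python
import math

fs = 48000                      # sample rate; generate one second
sig = [math.sin(2 * math.pi * 500 * t / fs) +
       math.sin(2 * math.pi * 530 * t / fs) for t in range(fs)]

rms = math.sqrt(sum(s * s for s in sig) / fs)
peak = max(abs(s) for s in sig)

print(round(rms, 3))   # 1.0 - the powers of the two tones simply add
print(round(peak, 1))  # ~2.0 - but the envelope is heavily modulated
```

The RMS value tells you nothing about the 30Hz modulation that dominates what we actually hear - the analyser sees a steady band level.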
4 - Empty Theatre
Human beings (en masse) have fairly good sound absorbent properties, and our hard surfaces (heads) also act well as diffusers. A room (even a theatre with plush seating) will show very different characteristics when full compared with empty. There is absolutely no provision to even try to compensate for these effects!
5 - Equalisation
Where a loudspeaker is deficient in some region of the frequency spectrum, it is sometimes (but by no means always) possible to apply (preferably modest) EQ to correct response anomalies. If we have three front speakers (left, right and centre), it is reasonable to expect that they will be either virtually identical, or at least very similar (ideally, left, right and centre speakers should be identical).
If frequency correction is applied to a loudspeaker, identical correction should be applied to all others that are the same. As noted above, left and right speakers will be identical for all intents and purposes, and the centre channel should ideally be likewise.
When EQ is applied differently to an array of (identical) speakers, the imaging is destroyed, and the overall listening experience almost always creates listener fatigue, loss of clarity and focus, and simply sounds wrong (not unreasonable, because it is wrong). The setup process 'calibrates' each speaker independently, and all three front speakers (even if physically identical) will have different EQ applied. This approach is just plain silly - it can't ever work properly!
Many theatres don't have 'real' subwoofers, so the same loudspeaker may be expected to handle both bass and lower midrange. In such systems, the left and right speakers will be close to room corners, and will have greater bass output than the centre speaker. In such cases, it may be necessary to apply slightly different bass EQ to the centre speaker, but left and right speakers must be the same to preserve imaging. Where any loudspeaker driver is expected to handle both bass and lower midrange, expect high levels of intermodulation distortion if there is significant LFE (low frequency effects) material along with speech or other lower midrange programme material.
6 - Reverb Time
Attempting to equalise a room to a standard frequency response simply doesn't work (see below for all the reasons). Attempting to do so to compensate for reverberation time that has not been measured is pure folly. The X-Curve allegedly accounts for reverb, but there is no process for measuring (or even estimating) how much reverb exists, so any attempt to equalise for an unknown quantity is guaranteed to fail, even if it were possible in the first place.
7 & 8 - Listening Tests
Any listening test is subjective, and different people will hear different things (even from the same system). However, an installer should be someone who is interested in good sound reproduction, and as such should be able to make an informed opinion as to whether the system has potential, exceeds expectations, or is best suited to land-fill. That there isn't a single suggestion anywhere in the CP650 (or any other that I'm aware of) setup manual to listen to the system (other than to check for rattle, buzz or other signs of major component failure) indicates that the entire process is determined by equalisation alone, and if this meets the 'standard', the system is supposedly fine.
The entire official 'calibration' technique is completely unsatisfactory in all respects, as should be obvious.
Surrounds
The surround speakers get the same treatment, and will almost always have different EQ applied to left and right surround channels even though they are (or should be) the same. While this is not as great an issue as with the main speakers, it is still incorrect to apply different equalisation to each bank of speakers because the ambience can be destroyed by different amounts of EQ (radical or otherwise).
Likewise, if dedicated rear channels are used in a theatre, the same will apply.
As should now be quite apparent, the vast majority of issues are due to reverberation within the theatre, and are therefore the inevitable result of time delays and reflections. The solution isn't at all difficult to understand. Remember ...
You cannot correct time with amplitude!
... Yet this is exactly what the industry standard setup procedure recommends to 'align' or 'calibrate' a movie theatre.
We can add to this ...
Microphones and ears respond to sound completely differently!
Nearly all of the response aberrations measured by the microphone(s) in the reverberant field are the result of time - assuming that the speakers have a respectable frequency response to start with. There is a time delay between the direct sound being reproduced until it reaches the mic, and additional time delays before the reflections start to arrive. Since the mic cannot differentiate between the direct and reflected sounds, it will show a frequency response that is completely at odds with what the audience will hear.
If the installed speakers are not quite right, near field (i.e. microphone as close to the speakers as practicable) measurement can be used to correct minor anomalies, and to compensate for screen attenuation of high frequencies, for example. The equalisation capabilities of the processor are sufficient (in most cases) to make appropriate corrections, but even here there are difficulties (again involving early reflections, at least in part) that can give a very unrealistic indication of the actual frequency response of the loudspeakers. If there are serious issues with the speakers, they must be replaced or repaired. In some cases, a simple change of crossover frequencies can rescue an otherwise mediocre speaker system. A truly bad speaker system is unlikely to be salvageable, and should be replaced.
Assuming that the operator understands exactly which aberrations are likely to be caused by reflections (and ignores them as required), the loudspeakers can hopefully be equalised to sound at least passable, and if this is not possible, the installation should be halted at that point. Once an EQ curve has been defined for any one speaker, all speakers of the same type must be equalised identically.
To do otherwise will cause the effects referred to above - loss of clarity and focus, and listener fatigue. For anyone who has been to a movie theatre recently, you should now have a very good idea as to why the experience may have been less satisfying than should have been the case.
Of course, most home theatre systems are not equalised at all, and in many cases can sound far better than one's local cinema (mine does!). This is a guaranteed way to force people out of real cinemas and get the movie on DVD as soon as it's released (or obtain a pirate copy well before DVD release).
A 'real' cinema should give the audience a better experience in every respect than a home theatre system, and if it fails to do so, the loss of patrons will continue at an ever increasing rate. Cinema patrons will always like the large screen and ultra-sharp picture that film produces, but if the sound is ruined by poor alignment technique they will ultimately be driven away. Simply increasing the volume isn't the right answer!
In his article "The Mythical X-Curve" [5], John F. Allen writes ...
While a directional speaker that sounds right to the ear in a living room may indeed exhibit a flat upper frequency response with a real-time analyzer and pink noise, such will not be the case when a speaker is in a room the size of a theatre. When equalized with pink noise to show a flat response in a theatre, speakers deliver sound with too much treble. The resulting sound is unnatural, way too bright and impossible to listen to. This, again, is due to the far greater reverberation of the larger room being included in the measurement. Since there is more low frequency reverberation, the lower frequencies appear to have a greater amplitude than the higher frequencies. Looking at such a measurement on a real-time analyzer, the higher frequencies appear to be rolled off.
The X-Curve was an attempt to normalize the shape of such a measurement in a large room. It resulted from measurements made of theatre speakers after they were equalized to sound the same as a set of studio monitors placed at the console position. When the two sets of speakers sounded as close as they could, the theatre speakers exhibited a frequency response that was basically flat from 100 to 2000 Hz and rolled off at a rate of 3 dB per octave above 2000 Hz, when playing pink noise and measured on a real-time analyzer. Below 100 Hz, the X-Curve showed a roll off of these lower bass frequencies. But this was primarily due to the weakness of the older theatre speakers in the bottom octave. Rolling off the bass a little would help prevent these systems from being overloaded and damaged. It was also noted that larger theatres would exhibit a somewhat steeper high frequency roll off, and that smaller theatres would exhibit a slightly reduced roll off of the high frequencies. This finding was officially noted in 1990. Beyond that, there have been few additional guidelines to aid technicians in the interpretation of these measurements and the equalization of cinema systems.
It is incomprehensible that after all this time (22 years at the time of writing), the same processes are still used and recommended for cinema sound system alignment. While the X-Curve is still something that needs to be considered, it should not (must not) be used as the standard for system setup. No real effort has even been made to adapt the equalisation curve to account for room size, let alone taking even the most rudimentary RT60 reading, although it is required for THX.
While some installation technicians (as noted by John Allen) will use their own judgement, most will simply follow the instructions. The results are predictable and can be heard all over the world - cinema sound that is indistinct and lacking clarity, and producing listener fatigue because the EQ causes an overall loss of focus and image.
In Dolby's defence, setting up a sound system is not an easy task, and they have attempted to provide a process that will give an acceptable result. Given that few installers will have the specific skills needed in acoustics and electronics, the process described is designed to make the system EQ as painless as possible. With the appropriate background knowledge, it is obvious that the methodology is flawed and can never work properly.
The comments and recommendations in this article are not in isolation - John Allen (of HPS-4000) and Ioan Allen (Dolby Laboratories) have both presented papers to the industry (International Theatre Equipment Association and SMPTE, plus industry magazine articles) that state much the same thing. Both have extensive experience in the cinema industry. My involvement is more recent, yet it was immediately apparent that the established standard alignment procedure was simply wrong.
In my opinion, the industry has had more than long enough to get its act together and scrap the X-Curve, yet nothing has changed. There are still far too many people in the industry who continue to think that this fatally flawed system is 'right', and there is enormous resistance to change.
The CP650 processor I worked with is one of the latest Dolby processors, but it still dictates a setup and alignment procedure that has been demonstrated to be in error, defies logic and ignores basic acoustic principles.
Fairly obviously, it is imperative that the sound system (where 'system' means left, centre, right and all surrounds) is reasonably free of audible defects, bearing in mind that there is no speaker system that is actually free from colouration. The system needs to sound well balanced and free of audible discontinuities across the range, before any attempt at equalisation is made. Speaker EQ only works if the amount of EQ needed is small and doesn't require sharp filters to boost or cut any frequency.
Many things are missing from (or are simply wrong with) the alignment process, but one of the most important will always be difficult. It is vitally important that the installer listens to the system. Not the rather cursory listening test suggested in the CP650 manual, but a comprehensive listening test with defined objectives. With experience, it is possible to isolate many problems with no test equipment at all, other than a pink noise generator (already built into the processor) and a pair of trained ears, whose owner knows what to listen for.
This part is critical, but it is surprisingly easy to demonstrate problem areas and teach someone what to listen for, and how to do so accurately (within limits of that person's hearing of course). An instant reference is available using a set of headphones. Even relatively cheap headphones have far fewer frequency aberrations than any typical speaker system, and the instant comparison allows the operator to listen for specific differences - typically frequency peaks or dips. Peaks are the worst offenders, because they tend to be far more audible than dips. The latter can limit (or completely ruin) clarity and definition, but can be harder to isolate.
Major (severe) peaks or dips indicate that something is seriously wrong with the loudspeakers, and these problems cannot be fixed with EQ. The only way to address this kind of problem is to have the supplier identify the cause and fix the loudspeakers.
Amplifier racks should ideally be co-located with the main speakers. It is (IMO) unwise to have the amps in the projection room and have to run long heavy-duty cables all the way to the back of the screen. Only a single send is needed for each speaker stack - the electronic crossovers must also be in the same rack as the amplifiers. That anyone would still consider passive crossovers acceptable is unthinkable in this day and age. Only a fully active (preferably 4-way) speaker system can do justice to a well recorded film sound-track, and be capable of the dynamics needed while retaining sensible amplifier power.
Of course, this approach does have some issues, since cinemas may have many screens operating at the same time. When the amps are not located in the projection booth no-one knows if there is a fault, but this is a fairly easy problem to solve with the application of a bit of technology to allow any amp/speaker combination to be monitored individually, and raise an alarm if an abnormal condition arises. Swapping out a faulty amplifier is admittedly a little more difficult though.
To obtain and maintain the correct reference level is an experience in itself. Unlike the broadcast or professional public address industries, there is nothing in the setup procedure to calibrate the power amplifiers in their own right. There are no details (or procedures) provided to allow the amp's level (volume) controls to be set so that a measured output reference level is obtained for a reference level output from the decoder. The alignment will normally be done with amp level controls set to maximum (as suggested in the installation manual), but this is only a suggestion and may not happen. At some time after alignment, levels will be changed. Will the projectionist be able to return to a known (and calibrated) setting should someone fiddle with the controls? If every setting is recorded in the projection room log, perhaps. In general ... probably not.
What happens if an amplifier fails and is replaced by another with different gain? Now we have a real problem, because there is no process to define the speaker voltage referenced to line level (from the processor). A procedure that sets a specific gain structure for the entire B-Chain¹ would not be difficult to define, and needs to be included. This would allow re-alignment of amplifier levels using nothing more sophisticated than an AC voltmeter - a patchable VU meter would work just fine. Agreement of the reference level is another matter of course - some will argue for 0dBm (775mV), others for 0dBV (1V) and others for +4dBu (1.23V) as is common in professional public address and many recording studios. It actually doesn't matter which standard is used, so long as the reference level information is kept in the projection room log, or labelled on the amp rack.
Note 1 - The B-Chain is that part of the processor that handles the signal sent to the speakers. The section that handles the analogue and digital signals from film is known as the A-Chain.
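For reference, the conversions between the candidate reference levels mentioned above are trivial. A Python sketch (illustrative only):

```python
def dbu_to_volts(dbu):
    """dBu is referenced to 0.775V RMS (the voltage giving 1mW into 600 ohms)."""
    return 0.775 * 10 ** (dbu / 20)

def dbv_to_volts(dbv):
    """dBV is referenced to 1V RMS."""
    return 10 ** (dbv / 20)

print(round(dbu_to_volts(0), 3))  # 0.775 - i.e. 0dBm into 600 ohms
print(round(dbv_to_volts(0), 3))  # 1.0
print(round(dbu_to_volts(4), 2))  # 1.23  - the common +4dBu 'pro' level
```

Whichever reference is chosen, a label on the rack stating (for example) "reference = +4dBu = 1.23V RMS" is all that's needed for a voltmeter re-alignment.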
Many of the systems available today still (inexplicably) use passive crossover networks. A cinema installation is a professional application and can be very demanding at times. There is no reason at all to use a passive crossover for any system, even for the smallest theatre system. Electronics can be produced at such low cost that every system should be fully active, and use electronic crossovers for everything other than the surround speakers. Surrounds are used in relatively large numbers and are not usually expected to have the same response or definition as the main system, so an exception is more than reasonable in this application. The surround speakers are expected to be able to provide the same SPL, but few can even come close.
By using electronic crossovers, each amplifier has a somewhat easier task, and power requirements can usually be reduced for each amp. This approach gives a system that is capable of being louder and cleaner than an equivalent passively crossed loudspeaker, all other things being equal. This topic is discussed at length in the ESP article Biamping - Not Quite Magic (But Close), and is recommended reading for those who have not used electronic crossovers.
By applying this approach, the installer has total control over each frequency band in the system, so reliance on passive crossover networks being right is eliminated. Even if a network is (theoretically) right, it may not be right for the specific conditions encountered in a theatre environment. Because an electronic crossover can achieve 24dB/octave filter slopes easily and cheaply, each driver in the system has greater protection from out-of-band frequencies - especially important for high frequency compression drivers. (Use of 24dB/octave Linkwitz-Riley crossovers is mandatory for THX certified systems.)
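The behaviour of a Linkwitz-Riley pair is easy to verify numerically. The sketch below (Python, illustrative; the 500Hz crossover frequency is an arbitrary example) computes the magnitude of an LR4 low-pass/high-pass pair - each section is 6dB down at the crossover frequency, and because the two outputs are in phase they sum flat:

```python
import math

def lr4_pair(f, fc):
    """Magnitudes of a 4th-order Linkwitz-Riley low-pass / high-pass pair
    (each is two cascaded 2nd-order Butterworth sections)."""
    r = (f / fc) ** 4
    return 1.0 / (1.0 + r), r / (1.0 + r)   # (|LP|, |HP|)

fc = 500.0  # arbitrary example crossover frequency
lp, hp = lr4_pair(fc, fc)
print(round(20 * math.log10(lp), 1))  # -6.0 dB each at the crossover
print(lp + hp)                        # 1.0 - the pair sums flat
```

One octave past the crossover the low-pass is already about 24dB down, which is the out-of-band protection referred to above.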
Ideally, each individual driver should have its own amplifier. This affords maximum control over the driver's resonance and creates a very robust overall system. On the same topic (driver control), the power amplifiers should be located as close as possible to the speakers. Very long cable runs can add significant resistance to the speaker circuit, resulting in large power losses and loss of driver control. While it may seem more convenient to have the rack in the projection booth, the losses associated with this practice can become unacceptably high unless very large diameter wiring is used.
Subwoofers pose another set of problems, and these are often not addressed at all. Many subs use vented enclosures, and while these can give very good performance, the bandwidth must be limited to prevent all signals below the box tuning frequency from being amplified. There have been many cases where certain sound tracks have caused subwoofer failures in multiple cinemas, and this is the direct result of allowing frequencies below the enclosure cut-off frequency to be amplified and sent to the subs. Added EQ (by an installer not familiar with the box limitations) will increase the likelihood of driver failure dramatically. This is easily fixed with a professional electronic crossover, but the system processor is unlikely to have any such provision.
Although references are few, all sub amps should be fitted with adjustable limiters to prevent excessive power at any frequency. Many of the power amplifiers available are more than capable of destroying any loudspeaker ever made - regardless of its claimed power handling. Failure to limit the power to a safe value will ultimately cause failures, and it is guaranteed that these will occur at the most inconvenient time possible (yes, Murphy really was an optimist).
That many of the installed systems are grossly underpowered is another issue again. The typical average SPL is expected to be around 85dB in the theatre, but peaks can be a great deal louder. If a system is struggling to get to 85dB and we expect peaks to reach 105dB, this is a ratio of 20dB. The amplifiers need to be able to produce 100 times as much power to reach 105dB from the 85dB reference level. A 200W amplifier that just reaches 85dB needs to be upgraded to produce 20kW to achieve the 105dB level. No driver made can withstand so much power, and high efficiency loudspeakers are the only way to keep power requirements to a reasonable level.
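The arithmetic behind that headroom figure is easy to verify: amplifier power must rise by a factor of 10^(ΔdB/10) for a given SPL increase. A tiny sketch (the helper name is mine):

```python
def power_ratio_for_db(delta_db):
    """Amplifier power ratio needed for an SPL increase of delta_db.
    Power doubles for every +3dB, i.e. ratio = 10 ** (dB / 10)."""
    return 10 ** (delta_db / 10.0)

# Figures from the text: 85dB average level with 105dB peaks is 20dB of headroom.
ratio = power_ratio_for_db(105 - 85)
print(ratio)          # 100.0 - one hundred times the power
print(200 * ratio)    # 20000.0 - a 200W amp would need to become 20kW
```

This is why high efficiency loudspeakers are the only practical escape: every extra 3dB of sensitivity halves the required amplifier power.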
One thing is certain, and that's that the use of equalisation to correct for room response is simply wrong: it doesn't work, and the practice should be discontinued forthwith. Howls of protest can be expected from those who created (and those who believe in) the standards, but they simply need to gain a greater understanding of the real problems.
Very few people will say that cinemas sound excellent. Some sound very ordinary indeed, and not necessarily because the sound system is inherently bad. Properly set up, many systems are capable of providing a satisfying experience. Not excellent perhaps, but certainly better than they sound now. Some will be seriously underpowered, or will be of a design that will never work properly given the requirements of a cinema system. Even so, they can still be made to sound at least passable if properly set up.
All frequency response variations that are caused by reverberant field energy are time related, and are caused by reflections from walls, ceiling and floor - each with its own time delay. Every surface and every surface treatment will affect the amount of reflected signal at any given frequency. Because all of the variations are displaced in time from the original sound, none of the problems so caused can be corrected using (amplitude based) frequency response modification.
In case you think that perhaps digital delay might help, the answer is (in general) "No". A highly reflective surface could be tamed by having a speaker (or multiple speakers) mounted on that surface, providing an anti-phase signal to cancel the echo ... this might work in some (limited) cases, and even then only at low frequencies. However, the cost and complexity to do so is disproportionate to the benefits, and it is simpler, cheaper and usually more effective to treat the surface as needed. Treatment may include absorption and/or diffraction/diffusion. Properly applied, these can make the problem far less of an issue.
Frequency response variations caused by deficiencies in the speaker system can sometimes be equalised or corrected electrically by other means - for example phase reversal of drivers to account for electrical phase reversal in crossover networks. Any equalisation must be performed by someone who knows (or knows how to calculate) which frequencies are affected by reflected sound from nearby surfaces. The measurement mic will give spurious (and useless) results at a number of frequencies based on the distance from the sound source to the mic and any surrounding surfaces. Anything that seems completely wrong (and that your ears tell you is not the case) is almost always the result of reflections causing the microphone to provide incorrect data.
Using multiple microphones will not help, and in most cases will make matters even worse than using a single measurement mic. Multiplexers are suggested for some parts of the EQ process because allegedly using multiple mics is somehow 'better'. These (along with the extra microphones) should be left in the cupboard where they belong. Where one microphone can give a bad reading, many microphones will simply provide many bad readings.
Whenever loudspeaker measurements are performed by the designer, only one microphone is used in an anechoic measurement area, and is commonly placed as close to the speaker as practicable to minimise the influence of reflections. This is admittedly difficult with large cinema systems, but the established methods used at present simply don't work, so a new approach is essential.
All equalisation should be as gentle as possible - a speaker system that requires radical EQ to sound even passable has no place in a theatre or anywhere else, and should not be used. As mentioned above, all identical loudspeakers must be equalised identically - regardless of what the measurement microphone may indicate. It is then essential that the installer carefully listens to each speaker to ensure that they sound the same (a mono source directed to each speaker in turn is needed). This should be done with pink noise, dialogue and music, and careful adjustments made to ensure that dialogue (in particular) is clear, crisp (but without excessive sibilance), and has no "chesty" resonances. While such resonance may sound ok if you listen to talk back radio announcers, it has no place in a cinema. All such effects are applied on the soundtrack where they are needed - they must never be part of the overall sound.
Where it is obvious that one of the speakers sounds different from the others, find out why! It may simply be that a high frequency horn is behind the masking screen (the black material on either side of the screen itself), or there may be a faulty loudspeaker in the array. Other things can influence the sound as well, and all potential physical causes must be examined before resorting to equalisation. EQ is the last step in the setup process - not the first! Ever!
There is one place where EQ is absolutely essential, and that's to compensate for what's known as 'screen loss'. Because the main speakers are located behind the projection screen, all sound has to pass through the screen itself before it gets to the audience. This isn't a problem for bass which passes straight through, as does most of the midrange. High frequencies are heavily attenuated, and the presence of the screen can also interfere with the normal sound propagation in the horn (almost all theatre systems use horn drivers for upper midrange and treble). Theatre screens are always acoustically 'transparent', but the degree of transparency can never be as great as one might like, because too much light would be lost. It has been determined that the screen loss of 'typical' screens is in the order of 6dB/octave above 5kHz [8].
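The quoted screen loss is straightforward to turn into a compensation target. A small sketch, assuming the ~6dB/octave figure above 5kHz cited in the text (the function name and corner values are illustrative, not taken from any screen datasheet):

```python
import math

def screen_loss_db(f, corner_hz=5000.0, slope_db_per_octave=6.0):
    """Approximate HF loss through a perforated cinema screen:
    roughly 6dB/octave above about 5kHz, per the figure in the text."""
    if f <= corner_hz:
        return 0.0
    return slope_db_per_octave * math.log2(f / corner_hz)

for f in (1000, 5000, 10000, 20000):
    print(f"{f:5d} Hz  loss ~{screen_loss_db(f):4.1f} dB")
```

The inverse of these figures (a gentle high-frequency shelf boost) is the kind of correction the text argues is legitimately required in the B-Chain, as opposed to room 'calibration'.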
One point must be made here, and although not often stated, it is more important than almost anything else. A sound system does not have to produce a perfectly flat frequency response to sound good. Many highly regarded loudspeakers are not especially flat, yet they produce a well balanced and enjoyable listening experience.
The key point here is well balanced, meaning that there will never be sharp peaks, and the all-important 'intelligence band' (my terminology) from 300Hz to 3.4kHz must be free of colouration and distortion. This range should be flat, but not if radical EQ is the only way to achieve the flat response. This frequency range provides the listener with all the dialogue detail needed to understand what is said, and for this reason (not at all coincidentally) is the frequency range used by the telephone system.
This, very unfortunately, means that subjective assessment becomes an important part of the installation process. The idea is to make the system sound good, not flat - while these conditions will coexist in a very well designed system, many systems will never sound good if an attempt is made to equalise them to be flat. As soon as subjective assessment becomes part of any installation process, problems are created. Each individual will have a slightly (or sometimes radically) different idea of what sounds 'good', so any installation needs to be verified by consensus - a number of people should agree with any change, and should agree when a system is sounding as good as it can.
This is at odds with the idea that a single person can come into the theatre, set up a few microphones, perform a 'standard calibration' and equalisation process, pack up and leave. Now it seems that we need a few extra people who know what systems should sound like, so they can argue amongst themselves until consensus is reached. However, this is not necessarily true.
The key to understanding what sounds genuinely good (as opposed to what some people may think is good) is education. It doesn't take very long to demonstrate good and bad sound to someone who has the capacity to hear the difference. It is not especially difficult to let a new installer know what to listen for, and what to do about it ... or what can be done about it. People in the industry need to understand how our hearing works, how microphones make a complete hash of things if set up incorrectly, and how to measure a system properly. With education, an installer will know quite quickly when nothing more can or should be done.
Education appears to be the missing element in the process at present. While some installers simply follow the manual, others take more care and use their knowledge and judgement to perform the setup. Those with the education (probably self taught) will generally get the best out of a system, while those who simply follow the 'rules' will make a few systems sound better, others worse, and the remainder will be pretty much unchanged (but different). The human mind can be strange at times - if something sounds different after it has been messed with, it will nearly always be perceived as 'better'. This can even happen if it is demonstrably worse!
For the (quasi-religious) fanatics of X-Curve alignment and anyone else who doubts this material, please do yourselves a big favour. At the next installation you perform, first verify that the sound from the speakers (with no EQ at all) is as it should be. If the speakers don't sound right, get the people who installed them to come in and correct the problem(s) before continuing. Sounding 'right' means that voices should be clear and intelligible, and music should sound like music, without harshness or shrillness that hurts your ears.
The only thing that should sound like a goat pooping on a tin roof is ... a goat pooping on a tin roof! In other words, the speakers, without any treatment from the B-Chain processor, should sound as they should ... 'right'. If you would be happy to have the sound you hear in your living room, then they are probably ok. If your only music system at home is an MP3 player, AM radio or a pair of computer speakers, please find employment in another industry - you are totally unsuited to setting up a sound system. (Yes, I am quite serious.)
The installed system needs to be verified as capable of reaching the required levels at all frequencies, without distortion. This alone will defeat many systems - they are often undersized, sometimes by an astonishing margin. A system that cannot achieve the required SPL can't be made to do so without major upgrading. This can be an expensive exercise, and one the theatre owner(s) may be unwilling to undertake.
After the speaker installers have done their best, send all test signals to the centre speaker. If needed, apply the minimum EQ to obtain a reasonably flat response, as determined by using a single omni-directional measurement mic positioned close to the speaker, and preferably on axis with the high frequency horn (this assumes identical left, centre and right speakers). Listen to the speaker carefully, and make corrections as needed to make it sound right. Compare the speaker response to that from headphones (not ear 'buds' - proper headphones). The important part of this process is to make the speaker sound right - measured response is secondary to sound quality. Do not rely on the microphone - it only tells you a part of the story. While a useful tool, it can mislead you in any number of exciting ways. Only use the one speaker at this time!
Most important of all, ignore every instruction in the processor setup manual regarding mic positioning and equalisation, except where it helps you to work through the menu system to apply the most basic EQ possible.
Using a CD or DVD with known clear dialogue, listen up close, in the 'prime' seating area, right at the back of the theatre, etc. The sound should be excellent at any location in the theatre. Now do the same with music. Listen for colouration in each location. If there is none up close and lots further away, the room is bad and should be corrected with acoustic absorbers and/or diffraction or diffusion material before you continue - unlikely but possible. Unless you have a good background in acoustics, it is best to engage a professional. Acoustics can be a black art, and the best solution isn't always the most obvious.
If EQ was applied to the centre speaker, apply exactly the same EQ to the left and right speakers, again assuming that all three are the same. Play a film sound track, and listen from every (sensible) location in the theatre. As before, the dialogue should be clear and have excellent intelligibility, no matter where you sit.
Listen carefully to sounds that pan across the screen, and to sound that is supposed to come from a particular location. Make sure that it comes from the place it's supposed to. Listen with your eyes closed, and verify that you can locate the precise point where the sound seems to originate - verify that this makes sense in context with the on-screen action.
During the course of this exercise, make notes that will help you to remember - our auditory memory is notoriously short.
After having done the above tests, if you still have doubts that you have created an auditory masterpiece, perform the setup exactly according to the processor instructions. You can even tell the speaker guys that they can make the speakers sound horrible again if that's what you started with.
When the setup is complete, go back into the cinema and use the same material at the same volume (this is important) as you did before. How's the sound now? Better than the first test?
Listen very carefully to the same dialogue and same music. Listen from the same locations within the theatre. Does everything still sound as it should, with pinpoint accuracy of location, completely clear and intelligible speech? Do all music passages completely fail to hurt your ears? Do the high frequencies have sparkle, giving the same clarity as before, and without any harshness?
If there is just one "no" in any of your answers to the above, you have proved the point. While the 'official' setup process is unlikely to produce a catastrophic failure (although this is certainly possible), the chances of it producing a better result than the method described are almost nil. The critical thing is to know what to listen for - once you know that, the rest falls into place.
It is useful to provide some specific examples of what goes wrong when we attempt to take a measurement of a loudspeaker system under non-anechoic conditions. Anechoic chambers are used when accurate response measurements are needed on any sound reproducer. That a movie theatre is non-anechoic is obvious, and some reverberation is necessary as discussed above.
When a microphone is used to take a measurement, the direct sound from the loudspeaker is the first to arrive. Early reflections are those that bounce off walls, the ceiling or floor, arriving shortly after the direct sound. A path length difference of only 345mm causes a 1ms delay - it is safe to assume that the vast majority of early reflections will have to travel a great deal further than 345mm in a typical theatre - even a small one.
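The path-length arithmetic can be sketched directly (the 345 m/s speed of sound matches the figure implied in the text; the function name is mine):

```python
SPEED_OF_SOUND = 345.0  # m/s, consistent with the 345mm-per-millisecond figure

def reflection_delay_ms(extra_path_m):
    """Delay of a reflection relative to the direct sound, given the
    extra distance (in metres) the reflected path has to travel."""
    return extra_path_m / SPEED_OF_SOUND * 1000.0

print(f"{reflection_delay_ms(0.345):.1f} ms")  # 1.0 ms
print(f"{reflection_delay_ms(2.0):.1f} ms")    # 5.8 ms - the ~6ms figure below
```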
If we assume for the sake of simplicity that the first reflection from the left speaker is from the left wall, it may have to travel 2 metres further than the direct signal in a reasonably sized theatre. This represents a delay of 6ms, which is well within the 30ms limit where our hearing mechanism rejects such sounds as being spurious. In addition, I added a 7ms and 10ms delay, each at a lower amplitude than the one before - again, this is typical, but doesn't apply to any specific theatre. Actual figures will vary, but the effect is the same. Because a microphone cannot reject spurious signals, it will tell us that the frequency response curve looks something like that shown below.
Figure 6 - Frequency Response With Early Reflections (6, 7 & 10ms)
High frequencies will always be attenuated more than low frequencies. This is not simply because the high frequencies are more easily absorbed, although this is true. Another factor is that the high frequencies are more directional, so far less original signal even gets to the side wall to be reflected. This effect has been included in Figure 6, by attenuating the delayed high frequencies at 6dB/octave, starting from around 500Hz. The same is done to each reflection that is added to the direct sound. Each reflection used in the above is at a different level, as will commonly (but not always) be the case. The first reflection (6ms) is 6dB lower than the direct sound, the second (7ms) is 12dB down, and the third (10ms) is 16dB down. If by some horrible chance all reflections were at the original level and/or had more high frequency content, the graph would look a great deal worse (and yes, that is possible).
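The Figure 6 model can be reproduced with a few lines of complex arithmetic: the direct sound is summed with each delayed, attenuated reflection, and a first-order (6dB/octave) roll-off above 500Hz is applied to the reflections. This is a sketch of the same idealised model, not a measurement of any real room:

```python
import cmath
import math

# Reflections as modelled for Figure 6: (delay in ms, level in dB below direct).
REFLECTIONS = [(6.0, -6.0), (7.0, -12.0), (10.0, -16.0)]

def response_db(f, reflections=REFLECTIONS, rolloff_hz=500.0):
    """Magnitude (dB) of the direct sound plus delayed reflections at
    frequency f. Each reflection gets a first-order 6dB/octave roll-off
    above ~500Hz to mimic absorption and horn directivity."""
    h = 1.0 + 0j  # the direct sound, taken as the 0dB reference
    for delay_ms, level_db in reflections:
        gain = 10 ** (level_db / 20.0)
        rolloff = 1.0 / math.sqrt(1.0 + (f / rolloff_hz) ** 2)
        h += gain * rolloff * cmath.exp(-2j * math.pi * f * delay_ms / 1000.0)
    return 20 * math.log10(abs(h))

# Scan a band: comb-filter peaks and dips appear, exactly as the mic reports.
levels = [response_db(f) for f in range(20, 2000)]
print(f"peak {max(levels):+.1f} dB, dip {min(levels):+.1f} dB")
```

Even over 20Hz-2kHz the model shows peaks and dips spanning many dB - the ragged curve a measurement mic dutifully reports, and precisely what amplitude-based EQ cannot fix.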
Now, if we use multiple microphones and a multiplexer (a device that allows one of several mics to be selected), then the problem should go away - this is the reason that a single mic is not recommended in the setup process, right? With more microphones to choose from, the number of possible combinations is now increased dramatically, but the net result is not improved at all. The setup manual says to use a microphone multiplexer, but goes into little detail (well, none actually) as to how this should be set up in its own right, other than to "select sequence mode" when calibrating the subwoofer. Suffice to say that sequencing will not achieve anything that is dramatically more useful than a single microphone. While the multiplexer does allow the installer to select any mic in the group, who's to say which one is right? All of them? None of them? (The answer is actually "none of them".) As stated above, if one mic takes a bad reading, multiple mics will take multiple bad readings. There is no magical number of bad readings that constitutes a good reading.
As you can see, the single mic response looks appalling. Selecting a different mic will give a different appalling result, and any attempt to equalise to make the response appear flat will be an unmitigated disaster. The end result will sound absolutely dreadful, and you will not have solved the problem at all - only created another far worse problem. Fortunately, it is easy to fix - simply reset the EQ to flat.
It may be worthwhile here to add to the original statement that defines the process ...
You cannot correct time with amplitude, and ...
Throwing (expensive) technology at the above makes no difference whatsoever, because ...
You still cannot correct time with amplitude!
It is very difficult to understand how companies with vast technical resources could have failed to see that the entire process they recommend is fatally flawed. That the end result may comply with a standard is of no consequence. The standard itself is flawed, and until it is totally reconsidered and changed to match reality and established acoustics principles, theatres will continue to sound the way they do now - not necessarily bad, but certainly not good either.
It is almost as if there were a global conspiracy to ensure that no cinema should sound so dramatically better than its competition as to raise any questions from the patrons. While the X-Curve and everything connected to it is an attempt to ensure acceptable sound, the industry should be striving for outstanding sound - acceptable simply isn't acceptable when exceptional can be achieved with very little additional effort.
Many of the speaker systems commonly used can undoubtedly be aligned to provide excellent performance (independent of the processor), and those that can't have no place in a cinema. There is little doubt that some of the systems rely on final calibration to correct response anomalies, but this can usually be fixed without incurring a severe cost penalty. To rely on the processor to correct any loudspeaker problems is not the right approach, and the industry needs to set minimum standards for the equipment that must be met - before any equalisation is applied from the B-Chain processor or other projection room common equipment.
What is the likelihood of change? Unfortunately, the prognosis is not good based on the reactions that have been heard so far. Many of those in the industry appear to have a vested interest in maintaining the status quo, and allowing reality to impinge doesn't seem to be an acceptable option. There are notable exceptions of course, but they haven't been able to force a change either.
There is one full sized cinema sound system in Sydney (Australia) that I know of (because I helped design and install it) that was not set up according to the standard X-Curve, and the speaker system itself is calibrated to sound at its best with no external equalisation. When standard Dolby calibration was performed on the system, the results were very disappointing indeed. A signal fed directly into the amplifier racks sounded really good, but film sound tracks using the processor sounded ... well, wrong. Poor definition and imaging (especially for the all-important vocal range), and strange dips in frequency response had converted excellent sound into merely mediocre - just as one might expect from any other theatre with an otherwise very good sound system.
After a (rather painful and frustrating) tour through the CP650 processor's menu system, the EQ settings were removed. All tone controls were set to flat, the subwoofer parametric EQ was disabled, and all equalisation for the left, centre and right speakers was returned to flat response. It is notable that each of these very important speakers had different EQ settings, even though their tonal balance is virtually identical without any EQ at all. Equalisation was also removed from the surround speakers, which although fairly ordinary have acceptable response for their purpose (and sounded better without EQ). The surrounds are actually the weak link in the system, but funds are not available to upgrade them.
After the equalisation was removed (other than correction for screen losses), the system was back to sounding the way it should. There have been a great many comments from patrons - including film professionals - that this particular cinema had the best sound of any theatre they have attended. There isn't a single seat in the theatre where the sound is too bright or too loud, and likewise no seat where dialogue isn't absolutely intelligible. In short, the sound was excellent at any seating position.
This installation showed that use of the X-Curve and extensive equalisation is not only unnecessary, it creates problems that didn't exist before. There is a sensible requirement that the speaker system should be properly aligned in its own right, but once this is done, attempting to apply room equalisation will do far more harm than good. For more information about the system itself, see the Lenard K4 theatre system - this is the basis of what we installed at the cinema.
The system itself is a 4-way fully active design, with all drivers horn loaded for maximum efficiency. Each loudspeaker driver has its own dedicated amplifier, and all amplifiers, crossovers and other system electronics were built by John and me. No part of the system uses off-the-shelf assemblies. The final system is relatively inexpensive, compared to purchasing all the equipment from normal industry outlets. This approach has the added benefit that individual sections can be tailored to suit their exact purpose, with a minimum of compromises.
In case you were wondering, John Burnett (Lenard Audio) and I have worked together on many projects, including the K4 in the cinema in question. The installed K4 system includes the ESP P125 4-way 24dB/octave crossover networks, P84 third octave bass equaliser and P127 power amplifiers, plus a P48 EAS subwoofer equaliser circuit to obtain sub-bass extension to 20Hz. There are also other parts of the system that were custom designed to provide additional functionality that is not found in other cinema systems. It is very pleasing to have worked on an installation such as the one John and I set up - I've not been to a movie theatre anywhere that sounds as good!
Unfortunately, the system has been de-commissioned and is no longer in use, as the cinema was bought out by another chain.
As is obvious from advertising material and theatre posters, the Dolby based systems are not the only ones available. Although there appears to be a huge amount of information on the Net, much of it is duplicated, and the majority is directed at home theatre rather than cinemas. Little detailed technical information is available unless one has access to the equipment.
+ +Dolby SR, SRD, etc.
Dolby SR and SRD are just two of a whole family of formats. For more information see the Dolby website.
Lucasfilm THX
Although the THX® system uses a Dolby processor, it has different (and it would seem closely guarded) setup requirements. Lucasfilm (the creators of THX) will relieve you of a large sum of money to have your equipment and theatre certified as THX compliant, but it would seem that many cinemas will cheerfully claim to be certified, even though they haven't parted with a cent to have the work done. Much of the work needed to make a cinema THX compliant involves ensuring that minimum acoustic criteria are met, covering reverberation, sound transmission (through walls, floor and ceiling), ambient noise, etc. In my opinion, the standards set appear to be perfectly reasonable (from the few snippets I have been able to find). There is no reason for any new theatre to be built that does not comply, as it seems to be sensible acoustic design.
THX also insists on a minimum sound quality standard from all installed equipment - especially loudspeakers and crossover networks. That any installer would consider using anything less is cause for some concern, but there is a vicious circle effect ... if the sound is bad or mediocre, fewer patrons will attend. Fewer patrons means less income, making it hard to justify spending a lot of money on a good sound system. If the sound remains bad ... (and so it continues).
All in all, it would seem that the THX requirements are a very good starting point, and to refit an old theatre or build a new one without applying proper acoustic treatment and installing a decent sound system is a recipe for disaster. In some cases, old theatres may already have acceptable acoustics (not ideal perhaps, but acceptable), and little or nothing may be needed in addition to what's there. Few existing sound systems will be re-usable, but that depends on the age and general condition of the equipment. The financial burden ultimately decides what is possible, because motion picture theatres are no longer the "cash cow" they once were.
Sony SDDS
Sony's SDDS (Sony Dynamic Digital Sound®) system uses its own proprietary processor, and like the Dolby system it has provision to equalise the loudspeakers. Not having played with one, I can't comment on the setup process in any real detail. The manual does provide a glimmer of hope though.
Because the SDDS system has provision to interface with the Dolby CP500 (and above) processors, in many cases the Dolby processor will maintain control over the overall system equalisation. This ensures that all films will sound as mediocre as each other if the standard setup is used.
The SDDS processor does have full equalisation built in, and it must be pointed out that the instructions are at odds with those from Dolby. In general, the instructions are in quite close agreement with my recommendations above. Sony rightly points out that you cannot equalise the room with its reverb and reflections, and suggests moving the measurement mic(s) closer to the speakers if measurement results appear wrong. It is also recommended that all speakers of the same type should use the same EQ settings - this is a very good start, and is consistent with reality.
Unfortunately, it is likely that installation technicians accustomed to the Dolby system will use the procedure they know, rather than follow the instructions.
DTS
Digital Theater Systems. The latest processor (XD10P at the time of writing) has full equalisation facilities, as well as all normal decoding facilities. I finally have access to the calibration procedure, and it seems to be reasonable - at least on the surface. Although the unit has an inbuilt RTA (real time analyser), it is suggested that this should only be used for quick checks. There is a complete section describing how to 'calibrate' the room using the in-built graphic equaliser, and the X-Curve is prominently displayed as the ideal response.
It does have the ability to measure RT60 reverb time as required for THX certification. Full 'calibration' is recommended to be performed with an external fully calibrated RTA and microphone. However, it is recommended that once the centre channel is 'aligned', the EQ should be copied to left and right - again, assuming that they are the same as should be the case. Likewise for the surround speakers.
There are also some suggestions for verifying that the speakers are up to the task, and this includes a proper listening test. Info is pretty sparse on exactly what to listen for, so I suspect it is assumed that the technician will have reasonably good knowledge of how a cinema system should sound.
Other Formats
There have been a number of other competing formats for digital and/or multi-channel audio for cinemas, but many of them have died because of lack of support or technical problems. I don't propose to even attempt to list them here, as they are not relevant to the current discussion - namely system equalisation.
That cinemas should sound consistent is beyond any doubt. That the original work of Ioan Allen (who joined Dolby in 1969) was ground-breaking is not disputed. Allen pushed the boundaries of cinema sound in many areas, and for the first time, there was a move to maintain some kind of standard so that cinema goers could expect to hear the dialogue clearly, and experience the movie more-or-less as was intended when it was dubbed, mixed or otherwise dealt with at the production sound stage.
For various reasons, the methodology used and decisions made were flawed - taking measurements in the reverberant field is pointless, and can only ever yield mediocre results at best. However, even mediocre is certainly better than 'really bad', or perhaps patrons complaining that "the sound was total crap!". In some cases, it is probable that mediocre was a huge leap forward.

When equalisation is used, it is illogical and obviously completely wrong that more or less identical speaker systems (left, centre and right) can (and usually will) have radically different EQ applied at the end of the 'calibration' process. If any EQ is needed at all, it should obviously be the same for all speakers of the same type. In addition, and perhaps most important of all, remember that ...

    You cannot correct time with amplitude!
Reverberation is time related, and there is absolutely no form of equalisation that can be applied that will change it. None whatsoever! To continue with the pointless pretense that the process works just means that there will be no improvement of the sound quality in cinemas, regardless of further advances in loudspeaker performance.
The technology to make excellent sounding speakers has existed for many years, but loudspeaker driver and cabinet costs and the sheer size of the systems needed for a decent sized cinema mean that it becomes a very expensive exercise to outfit a modern multi-cinema complex with the best that can be made. However, these costs must also be put into context - a modern film is an extraordinarily expensive undertaking, and may only last a month or so at the box office.

The cinema (individual or complex) will be there for countless films, and a genuinely excellent sound system becomes just a comparatively small part of the overall setup or refurbishment cost. If properly designed, ongoing maintenance should be minimal - a well designed and implemented sound system can last for many, many years without a single failure.

Once the myth that the X-Curve is somehow a good idea can be finally laid to rest, and the silly reverberant-field 'room equalisation' nonsense is stopped, there is no reason at all to prevent the cinema from being all it can be. There are already a few people who advocate abandoning the X-Curve and all attempts at room EQ, but as yet they are a more or less silent minority.

Looking through websites and forum pages is instructive. Many explanations for the X-Curve are simply regurgitated from some other website, and some are almost identical to each other. Industry professionals on forum sites often ask questions that clearly show that they have no idea of what the X-Curve is, why it is used, and what it's supposed to do. Very few point out any of the serious deficiencies that have been described here.

The references cited here are just some that may be found on the Net, discussing cinema processing, equalisation (not just for theatres) and many other similar topics. These are the ones from which I made specific notes, but there are countless others that discuss the general principles.
Elliott Sound Products - Class-D Amplifiers
Class-D amplifiers are now among the most popular audio amplifiers, and are used in a vast number of consumer products. One of the reasons for this is that the heatsink can be much smaller, and for low power the PCB often provides an adequate heatsink for normal listening. On-line sellers offer a range of different boards, and many cost less than the parts used to build them.
Not all are usable though, with some being so bad that anyone who is serious about sound quality would be unable to listen to them. This can be the case even when apparently identical parts are used. Because Class-D amplifiers operate at very high switching frequencies (usually greater than 300kHz), a small error in the board layout can make a big difference to the end result. I have a selection of Class-D amps that were purchased for evaluation and with this article in mind.

Some are very good, with low distortion and a flat frequency response, although many are load dependent and the high frequency response will change depending on the load impedance. Consider that almost all speakers will have an impedance that's well above the rated/ nominal impedance at frequencies above 10kHz or so. This can make the result something of a lottery with a Class-D amp that is highly load dependent.
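The load dependence is easy to demonstrate with a quick calculation. The sketch below computes the response of a second-order LC output filter into different resistive loads. The 22µH and 680nF values are assumed 'typical' figures for illustration, not taken from any specific board:

```python
import math

L = 22e-6    # output inductor - assumed typical value, not from any real board
C = 680e-9   # filter capacitor - also an assumed value

def response_db(f_hz, r_load):
    """Magnitude (dB) of the 2nd-order LC output filter into a resistive load."""
    w = 2 * math.pi * f_hz
    mag = 1.0 / math.sqrt((1 - w * w * L * C) ** 2 + (w * L / r_load) ** 2)
    return 20 * math.log10(mag)

for r_load in (4.0, 16.0):
    print(f"{r_load:.0f} ohms: {response_db(20_000, r_load):+.1f} dB at 20kHz")
```

With these values a 4Ω load is close to flat at 20kHz, but a 16Ω load (typical of a speaker's impedance rise at high frequencies) shows a peak of around 2dB. The filter is only correctly damped for one load impedance, which is exactly why an undamped Class-D output stage is 'a lottery'.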
I doubt that any of the Class-D ICs currently made are inherently 'bad'. It's a foregone conclusion that some are better than others, but the primary cause of poor sound quality is the PCB layout. Most 'linear' amplifiers are at least passably tolerant of board design, but if it's not done properly you may end up with twice as much distortion as you expected. With a poor layout, it's easy for the distortion from a Class-D amp to be 10 times that of a good layout, even using the exact same parts.
If you're unsure about how these amps work, I suggest that you read Class-D (Part 1), which has been on the ESP site since 2005. It's a contributed article, written by one of the owners of ColdAmp (based in Spain), but the company has since ceased business. Importantly, it covers the operation of a 'standard' Class-D amplifier, but concentrates on fixed frequency types. These are now in the minority, with variable-rate switching being more common. These are often classified as using '1-bit' Sigma-Delta modulation. A very common (and popular) IC is the IRS2092, and while it's a fairly early IC (introduced ca. 2007) it is still used in many Class-D designs.
One thing that's more than a little annoying is the insistence by many Class-D vendors on quoting output power at 10% THD (total harmonic distortion). A common way to claim the maximum power is to simply use the power supply voltage. For example, if the supplies are ±40V, with zero losses the RMS voltage is 28.3V, so power is claimed to be 200W into 4Ω. A more realistic figure is about 1dB (20%) less, or 160W, but that still assumes a regulated power supply that maintains the voltage under load. In most cases, the 10% THD output power should be divided by two (-3dB) so the claimed 200W output is more realistically only 100W. Some Class-D amps get very 'ragged' as the output voltage approaches the rail voltage(s), as evidenced by the screen-shot shown further below.
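The arithmetic behind these inflated claims is trivial to check. A minimal sketch, using the ±40V / 4Ω example above:

```python
import math

def rail_limited_power(rail_v, load_ohms):
    """'Claimed' power: pretend the supply rail is the peak output voltage,
    with zero losses - the trick described in the text."""
    v_rms = rail_v / math.sqrt(2)
    return v_rms ** 2 / load_ohms

p_claim = rail_limited_power(40.0, 4.0)
print(round(40.0 / math.sqrt(2), 1), "V RMS")            # 28.3 V RMS
print(round(p_claim), "W claimed")                       # 200 W
print(round(p_claim / 2), "W - 10% THD figure halved")   # 100 W
```

The -3dB (halved) figure is where the realistic continuous power usually lands once losses and a sagging supply are accounted for.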
Some also quote the 1% THD power output, which is when the amplifier is (supposedly) just on the verge of clipping. It's uncommon for most people to run an amp at full power, and one has to trawl through the datasheet to find THD figures at realistic output power. You can be sure that the figures quoted are for a PCB that's been very well designed, with first-rate parts used throughout and regulated supplies. If you buy something from eBay or similar sites, you get what you get. Sometimes quite alright, but other times a disaster. I have examples of both.
I've not published a Class-D project, and after reading this article you'll know why. Some of the parts are troublesome, with the output inductor being one. The best performance can only be achieved by using SMD parts, which minimise stray inductance that causes problems with very high switching speeds. The PCB has to be perfect, which often means several iterations to get it right. Unless a constructor can get the exact parts specified, there's no guarantee that performance will be acceptable. It becomes a minefield, where the smallest construction error can cause instantaneous failure, and it's simply not something I'm willing to try to support.

In the following descriptions and circuits, there are often multiple supply voltages referenced. +VDD is the upper MOSFET's drain voltage and -VSS is the lower MOSFET's source voltage. The PWM signal switches between the two - there is no intermediate state other than the dead-time, where both MOSFETs are turned off. Additional supplies may be referred to by a number of different terms, but they are usually easy enough to identify. Anything that includes 'A' (e.g. VDDA) means it's power for analogue circuitry (input stages, modulators, etc.). There are no conventions, even from the same IC manufacturer, so where necessary they are explained in the description for each circuit presented.
Class-D was invented by British scientist Alec Reeves in the 1950s [ 1 ]. Strictly speaking, he invented pulse-code modulation (PCM), the underlying principle of Class-D. As with so many things we take for granted, PCM was developed for telephony, with the first patent taken out by Reeves in 1938 (using valve circuitry). Class-D wasn't practical until the MOSFET was developed. This 'new' device was presented in 1960, a year after its development. The idea was proposed in 1926, but it was not possible to fabricate the device at that time. The term 'Class-D' came about because it was the next letter in the alphabet, and we already had Class-A, B and C. The 'D' does not mean digital, but that distinction has become blurred over time. While some Class-D amplifiers may use digital processing internally, the operation is completely analogue. For those Class-D amplifiers with digital inputs, after any internal signal processing there's a DAC (digital to analogue converter) before the power amplifier itself (for example, see the SSM2518 'Digital Input Stereo, 2W, Class-D Audio Power Amplifier' datasheet).
The 'standard' fixed-frequency PWM waveform is derived from the comparison of the input (audio) signal and a triangle (or sometimes a ramp) reference waveform. This sets the switching frequency, and it should not be less than ten times the highest audio frequency. The two signals are applied to a comparator, which outputs a 'high' or 'low' voltage, depending on the relative amplitudes of the two inputs. With no input, the output is a 1:1 squarewave, leaving a net output voltage of zero. However, there is always some switching signal breakthrough, and in an ideal case it's a sinewave at the switching frequency. An increased switching frequency makes the output filter less critical, but increases switching losses. Low switching frequencies reduce switching losses but make the output filter more difficult. Most modern Class-D amps use a switching frequency of at least 300kHz. In the case of self-oscillating types the switching frequency is generally highest at low input levels, and reduces as the amp approaches clipping.
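The comparator process described above can be sketched in a few lines of Python. This is purely illustrative - the frequency and sampling choices are arbitrary, not taken from any real modulator:

```python
F_SW = 300_000   # switching frequency (Hz) - arbitrary but typical

def triangle(t):
    """Triangle reference, swinging -1..+1 at the switching frequency."""
    phase = (t * F_SW) % 1.0
    return 4 * phase - 1 if phase < 0.5 else 3 - 4 * phase

def pwm(audio, t):
    """Comparator output: 'high' (1) while the audio exceeds the triangle."""
    return 1 if audio > triangle(t) else 0

def duty_cycle(audio, samples=20_000):
    """Average output over many switching cycles for a DC input level."""
    return sum(pwm(audio, i / (F_SW * 100)) for i in range(samples)) / samples

print(round(duty_cycle(0.0), 2))   # no input: close to 0.50 (1:1 squarewave)
print(round(duty_cycle(0.5), 2))   # positive input: duty cycle rises to ~0.75
```

The average (filtered) output follows the input linearly, which is the whole basis of PWM Class-D: the output filter simply recovers the moving average of the switched waveform.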
The way a Class-D amplifier operates is fairly simple in principle, but getting it right is not. The first commercially available Class-D amp was a kit made by Sinclair, designated X10, which was closely followed by the X20. When the X10 was launched ca. 1965, it was the first to use Class-D, but it had many problems. The output stage used bipolar transistors that weren't fast enough to switch cleanly, and because there was no 'real' output filter it radiated harmonics of the switching waveform. This caused radio frequency interference, and that (along with dubious quality semiconductors) caused its demise. The X20 was no better, and while it was claimed to deliver 20W, that was simply not possible. The kits disappeared very rapidly once the problems were discovered. Many other manufacturers followed, but Class-D remained something of a niche product until the early 2000s.
The above is adapted from an original Sinclair circuit. (Sir) Clive Sinclair was nothing if not modest (not!), hence the photo included in the schematic, which I retained because it's part of the legend. The amp itself was a disaster, not only because of radiated RF interference, but also due to Clive's penchant for getting the cheapest transistors available at any one time. That's why no transistor types are shown, because they were likely to change. Note that the amp uses a negative supply voltage (not uncommon with germanium circuitry at the time). However, it's very unlikely that germanium transistors were used in the X20. A PCB photo I've seen puts TR11 and TR12 at odds with what's shown in the schematic, with TR12 being the TO-3 device, with no TO-66 transistor to be seen.

While many Class-D amps use PWM (pulse-width modulation), there are a couple of alternatives [ 2 ]. These include Sigma-Delta (ΣΔ aka Delta-Sigma) and so-called 1-bit modulation. A variation sometimes seen is 'PDM' - pulse density modulation, where the number of pulses depends on the signal level. Many new designs use a 'self-oscillating' converter, which solves some of the issues but introduces others. When multiple Class-D amps are combined in a chassis, there is always the chance that the difference between oscillator frequencies causes audible whistles (sometimes referred to as 'birdies'). With designs using a fixed modulation frequency the oscillators in each amp can be connected to an external 'master' oscillator, and some ICs have clock synchronisation inputs and outputs.

The designs shown below are a combination of fixed and self-oscillating types. Self-oscillating Class-D amps cannot use clock synchronisation, because there is no 'clock' as such. Self-oscillating amplifiers generally have a switching frequency that changes with signal level. The amount of variation depends on the design. Using a modulated clock frequency reduces the radiated emissions (RF interference) because the interference is spread out, rather than concentrated at a single frequency.

One of the major claims is that Class-D amps are very efficient, but that requires some qualification. When operated at (or near) full power, they are more efficient than Class-B (including Class-AB) designs, typically up to 90%. However, at (say) one tenth power that may fall dramatically, depending on the quiescent current. MOSFETs are incapable of instantaneous switching, and at low power the switching losses and operating current for the modulator become significant. At very low power, they are usually no more efficient than a low quiescent current Class-B amplifier. For home use, it's unusual to operate any amplifier at close to full power unless it's only a low-power design, but this also depends on the loudspeaker efficiency, the type of music and the listener's preferences. A great deal depends on the design of course, so you need to look at efficiency graphs in the datasheet.
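The efficiency fall-off at low power is easy to see with a toy loss model. The numbers below are assumptions chosen for illustration (3W of fixed modulator/ switching loss, conduction losses at 5% of output power) - they are not data for any real amplifier:

```python
P_FIXED = 3.0    # assumed fixed losses: modulator, gate drive, idle switching (W)
K_COND = 0.05    # assumed conduction loss as a fraction of output power

def efficiency(p_out):
    """Output power divided by total input power, for the toy model above."""
    return p_out / (p_out + P_FIXED + K_COND * p_out)

for p_out in (100.0, 10.0, 1.0):
    print(f"{p_out:>5.0f} W out: {efficiency(p_out) * 100:.0f}% efficient")
```

Near full power the fixed 3W is negligible and efficiency is over 90%, but at 1W output the fixed losses dominate and the 'efficient' amplifier is anything but - which is exactly the behaviour the efficiency graphs in real datasheets show.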
There are many things that must be considered in the design of a Class-D amplifier, most of which were ignored completely with the Sinclair designs. While they were ground-breaking at the time, the required technology wasn't available to make them work well. Most of the designs covered here are capable of distortion levels below 0.1%, which doesn't match most of the better Class-B (including Class-AB) amplifiers, but there are other designs (mainly proprietary) that achieve noise and distortion levels that rival anything else available. Even the IRS2092 IC is easily capable of distortion well below 0.05% at any frequency, but the PCB layout has to be perfect.
It's important to understand that the power supply is more critical than for a linear amplifier. Where a Class-B amplifier's supply only has to source current, the supply for Class-D both sources and sinks current. If it's unable to sink (absorb) current from the amplifier, the supply voltage will increase (bus pumping). This effect can be reduced by using large filter capacitors. The supply still has to be able to provide the maximum peak current demanded by the load. While the switching operation does reduce the supply current at lower output levels, at peak amplitude (at or near clipping) the supply must deliver V/R amps (assuming a resistive load). A ±50V supply must be capable of delivering ±12.5A peaks into 4Ω, and if it can't, the amplifier will either clip prematurely or the power supply may shut down (if it's a switchmode type).
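Both requirements are simple to quantify. The sketch below uses the ±50V example from the text; the bus-pumping figures (5A returned for a 5ms half-cycle into 10,000µF) are illustrative assumptions, not measurements:

```python
def peak_current(rail_v, load_ohms):
    """Peak current at (or near) clipping into a resistive load: V/R."""
    return rail_v / load_ohms

def rail_rise(i_pumped, t_sec, c_farads):
    """Worst-case rail rise if a constant current is pumped back into the
    filter capacitor and nothing else absorbs it: dV = I*t/C."""
    return i_pumped * t_sec / c_farads

print(peak_current(50.0, 4.0), "A peak")           # 12.5 A, as in the text
# Illustrative only: 5 A returned for a 5 ms half-cycle into 10,000 uF
print(rail_rise(5.0, 5e-3, 10_000e-6), "V rise")   # 2.5 V on the rail
```

The dV = I·t/C relationship also shows why bus pumping is worst at low frequencies (longer half-cycles) and why bigger filter capacitors help.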
In many ways, it can be helpful to think of a Class-D amplifier as a '4-quadrant switchmode buck converter', with the instantaneous output voltage (and polarity) determined by the audio input. '4-quadrant' simply means that the amp can supply and sink current of either polarity. A 'conventional' amplifier is different, and the PSU only has to supply current, and any power returned from the (reactive) load is dissipated as heat in the output transistors. Output device dissipation in a linear amp depends on the voltage across and the current through the output devices. For a Class-D amp, MOSFET dissipation is a combination of switching losses and RDS-on (the MOSFET's 'on' resistance).

One thing that you almost always see with Class-D amps is a bootstrap circuit. This is used to provide a 'high-side' voltage that's greater than VDD (positive drain voltage). Every design described in this article uses the bootstrap principle to enable the high-side MOSFET to be driven with a positive gate voltage. It is possible to use a P-Channel MOSFET, but they invariably have lower specifications than the N-Channel 'equivalent'. To provide optimum performance, almost all Class-D amplifiers use only N-Channel MOSFETs. The principle of the bootstrap circuit is shown above. The waveform also shows dead-time, exaggerated for clarity. Dead-time is very important. Too little and you get cross-conduction as both MOSFETs conduct at the same time, too much and you get high distortion.
+ +When the output (VS) is low (either ground or -VSS, Q2 turned on), Cboot is charged via the high-speed diode (Dboot which is forward biased, with optional current limiting by Rboot). When the output switches high (VDD), Dboot is reverse biased (no current flow), and the voltage held across Cboot is used to provide the upper MOSFET (Q1) with a gate voltage that's 12V greater than the +VDD supply voltage. This extra voltage is necessary to switch the gate high, to 12V above the source ('VS', which is the output). Bootstrapping is not needed for the lower MOSFET (Q2) because that's provided by the 12V supply referred to -VSS (VCC).
The bootstrap principle is not particularly intuitive, and you may need to sketch out the circuit and solve for the two output conditions (high and low). The simplified circuit shown should allow it to make sense, but note that the 'VS +12V' voltage is relative to VS (the output). The voltage across Cboot is relatively constant at a little under 12V, but the voltage (VB) referred to GND varies from -38V to +62V. In many cases, the value of Cboot appears to be much too small, but it only needs to supply current for a short duration as the upper MOSFET's gate capacitance is charged. The current peak may only last for a few nanoseconds.
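The two output states can be tabulated from the description above. The ±50V rails are inferred from the -38V / +62V figures in the text; the 40nC gate charge and 100nF Cboot are assumed example values:

```python
VDD, VSS = 50.0, -50.0   # main rails implied by the -38 V / +62 V example
V_BOOT = 12.0            # voltage held across Cboot

def vb(vs):
    """VB always sits one bootstrap voltage above the switching node VS."""
    return vs + V_BOOT

print(vb(VSS))   # -38.0 V: output low, Cboot recharging through Dboot
print(vb(VDD))   #  62.0 V: output high, Cboot 'floats' up with the output

# Why a small Cboot is enough: the droop per cycle is only Qg / Cboot
QG = 40e-9       # example gate charge (40 nC - an assumed figure)
CBOOT = 100e-9   # assumed bootstrap capacitor value
print(QG / CBOOT, "V droop per cycle")   # easily topped up each low half-cycle
```

The capacitor only has to hold up the gate supply for one switching half-cycle, which is why values that look absurdly small on paper work perfectly well in practice.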
A few manufacturers have experimented with 'tri-level' Class-D, with a number of possible implementations. The general idea is that the MOSFETs don't have to switch between the two supply rails, only between zero and positive/ negative as demanded. There isn't a great deal of information on this scheme, but there are a number of patents that describe the principles. I don't know of any commercial offerings, but Crown did release an amplifier it called 'Class-I' which uses "symmetrical interleaved PWM" (See White Paper). I'm unaware of the current status of this, but given the wild claims and lack of any updated information it can probably be ignored until further notice.

Some of the terms used with Class-D can be perplexing at first. The datasheets usually explain what everything means, but it can be hard to find. The most common are as follows ...
PWM    Pulse Width Modulation, as shown in Fig 1.1.
SE     Single Ended. Either using two supplies [+Ve and -Ve] or an output capacitor for single-supply amps.
BTL    Bridge-Tied Load. Two power amps with the load connected between the outputs. The two amps operate in anti-phase (180° phase shift/ inversion). Peak-to-peak output voltage swing is twice the supply voltage, so a 50V supply gives a 100V P-P output voltage (35V RMS).
PBTL   Parallel BTL. Two amps are operated in parallel to double the available current. Usually requires that the IC is designed to be paralleled.
There are also many different terms used to describe the supply voltage(s), along with any other voltages either generated by the IC or required for it to work. A sample of these is shown in the following drawings, but other devices will often use different terminology for the same voltage, even from the same manufacturer.
One of the things that nearly all high-power Class-D amplifiers use is a level shifter. This translates a voltage in the normal operating range (typically around ±5V) to a higher (or lower) voltage, which can be as much as 200V. Manufacturers are very cagey about disclosing the details of the circuitry used, but it's not particularly difficult for low-speed circuits. This changes when the IC is switching at 300kHz or more, especially since the rising and falling edges are so critical. A displacement of just a few nanoseconds may cause the switching waveform to create shoot-through current if the two MOSFETs are turned on simultaneously. Fortunately, this is all handled by the IC itself, and the user doesn't have to worry about it too much.
The IR (International Rectifier) IRS2092 has been around for a long time. While it cannot be considered 'state of the art', with a well-designed PCB it works very well indeed. It's not in the same league as some of the best examples around, but for low-frequency drivers in particular, it can match many of the other offerings. One down-side is that it requires a separate regulator - it's not complex, but is a nuisance to include. It also requires external MOSFETs, which are surprisingly critical. Because the gate drive current is rather limited (+1A, -1.2A), you can't use nice big MOSFETs, as the maximum recommended gate charge (Qg) is only 40nC (nano-coulombs). To put that into perspective, the (now) rather lowly IRF640 has a total gate charge of 63nC and an IRF540 has 94nC.
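The gate-charge limit is easy to put into perspective: with a roughly constant gate drive current, the switching transition takes about Qg / Ig. A quick sketch using the gate-charge figures quoted above and the IC's ~1A source current (a constant-current approximation, not an exact model):

```python
def switch_time_ns(qg_nc, drive_a):
    """Approximate gate transition time: t = Qg / Ig (nC / A gives ns)."""
    return qg_nc / drive_a

# Gate charges from the text; 1 A is the IRS2092's approximate source current
for name, qg_nc in (("40 nC (recommended max)", 40.0),
                    ("IRF640 (63 nC)", 63.0),
                    ("IRF540 (94 nC)", 94.0)):
    print(f"{name}: ~{switch_time_ns(qg_nc, 1.0):.0f} ns")
```

At the 40nC limit the transition takes roughly 40ns; the larger MOSFETs more than double that, eating into the dead-time budget and increasing switching losses at 300kHz and above - which is why 'nice big MOSFETs' are ruled out.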
The circuit is deceptively simple. Working out some of the resistor values is a minefield though, as there are interdependencies that make it a complex process. The CSH and OCSET pins are used to program the current limiting. The dead-time - a mandatory period where both MOSFETs are turned off - is also programmable. Dead-time prevents 'shoot-through' current that would flow during the small period where both MOSFETs are (partially) conducting. If the dead-time is too great, distortion performance is seriously compromised; if too short, output stage failure is likely.
I don't propose to go through all of the options here, because the datasheet, application note [ 3, 4 ] and other published material (by IR) go into everything very thoroughly. There's probably more information available for this IC than for any other, and I expect that's one of the reasons it's remained popular for so long.
One suitable MOSFET is the IRF6645, with a gate charge of 14nC, rated for 100V and 25A (@ 25°C), allowing for supply voltages up to ~±45V. Another is the dual IRFI4019, 13nC gate charge, 150V and 8.7A (@ 25°C), which can use supplies up to ±70V. However, the limited current means that only high-impedance loads can be used with the maximum voltage (8Ω minimum). As shown in Fig 1, 4Ω loads will be alright, but only if 'benign'. If it's expected that the amp (as shown) will be driven hard into 4Ω, the supply voltage should be reduced.
Note that the IRS2092 is inverting by default, so the speaker should be wired with the 'positive' terminal grounded. If two amplifiers are used, a unity-gain inverting stage should be placed in front of one channel. This places the two amplifiers in 'anti-phase', which minimises the 'bus pumping' effect. This condition arises because the speaker load is reactive, and is made worse at low frequencies and/ or by inadequate power supply filter capacitance. One or both supplies can have their voltage increased sufficiently to cause an 'OVP' (over-voltage protection) shutdown. If properly configured, this will be activated before the voltage is high enough to cause MOSFET failure.
Bus-pumping is a potential issue with all Class-D amplifiers, and most stereo configurations will invert one channel. This is shown with some of the other circuits seen in this article. The IRS2092 reference designs (there are several) show additional circuitry, which is not needed for basic operation. If it's not used, there's no over-temperature cutout, so heavy usage with low impedance loads can cause the output stage to fail. IR has published a number of complete designs, including additional protective circuitry, and extensive measurements. In most cases, it should be possible to get less than 0.05% distortion with an excellent PCB layout. Unfortunately, many of the PCBs you can buy don't qualify. One that I've tested has over 3% distortion even at modest output levels, which is completely unsatisfactory ... and very audible!
Another, using almost the exact same parts, has distortion that remains well below 0.1% at any level below clipping. The PCB layout is only one factor though. A miscalculated output inductor and (to a lesser extent) an inappropriate output filter capacitor can easily wreak havoc with the performance. If the inductor saturates, distortion is increased dramatically. The inductor also needs to have low resistance, otherwise it will get hot, the ferrite characteristics will change, and it wastes power.
The capture above is from an IRS2092-based amplifier, and distortion is below 0.1%. The PCB appears to be well laid out, and it has decent-sized filter caps on board. It both tests and sounds like any other amplifier. There is a limitation in my workshop speaker system that precludes 'audiophile' comparisons, but I listened at various levels and didn't detect anything 'nasty'. Overall, this is what I'd expect from a budget amp using the IRS2092 IC. I have another that's better, but the capture above is indicative of what you should expect. In these (and the next) traces, the violet trace is the distortion residual from my distortion meter, and the yellow trace is the audio.
In stark contrast is the Fig 2.3 capture. This board also uses an IRS2092 IC and the same dual MOSFET, but the distortion is considerable, and very audible. The majority of the parts are much the same, but the 'designer' chose to omit the gate resistors and any form of supply bypass. The test was done after I'd added gate resistors and supply bypass caps, but it's still awful. This is the difference between seemingly similar amp boards, where you'd normally expect them to be almost identical. Note the ragged audio waveform, which is a give-away that all is not well. The layout and component choices make all the difference!

The scope capture above shows what happens as a self-oscillating amp clips. It's easy to see that the modulation frequency falls as the amp's output approaches the supply rails. In 'full' clipping, the oscillation stops completely, which should come as no surprise. As the modulation frequency falls, its amplitude increases because the output filter is less effective. This is roughly the 10% figure that's often quoted for output power, and as you can see it's unacceptable as a 'figure of merit'. The distortion trace isn't shown because my meter was unable to make sense of the waveform with its superimposed oscillator residuals.

IR (International Rectifier) probably has more detailed information on the design and implementation of Class-D amps than any other manufacturer. A lot of it is fairly old now, but the documents published are very comprehensive. Naturally, the emphasis is on IR devices throughout, but for an understanding there's nothing better that I've found. If you do a search for 'classdtutorial.pdf' and 'classdtutorial606.pdf' you'll see what I'm talking about. These documents go into a lot of detail about things you probably don't need, but they also cover the things you do need to know.
The NXP (Philips Semiconductors) TDA8954 is a popular IC, and it's theoretically capable of 120W output. This is highly optimistic though, as the limit with ±30V supplies is 112W into 4Ω (120W is claimed into 2Ω). In reality, expect around 100W at the most (4Ω). These ICs are used in a wide variety of different configurations, including parallel BTL (two BTL amps in parallel for double the output voltage and current), resulting in up to 400W into 4Ω. While it's claimed to be high efficiency (83%), it has relatively high quiescent dissipation at about 3W, and it runs warm at idle. Many Class-B amps will be lower than that, but of course they will dissipate far more at any significant output power.
The schematic is adapted from the datasheet, and it's somewhat inscrutable because almost everything is internal. While you can't see the internal functions, the basic diagram shown in Fig 1.3 is sufficiently generic that you can work out what happens internally. The IC has differential inputs, but single-ended operation is obtained by grounding the inputs as shown above. Note that the two channels are operated in 'inverse phase' to prevent bus-pumping. This approach is common, and is seen with other examples as well. If connected as BTL, the positive input of one channel is connected to the negative input of the other and vice versa. The input signal can be balanced or single-ended. It has the modulator, level-shifters and gate drive circuits shown in Fig 1.3, and adds thermal and overcurrent protection circuits, as well as the differential inputs and standby/mute functions.
In many cases, the 'Mute' and 'Standby' functions aren't necessary, in which case the 'Mode' pin is simply pulled high (+5V). The datasheet is quite extensive, and has many graphs of distortion, power dissipation, frequency response and anything else you may find interesting. Because the IC is SMD only, it's expected that most of the support resistors and capacitors will be SMD as well. The IC has a thermal pad on the top, so a heatsink is simply clamped onto the top of the package (with thermal compound of course). Its performance is surprisingly good, as shown next.
There's not much switching frequency residual, and the distortion residual shows no sign of harmonics. That doesn't mean there aren't any of course, but my distortion meter gets 'confused' when there's a high-frequency signal present along with the audio. The meter reading was below the minimum the meter can show reliably, but I used the same output voltage and load that was used for the two captures shown above. Overall, this is a good result, and the sound quality seems to be very good (my workshop speaker systems are not true hi-fi though).
While one could certainly build an amplifier from scratch, the TDA8954 is listed as 'no longer manufactured', which makes things harder. However, there are many complete amps from China that still use it. Unfortunately, many ICs of this type have depressingly short production runs, and in some cases it's possible for the ICs to become unavailable before a PCB can be designed and manufactured by a hobbyist or 'small-scale' supplier.
This IC, the TPA3251 (along with the next), is from TI (Texas Instruments), and is a single-supply BTL Class-D amplifier. The recommended supply voltage (PVDD) is 36V, and it requires a separate 12V supply (VDD). Almost everything needed for a Class-D amp is internal, but as you can see, there are many external support components. These are predominantly capacitors for supply rails, bootstrap and input coupling. The inputs can be used as balanced or unbalanced, with a 24k input impedance. The DC voltage at each IC input pin is not specified, but I'd expect it to be 6V. The datasheet shows the input capacitors as non-polarised (presumably ceramic), but electrolytic caps will probably have slightly lower distortion. High-K ceramic capacitors have a considerable value variation with applied voltage and temperature.
+ +The amp can drive 4Ω loads in BTL, with a 1% THD claimed output of 140W. The claimed output power at 10% THD is 175W, but that's an unacceptable amount of distortion. The THD at 1W is said to be 0.005%, and if that's achieved it's a very good result. Unfortunately, I don't have a board using the IC to test, so I can only quote the datasheet figures. While the datasheet claims that no supply sequencing is necessary, it also says that for minimum noise the '/RESET' pin should be pulled low during power-on and power-off. The other supervisory pins indicate a 'FAULT' and clipping or over-temperature warning ('CLIP-OTW'). If the IC produces an over-temperature shutdown, a '/RESET' must be applied to enable operation. The IC also has protection against over/under-voltage, and overcurrent protection for both high and low-side MOSFETs. The 'Mode' pins ('M1', 'M2' & 'M3') are shown for standard BTL operation.
+ +This IC can also be used in single-ended mode, but due to a DC offset of 1/2 PVDD (nominally 18V) the speakers must be coupled via capacitors. The value depends on the impedance, but I wouldn't recommend anything less than 2,200µF (-3dB at 18Hz with a 4Ω load). Another option is PBTL (parallel BTL), which couples the two outputs together in parallel to allow the load impedance to be as low as 2Ω. IMO this is not useful in most cases, because the speaker leads have to be big to prevent significant power losses. It's less of a problem for powered boxes, with the amplifier directly connected to the speakers in the enclosure.
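If a different load impedance or corner frequency is needed, the coupling capacitor's -3dB point is easily checked with the standard first-order formula. A quick sketch in Python, using the 2,200µF and 4Ω values from the text above:

```python
import math

def f3(c_farads, r_ohms):
    # -3dB corner of a series coupling capacitor into a resistive load
    return 1.0 / (2 * math.pi * r_ohms * c_farads)

print(round(f3(2200e-6, 4), 1))  # → 18.1 (Hz), matching the 2,200µF / 4Ω figure
```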
+ +The TAS5630 is another IC from TI. It has a higher power rating (higher supply voltage at up to 50V), and can be used in single-ended mode, BTL or PBTL. The maximum output is claimed to be up to 480W (1% THD) into 2Ω when used in PBTL, or an output of 240W in BTL into 4Ω. Rated distortion is 0.05% at 180W output (4Ω) or less. There are many similarities with the TPA3251, but for reasons that I find somewhat mysterious, the pinouts are different.
+ +The schematic shown is for the TAS5630DKD (HSSOP package) version. There's an alternative package (TAS5630PHD, HTQFP package) which has 64 pins. I don't know which is the most common, but the schematic shown is still representative, although there are a few pins missing that are present on the 'PHD' version.
+ +Input impedance is 33k, and the DC voltage at the input pins is 6V (estimated, as it's not disclosed). The inputs have series resistors that are missing on the TPA3251 circuit, as are the 100pF capacitors which will reduce the amount of HF noise reaching the input pins. It seems that the schematic and overall design were done by different people, with no reference to other ICs or schematics (within the same company), where you'd expect the designs to be almost identical.
+ +Like the previous example, I don't have one of these to test, so I have to rely on the datasheet. The supervisory pins ('/RESET', '/SD', '/OTW' and 'READY') should be self-explanatory. Unlike the TPA3251, there's no clipping indication. The 'Mode' pins ('M1', 'M2' & 'M3') are shown for standard BTL operation. These pins (for both the TPA3251 and TAS5630 ICs) are used to select SE (single-ended), BTL or PBTL operation.
+ + +The next drawing is yet another from TI, but this time it's a dedicated automotive IC, the TPA6304-Q1. Automotive ICs are a very cost-competitive product, so the support parts are the minimum possible. It's rated at 25W/ channel, but of course that's highly optimistic (1% THD claimed, but I'm doubtful). Like more 'traditional' automotive power amps, each channel is BTL, but it does have the capability to use PBTL to drive lower impedance loads, down to 2Ω.
+ +The real output power (at 13.8V) will be closer to ~18W/ channel at around 1% THD, and I include the following quote from the datasheet ...
+ +The TPA6304-Q1 device is a four-channel analog input Class-D Burr-Brown™ audio amplifier that implements a 2.1 MHz PWM switching frequency that enables a cost optimized solution in a very small 2.7 cm² PCB size, high impedance single ended inputs and full operation down to 4.5 V for start/stop events.

The TPA6304-Q1 Class-D audio amplifier has an optimal design for use in entry level automotive head units that provide analog audio input signals as part of their system design. The Class-D topology dramatically improves efficiency over traditional linear amplifier solutions.
+
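The ~18W/channel estimate above is easy to sanity-check. A BTL output stage can swing close to the full supply voltage across the load; the ~1.8V allowed below for output-device losses is my assumption for illustration, not a datasheet figure:

```python
v_supply, v_loss, r_load = 13.8, 1.8, 4.0   # v_loss is an assumed device drop
v_peak = v_supply - v_loss                  # peak differential swing (BTL)
p_out = v_peak ** 2 / (2 * r_load)          # continuous sinewave power into 4Ω
print(round(p_out))  # → 18
```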
The IC has extensive diagnostics built-in, and details can be obtained using the I²C interface. Many aspects of the IC's operation can be changed as well, but there are preset (aka default) values that will work for most purposes. Consider that the datasheet is 122 pages, so the amount of information is vast. Not all of it is essential of course, but to get the maximum performance some degree of configuration (via the I²C bus) is needed. Since it's designed for automotive applications, it's protected against transients up to 40V (typically caused by a 'load dump', when the electrical system disconnects a high current load).
+ + +I've included the Tripath TA2020 because for a while, everyone seemed to think it was the greatest thing since sliced bread. Released in 1998, it even managed to be listed in the 'Chip Hall of Fame' (by IEEE Spectrum), although their description was wrong, claiming a 50MHz sampling rate (it's not stated in the datasheet, but it's doubtful if it exceeded 400kHz). The topology doesn't show it, but these ICs relied on a self-oscillating architecture that was sufficiently different at the time to be 'noteworthy'. At one stage I had a pair of larger versions (4 x TA2022 in BTL if I recall) installed in a chassis that was intended to be used for high-power testing, but the first time it was called upon to 'do its duty' it promptly blew up. At the time, I was testing subwoofer speakers, and while the power supply had very large filter caps, it apparently 'pumped' the supply voltage high enough to cause IC failure.
+ +The Tripath company filed for (US) Chapter 11 bankruptcy in 2007, only 9 years after the first patent was filed. Their 'claim to fame' was the modulation technique (dubbed Class-T, but it's still technically Class-D). Class-T was a registered trade mark, not a 'new' class of amplifier. There was a great deal of hype for a few years, with many claims that they "sounded like valve (vacuum tube) amplifiers". Having used one occasionally (until it blew up) I can safely say that that particular claim was just nonsense, but the myth persisted nonetheless. As I recall, at sensible listening volumes in my workshop it sounded pretty much like any conventional Class-AB amp, which few other Class-D amps I tested could manage at the time.
+ +The ICs were used in a number of commercial products, and although Cirrus Logic purchased the Tripath company, they never returned to production. All that's left are a few recollections, which in all cases (including my own) should not be considered as particularly useful. The TA2020 is rated for 20W into 4Ω. There were a number of versions, with a fairly wide range of output powers, and these were available for some time after Tripath folded. As near as I can tell, there is no longer any stock, but a few still pop up from time to time.
+ +The specifications for most of the Tripath ICs are easily matched or beaten with other ICs now, so there's no need for anyone to try to get one. Like any Class-D amp, the PCB layout is critical, as is the selection of the output inductor. If you get either of these wrong, then the performance will be awful. Unfortunately I can't provide any 'scope captures as I no longer have any Tripath boards.
+ + +This article is not intended as a series of projects, but is only intended to show some examples of current ICs. The exception is the TDA8954, which is now obsolete but still readily available on complete PCBs from China. The amps are not intended to demonstrate 'state-of-the-art', but if the PCB is well designed and a high-quality output inductor is used, they will equal or surpass many Class-AB amplifiers. There are other proprietary designs that one can purchase, but they are generally fairly expensive. I expect that many readers will know about them, but I'm not in the habit of providing free advertising.
+ +There's no doubt that Class-D has become mainstream, but there's also no doubt that some implementations are worse than useless. One I tested has a major mistake in the PCB, and has an output Zobel resistor in series with the output on one channel. Other errors include badly laid-out PCBs, the wrong type of inductor (causing saturation) and a multitude of other problems. These errors will not be apparent until you've bought the board, so it's very much a case of 'buyer beware'. Sellers on auction sites don't care if the product is crap, because people will buy it anyway. It's not even possible to 'name and shame', because they just close the account and open a new one with a different name. Fortunately, I had no expectations for the boards I bought, because it was in anticipation of writing this article.
+ +In the midst of testing the amps I have to hand, I did a comparison with an early version of the Project 68 subwoofer amp. It makes no pretense of being 'hi-fi', as it's intended for subwoofers, where the (tiny) amount of crossover distortion cannot be heard. When compared to the 'good' Class-D amps it was marginally worse at very low volume, but it trounced the mediocre and 'ugly' (poorly designed and executed Class-D) with the greatest of ease. Comparing the good Class-D amps with a low-distortion power amp revealed no audible differences on my workshop systems. A more revealing loudspeaker may betray a difference in sound quality, but once the distortion is below 0.1% it's difficult to hear a difference - provided the frequency response is the same. My aging ears don't work at 20kHz any more, so I rely on instruments which not only show any difference, they also quantify any difference that exists. Hearing a 0.1dB difference isn't easy, but a measurement is precise. The same applies for distortion of all kinds.
+ + +Note that most of the references are not linked directly, because manufacturers keep changing the location of reference material (for reasons I cannot fathom) and the links will break.
Elliott Sound Products - Class-G Amplifiers
Firstly, there is some dissent as to what constitutes Class-G compared to Class-H. I have explained the difference as I see it below, but there are many amps that claim to be Class-H that I consider to be Class-G. Because neither class is 'officially' recognised (as are Classes A, B, C & D) it becomes a moot point. Ultimately, it doesn't matter all that much, so feel free to consider the two terms to be interchangeable.
+ +Many people will have noticed that most of the professional high-power amplifiers made these days are Class-G, and seem to have remarkably little heatsinking for the claimed power output. Should one look inside, there are more output transistors than expected too. As well, you may notice that there are also many more diodes than you'd expect to find in any 'normal' amplifier.
+ +All very baffling, and especially so since there is so little information on the Net about Class-G amplifiers. There are almost no schematics that are more than a basic concept, so figuring out how they work is not easy. While a very few circuit diagrams can be found on manufacturers' websites, for the most part it seems that there's a conspiracy of silence surrounding these amps. Without a full service manual, the likelihood of most people finding a complete Class-G schematic is fairly limited.
+ +Certainly, they are discussed (or in some cases simply dismissed [ 1 ]) in various books on audio amplification, with varying opinions as to their suitability, sonic qualities, etc. One thing is quite clear - the added complexity is only of benefit for high-power amplifiers, with an output power of 200-300W or more. In addition, the use of Class-G is of dubious value for normal home listening, where the average power may only be a few Watts but instantaneous (peak or transient) power may reach far higher levels. Although there are modest gains in efficiency, they do not warrant the additional complexity.
+ +At the modest power ratings generally needed for home use (generally well below 200W per channel), a traditional Class-(A)B amplifier is perfectly capable of providing as much noise as you'll need, without raising a sweat. In addition, the lower complexity reduces the likelihood of distortion artifacts, which (it is claimed) may become audible in some (others may say all) Class-G designs.
+ +Despite the so-called 'failings' of Class-G, the technique is now being used for high-speed ADSL line drivers [ 2 ]. This is a surprisingly demanding application, and given that the benefits were seen as worthwhile and the apparent limitations (marginal dissipation improvement, distortion) of little concern, it's clear that some people take the technique very seriously indeed. The other benefit is that the power transformer may be physically smaller, although more complex due to the additional windings.
+ +In some respects, a Class-G amp can be likened to an amplifier that uses series output devices. This arrangement is not especially common, but is sometimes used to improve the safe operating area for the output transistors by limiting the voltage across each transistor pair. From the perspective of improving efficiency, the series design does nothing useful, other than spread the wasted power across more transistors. If one were to contemplate such a design, it makes more sense to add a lower voltage supply rail and make the amplifier Class-G, since there are several benefits.
+ + +The use of a fixed supply voltage simplifies the calculations and simulations, but is somewhat pessimistic. In a real amplifier, the lower supply voltage with high output power results in lower power transistor dissipation. However, I have not included any tests with real loudspeaker loads, so the reactive load normally seen by the amplifier will result in higher dissipation than shown. In general, it is necessary to assume that peak dissipation with reactive loads will be double that obtained with a resistive load.
+ +Please Note: This is not a project, and no schematics are to be assumed to be workable as shown. For this reason, no transistor types are specified. While most schematics show a single output device, there is no single transistor known that can dissipate the power levels expected in the real world with the given supply voltage and load impedance. Multiple devices - in parallel, with emitter resistors - will always be needed in practice.
+ + +At higher power, a linear amp will start to dissipate significant power in the output devices, and this varies with the power level. As the most common of all topologies, Class-B (or Class-AB if you prefer) has a very predictable dissipation at any given output voltage and current.
+ +
Figure 1 - Power Dissipation vs Voltage for Class-B Amplifiers
Figure 1 shows the dissipation of the output transistors of one side of a simple Class-B amplifier using a 70V supply and feeding a 4 ohm load. The dissipation is seen to increase until the output voltage reaches 35V (exactly half the supply voltage), after which it falls again. Only one side of the amplifier is shown, and the test circuit is seen in Figure 2. Naturally, in a real amplifier, the total average dissipation at any voltage is quite different with sinewave or music signals. This is covered later in the article. The average power dissipated for a continuous sawtooth waveform is about 204W.
+ +
Figure 2 - Test Circuits for Power Dissipation
The test signal used was not a sinewave (or a half sinewave, to be exact), because that makes the power dissipation curve too complex for simple analysis. The linear increase of voltage shows the maximum dissipation very clearly, and it occurs at that point where the voltage across the load and output transistor are equal. For the test circuit, this occurs at 35V. With a sinewave signal, worst case transistor dissipation occurs when the RMS output voltage is roughly half the DC supply voltage - 35V RMS, or 306W into a 4 ohm load. Again, this only applies for a resistive load.
+ +Two circuits are shown, Class-B and Class-G. These were used to capture the power dissipation waveforms, and to perform all power calculations.
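For those who want to verify the figures, the dissipation behaviour of the Figure 2 Class-B test circuit is easy to reproduce: the transistor drops (Vcc - Vout) while passing Vout/R into the load. This is an idealised sketch (driver and saturation losses ignored), but it gives both the 306W peak and the ~204W sawtooth average quoted in the text:

```python
VCC, R = 70.0, 4.0

def p_diss(v):
    # Transistor drops (VCC - v) while passing v/R into the load
    return (VCC - v) * v / R

# Peak occurs at half the supply voltage (vertex of the parabola)
peak = p_diss(VCC / 2)

# Average over the 0-70V sawtooth used for Figure 1
steps = 7000
avg = sum(p_diss(i * VCC / steps) for i in range(steps + 1)) / (steps + 1)
print(round(peak), round(avg))  # → 306 204
```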
+ + +Using the test circuit shown in Figure 2 and the same sawtooth waveform used for previous measurements, I measured load power and output device dissipation. For this exercise, driver transistor dissipation was ignored, as I deliberately 'fudged' the simulation parameters to keep this at a very low value (typically less than 1W). A 3-rail Class-G stage was also simulated, but is not shown. Ignoring losses, with 3 supplies, each at 23.33V, the inner devices naturally get 23.33V, the middle transistor supply is 46.67V and the outer devices 70V (close enough). Average dissipation is the sum of all power transistor dissipations (1, 2 or 3). Input power is the sum of the output power and the power dissipated in all output devices, measured with a resistive load (this gives a slightly optimistic result). The results are tabulated below ...
Measurement | Class-B | Class-G 2-Rail | Class-G 3-Rail | Class-G Switched Rail
Average Load Power | 381W | 381W | 381W | 381W
Average Dissipation | 204W | 151W | 133W | 143W
Peak Dissipation | 304W | 268W | 220W | 297W
Input Power | 585W | 532W | 514W | 524W
Efficiency | 65% | 72% | 74% | 72%
The peak dissipation figures given are for the highest instantaneous power that is handled by the device(s). For Class-G, this is always the outermost devices, regardless of the number of supply rails. As is readily apparent, increasing the complexity to use three supply rails gives a marginal improvement in efficiency. Also, bear in mind that the overall efficiency of any Class-G amp is affected by the dynamic range of the programme material and the overall level. The above data show the dissipation and efficiency with an applied 0-70V sawtooth waveform, but this is a highly unlikely waveform in the real world. Also note that this is only one half of an amp, and is used for explanatory purposes only.
+ +Actual results will vary - possibly widely - depending on the usage of the amplifier. One thing is obvious, and that's that a 3-rail system has a minimal improvement over two rail systems. The switched-rail variant is discussed below, but was included here to keep the data organised, and to allow easy comparison.
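The efficiency row follows directly from the others, since input power is simply load power plus total device dissipation. A quick check in Python (the results agree with the table to within about 1% of rounding):

```python
load = 381.0  # average load power (W), the same for all variants
dissipation = {'Class-B': 204, 'Class-G 2-rail': 151,
               'Class-G 3-rail': 133, 'Class-G switched': 143}
for name, p_d in dissipation.items():
    # efficiency = load power / (load power + heat)
    print(f"{name}: input {load + p_d:.0f}W, "
          f"efficiency {100 * load / (load + p_d):.1f}%")
```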
It must be understood that the efficiency figures shown above are different from the generally published values. This is because a sawtooth waveform was used, and not a sinewave. Half sinewave testing will change the numbers, but the relationships will remain fairly similar. For the Class-B example, load power is 287W, transistor dissipation is 90W, and efficiency is 76%. Class-G improves this, with 71W dissipation and 80% efficiency. These figures are for full power, and are not representative of expected performance with music signals.
This arrangement is popular because it's relatively simple to achieve, and if done properly gives very good results. The remainder of this article will concentrate on this topology, although there are others that can also be used. These will be discussed later, but not in great depth.
+ +
Figure 3 - Basic Principle of Class-G Amplifier
The output stage above will be used for all further analysis of the Class-G output stage. The front-end and VAS (voltage amplifier stage) are virtually identical to any normal Class-B amplifier. The VAS would normally be used in place of one of the current sources shown. The arrangement above is convenient for analysis because it is quite straightforward.
+ +Because the inner transistors (Q2 and Q5) are supplied with ±35V, both inner output transistors and drivers must be rated for at least 105V breakdown voltage. The voltage across each will vary by the full inner supply voltage plus the difference between the inner and outer supplies of each polarity. As the output swings between positive and negative, the inner transistors will therefore get a maximum voltage of 70 + 35 = 105V, however prudence suggests that a higher rating is preferable, and ideally one would use transistors rated for the full supply voltage. Once the signal calls for a voltage exceeding 35V of either polarity, the outer transistors (Q6 and Q8) boost the supply voltage in the direction required, allowing the output voltage to swing by almost the full ±70V. Even a small miscalculation (in design or implementation) may cause large amounts of magic smoke to escape from expensive devices, and a great number of rude words to be uttered.
+ +The voltage across the outer transistors can (in theory) never exceed 35V, so low voltage, high power transistors may be used. If higher voltage devices are used, their SOA (safe operating area) should be very good - depending on the devices chosen of course. A good SOA is necessary, because by the time Q6 (for example) turns on, it will be expected to supply 8.75A with the full 35V across it. This is an instantaneous power of 306W, far more than any one transistor can withstand without failure. At the point where Q6 turns on, Q2 (the inner transistor) is turned on fully, and remains so until Q6 turns off again. Consequently, the dissipation in Q2 and Q4 will remain quite low whenever the output voltage is greater than 35V in either direction.
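The 8.75A and 306W figures quoted above come straight from Ohm's law at the commutation point, where the outer device has the full inner-rail voltage across it while sourcing the load current:

```python
V_INNER, R_LOAD = 35.0, 4.0
i_peak = V_INNER / R_LOAD     # load current at the commutation point
p_inst = V_INNER * i_peak     # instantaneous power in the outer device
print(i_peak, p_inst)  # → 8.75 306.25
```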
+ +With the supply voltages shown, the diodes providing the ±35V supplies must be rated for a continuous average current of at least 5A, preferably more. The voltage rating is not a problem, since the maximum reverse voltage is 35V with the supplies shown.
+ + +The main influences are ...
+ +The power dissipation (using the same signal as used for Figure 1 and the Class-G circuit shown in Figure 2) is shown below. Whilst more complex because of the two sets of transistors, there doesn't appear to be a vast overall difference at first glance.
+ +
Figure 4 - Power Dissipation of Class-G Amplifier
To appreciate the effect, one must look at the area below the curves, and determine the average power. The outer supplies remain at ±70V in all cases. For the inner transistors with 35V supplies, the average power - using exactly the same waveform as used in Figure 1 (and looking only at a positive signal) - is about 40W, and for the outer transistors (Q6 and Q8) it's 100W. The total is therefore 140W vs 204W for Class-B, a significant reduction. The ratios will change with differing supply voltage distribution. The total power will rise to 158W if the inner supplies are reduced to 30V. Conversely, if the inner supplies are raised to 40V, the total average power is reduced slightly to 136W, and with inner supplies of 45V the total average power falls further to 129W.
+ +It would seem logical that the inner supplies should therefore be around 70% of the total, and indeed, this will give the lowest overall dissipation - but only for continuous signals at full power. The real world is very different, and amplifiers are not usually operated at full power with a continuous waveform. They are used for music, and as discussed above, there are many factors that influence the optimum inner voltage.
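The effect of moving the inner rail can be demonstrated with an idealised loss model. Saturation, diode and driver losses are ignored here, so the absolute numbers sit below the simulated figures quoted above, but the trend is the same - for this full-power sawtooth, dissipation keeps falling as the inner rail is raised towards roughly two thirds of the main supply:

```python
VCC, R, N = 70.0, 4.0, 7000   # main rail, load, samples across the ramp

def avg_diss(vi):
    # Below the inner rail vi, the inner device drops (vi - v); above it,
    # the outer device drops (VCC - v). All other losses are ignored.
    total = 0.0
    for i in range(N):
        v = VCC * i / N
        drop = (vi - v) if v <= vi else (VCC - v)
        total += drop * v / R
    return total / N

for vi in (30, 35, 40, 45):
    print(f"inner rail {vi}V: {avg_diss(vi):.0f}W average dissipation")
```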
+ +Most professional power amps that use a dual voltage supply will aim for the low voltage supply to be between 40% and 50% of the main (high voltage) supply, but there can be significant variations. When used for hi-fi applications, the low voltage supply may be as little as 30% of the high voltage, since the amplifier will typically spend well over 90% of it's time without ever turning on the outer transistors. Different designers will have different opinions, but the end result will usually be fine for most purposes. The point is that there is no optimum percentage - there are too many variables.
+ +There are several high power designs that use more than 4 supply rails (i.e. two positive and two negative voltages). There are some that have three supplies of each polarity, and even four is not unknown. As always, the law of diminishing returns applies, and the designer must balance complexity and cost against advantages. In most cases, this will result in a two or three stage output circuit (a total of 2 or 3 supplies of each polarity). The process for adding extra supplies is identical to that needed for a simple two stage Class-G stage, and multiple stages will not be examined any further.
+ +As mentioned by Douglas Self [ 3 ], there is an additional topology that he refers to as 'shunt' Class-G. This is not uncommon in commercial amplifiers, but is not covered here. The primary difference between the series and shunt (or parallel) topologies is that the latter requires that the secondary output devices must be capable of withstanding the full voltage from the high voltage supplies.
+ +The series connection only requires that the outer transistors be capable of the voltage difference between the two supply voltages. This can make the device selection easier, since only high power is needed, and not a combination of high power and high voltage. Safe operating area is also improved with the lower voltage.
While the following is not normally publicised, it must be considered that certain test signals (in particular) may embarrass many Class-G amplifiers. Dissipation with some waveforms at specific levels may cause the amplifier to run far hotter than expected. This is not likely to be an issue with normal music signals, but it could still happen!

For example, consider a signal to the amplifier that is clipped (due to incorrect levels set from the mixer or crossover perhaps), and just happens to push the amp to just above the commutation point (say 36V peak for our examples here). This is roughly the equivalent of a squarewave signal.

Under these conditions, the amplifier output stage (Figure 3) will effectively be driven with a 36V squarewave signal. Power to the load is about 147W. A Class-B amp will dissipate ~150W, slightly less than may be expected. The Class-G amp will dissipate the same (150W), and this is the same as when delivering full power - under these conditions, Class-G has not reduced the dissipation at all!
Efficiency Comparison
+Because the efficiency at various powers and waveforms is so hard to quantify, the following graph will hopefully make it a little clearer. As you can see, the Class-G amp has higher efficiency (meaning lower losses and less heat output) over the full operating range. The two types of amp have the same maximum theoretical efficiency at full power. Note that the efficiency of Class-B and Class-G amps can approach 100% if they are driven into hard clipping, but this is not the way they are (or should be) used in practice.
The data are based on a sinewave signal and a 4 ohm resistive load. There is no correction for the power supply voltage collapsing under load - these are theoretical curves only, and are included to allow the difference to be seen easily. The circuit topology used for the graphs was based on those in Figure 2, but modified for bipolar operation. Correction for crossover distortion was included, since the Figure 2 circuits have no provision for transistor bias.
+ +
Figure 5 - Efficiency vs Output Power of Class-B and Class-G Amplifiers
The graph for Class-G looks decidedly odd, but this is real data. At an output power of about 100W, the Class-G amp peaks, and falls again as power is increased further. Minimum efficiency after the peak occurs at ~160W, after which it increases again. The graph was produced by analysing the average power into the load and from the power supply (or supplies), with the peak output voltage raised in increments of 1V. The dip after 100W is the point where the outer transistors start conducting - see Figure 4, and compare the effects of combining the two sets of dissipation data.
+ +The Class-B graph is a straight line because of the power scale, which is based on equal increments of voltage and is therefore not linear. Changing scales does not change the data of course, but this was the simplest way to obtain the graph, ensuring there were enough data points to get an accurate result. The important thing to notice is that Class-G is more efficient over the full normal working range of the amplifier, and especially so for material with a reasonably wide dynamic range.
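The Figure 5 curves can be approximated with a simple numeric model: for each output level, average the supply power drawn from whichever rail is active over one sinewave cycle. This idealised sketch (resistive load, no bias or device losses) reproduces the Class-G efficiency peak at the commutation point and the dip just above it:

```python
import math

VCC, VI, R = 70.0, 35.0, 4.0   # main rail, inner rail, load
N = 10000                      # samples per sinewave cycle

def efficiency(vp, class_g=True):
    # Current |v|/R is drawn from the inner rail while |v| <= VI,
    # otherwise from the main rail; average this over one cycle.
    p_in = 0.0
    for i in range(N):
        v = vp * math.sin(2 * math.pi * i / N)
        rail = VI if (class_g and abs(v) <= VI) else VCC
        p_in += rail * abs(v) / R
    p_in /= N
    p_load = vp * vp / (2 * R)   # sinewave power into the load
    return p_load / p_in

for vp in (10, 35, 40, 70):
    print(f"{vp}V peak: Class-B {efficiency(vp, False):.1%}, "
          f"Class-G {efficiency(vp, True):.1%}")
```

Note the Class-G result at 40V peak, which is lower than at 35V peak - this is the dip visible in Figure 5 where the outer transistors begin conducting.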
+ +
Figure 5A - Dissipation vs Output Power of Class-B and Class-G Amplifiers
Figure 5A is essentially the same data used for Figure 5, but applied differently. In this chart, the dissipation (via the transistors and heatsink) is plotted against output power. This gives an alternate way to view the difference in heat between Class-B and Class-G. As is quite apparent, the heat is reduced dramatically by using Class-G. The overall curve shape is similar to the sum of the two dissipation curves in Figure 4, but is modified because a sinewave signal was used instead of a voltage ramp.
+ +The simple Class-B and Class-G stages both showed a continuous average power into the load of 196W. No surprise, since both used the same signal. The Class-B stage had a transistor dissipation of 245W - this is a lot of heat to get rid of! By comparison, the Class-G stage showed an outer transistor dissipation of 87W, and an inner transistor dissipation of 48W - a total of 135W. This is just over half the dissipation - a significant reduction.
+ +Peak dissipation is also improved considerably. The Class-B stage had a peak dissipation of 306W. This means that even 4 x 200W transistors may be pushed past their limits for each side of the amp (positive and negative, not left and right) - remember the SOA curves, temperature derating and dissipation with reactive loads.
+ +By comparison, the Class-G stage showed peak powers of 269W (outer) and 72W (inner). While it is obvious that the outer sections will need at least 3 x 200W transistors, remember that they do not need to be high voltage. This reduces the financial burden. The inner transistors could use a single 200W transistor, which will be well within its ratings, even at elevated temperature and a reactive load. For ease of comparison, these figures are tabulated below. All figures are for a 4 ohm resistive load. Results into reactive loads (e.g. loudspeakers) will be very different, and are too difficult to predict. I've taken a few measurements, but all averages are based on estimates.
+ +Expect the peak dissipation to double with a worst-case 45° phase shifted current as produced by a reactive loudspeaker load. This is no different from a Class-B amplifier, except that there is a much greater saving with Class-G. All output devices must be able to withstand the worst-case peak dissipation while remaining within their safe operating area at normal operating temperature - not 25°C. This is commonly overlooked by novices, most of whom will (often seriously) over-estimate the power they can get from any given semiconductor.
Condition | Class-B | Class-G
Average Load Power | 196W | 196W
Peak Dissipation (outer) | - | 269W
Peak Dissipation (inner) | 306W | 72W
Average Dissipation (outer) | - | 87W
Average Dissipation (inner) | 245W | 48W
Average Dissipation (total) | 245W | 135W
Average Dissipation (total, reactive) | 320W | 180W
Since a Class-B amplifier doesn't have two sets of devices, the Class-B figures have been included in the 'inner' categories. As you can see from the table, the difference is most worthwhile. Being able to use a heatsink that's less than half the normal size with an inexpensive fan means that a very powerful amplifier is easily made to fit into a 2U rack case (89mm or 3.5" high). Because of the lower losses, this also means that smaller transformers can be used.
Since a single channel Class-B amp will waste 245W (from the table) while providing 196W into the load, the transformer and power supply must supply 441W so the amp can do its job. The Class-G amplifier requires 331W to do exactly the same work in the load - a saving of 110W in total.
The saving is greater if you consider the reactive load dissipation, which as noted above is roughly double that for a resistive load. As seen in the table, this will be (typically) around 320W (or 490W for single frequency, 45° phase shift) for Class-B, but is manageable at 180W for Class-G. Attempting to calculate the real contribution of reactive loads is very difficult, because the load will change from capacitive to inductive (both are reactive) to resistive depending on the frequencies present at any given time. In terms of the average contribution of reactance, it is not unreasonable to assume that it will add somewhere between 10-30% to the resistive dissipation. Again, this will vary depending on usage, speaker design, type of music, etc. I used 30% for the table.
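The figures above can be cross-checked with a short back-of-envelope calculation. This is only a sketch of the arithmetic used for the table: the 30% reactive allowance is the estimate stated in the text, not a measured value.

```python
# Back-of-envelope check of the supply-power and reactive-load figures above.
# The load and dissipation numbers are taken from the table.

def supply_power(load_w, dissipation_w):
    """Power the transformer/supply must deliver: load power plus amp losses."""
    return load_w + dissipation_w

LOAD = 196.0                            # average load power, 4 ohm resistive

class_b = supply_power(LOAD, 245.0)     # 441W, as stated in the text
class_g = supply_power(LOAD, 135.0)     # 331W
saving = class_b - class_g              # 110W

# Reactive loads are assumed to add roughly 10-30% to average dissipation;
# 30% was used for the table (245 * 1.3 is ~318W, tabulated as 320W, and
# 135 * 1.3 is ~176W, tabulated as 180W, after rounding up).
reactive_b = 245.0 * 1.3
reactive_g = 135.0 * 1.3

print(class_b, class_g, saving)
```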
As discussed earlier, it is impossible to predict the optimum voltage for the inner rail, because the signal is effectively random. This also means that every efficiency figure that can be produced can only be for a specific set of circumstances. I have attempted to provide sufficient information to allow the reader to understand the processes involved, but no-one can predict how any Class-G amplifier will perform in the end users' application, unless that application involves a signal that can be repeated exactly (in all respects!). There are simply too many variables.
There is no actual changeover from the inner to the outer pair of transistors; instead, the inner rails can be considered to be boosted as the output attempts to exceed the lower voltage. In theory, the inner transistors remain in control of the signal at all times, although in practice this point is moot. Figure 6 shows the output signal and the voltage at the inner rails - the boosted voltage.
Figure 6 - Voltage Rail Boost Effect
As you can see, the rail voltage is maintained at a couple of volts above the output voltage. The outer transistors are really simply supply voltage boosters, and are designed to turn on just before the output clips at the inner rail voltage. Naturally, once the outer rail voltage is reached, the output will clip, since it can go no further. The switching process is commonly referred to as commutation, since the process is not hugely different from the operation of a brush commutator in an electric motor.
Commutation (or rail boosting) is accomplished using diodes and the outer transistors. As with any function that involves switching, distortion artifacts can be produced that will become part of the output signal, reducing sound quality. In this respect, it is similar to the crossover distortion that occurs when an amplifier fails to reverse its output polarity linearly.

Fortunately, the distortion occurs at a relatively high output level, and it is likely that any distortion will be completely inaudible. This is partly because of the low level of distortion compared to the output level, and partly due to masking, where sounds become inaudible when in the presence of other sounds of a similar frequency but higher levels. This phenomenon is exploited in all MP3 coded audio files, for example. While it is entirely possible that the distortion may be audible with a sinewave input, as the complexity of the music increases the audibility of the distortion reduces. The use of Schottky diodes for the inner rail voltages is highly recommended, as these switch off much faster due to a much lower stored charge, and are therefore less liable to generate audible artifacts.

The zener diodes shown in Figures 2 and 3 are essential. They ensure that the outer transistors start conducting just before the inner transistors saturate. At this point, the amplifier would normally clip, but the outer transistors now boost the supply rails to exactly the degree needed to prevent clipping. As noted elsewhere, once the output signal attempts to exceed the outer supply voltage (either polarity) the amplifier will (must) clip regardless. If the zener diodes are omitted, there will be considerable distortion generated at the commutation points. While negative feedback will reduce it to some degree, it will still be present, measurable and possibly audible.

If the zener voltage is too high, the outer transistors will turn on too early, which will increase the dissipation of the inner transistors. If too low, the outer transistors will not conduct early enough, and the output will have distortion at the commutation point because the inner transistors are no longer in control. Ideally, the zener voltage will be slightly higher than the voltage drop across the inner transistor circuits, including emitter resistors and the losses in the supply and outer transistor base diodes.

Like so many other areas of electronics, there is a balancing act (aka compromise) that must be made. In general, a zener voltage of between 3.3V and 4.7V seems reasonable for most applications, although some topologies will need more, others less.
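The zener selection process described above can be sketched as a simple sum of voltage drops. The individual drop values below are assumed, illustrative figures, not taken from any specific design: the point is only that the zener must slightly exceed the total drop across the inner output stage so that the outer devices conduct just before the inner ones saturate.

```python
# Illustrative commutation zener selection. All drop values are assumed
# example figures - a real design would measure them at full current.

E24_ZENERS = [3.3, 3.6, 3.9, 4.3, 4.7, 5.1]   # candidate zener values (volts)

drops = {
    "inner_vce_near_sat": 1.0,    # inner transistor approaching saturation
    "emitter_resistor": 0.5,      # drop across emitter resistor at high current
    "commutation_diode": 0.6,     # series supply diode (less with Schottky)
    "outer_base_diode": 0.7,      # outer transistor base-emitter drop
}

total_drop = sum(drops.values())                            # ~2.8V
margin = 0.3                                                # small safety margin
zener = min(v for v in E24_ZENERS if v > total_drop + margin)

print(total_drop, zener)   # picks 3.3V, at the low end of the 3.3-4.7V range
```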
Because the inner transistors work at relatively low voltages, it is much easier to provide excellent overload protection than would be the case for a high voltage Class-B amplifier. There is no need for a protection circuit that has multiple break-points (essential for Class-B high voltage designs), because the inner devices are usually easily contained within their safe operating area with a relatively simple circuit.

The protection scheme is simplified solely because of the low operating voltage for the inner transistors, and no other amplifier topology provides this inherent advantage. In the case of Class-G amps using three or more supplies of each polarity, the inner transistors only require protection for the lowest voltage rails.
Safe Operating Area
As an example, I have included the SOA data from the ON-Semi data sheet [ 5 ] for the MJL21193/4 transistors. These are rated at 200W, 250V and 16A continuous. They are rugged devices, and are well suited for high power amplifiers. The general trend is virtually identical for all bipolar transistors, although the numbers will change to match the device ratings.
As you can see, the voltage at maximum current is quite limited, but surprisingly, it extends slightly beyond the 12.5V limit imposed by the 200W rating. Still, it is well below the 35V limit for the outer transistors described here - at 35V, collector current is limited to around 7A. Remember, that is with a case temperature of 25°C. At higher temperatures, the transistor must be derated by 1.43W / °C. For example, with a case temperature of only 60°C, the transistor must be derated by 50W, reducing maximum dissipation to 150W.
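The derating arithmetic above is simple but frequently skipped, so it is worth spelling out. This is a minimal sketch of the linear derating described in the text, using the quoted 1.43W/°C figure for the MJL21193/4.

```python
# Linear power derating above the 25°C reference case temperature,
# using the 1.43 W/°C figure quoted from the data sheet.

def derated_power(t_case, p_max=200.0, derate_w_per_c=1.43):
    """Maximum allowable dissipation (watts) at a given case temperature."""
    if t_case <= 25.0:
        return p_max
    return max(0.0, p_max - (t_case - 25.0) * derate_w_per_c)

# At a case temperature of 60°C the device loses ~50W of capability,
# leaving roughly 150W, as stated in the text.
print(derated_power(60.0))
```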
Figure 7 - Active Region Safe Operating Area
Quote from the data sheet ... There are two limitations on the power handling ability of a transistor; average junction temperature and secondary breakdown. Safe operating area curves indicate IC – VCE limits of the transistor that must be observed for reliable operation; i.e., the transistor must not be subjected to greater dissipation than the curves indicate. The data of Figure 7 is based on TJ (peak) = 200°C. TC is variable depending on conditions. At high case temperatures, thermal limitations will reduce the power that can be handled to values less than the limitations imposed by second breakdown.
For more information on transistor power dissipation, second breakdown and other limits, please see the SOA article.
One topology used by at least one manufacturer [ 4 ] is shunt or parallel mode, where the outer transistors actually do take over when the inner devices run out of voltage. Whether there is any real advantage is debatable, but of the designs I've seen it was done for a very specific purpose. The way these amps are configured allows the transistor collectors to be mounted directly to an earthed (grounded) heatsink. This minimises the thermal resistance between the transistor case and the heatsink, allowing the best possible device dissipation.

In my opinion, this is probably the most complex way to design a Class-G circuit, and I would not recommend even attempting it. Although the manufacturer does manage to make very reliable amps using this method, I am informed (by someone who has repaired them) that many of the components are highly critical - use a transistor even slightly different from the original, and the amp will suffer parasitic oscillation. For most of the semiconductors, it is stated on the schematic that they must be obtained from the manufacturer ... no substitutions of even the same brand of device.

I've been able to get several schematics for commercial designs, and the series configuration described here is still one of the more popular. As with many things in electronics, there will be as many alternatives as there are designers. Ultimately, it doesn't matter, provided the design has enough output devices to safely handle the worst case abuse to which the amplifier is liable to be subjected. For professional audio, this can amount to a great deal.
Yet another arrangement is to use a switched high voltage supply. The high voltage supply is switched on and off, rather than turning the supply on linearly to maintain 2-3V headroom over the signal. As soon as the threshold is reached (typically half the main supply), the full supply voltage is applied to the output devices, changing the supply voltage from (say) 35V to 70V in a single step. The full voltage remains across the output devices until the output voltage falls below the threshold, at which time the high voltage is turned off again. The output devices are then returned to the 35V supplies. Overall dissipation is potentially slightly lower than the series connection shown above, but commutation noise is almost guaranteed unless the switching is slowed to a reasonably leisurely rate.
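The switched-rail behaviour can be modelled in a few lines. This is only a conceptual sketch using the ±35V/±70V example voltages from the text; a real implementation would also include hysteresis and deliberately slowed switching to limit commutation noise.

```python
# Minimal model of the switched-rail Class-G scheme: the output stage runs
# from the low rail until the output magnitude crosses the threshold
# (typically half the high rail), then the full voltage is applied in a
# single step. Rail values are the 35V/70V example from the text.

def rail_voltage(v_out, low=35.0, high=70.0):
    """Supply rail presented to the output devices for a given output voltage."""
    threshold = high / 2.0        # switching point, typically half the supply
    if abs(v_out) < threshold:
        return low                # normal operation from the low rail
    return high                   # stepped up until the output falls back

print(rail_voltage(10.0), rail_voltage(40.0))
```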
With this arrangement, the inner transistors must handle the entire dissipation of the amplifier. Although average power is reduced compared to Class-B, the peak dissipation remains (more or less) the same - it just doesn't last as long for any given frequency. The greatest benefit is that switching MOSFETs can be used for commutation, and there is no need to use comparatively expensive audio power transistors. The rail supply voltage is shown in Figure 8, as a simple description is not sufficient to allow full understanding of the process involved.

There are undoubtedly other Class-G variants that I've not seen, and these may use other techniques.
Figure 8 - Switched-Rail Class-G Supply Voltage
As noted above in the section about overall efficiency, this scheme has an equivalent efficiency to a conventional dual rail (each polarity) Class-G amplifier. Where it wins is in the outer transistors, which are now switching rather than linear. Consequently, the outer transistor losses are very small, but the peak dissipation of the inner transistors is now increased dramatically.
Because all amplifiers must be designed so that the output transistor safe operating area is not exceeded due to reactive loads or high operating temperature, the inner transistor power ratings must be the same as for a Class-B stage. Because average dissipation is lower, this helps to keep temperatures lower, and this can provide a small advantage. As with all Class-G schemes, the power transformer can also be slightly smaller, because of the lower overall losses.
A similar (at least in some respects) arrangement is called Class-H, and it can be difficult to decide exactly into which camp some amplifiers fall. Class-H is often described as using a 'bootstrap' capacitor that lifts the rails as needed, but cannot maintain them at the full voltage for more than a few cycles. After a short period, the capacitor discharges, and the high voltage supply collapses. Originally, these were used for car audio, and allowed much more power than can normally be expected (about 18W for a BTL (bridge-tied-load) amp operating from 13.8V DC). Being far cheaper than a switchmode power supply, this is a convenient way to get extra power for very little expense. A number of specialised ICs were/are produced for just this purpose.
Because the difference is rather blurred, you may see Class-G amps described as Class-H and vice versa. My preferred terminology is that amps that use a bootstrap circuit or an externally modulated power supply are 'real' Class-H. If the supply is switched or boosted using a separate fixed high voltage supply, then Class-G is the most appropriate description of their topology.

Hitachi is usually credited with the first Class-G amplifier, but from the descriptions I've seen, it actually appears to have been Class-H. The peak power of 400W into 8 ohms was not available continuously, but only for relatively brief periods. This implies that the high voltage rail was produced by bootstrapping a capacitor, rather than a switched rail design. By my definition above, that makes it Class-H, although Hitachi described it at the time (1978) as Class-G.

This level of confusion has never gone away, largely because only classes A, B, C and D are 'officially' recognised. As suggested in the introduction, you can feel free to use whichever term you prefer, because there is no standard.
The diagram below is not a project, and is intended only to show the general configuration of a Class-G amplifier. Both inner and outer transistors use separate drivers. This is one of a great many configurations that have been used, and in that respect can be considered as 'typical' as any other configuration.
Figure 9 - Concept Schematic For Class-G Amplifier
The front end of the amp is quite conventional, and uses a long-tailed pair (Q1, Q2) coupled to a current mirror (Q4, Q5). The LTP is fed from a current source, using Q3. This drives the VAS (voltage amplifier stage, Q6), which is stabilised using a Miller capacitor. The VAS is also supplied using a current source, Q7. The bias servo (Q8) must mount on the main heatsink for the output devices. The driver transistors are connected in the Darlington configuration, because it is less prone to parasitic oscillation than the compound pair that I normally use. For high power amps, one of the most important factors is reliability and stability.
The output devices are shown as a parallel pair, for both inner and outer output transistors. Oddly enough, the outer transistors require more dissipation capability than the inner devices, primarily because of the peak power demands. The average dissipation will also be higher with some programme material, less for others. This is reflected in the choice of two parallel transistors for both the inner and outer devices, but three may be needed in practice.
The resistor values shown are for reference only, and are typical of those that may be used in a working amplifier. While the above circuit has been simulated pretty much exactly as shown, it has not been built and tested. It is a reference design, in that it allows the reader to gain an insight into the complete design. Should there be sufficient interest, a project may be developed for a Class-G amplifier, suitable for operation at the voltages indicated, or possibly a little higher.
As should be fairly obvious, it is not a trivial undertaking, nor would it be a cheap amplifier to build. Further complexity would be involved if the amp were to be rack mounted, since short-circuit protection would need to be added as a minimum. As with most commercial Class-G amplifiers, a fan is essential for each heatsink unless oversize heatsinks are used. Fan(s) should preferably be installed in such a way as to provide cooling for the power transformers as well as the heatsinks. All transistors from Q8 to Q20 need to be mounted on the heatsink. Smaller separate heatsinks must be used for Q6 and Q7.
Class-G amplifiers are not for the faint hearted, as will be apparent from the above. Because of relatively high voltages and considerable complexity, even a trivial mistake during construction can easily generate a cascade of exploding (expensive) parts. Now you all know why I have shied away from offering boards for anything more powerful or complex than the P68 subwoofer amp - it's already more than capable of providing the same power as the Class-G amp shown. Heat dissipation is higher, so it needs more heatsink for continuous operation - a reasonable compromise, since it's not intended in its basic form for continuous duty.
It would seem initially that Class-G can't hope to compete with Class-D (pulse width modulation amplifiers). The latter have a typical efficiency of around 85-90%, and even the best Class-G amp cannot match that. However, Class-D amplifiers are significantly more complex, and because of high frequency switching, PCB layout is critical. In addition, even with the best filter circuits available, there will be some RF (radio frequency) noise emitted. Most Class-D amplifiers also have a frequency response that is only flat for one load impedance. Since speakers are not resistive, the high frequency response can be a gamble.
Because of the potential for RF interference problems, Class-D amps may be avoided by many users. There is no doubt that Class-D amps can deliver excellent fidelity and very little heat even at high power levels, but this doesn't mean that they are universally accepted as the most ideal power amp where high power and almost infinite reliability are needed. Music tour operators also have to consider the life of the equipment, and it's not unknown for 20 year-old amplifiers to be in daily service. With (mostly) normal through-hole parts, little or no surface mount, and no highly specialised ICs, linear amps can be repaired even when 30 years old or more.

Class-D amps nearly all use surface mount devices (SMD), and specialised ICs are essential with most designs to obtain satisfactory performance. When these parts become obsolete, the amplifier must be thrown away - it can no longer be repaired if 'unobtanium' parts fail. Substitutes may exist, but will almost certainly have different pinouts, and perhaps a different SMD case style. This is becoming a real issue for many consumer products - they are increasingly becoming un-repairable, because of SMD and the short lifetime of many of the specialised ICs.

Professional products must stay clear of short lifetime parts whenever possible, because there is a huge difference between the expectations of retail consumers and tour operators and other pro-audio users. To make a pro product with an expected life of less than 10 years is asking for trouble. This includes the ability to service the gear, well past the point where a normal home consumer would have discarded the item for the latest model.
There have been a number of alternative schemes over the years, with possibly the best known being the Carver 'Magnetic Field Amplifier' [ 6 ]. This was a misnomer in most respects, but it used a TRIAC phase-cut circuit before the power transformer, which was much smaller than it would have been otherwise. As more power was demanded, the TRIAC turned on earlier, boosting the voltage and current available. The TRIAC circuit was (pretty much literally) a lamp dimmer on steroids, as the basic TRIAC trigger circuit used the same principle as a household lamp dimmer. The power amplifier used three supply rails, ±25V, ±50V and ±75V.
There is no doubt that the amp was innovative, produced prodigious power and needed little heatsinking for normal hi-fi usage. However, a continuous full-power test would cause the transformer to overheat and smoke fairly quickly. The amp was never designed to be able to produce full power on a continuous basis, so people who thought they could use it for high-power sound reinforcement applications quickly discovered the limitations. Although it's not stated, the Carver amp didn't increase the supply voltage under load, but regulated it using the TRIAC. The transformer was almost certainly wound with fewer turns than necessary, so with the TRIAC fully on the magnetising current would be much greater than normal. This only happened with high current output to accommodate transients or short but loud music passages.

Similar approaches have (apparently) been tried using switchmode power supplies, but I don't have any more information. A potential limitation is that a transient may not last long enough to boost the supply voltage, and may result in transient clipping. While there are ways that this could be overcome, they haven't been adopted as far as I'm aware.
+ + +![]() | + + + + + + + |
Elliott Sound Products | +Compound Vs. Darlington |
There are many reasons that designers need very high gain transistors, and although they are available in a single package, it is generally better to build your own using discrete devices. This gives much greater flexibility, and allows you to create configurations that are optimised for the specific task required.
To create a high gain transistor, it is a matter of connecting two or more transistors such that the collector current of the first is amplified by the second. Thus, if two devices have a current gain (β or hFE) of 100, the two devices connected together can give an overall current gain of as much as 10,000 - this will be looked at in greater detail later.

For a variety of reasons, current gains of more than 2,000 or thereabouts are rarely achieved in practice, but a β of well over 1,000 is easily achieved - even for high current configurations. These discrete compounded transistors are found in power supply and power amplifier designs, as solenoid drivers and general purpose high current switches or other linear applications.
For many applications, I have always considered the Sziklai (aka 'compound') pair as the preferred option, but both that and the Darlington pair are essential to the development of modern linear ICs, power supplies and power amplifiers (to name but three applications). Although the Darlington pair was discovered/invented first, in this article, I shall break with tradition and place the less well known configuration first in all diagrams. I do this because it is a better arrangement IMO, having greater linearity (less distortion if used for audio), and far greater thermal stability than the older and better known Darlington Pair. The configuration I refer to is called a Sziklai [ 1 ] or compound pair, and this combination has also been referred to in the past as a 'Super Transistor'. The differences between the two different topologies are often rather subtle, but in most applications the compound pair gives better performance.

The Darlington pair was invented in 1953 by Sidney Darlington (1906 – 1997). The Sziklai pair was invented by George Sziklai, a Hungarian engineer who emigrated to the US (1909 - 1998).
The compound/Sziklai pair is a configuration of two bipolar transistors of opposite polarities, so will always consist of one NPN and one PNP transistor. The configuration is named after its Hungarian born inventor, George Sziklai. It is also sometimes known as a CFP (complementary feedback pair). The composite device takes the polarity of the driver transistor, so if a compound pair is made with an NPN driver and PNP output device, the overall device behaves like an NPN transistor. This will become clearer as we progress.

A Darlington pair always consists of two transistors of the same polarity. An NPN Darlington will have two NPN transistors connected as shown below, and a PNP device will use two PNP transistors. The Darlington configuration was invented by Bell Laboratories engineer Sidney Darlington in 1953. He patented the idea of having two or three transistors on a single chip, sharing a collector.
Figure 1 - Basic Configurations Of Devices
As you can see from the above, the compound pair polarity is determined by the driver transistor, so an NPN driver with PNP output transistor behaves like an NPN transistor and vice versa. It is a little disconcerting to see that the emitter of the power transistor is really the collector of the compound pair, and this subtle distinction has trapped a few designers over the years. Although it is not immediately obvious, the gain of the two different topologies is slightly different, because the compound pair has a small amount of in-built negative feedback which reduces the gain.
For the sake of convenience, let's assume that the driver transistors in both configurations have a gain (β) of 10, and the output transistors have a gain of 5. For the compound pair, 1mA of base current will cause 10mA of collector current in Q1, and (ignoring the resistor R1), this will provide 10mA base current to Q2. This will become 50mA collector current for Q2. The total overall collector current is 60mA, and the emitter current is 66mA (it includes the base current of Q1).
Looking at the Darlington pair, with 1mA of base current into Q1, the collector current of Q1 is 10mA, and the base current for Q2 is 11mA, because the base current is included in the current flowing from the emitter. Collector current of Q2 is therefore 55mA, and the total collector current is the sum of both transistors (the collectors are tied together). Collector current is therefore 65mA and emitter current is 66mA (Q1's base current is added).
Therefore, the β (aka hFE) of the compound pair is 60 and that of the Darlington pair is 65. This relationship is maintained regardless of the actual gains of the two transistors (remember that gains of 10 and 5 were used for convenience only). Note that when R1 is included the results will be different, and 'real world' parameter spread means that measured results will be very different from the value calculated. The actual formulae are ...
Compound Pair β = βQ1 × βQ2 + βQ1
Darlington Pair β = βQ1 × βQ2 + βQ1 + βQ2
The small gain difference is immaterial and normal component variations will be far more significant. Likewise, a temperature change of only a few degrees will completely override any measurable gain difference. For all practical purposes, the total gain for either configuration is approximately ...
β = βQ1 × βQ2
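The exact formulas can be checked against the worked example above (driver β of 10, output β of 5). This is just the arithmetic from the text, written out:

```python
# Gain formulas for the two composite-transistor configurations,
# checked against the worked example in the text (beta1=10, beta2=5).

def sziklai_beta(b1, b2):
    """Compound (Sziklai) pair: driver collector current feeds the output base."""
    return b1 * b2 + b1

def darlington_beta(b1, b2):
    """Darlington pair: driver emitter current (base + collector) feeds Q2."""
    return b1 * b2 + b1 + b2

def approx_beta(b1, b2):
    """The approximation valid for either configuration when gains are high."""
    return b1 * b2

print(sziklai_beta(10, 5), darlington_beta(10, 5))   # 60 and 65, as in the text
```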
To obtain the maximum possible gain it's important to ensure that the driver transistor for either circuit has enough collector current to ensure acceptable hFE. When run at very low collector current, most transistors will have a gain that's well below the quoted figure in the datasheet. For example, a BC549 has a 'typical' gain of 520 at 2mA, but this falls to 270 at 10µA. It falls further as collector current is reduced below 10µA. The base-emitter resistor (R1, shown in each of the configurations in Figure 1) should be sized to ensure that the drive transistor has a collector current that's above the minimum. The resistor also helps to ensure a faster turn-off and minimises leakage current.
For example, assume that Q1 and Q2 have a gain of 500 at currents above 100µA. If Q2 has a collector current of 10mA, its base current will be around 20µA. To ensure Q1's gain is acceptable, it needs to operate at a current of at least 100µA. R1 is therefore required to pass around 80µA, so needs to be about 8.2k (assuming 0.7V base-emitter voltage). This is just Ohm's law, and can be calculated for any combination of device gains for Q1 and Q2, whether wired as a Darlington or Sziklai pair.
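The R1 sizing above is easily turned into a small helper. This sketch follows the worked example exactly (β of 500, 10mA output collector current, 100µA minimum driver current, 0.7V base-emitter drop); in practice you would round down to the nearest standard value, as the text does with 8.2k.

```python
# Sizing R1 so the driver transistor carries enough collector current
# to stay in a usable gain region (per the worked example in the text).

def r1_for_driver_current(ic2, beta2, ic1_min, vbe=0.7):
    """Base-emitter resistor that sinks the surplus driver collector current."""
    ib2 = ic2 / beta2          # base current demanded by the output device Q2
    i_r1 = ic1_min - ib2       # remaining current that R1 must carry
    return vbe / i_r1          # Ohm's law

r1 = r1_for_driver_current(ic2=10e-3, beta2=500, ic1_min=100e-6)
print(round(r1))               # 8750 ohms; 8.2k is the nearest standard value below
```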
Apart from the negligible gain reduction, the only real disadvantage of the compound pair configuration over the Darlington Pair is the saturation voltage. This is important for high current switching applications, because a higher saturation voltage means a greater thermal dissipation. This can make the difference between needing a heatsink or not, or needing a bigger heatsink than would otherwise be necessary. There are many different ways around this problem of course, but such a discussion is outside the scope of this article.
Note: for all the following examples, generic transistors from the SIMetrix [ 2 ] simulator are used. NPN devices are 2N2222 and PNP are 2N2904. These are both nominally rated for around 625mW dissipation and have a claimed minimum gain of 100, although this varies widely (as do all transistors) and is dependent on temperature and current.
The compound pair has a slightly higher saturation voltage than a Darlington pair, as shown in Figure 2. This is actually somewhat counter-intuitive, and you may find descriptions that claim that the opposite is true. The difference is not great - 765mV for the Darlington pair and 931mV for the compound pair as shown below, but that's enough to make a significant difference in a high current switch. The input signal is a pulse waveform, with a minimum value of 0V and a maximum of 12V. Base current is therefore around 1.14mA for the compound pair, and 1.05mA for the Darlington. The reason for the small difference is explained below.
In the case above, the power dissipation in the compound pair is 52mW and 44mW for the Darlington. The difference is trivial here, but becomes much more important as current increases. When the current is 10 times or 100 times as great as the ~120mA used in this example it is easy to see that the dissipation will become very high. To a significant extent, this is no longer a problem in modern high current switches, because MOSFETs or IGBTs (insulated gate bipolar transistors) are now used for most serious switching applications.
Figure 2 - Switching Saturation Voltage Test
One area where the compound pair wins easily is the required turn-on voltage. The compound pair needs only 625mV vs. 1.36V for the Darlington pair (this is subtracted from the base supply voltage to calculate the current through the 10k base resistors). Again, this might not seem significant, but there will always be applications in electronics where a low turn-on voltage is advantageous because of other circuit constraints. While this is rarely a major issue with modern switching systems, it influenced many earlier designs before MOSFETs and IGBTs were available. The Sziklai pair (as simulated) has a slightly greater turn-off time than the equivalent Darlington (1.2µs vs. 805ns respectively), and while this is rarely an issue in low-level circuits, it becomes important for high speed power circuits. Switching times can be reduced dramatically by using a lower base drive resistor (and a corresponding reduced drive voltage).
So far there is really not much between the two circuits, but it seems that overall the Darlington has a slight advantage. For this reason, Darlington connected transistors are still the most common for any form of switching that does not warrant the use of MOSFETs or IGBTs. It is a very useful circuit, and there are many integrated Darlington transistors available (the TIP141 is one of the best known examples). As always, the most appropriate topology has to be chosen to suit the application, and there is no single 'best' way to perform the various tasks needed in electronic circuitry.
Both Darlington and Sziklai pairs are used in linear circuits, and overall Darlington pairs are the most common. Readers of The Audio Pages will have noticed that almost without exception, I have used compound pairs for power amplifier output stages. This is a relatively uncommon approach, but there are good reasons for this choice. I used the compound output stage in the second amplifier I ever designed, and have continued to use it ever since.
It was determined and demonstrated long ago [3, 4, 5] that the compound pair has greater linearity than the Darlington pair, and although this information seems to have been ignored by most people for a very long time, it is still true. One of the interesting things about facts is that they don't go away, even if ignored.
Figure 3 shows a pair of simple emitter followers, one using the compound pair and the other a Darlington. This is a fairly easy job for any transistor circuit, and one would not expect a significant difference in a circuit that has almost 100% degenerative feedback. The input signal is a 1V peak (707mV RMS) sinewave, with a 6V DC bias to place the output voltage at the approximate half-supply voltage.
Figure 3 - Compound And Darlington Emitter Followers
The first thing you notice is that the compound pair has a higher output voltage - it's 99.5% of the input voltage, vs. the Darlington pair which only manages 98.7%. Admittedly, this is hardly a vast difference, but it is notable nonetheless.
Of more interest is the distortion contributed by the two configurations, and this is demonstrated below. Quite obviously, the compound pair (green trace) has fewer harmonics above the -120dB noise floor, and they are all at a lower level - by 20dB or more!
Figure 4 - Distortion Performance of Compound And Darlington Emitter Followers
An FFT (Fast Fourier Transform) of the output waveform lets us see the harmonic structure of the signal. Simulators have a real advantage here because they can generate perfect waveforms (zero distortion) and have infinite dynamic range. Any harmonic that's more than 120dB below the fundamental is not only buried in the noise floor of even 24-bit digital systems, but is well below audibility for any system.
The THD (Total Harmonic Distortion) figures are a good indicator of linearity. A perfect system contributes no distortion at all. As you can see, the Darlington pair has three times as much distortion as the compound pair. Both figures are excellent and well below audibility, but remember that every stage of a system contributes some distortion, so keeping overall linearity as high as possible is important.

As I have noted in many of the articles on this site, I consider the THD of an amplifier to be an important measurement, not necessarily because we can hear low levels of distortion, but because it gives a good indication of overall linearity. Any non-linearity causes intermodulation distortion (IMD), and it is IMD that is almost always considered the most objectionable.

The so-called TID (Transient Intermodulation Distortion) is largely a crock - there is no evidence whatsoever that any competent amplifier fed with real music signals generates TID. This was probably one of the most elaborate hoaxes the audio community has seen so far, largely because it was widely reported and came from seemingly credible audio designers. TID is real, but only if the amplifier is subjected to test signals that never occur in any recorded or live programme material.
In some classes of electronic equipment, thermal stability never needs to be considered. There are many designs where even quite radical changes to base-emitter voltages (for example) are simply and easily compensated by a feedback network, so they never cause a problem. Of course, this assumes that adequate heatsinking is provided so that transistors remain within their safe operating areas.

For other designs such as push-pull power amplifiers, thermal stability is paramount. Depending on the design and the designer, the thermal feedback circuit can be quite complex and hard to get exactly right, although it usually looks quite simple in the schematic. Thermal runaway is invariably caused by insufficient attention to the thermal characteristics of the output stage. For many industrial applications this is easily solved by operating the output stage with no bias at all, thus preventing any possible issues.

However, this is usually not feasible in an audio circuit, because zero bias output stages cause crossover distortion (sometimes called 'notch' distortion). This is very audible - especially with low level signals - and was part of the reason that many transistor amps got a bad name when they were first released onto the market. They might have measured better than their valve counterparts, but only because the measurement was not done correctly at the time, or the person taking the measurement failed to notice where the problem(s) lay. A great many people hated the sound of many early transistor amps, but it took a while before the reason was understood.
Figure 5 - Bias Stability Test Circuits
A transistor's gain varies with temperature, and when the temperature increases, so too does the gain. This temperature dependency is maintained up to temperatures that will cause device failure. In addition, the base-emitter voltage decreases by approximately 2mV / °C, so some means of stabilising bias current is mandatory.
In a compound pair, the influence of the output device is considerably less than that of the driver. There is some effect as the output transistor gets hotter, but it is considerably smaller. The primary device that determines the bias current is the driver transistor, and it is much easier to maintain a reasonably constant temperature for a transistor that dissipates comparatively little power. The overall thermal sensitivity of the compound pair is significantly better than that of a Darlington pair, and the power transistor sensitivity is far lower.

In contrast, a Darlington device is highly dependent on the base-emitter voltages of two cascaded junctions, so the effect is doubled. If both the driver and power transistor get hot, the current increases markedly, and is much harder to control. This is compounded by the fact that most amplifiers using a Darlington output stage have the driver and power transistors mounted on the same heatsink, along with the bias servo transistor. In general, this may be a mistake regardless of output stage topology, as shown below.
Q1, Q3 (Driver) | Q2, Q4 (Output) | Compound Pair Total Current | Darlington Pair Total Current
25°C | 25°C | 41 mA | 41 mA
75°C | 25°C | 123 mA | 96 mA
25°C | 75°C | 44 mA | 87 mA
75°C | 75°C | 126 mA | 148 mA
Table 1 - Total Quiescent Current vs. Transistor Temperature
The temperature dependence of the two circuits in Figure 5 is shown in Table 1. Because it is much easier to keep the driver transistors at a consistent temperature, it is apparent that it will be far easier to maintain a stable bias current using the compound pair than can ever be the case with a Darlington pair. This has been proven in practice - none of my original designs presented on this website has an issue with thermal stability, and all bipolar designs use the compound pair output stage. The bias servo transistor must be in thermal contact with the driver transistor(s) in a compound pair output stage. Deciding on the optimum location is harder when a Darlington output stage is used. Because there is always considerable thermal lag between the die and heatsink temperatures, placing the bias servo on the heatsink is not as effective as it should be (at least in the short term).
Note that the figures shown in the table show far less variation than you will experience in a typical power amplifier. This is because of the 1 ohm emitter resistors, which provide a far greater stabilising influence than the more common 0.1 or 0.22 ohm resistors typically used. The 1 ohm resistors were used for consistency and convenience for the purposes of explanation, and were not selected to be specifically representative of reality. There are too many variables in a real amplifier, so the idea was to show the trend, not absolute values.
There have been (spurious and mostly nonsensical, in my opinion) claims that thermal effects cause distortion in amplifiers and output stages [6]. While this does appear to have some effect in an integrated amplifier (power 'opamp' ICs for example), it should be negligible in any discrete power stage, and is likely to remain well below audibility in any design. Assuming that there is some substance to the claims as applied to discrete designs, it is likely that there will be smaller and less problematic thermal variations in the driver stages than in the output devices. In general, I consider claims about thermal distortion to be 99% complete nonsense, and contrary to the reference noted above, they explain nothing of any real importance.
Because of the known problems with thermal stability, some manufacturers have released power transistors with integral (but isolated) diodes to form part of the bias stabilisation network. Personally, I think this is a stupid idea - the constructor ends up with a completely non-standard transistor, and when (not if!) it is discontinued there will be no replacement, so an entire amplifier may have to be scrapped if a power transistor fails and a (genuine) replacement device can't be located. The alternative is a re-design, which may not be possible (depending on PCB layout, etc.).

In the early days of silicon transistors it was difficult to make decent PNP power transistors, so the compound pair was used in a great many amplifier output stages, leading to the then common 'quasi-complementary symmetry' output stage. One of the early amp designs described in the ESP projects section uses this output stage [7]. Over the years, there have been many schemes to improve on the basic quasi-complementary stage, but in reality nothing really needs to be done.
Figure 6 - Quasi-Complementary Symmetry Output Stage
At various stages, various people have added diodes to the compound pair so that both sides of the push-pull stage have the same bias voltage and turn on in a more similar manner. While this can make a big difference, these days it's not worth the effort. Some attempts to 'improve' the circuit only managed to make it worse. These days, almost no-one makes quasi-complementary symmetry amps, because excellent NPN and PNP pairs are now readily available and it would be silly not to make an amplifier that's fully complementary.

The scheme has been described in some detail here because it is an important milestone in the evolution of modern amplifiers, not because it is currently suggested for a new design. Having said that, there are undoubtedly situations that could arise that make the general scheme potentially useful. For this reason alone, it is worth remembering - one never knows when such information will come in handy.
The three common power output stages are shown in Figure 7. There are obviously others, but they are generally based on one or another of these. To make things more interesting, there are also variations that use a combination of compound pair and Darlington configurations, but these will not be covered here.
Figure 7 - Three Common Output Stages
The oldest of the three is A, the quasi-complementary symmetry stage. As noted above, this was common before true complements to high-power NPN transistors existed. Once at least passable PNP power devices were available (albeit at higher cost), full complementary symmetry (B) using Darlington pairs became common. This type of output stage has remained the most common for many, many years. When appropriately biased, all these stages have fairly good distortion performance, with the Sziklai pair being the best, and quasi-complementary the worst. With the exact circuits shown above, all have less than 1% THD when loaded with 8 ohms (Sziklai - 0.05%, Darlington - 0.23%, quasi-complementary - 0.65%; these results were simulated).
The least common is that shown in C - full complementary symmetry using compound pairs. I first saw this used in a little 30W(?) amplifier that, if I recall correctly (and it's entirely possible that I don't), was designed by Sir Clive Sinclair [8] in the early 1970s, and thought it was extremely clever. Try as I might, I've never been able to find the schematic on-line though. I promptly built an amp to test the output stage, and was pleasantly surprised in all respects. With few exceptions, every bipolar output amp I have designed since then has used this configuration, because I found its linearity and thermal stability to be markedly superior to the traditional Darlington stage.

For reasons that I have always found obscure and somewhat mysterious, I've found that every amp I've designed using this configuration has parasitic oscillations on the negative half, and the addition of a small capacitor has been necessary every time. Strangely, I've never had an issue with parasitic oscillation with quasi-complementary amps (although many years have passed since I built one), even though the negative side uses an identical compound pair. It makes no difference which side is driven (NPN or PNP voltage amplifier transistor) and which has the current source, nor does it matter if the current source is active or bootstrapped. Only the negative side ever shows any sign of parasitic instability, and a small cap (typically 220pF) installed as shown stops it completely.

The final configuration is a fairly clever modification of the compound pair. Personally (the obvious cleverness notwithstanding), I consider it an abomination - it has high distortion, extreme thermal sensitivity and very poor transient overload recovery characteristics. Despite this, there have been some commercial amps using the circuit shown, and there are even some who consider the sound to be 'better' than that of conventional output stages. This is not an argument I will entertain.
Figure 8 - Output Stage With Gain
As shown above, the output stage has a nominal gain of about 2, based on the 100 ohm resistors from the output to the driver emitters and then to earth. In reality, gain will be considerably less, because the stage has a relatively high output impedance (it's around 4 ohms as shown). This is in contrast to the other configurations shown above, which all have a low output impedance - typically 1 ohm as set by the emitter resistors in these examples.
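As a rough illustration of why the loaded gain falls well short of the nominal figure of 2, the stage can be modelled as an ideal amplifier with the stated ~4 ohm output impedance in series with the load. This is a simplified voltage-divider model, not a simulation of the actual circuit:

```python
# Simplified model: nominal gain, then the stage's ~4 ohm output
# impedance (as stated in the text) working against the load.
def loaded_gain(nominal_gain, z_out, z_load):
    """Effective gain into a load, voltage-divider approximation."""
    return nominal_gain * z_load / (z_load + z_out)

print(loaded_gain(2.0, 4.0, 8.0))   # 8 ohm load: gain falls to ~1.33
print(loaded_gain(2.0, 4.0, 4.0))   # 4 ohm load: gain falls to 1.0
```

The higher the output impedance relative to the load, the further the real gain falls below the nominal value set by the resistors.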
Thermal stability of these stages is always a nightmare - I've only ever worked on a couple (and built one to test the idea many, many years ago), and found all of them to be highly thermally unstable. It is possible to design a bias servo (based on Q1) that will keep the stage from thermal runaway, but it's by no means easy to do, and the results are never completely satisfactory. Quiescent current typically varies by up to three times the nominal value, and sometimes seems to have a mind of its own. Excellent heatsinking is mandatory!

Unlike the complementary symmetry stages (including quasi-complementary), open loop distortion performance is dreadful. Even operating at low gain as shown, distortion is close to 4% with over 150mA of output device quiescent current - almost 5 times as great as that from a Darlington pair output stage with roughly similar bias current. If readers have twigged that I'm not fond of this arrangement, then I've done my job.
It's also worth noting the worst case dissipation in the 100 ohm resistors to earth (or other value as determined by the design). It can be a great deal higher than expected - in the case of the amp shown and with ±25V supplies, these resistors can dissipate over 3W each if the amp is driven into hard clipping. Under normal operation with music, dissipation is usually minimal. This arrangement requires extraordinary care in design, because there are so many things to get wrong.
However, like everything else in electronics, it may be ideal for some obscure application. Because of the gain structure, it is capable of fully saturating the output transistors in clipping. This means that if you happen to need a full level (rail to rail) squarewave output but also require the amp to be capable of linear operation, then this is the ideal circuit. I have found one (and only ever one) application for this in a client project, and in this rather odd role it performs perfectly.

There is only one instance on the ESP site where a compound pair is used for low level amplification, and that's for a microphone preamp [9]. Figure 9 shows two highly simplified versions, one using compound pairs and the other using Darlington pairs for comparison. The complete circuit is a long-tailed pair in both cases, and one might expect that the two versions would have roughly similar performance.
Figure 9 - Long-Tailed Pair Circuits
This is an area where the term 'super transistor' really comes into its own, because the compound pair version shown above has over 5 times as much gain in an otherwise identical circuit using Darlington pairs (the voltage gain is 296 vs. 58). With an output of 296mV (compound/ Sziklai) and 58mV (Darlington), the distortion is around 0.12% for both circuits.
In reality, there are not many requirements for the high gain and linearity of the compound pair in discrete circuits, but it is extremely common in ICs, where it is still difficult to fabricate high performance PNP transistors on the silicon substrate. Accordingly, many of the PNP transistors in a great many opamp IC circuits are actually compound (Sziklai) pairs. This is especially true for 'power opamps' - IC power amplifiers.

A common comment of mine is that all electronic devices are made from compromises, and the circuits described here are just that. The Darlington pair is a very popular topology, and there are many transistors that are in fact a basic integrated circuit - this is certainly the case with Darlington transistors. Built into the standard 3-lead package are two transistors, one or two resistors and sometimes a power diode as well.

As far as I'm aware, no-one has ever integrated a compound pair as a single transistor. I don't know why it's not been done, but there probably isn't much point. The compound pair works best when the driver is thermally separated from the power transistor, and this cannot be done if the two are on the same piece of silicon (or even just in the same encapsulation).

As far as making audio power amplifiers is concerned, both configurations work very well if properly designed, and there is no reason to believe that there will be any audible difference in a properly conducted double-blind test. It goes without saying that non-blind tests have consistently 'proven' that one or the other configuration is 'better' - which one depends entirely on the prejudices of the listener(s).

Many highly acclaimed amps have been made using both topologies, so audibility claims are obviously frivolous at best.

It is hoped that this article has provided some additional (and useful) information for anyone wanting to know more about these popular circuits. It's fair to say that without both the Darlington and compound/Sziklai pair, the proliferation of high quality transistor power amplifiers would have been severely curtailed.
References
1 - Sziklai Pair - Wikipedia, the free encyclopedia
2 - Analogue Circuit Simulation Software from SIMetrix Technologies
3 - The Audio Power Interface, Douglas Self, Electronics World, September 1997, p717
4 - Power Amplifier Design Guidelines - Output Stages, Rod Elliott
5 - Intermodulation at the amplifier-loudspeaker interface, Matti Otala and Jorma Lammasneimi, Wireless World, December 1980, p42
6 - Memory Distortion - Part 1: Theory
7 - Project 12A - El-Cheapo, presented in more or less original form
8 - Clive Sinclair (Wikipedia)
9 - Project 66 - Low Noise Balanced Mic Preamp, Phil Allison & Rod Elliott
Elliott Sound Products - Coax Cable Introduction
Contents
Introduction
1 - Impedance
2 - Velocity Factor
3 - Coaxial Connectors
4 - Impedance Matching & Wavelength
5 - Cable Reactance Problems
6 - Impedance Conversion
7 - Coax And Audio
Conclusions
References
The coaxial cable was invented in 1929, but no-one could have known how popular it would become. Coax (as it's commonly known) is often thought of as being for radio frequency applications, but in reality a great many cables used in audio are also coaxial. The term itself simply means that the conductors share a common axis. There is a centre conductor, surrounded by an insulator, which in turn is wrapped in the second conductor, called the shield. In almost all cases, there is a final outer insulating sheath over the shield to protect it from damage and corrosion.
In audio, we usually refer to such cables as being 'shielded', because it's very common to have two inner conductors (balanced microphone cables for example), and the term 'coax' doesn't really apply because the two inner conductors don't share a common axis as such. However, they are twisted together, and the twist is one of the reasons that these cables can reject noise that's external to the cable.

There are many RF (radio frequency) coax cables that can be used for audio, although there are many others that are completely unsuitable for a variety of reasons. For example, some use a single core for the centre conductor, often copper plated steel. This is fine for RF, because the skin effect means that the high frequency signal will be concentrated at the outer surface. Since the plating is copper, it has low resistance. The steel inner core gives the cable added mechanical strength, but it is not very flexible and cannot be bent to a small radius. If constantly flexed the centre conductor will break, so this type of cable is only suitable for fixed installations. However, it can be used for fixed audio installations if desired.

Other RF coax cables use a stranded inner conductor, most commonly 7 strands of copper, copper plated steel, tinned copper, silver plated steel or sometimes copper plated aluminium. Most RF coax cables are designated with an 'RG' (radio guide) number, such as RG-58, RG-174, etc. While these designators are usually a passable indication of the specification of the cable, they are no longer completely reliable. If your application is critical, it's advisable to ensure that the cable meets the required standards - simply selecting cable based solely on the RG number doesn't guarantee this.
In some catalogues you'll see cables referred to as (for example) 'RG-59 Type'. The word 'type' in this context means that the cable can be considered to have the basic characteristics of the cable that normally bears the number indicated, but there will be differences that may or may not be apparent. The impedance and outside diameter will usually be as expected, but many other things can be different, including the type of central conductor (solid or stranded for example).

Figure 1 shows the basic construction of a typical RF coaxial cable. Each of the sections shown can be changed depending on the intended usage. Many cables don't use a foil shield, and it usually cannot be used effectively for any cable that is intended to be flexible. In some cases, a metallised plastic is used instead of aluminium foil, and that is more resistant to damage due to cable movement. The dielectric is the insulation around the centre conductor, and is a critical part of the cable.

At high frequencies, signals do not travel through a coaxial cable in the way you might imagine. Once the length of the cable exceeds a significant fraction of the signal wavelength, coax acts as a transmission line. Rather than the movement of electrons, the signal is transferred as an electromagnetic wave. At much higher frequencies (microwaves), even coaxial cable may not be used - the signal is 'transmitted' along a hollow pipe called a waveguide. There is no centre conductor, and the waveguide's purpose is to contain the electromagnetic wave to (usually, but not always) one dimension - along the length of the waveguide from the source to the destination. An example is the waveguide used to carry the energy from the magnetron to the cooking chamber of a microwave oven.
With RF, you also need to be aware of the skin effect. This effect causes high frequency signals to concentrate on the outside of the conductor, and the inner section becomes (almost) irrelevant. Some HF coax for fixed installations will use an inner copper tube rather than twisted wires, because the centre part of the conductor serves no real purpose and isn't required. This saves weight and cost.

The skin effect can be circumvented by using Litz wire - multiple wires twisted together, but insulated from each other so the signal can't cross from one wire to another. If you wish to know more about the skin effect, look it up, because it isn't covered in detail here. Note that at audio frequencies, the skin effect has a very small overall effect on the conductivity of a cable, and it can safely be ignored. Skin depth is defined as the distance below the surface where the current density has fallen to 37% of its value at the surface. At frequencies up to 20kHz the effect can be measured, but you will rarely hear the difference (despite claims to the contrary by snake-oil vendors).
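As a quick sanity check on that claim, the standard skin-depth formula can be evaluated for copper at the top of the audio band. The copper resistivity figure is an assumption here; the article quotes only the 37% (1/e) definition:

```python
import math

MU0 = 4 * math.pi * 1e-7   # permeability of free space, H/m
RHO_COPPER = 1.68e-8       # resistivity of copper, ohm.m (assumed value)

def skin_depth(freq_hz, rho=RHO_COPPER, mu_r=1.0):
    """Depth at which current density falls to 1/e (~37%) of the surface value."""
    return math.sqrt(rho / (math.pi * freq_hz * mu_r * MU0))

# At 20kHz the skin depth in copper is a little under 0.5mm, so only
# quite thick conductors are affected at all at audio frequencies.
print(f"{skin_depth(20e3) * 1e3:.2f} mm")
```

With a skin depth of nearly half a millimetre at 20kHz, the current still uses almost the full cross-section of typical audio cable conductors, which is why the effect can safely be ignored.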
It's also important to understand that any cable has a characteristic impedance, not just coaxial types. Figure-8 ('zip' cable), twisted pair cables used for data transmission (Cat-5 for example), and even the aerial mains distribution cables (i.e. those on poles) - they all have an impedance that's based on the conductor size and spacing. At low frequencies (50 or 60Hz) the impedance is not a limitation unless the transmission lines are very long compared to wavelength. Consider that the wavelength at 50Hz is 4,800km, or 4,000km for 60Hz (assuming a velocity factor of 0.8 - see below for more on that topic). Mains power transmission is a topic unto itself (and is very complex), and isn't considered here.
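The mains-frequency wavelengths quoted above are easy to verify, using the stated velocity factor of 0.8:

```python
# Wavelength = (speed of light x velocity factor) / frequency
C_LIGHT = 3e8   # speed of light in m/s (rounded, matching the figures above)

def wavelength_km(freq_hz, velocity_factor=0.8):
    """Signal wavelength in kilometres for a given velocity factor."""
    return C_LIGHT * velocity_factor / freq_hz / 1e3

print(wavelength_km(50))   # 4800 km at 50 Hz
print(wavelength_km(60))   # 4000 km at 60 Hz
```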
Another term that you will see along with 'coax' is 'transmission line'. All coax is a transmission line (at least at some frequency), but not all transmission lines are coaxial. A single PCB trace and a ground plane produce a transmission line, as do twisted pair cables and 'Figure-8' aka 'zip' cable. Most have no interactions at audio frequencies other than their intrinsic capacitance. At higher frequencies things can be very different, as discussed below.
Radio frequency coax always has a designated impedance, most commonly 50Ω or 75Ω. 50Ω coax is pretty much the standard for radio transmitters and receivers, and for laboratory equipment (for example, almost all oscilloscopes are fitted with 50Ω BNC connectors). 50Ω coax matches the impedance of a quarter-wave 'monopole' (1/4 wave 'whip' or ground-plane) antenna. The use of 50Ω coax is indicated anywhere power needs to be transmitted, so most radio and TV broadcast systems will be 50Ω, as will mobile phone (cell phone) repeaters, CB and ham radio, Wi-Fi, etc.

75Ω coax is used for unbalanced TV antenna connections, satellite TV receiver systems, cable TV, broadband (cable) internet, video and S/PDIF digital audio. 75Ω is also a reasonable match for the impedance of a 1/2 wave dipole antenna (~70Ω) as used for many TV antennas. A transformer (balun) is needed to match coax to a 1/2 wave folded dipole antenna, as these have a nominal impedance of 300Ω (actually about 280Ω). 75Ω coax usually has slightly lower losses than 50Ω cables at higher radio frequencies.
Unlike cable used for mains or other power transfer, the impedance of a coaxial cable is not affected by its length. A 50Ω coax has an impedance of 50Ω whether it's one metre or one kilometre long. This doesn't mean that there are no losses though, and most cables are rated for their attenuation in dB per unit length. This varies with frequency, and all cables exhibit higher losses as the frequency increases. Power handling is affected in the same way, so the maximum power that can be transmitted is reduced with increasing frequency.
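Because attenuation is quoted per unit length, total loss scales linearly in dB with cable length, and the power reaching the far end falls exponentially. A small sketch (the 0.1 dB/m figure is hypothetical, purely for illustration, not from any datasheet):

```python
# Total attenuation is simply dB-per-metre times length.
def total_loss_db(db_per_metre, length_m):
    return db_per_metre * length_m

def power_delivered(p_in_watts, loss_db):
    """Power remaining after the stated cable loss."""
    return p_in_watts * 10 ** (-loss_db / 10)

loss = total_loss_db(0.1, 30)        # hypothetical 0.1 dB/m over a 30 m run
print(loss)                          # 3.0 dB
print(power_delivered(100.0, loss))  # ~50 W of a 100 W input reaches the far end
```

A 3dB loss means half the transmitted power is dissipated in the cable, which is why loss per unit length matters so much for long transmitter feeds.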
Don't expect to be able to measure impedance with a multimeter or similar, because the cable impedance is a complex mixture of primarily capacitance and inductance, with resistance (as measured by an ohmmeter) having an almost insignificant effect. Of course, this doesn't mean that resistance doesn't matter, because it does. This is why many 75Ω coax cables use a copper-clad steel core for the centre conductor. The steel is cheap but has high resistance, and the skin effect means that the HF signal will travel in the outer layer only - which is copper, for improved conductivity.

The characteristic impedance of a cable is determined by a number of interdependent factors, including the ...

If the last two factors are known, the characteristic impedance (Z0) of a cable can be calculated by ...
Z0 = √( L / C ) ohms

Where ...
L = inductance in henrys
C = capacitance in farads
As an example, if you have a cable that measures 100pF/ metre, it must have an inductance of 250nH/ metre if it's rated at 50Ω. This is easily verified by either rearranging the formula, or using those two values in the formula as shown. If you do that, the cable's impedance will work out to be 50Ω. You will also find that changing the length (and therefore the capacitance and inductance proportionally) doesn't affect the outcome - the impedance remains the same. 100 metres of the same cable will have a capacitance of 10nF and an inductance of 25µH, but the impedance is still 50Ω.
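The worked example above is easy to confirm, including the fact that the cable length cancels out:

```python
import math

def char_impedance(inductance_h, capacitance_f):
    """Characteristic impedance Z0 = sqrt(L / C)."""
    return math.sqrt(inductance_h / capacitance_f)

# 1 metre of cable at 100 pF/m and 250 nH/m ...
print(char_impedance(250e-9, 100e-12))   # 50.0 ohms

# ... and 100 metres of the same cable (10 nF, 25 uH):
print(char_impedance(25e-6, 10e-9))      # still 50.0 ohms
```

Both L and C scale with length, so their ratio (and therefore Z0) is unchanged.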
It's also worth noting that the formula above works with twisted pair cables (as used for networking) and even side-by-side constructions such as 'figure 8' or 'zip cord', commonly used for wiring loudspeakers to amplifiers. Not that we need to care about the characteristic impedance of any audio cable, because the cable length is normally only ever a small fraction of a wavelength at the highest frequency of interest - 20kHz.
Note that as shown here, a cable has capacitance and inductance, so it is a tuned circuit. To prevent the tuned circuit from becoming a problem, there needs to be an impedance at one end of the cable at least (preferably both) that matches the characteristic impedance of the cable. The capacitance and inductance are distributed along the length of the cable.
To reduce the characteristic impedance of any cable, it's necessary to reduce the inductance and increase the capacitance for a unit length of the cable. Increasing the impedance naturally requires the opposite - more inductance and less capacitance. Very low impedance cables can cause audio amplifier instability because of the high capacitance, unless they are properly terminated (for example, by adding a Zobel network to the far end).
Cable impedance can also be calculated if you know the respective diameters of the inner and outer conductors and the dielectric constant (also called the relative permittivity) of the insulator around the centre conductor.
Z0 = 138 × log10( D / d ) / √εr Ω

Where ...
εr = Relative permittivity (dielectric constant) of the dielectric
D = Inside diameter of the outer conductor
d = Outside diameter of the inner conductor
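The formula is straightforward to evaluate. Using the 3.3:1 diameter ratio and dielectric constant of 2 mentioned later in the article gives close to 50Ω:

```python
import math

def coax_impedance(d_outer, d_inner, epsilon_r):
    """Z0 = 138 x log10(D / d) / sqrt(er), as given above."""
    return 138.0 * math.log10(d_outer / d_inner) / math.sqrt(epsilon_r)

# A 3.3:1 ratio of shield inside diameter to centre conductor
# diameter, with a dielectric constant of 2:
print(f"{coax_impedance(3.3, 1.0, 2.0):.1f} ohms")   # ~50.6 ohms
```

Only the ratio D/d matters, so the same function gives the same answer regardless of whether the dimensions are entered in millimetres or inches.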
The dielectric material is used to provide physical separation between the inner conductor and the shield. The material used should have stable electrical characteristics (dielectric constant and dissipation factor) across a broad frequency range. The most common materials used are polyethylene (PE), polypropylene (PP), fluorinated ethylene propylene (FEP), and polytetrafluoroethylene (PTFE, aka Teflon). PE and PP are common in applications that have lower cost, power and temperature range requirements (PE is 85°C, PP is 105°C). FEP and PTFE are for high power and temperature range applications (200°C), and have much greater resistance to environmental factors. However, they also cost a lot more.
The materials may be used in their natural (solid) form, or injected with gas bubbles to create a foam or cellular structure. This reduces both the dielectric constant and dielectric losses. Some rigid or semi-rigid 'cables' (intended for fixed installations only) use discs of insulating material spaced at intervals, so the dielectric is predominantly air, thereby reducing losses even further.
Material | Relative Permittivity (εr)
Vacuum | 1 (by definition)
Air (sea level, 25°C) | 1.00059
PTFE (Teflon) | 2.1
Polyethylene | 2.25
Polyimide (Kapton) | 3.4
Polypropylene | 2.2–2.36
Polystyrene | 2.4–2.7
Polyvinyl Chloride (PVC) | 3.18
Polyethylene Terephthalate (PET, Mylar) | 3.1

Table 1 - Relative Permittivity Of Common Dielectric Materials
Some of the materials listed above may not be found in the cable itself. However, if you ever need to join a coaxial cable that's used at radio frequencies, be aware that 'ordinary' PVC insulation tape or Kapton tape both have a higher dielectric constant than the insulator materials normally used. This can cause an impedance discontinuity where the join is made. More consistent results will usually be obtained by using a dedicated cable joiner or a plug and socket with the same impedance as the cable.
A coaxial cable of a specific impedance is determined by the ratio of the dimensions, not the absolute values. A 50Ω coax can be as small as 2.5mm diameter or as big as 50mm diameter (or more). Provided the dimensional ratios are maintained, the cable impedance is also maintained. For example, assuming a dielectric constant of 2, a 50Ω coax has an outer to inner diameter ratio of 3.3:1 - it makes no difference if the dimensions are in millimetres, centimetres or inches, you will still get the same result. For a given impedance, the dimensional ratio is only changed if the dielectric constant is different.

Needless to say, there is a vast amount of information on-line. This includes impedance, capacitance and inductance calculators and many other tools that can be used to work out the characteristics of a given cable. However, the one piece of info you will almost certainly be unable to find is the relative permittivity of the dielectric, and this is essential before you can calculate the impedance or anything else. You can make an educated guess though, because most will be somewhere between 2 and 3 (see Table 1). If the material is 'foamed' (injected with air bubbles) relative permittivity will be reduced, but it may be next to impossible to find out the actual figure. If you can measure the dimensions accurately you can then work out the dielectric constant, assuming that you know the cable impedance (it's usually printed on the outer jacket or sheath).

One online calculator tool that seems to work well and gives expected results is Coaxial Cable Impedance Calculator. There are countless others, but I rather like this one because it provides everything you need with an easy to use interface.

(Note: There is no affiliation between ESP and Pasternack, and the link is provided purely as a service to readers.)
Something that tends to make the inexperienced really wonder about the overall sanity of electronics as a whole is a cable's velocity factor - an indication of how much the cable slows down an electromagnetic wave travelling in the cable. Sometimes it will be referred to as 'velocity of propagation' or similar, and it's normally expressed either as a percentage or a decimal fraction. A cable with a velocity factor of 0.75 or 75% means that the signal travels at 0.75 times the speed of light (in a vacuum), nominally 3 × 10⁸ metres/ second (299,792,458 metres per second if you wish to be exact). For our example, the signal will travel at only 2.25 × 10⁸ metres/ second - a significant reduction.
This means that it will take about 4.4ns for the signal to travel along 1 metre of the cable, but it would only take 3.3ns for the same signal in a vacuum. It doesn't sound like much, but the velocity factor (VF) of a cable is critical with very high radio frequencies, and must be considered when designing some types of antenna (phased arrays for example). It's mostly a non-issue for audio of course, but it can still be a problem with very long lines (several kilometres up to many thousands of kilometres) as used in early telephony. Before fibre-optic cable became the standard for all overseas calls, submarine cables were used, and these were affected by the velocity factor of the cables.
If a signal has to travel from Sydney (Australia) to London (England) for example, that's a distance of 16,983km (as the crow flies). The delay is 75.5ms in a cable with a VF of 0.75 but is only 56ms at the velocity of light. None of this sounds like very much, but if telephone systems are not properly terminated (based on the characteristic impedance of the cables, including that from the exchange ('central office') to the subscriber), you get echo or reverberation effects that can make communication difficult. Note that not all of the delay is due to cable delays - there is always some latency (delay) in the processes of analogue to digital and digital to analogue conversion (ADCs and DACs) - collectively known as CODECs (from coder-decoder). These all introduce delays, as does the switching equipment.
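The delay figures above are easily checked. This is a minimal sketch using the exact speed of light and the distances quoted in the text:

```python
C = 299_792_458.0   # speed of light in a vacuum, metres/second

def delay_s(distance_m, vf=1.0):
    """One-way propagation delay (seconds) through a medium with velocity factor vf."""
    return distance_m / (C * vf)

sydney_london = 16_983_000.0               # metres, as the crow flies
print(delay_s(sydney_london) * 1000)       # ~56.6 ms at the speed of light
print(delay_s(sydney_london, 0.75) * 1000) # ~75.5 ms in cable with VF = 0.75
```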
Velocity factor is mainly influenced by the relative permittivity of the cable's dielectric, but some other factors can also have an influence. In the early days of television, it was common to use a balanced 'twin-lead' (or alternatively 'ladder' or 'open wire' lines) between the antenna and the receiver. The common impedance used was 300Ω, and due to relatively wide spacing of the conductors, these cables had a velocity factor of up to 0.95 (95%).

As unlikely as it might seem, older 'high-end' oscilloscopes often used a length of coaxial cable as a delay line, coiled up in the case somewhere. The idea was that the signal would be fed directly to the triggering circuits, and a slightly delayed version (via the cable) then processed for the waveform display. This compensated for the short delay inherent in the trigger circuitry, and ensured a very clean trace without showing triggering artifacts - most commonly an apparent glitch at the beginning of the displayed trace. To this day, coax delay lines operate in many environments - anything from cellular phone base stations to airborne electronic warfare systems.
This is really a can of worms. There are so many different connectors that it's hard to know where to start. The first decision will always be the physical form of the connector, and it will usually have to mate with an existing connector on the equipment. It wouldn't make much sense to try to use a standard 1/4" (6.35mm) phone plug for an oscilloscope. No suggestions are offered in this respect, simply because it almost always depends on the application and the equipment you have to connect to.
Some connectors are available in only one impedance - either 50Ω or 75Ω. It is often important to use the exact type of cable that the connector is designed for - for example, you can't use a cable with a stranded inner conductor with an F-connector (as used for most modern TV installations, cable/ satellite TV, cable internet, etc.). These connectors only provide the outer shell - the centre conductor is simply extended into the connector body and forms the pin of the male plug. These connectors are designed to be used with RG-6/U or RG-59/U cable - note that there may be different versions to suit each cable type, because the cable outer diameters are often different. There may also be some variants of these cables that are the same size. See the comments above regarding coax cable designations - they aren't always reliable.
The characteristic impedance of a connector is determined by the dimensions of the inner and outer conductors, as well as the type of dielectric used to support the centre pin or socket. In other words, the impedance is worked out in exactly the same way as for cable. The F connector referred to above is a case in point, and it's designed to maintain the dimensions of the cable as closely as possible. This is surprisingly important - if the impedance of the connector is wrong, it causes a discontinuity that affects the signal by creating reflections. Every time the impedance changes, some of the incoming signal is reflected back to the source, and this reduces the level reaching the equipment.

Because of this, there is often a surprising amount of skill needed to terminate cables with connectors, since a discontinuity creates problems and loss of signal. Some connectors are much easier than others, and some require special tools or failure is almost guaranteed. The tools are often rather expensive, so only those who work with the connectors on a regular basis can justify the expense.

There is a staggering number of different types of connector designed for coaxial cable. All the major types have carefully controlled impedance (mainly 50 or 75Ω), and a few are available in either impedance. BNC connectors are a good example - they are available in both 50 and 75Ω types. Many of the others are designed for a single impedance, and are not available with an alternative.

A few common examples include ...
The RCA connector is listed above only because it's been used for many years for video cables (composite and RGB), which are designed for 75Ω. Unfortunately, RCA connectors are nowhere near 75Ω, and with the common types it's close to impossible to determine their impedance because dimensions change through the length of the connector. There are some RCA connectors that claim to be 'true' 75Ω, but this may be rather optimistic for most.

Based on the dimensions (8.06mm outer shield, 3.12mm inner pin) and assuming an air dielectric, the impedance is about 56Ω. If a PVC or similar dielectric is used, that reduces the impedance to around 32Ω. In a domestic setup, RCA connectors usually work fine, but only because the cables are generally quite short compared to the wavelength of the highest frequencies encountered in the video signal. Since the short cable is not a transmission line, impedance matching isn't especially critical.
This list is not exhaustive, and has been culled so that only the most common connectors are shown. There are a great many more, some of which have faded into obscurity, and others that are only used for very specific purposes (such as military or aerospace equipment). All the connectors listed (except the RCA) are primarily intended for radio frequency applications, but naturally they all work from DC upwards. Common use with audio frequencies is (generally) limited to only two of those listed - BNC and RCA. Not including connectors used for domestic TV (antenna and video, of which there are millions), the BNC is one of the most popular connectors of all time.
SMA connectors are gaining in popularity in recent years, as they are very compact, and have good performance at radio frequencies. They also perform well at audio frequencies of course, and as they use a screw thread on the locking ring, they can't easily be accidentally disconnected. Most are designed to use RG178 miniature coax.
Almost every oscilloscope made since the early 1960s uses BNC female sockets on the front panel for all inputs (vertical, horizontal and sync). As a result of the proliferation of BNC connectors on oscilloscopes, other test equipment has also provided BNC inputs and outputs as well, so now almost all quality lab instruments will have BNC connectors as part of the instrument. If other connectors are needed, it's common to provide adaptors to mate with other equipment.

BNC connectors are also common for telecommunications, and were also used for early computer networking systems (ARCNET is one that I was very familiar with many years ago, and it survives to this day). Although the cable has an impedance of 93Ω (RG-62U), standard 50Ω BNC connectors were used. While this is a significant impedance mismatch, ARCNET cables could still be run for over 600 metres from an active hub to an end node, compared to ~180 metres for so-called 'thin Ethernet' aka 10BASE2 using RG-58 coax. This also used BNC connectors, as did other coax based networking schemes. A male line (cable) plug and chassis mount female socket are shown below.

It should be apparent that with so many different systems using BNC connectors, their reputation for reliability is second to none. The only time anyone will have issues is if exceptionally low quality connectors are sourced from Asia, and/ or the cables are badly terminated. Poor crimping (often because the wrong crimping tool has been used) and generally shoddy workmanship will cause problems, but it's surprising just how well even cheap connectors work ... provided you don't expect good performance up to several GHz of course.

It's also probably fairly apparent that I really like BNC connectors. So much so that even my workshop audio input (to amplifier and speaker system) and output (from FM tuner or CD player) are BNC, as are all my test instruments and various workshop preamps. Some adaptors are used, but most of the time I rely on BNC leads for almost everything. The majority have a BNC on one end, and alligator clips at the other with suitable flexible fly leads.
Most of my leads are RG-174U, a nice thin cable (2.55mm diameter) with a capacitance of around 100pF/ metre. It is not advisable to use leads of this type with an oscilloscope when viewing a high-speed signal (such as a 10kHz squarewave), because they will affect the waveform far more than a x10 oscilloscope probe. For coupling audio bits and pieces together they are invaluable. Even 2.7 metre test leads only have a capacitance of 270pF. That can be enough to make a fast opamp oscillate, but a series 100Ω resistor will stop that, with no effect over the extended audio band (up to 100kHz).

For a (more-or-less) complete list of different coax cable types, see Wikipedia - Coax Cable. There are many that appear almost identical, and the 'RG' numbering system is not alone. Most of the cables in common use range from 2.5mm diameter (e.g. RG179) up to a bit over 7mm. The range and variety is extensive, with some designed for flexibility, and others for fixed installations.
This is a surprisingly complex topic, and even though there's quite a bit of info in this section, it's been simplified as far as possible to make it understandable. If this is a subject that you really need to understand fully, then you'll ideally get yourself some good books that cover the details thoroughly and (hopefully) accurately. While there's a lot of useful info on the Net, there's also a great deal that's either misleading or wrong. It can be very difficult to know which is which when you're starting out.
Unlike low frequency circuits, which generally use low output impedances and high (or comparatively high) load impedances, with RF, impedances need to be matched. This can also become necessary even with audio, but only if the cables are of significant length - typically several kilometres. The most common place that these conditions are found is in the telephone system.
With RF, it's not just the cables that form the transmission line - the connectors are very much a part of the overall circuit, as is any join in the cable or other transition from one medium to another. As such, the impedance of each part has to be carefully engineered to match the cable being used. This is the reason there is so much info on connectors in the previous section. Ignore these essential components at your peril, and remember that joints (and even small radius bends) need just as much attention.

When dealing with transmission lines, it's almost always necessary to know the wavelength. There are some seemingly very odd (but perfectly reasonable once understood) things that happen with high frequencies, and you will often need to know the wavelength to be able to make sense of the measured results. With low frequencies (such as audio) this is almost never a problem. Consider that the wavelength of a 20kHz signal is 15km for a signal travelling in a vacuum - it's even longer in a transmission line (twisted pair or coax). Increase the frequency to 100MHz and it's down to 3 metres. Wavelength is easily calculated ...
λ = v / f

Where ...
λ = wavelength (metres)
v = velocity of propagation (metres/ second)
f = frequency (Hz)
For the remainder of this section, we will assume a coax impedance of 50Ω, velocity factor of 0.75 (75%), and a frequency of 100MHz. It's quite easy to re-calculate everything described below for any frequency, and only basic maths (and a scientific calculator) are needed.
So, with a velocity factor of 0.75, the wavelength of 100MHz in the coax transmission line is 2.25 metres (using the above formula). If we have a source of 100MHz and feed it into a 2.25 metre length of coaxial cable, the signal at the unterminated far end of the cable will be reflected back to the source. This reflection will be in phase with the applied signal, and the cable appears to be open-circuit. The same thing happens if the cable is reduced to exactly 1/2 the length (1.125 metres).
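The wavelength formula above can be sketched in a couple of lines. Note that using the exact speed of light gives 2.248 metres rather than the nominal 2.25 metres quoted in the text (which assumes 3 × 10⁸ m/s):

```python
C = 299_792_458.0   # speed of light in a vacuum, metres/second

def wavelength_m(f_hz, vf=1.0):
    """Wavelength (metres) of a signal at f_hz in a line with velocity factor vf."""
    return C * vf / f_hz

print(wavelength_m(100e6, 0.75))   # ~2.25 m - 100 MHz in coax with VF = 0.75
print(wavelength_m(20e3))          # ~15 km - 20 kHz in a vacuum
```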
Things become interesting (to put it mildly) if this same cable is a little over 560mm long - this is 1/4 wavelength (often referred to as a 'stub'). When a 100MHz signal is applied to one end, the unterminated cable appears to be a short circuit! The signal is reflected at the open end, but is now 180° out-of-phase. The reflection causes signal cancellation, and the source (such as a transmitter) will 'see' a short circuit and will probably be damaged. If the 560mm open stub is connected to a receiving antenna, it will filter out (remove) any signal at 100MHz, while letting adjacent frequencies through with little reduction. This is all very much frequency dependent, and all multiples of 1/4 wavelength will be affected in different ways, depending on the termination.

With a good quality low loss coax, the Q (quality factor) of this 1/4 wave trap is so high that the bandwidth may be as low as 100kHz, although expecting better than 1MHz is probably unwise. This is one place where the DC resistance of the centre conductor and shield dramatically affect performance. All cable resistance and dielectric loss appear in series with the coax tuned circuit, affecting the depth of the notch. Also, be aware that the signal will also be effectively shorted out at 300MHz, 500MHz, 700MHz, etc. This is known as a 1/4 wave stub, and as always you'll find plenty of information on line if you search for it.

Things get even more interesting if this same 560mm length of coax is now shorted at one end. It will appear to be a short circuit at DC (as expected), but it starts to show a significant impedance at a frequency of a little over 2MHz. At 100MHz (1/4 wave), it's now an open circuit, showing very high impedance - not quite infinite, but getting close. There will then be a series of peaks and nulls in the impedance curve, with the cable appearing to be open circuit at the same frequencies as above (300MHz, 500MHz, 700MHz, etc.). This type of 1/4 wave trap acts as a short circuit at 200MHz, 400MHz, 600MHz, etc.
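The stub behaviour described above follows from the standard lossless-line result: a shorted stub presents Zin = jZ0·tan(βl), and an open stub presents Zin = -jZ0·cot(βl). This sketch (using the nominal 3 × 10⁸ m/s figure the text's dimensions are based on, and ignoring cable losses) reproduces the 562.5mm quarter-wave example:

```python
import math

C_NOMINAL = 3e8   # nominal speed of light, as used for the 562.5 mm figure

def stub_reactance(z0, length_m, vf, f_hz, shorted=True):
    """Input reactance (ohms) of a lossless stub.

    Positive values are inductive, negative capacitive.  Very large
    magnitudes mean the stub looks (nearly) open circuit."""
    beta_l = 2 * math.pi * f_hz * length_m / (C_NOMINAL * vf)  # electrical length, radians
    if shorted:
        return z0 * math.tan(beta_l)    # Zin = j Z0 tan(beta*l)
    return -z0 / math.tan(beta_l)       # Zin = -j Z0 cot(beta*l)

stub = 0.5625   # quarter wave at 100 MHz with VF = 0.75 (562.5 mm)
print(stub_reactance(50, stub, 0.75, 50e6))   # +50 ohms (inductive) at half frequency
print(stub_reactance(50, stub, 0.75, 100e6))  # astronomically large - effectively open
```

A real cable's series resistance and dielectric loss limit the impedance peaks and the depth of the notches, as noted in the text.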
In the above, you can see the transmission characteristics for a 1/4 wave stub, with the far end shorted (red) and open (green). The traces show the relative impedance seen from the source into the cable. The cable has a delay of 2.5ns, which is 1/4 wavelength at 100MHz. If we use the same cable referred to above (the one with a velocity factor of 0.75), a 1/4 wave stub will actually be 562.5mm in length (560mm is an approximation). This is both confusing and confronting when you come across it for the first time, because it seems to go against all logic, but it's all perfectly reasonable once you understand how it works.

From a little over 200kHz, the cable with the far end shorted (red trace) appears as an inductor. Its impedance increases with increasing frequency, until it appears to be an open circuit at 100MHz. The impedance then starts to fall, and is capacitive (falling with increasing frequency). At 200MHz the cable is a 1/2 wave stub, and it presents a short to the signal source. This process repeats as the frequency is increased further.
As may be becoming apparent, coaxial cable can be used for much more than simply a means for transporting a signal from one place to another. However, once the cable is terminated in its characteristic impedance, for all intents and purposes it disappears. The 1/4 wave stub discussed above simply becomes an almost perfect conductor when the load impedance and cable impedance are the same. It's only when the impedances are mismatched that problems (and apparently strange behaviour) arise.

We can take it from this that impedance matching is critical, but it's very important to understand that these effects do not come into play until the length of the cable is 'significant' compared to wavelength. A 'rule of thumb' that can be applied here is that significant means an order of magnitude - for the example shown above, effects become noticeable at 10MHz - 1/10th of the frequency we are working with.
Most people working with audio will never experience any of the phenomena described, because the cables needed to experiment are simply far too long if you are limited to the audio range. Even if you can generate a 1MHz signal (and somehow consider it to be 'audio'), you're still looking at a 1/4 wave cable that's around 56 metres in length, so it's not easy to verify if you don't have the ability to generate (and measure) high frequency signals. At 100kHz, you need over 500m (1/2 a kilometre) of cable. Unwieldy and expensive to put it mildly.

At any frequency below around 10MHz, the length of coax we've used here is classified as being electrically 'short', because the line length is much less than a wavelength. The impedance seen by the source is almost entirely dependent on the load impedance at the far end of the cable. It follows that for audio (and even well above), this is a 'short' line, and it never behaves like a transmission line - it's simply a cable with resistance, capacitance and inductance. A piece of wire!

Once the cable length is several wavelengths, it is an electrically 'long' line. The load seen by the source now depends primarily on the cable. Provided the load impedance equals the characteristic impedance of the cable (e.g. 50Ω), the source sees only the cable impedance. In an infinitely long transmission line, the impedance seen by the source depends solely on the cable. This is because it will take an infinite amount of time for the original signal to reach the end of the cable, so the load is irrelevant.

In the real world, it's often hard to ensure that a radio frequency load (an antenna for example) is exactly the right impedance. We now know that if the load doesn't exactly equal the cable impedance there will be reflections, and these are easily measured with fairly simple test instruments. The most common of these is the VSWR meter (sometimes referred to as SWR). VSWR stands for 'voltage standing wave ratio', and this is a good measure of the impedance mismatch between the cable and load. If both impedances are equal, the VSWR is 1:1 (unity) - this is the ideal case.
VSWR = ( 1 + Γ ) / ( 1 - Γ ) or ...
VSWR = ( |Vf| + |Vr| ) / ( |Vf| - |Vr| )

Where ...
VSWR = voltage standing wave ratio
Γ = reflection coefficient (Gamma) - √ ( Reflected Power / Input Power )
Vr = Reflected voltage
Vf = Forward voltage
VSWR is essentially a measure of how much of the delivered power is reflected by the far end of the transmission line - coaxial cable in our case. If we deliver 10W to a cable and load (typically an antenna) and 2.5 watts are reflected due to an impedance mismatch, the measured VSWR is 3:1 (or just 3). In this case, the gamma (Γ) is 0.5 as shown in the formula notes.
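The worked example above (10W forward, 2.5W reflected) can be verified directly from the formulas:

```python
import math

def vswr_from_power(p_forward, p_reflected):
    """VSWR given forward and reflected power (watts)."""
    gamma = math.sqrt(p_reflected / p_forward)   # reflection coefficient
    return (1 + gamma) / (1 - gamma)

print(vswr_from_power(10.0, 2.5))   # gamma = 0.5, so VSWR = 3:1
```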
Figure 4 - Voltage Measured Along A Transmission Line
In the above, you can see that the voltage varies between a maximum and minimum along the length of the line. This is the voltage standing wave ratio, and for the above, a 50Ω cable was terminated with a 100Ω resistor (representing the load - typically an antenna). This provides a VSWR of 2:1 due to the mismatch. VSWR meters are designed to match the impedance of the system they will be used to test, and since most transmitters use 50Ω, most meters are 50Ω as well. Naturally, for 75Ω systems a 75Ω VSWR meter must be used.
In many cases, the VSWR will be determined by a different measure - return loss, expressed in dB. A VSWR of 3:1 is equivalent to a return loss of 6dB. The ideal return loss (RL) is infinity, which indicates that there is zero loss and the impedances are exactly equal. With RF systems, it's unrealistic to expect better than 30dB, indicating a VSWR of 1.065:1 and a reflection coefficient of 0.032. There are several useful converters on the Net - one that I used for this article is VSWR to return loss conversion.
RL = 10 × log ( P1 / P2 ) or ...
RL = 20 × log ( V1 / V2 )

Where ...
RL = return loss (dB)
P1 = forward (input) power
P2 = reverse (reflected) power
V1 = forward voltage
V2 = reverse (reflected) voltage
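The conversion between VSWR and return loss mentioned above is easy to do without an online tool. A minimal sketch, reproducing the figures quoted in the text:

```python
import math

def return_loss_db(vswr):
    """Return loss (dB, positive) for a given VSWR."""
    gamma = (vswr - 1) / (vswr + 1)     # reflection coefficient
    return -20 * math.log10(gamma)

def vswr_from_return_loss(rl_db):
    """VSWR corresponding to a return loss in dB."""
    gamma = 10 ** (-rl_db / 20)
    return (1 + gamma) / (1 - gamma)

print(return_loss_db(3.0))          # ~6.02 dB for a VSWR of 3:1
print(vswr_from_return_loss(30.0))  # ~1.065:1 for a 30 dB return loss
```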
Return loss is always used in telecommunications systems rather than VSWR, and it's measured using a return loss bridge. An example of a return loss bridge is shown in AN-010 - 2-4 Wire Converters / Hybrids on the ESP site. This is specifically related to telecommunications systems, where return loss has been the measure for impedance matching for many years (VSWR is not used). Note that return loss should always be expressed as a positive value, although in some cases you may see it (incorrectly) expressed as a negative.
It's interesting to see a length of coax along with the signal wave, and this is shown below. Only a single cycle is shown, having three nodes (zero voltage points) and two antinodes (peak voltage points). If a short circuit is placed at a node, it's 'invisible' to the source, which sees an open circuit. Conversely, if the node is open, it will be seen as a short by the source. This seemingly odd behaviour may be unexpected, but it happens whether you like it or not. Of course, the wave isn't a static entity as shown in the drawing. From the source, it varies from zero, through the positive peak, back to zero, then the negative peak, repeated indefinitely.

An open circuit antinode appears to be a short to the source, and naturally if it's shorted, it appears as open circuit. These conditions can only exist at frequencies where the length of the cable is a perfect multiple (or sub-multiple, i.e. 1/4, 1/2, 3/4) of the wavelength, so the short vs. open conditions only apply at specific frequencies. At other frequencies, the cable is a complex impedance creating a tuned circuit, but since it's only resonant at very specific frequencies determined by its length, other nearby frequencies are relatively unaffected.

This can all be quite difficult to come to grips with, and isn't easily explained in simple terms. However (and with any luck), the explanations here will be helpful to your understanding. Don't worry too much if it doesn't seem to make sense, because we are talking about RF after all.
Provided you use reasonably well matched impedances with coax, you will generally get fairly good results with RF applications. However, impedance matching is (almost) never used with audio, and that can introduce some apparently strange behaviour with some circuits.
From the info shown above, it is (or should be) quite obvious that a coax cable isn't just 'a piece of shielded wire', but is something far more complex. Indeed, even a piece of shielded wire isn't just 'a piece of shielded wire' - it's a coaxial cable. Anyone who has looked through the various ESP projects will have noticed that I always include a 100Ω resistor at the output of any preamp or other circuit that is likely to be connected to other gear with a cable. This can be thought of as a 'stopper' resistor, in that it stops the output circuit from interacting with potentially very low impedances at specific frequencies determined by the characteristics of the attached cable.

Because the cable between pieces of equipment will nearly always be shielded, that means it has capacitance and inductance and is therefore a resonant circuit. More importantly, it is a transmission line for high frequencies. The reactance of the cable doesn't create a problem within the audio band, but it does cause issues within the bandwidth of the opamp (whether integrated or discrete). The cable is perfectly capable of causing an opamp to oscillate, often at a frequency that's outside the bandwidth of many budget oscilloscopes. That means that even if it does happen, you probably won't be able to see it on the scope.
The inductance of the coax (for audio applications) is almost never a problem. However, the capacitance is often right in the range where opamps (and even emitter followers) are subject to the greatest potential for oscillation. Few active circuits like capacitive loads, and the most critical range is from around 500pF up to 10nF or so. This is exactly the range of capacitance that common shielded cables and/or 'true' coax will present to the driving circuit. Very short lengths (as used for internal wiring for example) are usually below the critical range, but 'typical' interconnects will usually measure somewhere between 500pF and a few nF, and will cause problems if an output 'stopper' resistor isn't used. Some opamps are less tolerant than others, and the datasheet may (or may not) indicate the response with capacitive loading. Few opamps can tolerate a capacitive load of more than ~200pF without 'bad' things happening (some can tolerate a great deal less - for example, the LM833 may oscillate with a capacitive load of more than 50pF).
If an opamp oscillates at some extreme frequency, the effect is often audible as either hum or buzz, it may cause audible distortion, or it may have no audible effect - until you use a different cable. It's completely unpredictable, and never good. Adding a series output resistor is enough to swamp the effect of the cable, by isolating the opamp's output from the external high Q resonant circuit that is the cable. Even a simple emitter follower can be affected, and it's worse if the base is fed from an impedance that is low at high frequencies.

While I use a 100Ω output resistor as a matter of convenience, in some cases the series output resistor can be reduced. It's rarely necessary though, because most other audio equipment has an impedance of at least 10k, and usually more. The attenuation caused by the 100Ω resistor is negligible, and I have never seen any opamp oscillate with any cable when the resistor is used. However, I've seen many discrete and opamp based amplifiers oscillate if the resistor is omitted - even as little as a 1 metre cable can cause oscillation in some cases. The following is an example, taken from Project 88 (left channel output stage). The output will generally be connected to a power amp (or perhaps an electronic crossover) via a shielded cable, and R9L is the output resistor.
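The cost of the 100Ω 'stopper' resistor is easily quantified. This sketch uses the 10k load impedance from the text and an assumed 1nF of total cable capacitance (a typical interconnect figure, not from the text):

```python
import math

r_out = 100.0      # series 'stopper' resistor (ohms)
r_load = 10_000.0  # typical input impedance of the following equipment (ohms)
c_cable = 1e-9     # assumed total cable capacitance - 1 nF

# Attenuation of the resistive divider formed with the load
loss_db = 20 * math.log10(r_load / (r_load + r_out))

# Low-pass corner formed by the resistor and the cable capacitance
f_corner = 1 / (2 * math.pi * r_out * c_cable)

print(loss_db)    # about -0.086 dB - negligible
print(f_corner)   # about 1.59 MHz - far above the audio band
```

The resistor costs less than 0.1dB of level, while the resulting low-pass corner is well clear of even the 'extended' audio band mentioned earlier.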
+ +There are other ways of preventing any oscillation problems caused by a coax cable, but most are more expensive, less convenient, or both. You can use a Zobel network at the far end - i.e. the equipment being supplied with the signal. I don't know of any manufacturer of audio equipment that includes a Zobel network at the inputs, so it would have to be added (a 51Ω resistor and a 220pF cap in series would work). Other than modifying equipment, this is not a viable solution - especially since the solution is so cheap and simple. With power amplifiers, it's common to include a 10Ω/ 100nF Zobel network and an RF 'choke' (inductor) of a few micro-Henrys at the amp's output to prevent problems caused by speaker cable capacitance. The same thing can be done with preamps, but a resistor is a far simpler option, and works just as well.
+ +Over the years, many people have asked me why the 100Ω resistor is included, and now you know the reason.
+ +In some cases, you might find that a circuit just doesn't sound 'right', with audible artifacts or some other issue that indicates that there's a problem. In some cases, you can probe around with a finger (provided there are no high voltages present of course), and you might find that if you place your finger 'there', the problem goes away. This is almost always a good indicator that there is high frequency oscillation within the circuit, and your finger provides just enough coupling/ decoupling/ damping to stop or reduce the level of the oscillation. This usually means that you need a re-design of the board, but in some cases you may be able to include a series output resistor and/or a low value cap in the feedback network, or just use a different opamp. Some opamps are way too fast for audio, and there are a few that like to oscillate (at RF of course) - the LM833 is one that I know would sometimes rather oscillate than amplify.

+ + +Impedance conversion requires a transformer, which may also be required to convert from balanced to unbalanced. This is needed with a 1/2 wave folded dipole antenna for example, and has to convert a balanced 280Ω (300Ω is generally assumed) antenna impedance to 75Ω unbalanced. The term 'balun' is simply a contraction of balanced-unbalanced, and they are very common with TV and FM receiving antenna installations. In most installations, a balun will be used to connect a balanced antenna to an unbalanced feeder - a coax transmission line leading to the receiver (or transmitter).
+ +Because the frequencies used for TV and FM are fairly high (over 80MHz for nearly all systems now), the transformer is fairly simple, and for receiving systems is normally just a few turns of insulated wire through a ferrite bead. There are countless ways to make baluns, and it's worth doing an image search to see the different types that can be made or purchased. A common TV balun may use perhaps 6 to 8 turns on the 300Ω side, and exactly half the number for the other winding. Not all baluns are isolating, so some will use a single tapped coil (an autotransformer) rather than separate windings.
+ +Baluns are also sometimes used in reverse - converting a balanced transmission line to an unbalanced load, but this is less common. The inductance needed is very small - as little as 10µH is usually more than sufficient for frequencies above around 50MHz. However, this article has no intention to cover the design of RF transformers or baluns - it is a general discussion only.
+ +A couple of more-or-less typical designs are shown above. The single auto-transformer version is not a true balun, because both input and output are unbalanced. However, if it's connected to a folded dipole antenna that doesn't have an earth connection at the mid-point of the dipole itself it will still work fine. Most TV and FM antennas do earth the centre point of the dipole though, as this provides some protection against nearby lightning strikes. However, protection circuits notwithstanding, a direct hit will usually destroy everything regardless.
+ +There's an old myth that says "lightning never strikes the same place twice" - generally untrue, but it can arise simply because the same place isn't there any more!
It's important to understand that many RF circuits are as much an art as they are a science. Some of the most unlikely circuits can be seen in RF installations, and the apparently simple act of impedance conversion can become anything but simple when you have a 500kW transmitter to deal with. A seemingly insignificant resistance or impedance variation can become your worst nightmare very quickly, since the transmitter will have an output of 5kV at 100A for a 50Ω system. They are quite scary numbers, and if the transmission line only loses 10% of the input power it has to dissipate 5kW - that's a lot of watts!
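Those voltage and current figures follow directly from the power and the line impedance (V = √(P·Z₀), I = √(P/Z₀)), and are easy to verify:

```python
import math

def line_voltage_current(power_w, z0_ohms):
    """RMS voltage and current on a matched transmission line of impedance Z0."""
    v = math.sqrt(power_w * z0_ohms)  # V = sqrt(P * Z0)
    i = math.sqrt(power_w / z0_ohms)  # I = sqrt(P / Z0)
    return v, i

# 500 kW transmitter into a 50 ohm system, as described in the text
v, i = line_voltage_current(500e3, 50.0)
print(v, i)  # 5000.0 V at 100.0 A
```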
+ +With the introduction of digital TV, transmitter power is generally lower than was the case with analogue transmissions, but they (mostly) operate at higher frequencies. However, in some parts of Australia digital TV transmitters have an effective radiated power (ERP) of up to 350kW. Effective radiated power is a measure of the transmitter's actual output and the gain of the antenna system. A detailed discussion of this is well outside the scope of this article.
+ +Coaxial cables are common in audio, but are generally referred to as 'shielded cables'. This is simply because the characteristic impedance is usually uncontrolled, and is not relevant. Even with a velocity factor of 0.66, the wavelength at 20kHz is 9.9km (yes, kilometres). Such lengths are unheard of in home installations, where the leads are typically no more than a couple of metres. Since it's already been established that any coax shorter than λ/10 is not acting as a transmission line, as long as your signal leads are less than a kilometre you don't need to be concerned about impedance matching. Note, however, that this does not apply to cables handling video!
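The wavelength figure and the λ/10 rule can be checked in a couple of lines (velocity factor 0.66 is the article's example value):

```python
C = 299_792_458  # free-space speed of light, m/s

def cable_wavelength(freq_hz, velocity_factor=0.66):
    """Wavelength inside a cable, shortened by the cable's velocity factor."""
    return velocity_factor * C / freq_hz

lam = cable_wavelength(20e3)       # 20 kHz, the top of the audio band
print(f"{lam / 1000:.1f} km")      # ~9.9 km
print(f"{lam / 10:.0f} m")         # ~990 m - below this, the cable is 'just wire'
```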
+ +For audio, only one thing matters ... capacitance. One of my favourite cables for internal wiring (and test leads) is RG174/U, a flexible 50Ω coax that's only 2.5mm diameter. Another is RG316/U which is (IMO) a better cable, but harder to find and more expensive. The capacitance of both is about 100pF per metre, so even if driven by a 10k source impedance (uncharacteristically high, but a good example), the signal will be attenuated by 3dB at 159kHz with a 1m cable. This will naturally not cause any audible degradation. Most connections are far shorter, and are driven with a lower resistance.
+ +As noted earlier, many opamps (and discrete circuits including simple emitter followers) will oscillate if their bandwidth is high enough to reach the resonant frequency of the length of coax connected to the output. If you have 1-metre shielded cables (coax), the resonant (full-wave) frequency will be somewhere between 100-300MHz, depending on the coax itself. Should an active device be connected without a series damping resistor (I use 100Ω), there is a good chance that the circuit will oscillate. This chance is increased if the 1/4 wavelength (25MHz to 75MHz) becomes 'excited' by the coax, and that's well within the bandwidth of many modern devices.
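The resonant frequencies quoted above can be estimated from the cable length and velocity factor. A sketch, assuming an unterminated 1 metre cable with VF 0.66 (both figures vary with the actual coax):

```python
def resonances(length_m, velocity_factor=0.66):
    """Approximate quarter-wave and full-wave resonant frequencies of an unterminated coax stub."""
    v = velocity_factor * 299_792_458  # propagation velocity in the cable, m/s
    return v / (4.0 * length_m), v / length_m

quarter, full = resonances(1.0)
print(f"{quarter / 1e6:.1f} MHz, {full / 1e6:.0f} MHz")  # ~49.5 MHz and ~198 MHz
```

Both fall squarely within the ranges given in the text, and well within the bandwidth of many modern devices.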
+ +However, it's not even necessary to 'excite' a length of coaxial cable, and capacitance alone is often all that's needed to trigger oscillation. Many opamp specifications show the maximum permitted capacitive loading before the device becomes unstable. For example, the NE5532 opamp has a unity gain bandwidth of 10MHz, loaded with 600Ω in parallel with 100pF. The datasheet doesn't say what the maximum capacitance is, but you can be pretty sure that more than 100pF would be ... inadvisable.
+ +You can see the trend with a simulator, but the models used in most aren't good enough to predict instability at this level. What you can do is run a frequency sweep up to at least 10MHz with a known working circuit, and you'll usually see a peak occur at some high frequency. For example, a SIMetrix simulation with a TL072 shows a peak of over 5dB at 623kHz (and no, I don't believe that at all). However, the trend will be seen, and doubly so if you build the circuit and test it. Often, you'll find that the oscillation is parasitic, and only shows up at certain points on the output waveform. This is easily confirmed by testing the circuit.
+ +Provided you always use an output damping resistor from opamp or discrete circuit outputs, it's highly unlikely that cable-induced oscillation will ever cause a problem. If you don't then the results will be unpredictable at best, unusable at worst. The simple addition of the output resistor ensures you will have no problems (at least from output loading). Poor PCB layout and/ or lack of adequate bypassing can, and do, cause apparently very similar problems. However, the causes are quite different, and are not related (other than by accident).
+ + +By nature, RF is somewhat sneaky. Although radio frequencies really do obey all the laws of physics, to the casual observer this isn't always apparent. Coaxial cables and/or transmission lines are the case in point here, and as should now be obvious they are far more complex than they seem. Impedance is a critical factor once coaxial cable is used at a frequency where the cable is long (or 'significant') compared to wavelength.
+ +In this context, a cable's length has to be considered significant once it is longer than about 1/10th of the signal wavelength at the highest frequency of interest. If you are only dealing with audio frequencies (including up to 100kHz or so), the cable makes little or no difference unless it's more than 300 metres in length. This is rather unusual in most cases, so it's safe to regard any coax (including shielded audio cables) as simply being a piece of wire that has a shield wrapped around it. As such, you need to consider the cable's capacitance, because that will work with the equipment's output impedance to create a low-pass filter. Impedance of most audio cables is irrelevant, and if anyone tries to tell you any different be wary - they may be trying to sell you some expensive snake-oil.
+ +A 30m long piece of coax with a capacitance of 100pF/m (a reasonable value for many cables) has a total capacitance of 3nF, so to get response to 100kHz (-3dB) means that the equipment's output impedance has to be no more than 500Ω. If you need to be able to transmit a digital signal at 100kHz (which is a pulse waveform, essentially rectangular), the cable must be terminated with the correct impedance or the waveform will be distorted by the reflections of the high frequency harmonics. In the worst case this will make the data unreadable, but if marginal it will cause errors and make the connection slower.
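The source impedance figure above is just the RC corner formula rearranged for resistance. A quick check (30m at 100pF/m is the article's example; the text rounds the result down to 500Ω):

```python
import math

def max_source_impedance(f3db_hz, c_farads):
    """Largest source resistance giving a -3dB corner at f3db into a purely capacitive cable load."""
    return 1.0 / (2.0 * math.pi * f3db_hz * c_farads)

c_cable = 30 * 100e-12                       # 30 m at 100 pF/m = 3 nF
r = max_source_impedance(100e3, c_cable)     # response to 100 kHz (-3dB)
print(f"{r:.0f} ohms")                       # ~530 ohms
```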
+ +Most modern computers operate at speeds where digital buses need to be terminated or the data will be severely degraded. If a data bus is bidirectional, a terminator will usually be located at each end of the bus. Computer bus termination can be passive (just a resistor) or active, using circuits designed for the purpose. The PCB tracks form transmission lines for high speed data, and these are affected by every issue noted in this article. There is some more detail on this topic in the article Analogue vs Digital - Does 'Digital' Really Exist?.
+ +The interactions between high frequency signals and transmission lines of all kinds are very difficult areas to understand, and engineering at this level is very different from that needed for audio, industrial processes and most other areas where electronics is used. As data speeds increase, most digital system designers have to be aware of the limitations of their PCBs, interconnects and other wiring.
+ +Hopefully, this article has cleared up at least some misconceptions about shielded cables (coax) in general. Remember that all shielded cables are equally affected, regardless of whether they are specifically intended for RF applications or not. Shielded audio cables are still coaxial, but their impedance is undefined. Due to their intended purpose (audio), they may have a lower Q than RF coax, but are still more than happy to cause a circuit to oscillate if precautions aren't taken. Dual conductor shielded mic cables will also share many of the characteristics of a 'true' coax cable, but the inner twisted pair causes some changes to their operation. Despite this, they can (and will) still become resonant circuits at radio frequencies.
+ + +![]() ![]() |
![]() | + + + + + + + |
Elliott Sound Products | +Comparators |
![]() ![]() |
It's worth pointing out from the outset that opamps are often perfectly alright as comparators in low-speed applications. While there are some texts that warn of 'dire consequences' if you even think about using an opamp, they fail to differentiate between 'high' and 'low' speed operation. 50-60Hz is low-speed, as is the filtered output from a peak detector (for example). Class-D amplifiers and switchmode power supplies are high-speed, and if you attempted to use an opamp then 'bad things' are likely to happen.
+ +Electronic circuits are designed for a specific purpose, and you don't need a 100ns comparator if you're looking at a 50Hz mains waveform or a DC voltage that changes over a period of a few hundred milliseconds. If there's a spare opamp in the circuit and you don't need sub-microsecond response times, then it would be silly to add another package just because you read an article that says using an opamp will cause 'something' to blow up! Mostly, it will do nothing of the kind, but there are applications where the low speed can cause serious circuit malfunctions.
+ +There is no doubt whatsoever that using the wrong part can cause issues, but you need to understand what the circuit is doing and design accordingly. Comparators often let you do things that you can't do with an opamp, but that doesn't mean that you should never use an opamp as a comparator if speed isn't an issue. One thing that you cannot do is use a comparator as an opamp, because it won't work (or will work very badly).
+ +Design is about understanding the circuit, not blindly following a technical note from a manufacturer (for example) that doesn't specifically address what you wish to achieve.
+ + +In many electronic circuits, you'll see something that looks like an opamp, but it's called a comparator. Despite appearances, they are not the same, and while opamps can be used as comparators, the converse is not true. This short article describes the differences between the two. Yes, it was going to be short, but there's actually a great deal to cover, and this is still only an introductory foray into the topic.
First and foremost, I must reiterate ESP's 'Golden Rules' of opamps (and comparators, #2 only!) which state the following ...
+ +In the case of #1, the opamp uses the negative feedback path to ensure that the two inputs (inverting, or -ve, and non-inverting, or +ve) are at the same voltage. If there is an input (+ve in) of 1V, the output will be of the appropriate magnitude and polarity to ensure that the -ve input is also at 1V, provided the circuit is operating within its linear region. This is 'closed loop' operation, and is the way that opamps are generally used.
+ +When #2 applies, the opamp will swing its output as close as it can to the appropriate supply voltage. This is not a linear function, as the opamp is operating 'open loop' (i.e. there is no negative feedback). For example, if the +ve input is at +1V and the -ve input is at +0.99V, the +ve input is the most positive, and the output will be at (say) +14V, assuming a standard opamp and a ±15V supply. Should the -ve input rise to 1.01V, the output will quickly change to -14V. When both inputs are at the same voltage but there's no negative feedback, the output state is indeterminate, and the smallest input change will cause a large output change.
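The open-loop behaviour described by Rule #2 can be modelled trivially. A sketch, using the ±14V saturation voltages from the text (real devices never reach the supply rails exactly):

```python
def comparator(v_plus, v_minus, v_sat=14.0):
    """Idealised open-loop comparator: output slams to the rail matching the more positive input."""
    if v_plus > v_minus:
        return +v_sat
    if v_plus < v_minus:
        return -v_sat
    return None  # equal inputs: in a real device the output state is indeterminate

print(comparator(1.00, 0.99))  # +14.0
print(comparator(1.00, 1.01))  # -14.0
```

The `None` return for exactly equal inputs mirrors the indeterminate state described above, and is exactly the condition that hysteresis (covered later) is designed to eliminate.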
+ +Comparators are used where the output is either on or off. There is no linear region, and attempting to use a comparator as a linear amplifier will almost always produce an oscillator, where the frequency is determined by stray capacitance, inductance (in PCB traces for example) and resistance. Some comparators may not work at all if you attempt linear operation.
+ +Note that almost all comparators rely on an external pull-up resistor (or active circuit) at the output, because they don't use a push-pull output stage. The most common output is an open-collector NPN transistor. There is also no protection against the output being shorted to the positive supply! While a resistor pull-up is the most common, in some cases it may be an active circuit such as a current source. Due to additional propagation delays created by the active circuitry, this approach is far less common than a resistor, and it may be significantly slower. A current source output load may add up to 50ns to the response time, depending on implementation.
+ +A few examples of comparator uses include the following ...
+ +This is a small sample. The much used 555 timer IC uses comparators for both timing and triggering, with the threshold voltages set inside the IC. Most stand-alone comparators have two inputs, just like an opamp, and they behave in much the same way - but not with negative feedback. If you need a linear circuit, use an opamp, never a comparator.
+ +To quote Linear Technology [ 1 ], "Comparators are frequently perceived as devices which crudely express analog signals in digital form - a 1-bit A/D converter. Strictly speaking, this viewpoint is correct. It is also wastefully constrictive in its outlook. Comparators don't "just compare" in the same way that op amps don't "just amplify"." They go on to state that "Comparators may be the most underrated and under utilised monolithic linear component". It's very hard to argue against this, and opamps have taken over many roles that should be handled by comparators, and not always with the best results.
+ +Due to the extraordinary speed of some comparators (such as the LT1016 and many others), a seemingly benign PCB layout can result in wildly unpredictable output behaviour, so careful attention to grounding and bypassing is absolutely essential. More pedestrian devices can lull the designer into complacency that evaporates in a flash when a high speed part is used. Sockets? Forget it. The capacitance of a socket can be more than enough to cause serious errors, including sustained or parasitic oscillation.
+ +This is a whole new world which looks all too familiar to the uninitiated, but can cause an avalanche of grief if not done properly. Also be aware that some opamps have protective diodes between their inputs, and attempting to use them as a 'quick and dirty' comparator probably won't end well. This is especially true if the input voltages differ by more than 0.6V, as the diodes will conduct and can cause havoc with the circuit's operation.
+ +In some cases, you will see the symbol shown on the right for a comparator. This is generally used if the comparator is being used in amongst logic circuitry, because it's familiar to logic designers (the circle indicates inversion). I prefer to use the opamp symbol because it's closer to reality - after all, opamps are often used as comparators where their low speed is not going to affect operation.
+ +The first referenced document is an application note from Linear Technology, and it's partly a cautionary tale of the traps and pitfalls that await anyone who imagines that very high speed comparators are as easy to use as (say) opamps. It also provides valuable circuit ideas and tips on using the LT1016 - an extraordinarily fast comparator. In fact, it's faster than a TTL inverter, and that takes some doing. It's unlikely that many people will build the reference circuits shown in the application note, but the ideas shown are instructive in their own right.
+ +Beware: Some opamps such as the NE5532/ NE5534 have clamp diodes between the two inputs. This makes them unsuitable for comparators, because the input voltages can never be more than around 0.65V different from each other. They can be used in some instances, but mostly they should be avoided in this role.
+ +The drawing above shows the internals of an LM393 comparator, adapted from the 2001 Fairchild datasheet. This is a dual device with two independent comparators sharing only the supply rails. The inputs are designed to be able to operate at below zero volts even with a single supply (the datasheet specifies -1.5V). It can be used with a dual (positive and negative with respect to ground) supply or a single supply from 2V up to 36V. They have been around for a long time, and are available in both DIP and SMD versions. In one-off quantities they are less than AU$1.00 each. These are highly recommended if you wish to experiment with comparator circuits. The values shown for R1 and R2 are those I used in a simulation to test operation. They will work, but are not the values used in the IC (the values are not shown in the datasheet).
+ + +All opamps and comparators have input devices that are matched, but matching never means that the two devices are identical. Close, perhaps even very close, but that's not the same as identical. The inputs can also be subject to noise (external, internal or thermal noise), and there will be cases where the input voltage moves very slowly (such as a charging capacitor in a timer). There will be a point where the input and reference voltages are at the point where the output state is indeterminate. This means it could be positive, negative, somewhere between the two, or oscillating. If the output is used by logic circuits (including micro processors/ controllers), this can cause errors.
+ +A common way to prevent indeterminate output states is to add a small amount of positive feedback. This gives the circuit some hysteresis, so once the output swings (e.g.) positive, the input has to drop by a small amount below the reference voltage before the output can swing low again. The concept of hysteresis is not especially easy to grasp at first, because it's somewhat counter-intuitive. Consider a standard toggle switch ... there is no position of the actuator that can result in an indeterminate output, so the switch is always either on or off (at least that's the idea - the mechanical system doesn't always work if you operate the switch very slowly). The most common version of a device with hysteresis is the Schmitt trigger, but the common CMOS devices like the 40106 or 74HC14 Schmitt trigger ICs don't have two inputs, so the 'reference' voltage is roughly half the supply voltage.
+ +Electronic hysteresis with a comparator is much the same as a toggle switch, except it's easily controlled by component selection, and is pretty much 100% guaranteed to do exactly what you've set it up to do. You can decide how much the input voltage must change before the output changes state by selecting appropriate resistor values. Hysteresis can be added to opamps used as comparators as well as 'true' comparators. Some more examples of hysteresis are shown further below. Figure 2 (below) shows the standard arrangement used with an opamp to obtain hysteresis.
+ +In the Figure 2 drawing, you can see that the comparator is inverting, but the +ve and -ve trip points are different. The output will swing high only when the input voltage has reached -1.3V, and it won't return low until the input has reached +1.3V. Any change that occurs between these two voltages has no effect. Without R3 (which provides the positive feedback), the output will change state at zero volts (plus or minus any input offset), but is easily influenced by noise. With a slow-moving input voltage, the positive feedback also reduces the switching time which may be important in some applications. + +
By varying the value of R3, you can apply more or less hysteresis. Increasing the value reduces the effect, and reducing it gives more hysteresis. If R3 is made equal to R2, the trip voltages will be half the opamp's (or comparator's) peak output voltage. For a TL07x opamp, that means roughly ±6.8V with 15V supplies. A non-inverting Schmitt trigger would have the -ve input grounded, and the input applied via a series resistor (R1 is not grounded, but becomes the input resistor). The disadvantage of this is that fast pulses are passed through the input resistor, back into the circuit being monitored. If it's an audio circuit, this will usually cause audible distortion, especially at low levels.
+ + +All amplifiers have a slew rate that's set by the speed of the active devices, the current density (higher current means higher speed) and circuit impedances. High impedance circuits are generally slower than low impedance types, because stray capacitance has a greater influence. 10pF of stray capacitance limits a 1Megohm circuit to 16kHz (-3dB), or 16MHz if the impedance is reduced to 1k. Of course, lower impedances mean higher current, so the voltage limits for very high speed devices are generally lower than for slower circuits to limit the power dissipation.
+ +Slew rate is simply how fast the output signal can change, usually expressed in volts per microsecond (V/µs). If the input voltage changes too quickly for the circuit (and its feedback network if applicable) to keep up, the output signal becomes limited by the slew rate. The output voltage rate of change means that a fast transient may not be detected and processed properly. With audio systems, this created a furphy called 'TID' (transient intermodulation distortion) or 'TIM' (transient intermodulation). The effects are certainly real, but almost never happen with a normal audio signal unless the designer made a fairly epic error.
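The practical consequence of a given slew rate is the 'full-power bandwidth' - the highest sine frequency that can be reproduced at a given peak level before slew limiting sets in, from f = SR / (2π·Vpeak). A sketch, using the slew rates quoted below (the 10V peak level is an assumed example):

```python
import math

def full_power_bandwidth(slew_rate_v_per_us, v_peak):
    """Highest sine frequency reproducible at v_peak before slew limiting: f = SR / (2*pi*Vpk)."""
    return slew_rate_v_per_us * 1e6 / (2.0 * math.pi * v_peak)

print(f"{full_power_bandwidth(0.5, 10) / 1e3:.1f} kHz")   # uA741 (0.5 V/us): ~8 kHz at 10 V peak
print(f"{full_power_bandwidth(13.0, 10) / 1e3:.0f} kHz")  # TL07x (13 V/us): ~207 kHz at 10 V peak
```

The µA741 barely covers the audio band at full output, which is why it long ago fell out of favour for audio.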
+ +Slew rate is important for comparators used in high speed processing, because if too slow, power dissipation may become excessive and/or the process simply doesn't work properly. Opamps range from a very leisurely 0.5V/ µs (µA741 for example) through 13V/ µs (TL07x) and up to several hundred volts per microsecond (or more) for some specialty devices. However, just because an opamp has a high slew rate, that doesn't mean it has a short enough response time to be useful as a fast comparator.
+ +When a linear feedback system is pushed to the point where slew rate becomes an issue, the opamp operates open loop while the output is slew rate limited. That means that there is no feedback, so the requirements for a 'linear' system aren't met and the result is distortion. Slew rate is simply the maximum rate-of-change for the output of a device (opamp, comparator, audio power amplifier or industrial control system). Once the maximum is reached, it doesn't matter how much harder you push the input, the output can't change any faster.
+ +It's important to understand that slew rate is not necessarily equal for positive and negative going output signals. Depending on the circuit, it's not at all uncommon to find a high slew rate for negative-going signals, but a much slower slew rate for the positive-going transition (or vice versa). There may be cases where this can be used to your advantage, although I must confess that I can't think of any.
As noted above, you can use an opamp as a comparator, but compared to the 'real thing' the opamp will often be too slow. Even fast opamps are much slower than fairly ordinary comparators, and this is especially true when the opamp has a built-in compensation capacitor. The cap is used to ensure the opamp remains stable when feedback is applied, usually down to unity gain. For opamps that don't have the internal cap, there will be connections provided to allow the designer to add a compensation capacitor that's designed to maintain stability at the gain being used.
+ +When any opamp is used with high gain, the amount of compensation is much less than needed for low (or unity) gain. By using external compensation, the circuit can be optimised, providing a higher slew rate than is available from internally compensated devices. Most externally compensated opamps are also provided with input offset null pins. These are readily available in 8 pin packages, but they include only one opamp. Any 8-pin dual opamp must be internally compensated, because there are only enough pins to provide power, inputs and outputs.
+ +There are some dual externally compensated opamps in 14 pin packages, but they are not common. In general, if you need an uncompensated opamp, you will use a single package, but not all single opamps have provision for external compensation, so you need to make your selection carefully. The NE5534 is one example; it's a single opamp with external compensation and offset null. However (and this is why you need to check the datasheet), the NE5534 is already compensated for gains of three or more, so it isn't as fast as you might imagine. It also uses clamping diodes between the two inputs, making it unsuitable as a comparator in most cases.
+ +The drawing below shows an opamp connected as a comparator, and only Rule 2 applies. When the two inputs are at exactly the same voltage, the output is indeterminate, and it will be affected by the smallest change of voltage, such as the tiny variations we get due to normal thermal noise. The transition voltage is also affected by the opamp's input transistors, which will never be 100% identical. Given that the open loop gain of many opamps is well over 100,000 (100dB), it follows that a few microvolts difference between the two inputs is all that's needed to send the output from one supply rail to the other. In datasheets, the open loop gain may be specified as V/mV, so 200V/mV indicates a gain of 200,000 (106dB).
+ +The circuit for an opamp Schmitt trigger is shown below, along with the standard symbol for a Schmitt (the circle at the output shows it's inverting). The amount of positive feedback is set by R2 and R3. R1 is not needed if the input is DC coupled to the inverting input of the opamp, and its value is selected to suit the application. Supply voltages are not shown, but are assumed to be ±15V for the simulation.
+ +R3 applies a small amount of positive feedback, and that provides a 'dead band' between the two trip voltages. Assuming ±15V supplies and ±14V output swing, the input has to rise to +1.27V before the output will swing high, and -1.27V before it swings low again. As long as the input is between these two values, the output won't change state, so noise (from any source) is effectively rejected. To reduce the dead band, reduce the value of R2. For example, if R2 is 1k, the hysteresis is reduced to ±138mV, or 100 ohms reduces that further, to just 14mV. Rather than reducing R2, you can increase R3 if preferred. If a bipolar transistor opamp is used, you need to account for input current when selecting the value of R3.
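The trip voltages come from the positive-feedback divider: Vtrip = Vout(peak) × R2 / (R2 + R3). A sketch that reproduces the figures quoted above - note that the R2 = 10k / R3 = 100k starting values are assumptions chosen to match the quoted ±1.27V, not values read from the schematic:

```python
def trip_voltage(v_out_peak, r2, r3):
    """Schmitt trigger trip threshold set by the positive feedback divider (R3 feedback, R2 to ground)."""
    return v_out_peak * r2 / (r2 + r3)

# Assumed values consistent with the figures quoted in the text:
print(f"{trip_voltage(14.0, 10e3, 100e3):.2f} V")       # ~1.27 V
print(f"{trip_voltage(14.0, 1e3, 100e3) * 1e3:.0f} mV") # ~139 mV with R2 = 1k
print(f"{trip_voltage(14.0, 100.0, 100e3) * 1e3:.0f} mV")  # ~14 mV with R2 = 100 ohms
```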
+ +Note that the voltages described are the theoretical values - the input pair's differential offset voltage will affect the actual voltages. The opamp's peak-to-peak output swing also changes the trip voltages, especially when only a small amount of hysteresis is used. Some hysteresis is almost always needed if you have a slow input signal, such as a long time delay. Without it, the transition between high and low states will be poorly defined and may show a large noise signal as the output changes state.
+ +You also need to be aware that most opamps cannot swing their outputs to the full supply voltages, although some are specified for rail-to-rail output swing. Most CMOS opamps come very close, but all opamp output stages are affected by the load on the output. The datasheet is definitely your friend here (as always).
+ +When an opamp is used as a comparator, the most important specifications for reasonable speed are the slew rate and response time, although the latter is rarely specified for opamps. In general, it's better to use a real comparator than an opamp for anything operating at more than a few kHz. Naturally, this depends on the specific application, and it's the designer's job to determine the optimum part. Not all comparators are as fast as might be required either, and that makes it harder to find the best overall compromise.
+ +Note that the opamp Schmitt trigger can also be set up to be non-inverting. The inverting input is connected to the reference voltage (or ground), and the signal is then applied via R2. Because the current flowing through R2 is non-linear due to the positive feedback, it can couple switching transients directly to the signal source.
+ +You also need to verify that the opamp you use does not have protective diodes across the inputs, and that there is no phase reversal with high common mode voltages (this can eliminate the TL07x series of opamps, because they do exhibit a phase reversal). Also, verify (usually by experiment as it won't be in the datasheet) that there is no interaction between opamp sections of dual or quad packages. Unless you use a rail-to-rail output opamp, it may not interface properly with TTL logic circuits or even simple transistor switches. There is (usually) no problem with CMOS logic, but it needs to be verified.
+ +NOTE: While the TL07x family can be used as comparators for many low speed applications, beware! These devices (along with several other opamps) suffer an output phase reversal if their common mode voltage is exceeded. You must make certain that the input voltage can never approach or exceed the supply rail voltages. Based on the TL071 datasheet info for common mode input voltage, the worst case maximum common mode voltage is ±11V when using ±15V supplies; typical is said to be -12V to +15V under the same conditions. I suggest that you avoid these opamps if you need a comparator.
A circuit that uses an opamp comparator is Project 39, which uses a µA741 opamp because speed is not an issue. There are some applications where it doesn't make sense to use a true comparator, especially for very low speed circuits. Comparators are also used in A-D (analogue to digital) converters, and countless other circuits. Many can use opamps because they don't need high speed, while others need to be as fast as possible. For example, you couldn't use an opamp in a Class-D amplifier, because they are much too slow to be able to follow the audio and reference (triangle wave) signals. Opamps can also be used for mains frequency zero crossing detectors (there's more on that topic below).
+ + +As the name suggests, a comparator is designed to compare two voltages. The output state is determined by whichever input pin is the most positive. As with opamps, there will always be an input offset and this can cause errors when low input voltages are involved. Many comparators have provision for an offset null trimpot so the error can be adjusted out. Hysteresis can be used to minimise errors caused by noise, but may cause problems with some applications. For example, if there is hysteresis designed into a Class-D modulator, it will cause distortion of the output waveform.
+ +Comparators are used in many common applications, and Class-D amplifiers were mentioned above. A comparator has the incoming audio applied to one input, and a triangle wave on the other. The output is a rectangular waveform, with the mark-space (on-off) ratio varying depending on the audio input signal. This is shown with example waveforms in the article Class-D Amplifiers - Theory & Design. The circuit has to be fast, because the triangle reference waveform is usually over 100kHz (sometimes well over!).
+ +Like opamps, both comparator inputs must be referred to a suitable voltage, which can be ground or some other voltage set by a voltage divider. If an input is left open, the output will be unpredictable and the circuit won't work as expected - if at all. The input signal can be capacitively coupled to the input, but you still need a resistor (commonly to the reference voltage) to ensure that the proper DC conditions exist. Also like opamps, comparators are available in single, dual and quad versions, and in various package styles.
+ +Unlike opamps, many comparators have an open-collector output, and there isn't a transistor to pull the output high (I don't know of any that use a PNP output transistor and require a pull-down resistor, other than the discrete circuit shown below). You need to include a resistor from the output to the positive (or negative) supply. This is sometimes a nuisance, but comparators are usually used in a different way from opamps, and an open collector output is often more convenient (believe it or not).
+ +The LM311 is an example of an open collector output comparator. There are also comparators that are designed specifically to interface to TTL ICs, and are complete with a separate 5V supply for the logic outputs (the LM361 is an example). The open collector output can also drive a relay, provided the current is less than the maximum specified (50mA for an LM311). Diode protection must be added to the relay to protect the output transistor from high voltage when the relay turns off.
+ +Many comparator datasheets don't specify a slew rate, but tell you the propagation delay or response time instead. For example, the LM311 has a slew rate (from the graphs) of around 30V/µs, and the response time is specified to be 200ns. There are several dependencies and conditions that affect the slew rate and response time, and I suggest that you look at the datasheet to see the details. It's not particularly intuitive, so be prepared to spend some time to acquaint yourself with the terminology used.
+ +The LM311 is a fast comparator, and it has many options. As shown, the input section uses ±5V supplies, the relay is powered from +12V (referred to ground). A small positive input (456mV or more as shown) on pin 2 will activate the relay, but it can be prevented from operating by a logic signal applied to the 'Inhibit' input (this input is called 'TTL strobe' in the datasheet).
+ +If you wanted to trigger the relay based on a negative input, it's simply a matter of reversing the input pins, so pin 2 would be returned to Vref and the input applied to pin 3 instead. This level of flexibility doesn't appear with opamps, in particular the supply options. The output is referred to a separate pin (pin 1), so the inputs and output can be referenced to different voltages. An opamp circuit would need many more support parts to achieve the same result. The circuit shown is adapted from the LM311 datasheet.
+ +The datasheets for comparators can be quite confusing if you are used to reading the data for opamps, and they often have seemingly strange features. While the basic operation is similar to an opamp used open loop, there are options that you would never see for most typical opamps. There's no point trying to cover them all though, because (like opamps) there is an astonishing number of different devices, some straightforward, and others very different.
+ +You will see comparators with facilities to change the input device bias or a 'strobe', where the output can be switched on or off with an external signal from a microcontroller or other logic circuitry. As noted earlier, most have open-collector outputs, but some have a traditional 'totem-pole' output stage similar to that used with logic ICs.
+ +In some cases, and especially if you don't need extremely high speed, a CMOS comparator can be an excellent choice. They are typically low power (some drawing as little as 1µA supply current), usually have extremely high gain, and will usually be fairly well behaved. The LMC7211-N is an example: supply current is 7µA, and it will operate from 2.7V to 15V (maximum, between supply pins). Like most CMOS ICs, the supply voltage is limited to a typical maximum of 16V, and most are only available in SMD packages. However, they are a good choice when current is limited (such as battery powered equipment) and you need to interface with other CMOS (or TTL) gates or other logic ICs.
+ +Many comparators provide dual outputs as well as dual inputs. When dual outputs are available, they are (usually) complementary, so when one goes high, the other goes low. This provides greater flexibility when interfacing with logic, and can save the designer from having to include a separate inverter to obtain differential outputs.
+ + +If you wish to do so, it's fairly easy to make a comparator with discrete components. There's not really much point because most comparators are very reasonably priced, but building one is guaranteed to give you a better overall understanding. The circuit for a simple comparator is shown below, and as simulated it works rather well despite its simplicity. It's somewhat unconventional, in that the output transistor is PNP, while most commercial devices use an open collector NPN transistor. The rise and fall times are respectable, and response time is also fairly good. It won't beat any of the ultra-fast devices around, and obviously will occupy a great deal more PCB real estate than an IC, but it's a good learning tool.
+ +A simplified schematic also provides some insight into the inner workings. As shown below, the output pull-down resistor (R2) connects to ground, but it can just as easily connect to any other voltage, provided it's less than the +5V supply. There's no reason that it can't be connected to the -5V supply, but a voltage varying between 0 and 5V is compatible with most logic. This flexibility extends to most IC versions as well, although most use a pull-up resistor. This is typically connected to the +ve supply, but it can connect to any (almost always positive) voltage within the ratings of the device.
+ +To get the highest possible gain from a simple circuit, Q3 and Q4 form a current mirror as the load for the input pair. A resistor at the collector of Q1 could be used instead, but that reduces the available gain and the circuit doesn't work very well. Comparators usually have similar gain to opamps (typically between 50,000 and 200,000).
+ +The graph shows the input signal (red) and the output (green), and you can see the small delay between the input going high or low and the output doing the same. It's obvious that it takes longer for the output to turn off (580ns to zero) than it takes to turn on (300ns to +5V).
+ +Part of the difference is due to the use of a resistor to pull down the output, but Q5 also has to leave its saturation region which creates a further delay due to the stored base charge of the transistor. This can be reduced at the expense of greater complexity. Adding a large number of extra transistors is of little consequence in an IC but has a large impact on discrete circuits.
+ +As simulated, response time is well below 1µs, but as seen above, it's different depending on the polarity of the input signal. Rise and fall times are 65ns and 47ns respectively, measured using the standard procedure which measures between 10% and 90%. I don't think I quite believe that part, because simulators and real life can often diverge significantly. Is it as good as a cheap and cheerful LM311? No, and the LM311 will cost far less than the parts needed for the discrete version (the LM311 is available for well under $1, which is very hard to beat). Admittedly, the LM311 does need an output pull-up resistor in most cases, but that's true of most comparators.
+ +Many comparator datasheets include a simplified schematic of the device, and these can be used for ideas. However, most are much more complex than you may have expected - the complexity is necessary to achieve very high speed. Current sources are often shown as a symbol, rather than the actual circuit. These are easy to include in a simulation, but less so in 'real life'.
+ + +Sometimes, you need to monitor a signal to ensure that it remains within specific boundaries. A window comparator will remain off as long as the input is within the 'window' of allowable limits. A window comparator isn't a single part - it's built using two comparators, with appropriate biasing resistors or voltage references to provide the upper and lower bounds of the 'window'. Window comparators are common in industrial processes to ensure that a particular process is functioning within allowable limits.
+ +They have also been used in alarm systems intended to detect tampering by intruders. You can also use a window comparator to ensure that an audio signal remains below the clipping level, so for a circuit operating with ±15V supplies, you may want to indicate overload should the signal exceed ±8V. The window ranges from -8V to +8V, and as long as the signal remains within these limits, the overload LED stays off.
+ +The above shows a window comparator that will provide a low output (drawing current through the LED and R5) if the input voltage goes above 2/3 Vs or below 1/3 Vs (Vs is the total supply voltage, 30V), and the circuit is similar to the comparator arrangement used in the 555 timer. In this case, the 'overload' LED will come on if the signal voltage goes above +5V or below -5V. The comparator outputs are simply joined together, something you cannot do with opamps. If power consumption is an issue, a CMOS device could be used. Some have a total current drain of around 1-2µA, but the total supply voltage is usually limited to around 16V.
+ +To change the range where the overload LED comes on, simply change R3. For example, increasing R3 to 22k means the LED will come on if the input voltage exceeds ±7.86V (close enough to the ±8V mentioned above). You only need Ohm's law and the voltage divider formula to work out the value needed. If you need to detect that a signal has strayed by only a small amount, it may be necessary to use comparators that provide DC offset adjustment to ensure an accurate result.
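The threshold arithmetic above can be checked with nothing more than the voltage divider formula. The sketch below assumes a three-resistor string across ±15V supplies, with R3 as the middle resistor of an initially equal 10k/10k/10k divider (the designator positions are my assumption, inferred from the quoted voltages):

```python
def window_limits(r_top, r_mid, r_bot, v_pos=15.0, v_neg=-15.0):
    """Upper/lower comparator thresholds for a three-resistor divider
    strung across the supplies (resistor positions are assumed)."""
    v_s = v_pos - v_neg                      # total supply (30V here)
    total = r_top + r_mid + r_bot
    upper = v_neg + v_s * (r_bot + r_mid) / total   # 2/3 Vs tap
    lower = v_neg + v_s * r_bot / total             # 1/3 Vs tap
    return upper, lower

print(window_limits(10e3, 10e3, 10e3))   # (5.0, -5.0): the +/-5V window
print(window_limits(10e3, 22e3, 10e3))   # ~ +/-7.86V with R3 = 22k
```

With equal resistors the taps sit at 1/3 and 2/3 of the total supply (exactly as in the 555), and increasing the middle resistor to 22k reproduces the ±7.86V figure quoted above.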
+ +Note that the drawing doesn't show supply bypass capacitors (one from each supply pin to ground), but these are essential because many comparators will oscillate if they are not included. This is especially important with very fast devices. The bypass caps should be as close to the IC as possible, and all PCB tracks to the inputs should be kept short.
+ +To achieve the same result using a dual opamp, you would need to add 2 diodes (one at the output of each opamp) so the outputs can be added without causing the opamp outputs to draw excessive current. The open collector outputs of the comparators mean that they can simply be joined, and either U1A or U1B can pull the cathode of the LED low to indicate that the window limit has been exceeded in either polarity.
+ +Multi-level comparators can also be made using much the same principle as shown above, but with more sections in the voltage divider string and multiple comparators. This technique is used in the internal circuitry of the LM3914 (linear) and LM3915 (log) LED bargraph drivers. Equivalent circuits are shown in both datasheets, and if you need to know how to create a multi-level comparator these are a good reference.
+ + +Many of the oscillators that are commonly built using opamps will work better with a comparator. For low frequencies (less than 1kHz or so) this is of no consequence, but no normal opamp can be used as a crystal oscillator running at 10MHz or more. Comparator oscillators are limited to generating squarewave outputs. If you need a sinewave, that's a linear function, and therefore requires opamps (integrated or discrete).
+ +The RC oscillator is shown in almost every opamp application note ever created, and it certainly works well with most opamps up to a few kHz or so. If you use an opamp, R5 is not needed, but it is required here because the comparator has an open collector output. When built using a comparator, response can easily be extended to 1MHz using 'ordinary' comparators, but much higher frequencies are easily achieved. As shown, frequency is around 95kHz, and it can be adjusted easily by making R4 variable. The circuit is adapted from the LM311 datasheet.
+ +The crystal oscillator shown is adapted from the LT1016 datasheet, and that can be used up to 25MHz. Such speeds are unthinkable with opamps. Some may get you to 1MHz or so (with some difficulty), but a fast comparator makes it seem easy. Both oscillators have squarewave outputs. Because some of the pins on comparators have 'odd' assignments, the various grounded pin assignments are also shown, and two unused pins are included in the listing for the LM311.
+ +To give you an idea of how 'odd' the pin assignments can be, pins 5 & 6 on the LM311 are either for offset null or to increase the input stage current, and pin 6 can also be used as a 'strobe' input to disable the output. Naturally, only one of these extra functions can generally be used at any one time. The output can also be taken from pin 1 (normally GND) and used as an emitter follower, by tying pin 7 (output) to the positive supply and using a resistor to ground as a pull-down.
+ +Confused? Welcome to the wonderful world of comparators.
When people think of timers, the 555 almost immediately springs to mind. This isn't unreasonable of course, because it's ideally suited to the task. The 555 timer has comparators at its heart. Again, not at all unreasonable. However, not every timer needs a 555, although they are cheap, ubiquitous and work well. To learn more about the 555 timer, have a look at the 555 Timer article. If you wish to experiment with a comparator by itself, there's much to be gained in the knowledge department.
+ +The voltage across a capacitor over time is determined by the capacitance and the charging current. When a resistor from a fixed supply voltage is used to charge the cap, the voltage across the resistor falls as the cap charges, reducing the charge current and producing the familiar exponential charge waveform. This is visible in the graph below (VC1). This class of timer is not capable of great accuracy, but that's not always necessary. Repeatability is usually better than you might expect, provided the supply voltage is regulated.
+ +The timer is started by pressing the button. This discharges C1 (via R1 which limits the capacitor discharge current), and timing starts when the button is released. This general class of timer is usable for medium time delays of up to a few minutes. The delay time can be varied by means of the pot (VR1). The graph shows the voltages when VR1 is at minimum resistance, and delay time is increased with increasing pot resistance.
+ +Press the button, C1 is discharged, and the output of U1 goes from low to high. When the button is released, C1 charges until its voltage reaches the 8.25V threshold (Vref). Once the threshold is reached, the output goes low again, indicating that the selected time has elapsed. Note the diode in series with R6 - that applies positive feedback to provide unidirectional hysteresis - it only works as the output falls from high to low, but has no effect on the trip voltage set by the voltage divider (R3, R4). When the output falls low, the reference voltage is reduced from 8.25V to around 6.5V (blue trace). This ensures a fast and unambiguous output transition.
+ +With the values shown, the time delay is from 11.5 seconds up to about 125 seconds by adjusting VR1 (maximum resistance gives maximum time delay). Be aware that this circuit is intended as an example only, and is not a recommended design. The most obvious problem is that the time can be extended simply by keeping the button pressed, so it can't be relied upon if an absolutely reliable delay is needed. It's also a bad idea to use electrolytic caps in a timing circuit, because they have a large capacitance tolerance and aren't especially stable with temperature. There are other problems too, so please use this as an example so you can understand the basic function, rather than imagining it's necessarily a usable design as shown.
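The delay follows the standard RC charging equation, t = RC × ln( Vs / ( Vs − Vth )). The supply and component values below are assumptions (the article doesn't list them) chosen only to show that plausible values reproduce the quoted 11.5 to 125 second range:

```python
import math

def rc_delay(r, c, v_supply, v_threshold):
    """Time for C (starting fully discharged) to charge through R
    from v_supply until it reaches v_threshold."""
    return r * c * math.log(v_supply / (v_supply - v_threshold))

# Assumed: 12V supply, 8.25V threshold, C1 = 100uF, and a timing
# resistance of 100k fixed plus a 1M pot (VR1). Illustrative only.
print(rc_delay(100e3, 100e-6, 12.0, 8.25))        # ~ 11.6s, pot at minimum
print(rc_delay(100e3 + 1e6, 100e-6, 12.0, 8.25))  # ~ 128s, pot at maximum
```

Note how sensitive the result is to the threshold: as Vth approaches the supply voltage the log term (and the timing error) grows rapidly, which is one reason thresholds around 2/3 of the supply are popular.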
+ +The circuit shown will work equally well with an opamp or a comparator, but the latter has the advantage of a full rail-to-rail output, limited only by the load on the output. This should be no less than 10 times the value of R5 to minimise errors. If the load current is too great (relative to the current through R5), the circuit may malfunction.
+ +There is one use for this style of timer - a delayed switch for lighting. As long as the switch is closed, the light will be on. When the switch is turned off, the light will remain on for the preset delay time, and turns off when the delay has expired. Yes, I know it can be done more simply, but this is an example to demonstrate that even apparently 'flawed' circuits often have very valid uses.
+ + +There are many places where zero crossing detectors are used. Mains phase control switching is one very common usage, as a zero crossing detector is needed to detect the beginning of each cycle. Another is where an audio signal is required to switch 'silently', so switching takes place when the audio signal passes through zero. Zero crossing detectors are also used for signal generating applications, such as tone burst generators. Comparators make very good zero crossing detectors, and the circuit shown in Figure 2 is one way to do it.
+ +The amount of hysteresis needed is very low (depending on the signal level), or you can 'cheat', and use an amplifier in front of the comparator, as shown next. It doesn't matter if the amplifier stage clips (in fact it's better if it does), because we are interested only in the period where the input voltage is at (or close to) zero. The output pulse frequency from the type of detector shown is double the input frequency, because there is one pulse for every zero crossing, so two per input cycle.
+ +A disadvantage of comparators is that they will usually produce a positive output as the signal passes through zero from negative to positive, and a negative signal for the other half cycle. This means that additional processing is needed to provide (say) positive pulses for each crossing, regardless of the signal polarity. If you need a zero crossing detector that produces only positive pulses each time the input passes through zero, you could use something like the circuit shown below.
+ +The first stage amplifies the voltage (x38), and along with the next stage (a unity gain inverter) outputs a full-wave rectified output. As the input signal passes through zero, the output from the rectifier is also zero, and this is detected by the comparator, which produces a positive pulse. The width of the pulse is largely determined by the amount of gain in the first stage and the input frequency, and with the values shown provides 8.5µs pulses with a 2V p-p sinewave input signal at 1kHz (less than 0.1% duty cycle). The pulse width (and hence the duty cycle) can be reduced by increasing the gain of U1A, which provides better resolution of the true zero crossing point. It will be necessary to use opamps that provide DC offset adjustment if very high gain is used. C1 is used to minimise offset for less critical applications.
+ +The reference voltage at the +ve input of U3 is nominally about 380mV, rising to 420mV when the output is high. This isn't much hysteresis, but it's sufficient to ensure clean transitions at each zero crossing point. More hysteresis can be used (and/ or the reference voltage increased) by increasing the value of R8. This will also make the pulses wider, so the gain of U1A can be increased to compensate.
+ +The circuit is well behaved and very flexible, and can easily be changed to suit your specific needs. It's more complex than most that you'll see on the Net, but it has the advantage of being easily adjusted, and it produces a positive pulse at each zero crossing. If greater speed is needed, use faster opamps and a faster comparator. Note that supply bypass caps are essential, but are not shown for clarity. Note that this circuit is not intended to be used with mains voltages!
+ +For more ideas on zero crossing detectors in general, see AN005 - Zero Crossing Detectors on the ESP website.
+ + +As with most of the ESP articles, this is simply an introduction to the subject. Manufacturer datasheets are usually one of the best places to start if you want to know more, and where available, application notes can provide you with a great deal of additional info, and often provide specific examples for many different arrangements. Naturally, they reference only that maker's part(s), but you can often substitute other devices to increase performance or reduce cost.
+ +In the audio field, there isn't usually a great demand for 'true' comparators, because the signals of interest are almost always comparatively slow. A-D converters and Class-D modulators are another matter of course, but these are most commonly IC based, and all the required processing is usually within the IC itself. In some cases, the flexibility of comparators makes them a better choice than a circuit using opamps, particularly for overload indicators and similar circuits, but the speed of even 'slow' comparators is such that it's easy for them to inject noise with switching transients.
+ +Even opamps used as comparators can easily produce switching transients, and it's generally a good idea to provide isolation of the power supply, by using ferrite beads or low value resistors for example, and with separate decoupling capacitors. The isolation is needed to prevent fast transients from affecting the audio circuits. Careful attention is also needed for the grounding arrangements. A 'shared' ground is usually a recipe for unwanted interference, so you need to work out a plan to make sure that ground currents are separated.
+ +As should be obvious, comparators are very different from opamps, and while they are far more flexible, they are also a lot less forgiving. Most opamps specify their short-circuit period as 'indefinite', but many comparators either can't tolerate a shorted output, or can do so for a limited time only (some specify 10 seconds, but even that is a risk). Supply bypassing is critical with any high speed comparator (much more so than opamps), and PCB layout has to be just right or you get oscillation as the output transitions from one state to the other.
+ +If you intend to use comparators in a project, you must consult the datasheet and/or any available application notes, because you need to know what precautions are necessary to ensure reliable operation. Their greatest advantage (speed) is also the property that makes them cantankerous if everything isn't to the liking of the IC.
+ +The references shown below are easily found on the Net, and some devices are available from multiple sources (although TI now owns National Semiconductor). There are literally hundreds (perhaps thousands) of different devices from many manufacturers, and it would not be practical to even try to provide examples and references to them all. You can also look at Maxim, ON Semiconductor, Intersil, Toshiba, Analog Devices, ST Microelectronics - the list goes on. You can get comparators using bipolar transistors or CMOS technology, fast and slow, micro-power, etc., so there's definitely a suitable device for every occasion.
+ +Note: The inclusion (or non-inclusion) of any manufacturer does not imply any preference or otherwise on my part, nor does it indicate any connection whatsoever to those listed. The manufacturer references shown are simply to assist the reader, and are listed for no other purpose.
Elliott Sound Products - Coupling & Bypass Capacitors
There seems to be some mystery in the selection of both coupling and bypass caps for audio applications. The selection is actually quite simple, and is only based on a few criteria. The value is usually not especially critical, and there are a few general guidelines that can be applied in the vast majority of cases. There is only one formula that's really needed - at least for coupling capacitors ...
+ +C = 1 / ( 2π × f × R )   or ...
+ +f = 1 / ( 2π × C × R )
The above formulae define the lower -3dB frequency, where the capacitive reactance is equal to the resistance. While one might think that when the two impedances are equal the attenuation should be 6dB, this is not the case because of phase shift.
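Both forms of the formula are trivial to evaluate. Using the 22k input impedance that's typical of ESP designs, a -3dB point at 20Hz needs only about 0.36µF, while the commonly used 1µF value places the -3dB point down around 7.2Hz:

```python
import math

def cap_for_cutoff(f, r):
    """C = 1 / (2*pi*f*R): capacitance for a given -3dB frequency."""
    return 1.0 / (2 * math.pi * f * r)

def cutoff_for_cap(c, r):
    """f = 1 / (2*pi*C*R): -3dB frequency for a given capacitance."""
    return 1.0 / (2 * math.pi * c * r)

print(cap_for_cutoff(20.0, 22e3) * 1e6)   # ~ 0.36 (uF) for -3dB at 20Hz
print(cutoff_for_cap(1e-6, 22e3))         # ~ 7.23 (Hz) for 1uF into 22k
```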
+ +Input, feedback and DC supply paths in power amps and preamps will always have a defined resistance, and the capacitor value is chosen to ensure that the lowest frequency of interest (typically 20Hz) is passed without attenuation. While the capacitor value can also be used to form a basic high-pass filter, this will often be rather poorly defined, and where a specific lower frequency limit is really needed, this is best done using a dedicated filter.
+ +This might be needed where a vented woofer is used, because frequencies below the box cutoff frequency can cause huge cone excursions and speaker damage. A good example of such a filter is shown in Project 99, and this is designed for 36dB/octave rolloff below the designated frequency. As with all things, care is needed, because all filters created by coupling caps introduce two potentially unwanted effects - phase shift and group delay ...
+ +These effects don't normally cause problems at extreme low frequencies, because the loudspeaker and room usually have a far greater influence. They are mentioned simply because they exist, and you need to know this. The two are closely related, but for simple (6dB/octave) filters, they are generally considered benign. One school of "thought" claims that the best cap is no cap. This is fundamentally nonsense and extremely silly - there is absolutely no requirement for DC coupling in any audio amplifier. DC is a decidedly unwanted component, and invariably causes far more problems than the relatively small rolloff at very low frequencies caused by the capacitor.
+ +Bypass applications are more complex. The DC supply impedance is dominated by resistance, but includes inductance. While small, the inductive effects become troublesome at very high frequencies (such as those frequencies where fast opamps want to oscillate).
+ +For a more detailed look at capacitors in general, have a look at Capacitor Characteristics. That article covers many of the points made here, but in somewhat greater detail.
+ +The purpose of a coupling cap is to pass the wanted audio (AC) signal, while blocking any DC from preceding stages or source components. DC will cause pots to become noisy (scratching noises when operated), and cause relatively loud clicks when (if) muting relays or similar are used. Since DC carries no audio information, there is no reason to allow it through your audio system. Some power amps will misbehave very badly if DC is present, and even small DC offsets into the speakers (anything above ~500mV) displace the cone from its central position, and increase distortion. There is also a small static power dissipation - 1V DC across a 4 Ohm loudspeaker causes a constant static dissipation of 250mW. Not much, but the cone displacement can be much greater than you might expect.
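The dissipation figure is just Ohm's law rearranged as P = V² / R:

```python
def static_dissipation(v_dc, r_load):
    """Power dissipated by a DC offset across a loudspeaker: P = V^2 / R."""
    return v_dc ** 2 / r_load

print(static_dissipation(1.0, 4.0))   # 0.25 (W): 250mW for 1V into 4 ohms
```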
+ + +It is often possible to eliminate both input and output caps with preamps, and even the feedback bypass cap can be omitted. The disadvantage of this is that some sources may have a small (or perhaps not so small) DC offset - especially digital sources that use a single 5V supply for the audio output. A capacitor is mandatory for these because they have a 2.5V DC offset, and if this is not removed completely, most DC connected preamps will simply saturate - the output voltage will be 2.5V multiplied by the preamp's gain. A gain of only 6 times (16dB close enough) will convert the 2.5V into the full 15V maximum output from the preamp.
+ +Any DC in a preamp is bad, because it will appear across the volume pot, and this will become noisy. Switching from a source that has no DC offset to another that has some (even 100mV or less) will cause a loud BANG through the speakers when the source switch is changed. This is undesirable, to put it mildly.
+ + +In most cases, a polyester cap is the best choice. Polypropylene is popular too, but they are physically much larger and can easily dominate the preamp PCB. Some people prefer polypropylene because the popular audio myth tells them that the dielectric losses are so much smaller than polyester, and therefore they sound better. This is complete rubbish, and can be ignored. Dielectric loss (or dielectric absorption) is immaterial for slow, low level signals. Audio certainly seems to be a very demanding application, but it is very slow by comparison to other electrical signals, and capacitor losses are less than negligible in any sensibly designed circuit.
+ +Another common choice is a bipolar electrolytic (or a polarised electro for some applications). While it is easily demonstrated that these caps can create distortion, one must examine not the input voltage, but the voltage across the capacitor. With a 1µF capacitor, the voltage across the cap at 20Hz is still very low. With 1V RMS input signal, the voltage across the cap is only 343mV RMS at 20Hz. While this may well create a small amount of distortion if a non-polarised electro is used, this distortion will still be inaudible in almost any hi-fi system. At 100Hz, the voltage has fallen to 72mV, and at 1kHz it's only 7mV. The distortion caused by such a small voltage will rarely (if ever) be measurable, let alone audible. The voltage across the cap at any low frequency is easily reduced by increasing the capacitance value.
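The voltages quoted can be reproduced (to within a few millivolts of rounding) by treating the capacitor's reactance and the following input resistance as a simple AC voltage divider:

```python
import math

def v_across_cap(v_in, f, c, r):
    """RMS voltage across a series coupling cap feeding resistance r."""
    xc = 1.0 / (2 * math.pi * f * c)          # capacitive reactance
    return v_in * xc / math.sqrt(r ** 2 + xc ** 2)

# 1V RMS through a 1uF cap into 22k:
for f in (20.0, 100.0, 1000.0):
    print(f, round(v_across_cap(1.0, f, 1e-6, 22e3) * 1000, 1), "mV")
```

The results are roughly 340mV, 72mV and 7mV at 20Hz, 100Hz and 1kHz respectively, matching the figures in the text above, and doubling the capacitance roughly halves the voltage across the cap at any given low frequency.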
The value must be chosen as described in the introduction - but with a slight twist. If the lowest frequency you need is 20Hz, the -3dB frequency is normally chosen to be around 1/2 to 1/3 of that, so it should fall somewhere between about 6-10Hz. A common choice for ESP projects is to use a 1µF cap, and an input impedance of 22k. The -3dB frequency is just over 7Hz, and at 20Hz the signal is only 0.55dB down. Since few speakers can manage to get that low anyway (and the room will make a real mess of such low frequency signals), this is a good compromise between safety (protection from very low frequency signals) and good bass performance.
Now, there is nothing at all to say that you can't use a 1,000µF coupling cap, but there's simply no point. That would give a -3dB frequency of 7.2mHz (milli Hertz) - and affords no useful protection against subsonic frequencies.
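Both corner frequencies come straight from f = 1 / (2πRC), and the loss at any frequency follows from simple first-order behaviour. A quick check (helper names are mine):

```python
import math

def f3(r, c):
    """-3dB corner of a first-order RC high-pass filter."""
    return 1 / (2 * math.pi * r * c)

def atten_db(f, fc):
    """Attenuation (dB, positive number) of that filter at frequency f."""
    return 10 * math.log10(1 + (fc / f) ** 2)

fc = f3(22e3, 1e-6)                 # ~7.2 Hz for 1uF into 22k
print(f"fc = {fc:.2f} Hz, loss at 20 Hz = {atten_db(20, fc):.2f} dB")
print(f"1,000uF gives fc = {f3(22e3, 1000e-6) * 1000:.1f} mHz")
```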
Figure 1 - Coupling Caps in Action
In Figure 1, the red trace shows the effect of using a single 1µF cap into an impedance of 22k. The green trace shows what happens if you have two identical circuits (both 1µF, 22k), separated by a gain stage. The gain has been set to unity for clarity. With a single stage, response at 10Hz is -1.8dB, and with 2 stages is -3.6dB. At 20Hz, the figures are roughly -0.5dB and -1.1dB respectively. If you think that the low frequency response will be too limited by this, you may use (say) 10µF caps - typically bipolar (non-polarised) electrolytics. Using this value, response at 10Hz is only 22mdB (milli-dB) down for a single stage, and 45mdB for 2 stages.
Potentially more irksome to some is group delay and/or phase shift. A 1µF cap gives a group delay of 7.5ms (at 10Hz) for one stage, and 15ms for two stages. The corresponding phase shift is about 36° (single stage) and 72° (two stages). While this might seem to be an issue, in the vast majority of cases the speaker box and room will create far more phase shift and group delay than any simple filter ever will. At a more sensible frequency of 20Hz, the group delay is reduced to 2.56ms (one stage) and 5.1ms (two stages).
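For a first-order RC high-pass, the group delay per stage is (1/w0) / (1 + (w/w0)^2), where w0 is the corner in radians per second. The response and delay figures quoted can be checked with a short sketch (function names are mine; buffered identical stages assumed, as in the Figure 1 circuit):

```python
import math

def hp_loss_db(f, fc, stages=1):
    """Loss (dB) of `stages` identical, buffered first-order high-pass filters."""
    return stages * 10 * math.log10(1 + (fc / f) ** 2)

def hp_group_delay(f, fc, stages=1):
    """Group delay (seconds) of the same cascade."""
    w, w0 = 2 * math.pi * f, 2 * math.pi * fc
    return stages * (1 / w0) / (1 + (w / w0) ** 2)

fc = 1 / (2 * math.pi * 22e3 * 1e-6)    # ~7.2 Hz corner (1uF, 22k)
print(f"10 Hz loss: {hp_loss_db(10, fc):.2f} dB / {hp_loss_db(10, fc, 2):.2f} dB (2 stages)")
print(f"10 Hz group delay: {hp_group_delay(10, fc) * 1000:.2f} ms")
print(f"20 Hz group delay: {hp_group_delay(20, fc) * 1000:.2f} ms")
```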
Vented speakers in particular often have significant group delay and associated acoustic phase shift. Despite many claims to the contrary, there is actually very little to indicate that phase shift is audible, provided it is static. Moving phase shift (an example is a mid-bass driver - so-called 'Doppler' distortion) can be very audible if the shift and rate of change is high enough, although this is uncommon with most mid-bass drivers.

The days of single supply amplifiers with large electrolytic coupling capacitors are now almost over, although there are still a few small low power amps that are built that way. Because these amplifiers are almost invariably considered 'lo-fi' and will normally drive small speakers in horrible small plastic boxes, the coupling cap doesn't make much difference.
If such an arrangement were to be used in anything serious, one would make the cap very large. It is important that the AC voltage across the capacitor remains as low as possible, otherwise there will be significant measurable distortion at the lowest frequencies. Some early amps that used a speaker coupling cap included it in the feedback loop, thus letting the feedback correct the frequency response droop and (at least to an extent) capacitor distortion. This is generally a poor choice though, and is no longer relevant.
Of course, there are other coupling caps too. One in particular is the feedback bypass cap. At various times, there have been some extraordinary arrangements used to either eliminate this cap entirely (a bad choice as we shall see), or concoct little networks that supposedly make the cap's contribution less intrusive. By far the simplest arrangement is to use a large value capacitor - one that is at least 10 times greater than theoretically needed.
While it would be nice to have the luxury of using the same ratio for speaker coupling caps, this makes the capacitor overly large and expensive. For example, a cap intended to couple a single-supply power amp into 4 ohms down to 20Hz should be 20,000µF if we apply the same formula. Because this cap will charge through the speaker, the rate of change of voltage must be kept low enough to prevent speaker damage, so the amp has to settle to the ½ voltage rather slowly. To maintain a peak speaker current of (say) 200mA through a cap of that size, the voltage can change at no more than 10V per second. This is not a major issue, but does need to be mentioned.
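The 10V per second figure follows directly from the capacitor equation I = C · dV/dt, rearranged for the voltage slew rate:

```python
# For a given peak charging current, the voltage slew across the cap is I / C.
C = 20_000e-6      # 20,000uF speaker coupling cap (value from the text)
I_peak = 0.2       # 200mA allowable speaker current while charging
print(f"max dV/dt = {I_peak / C:.1f} V/s")   # 10 V/s, as stated in the text
```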
If any power amp is allowed to operate to DC, some interesting but undesirable factors come to light. The first is that the amp will amplify DC - any small DC that finds its way to the input will be amplified, putting speakers at risk and likely pushing the cone out of the centre of the gap between the pole pieces. This increases distortion and reduces power handling. If the amp should be driven to clipping with an asymmetrical waveform (most audio), there is a DC component that's generated. It doesn't happen if an amp is AC coupled, and since no instruments create DC and no recordings contain it, there is absolutely no reason to reproduce any DC that may happen to sneak into the system.
Power amplifier coupling caps will generally be electrolytic types, because the values involved are large and film capacitors are simply too bulky and expensive. While many people don't like using electros, far more serious problems arise if the feedback cap is to be a film type. One way is to use a high impedance feedback network, but this leads to more noise, and much greater susceptibility to noise pickup from external sources. The other way is to use a large bank of film caps, but this will also cause problems with noise susceptibility.

Tantalum caps are specified in some cases, but I will never use them because they have a worldwide reputation for being unreliable. There are (allegedly) some newer types that are far better, but tantalum caps earned my everlasting distrust many years ago, and I have had no reason since to change my opinion.
If the feedback network uses a 22k resistor with 1k to ground (most ESP designs use this combination), the cap needs to have a reactance of no more than 100 ohms at the lowest frequency of interest. For typical 20Hz operation, you can calculate the value as 80µF ... for all normal applications somewhere between 100 and 220µF is perfectly alright. This keeps the AC voltage across the capacitor small, so distortion is minimal. Certainly it will be at least an order of magnitude lower than any loudspeaker at 20Hz.
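The 80µF figure is just the standard reactance formula rearranged: C = 1 / (2πf·Xc). A one-liner to check it (helper name is mine):

```python
import math

def cap_for_reactance(f, xc):
    """Capacitance whose reactance equals `xc` ohms at frequency `f`."""
    return 1 / (2 * math.pi * f * xc)

# 100 ohms at 20 Hz -> ~80uF, as quoted
print(f"{cap_for_reactance(20, 100) * 1e6:.0f} uF")
```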
While it is generally considered bad form to use polarised electrolytic caps with no polarising voltage, in reality it generally doesn't bother the cap in the least. The requirement for long life when used like this is that the voltage across the cap must be as low as possible - certainly less than 1V, and preferably less than 100mV. I have seen many examples of electrolytic capacitors that have been used like this for 20 years or more, and still perform just as well as a brand new cap.

This is an area where there is some confusion, and a great deal of disinformation ... ok, it's not actually disinformation, it's complete bollocks! The purpose of a bypass capacitor is to maintain a low impedance for the DC supply, at all frequencies where the circuit has gain. With many circuits, this extends to several MHz, and even small lengths of wire or PCB trace can introduce enough inductance to make the circuit unstable.

Bypass capacitors serve one function - to keep the impedance low. When you see claims that large electrolytic capacitors have lots of inductance, you are reading nonsense. Contrary to common belief, the coiled up foil in a capacitor does not constitute an inductor. There is no need for me to reproduce everything described in Capacitor Characteristics, so I suggest that you read that if you want all the details.

High speed opamps must have good bypassing. Most of the time, this will be between the power supplies, avoiding the earth (ground) circuit completely. A normal opamp has no knowledge of earth, ground planes or anything else earth related. It is only interested in the voltages present at its two inputs, and when used in linear mode will attempt to make them the same voltage.
Accordingly, bypass caps do not need to connect between each supply and the signal earth. If there is noise on the power supply, caps to earth will transfer it from the supply (where it may be completely harmless) to the signal earth, where it can induce noise into the circuit. My projects recommend low noise linear supplies, and generally use a couple of caps between each supply and earth, but the remaining bypassing is between the +ve and -ve supplies only.

Many of today's opamps are quite fast (some are very fast), and without proper bypassing they will often oscillate cheerfully. Oscillation frequencies are usually well outside the expected frequency range, and are usually well over 1MHz. PC sound card based oscilloscopes are useless for fault finding at this level, because they are limited by the sampling frequency of the sound card. Even at the highest available sampling rate (192kHz, but these are rare and expensive), you cannot see any frequency over 90kHz or so. Sometimes you might get a result using an RF detector probe (see Project 74 for an example).
In general, a proper oscilloscope is indispensable for any DIY projects. These days, you get a lot of oscilloscope for your money, but you have to be prepared to take the time to learn how to use it properly, and how to make best use of the features offered.

Bulk bypass caps (where the DC enters the board) are almost always electrolytic, and can be anything from 10µF to 100µF or more, depending on the current drawn by the circuit. While most of the basic opamps don't need bypass caps across each device, a 100nF multilayer cap is cheap insurance, and allows you to use even very fast opamps if you so desire.
The only cap worth considering for opamp bypass is the multilayer ceramic. These have many problems (the value varies with voltage and temperature, for example), and do introduce measurable distortion. However, they are used on the power supply pins, and distortion of DC is simply a silly concept. I have heard people claim that these caps should never be used for bypass because they ruin the sound, but this is simply nonsense.
Not one person who will make (or stand by) these silly claims will ever conduct a double-blind test, they will not measure the results to provide proof, nor will they accept that they are talking complete rubbish. However, these claims are rubbish, and should be ignored until someone offers proof that they can hear DC, and that it affects the music in a measurable way. I don't recommend that anyone holds their breath.
Power amplifiers are generally comparatively low speed, but bypassing is almost always needed unless the amp is only millimetres from the power supply. It is fairly common for power amps to use bypass caps ranging from perhaps 10µF up to 220µF or more, and these are often in parallel with smaller caps.
While adding small film or ceramic bypass caps certainly does no harm, it usually makes no difference to the amp's performance whatsoever. As noted in the Capacitor Characteristics article, large value bypass caps are always better than low values. Connecting small caps in parallel with high value electrolytic caps usually achieves nothing at all. It is common to see amplifier power supplies with perhaps 10,000µF main filter caps, paralleled with 1µF film caps and perhaps 10nF ceramics. The small caps are simply wasted - they do no harm, but their reactance is so high compared to that of the 10,000µF main filter cap that they achieve nothing at all.
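The reactance comparison is easy to make concrete. A short sketch (idealised caps only - ESR and lead inductance are ignored here, and are covered in the Capacitor Characteristics article):

```python
import math

def xc(f, c):
    """Capacitive reactance in ohms (ideal capacitor)."""
    return 1 / (2 * math.pi * f * c)

# Compare a 10,000uF filter cap with paralleled 1uF film and 10nF ceramic caps.
for f in (100, 100e3):
    print(f"{f:>8.0f} Hz: 10,000uF = {xc(f, 0.01) * 1000:.3f} mohm, "
          f"1uF = {xc(f, 1e-6):.2f} ohm, 10nF = {xc(f, 10e-9):.0f} ohm")
```

At 100Hz the big electro is around 10,000 times lower in reactance than the 1µF film cap, and it's still far lower at 100kHz, so the small caps carry next to none of the current.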
If it makes you feel better to use them, then by all means do so. They do no harm, and will not adversely affect the sound of the amp in any way. However, if you do include them, don't expect the amp to sound "better", because it won't. Needless to say, this means a proper double-blind test, not a silly test where those involved know if the caps are in or out of circuit. It is also a requirement of any such test that the amp is verified as being free of any form of oscillation before running the test.
Also, remember that even a few centimetres of wire can introduce inductance (approximately 5-6nH/cm), and that may cause parasitic oscillation. Bypassing is not an exact science, and on occasion you will find that you really do need a small bypass cap in an unlikely position. Again, without an oscilloscope, finding and fixing power amp oscillation is usually impossible.
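Using the 5-6nH/cm figure, the reactance of even a short wiring run becomes significant at radio frequencies. A rough sketch (the 10cm length and the 100nF comparison cap are my illustrative choices, not from the text):

```python
import math

L = 10 * 5e-9                      # 50nH for 10cm of wire at ~5nH/cm
for f in (1e6, 10e6):
    xl = 2 * math.pi * f * L                   # inductive reactance of the wire
    xc = 1 / (2 * math.pi * f * 100e-9)        # reactance of a 100nF bypass cap
    print(f"{f / 1e6:.0f} MHz: wire XL = {xl:.2f} ohm, 100nF Xc = {xc:.3f} ohm")
```

By 10MHz the wire's reactance is roughly twenty times that of the cap, which is why the bypass cap has to be close to the device it protects.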
While it is common in low level circuitry - such as preamps - to use bypass caps between the supply rails with no connection to earth, this usually doesn't work with power amps. This is because significant current flows in the earth/ground circuit because of the speaker return. Almost all power amps will use caps from each power supply to earth, and this includes multi-supply amps (Class-G for example).

As noted above, power amps will often just use electrolytic caps for bypass. Where low value film caps (typically 100nF) are needed, these will normally be polyester or similar. Because of the high supply voltages used, most multilayer ceramic caps aren't usable because they are most commonly available only up to 50V. Film caps are available for very high voltages, so there is no limitation other than cost.
Electrolytic bypass caps may be as large as 470µF or more, or as low as 10µF. It depends on the design of the amplifier - some can function just fine with no bypass caps at all, but modern high speed output devices make this uncommon today. Many (especially budget) commercial amps will use the smallest caps that will allow the amp to function normally - without parasitic oscillation or other misbehaviour. When things are reduced to the bare minimum one might expect problems after some time, but such problems are actually very uncommon.
Electrolytic caps have had some bad press over the years, but if they are kept cool and were made properly in the first place, they are surprisingly reliable. I have equipment that's over 30 years old, still with all original electros and still working just fine. I also have a stash of large "computer grade" electros, and most of them would be at least 30 years old, and haven't been powered up for perhaps 15 years or more. Those that I have pressed into service for any odd project have all been perfectly ok. In some cases it's been necessary to take them to full voltage with a current limited supply, but most can just be connected and used.
+ +![]() | + + + + + + |
Elliott Sound Products | +Crossover Distortion |
![]() ![]() |
In a number of articles I've explained that negative feedback can never eliminate crossover distortion. The simple reason is that at low amplitudes, an amplifier with crossover distortion has no gain, and without gain there is no feedback. The problem is that this is probably not immediately intuitive, so the issue is looked at more closely here.
Rather than make a simple assertion (however true it may be), it's necessary to prove the hypothesis. This is fairly easy to do, and it can be done with real circuitry or simulations, with results that will be almost identical. This makes it easy for anyone to duplicate the results.

There are actually two problems, not just one. Almost all real amplifiers have less open-loop gain at high frequencies than they do at low frequencies, and this is due to the compensation capacitor (aka Miller cap). The combination of lower gain and inevitable reduction of transistor gain at high frequencies means that distortion rises with increased frequency, even if there is no crossover distortion as such.

It's worth pointing out that if an amplifier is being used to control a motor (other than a loudspeaker driver) or other non hi-fi application in an industrial controller or similar, a tiny bit of crossover distortion is not an issue. True Class-B operation is common in these systems, as it minimises quiescent current, and the 'dead band' created by the crossover distortion is so small that it doesn't affect operation. It's only when we look at 'proper' audio circuits that it becomes a problem that must be solved.

Almost without exception, hi-fi amplifiers are Class-AB, meaning that they operate over a small range in Class-A, with both output devices conducting. The idle or quiescent (no signal) current is generally within the range from a few milliamps up to 50 or 100mA in some cases. There is almost always a thermal feedback mechanism in place to keep the quiescent current stable, as the emitter-base voltage falls as the transistors' temperature increases. In some cases the bias is held constant with diodes attached to the heatsink. Transistorised versions (commonly known as bias servos) are the most common, with the temperature of the output devices (or driver transistors for a Sziklai pair) sensed to maintain stable current. In the simplified circuit used here, a floating voltage source is used, and emitter resistors keep the current reasonably stable.

The problem is actually more complex than it appears. Transistor gain falls as the current is reduced, and this means that there will always be some non-linearity as the signal passes through zero. Class-A amplifiers get around that by conducting for the full waveform cycle, at the (great) expense of high continuous current and very low efficiency. Most modern amplifiers have levels of crossover distortion that are negligible - it's still there, but is usually well below audibility at any listening level. If you measure an amplifier and look at the waveform of the residual (distortion + noise) from a distortion meter, it's easy to recognise crossover distortion. Rather than a smooth waveform consisting of low-order harmonics, you'll see a spiky waveform with a high peak-to-average ratio. It's worth connecting an amplifier to the output of a distortion meter so you can hear it. If it sounds nasty, then it is nasty.
The first test circuit uses a pair of medium power transistors, BD139 (NPN) and BD140 (PNP), and these will be used for most of the examples. With ±12V supplies, 10Ω emitter resistors are included, but these make no difference if the transistors have no bias. The emitter resistors are only needed to stabilise the quiescent current in later tests where the transistors have bias, with the intention to eliminate (or at least reduce) crossover distortion.

The circuits used are all nominally unity gain, but without any bias this is reduced. The zero-bias condition can result in a gain of between -1.1dB and -80dB, depending on input level. The tests will use an input voltage of between ±14mV (10mV RMS) and ±8V (5.66V RMS).

The first test uses the transistors with zero bias, and an input of 5V RMS. The test circuit is designed for simulations, and includes things that are a little irksome to incorporate into a bench test. This is primarily due to DC offset, which can be very hard to remove in a 'real' circuit. It's possible to add a servo circuit, but that adds complexity that's hard to justify.

The circuit for the tests is shown below. Some of the devices (particularly voltage sources) are 'ideal', in that they have zero impedance. This has been compensated for by adding an opamp, which can be configured as a buffer or within a feedback loop. The circuit is deliberately very simple, and the 4558 opamp is adequate for use with feedback. For many tests, it's just a buffer. You can use any opamp you like - it's not critical.

There's a switch to connect the bias voltage (shown as a battery) and another to turn feedback on or off. The circuit operates with (nominal) unity gain at all times, but without bias this is not possible because there can be no output until the input exceeds the base-emitter forward voltage. The opamp can be anything you like (including a simulated 'ideal' model), and in this circuit it makes virtually no difference whatsoever. C3 is used only to remove any offset that may make the output unpredictable, and the 100Ω load ensures that the transistors have to pass some current. If that's omitted the results are not useful.
If we apply an input of ±14mV (10mV RMS) to the circuit (bias switch off), the output is 435nV (yes, nanovolts) RMS, a 'gain' of -87dB. This isn't exactly zero, but it's such a low gain that assuming zero gain is pretty close. Applying bias, the gain becomes -250mdB (0.25dB), and this is what we normally expect from a voltage follower. With the zero gain case, applying feedback will reduce the distortion, but even with an overall gain of (almost) unity and ideal parts throughout, the distortion is not zero!
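The -87dB figure is simply the voltage ratio expressed in decibels, 20·log10(Vout/Vin):

```python
import math

def gain_db(vout, vin):
    """Voltage gain expressed in dB."""
    return 20 * math.log10(vout / vin)

# 10mV RMS in, 435nV RMS out with no bias -> about -87dB
print(f"{gain_db(435e-9, 10e-3):.1f} dB")
```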
The current source is used for the tests performed in the 'High-Impedance Drive' section below. The point marked 'A B' is opened, and the current source connected to the base circuit of the transistors. This is (close to) the way output stages are normally driven within an amplifier circuit. It's easy to do in a simulator, but less so in a bench test. The current source needs an internal impedance of at least 1MΩ; a source voltage of ±100V with a series 1MΩ resistor connected to 'B' will provide ±100μA base current. This will work, but is somewhat impractical.
The 1Ω emitter resistors mean that even if the transistors really were unity gain, the maximum output is 990mV/V. This is not a limitation, and when feedback is applied it compensates for the small loss of gain anyway. The main thing is that the circuit is a simple emitter-follower output stage, albeit low power. The behaviour can be used to predict what a 'full' output stage will do, either Darlington or CFP (compound feedback pair, aka Sziklai pair). It's easily constructed if you wanted to run bench tests, and the 1.3V supply can be a 1.5V cell plus a paralleled trimpot to adjust the voltage, as shown in the inset. Adjust the trimpot for about 5-10mA through Q1 and Q2. R1, R2, C1 and C2 ensure that the input is DC coupled, ground-referenced, and has a low impedance for AC.
As you can see, the red trace is at close to zero volts peak (616nV peak), so the circuit effectively has no gain. We can calculate the 'gain' of course - it's about -87dB. Not quite zero, but close enough. When bias is applied, the output increases to 13.6mV, so only 0.4mV has been 'lost'. This is because of the emitter resistors, and the gain of an emitter follower is always slightly less than unity - typically about 0.99 but it varies with the current.

The main test is at an input of 7V peak, and without bias the output is 6.2V peak. There can be no output until the input exceeds the base-emitter voltage for Q1 or Q2, so we see that 0.8V has been 'lost'. The distortion measures 5.73%, as shown in the graph. Applying bias (about 6.2mA) reduces this to 0.098%. These figures are without feedback. The performance with bias is respectable, and it can be reduced further by applying feedback. Without bias, the performance with or without feedback is unacceptable.

The distortion waveform of the 7V peak output without bias is shown above. This is the output from a 1.8kHz, 10th order high-pass filter (60dB/ octave), so only frequencies that are greater than 1kHz are seen, with the fundamental removed. This is what you'll see at the output from a distortion meter that uses a high-pass filter instead of a notch. I used this so the simulation and scope trace are reasonably consistent. The spiky nature of the waveform is immediately obvious, although this is an extreme example. In case you think this is an exaggeration, I ran the same test and measured the result - it's almost identical. The peak level is different because the distortion meter has an internal gain of two for the distortion output.

The input was 7V RMS for the bench test, and where we should see a 9.9V peak output (yellow trace) it's only 9.31V peak, and around ±590mV has been lost because there's no bias. The distortion meter measured less than the simulator, at 3.8% (the simulator says 4.7%). This is neither here nor there of course, as it's quite unacceptable. Why are they different? Almost all distortion meters use an average measurement, RMS calibrated, but the simulator uses 'true RMS'. The wave shape of the distortion (blue trace) is almost identical to the simulated waveform. The spiky nature of the waveform is easily seen, and shows the presence of high-order harmonics. Even though feedback can reduce the low-order harmonics, due to reduced gain at higher frequencies, the high-order harmonics cannot be reduced effectively. It's clearly impossible for feedback to eliminate the crossover distortion, as that would require a feedback circuit with infinite open-loop gain.

Note: If you look at this the wrong way, it can appear that feedback has increased the high-order harmonics. However, the real issue is that the feedback cannot reduce the high-order harmonics because there's not enough of it. The internal gain reduction of the opamp due to its dominant pole means there's less feedback at high frequencies, so more distortion components can get through (relatively) un-attenuated. By themselves (no FB) the harmonics should decay in an orderly manner, so referred to the 3rd, the 5th harmonic should be at -4.9dB, the 7th at -8.4dB, etc. With a 6dB/ octave open loop gain rolloff, the decaying relationship no longer applies, and you can see harmonics at almost equal levels over a wide range (as much as two decades of frequency).

If 100% feedback is applied without bias, the 7V peak output distortion falls to 0.055% (optimistically), and rises to 0.43% at 10kHz. Why? Simply because the opamp has 20dB less gain at 10kHz than it has at 1kHz (internal compensation causes a gain reduction of 20dB/ decade or 6dB/ octave). I haven't shown graphs of this, because distortion below 1% is not visible on the waveform, and this also applies to an oscilloscope display.

At 20kHz the distortion has risen to 1.08% because there's another 6dB drop of open-loop gain from the opamp. You may have expected that the distortion would double (to 0.86%), but it's worse than that. 'Real' amplifiers are no different in this respect.
The important thing is that the gain is no longer almost zero when bias is applied. By applying bias to the output transistors, they can conduct even with the smallest input (all the way down to ±1nV or less). In reality you won't be able to measure any signal that small, as noise will dominate any reading. 1nV is -180dBV, easily done in a simulator but not in real life. The output is supposed to be a perfect replica of the input waveform, but as you can see, this cannot be the case with no bias. Too little bias means that the transistors are operating with less current, so their gain falls. If the bias voltage is reduced to 1.1V, quiescent current falls to ~170μA, and the output voltage with 14mV input is reduced to about 8.5mV. With an 80mV peak input, the distortion is a rather unimpressive 1.8%, with only 50mV peak output.
The internal emitter resistance (re or 'little re') for a transistor is generally taken to be 26 / Ie (in mA), so with 1mA re is 26Ω, rising to 152Ω at 170μA. It's not a fixed quantity unless the emitter current doesn't change, but with a quiescent current of a few milliamps the change of re (Δre) becomes less of a problem. Clearly, if re is significant compared to the load impedance, the output must be distorted - even with bias!
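The re approximation in use (values from the text; the formula is the usual 26mV thermal-voltage rule of thumb):

```python
def re_ohms(ie_ma):
    """Internal emitter resistance: re ~ 26 / Ie, with Ie in mA."""
    return 26 / ie_ma

print(re_ohms(1.0))    # 26 ohms at 1mA
print(re_ohms(0.17))   # roughly 153 ohms at 170uA
```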
Fig. 1.6 shows that an 800mV (peak) input can just turn on the transistors, with absolutely gross crossover distortion (57.9%). Just applying 9mA bias is enough to reduce that to 0.008%, which would generally be considered quite acceptable for any amplifier. If feedback is applied, the same two tests show distortion to be 0.17% (no bias) and 0.0005% with bias.

Even though 0.17% could be considered acceptable (in some quarters at least), in this instance it's made up of many harmonics, ranging from the 3rd up to very high frequencies. The residual waveform (as seen at the output of a distortion meter) is nasty, showing significant spikes at the crossover points. The RMS value of the residual may only be 2.84mV as measured from the next waveform, but it invariably sounds worse.

As the level is reduced, the distortion increases. This is characteristic of crossover distortion, and at 80mV input, the distortion has risen to 1%, and it's over 5.5% with 8mV input. We normally expect to see the THD+N (distortion plus noise) rise at low levels, but this should be due to noise alone. The distortion from most circuitry falls with reduced level. Fig. 1.4 shows the significant crossover distortion at 14mV (peak) without bias, and it requires no magnification to be seen easily.
THD measures 3.39% without bias, falling to 0.00054% with bias. The latter is quite acceptable, but that's only at 1kHz. We need to look at performance at higher frequencies too, or it's too easy to miss something that proves to be a problem during listening tests. 10kHz is a reasonable test figure, even though the 2nd harmonic at 20kHz will barely be audible, and the first 'real' harmonic will be at 30kHz, well out of hearing range. However, any distortion will create intermodulation products, and these almost certainly will be audible.
Fig. 1.8 shows just how bad this can be. The conditions are the same as for Fig. 1.7, but the frequency has been increased to 10kHz. The unbiased output is beyond revolting, with a distortion of almost 20%. Once the output transistors are biased, the waveform is greatly improved, with distortion reduced to 0.26%. This isn't wonderful, because the opamp is running out of 'reserve' gain at 10kHz. If I substitute an 'ideal' opamp, the distortion is reduced to 0.0005%, because its gain remains constant with frequency. Unfortunately, you can't buy an ideal opamp.
There's another way to reduce the distortion in an output stage before feedback is applied, and that's to use a current source to drive the output devices. The opamp is disconnected at the 'A B' point, and the current source (shown in Fig 1.1) is connected to 'B'. Almost every amplifier made uses this technique, and it helps to overcome non-linearities, including crossover distortion. It's not a panacea though, as we shall see shortly.
It's somewhat harder to model with a simulator, and much harder to perform a bench test, because we expect signal sources to provide a voltage, not a current. Of course they do both when driving a load (the test circuit), but the impedance is low (50 or 600Ω for most instruments).

If the voltage drive from U1 is changed to an AC current source with an impedance of 100k, we can drive the output stage with ±1mA to obtain a peak output of about 7.3V with no bias. Under almost identical conditions otherwise, voltage drive (red) has a distortion of 5.7%, reduced to 3.4% with current drive. That's a fairly significant reduction by itself. Unfortunately, applying bias current doesn't help, and actually makes the distortion worse (3.73%) with high-impedance drive.
This is overcome in 'real' amplifiers by ensuring that the current source can deliver at least 5 times the peak current expected by the output stage. The primary reason for the constant current used in the voltage amplifier stage (VAS, aka Class-A driver) of an amplifier is not to overcome output transistor non-linearities, but to get the best possible linearity from the VAS stage itself. Expecting the high-impedance drive to overcome the 'dead band' is unrealistic, and it doesn't work. It helps, but bias current is still required.
That the high impedance (current source) can help to linearise the output stage is a small bonus, but not the primary reason. The constant-current source may be active (using one or more transistors) or passive, using a bootstrap circuit. There is no significant difference between the two, but the passive bootstrap circuit remains my personal favourite.
It's not intuitive how high-impedance drive can overcome (at least to an extent) the dead-band created by an unbiased output stage.
The red waveform is the output, when the circuit is driven by a ±100μA current. It looks alright, but the distortion is still 3.75% (the inset shows crossover distortion). Of more interest is the waveform at the input of the stage (base drive). It has near-vertical sections as the current tries to remain constant, so the input voltage 'jumps' from one base-emitter offset to the other. The near-vertical sections jump from (about) -500mV to +740mV (positive-going) and roughly from +500mV to -650mV (negative-going) - it's slightly asymmetrical. Unfortunately, this process is imperfect due to other non-linearities in the circuit.
Ultimately, applying bias to an output stage is the only way to minimise crossover distortion. Without it, neither high-impedance drive nor vast amounts of negative feedback can overcome the dead-band, where the stage has zero gain. Even if the gain only falls a bit (say by 6dB), that means there is 6dB less overall loop gain at that point. Any reduction of distortion is proportional to the amount of feedback, so if the open-loop gain is reduced, so too is feedback. That means the distortion must increase.
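The arithmetic behind this point is worth making explicit. Below is a minimal sketch (the gain and distortion figures are illustrative, not measurements from the article's test circuit) of the standard feedback relationship, showing why the improvement disappears entirely where the stage has zero gain:

```python
# Closed-loop distortion falls by the 'improvement factor' (1 + A*B), where A
# is the open-loop gain and B is the feedback fraction.  In the dead-band A is
# zero, so the factor is 1 and feedback achieves nothing.

def closed_loop_distortion(open_loop_dist, open_loop_gain, feedback_fraction):
    """Approximate distortion (in %) after negative feedback is applied."""
    return open_loop_dist / (1.0 + open_loop_gain * feedback_fraction)

B = 0.1    # feedback network for a closed-loop gain of ~10
d0 = 5.0   # assumed 5% open-loop distortion

print(closed_loop_distortion(d0, 10_000, B))  # plenty of loop gain
print(closed_loop_distortion(d0, 5_000, B))   # 6dB less gain, roughly double the distortion
print(closed_loop_distortion(d0, 0, B))       # dead-band: zero gain, no improvement at all
```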
The primary point of this short article is to demonstrate that feedback cannot work when a circuit has no overall gain. This point is rarely mentioned. There's another form too, which has been called 'secondary' crossover distortion [1]. This can be caused by old, slow power transistors, and is created as the transistor(s) fail to turn off cleanly. Most early power transistors were inherently slow, and turn-off behaviour is dictated by the speed at which electron-hole pairs can return to their quiescent state. Any transistor takes a finite time to turn on, and (usually) a longer time to turn off again.
At high frequencies (e.g. 20kHz) this can still be an issue, as the transistors have a degree of cross-conduction (i.e. both upper and lower transistors conducting). You'll often see the current demand of a power amp rise as the frequency is increased, and that's the reason. If taken to extremes, many amplifiers (even today) will fail if you try to obtain full power at 30kHz or more.
The BD139 used in the experiments will turn on in about 54ns, but it takes 140ns for it to turn off again. If we assume the same for the BD140 (usually unwise, but it will do for this explanation), there's a period of around 90ns when both transistors are conducting. If the output stage is driven at a high enough speed (with a fast squarewave), the extra dissipation can cause device overheating and failure.
If the Fig. 1 circuit is driven with a very fast squarewave (±8V, 1ns rise & fall), the peak collector current will exceed 250mA (the load only draws ±80mA). Bigger transistors are inherently slower. This is partially solved by using MOSFETs because they can switch much faster than BJTs, but they are also more nonlinear.
In Class-D MOSFET power amplifiers, there's always a 'dead-time' where both switching devices have no gate drive. This prevents so-called shoot-through current that can cause catastrophic output stage failure.
Please note that this article is not intended to demonstrate the optimum bias current, determine the 'best' output stage topology, optimum transistors to use or any of the other esoteric things that are discussed endlessly elsewhere. The only point is to demonstrate that negative feedback can never eliminate crossover distortion, because if both driver/ output devices are turned off, the circuit has no (useful) gain.
That doesn't mean that issues such as gm doubling (when both transistors conduct) and other issues are not important, but it also has to be considered that very few modern amplifiers have any audible crossover distortion. Eliminating all distortion is simply impossible, but once it's all below audibility further improvements are academic. There are many designers who strive for the lowest possible distortion at any level or frequency, and this is a worthy goal. However, it doesn't mean that the end result will be accepted by all as the ultimate, even if it's flat from DC to daylight and has distortion that's too low to measure at any level or frequency.
We can now get opamps with distortion at vanishingly low levels, so much so that 'trick' circuitry is needed to even measure the distortion. That doesn't mean that everyone loves them though (many people will still complain of 'poor bass' [for example], which is simply impossible as all opamps work perfectly to DC). It's the same with output stages, which are generally well behaved as long as decent transistors are used, ideally with the flattest gain vs. current possible. However, few (if any) power transistors will maintain their gain at the highest and lowest currents encountered in any amplifier.
Omit a decent bias servo at your peril with most amps, especially those using BJTs or switching MOSFETs (not recommended for linear operation, but that has never stopped anyone). Getting the bias current right is one of the most important things you need to do with any amplifier, lest you fall into the 'zero gain' issue described.
+ + +![]() ![]() |
![]() | + + + + + + |
Elliott Sound Products - Passive Crossover Design Tables
There are several crossover design programs available, some free, and others with variable price ranges. There seems little doubt that these can make your life easier, but for many people the 'old' techniques are still preferred. You also have to consider the learning curve - most of these programs will take some time to master. I make no recommendations for design software, but be aware that many will require data inputs that may not be available in the format needed. Should that be the case (or where detailed data are not available), you will need to characterise the drivers yourself. Some of this software is a 'normal' executable program, while others use a spreadsheet. Neither is necessarily better or worse than the other, but one must admire the amount of work involved to get usable results, regardless of how the data are manipulated.
Most design programs are complex by necessity, and while they will always give you a result, it can only ever be as good as the data you can provide. The tables and formulae shown here can be made to work with any driver, provided you know how to measure the characteristics and/ or provide impedance compensation to ensure that the drivers appear resistive across the crossover region.
The tables shown below can be used for the calculation of passive filters (first, second, third, and fourth order) in 2-way and 3-way crossover networks. After deciding on the topology you want, you need to know the corrected impedance of the tweeter, woofer and midrange (for a 3-way network). There are quite a few configurations that I've left out, because they are either sub-optimal or a bit too far from 'conventional' alignments. If you want all of the formulae, I suggest that you buy the book shown in Reference #1 (or the latest revision). There are many other texts on the same topic, but I don't have them and cannot comment on their usefulness.
For 2-way systems, the mid-bass/ woofer will almost always require a Zobel network to correct the impedance rise due to Le (voicecoil inductance). The tweeter will likewise almost invariably require a notch filter to suppress the resonant peak, for both 2-way and 3-way systems. While it is certainly possible to design a filter that works without any impedance compensation, it will be much more difficult and time-consuming to do so.
With a 3-way system, the midrange driver may require both a Zobel network and a notch filter, depending on its resonant frequency. The notch may not be necessary if the resonance is more than two octaves below the crossover frequency (for example, a midrange resonance of 75Hz for a 300Hz crossover frequency). This is something that must be tested thoroughly before you'll know if it causes any measurable (or audible) problems. All formulae are based on the premise that the driver impedance is resistive, having been equalised as necessary. Do not use the nominal impedance of the drivers, as the results will be highly unpredictable.
The circuits shown do not include impedance EQ. See the companion article Impedance Compensation For Passive Crossovers.
Driver impedance correction must be determined before using these tables, and the measured (equalised) resistance used. This will typically reduce the impedance of each driver by around 20% or more, with the average being roughly equal to the driver's electrical voicecoil resistance (re). Failure to provide impedance EQ will usually result in an unsatisfactory end result, and be aware that the EQ networks will add many more parts (inductors, capacitors and resistors). No provision is made here for determining the relative levels from each driver, and L-Pads are likely to be needed for tweeters and midrange drivers to ensure that their levels match the woofer. Ensure that the woofer has the lowest efficiency (in dB/W/m) or it will be difficult to get the levels correct.
The crossover component values are calculated using the following formulae (adapted from 'The Loudspeaker Design Cookbook' by Vance Dickason). Not all variations are covered, only those that are in common usage, and the 'esoteric' versions have been culled to make the tables more readable. Make sure that you use the correct table, especially for 3-way designs. The values are different, depending on the upper and lower crossover points. Formulae are provided for a range of 10 (e.g. 300Hz to 3kHz, 3.3 octaves) and a range of 8 (e.g. 375Hz to 3kHz, 3 octaves).
Circuit diagrams are shown for 1st, 2nd and 3rd order networks (6dB, 12dB and 18dB/ octave respectively). I've not included schematics for 4th order networks because their complexity and component sensitivity is such that getting a good result will either be extremely difficult/ expensive or (usually) both. Impedance equalisation becomes (even more) critical, and small errors can cause large variations in performance. This doesn't mean it can't be done, but the cost is such that active filters (and multiple amplifiers) will give better, more predictable performance for less financial outlay and a greatly reduced risk of failure.
While you can choose any of the alignments to suit your needs, the ones I recommend are indicated by a star/ asterisk (*). Capacitance is in Farads, inductance in Henries and resistance/ impedance in Ohms.
1st Order Butterworth * | |
---|---|
C1 | 0.159 / ( rH f ) |
L1 | rL / ( 6.28 f ) |
Figure 1 - 2-Way 6dB/ Octave Crossover
While the above shows a parallel network, IMO a series network is preferred for first-order 2-way systems. Although the two are theoretically identical with a resistive load in place of the speaker drivers, a series network doesn't need impedance compensation. See the article 6dB/ Octave Passive Crossovers for more on this (slightly unusual) configuration.
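As a cross-check of the Figure 1 table, the first-order values can be computed directly; 0.159 is just 1/2π and 6.28 is 2π, so the entries are the familiar first-order formulae rearranged. The 8Ω drivers and 3kHz crossover below are purely illustrative:

```python
# Sketch of the 1st order (Butterworth) table entries, assuming hypothetical
# 8 ohm equalised drivers and a 3kHz crossover frequency.
from math import pi

rH = rL = 8.0    # equalised (resistive) driver impedance, ohms
f = 3000.0       # crossover frequency, Hz

C1 = 1 / (2 * pi * rH * f)   # tweeter series capacitor, Farads (table: 0.159 / (rH f))
L1 = rL / (2 * pi * f)       # woofer series inductor, Henries (table: rL / (6.28 f))

print(f"C1 = {C1 * 1e6:.2f} uF")   # ~6.63 uF
print(f"L1 = {L1 * 1e3:.3f} mH")   # ~0.424 mH
```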
2nd Order 2-Way
2nd Order Butterworth | 2nd Order Linkwitz-Riley * | |
---|---|---|
C1 | 0.0912 / ( rH f ) | 0.0796 / ( rH f ) |
C2 | 0.0912 / ( rL f ) | 0.0796 / ( rL f ) |
L1 | 0.2756 rH / f | 0.3183 rH / f |
L2 | 0.2756 rL / f | 0.3183 rL / f |
Figure 2 - 2-Way 12dB/ Octave Crossover
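The second-order table can be checked the same way. A short sketch follows, using hypothetical 8Ω equalised drivers at 3kHz; the series/shunt component roles noted in the comments are the usual parallel 2-way arrangement, assumed rather than taken from the schematic:

```python
# Worked example of the 2nd order Linkwitz-Riley column above.  All driver
# figures are invented for illustration.

def lr2_values(r_high, r_low, f):
    """Return (C1, C2, L1, L2) for a 2nd order Linkwitz-Riley 2-way network."""
    C1 = 0.0796 / (r_high * f)   # high-pass series capacitor, Farads
    C2 = 0.0796 / (r_low * f)    # low-pass shunt capacitor, Farads
    L1 = 0.3183 * r_high / f     # high-pass shunt inductor, Henries
    L2 = 0.3183 * r_low / f      # low-pass series inductor, Henries
    return C1, C2, L1, L2

C1, C2, L1, L2 = lr2_values(8.0, 8.0, 3000.0)
print(f"C1 = C2 = {C1 * 1e6:.2f} uF")   # ~3.32 uF
print(f"L1 = L2 = {L1 * 1e3:.2f} mH")   # ~0.85 mH
```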
3rd Order 2-Way
3rd Order Butterworth * | 3rd Order Bessel | |
---|---|---|
C1 | 0.1061 / ( rH f ) | 0.0791 / ( rH f ) |
C2 | 0.3183 / ( rH f ) | 0.3953 / ( rH f ) |
C3 | 0.2122 / ( rL f ) | 0.1897 / ( rL f ) |
L1 | 0.1194 rH / f | 0.1317 rH / f |
L2 | 0.2387 rL / f | 0.3294 rL / f |
L3 | 0.0796 rL / f | 0.0659 rL / f |
Figure 3 - 2-Way 18dB/ Octave Crossover
4th Order 2-Way
4th Order Butterworth | 4th Order Linkwitz-Riley * | |
---|---|---|
C1 | 0.1040 / ( rH f ) | 0.0844 / ( rH f ) |
C2 | 0.1470 / ( rH f ) | 0.1688 / ( rH f ) |
C3 | 0.2509 / ( rL f ) | 0.2533 / ( rL f ) |
C4 | 0.0609 / ( rL f ) | 0.0563 / ( rL f ) |
L1 | 0.1009 rH / f | 0.1000 rH / f |
L2 | 0.4159 rH / f | 0.4501 rH / f |
L3 | 0.2437 rL / f | 0.3000 rL / f |
L4 | 0.1723 rL / f | 0.1500 rL / f |
The 4th order network circuit is not shown, as its complexity is such that 4th order networks are best achieved using active filters. High order passive filters are not recommended; the cost and complexity rapidly escalate, far exceeding an active solution. This is especially true when you consider the component sensitivity - the parts used need to be selected for close tolerance or the filter response will not be accurate.
For all 3-way designs, the midrange 'centre' frequency is determined by fM = √( fH × fL ) or ( fH × fL )^0.5
Select either fH / fL as 10 (3.3 octaves) or 8 (3 octaves)
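The geometric-mean relationship is easy to verify. A one-line sketch, using the 300Hz/3kHz range-of-10 case as the example:

```python
# Midrange 'centre' frequency for the 3-way tables: the geometric mean of the
# two crossover frequencies, fM = sqrt(fH * fL).
from math import sqrt, log2

fL, fH = 300.0, 3000.0                          # range-of-10 example
fM = sqrt(fH * fL)
print(f"fM = {fM:.1f} Hz")                      # ~948.7 Hz
print(f"range = {log2(fH / fL):.2f} octaves")   # ~3.32 octaves
```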
1st Order Normal Polarity * fH/fL = 10 | 1st Order Normal Polarity * fH/fL = 8 | |
---|---|---|
C1 | 0.1590 / ( rH fH ) | 0.1590 / ( rH fH ) |
C2 | 0.5540 / ( rM fM ) | 0.5070 / ( rM fM ) |
L1 | 0.0458 rM / fM | 0.0500 rM / fM |
L2 | 0.1592 rL / fL | 0.1592 rL / fL |
Figure 4 - 3-Way 6dB/ Octave Crossover
2nd Order 3-Way
2nd Order (Reverse Midrange Polarity) * fH/fL = 10 | 2nd Order (Reverse Midrange Polarity) * fH/fL = 8 | |
---|---|---|
C1 | 0.0791 / ( rH fH ) | 0.0788 / ( rH fH ) |
C2 | 0.3236 / ( rM fM ) | 0.3046 / ( rM fM ) |
C3 | 0.0227 / ( rM fM ) | 0.0248 / ( rM fM ) |
C4 | 0.0791 / ( rL fL ) | 0.0788 / ( rL fL ) |
L1 | 0.3202 rH / fH | 0.3217 rH / fH |
L2 | 1.0291 rM / fM | 0.9320 rM / fM |
L3 | 0.0837 rM / fM | 0.0913 rM / fM |
L4 | 0.3202 rL / fL | 0.3217 rL / fL |
Bandpass Gain 2.08dB | Bandpass Gain 2.45dB |
Figure 5 - 3-Way 12dB/ Octave Crossover
3rd Order 3-Way
3rd Order Normal Polarity * fH / fL = 10 | 3rd Order Normal Polarity * fH / fL = 8 | |
---|---|---|
C1 | 0.1138 / ( rH fH ) | 0.1158 / ( rH fH ) |
C2 | 0.2976 / ( rH fH ) | 0.2927 / ( rH fH ) |
C3 | 0.0765 / ( rM fM ) | 0.0884 / ( rM fM ) |
C4 | 0.3475 / ( rM fM ) | 0.3112 / ( rM fM ) |
C5 | 1.068 / ( rM fM ) | 0.9667 / ( rM fM ) |
C6 | 0.2127 / ( rL fL ) | 0.2130 / ( rL fL ) |
L1 | 0.1191 rH / fH | 0.1189 rH / fH |
L2 | 0.0598 rM / fM | 0.0634 rM / fM |
L3 | 0.0253 rM / fM | 0.0284 rM / fM |
L4 | 0.3789 rM / fM | 0.3395 rM / fM |
L5 | 0.2227 rL / fL | 0.2187 rL / fL |
L6 | 0.0852 rL / fL | 0.0866 rL / fL |
Bandpass Gain 0.85dB | Bandpass Gain 0.99dB |
Figure 6 - 3-Way 18dB/ Octave Crossover
4th Order 3-Way
4th Order Normal Polarity * fH / fL = 10 | 4th Order Normal Polarity * fH / fL = 8 | |
---|---|---|
C1 | 0.0848 / ( rH fH ) | 0.0849 / ( rH fH ) |
C2 | 0.1686 / ( rH fH ) | 0.1685 / ( rH fH ) |
C3 | 0.3843 / ( rM fM ) | 0.3774 / ( rM fM ) |
C4 | 0.5834 / ( rM fM ) | 0.5332 / ( rM fM ) |
C5 | 0.0728 / ( rM fM ) | 0.0799 / ( rM fM ) |
C6 | 0.0162 / ( rM fM ) | 0.0178 / ( rM fM ) |
C7 | 0.2523 / ( rL fL ) | 0.2515 / ( rL fL ) |
C8 | 0.0567 / ( rL fL ) | 0.0569 / ( rL fL ) |
L1 | 0.1004 rH / fH | 0.1007 rH / fH |
L2 | 0.4469 rH / fH | 0.4450 rH / fH |
L3 | 0.2617 rM / fM | 0.2224 rM / fM |
L4 | 1.423 rM / fM | 1.273 rM / fM |
L5 | 0.0939 rM / fM | 0.1040 rM / fM |
L6 | 0.0445 rM / fM | 0.0490 rM / fM |
L7 | 0.2987 rL / fL | 0.2983 rL / fL |
L8 | 0.1502 rL / fL | 0.1503 rL / fL |
Bandpass Gain 2.28dB | Bandpass Gain 2.84dB |
The 4th order network circuit is not shown, as its complexity is such that 4th order networks are best achieved using active filters. High order passive filters are not recommended, and an active system should be considered first.
Figures 1 through 6 are based on the drivers appearing purely resistive, using the networks shown in Figure 7. If impedance compensation isn't used, the tables will give answers that may make some sense, but only if the actual impedance at the crossover frequency is used, and not the driver's nominal impedance. Actual performance is something of a lottery unless you are prepared to do a fair bit of adjustment after the system is assembled.
Please be aware that although the utmost care has been used to create these tables, there may be errors - particularly with the constants used for each formula. Because of the repetitious nature of these data, it's very easy to 'misplace' a digit, and that will affect the outcome of the formula. Also, it's essential to use the correct configuration for the midrange filter. If the order of the low-pass and high-pass filters is changed, you may get more pass-band ripple (deviations from flat for the summed response). With care, it should be possible to get the summed response to have no more than ±0.5dB ripple, and it's unrealistic to expect it to be much better.
It's a point I've made countless times, but you only have to look at a 3-way 4th order passive crossover to see that it will be very expensive to put together. Not only do you have the crossover components, but you also require impedance compensation for the drivers or the results will be unpredictable (and rarely in a good way). The filters are sensitive to even small variations, and if you also consider voicecoils heating up during loud passages (or if you listen at high volume) then the crossover is messed up quite badly. This happens even with small changes - just a couple of ohms can make a surprisingly large difference.
The only sensible approach to high-order crossovers is to use active circuits. Yes, you need an amplifier for each driver, but these are easy (and comparatively cheap) to build yourself, and the end result will be a no-compromise system. You don't need any impedance compensation, and the complete system will outperform any passive network. There's zero power loss in inductors or resistors, damping for the woofer is not compromised, and the crossover frequencies don't change if a voicecoil gets hot. There is still a loss of level (because the impedance is higher), but this is a minor side-effect when compared to the major changes that occur with a passive network.
Because speaker drivers are reactive, they have impedance, not resistance, over the audio range. This means that the load presented to an amplifier or crossover network is frequency dependent, as shown in any impedance curve you wish to examine. For a passive crossover to work correctly (with the sole exception of a 2-way, first-order series network), the drivers must be made to appear resistive for a range of at least 1.5 octaves (preferably 2 octaves) either side of each crossover frequency. The following circuits are used, assuming a 3-way system.
The design process for impedance compensation is not shown here. It is described in detail in the companion article Impedance Compensation For Passive Crossovers.
Figure 7 - Impedance Equalisation Networks
The important thing to note is that the above drawing shows only the impedance compensation networks. The crossover network is in addition to what's shown, adding even more parts. It is possible (at least in theory) to build a crossover that doesn't require full compensation, but it will be an empirical (i.e. trial and error) process. Some people will be better at this than others, and there are various computer programs that may be able to produce a design, provided all driver characteristics are known (and are accurate). It's almost certain that the final design will still need some adjustments, because speaker parameters will change depending on the enclosure size, damping applied or even panel resonances.
In general, tweeters almost always need a notch circuit to flatten the resonant peak (usually somewhere between 700Hz and 1.2kHz or so), and rarely need a Zobel network because the voicecoil inductance is generally quite low. Midrange drivers require a Zobel network to flatten the impedance at higher frequencies. An L-Pad is generally required to reduce the tweeter level to match the woofer or mid-bass driver.
A notch filter is necessary if the midrange resonance is less than two octaves from the bass-mid crossover frequency. For example, for a 300Hz crossover frequency, the midrange resonance (in its enclosure) should be no higher than 75Hz. It may be possible to use a simplified circuit to suppress the resonant peak, but that's not something I'd count on. An L-Pad is almost always necessary for 3-way systems, because the filter network provides up to 2dB of 'gain' for the midrange output. An L-Pad should not be used on a mid-bass driver.
Woofers (or mid-bass drivers in a 2-way system) only need a Zobel network to counteract the impedance rise due to voicecoil inductance. There is no requirement for a notch network to equalise the woofer/ mid-bass resonant peak, and even attempting it is futile. Very high values of capacitance and inductance are needed, which will add significant cost for no good purpose. While it may make the electrical impedance look 'nicer', it will not change the acoustic performance of the woofer in any way.
If you can manage to obtain perfectly flat impedance response across the range for each driver, the results will be very good indeed. However, the values of all crossover components are critical, and the formulae shown don't take inductor resistance into consideration. This will always reduce the sensitivity of midrange drivers and woofers. It's essential to measure the sensitivity of the drivers in the enclosure they are intended for, as everything makes a difference. The resonant frequency of mid-bass, midrange and woofers is affected by the enclosure and the amount of acoustic fill used. If the final sensitivity isn't measured, it will be very hard to get the L-Pad calculations right. For a calculator to work out the values needed for L-Pads, see Loudspeaker L-Pad Calculations.
Note that none of these networks are required with an active system, because speaker impedance cannot influence the crossover network's performance.
Despite initial appearances, this article is intended to dissuade prospective loudspeaker builders from using passive networks. It's fairly easy to see that the complexity of passive networks is much higher than often expected, and the final cost will reflect this. Few commercial loudspeaker systems incorporate everything described here into their designs, and that's the result of the primary goal - to build a system that can be sold at a reasonable profit. Usually, you can expect the manufacturer to have spent hundreds of hours testing various combinations of driver and crossover parts to arrive at a product that will satisfy buyers (and reviewers!) within its price range.
There may well be exceptions to the basic comments above, and it's quite easy to spend well in excess of $100k for a pair of 'top-of-the-line' loudspeakers. It's expected (or at least hoped) that if you spend that much, you should be getting the best of everything, but that's not necessarily the case. Some manufacturers rely on their reputation to justify sky-high prices, and may cut corners just like their lesser rivals. Unless you have access to the crossover networks or at least a schematic, you don't know. Likewise, you also need to know the Thiele-Small parameters for all drivers used, because they dictate the impedance equalisation that's required to get a flat impedance across the crossover frequencies.
With enough time, patience and test gear, it's possible to 'tweak' a crossover network so that it deliberately incorporates driver characteristics to arrive at a final system that isn't so complex that it would make the system unaffordable for the target market. Some may not even bother too much, and will sell the system with claims of 'magic' performance, 'musicality' or just a few naughty fibs about its 'superlative' performance. It's notable that no loudspeaker manufacturer will ever tell you about any limitations, and everyone seems to perpetually make the 'world's best' system. Frequency response graphs may be created by using excessive 'smoothing' so you don't see the amplitude variations, and others may take an average of multiple tests from different angles. The number of loudspeaker systems that all lay claim to glory is astonishing, and loudspeakers are still the weakest link in the audio chain. Differences are (usually) clearly audible, even with designs from the same manufacturer.
You'll often see references to 'voicing' a system, meaning that it's been tweaked by the designer so it sounds the way s/he likes it. Some listeners/ reviewers will agree, others will disagree. As a result, you'll see a great many crossover schematics that seem far too simple to be effective, but they can still be made to sound good to the average (and often above-average) listener. When an L-Pad is used to attenuate the tweeter, the requirements of the notch circuit are relaxed, because the parallel resistance reduces the amplitude of the impedance peak. There are also other tricks that can be used (such as configuring a high pass filter to act as a 'bridged-T' network), along with using the driver's characteristics to advantage. Most such networks will only work with the original drivers used in the design, and substitutions will often cause the system to be changed - often dramatically, and almost always for the worse.
With a fully active system, a driver change only needs a small adjustment to account for different sensitivity. Because no impedance compensation is needed, a replacement driver should manage to sound much the same as the original, provided it has equivalent frequency response, cone rigidity and freedom from 'artifacts' that cause colouration (bass drivers are an exception, especially when used in a vented enclosure). With a passive system, the impedance compensation networks will almost always need to be changed, and the crossover may need to be altered as well if the equalised impedance is not identical to the original. This seriously limits your options for exchanging drivers, because there are so many interdependent factors that come into play.
My recommendation will always be for an active system, but just biamping can be a major improvement over a full passive crossover. This means separate amps for the woofer and mid+high sections, with a passive network between the midrange and tweeter. It eliminates the very large (and expensive) parts needed for the low-frequency crossover network, and the changes needed if you want to use a different midrange or tweeter are minimised. No, it probably won't match a fully active system, but it's a viable alternative to a complete 3-way (or, perish the thought, 4-way) passive crossover.
Other references are from ESP articles, which cover a wide range of options. Projects include Project 09 (stereo 24dB/ octave 2-way active crossover) which can also be configured for 12dB/ octave, and Project 125, a 4-way 24dB/ octave crossover (two for stereo).
Elliott Sound Products - Compliance Scaling
Since Thiele published his seminal paper on the design of reflex enclosures [ 1 ], his original table of 28 'alignments' has been greatly extended, and these are widely available in both speaker design cookbooks and on the net.
The impression that seems to be given about these alignments is that they are chiseled in stone, and that if one has a particular driver then one is stuck with the f3 and box volume dictated by the alignment. This however is not true. The system parameters (α, h, f3) are in fact very insensitive to the driver's suspension compliance, which means that very many alignments exist that are close to, but not quite, the 'classical' ones usually published. As White states, 'virtually any driver can be fitted into an alignment' [ 2, p.892 ]. Thiele states that from certain points of view the suspension compliance is 'unimportant' (his italics) [ 1, p.188 ].
The miracle of fitting just about any driver to just about any alignment is performed by a procedure called 'compliance scaling', and was, as far as the author is aware, first published by Keele in a 1974 AES paper [ 3 ].
In addition to compliance scaling, two other techniques are outlined: electronically increasing Qt, and modifying the auxiliary filter damping factor.
Programs available as downloads (such as WinISD) do a good job of system modeling. The difficulty is that once one strays from the standard alignments that these usually have as defaults, one is in the dark as to what to change, and by how much, in order to achieve the desired response.
What follows is a simple method that yields some figures to enter into such programs to give a desired result, and allows you to design a box with a particular driver to achieve a target f3 and/or a target box volume. Also included is a method of increasing Qt.
But first, a word about compliance in relation to the Thiele-Small parameters.
The T-S parameters have become the standard way of specifying the performance parameters of low frequency moving coil drivers. This is because they clump the fundamental physical properties together in such a way as to ease the problem of calculation and specification.
They are related to the physical properties by the following relationships ...
Vas = po c² Cms Sd²

Qt = √( Mms / Cms ) / Rmt

Fs = 1 / ( 2π √( Mms Cms ))

Where Cms = suspension compliance (m/N), Mms = moving mass (kg), Sd = diaphragm area (m²), Rmt = total mechanical resistance (N·s/m), po = density of air (1.2kg/m³), and c = speed of sound (343m/s).
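As a sanity check of these relationships, a short sketch with invented (but plausible) driver physicals; none of the numbers below come from the article:

```python
# Compute Vas, Qt and Fs from the fundamental physical properties.  The driver
# figures are hypothetical, chosen to give round results.
from math import pi, sqrt

po, c = 1.2, 343.0   # air density (kg/m^3), speed of sound (m/s)
Cms = 1.0e-3         # suspension compliance (m/N)
Mms = 0.025          # moving mass (kg)
Sd = 0.021           # diaphragm area (m^2), roughly an 8" driver
Rmt = 15.0           # total mechanical resistance (N.s/m)

Vas = po * c**2 * Cms * Sd**2          # equivalent compliance volume, m^3
Qt = sqrt(Mms / Cms) / Rmt             # total Q (dimensionless)
Fs = 1 / (2 * pi * sqrt(Mms * Cms))    # free-air resonance, Hz

print(f"Vas = {Vas * 1000:.1f} litres, Qt = {Qt:.3f}, Fs = {Fs:.1f} Hz")
```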
As can be seen, compliance plays an important role in all of the above, and also in the system parameters α, h and f3.
As stated by Keele [ 4, p.254 ], typical batches of drivers have a 10-20% variation in T-S parameters, and this is largely due to differences in suspension compliance, but luckily for the speaker designer this has little effect upon the frequency response in a given box, because of the insensitivity described above.
The actual scaling procedure entails applying a set of transforms to a 'seed' alignment to give us a new set of system parameters, and this uses a 'normalised compliance' constant, Cn, defined as ...
Cn = Cms / Cms'
Where Cms is the driver compliance, and Cms' is the driver compliance that would be needed for the driver to be 'correct' for the alignment. Cn then gives us the transform set ...
h = h' √Cn

Qt = Qt' / √Cn

α = α' Cn

f3 / fs = ( f3' / fs' ) √Cn

fa = fa' √Cn
Applying these transforms to a standard B4 alignment gives the following results (data copied from [ 2, p.892 ]), for Cn values of 0.25, 1.0 and 4.0 ...
Figure 1 - Transformed B4 Alignments
In the above graph, the blue plot is for Cn = 0.25, corresponding to a Qt of 0.624; the magenta line is for Cn = 1, Qt = 0.312, the exact B4 alignment; and the green is for Cn = 4, Qt = 0.156. It can be seen that the greatest deviation is for low Cn values. Figure 2 is for a filter-assisted B6 alignment ...
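The transform set is easy to apply in code. A sketch follows, using the B4 seed Qt' of 0.312 quoted in the text; the seed h' and α' values are placeholders (only the Qt column is checked against the figures above):

```python
# Apply the compliance-scaling transforms to a seed alignment for a given Cn.
# Seed h' and alpha' below are illustrative placeholders, not B4 table values.
from math import sqrt

def compliance_scale(cn, h_seed=1.0, qt_seed=0.312, alpha_seed=1.0):
    """Return the transformed system parameters for normalised compliance Cn."""
    return {
        'h': h_seed * sqrt(cn),        # tuning ratio scales with sqrt(Cn)
        'Qt': qt_seed / sqrt(cn),      # total Q scales inversely with sqrt(Cn)
        'alpha': alpha_seed * cn,      # compliance ratio scales with Cn
    }

for cn in (0.25, 1.0, 4.0):
    p = compliance_scale(cn)
    print(f"Cn = {cn}: Qt = {p['Qt']:.3f}")   # 0.624, 0.312, 0.156
```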
Figure 2 - As Above, With Filter Correction
The Cn values are the same for this plot, and the low-Cn plot is somewhat smoother than for the non-filter-assisted case. Overall, Qt can vary over a wide range without any significant change in overall frequency response, and at the low frequencies involved this slight aberration is completely swamped by room effects.
We usually have a specific f3 or box volume in mind when we design a speaker. The techniques that follow allow you to obtain a specific f3 or box volume. Also discussed is a method of alignment adjustment by increasing the power amplifier's source impedance using current feedback around the power amplifier (see Rod's article). To make it easier, the tables that follow contain two constants: one enables a particular f3 to be obtained, the other a particular box size. From the transforms we can write f3 as ...
f3 = (fs x f3' x Qt') / (fs' x Qt)
This separates into two constants, one characteristic of the enclosure and the other of the driver. These are ...
kbf = (f3' x Qt') / fs'
kdf = fs / Qt
Likewise the expression for Vb ...
Vb = (Vas x Qt² x Vb') / (Vas' x Qt'²)
kbv = Vb' / (Vas' x Qt'²)
kdv = Vas x Qt²
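The two constant pairs can be sketched as follows (function names are mine, not from the article). f3 = kbf x kdf, and Vb = kbv x kdv, with kbf and kbv read from the alignment tables.

```python
def kdf(fs, qt):
    """Driver constant for f3: kdf = fs / Qt."""
    return fs / qt

def kbf_required(f3, fs, qt):
    """Enclosure constant needed to hit a target f3 (pick the table row
    with the closest kbf): kbf = f3 / kdf."""
    return f3 / kdf(fs, qt)

def box_volume(kbv, vas, qt):
    """Vb = kbv x kdv, where kdv = Vas x Qt² and kbv comes from the
    alignment table."""
    return kbv * (vas * qt ** 2)
```

For the Peerless example used later (fs = 27.4Hz, Qt = 0.281, Vas = 81.58 litres per driver), kdf is 97.5, a 30Hz target f3 needs kbf of about 0.308, and alignment No. 1 (kbv = 4.789) gives close to 61.8 litres for a pair of drivers.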
Table #1 has the values of kbv and kbf for the QB5 alignments (see the alignment table in the 'Satellites and Subs' article).
Table 1 - kbv and kbf Values for the QB5 Class I Alignments

Qt | kbv | kbf | Qt | kbv | kbf | Qt | kbv | kbf
0.324 | 4.789 | 0.303 | 0.445 | 9.182 | 0.445 | 0.514 | 7.101 | 0.529
0.318 | 4.702 | 0.318 | 0.425 | 7.372 | 0.490 | 0.517 | 7.099 | 0.533
0.311 | 4.618 | 0.329 | 0.415 | 6.783 | 0.505 | 0.520 | 7.112 | 0.536
0.303 | 4.546 | 0.338 | 0.405 | 6.351 | 0.528 | 0.523 | 7.127 | 0.540
0.295 | 4.464 | 0.345 | 0.394 | 6.009 | 0.541 | 0.526 | 7.157 | 0.543
0.287 | 4.378 | 0.351 | 0.384 | 5.685 | 0.554 | 0.530 | 7.177 | 0.547
0.279 | 4.295 | 0.356 | 0.373 | 5.425 | 0.565 | 0.534 | 7.201 | 0.550
0.271 | 4.217 | 0.361 | 0.362 | 5.202 | 0.575 | 0.538 | 7.258 | 0.554
0.263 | 4.147 | 0.365 | 0.352 | 4.982 | 0.585 | 0.543 | 7.294 | 0.558
0.255 | 4.088 | 0.368 | 0.342 | 4.792 | 0.594 | 0.549 | 7.340 | 0.562
0.247 | 4.041 | 0.370 | 0.331 | 4.659 | 0.599 | 0.555 | 7.412 | 0.566
0.240 | 3.975 | 0.373 | 0.322 | 4.498 | 0.608 | 0.562 | 7.503 | 0.570
0.233 | 3.921 | 0.376 | 0.312 | 4.392 | 0.612 | 0.569 | 7.626 | 0.574
0.226 | 3.880 | 0.378 | 0.303 | 4.280 | 0.618 | 0.577 | 7.781 | 0.577
0.219 | 3.852 | 0.379 | 0.294 | 4.190 | 0.623 | 0.587 | 7.973 | 0.581
0.213 | 3.804 | 0.381 | 0.286 | 4.092 | 0.628 | 0.598 | 8.249 | 0.584
0.207 | 3.767 | 0.383 | 0.278 | 4.013 | 0.632 | 0.610 | 8.614 | 0.587
0.201 | 3.742 | 0.384 | 0.270 | 3.953 | 0.635 | 0.625 | 9.143 | 0.589
0.196 | 3.692 | 0.386 | 0.262 | 3.910 | 0.637 | 0.643 | 9.913 | 0.592
0.190 | 3.692 | 0.386 | 0.255 | 3.851 | 0.640 | 0.664 | 11.228 | 0.594
0.185 | 3.665 | 0.387 | 0.249 | 3.778 | 0.645 | 0.691 | 13.962 | 0.595
As an example, take the Peerless driver used below (fs = 27.4 Hz, Qt = 0.281). Its driver constant is ...

kdf = fs / Qt = 27.4 / 0.281 = 97.51

For a target f3 of 30Hz, this needs ...

kbf = f3 / kdf = 30 / 97.51 = 0.308
From the QB5 Class I alignment table, the most suitable alignment is No. 1, with kbf = 0.303, so ...
√Cn = Qt' / Qt = 0.324 / 0.281 = 1.153, therefore Cn = 1.329

Multiplying or dividing the a, h and fa data in the QB5 table by Cn or √Cn as appropriate gives the transformed values ...

a = 1.989 x 1.329 = 2.64
h = 0.995 x 1.153 = 1.147
fa = 0.944 x 1.153 = 1.088

Vb = (2 x 81.58) / 2.64 = 61.8 litres (Vas here is for a pair of drivers)
Fb = 27.4 x 1.147 = 31.4 Hz
Fa = 27.4 x 1.088 = 29.8 Hz
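The whole chain can be reproduced with a few lines of arithmetic, applying the √Cn transforms stated earlier (the variable names are mine; the input values are the article's):

```python
# Worked example: Peerless driver scaled onto QB5 Class I alignment No. 1
qt_seed, qt = 0.324, 0.281            # seed alignment Qt' and driver Qt
fs, vas = 27.4, 2 * 81.58             # fs in Hz, Vas (pair of drivers), litres
a_seed, h_seed, fa_seed = 1.989, 0.995, 0.944

root_cn = qt_seed / qt                # sqrt(Cn) = Qt' / Qt
a = a_seed * root_cn ** 2             # alpha = alpha' x Cn
vb = vas / a                          # Vb = Vas / alpha, litres
fb = fs * h_seed * root_cn            # box tuning, Hz
fa = fs * fa_seed * root_cn           # auxiliary filter frequency, Hz
```

This yields roughly 61.8 litres, Fb of 31.4Hz and Fa of 29.8Hz, the last matching the filter frequency quoted for the WinISD result below.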
Putting these figures into WinISD gives the result ...
Figure 3 - WinISD Plot of Scaled Peerless Box
By using compliance scaling and filter assistance, this is a much better result than would normally be expected. The filter is a second order high-pass with a -3dB frequency of 29.8Hz and a Q of 2.141.
Should anyone simply select the same driver and run WinISD, the suggested 'optimum' box is actually far from optimal, having a -3dB frequency of almost 47Hz. While the box is smaller, it will be sadly lacking in the bottom end.
Small's power handling parameter 'kp' is a very useful tool for determining the excursion limited power handling of a particular system. I have yet to find any reference to it on the Net except in my QB5 alignment article. I suspect this is because kp is difficult and tedious to calculate, and programs such as WinISD provide a plot of maximum SPL instead. However, I still like to check by calculating kp, and the simplest way I have found is as follows ...
Divide the peak excursion for your transformed alignment by that of the 'correct' one, and substitute into Small's expression [ 5, p.439 ] ...
kp = 0.425 / ((f3 / fs)⁴ x J(xmax)²)

Where kp' is the 'correct' seed value ...

J(xmax)' = (0.425 / ((f3' / fs')⁴ x kp'))^0.5
J(xmax) = (pkx / pkx') x J(xmax)'
kp' = 0.425 / ((f3' / fs')⁴ x J(xmax)'²)
In our first example the low peak excursion is 1.69mm and the high peak 0.873mm, an average of 1.28mm. The 'correct' alignment has a low peak of 0.532mm and a high peak of 0.977mm, averaging 0.755mm.

The correct alignment has a kp of 9.349, giving J(xmax)' of 0.131; J(xmax) is then 0.222, giving kp = 5.98.

This gives a conservatively rated output of around 105dB peak for a pair before the linear excursion limit is reached. The SLS 213 driver will give more output before excursion limiting, but at the expense of a box twice as large. It should also be noted that since the Class I alignments have an average of 6dB of boost around f3, they need around four times the power that the nominal efficiency would indicate - this is the price we pay for a small box.
We can increase Qt by increasing Qe, and this can be done by raising the driving amplifier's output impedance by means of current feedback, [ 6 ] (Rod's article).

If we write Qe as [ 7 ] ...

Qe = Rvc / (ws x Lces)

We can increase Qe by adding a source resistance, Ro, to Rvc ...

Qe' = (Rvc + Ro) / (ws x Lces)

If the Qe we need is Qe', it is given by ...

Qe' = 1 / (1 / Qt' - 1 / Qm)

Where Qt' = the required Qt. The required source resistance is then ...

Kl = Rvc / Qe
Ro = (Qe' x Kl) - Rvc
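The two steps can be sketched as follows (my own naming). The Qt' and Qm values in the usage below are those of the JBL K140 example later in the article; the Rvc and Qe passed to the second function are hypothetical placeholders and must come from the actual driver's data.

```python
def qe_required(qt_target, qm):
    """Qe' needed so the target Qt results: Qe' = 1 / (1/Qt' - 1/Qm)."""
    return 1.0 / (1.0 / qt_target - 1.0 / qm)

def source_resistance(qt_target, qm, rvc, qe):
    """Required amplifier source resistance: Ro = (Qe' x Kl) - Rvc,
    where Kl = Rvc / Qe (equal to ws x Lces)."""
    kl = rvc / qe
    return qe_required(qt_target, qm) * kl - rvc
```

For example, qe_required(0.271, 5) returns the 0.287 used in the K140 example; source_resistance(0.271, 5, 5.0, 0.25) uses hypothetical Rvc/Qe values purely to show the arithmetic.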
The following diagram shows the essentials of modifying the amplifier's output impedance, giving a circuit that is relatively easily scaled for any loaded gain and a defined output impedance. The calculations for this are fairly straightforward, but only if you can accept an apparently arbitrary loaded gain. If you want to specify the loaded gain as well, the calculations become extremely tedious. While a far better mathematician than the editor may be able to derive a suitable equation, I was unable to do so.

Although I did try this with a spreadsheet (and that's where the formulae I eventually used came from), it is an iterative process. Those wishing to experiment are encouraged to do so. The formulae shown below work, and the results are an exact science. Calculating the values is anything but, unfortunately.
Essentially, the calculations involve solving for an unbalanced Wheatstone bridge network, with specific desired end results. Of course you can always cheat and use a series resistor of the appropriate value, but this will have to dissipate (and waste) considerable power with any amplifier capable of reasonable output. I can't recommend using a series resistor unless the required output impedance is no more than one ohm.

In short, the determination of the correct values is not easy. Because we generally need to specify the output impedance as well as the normal loaded gain (so the amp is in line with others in the system), we end up with too many unknown variables - the circuit may look simple, but calculations for it are not.
Figure 4 - Amplifier With Defined Zout
As can be seen, there is minimal additional complexity to achieve this result, and in my experience the final exact impedance is not overly critical, given the 'real world' variations of a typical loudspeaker driver.
The no-load voltage is 28.5V with an input of 1V, and this drops to 19.2V at 8 ohms, and 14.5V with a 4 ohm load. These voltages are measured across the load, ignoring the voltage drop of the series feedback resistor. Note that a resistive load is assumed, but a speaker has an impedance that varies with frequency.

From this, we can calculate the exact output impedance from ...
IL = VL / RL (where IL = load current, VL = loaded voltage and RL = load resistance)
Zout = (VU - VL) / IL (where Zout = output impedance, VU = unloaded voltage)

IL = 19.2 / 8 = 2.4 A
Zout = (28.5 - 19.2) / 2.4 = 9.3 / 2.4 = 3.875 Ohms
Note that I have deliberately not developed a single formula to calculate the impedance, because no-one will remember it. By showing the basic calculations (using only Ohm's law), it becomes easier to understand the process and remember the method used. The calculation gives Zout as 3.875 ohms, which is in agreement with a simulation, and it will be more than acceptable for the normal range of desired impedances. It isn't complex, but it does require either simulation or a bench test to determine the loaded and unloaded voltages. Results will be within a couple of percent of the theoretical value, which is more than good enough when dealing with speakers.
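The method just described is pure Ohm's law, and can be sketched as below (the function names are mine). The second function is the Thevenin model run in reverse, to predict the loaded voltage for any other load.

```python
def output_impedance(v_unloaded, v_loaded, r_load):
    """Zout from measured unloaded and loaded output voltages."""
    i_load = v_loaded / r_load              # IL = VL / RL
    return (v_unloaded - v_loaded) / i_load # Zout = (VU - VL) / IL

def loaded_voltage(v_unloaded, z_out, r_load):
    """Thevenin check: predict the loaded voltage for any load."""
    return v_unloaded * r_load / (r_load + z_out)
```

With the measured 28.5V unloaded and 19.2V into 8 ohms, this returns 3.875 ohms, and the same model predicts the 14.5V quoted above for a 4 ohm load.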
The circuit above is almost identical to that shown in the article / project Variable Amplifier Impedance. By varying R2, R3 and R4, it is possible to achieve a wide range of impedances that will be usable in this application. The circuit can be made variable, however this is not normally useful except for ongoing design and experimentation.

The loudspeaker driver's nominal impedance is used to determine the loaded gain. The preamp is useful because nearly all ESP amps are designed with a gain of 23 (approx. 27dB). However, when you start playing with the output impedance, you'll need a dedicated preamp in front of the power amp, with a gain pot (or trimpot) that lets you adjust the final gain. Mostly, the preamp will only need a gain of about two, and the output is then reduced to get the final gain you need (namely 23 with the nominal impedance).

As shown above, the values apply for the following example. R1 is 22k as used in most ESP amplifier designs, and the feedback resistor (R4) is 0.2 ohm. Using 2.4k (2 x 1.2k in series) for R2 and 1.2k for R3, the output impedance is 3.875 ohms, and the loaded gain is 16.1 with an 8 ohm load.
This scheme is especially useful for increasing the Qt of low Qt, high efficiency drivers such as the JBL K140. This driver is still widely available as a re-cone, and is especially suited to bass guitar applications. If we put it into a filter assisted alignment the result is increased power handling, and increasing Qt has the advantage that the driver's high efficiency is preserved while still achieving a low enough f3 in a small box. We want f3 = 40Hz for standard bass guitar tuning.

Looking at the QB5 Class I alignments, the required f3 / fs of 1.333 is achieved by alignment No. 7. This needs a Qt of 0.271, and a box of 92 litres ...
Qe' = 1 / (1 / 0.271 - 1 / 5) = 0.287
Ro = 1.68 Ohms
The above values are near perfect, with only the smallest discrepancy, which is of no consequence in the final design. WinISD gives the following ...
Figure 5 - WinISD Plot of JBL K140 With Modified Qt
This is achieved by setting the amplifier for an output impedance of 1.7 Ohms, with a second order 40Hz high-pass filter having a Q of 1.658. It is worth noting that using only one or two of the techniques described will not work - the final result is obtained only by combining compliance scaling, elevated source impedance and filter assistance as a complete system solution.
Thiele points out that, all other things being equal, doubling Qt results in a +6dB peak at the resonant frequency [ 1 ], i.e. ...

20 Log(Qt / Qt')

From this we can fit a driver to a particular filter assisted alignment by modifying the fa damping by the amount ...

dBfa' = dBfa - 20 Log(Qt / Qt')
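The damping correction above amounts to two one-line functions (my own naming; the example Qt values below are illustrative, not from the article):

```python
import math

def qt_boost_db(qt, qt_seed):
    """Peak change at resonance from a Qt change: 20 log(Qt / Qt')."""
    return 20.0 * math.log10(qt / qt_seed)

def adjusted_fa_damping(dbfa_seed, qt, qt_seed):
    """dBfa' = dBfa - 20 log(Qt / Qt')."""
    return dbfa_seed - qt_boost_db(qt, qt_seed)
```

As a check, doubling Qt (say from 0.3 to 0.6) gives very close to the +6dB Thiele quotes, so the auxiliary filter damping is reduced by the same amount.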
With the advent of five and now seven speaker surround systems, there is an increasing need for small speaker systems with good power handling, and the all important SAF index improves with less intrusive enclosures. A typical high quality 150mm bass/mid driver is the Vifa P17WJ. In this case we fit it to a box that will give an f3 of around 80Hz, making it suitable for use as a satellite, and use a QB5 Class II alignment, which optimises excursion limited power handling.
If we use the QB5 Class II alignment No. 14, we have ...

Vb = 12.6 litres, fb = 68.6 Hz, fa = 78.1 Hz, 1/Qfa = 1.9
Giving the WinISD plot ...
Figure 6 - WinISD Plot of QB5 Alignment Response
In some cases we only want to use a simple passive first order filter; in others, two passive first order sections, which gives a fixed auxiliary filter damping factor of two. The latter is characteristic of the auxiliary filters in the QB5 Class III alignments. Where only a single first order section can be used, the B5 alignments [ 8 ] are useful; these are reproduced in Table 2.
Table 2 - B5 Alignments

Qt | Vas/Vb | fb/fs | fa/fs | f3/fs | kbf | kbv
1.320 | 0.0438 | 0.695 | 0.431 | 0.651 | 0.859 | 13.10
1.230 | 0.0561 | 0.709 | 0.455 | 0.66 | 0.812 | 11.78
1.120 | 0.0711 | 0.725 | 0.485 | 0.671 | 0.752 | 11.21
1.010 | 0.0914 | 0.745 | 0.526 | 0.686 | 0.693 | 10.725
0.881 | 0.126 | 0.774 | 0.588 | 0.712 | 0.627 | 10.225
0.791 | 0.163 | 0.801 | 0.645 | 0.736 | 0.582 | 9.805
0.727 | 0.200 | 0.824 | 0.694 | 0.76 | 0.553 | 9.460
0.666 | 0.251 | 0.852 | 0.746 | 0.79 | 0.526 | 8.920
0.633 | 0.289 | 0.870 | 0.781 | 0.811 | 0.513 | 8.636
0.564 | 0.406 | 0.917 | 0.862 | 0.871 | 0.491 | 7.743
0.529 | 0.499 | 0.948 | 0.917 | 0.915 | 0.484 | 7.161
0.508 | 0.567 | 0.967 | 0.943 | 0.944 | 0.480 | 6.834
0.497 | 0.614 | 0.979 | 0.962 | 0.964 | 0.479 | 6.594
0.489 | 0.645 | 0.987 | 0.98 | 0.977 | 0.478 | 6.484
0.478 | 0.701 | 1.00 | 1.00 | 1.00 | 0.478 | 6.243
0.435 | 0.97 | 1.05 | 1.088 | 1.10 | 0.479 | 5.448
0.392 | 1.39 | 1.10 | 1.20 | 1.25 | 0.490 | 4.682
0.346 | 2.03 | 1.13 | 1.348 | 1.48 | 0.512 | 4.115
0.328 | 2.36 | 1.13 | 1.42 | 1.60 | 0.525 | 3.934
0.317 | 2.55 | 1.13 | 1.466 | 1.68 | 0.533 | 3.902
0.309 | 2.72 | 1.12 | 1.506 | 1.76 | 0.544 | 3.850
0.298 | 2.94 | 1.12 | 1.56 | 1.86 | 0.554 | 3.830
A driver that is popular for computer speakers is the Tang Band W3-926S. Using a B5 alignment with f3 = 100Hz gives a box of 4.7 litres, tuned to 105Hz. This achieves its rated peak excursion with one watt of input. If you put this driver into a sealed box the maximum output at 100Hz is 77.9dB; with the B5 box this is raised to 89.9dB. Using a QB5 Class III box gives a rather large 7.5 litres.

It should also be noted that putting a capacitor of between 150µF and 220µF in series with the driver only changes the frequency response by around 2dB. As shown in the Matlab plot of Figure 7, the frequency response with 220µF is not as smooth as with an isolated filter, but is within acceptable limits.
Figure 7 - Capacitor Coupling (Non-Isolated Filter)
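The corner frequency of such a series capacitor can be estimated with the usual first-order relation. This is my own sketch; treating the driver as its 8 ohm nominal impedance is an assumption, since a real driver's impedance varies considerably with frequency.

```python
import math

def series_cap_corner(c_farads, z_nominal=8.0):
    """-3dB frequency of a capacitor in series with a driver treated
    as a resistive z_nominal: f = 1 / (2 pi Z C). Only a first
    approximation, since real driver impedance is not constant."""
    return 1.0 / (2.0 * math.pi * z_nominal * c_farads)
```

For 220µF into a nominal 8 ohms this gives roughly 90Hz; the smaller 150µF capacitor gives a proportionally higher corner frequency.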
In some instances it is convenient to provide one isolated and one non-isolated filter, i.e. by means of an input coupling capacitor on the amplifier, and a capacitor in series with the driver. Using a first order isolated filter at twice the non-isolated filter's f3 gives the plot of Figure 8.
Figure 8 - Isolated Plus Non-Isolated Filter
As illustrated in this article it is possible to fit just about any driver to just about any alignment using the three techniques outlined, or indeed a combination of them.
All efforts have been made to keep the calculations of the simple 'plug in the numbers' variety; I apologise where this is not possible. Anybody is at liberty to turn this article into a spreadsheet or some other easy to use form, but please don't ask me for it - programming computers is something I dislike, and avoid at all costs (that's the author's comment, not mine).
Elliott Sound Products - Speaker Current Drive
This topic has been looked at in a couple of articles/ projects, but it's something that creates problems for people, as it always seems to sound 'better' when applied to any given loudspeaker driver or system. In the article/ project Variable Amplifier Impedance (aka Project 56) the basics of both positive and negative impedance are covered, but here we will only look at positive impedance, because negative impedance has too many ... negatives. As discussed in the article, negative impedance is inherently unstable, and that's not something you want when driving a loudspeaker.
This is for the experimenter, and the results can be worthwhile if it's done properly. For example, for any number of reasons, it may be advantageous to build a speaker box that's a little too big for the driver (as determined by the Thiele/ Small parameters, and modelled in WinISD or similar). If the driver's Qts is increased by driving it with a higher than normal (i.e. greater than zero ohms) impedance, everything falls back into place, and a useful extension to the low frequency -3dB point can be achieved. Surprisingly, this can work with vented (ported) enclosures as well, but the results are less predictable. Damping is especially troublesome with a vented box.
The idea of being able to vary the output impedance of a power amplifier has been around for a long time. I have used these techniques since the early 1970s in various designs, and much as I would like to be able to claim otherwise, I was by no means the first. In some cases (especially in the early years), it's likely that the high output impedance was 'accidental', in that the makers of some of the equipment weren't at the highest skill levels, and just copied what someone else had done before them. The end result worked, so it wasn't given another thought.

Current drive (or at least a modified form thereof) is used to drive spring reverb units, and various other transducers where a constant current is either preferable or essential, and where voltage drive is inappropriate. For many years (even before transistor amps), voltage drive has been what we all strive for with power amplifiers - a perfect (ideal) voltage amplifier has zero ohms output impedance, and the amplitude does not change as the load varies. Loudspeakers are very non-linear loads, and the impedance will change at different frequencies for all sorts of reasons. Voltage drive has an advantage, because it is easy to achieve (down to well below 0.2Ω) and, most importantly, it is easy to achieve consistent results.

'True' current drive has a high impedance, which may be several thousand ohms or more. Despite simplistic circuits you might come across, it can be difficult to achieve, and amplifiers designed for such high impedances should ideally be installed in (or on) the loudspeaker cabinet. Disconnection of the speaker can result in the amplifier's output voltage swinging to one or the other rail voltage, because there is no feedback. Extreme care is needed to ensure that the amplifier's gain is properly matched to the speaker driver, because the two are inextricably linked. Even a small change of speaker characteristics can cause a fairly substantial level change, at one or more frequencies.

This article mainly covers 'mixed mode' feedback, which provides a defined source impedance to the drivers. This is not current drive, and isn't intended to be. There are some 'interesting' challenges to building an amplifier that has a particularly high output impedance, not the least of which is the likelihood of DC offset which ideally should be maintained at less than 50mV (or less than 10mA) into a nominal 8Ω load.

Based on available literature, the use of current drive (very high amplifier impedance) does appear to improve some aspects of loudspeakers. However, it's entirely up to readers to look through the references and make a decision for themselves. I make no firm claims one way or the other, but merely look at the current 'state of the art', and examine ways to achieve a desired output impedance. To some extent, there seems little doubt that current drive does improve performance, but what matters is whether it can be made to work in a real loudspeaker system. Of course, it also depends on whether the 'improvements' are audible, assuming that identical frequency response can be achieved with both voltage and current drive. This is difficult because so many aspects of the driver(s) change dramatically depending on the source impedance.
A loudspeaker responds to current, not voltage. When a voltage is impressed across the voicecoil, a current flows that is directly related to the impedance at that frequency, and it is the current flow that creates the voicecoil movement. A moving coil loudspeaker will generate a back-EMF whenever the impedance is inductive, seen as impedance rising with increasing frequency. The back-EMF opposes the applied current. Above resonance (impedance falling with increasing frequency), the speaker appears as a capacitive load. These complex interactions are responsible for the impedance curve seen for any loudspeaker. Adding a vent to the enclosure adds to the complexity by including another resonance, this time due to the enclosure tuning and dictated by the air mass within the enclosure and vent.

A typical driver is resistive at two frequencies only. At the resonant peak the impedance is purely resistive, and the same is true at a frequency between resonance and where the impedance starts to rise due to the voicecoil's semi-inductance. The voicecoil is not a 'true' inductance because it's influenced by eddy currents in the steel pole pieces. This resistive frequency is the lowest impedance shown on the curve, and is usually between 200Hz and 400Hz. The voicecoil along with the attached cone and spider (etc.) form a mechanical resonance that is reflected back to the source. There are non-linearities in all the mechanical components, and further electrical non-linearities are caused by the magnetic structure.

The so-called 'damping factor' quoted by amplifier makers only has an effect at the speaker's resonance. This is the point where the impedance is at its greatest, and by applying an effective short circuit (by the amplifier), the resonance is damped. However, the damping is limited by the voicecoil resistance, which is in series with the 'resonant circuit'. This resonant circuit is seen in Figure 1, with the components shown as Lp, Cp and Rp. Rp is a special case, and is basically the value of impedance at resonance (plus the voicecoil resistance (Rvc) to be exact). It's due to mechanical losses in the cone, surround and spider. The woofer shown in Figure 1 has an impedance of 47Ω at resonance. Note that the rather low resonant frequency is simply due to the model used, and the actual frequency is of little consequence. The same effects are produced regardless.

Voltage drive (the most common by far) maintains a constant voltage across the load, regardless of impedance variations. Consider the simple loudspeaker system shown below. The woofer and tweeter use a simple passive series crossover network, consisting of L1 and C1. The equivalent circuits of the two drivers are included, and while these do not represent any particular drivers, they are reasonably close to 'typical' values that you might determine by analysis.
Figure 1 - Two Way Loudspeaker Schematic
The crossover is at 3.17kHz, and is a relatively conventional Linkwitz-Riley 12dB/ octave design. It includes compensation for the tweeter's resonance, as well as impedance compensation for the woofer. This ensures that the woofer's impedance remains flat across the crossover frequency to prevent response aberrations. The impedance compensation networks are indicated on the drawing. There are several articles on the ESP site that deal with crossover networks, and for this exercise we'll stay with this 12dB/ octave network. As always, one driver must be connected in reverse phase due to the phase behaviour of the crossover itself.
Figure 2 - Two Way Loudspeaker Impedance Curve
The impedance curve is much as one would expect, and when this speaker is driven from a voltage amp (low Z out) it will (or should) sound just the way you'd hope for. The two electrical signals (woofer + tweeter) sum flat when driven by a voltage amplifier. We need to examine the power delivered to the system, so first we'll look at using 'conventional' voltage drive.
If we assume a nominal power of 1W (2.83V RMS into 8Ω), the power at 200Hz is 1.44W because the impedance is less than 8Ω. At woofer resonance (39Hz), the impedance is 46Ω, so power is down to 174mW. Hopefully, the resonant boost obtained will mean that the level isn't too far down (-3dB is expected for a sealed box). At 3kHz there's another peak (12.6Ω), so power is reduced to 636mW at the crossover frequency, possibly resulting in an audible dip at that frequency. When we get to 20kHz, the impedance is only 5.4Ω, so power is greater, at 1.48W.

To keep everything the same with 'pure' current drive (effectively infinite Zout), the current at 200Hz needs to be 508mA (close enough). This current will be forced into the system at any frequency, so at the woofer's resonance the power is now 11.9W (ouch), at 3kHz it's 3.25W, and back down to 1.44W at 20kHz. It's fairly obvious that the result will not sound as it should. However, the bass boost and increase in 'presence' at 3kHz may give the impression of 'better' bass and treble. By using a modified impedance, it's (almost) possible to maintain fairly consistent power regardless of impedance, but will that make the system sound any better?
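The voltage-drive vs. current-drive comparison above is just P = V²/Z against P = I²Z, and can be sketched as follows. The impedance figures are read from the text; the 5.56 ohm value at 200Hz is my inference from the quoted 1.44W at nominal 1W drive.

```python
# Power into the Figure 2 system under constant-voltage vs constant-
# current drive, at the spot frequencies discussed in the text.
v = 2.83                               # volts RMS (1W nominal into 8 ohms)
z = {200: 5.56, 39: 46.0, 3000: 12.6, 20000: 5.4}   # ohms (inferred/quoted)

p_voltage = {f: v ** 2 / r for f, r in z.items()}   # constant voltage: V^2/Z
i = v / z[200]                                      # ~0.509A fixed at 200Hz
p_current = {f: i ** 2 * r for f, r in z.items()}   # constant current: I^2 Z
```

This reproduces the 174mW and 636mW figures for voltage drive, and the roughly 11.9W delivered at the 46Ω resonance peak under current drive.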
According to a few articles on the Net, no-one should use voltage drive. This is a somewhat naive approach for a number of reasons, not the least of which is that everyone designs loudspeaker systems with the express intention that they will be driven by voltage amplifiers. Crossover networks are designed expressly for 'conventional' voltage amplifiers, as are the loudspeakers used in the enclosure. Multi-way systems (3-way or more) become something of a nightmare to design for a high impedance source, and an amplifier capable of very high output impedance (at least 10 times the highest speaker impedance at resonance) is also a difficult proposition. It can be done, but no commercial systems that I know of do so.

Once an impedance other than zero or infinity is used, the calculations become a great deal more tedious and the results are less predictable. Most of the time, performing said tedious calculations or simulating the results will not be useful, so it becomes either a subjective assessment or the results have to be measured. Unfortunately, the measurements are also tedious (and somewhat error prone unless you have an anechoic chamber handy). I'll save you the trouble - mostly, the answer is 'maybe'. With a system having a flatter impedance curve overall there will be an increase in bass output, and while a bass boost might sound 'better' (at least initially), usually it's not. Where a modified impedance can be most useful is when a system is biamped or triamped, with the modified impedance usually applied to the woofer and/ or midrange driver. This is easily done using the circuit shown below.

Lest anyone be misled by some on-line material, I suggest the following experiment. Disconnect the power amplifier from one of your speakers (the amp will, of course, be turned off). Lightly but sharply tap the woofer cone, and listen to the resonant sound of the decay. In some cases it will be more audible if you can place your ear near the vent (if applicable). Almost without exception, there will be a 'boomy' single note bass frequency that should be quite audible. Now, join the speaker terminals with a piece of wire and repeat the test. The resonant 'boominess' should be audibly reduced, indicating that the amplifier does indeed apply damping to the loudspeaker. It might not be as great as amplifier specifications claim, but the damping effect is almost always audible.

Because any current drive amplifier has a significant (non-zero) output impedance, it should be immediately obvious that without amplifier damping, the speaker will sound boomy. In some cases, you may even hear 'one note' bass with music - i.e. bass notes at the right frequency will be heavily accented, while other bass notes will be much quieter. This is not what we want to achieve, so the enclosure itself must be modified to include a great deal more damping material than otherwise to suppress the unwanted resonance(s). This can have some negative effects on the port tuning and box resonance (both of which are important for any tuned system).
Voltage drive is firmly established as the #1 method for powering loudspeakers. The drivers are designed and manufactured with that in mind, and the Thiele-Small parameters are invariably quoted with the assumption that the driver will be used with a conventional (voltage) amplifier. Where response correction is needed (whether for artistic or practical purposes), the most common methods are equalisers based on 'traditional' analogue techniques or (more commonly these days) digital, using DSP - digital signal processing. There's no real reason to think that using EQ will produce a result that's any different from an amplifier with a defined output impedance. Using a voltage amp with EQ retains the amplifier's damping of the speaker.

In most cases, speakers rely on at least some degree of acoustic damping provided by the amplifier, although for very high power systems that may run speakers in parallel with a combined nominal impedance of perhaps two ohms, amplifier damping is seriously curtailed by the resistance of the speaker leads. When high output impedance is used, the enclosures must be very well damped acoustically, because the amplifier provides no useful damping at all. The situation is different for guitar (and some other instrument) amplifiers, where players prefer the added 'tonality' the speakers add when under damped. Many are accustomed to using valve (vacuum tube) amps, most of which have a relatively high output impedance because there's often very little negative feedback.

The large amount of negative feedback used in nearly all transistor amplifiers reduces the open loop output impedance dramatically. Any amplifier with a high open loop gain and significant feedback reduces the intrinsic output impedance, and that's used in most amplifiers to create the low output impedance that's expected in the market. It's not uncommon for power amps to have Zout at the amplifier terminals (not including any wiring or connectors) of less than 10mΩ.
Figure 3 - Seas P17RC In 8 Litre Box (Voltage Drive)
The Seas P17RC driver was selected from the database of WinISD-Pro as an example only. The program will suggest a 10.62 litre box, but that can be reduced to 8 litres with very little change in the -3dB frequency. Unfortunately, the driver only manages to get to 80Hz in either enclosure, so a larger box is better. With voltage drive, that doesn't change the -3dB frequency at all, so the next thing to do is increase the source impedance. With 7Ω, there's a little peaking (+1.19dB at 110Hz), and the -3dB frequency is reduced to 60Hz - a useful improvement. Compared to room effects, the small peak is probably of little consequence (but this is for the designer to decide).
Figure 4 - Seas P17RC In 12 Litre Box (7 Ohm Drive)
The response with 7 ohm drive is shown above, with the enclosure increased to 12 litres. Because the amplifier provides little damping, the box needs to be well stuffed with appropriate material to ensure that there's little 'overhang' after a transient, but that's easily achieved and should be considered mandatory anyway to minimise internal reflections. The only thing to do now is arrange an amplifier that has an output impedance of 7Ω. By increasing the source (amplifier) impedance, the apparent voicecoil resistance is increased, which results in an increase of the electrical Q (Qes) of the driver. The effective increase of Qes means that the driver performs better in a larger box. However, the 'law of diminishing returns' strikes quickly, so to get down to 50Hz (-3dB) would require a 25 litre box and an amplifier impedance of 12Ω. This is impractical for a number of reasons.
For initial testing, it's easy to simply add a physical resistance in series with the amplifier's output. While this 'wastes' considerable power, it's an easy way to run tests so you can decide whether it's worthwhile to pursue the process to a modified impedance amplifier. The resistor needs to be at least 5W (10W is better if you use a large amp), and the power loss is not important if you are performing low-level listening tests and/ or measurements. By maintaining a reasonable stock of resistors in various values from 2.2 ohms up to perhaps 22 ohms, you can test the theory easily without making changes to the power amp. If you decide that 'elevated' output impedance is advantageous, then you can run final tests with an amplifier with the selected output impedance. Remember that listening tests must be at the same SPL or the results will be skewed towards the configuration that's louder. SPL should be within 1dB overall, but if you can manage better that's preferable (0.1dB is generally considered the optimum level matching).
While the above shows the response with 7 ohms, it's (probably) better to limit the output impedance to around 4 ohms - at least for an initial test. Within reason, you can set up almost any impedance you like, and the exact value isn't particularly critical. Most loudspeaker drivers will have more variation than you'll get with an error of 0.5Ω or so.
Figure 5 - Concept Four Ohm Output Impedance Amplifier
Mixed mode feedback component values must be determined to achieve the desired result. The values shown will achieve Z out of just over 4Ω, but there are practical issues that need to be addressed. The main one is that the 0.22Ω resistor has to be at least 2W and may run hot, so mounting it on the amplifier PCB might not be a good idea. Having it connected using wires isn't a good idea either, because if the connection from R2 to R3 is lost, the amp will almost certainly oscillate and may destroy itself. Most amplifiers (whether discrete or IC types) have a minimum gain that can be used, below which oscillation is likely. It's common that IC amplifiers in particular have a minimum gain requirement of 25dB (a gain of around 18), below which they are likely to oscillate. The Figure 5 circuit cannot achieve this (gain with an 8 ohm load is only 13).
To combat the gain problem, the feedback network has to be arranged so that the minimum gain is always present, regardless of the load's impedance. This is shown in several ESP projects, and the general form (arranged for 4Ω impedance) is shown in Figure 6. While the Figure 5 circuit works, it is not recommended for use with any power amplifier. The following version is tried and tested, and works properly with almost any power amplifier.
Z out = R3 × (R1 + R2) / R2    (where R1, R2 and R3 are in the locations shown in Figure 5)
The above formula isn't especially accurate, but it does allow you to get a rough idea of the output impedance with different values. Figure 5 has a Z out of 4.4Ω based on the formula, but it's actually (almost) exactly 4Ω. While this may seem to be a large error, it's not really worth worrying about. A discrepancy of 10% is neither here nor there for the amplifier, because the speaker will have much greater errors.
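The formula is easy to evaluate as a quick sanity check. The resistor values below are assumptions chosen purely for illustration - R3 is the 0.22Ω current-sense resistor mentioned for Figure 5, and R1/R2 are picked so that (R1 + R2) / R2 = 20, reproducing the 4.4Ω figure the formula predicts; they are not necessarily the actual Figure 5 values.

```python
def z_out(r1, r2, r3):
    """Approximate mixed-mode output impedance: Z out = R3 * (R1 + R2) / R2."""
    return r3 * (r1 + r2) / r2

# Hypothetical values: 0.22 ohm sense resistor, feedback divider ratio of 20.
# The formula gives 4.4 ohms; the measured/simulated value is closer to 4 ohms.
print(z_out(19_000, 1_000, 0.22))
```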
Figure 6 - Practical Four Ohm Output Impedance Amplifier
The practical version is a simple rearrangement of feedback resistances and the addition of a resistor and a capacitor. C1 is there so that the amp doesn't have a huge DC gain, which will cause problems. The value can be increased if you prefer, but the value shown gives a -3dB frequency of under 5Hz for output impedances of 4Ω or less. Up to 470µF will be necessary when R2 is less than 200Ω. R4 ensures that the amplifier has a nominal gain of 23 before the current feedback is connected. R2 is no longer critical, and if disconnected the amplifier works normally without oscillation. Unfortunately, a current amplifier (whether 'true' constant current or mixed feedback) is reliant on the load impedance, so setting the gain can be irksome with any biamped or triamped system. With the values shown, gain is just over 43 (32.7dB) with an 8 ohm load, but of course that changes as the speaker impedance varies with frequency. The minimum gain requirement is met easily, and can only be violated if the amp's load is less than 1.5Ω (not recommended for any amplifier).
Fortunately, it's easy to come up with a formula that comes close for this version, at least for output impedance - it's the same as shown above. Calculating the gain with no load is easy, but working out the gain with a load connected is a great deal more difficult. There are too many simultaneous voltages and currents that combine to reach the end result, so it's easier to produce a table with different values for R2. This is (usually) the only value that needs to be changed, but even then the loaded voltage will always be different as the frequency is changed, because the amp's output impedance is non-zero and the loudspeaker load has an impedance that changes with frequency.
R2     | Output Impedance | Gain - No Load | Gain - 8Ω Load
100 Ω  | 22 Ω             | 243 (47.7 dB)  | 65 (36.3 dB)
120 Ω  | 18 Ω             | 206 (46.3 dB)  | 63 (36 dB)
150 Ω  | 15 Ω             | 169 (44.5 dB)  | 60 (35.5 dB)
180 Ω  | 12 Ω             | 145 (43 dB)    | 58 (35.2 dB)
220 Ω  | 10 Ω             | 123 (42 dB)    | 55 (34.8 dB)
270 Ω  | 8 Ω              | 104 (40 dB)    | 52 (34.3 dB)
330 Ω  | 7 Ω              | 90 (39 dB)     | 49 (33.8 dB)
510 Ω  | 4 Ω              | 66 (36.4 dB)   | 43 (32.7 dB)
680 Ω  | 3 Ω              | 55 (34.8 dB)   | 39 (31.8 dB)
1k Ω   | 2 Ω              | 45 (33 dB)     | 35 (30.8 dB)
Table 1 - Output Impedance And Gain Vs. R2
For low impedances, and especially if the load is 8Ω or more, it will be easier to use a series resistor to set the impedance. For example, if you only need a 2Ω output impedance, a wirewound resistor is a lot simpler than modifying the amplifier. While some power is lost across the resistor, it's generally comparatively low and won't be audible. With a 2Ω resistor in series with an 8Ω load and a 60W amplifier, the resistor would dissipate a bit over 12W (at full continuous power), and you'll 'lose' about 1.9dB. However, the amp's peak voltage swing around the speaker's resonant frequency is barely affected, and it's highly unlikely that you'll even notice the difference. Average power dissipation in the resistor won't exceed 5W with 'typical' programme material.
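The arithmetic behind those figures can be sketched as follows. The ~12W dissipation assumes the amplifier still delivers its rated 60W into the combined 10Ω, while the ~1.9dB loss treats the pair as a simple voltage divider. Both are simplifications - a real speaker's impedance varies with frequency.

```python
import math

R_SERIES, R_LOAD, P_AMP = 2.0, 8.0, 60.0   # ohms, ohms, watts

# Dissipation if the amp delivers its full 60 W into the combined 10 ohm load
i = math.sqrt(P_AMP / (R_SERIES + R_LOAD))            # ~2.45 A
p_resistor = i ** 2 * R_SERIES                        # 12 W in the series resistor

# Level lost at the load, treating the pair as a voltage divider
loss_db = 20 * math.log10(R_LOAD / (R_SERIES + R_LOAD))   # ~-1.9 dB

print(round(p_resistor, 1), round(loss_db, 2))
```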
Note that the above table is approximate - there are small errors that are of little consequence with this approach. The values are close enough for most purposes, and if you are using particularly high impedances, a few ohms of difference is of no account. You can see that the unloaded gain becomes rather extreme for Z out above 10Ω, and the loaded gain may be higher than desirable as well. With a discrete amplifier this can be reduced with some circuit changes, but not with IC amplifiers.
Increasing the value of R4 reduces the gain (both with and without load) and has only a minor effect on the output impedance when it's greater than 10Ω or so. The effect of changing R4 is far more pronounced at low impedances, where R2 is also a comparatively high value. As noted earlier, gain must always be greater than the minimum specified for the amplifier. The suggested value of 1k ensures that the amplifier's gain can never be less than 23 (27dB), unless the load impedance is below 1.5Ω. That represents a fault condition that cannot be allowed to occur during operation. When Z out is greater than 10Ω, there is some 'wriggle' room to reduce the gain by increasing the value of R4. You will have to run tests to ensure that the gain doesn't fall below the minimum required and/ or that the amp remains stable (doesn't oscillate).
It's not hard to see why voltage drive is preferred - the amplifier gain remains the same regardless of the load impedance. With partial current drive (Z out > 0), the amplifier's gain depends on the load impedance, and the amp and speaker must be properly matched or the results are unpredictable. For instrument amps this isn't a problem, because it's just part of 'the sound', and speaker levels don't require matching as they do with a biamped system.
It must be considered that almost without exception, loudspeaker drivers and complete systems are designed based on the assumption that the amplifier has a low (less than 0.5 ohm) output impedance. If driven using current drive (full or partial), the result always sounds different, and because of extra bass (and usually treble), people often equate 'different' with 'better'. They are not equivalent, and the result is almost invariably worse, with uneven frequency response and poor low frequency damping. The only exception is if the speaker enclosure and amplifier are designed 'as one', with the output impedance of the amplifier matched to suit the driver's performance.
Other than for instrument amplifiers (especially guitar and bass), once you decide to use a modified impedance amplifier it becomes an integral part of the loudspeaker. You can no longer mix and match amplifiers, because that will affect the system's response, as shown in Figures 3 and 4. If the speaker system was designed to be driven from a 4 ohm source impedance, the response (especially bass) will be adversely affected if a 'normal' amplifier is used.
Table 1 gives the no-load and 8Ω-loaded gains. These represent voltages measured across the output, and include the voltage drop of the series feedback resistor. Note that a resistive load is assumed, but a speaker has an impedance that varies with frequency. We'll use the values for a 510 ohm resistor as R2 in the formulae below. From this, we can calculate the exact output impedance from ...
I L = V L / R L    (where I L = load current, V L = loaded voltage and R L = load resistance)
Z out = (V U - V L) / I L    (where Z out = output impedance, V U = unloaded voltage, V L = loaded voltage)

I L = 43 / 8 = 5.375 A
Z out = (66 - 43) / 5.375 = 23 / 5.375 = 4.28Ω
I simply used the voltages (gain values) from the table, rather than any actual operating voltage. This makes no difference to the final result. You can subtract the value of R3 from the final result, but it's not worth the effort. Note that I have deliberately not developed a single formula to calculate impedance, because no-one will remember it. By showing the basic calculations (using only Ohm's law), it becomes easier to understand the process and remember the method used. An approximate formula to calculate Z out is shown above. According to this formula, Z out is 4.4Ω. This is not entirely in agreement with the results obtained above, nor with a simulation, but it will be more than acceptable for the normal range of desired impedances and it isn't complex. Results will be within a few percent of the theoretical value, which is more than good enough when dealing with speakers.
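The two-step Ohm's law calculation can be laid out as a short script, using the gain figures from Table 1 in place of actual voltages, exactly as in the text:

```python
# Gains from Table 1 (510 ohm row), standing in for unloaded/loaded voltages
v_unloaded, v_loaded, r_load = 66.0, 43.0, 8.0

i_load = v_loaded / r_load                 # load current: 5.375 "A"
z_out = (v_unloaded - v_loaded) / i_load   # output impedance: ~4.28 ohms
print(round(z_out, 2))
```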
So we have created an amp with an output impedance of 4.28Ω, with very little loss. Just over 0.5W is lost in the 0.1 ohm series feedback resistor with 50W output into 8Ω, but you must use at least a 2W (wirewound) resistor so it can handle the current. To see if this is useful, we will now have a look at what happens when the load impedance doubles or halves.
With a 16 ohm load, the power into the load falls to 36W, or about -1.4dB. Contrast this with a conventional low impedance amp, whose power will fall to 25W (-3dB, or half). When the impedance is reduced to 4Ω, the output power is now 56W (an increase of 0.5dB), while a conventional amp would be producing 100W - an increase of 3dB.
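Those figures follow directly from the source impedance. A minimal sketch, using the nominal 4Ω value (the quoted 4.28Ω gives almost identical results) and treating the loads as pure resistances:

```python
import math

Z_OUT = 4.0                                # nominal source impedance, ohms
V8 = math.sqrt(50 * 8)                     # 20 V across 8 ohms for 50 W
V_SRC = V8 * (8 + Z_OUT) / 8               # source voltage behind Z_OUT: 30 V

def load_power(r_load):
    """Power delivered to r_load from a source with impedance Z_OUT."""
    v = V_SRC * r_load / (r_load + Z_OUT)
    return v * v / r_load

for r in (4, 8, 16):
    p = load_power(r)
    print(r, round(p, 1), round(10 * math.log10(p / 50), 2))
```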
There is no magical impedance that will give the same power into any load from double to half the nominal, but about 4Ω for a nominal 8 ohm system comes close. I am not about to test all possibilities, but having experimented with the concept for many years I am quite convinced that there are practical benefits to the use of modified current drive, where the impedance is defined. The exact impedance will depend to a very large degree on just what you are trying to achieve. It's not a panacea for anything of course, but it can be used to advantage when applied properly.
Measuring the output impedance is easy, at least when it's 4Ω or greater. With no load, apply a sinewave input, and set the level to something convenient (e.g. 8V peak-peak, which fits on a scope screen nicely). Next, apply a load that's around the value you expect for output impedance. The level should drop to exactly half when the load is connected. For example, look at the values for an 8 ohm impedance in Table 1. With no load, the gain is 104, falling to 52 with an 8 ohm load - output impedance is therefore 8Ω.
It's harder when the designed output impedance is low (less than 4Ω), because you risk damaging the amplifier with very low load impedances. It can still be done, simply by reducing the input level so the output is (say) 80mV peak-peak. This ensures that the amplifier output current is low - about 28mA with an output of 80mV - so the amp will not be damaged. The alternative is to use the rated load impedance and run some calculations to determine the output impedance, using the formulae shown above.
An important point needs to be made regarding amplifier clipping. When an amplifier's output voltage attempts to go beyond the power supply voltages, the amplifier clips (cuts off) the waveform peaks, and all forms of feedback are inoperable. Feedback (whether voltage or current) relies on the amplifier remaining within its linear range at all times. A current drive or mixed mode amplifier cannot provide more current or voltage than it's designed to provide to the load, and if the maximum current is exceeded the amplifier may be destroyed. Exceeding the linear voltage range simply results in clipping, and the output is limited by the supply voltage - current drive is inoperable with an overdriven amplifier.
It has been suggested that loudspeaker intermodulation distortion is dramatically reduced by using a high impedance source [ 1 ]. One site I looked at some time ago was Russian, and a reader sent me a translation. I have experimented with this idea to some extent, but have been unable to prove that this is the case - at least with the drivers I tried it with.
This does not mean that the claim is false, but I am unable to think of any valid reason that could account for such driver behaviour. It is interesting anyway, and some of you might like to carry out a few experiments of your own. I would be most interested to hear about your results should you decide to test this theory. It's worth remembering that with no exceptions I can think of, loudspeaker drivers are designed for (and tested with) as close to a zero ohm source impedance as possible. All commercial speaker systems are designed to be fed with a normal low impedance power amplifier, because that's considered the 'ideal' case and virtually all commercial hi-fi and sound reinforcement amps are designed for (very) low output impedance.
By adjusting the impedance of an amplifier, the total Q (Qts) of a loudspeaker can also be altered, so driver behaviour in a given sized box can be changed. This can be used to adapt an otherwise unsuitable loudspeaker to a speaker enclosure, but it does have limitations in terms of the overall variation that can be achieved.
More variation can be achieved by virtue of the fact that it is now possible to either retain or increase the power delivered to a loudspeaker at (or near) resonance, so that the ultimate -3dB frequency may be lowered from that theoretically claimed for a loudspeaker/ enclosure combination. Care is needed, since too much additional power will make the speaker boomy, and usually additional internal damping material is needed to compensate for the minimal damping factor provided by the amplifier. With the amplifier output impedance set at 4Ω, damping factor into an 8 ohm load is 2 - a far cry from the figures of several hundred typically quoted. These (of course) fail to take into consideration the resistance of the speaker leads, and loudspeakers themselves are usually compromised by the crossover network, so the damping factor figure is not always as useful (nor as high) as it might seem.
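Since damping factor is just the load impedance divided by the total source impedance, the effect of lead resistance is easy to illustrate. The 0.1Ω cable resistance below is an assumption chosen for the example:

```python
def damping_factor(z_load, z_out, r_leads=0.0):
    """Damping factor = load impedance / (amplifier Z out + lead resistance)."""
    return z_load / (z_out + r_leads)

print(damping_factor(8, 4))            # mixed-mode amp into 8 ohms: DF = 2
print(damping_factor(8, 0.01, 0.1))    # 'DF 800' amp through 0.1 ohm of cable: ~73
```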
Figure 7 - Variable Impedance Amplifier
The version shown above has variable impedance. It can be varied from (close to) zero ohms when the pot wiper is at ground, up to 100Ω with the pot at maximum. Be warned that the gain varies as the pot value is changed, although the variation isn't overly dramatic for most of the range.
The results of using modified impedance can be very satisfying, allowing a useful extension of the bottom end. My own speakers are driven from a 2 ohm amplifier impedance, and there is no boominess or other unpleasantness (the enclosures are exceptionally well damped), but a worthwhile improvement in bass response is quite noticeable for the woofers, and the midrange drivers would otherwise have a slight droop at 300Hz (the crossover frequency between the woofer and midrange).
Partial current drive can also be used with vented enclosures. Care is needed because they are more sensitive to the actual output impedance of the amplifier, but it's well within the abilities of anyone who chooses to experiment to try it out for themselves. WinISD-Pro is very handy for this, as it offers the ability to select the source impedance, something that the standard version doesn't provide. Using the same driver as shown above (Seas P17RC) in a 35 litre box, tuned to 35Hz and with a 3Ω source impedance, the response extends to 35Hz (-3dB) or 42Hz at the -1dB point. That's not bad for a 170mm diameter driver, and it would satisfy many listeners.
It's worthwhile to reiterate the comments made in the Project 56 article about power compression in loudspeakers. This is a natural phenomenon that causes loudspeaker drivers to lose efficiency as the voicecoil heats up, and while it's generally considered a nuisance, it may be the only thing that prevents driver failure in a system that's pushed to the limits. Consider a speaker driver rated at 1,000W - very silly, but they exist in great numbers. If operated with a 1kW amplifier, the average power might be around 500W - assuming some clipping, and heavy signal compression at the mixer output.
After a short while, the voicecoil heats and its resistance rises, so less power can be absorbed from the amplifier. 3dB of power compression is considered to be quite good (see Loudspeaker Power Handling Vs. Efficiency for more details), so the actual average power will drop to around 250W. There is one detail that's worth remembering ...
Power compression may well be the only thing that saves the speaker from failure!
As the voicecoil heats up, the power is reduced, and that alone prevents the temperature from continuing to rise until the voicecoil fails or sets the cone on fire. If the amplifier were to have current drive (and sufficient reserve power - aka 'headroom'), the power will increase as the voicecoil gets hotter, ensuring the demise of the loudspeaker. For this to be 100% effective at destroying the speaker, the amp's output impedance has to be somewhat higher than the speaker's impedance (at least 6Ω for a 4Ω driver).
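The mechanism is easy to demonstrate with copper's temperature coefficient of resistance (roughly 0.39%/°C). The 6Ω voicecoil and 180°C temperature rise below are illustrative assumptions; under voltage drive the power falls as the coil heats, while under ideal current drive it rises:

```python
ALPHA_CU = 0.0039          # copper tempco per deg C (approximate standard figure)

def r_hot(r_cold, delta_t):
    """Voicecoil resistance after a temperature rise of delta_t degrees C."""
    return r_cold * (1 + ALPHA_CU * delta_t)

r_cold, delta_t = 6.0, 180.0       # assumed cold resistance and temperature rise
r = r_hot(r_cold, delta_t)         # ~10.2 ohms when hot

v, i = 20.0, 20.0 / 6.0            # drive levels fixed at the cold-coil values
p_voltage = v * v / r              # voltage drive: power falls (~39 W vs ~67 W cold)
p_current = i * i * r              # current drive: power rises (~113 W vs ~67 W cold)
print(round(p_voltage), round(p_current))
```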
Ultimately, the amp's supply rails limit the maximum power that can be delivered, but there are plenty of amps that are capable of destroying any loudspeaker ever made - especially very high power Class-D amps. It's probably fortunate that it's often somewhere between inconvenient and impossible to convert some Class-D amps to current drive without serious modifications.
Power compression is very real, and if you do anything to 'compensate' (such as using a bigger amp and turning up the volume), driver failure is almost a certainty. Equipping amplifiers with partial current drive would be an excellent way to guarantee driver failures, because the voicecoil self-heating cannot protect the system from excess temperature. Unfortunately, the use of negative impedance has too many other problems, so it can't be used to help protect the drivers.
The effects of voicecoil temperature (and therefore its resistance) also have implications for a passive (inductor and capacitor) crossover network. As the voicecoil resistance rises, so too does the impedance of the speaker, and passive crossover networks have to be designed to match a particular impedance. When the impedance changes, so does the crossover frequency and filter alignment, leading to response anomalies when a system is pushed to its limits. This does not occur with active crossovers of course, but level differences between drivers can (and do) change unless all of the voicecoils are at the same temperature. To say that this is unlikely is a serious understatement.
Note that positive output impedance is very common for guitar (and to a lesser extent, bass) amplifiers, but they are traditionally equipped with speakers that can handle the full output power when the amp is driven into hard clipping, so the output impedance cannot create a situation where the speakers get more power than they can handle safely. It's used as a tonal modifier, allowing the speakers to provide their own colouration to the sound, and is simply an extension of the situation with valve ('tube') amps, most of which have comparatively high output impedance.
There is the potential for power compression to introduce distortion, due to the heating and cooling of the voicecoil. However, this is a relatively slow process (seconds rather than milliseconds), and will not usually generate significant audible distortion, even at the lowest frequencies of interest (below 25Hz). While I have no doubt that it could be measured if one were so inclined, attempting to eliminate (or mitigate) it would be a big mistake for the most part. As already stated, power compression may be the only thing that saves high power loudspeakers from destruction, although using current drive at 'sensible' power levels is unlikely to cause any harm. 'Sensible' in this context means an average power level of perhaps 5-10 watts, implying peak levels of up to 50-100 watts (calculated by voltage, and assuming that the voicecoil's impedance is the nominal value).
With all this info, it would be remiss of me not to include a proper current (aka transconductance) amplifier. They aren't trivial, and the circuit shown does not include complete details of the amplifier itself. The main addition is the DC servo circuit (U1), which is essential to keep DC out of the speaker. Use of a feedback coupling capacitor isn't practical because of the extremely low impedance of the feedback network, which would require an unrealistically large value capacitor. Even the DC servo needs to have a very slow response, simply because the output impedance is very high and unwanted interactions will occur.
Figure 8 - High Impedance Amplifier
The DC servo can't simply be connected to the feedback point either, because without R4 in its new location, the impedance is too low for the opamp to be able to correct any DC error. By using R4 and R7, the opamp can deliver just enough current to pull typical DC offsets (up to 1V or so at the output) back to something less than a couple of millivolts. The output impedance is about 500 ohms - not exactly infinite (as required for 'ideal' current drive), but it's an order of magnitude greater than the typical maximum impedance of a loudspeaker load. If R1 is deleted (meaning that you can also delete R2), Z out increases to over 8kΩ, but there is no reason to expect that this will be beneficial.
You also face some difficulties trying to build an amplifier with an open-loop gain (without load) that may exceed 60 thousand (8kΩ Z out), while retaining flat open loop response and stable operation when loaded. These are not insignificant undertakings, and expecting an off-the-shelf power amp IC to provide good results is wishful thinking at best. The design of an amplifier that satisfies all of the criteria for true current output is daunting, simply because achieving very high output impedance is, to put it mildly, a serious undertaking.
There are some suggested circuits in the second reference, but they are not trivial. The article covers the salient points, and specifically mentions the difficulties involved. It's unknown if anyone other than the authors has built amplifiers using the circuits shown, but be aware that it's a fairly old document and some of the suggested devices may be obsolete or difficult to obtain. It's also important that the final amplifier can not only deliver the current demanded by the load (loudspeaker), but also has sufficient voltage to accommodate the peak voltages, which may be far greater than are typically provided by a voltage amplifier.
It is too easy to make a change such as shown here, and fully believe that the result is an improvement, where in reality (as eventually discovered after extensive listening and comparison) the opposite is true. Positive impedance can produce an improvement in bass response, but the cost can be high - boomy, over-accentuated bass around resonance, usually accompanied by a loss of definition. There will be more freedom for the speaker cone to waffle about after the signal has gone ('overhang'), and it is rare that a speaker driven by a higher than normal impedance will perform well without additional damping material in the enclosure.
There is no doubt that at output impedances in the order of 4 to 6Ω your amp will sound more like a valve amp (but generally with lower distortion), but it is up to you to decide if this is what you really want to do. The technique works well for guitar amps, as it allows the speaker to add its own colouration to the sound, which adds to the overall combination of distortion and other effects to produce pleasing results. For Hi-Fi the case is less clear, and experimenting is the only way you will ever find out for sure.
However, you will need to take great care to avoid inadvertent bias towards one scheme or the other. This is sometimes known as the 'experimenter expectancy effect', in that the experimenter expects to hear a difference, and due to subconscious bias will hear a difference, even if the outcomes are actually identical. There is no known cure, and even the most experienced people (who already know about the effects of subconscious bias) will be caught out anyway. Getting around it with loudspeakers is particularly difficult, because the DBT (double-blind test) methodology is very difficult to implement with large physical enclosures that have to be in the same location so that room effects don't affect the outcome.
I'm unsure just how you can avoid this effect for listening tests, but careful measurements are a more reliable way to determine whether a loudspeaker/ system is better or worse. This doesn't consider the psycho-acoustical phenomena that influence 'the sound' of any speaker system though, and this is one place where measurements may not coincide with listener preferences. The references show measurements that indicate lower levels of speaker intermodulation distortion, but that doesn't actually mean that the speaker sounds better. Many of the measurements described seem to have been taken at (IMO) unrealistically low power levels, so correlation with listening tests (using music) may not be as great as hoped for.
Much of the info here is similar to that shown in Project 56, and some parts are duplicated (deliberately). I have added more details so the info presented is easier to use, and it is intended to be a starting point for experimentation. The circuits shown will all work with 'real' amplifiers, but great care and considerable testing are needed to ensure that the results you actually obtain are providing a real benefit. Be very careful if you use IC power amps (LM3886 or TDA7293 for example). Most are designed to run at a particular minimum gain, and they may oscillate if the gain is reduced below the minimum recommended due to the current feedback. This is especially dangerous if the load impedance falls at high frequencies.
There have been many claims over the years that current drive is the best (and some may claim the only) way to drive loudspeakers, as it reduces distortion and allows the speaker to work the "way it was intended". While there is some discussion of this on the Net (see [ 2 ] as an example), there is little real evidence that the benefits are anywhere near as great as claimed. Tests I've run have shown little improvement, and this is expected given that loudspeaker systems and the drivers used therein are designed specifically with the understanding that they will be driven with a voltage amplifier. By definition, that means the output impedance is low, always below 0.2Ω, and often much less.
A claim that you may see is that current drive eliminates power compression in loudspeaker drivers, because the change of voicecoil resistance doesn't affect the amplifier current. While this is perfectly true, in reality as the voicecoil heats you may actually get more power with pure current drive, thus pretty much guaranteeing that the driver will be destroyed without human intervention. This can be mitigated by using modified impedance, but why? The reduced power delivered to speakers when they get hot is often the only thing that saves them from destruction, and current drive ceases as soon as the amplifier clips anyway.
Naturally, there are a great many outrageous and/ or poorly thought through claims made by the ever-present audio nut-cases - 'new' and 'revolutionary' are but two of the silly terms used to describe what they think they have found. Well, sorry chaps, it was actually never lost, it's anything but new, and isn't even a little bit revolutionary. Discoveries in this area are pretty much old-hat now, because so many people have played with current drive for so long.
Many full-range loudspeakers are likely to sound better with current drive (extended bass and treble in particular), but cabinet size, internal damping and (more than likely) parallel filters have to be optimised to account for the loss of amplifier damping and to minimise peaks and/ or excessive high frequency output. Using mixed mode amplifiers can allow a speaker to work at its best in a larger than optimal enclosure, because the use of a defined source impedance affects the Thiele-Small parameters.
It is also possible to adapt a bridged amplifier to use current drive, but there are some interesting obstacles to overcome. This will not be covered here unless there is overwhelming interest. In particular, the problem of ensuring high gain with good frequency response remains, and maintaining stability at the lowest gain (coincident with the lowest impedance of the speaker driver) is difficult to achieve, especially for an amplifier that's expected to cover the full audio range. This becomes even harder if the output impedance is more than 100Ω.
I've been using current drive in various forms since the early 1970s, with typical output impedances (at low frequencies) of up to 200Ω. Over the years many people have heard what they initially thought were huge improvements in the sound of individual drivers and/or complete systems. In reality, only some effects were ultimately found to be useful, and almost identical results can often be achieved with fairly basic equalisation. This doesn't negate the process though, although there are some who think that current drive is worthy of taking out a silly patent on a process that has been well known to a great many people for a very long time.
For myself, I still like playing around with variable impedance. I have a 3-way active test amplifier with two channels that can be varied from -8 to +32Ω, and I use it regularly - it drives my workshop 3-way active sound system. It has been used in the past to test many, many drivers, enclosures and compression drivers + horns, and it remains a useful tool for testing, despite its age (it was built sometime in the 1980s!).
+ +Useful tool, major improvement in loudspeaker driver performance or just a fun thing to play with? I leave it to the reader to decide.
Elliott Sound Products | Current Detection and Measurement
There are countless requirements for monitoring current. The electricity meter in your fuse-box determines power from the voltage and current consumed. With reactive loads, the phase angle between voltage and current is used to ensure that the meter records power, and not volt-amps (VA). This also works with non-linear loads, such as the switchmode power supplies (SMPS) used for many home appliances, including high efficiency lighting (mostly LEDs these days), PCs, TVs and other similar devices.
Being able to monitor current is a requirement for a great many systems, and many SMPS circuits include a current monitoring function to protect the supply against overloads or short circuits. Once a fairly esoteric area, current monitoring is now mainstream, with a wide variety of different systems used, depending on the application. While it would be 'nice' to include every possibility, that's no longer practical, because there are so many.
The purpose of this article is to give an overview of techniques, some of which are intended for low frequencies (50-60Hz) with others designed to monitor the instantaneous current through a switching MOSFET at 50-500kHz. There are requirements to be able to monitor/ measure both AC and DC (including pulsed DC), with the distinction being that DC is unipolar (of one polarity) while AC is bipolar (positive and negative).
There are two classes of current detection. One (and the most common) is a linear monitor that provides an output directly proportional to the current. These are used for measurement, overload detection and electronic fuses. They are also used in ELCBs (earth leakage circuit breakers, aka GFCI [ground fault circuit interrupter] or 'safety switches'). These have a proportional output, but the circuitry only acts if the current exceeds a preset threshold for a preset time.
The second class is simply detection. These systems are used to detect that current is flowing, without being used for measurement. Some allow calibration, so only a current above the threshold provides an output. While less common for most electronic products, they are still useful. One common application is to detect that an appliance is drawing current, and turn on something else. An example is shown in Current Sensing Slave Power Switch, which can be used anywhere you need to switch on multiple devices when the 'master' device is turned on.
WARNING: Several circuits described here are directly connected to household mains voltages, and must be built with extreme care to ensure the safety of you and your loved ones. Do not experiment with anything that you do not understand perfectly, and can construct in a safe manner. All mains wiring must be segregated from low voltage wiring, and in many countries, mains wiring must be performed only by suitably qualified persons.
Whether you need simple on/ off detection or measurement depends on the circuit and its purpose. While a measurement system can be used with preset thresholds to provide a go/ no-go function, the converse is not true. Detectors are not designed to provide a linear output, and react only to current flow above a predetermined minimum. Once that's exceeded, the value of current is irrelevant. Regardless of the technique used, it's up to the designer to work out what's needed for the application.
Some of the earliest systems in industry used current transformers, which are AC only devices. Early DC measurements relied on a shunt resistor (a sometimes very low value), allowing the current to be displayed using a moving coil meter. Digital meters have now taken over, but with an analogue system it's often easier to see trends (current rising or falling). A shunt is calculated using Ohm's law, but because it's a series resistance it dissipates power. For example, if one uses a 'standard' 200mV digital display to measure up to 2A, there will be up to 200mV across the shunt. It will dissipate 400mW at maximum current (and the powered circuit gets 200mV less than the applied voltage). One can measure up to 2,000A just as easily, but the shunt will then dissipate 400W. Current shunts work equally well with AC and DC, but are mainly restricted to DC because there are better methods that can be used for AC.
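The shunt arithmetic above is just Ohm's law, and can be sketched in a few lines of Python (the function name is illustrative only, not from any library):

```python
def shunt_for_meter(full_scale_v, full_scale_a):
    """Size a shunt for a voltage-reading display: R = V / I (Ohm's law),
    with dissipation P = V * I at full-scale current."""
    return full_scale_v / full_scale_a, full_scale_v * full_scale_a

# 200 mV digital display, 2 A range: 0.1 ohm shunt, 400 mW at full scale
r_2a, p_2a = shunt_for_meter(0.2, 2.0)

# Same 200 mV display, 2,000 A range: 0.1 milliohm shunt, but 400 W dissipated
r_2ka, p_2ka = shunt_for_meter(0.2, 2000.0)
```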
The use of shunts is covered in some detail in the article Meters, Multipliers & Shunts. This mainly covers simple measurement systems, and other techniques aren't included. The material here looks at these other methods, many of which (at least in theory) don't dissipate power. As circuitry becomes more compact, eliminating as many heat sources as possible becomes very important, so the use of shunts isn't as common as it would have been without better monitoring methods. There are several specialised ICs available now that allow the use of very low-resistance shunts (e.g. 25mΩ) and amplify the small voltage produced (and in many cases shift the level as well). These are used in many products now, and make 'high-side' monitoring easier. High-side monitors measure the voltage across a shunt in the positive supply rail, but convert the output to a ground (or common voltage) reference.
AC is easier in most cases, because a current transformer (CT) can be used. These monitor the current flowing in a wire (or bus bar for high current installations), and produce an output current that's a direct representation of the load's current flow. The output from current transformers is current, not voltage. A 1:1,000 ratio CT outputs 1mA for each ampere flowing in the (usually single 'turn') primary. For industrial applications, a much higher output current is often used, with many of the older systems using a 5A output, suitable for 500A to 5,000A primary current.
Current transformers are designed specifically to work into a short circuit (or close to it). Smaller CTs use a 'burden' resistor in parallel with the output, to convert the output current to a voltage. For example, the 1:1,000 ratio CT described below will typically use a 100Ω burden, and will provide an output of 100mV/A. Improved performance can be obtained by reducing the burden - 10Ω provides an output of 10mV/A, which is better for high current measurements.
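The burden resistor sensitivity follows directly from the turns ratio, and can be checked with a quick sketch (Python; names are illustrative):

```python
def ct_sensitivity_v_per_a(ct_ratio, burden_ohms):
    """Output sensitivity (volts per ampere of primary current) of a CT
    working into a burden resistor.  ct_ratio is the secondary:primary
    turns ratio, e.g. 1000 for a 1:1,000 CT."""
    return burden_ohms / ct_ratio

# 1:1,000 CT: a 100 ohm burden gives 100 mV/A, a 10 ohm burden gives 10 mV/A
s_100 = ct_sensitivity_v_per_a(1000, 100)
s_10 = ct_sensitivity_v_per_a(1000, 10)
```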
Hall-effect devices are also quite common in this role, but they are generally more expensive than current transformers. Most Hall-effect devices can measure both AC and DC, and a good example is the Honeywell CSLA2CD, as described in Project 139, Mains Current Monitor. A current transformer is used in the simplified version (Project 139A, Simple Mains Current Monitor). I have both, but the P139A is used most of the time as it's far more compact and it doesn't need a power supply. However, it cannot measure DC.
There are several options for monitoring/ detecting current in an AC circuit. The first is a current transformer, which up until recently was the best option. A CT provides excellent isolation, and all mains wiring through the transformer can easily be made very safe. Small voltage transformers can be used in a similar manner, but they require a shunt resistor in parallel with the secondary (which is used as the primary in this role).
The second is a Hall-effect current monitor IC, such as an ACS-712 or similar. These are available as a small PCB designed to interface with an Arduino or similar. There are several different types, with some of the more advanced units being very accurate (and expensive). Many are quite noisy because of the very high amplification needed to bring the small Hall-effect voltage up to something usable.
Next is a diode string, with two or three diodes in series, in parallel with another equal string with reversed polarity (inverse parallel). This provides a comparatively constant output voltage regardless of the load current, so it's a detector and cannot be used for measurement. The voltage developed across the diodes can be used to activate an optocoupler or can be coupled with a small transformer. The transformer will be a mains type, typically used with the secondary across the diodes, and with the primary used for the output. This combination is a lot harder to insulate properly to prevent accidental contact.
Then there's a current shunt - a low resistance in series with the load. The voltage across the shunt is monitored, and is directly proportional to the current. A 'typical' shunt may be 0.1Ω (100mΩ) that will provide a voltage of 100mV at 1A. Unfortunately, the shunt dissipates power, and with 1A it dissipates 100mW, rising to 10W at 10A (I²R). Unlike the other techniques described, there is zero isolation, so all circuitry is at mains (or other supply) potential. This option is strongly discouraged for mains, but remains very common (and popular) for DC.
Each method has its pros and cons. With current transformers and Hall-effect devices, the main limitation is the minimum current that can be detected/ measured accurately and reliably. The practical minimum for Hall-effect detectors is about 50mA, below which it's difficult to get a usable signal to noise ratio. The parallel diode string dissipates possibly significant power (the CT and Hall-effect devices don't), and that limits the maximum current that can be passed, above which a heatsink becomes essential. It can only be used as a detector, and is non-linear. It does have one major advantage, in that it's quite easy to detect anything from a couple of milliamps up to 2-5A without difficulty. The output is strictly 'current flowing/ current not flowing' though - the output is not proportional to the mains current drawn.
A current transformer offers very high signal to noise, and can detect a lower current than expected if the burden resistor is omitted (or raised to a much higher value than normal). A 1:1,000 CT provides 1mA/A with the recommended 100Ω burden resistor (100mV/A). You might expect to get (say) 2.2V/A with a 2.2k burden resistor, but that won't happen because the core will saturate. For simple detection, we don't care if the core saturates, and we can clamp the output at around ±600mV with the base-emitter junction of a transistor and a small-signal diode. 50mA AC load current is easily detected with this arrangement. Saturation must be avoided for measurement.
Figure 2.1.1 - Current Transformer
The maximum current depends on the CT of course, and the CT needs to be selected appropriately. With a 5A (primary) CT and a 100Ω burden, you'll probably be able to measure up to 10A without losing much accuracy. To measure lower current, simply wind more turns through the centre of the CT. Ten turns increases the sensitivity by ten (not surprisingly), so the 5A CT is now rated for 500mA. With a 100Ω burden, the output is 1V/A, so you'll get 500mV with a 500mA input current. Two current transformers I've used are the AC-1005 and the ZMCT103C, and datasheets for both are available in the references shown below.
The CT is also by far the safest option, because all mains wiring is insulated, and the CT provides extremely good isolation between mains voltages and the rest of the circuit. The only change that's needed depends on the current drawn by your equipment. As a guide, the following table should help. IMin is the minimum current that can be reliably measured without complex circuitry (an output voltage of ~15mV at the minimum suggested current). For most applications (other than really high power), a 3-turn CT primary is probably a reasonable compromise.
| Load Max. VA | IAvg (230V) | IAvg (120V) | Primary Turns | IMin |
|---|---|---|---|---|
| 500 | 4.5 A | 9 A | 1 | 150 mA |
| 250 | 2.2 A | 4.4 A | 2 | 75 mA |
| 150 | 0.65 A | 1.25 A | 3 | 50 mA |
| 100 | 435 mA | 830 mA | 5 | 30 mA |
| 50 | 217 mA | 417 mA | 10 | 15 mA |
The above is a guide, and is based on acceptable dissipation within the CT's winding. For example, if you used 5 turns with a 10A continuous load, the output will be up to 50mA (1mA/A × 5 turns). This will result in a current transformer dissipation of 100mW, assuming a 40Ω winding. This is acceptable for the current transformer, but it may subject the following circuitry to higher current than is desirable.
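The turns and dissipation figures above can be verified with a short sketch (Python; names are illustrative, and the 40Ω winding resistance is the example value from the text):

```python
def ct_secondary_a(primary_a, primary_turns=1, ct_ratio=1000):
    """Secondary current of a CT; winding extra primary turns through
    the core multiplies the effective sensitivity."""
    return primary_a * primary_turns / ct_ratio

def winding_heat_w(secondary_a, winding_ohms):
    """I^2 * R dissipation in the CT's secondary winding."""
    return secondary_a ** 2 * winding_ohms

i_sec = ct_secondary_a(10, primary_turns=5)   # 50 mA, as in the text
p_w = winding_heat_w(i_sec, 40)               # 100 mW with a 40 ohm winding
```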
The inherent nonlinearity of an open secondary CT is our friend for detection, but not for measurement. While the theoretical peak current can reach 50mA as described above, I ran tests and verified that it can provide that easily. I tested an AC-1005 CT with 50A primary current (one turn), and it happily provided the full 50mA into a 10Ω burden with good linearity. Linearity can be improved further by using an opamp transimpedance (current-to-voltage) amplifier, and this is covered next. You need to be careful though, because the optimum feedback resistor to set the sensitivity may be too low for the opamp to drive satisfactorily. Most opamps cannot drive less than ~2k close to the supply rail voltages. You'd generally use the following circuit for very low current.
Figure 2.1.2 - Optimised Current Transformer Circuit
As noted above, a CT is happiest when its output is shorted, as that provides the best protection against core saturation. By using an opamp as a current-to-voltage converter (transimpedance amplifier), the coil 'sees' close to a short, and the opamp converts the input current into a voltage. As shown above (using a 1k feedback resistor) the output is 1V/A. This works with both high and low current, but if you wanted to measure (say) 10A RMS, that demands ±14.14V peak from the opamp, which will be unable to swing its output that far. The higher than normal output current to supply the feedback resistor and the following circuitry will overload the opamp, and there isn't enough supply voltage, even with ±15V. You ideally need an opamp that can drive low impedances, or you could use a buffered opamp as described in Project 113 Headphone Amplifier. With that, you can reduce R1 to 100Ω, allowing you to measure 10A RMS comfortably. The output voltage is 100mV/A with a 100Ω feedback resistor, so you'll get an output of 1V RMS with 10A RMS. You can scale this up or down as required. You can get very sensitive current measurements with just the opamp and using 10k for R1. That will provide 10V/A, so even measuring below 10mA is easy.
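The voltage-swing limitation is easy to check numerically. A minimal sketch, assuming an ideal opamp and a 1:1,000 CT (names are illustrative):

```python
from math import sqrt

def tia_output_v_rms(primary_a_rms, r_feedback, ct_ratio=1000):
    """Output of the opamp current-to-voltage converter:
    V_out = I_secondary * R_feedback."""
    return primary_a_rms / ct_ratio * r_feedback

def peak_v(v_rms):
    """Peak voltage of a sine wave of the given RMS value."""
    return v_rms * sqrt(2)

v_1k = tia_output_v_rms(10, 1000)   # 10 V RMS at 10 A with a 1k feedback resistor
pk = peak_v(v_1k)                   # ~14.1 V peak - beyond most opamps on +/-15 V
v_100 = tia_output_v_rms(10, 100)   # 1 V RMS with R1 = 100 ohms - comfortable
```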
The Hall-effect devices can be bought quite cheaply, pre-mounted on a small PCB with a terminal block for the input, and pins for the supply and output. The ACS712 is very common, but there are also other similar devices available. They require a 5V supply, and the output must be amplified before it's useful. The output of a 5A version is 185mV/A (nominal), so at around 50mA you only get about 9mV output. Unfortunately, the noise output is quoted as 21mV (2kHz bandwidth), so the wanted signal is buried in the noise. The PCB can be modified to get a lower noise level, and a filter capacitor of around 1µF is called for (100Hz bandwidth). This certainly helps, but noise is still a problem.
Figure 2.2.1 - Hall-Effect Current Detector
It's possible to use a tuned filter to separate the wanted signal (at 50/ 60Hz) from the noise, but that means more parts and far greater complexity overall. This is very hard to justify for something that should be simple. The cost of the PCB is roughly the same as that of a current transformer, and CTs are also available mounted on a PCB with an amplifier. There's not much point though, as the CT itself is all that's needed for most applications. Note that if you need to measure the current accurately, a tuned filter will give an erroneous reading, because it will only pass the fundamental (50 or 60Hz for mains), and harmonics will be discarded. This will change the reading, and it may not be possible to get good accuracy.
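As a rough sanity check of the ACS712 figures quoted earlier (a sketch; the square-root-of-bandwidth noise scaling assumes flat 'white' noise, which is only an approximation):

```python
from math import sqrt

def acs712_signal_v(i_a, sensitivity=0.185):
    """ACS712 (5 A version) output: 185 mV per ampere, nominal."""
    return i_a * sensitivity

def noise_after_filter_v(noise_v, bw_hz, new_bw_hz):
    """Crude estimate: white noise falls as the square root of the
    bandwidth reduction (an assumption, not a datasheet figure)."""
    return noise_v * sqrt(new_bw_hz / bw_hz)

sig = acs712_signal_v(0.05)                 # ~9 mV at 50 mA
n = noise_after_filter_v(0.021, 2000, 100)  # ~4.7 mV: the signal is still marginal
```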
All circuitry connected to Pins 1-4 in Fig. 2.2.1 must be protected from accidental contact, as it's at mains potential. The maximum peak current is (claimed to be) 100A for the 5A version, but it's likely that the PCB traces on some boards will not be able to handle that. Be careful with these, as the isolation voltage depends on the device itself and the PCB it's mounted on. It doesn't take much contamination to bridge the isolation gap. A better solution is a Hall sensor with a 'concentrator' - essentially a toroid with a small gap to contain the sensor itself. An example is the CSLA2CD as used in Project 139.
Figure 2.2.2 - CSLA2CD Hall-Effect Current Detector
There are two different types of Hall effect current sensors - open-loop and closed-loop. The ACS712 and CSLA2CD shown above are open-loop types, and the IC includes processing to compensate for temperature and linearity effects. While performance is quite good (at least at higher currents where noise isn't a problem), a closed-loop system is more accurate. These use feedback to cancel the flux in the core induced by the current flowing through the centre hole, and the internal circuitry is essentially a servo system, which maintains the net flux at zero.
One area where Hall effect sensors are useful is for the measurement of DC. A current transformer can only measure AC, whereas Hall effect devices can measure AC and DC, including AC superimposed on DC. This is a unique property that a CT cannot match, as they measure only the AC component. If DC is present, that will cause the core to saturate asymmetrically, ruining linearity and accuracy.
The output is derived from the output of the servo amplifier, which is (at least in theory) a perfect replica of the conductor current. These use a fairly high-power servo amplifier, which drives an auxiliary winding on the toroid. Pretty much by definition, the flux generated by the current in this coil is equal and opposite to the flux created by the (usually) single-turn 'primary'. Closed-loop systems avoid core saturation by maintaining net-zero flux, but they are relatively power-hungry and can have issues with stability. The latter is always an issue with servos. See Hobby Servos, ESCs And Tachometers for a discussion about how servos work. Most of the Hall effect sensors you can obtain for a sensible price are open-loop types. If you want to know more, see Closed Loop Hall Effect AC/DC Current Sensors (ChenYang Technologies GmbH & Co. KG).
Another class of current monitors uses a fluxgate magnetometer as the sensor. These are comparatively complex but very sensitive devices, and there's a lot of information available for anyone who's interested. Fluxgate (and closed-loop Hall) sensors won't be covered any further here, but there's plenty of information on-line (of course). Suffice to say that there are many different ways to monitor current based solely on the magnetic field produced when current (AC or DC) flows in a conductor, but many can best be described as esoteric, and aren't particularly useful for DIY projects.
Reverse-connected power transformers can be used for monitoring current. A small (typically around 2-5VA) transformer is connected with the secondary used as the primary, to step up the voltage developed across a low value resistor (up to 1Ω). A 230V primary, 6V secondary transformer in reverse will increase the voltage across R1 by a factor of 38.33 (at least in theory), but in reality it will be a bit less. A pot (or trimpot) is required to calibrate the output if it's used for measurement. The power dissipation of R1 must be verified, and remember that the output voltage can be very high indeed if the load being monitored draws high inrush current. Back-to-back zener diodes are suggested across the output, to protect any following circuitry against excessive voltage.
Figure 2.3.1 - Reverse Connected Transformer
With a load current of 1A, R1 will have 470mV across it and will dissipate 470mW. The transformer steps this up to ~17V (9V for a 120V transformer), and 17V is easily adjusted to (say) 10V RMS output to indicate 1A (10V/A). This won't work with 120V mains, so a 3V transformer would be preferred. The circuit is not ideal though, because most small transformers are designed for good isolation of the primary winding, and the insulation for the secondary will not be as robust. If there's a major fault, the transformer's frame could become live, so it needs to be enclosed. Ideally you'd use an encapsulated type, but these are usually PCB mounting and may not be suitable.
Even small transformers aren't inexpensive though, and you'll typically pay far more for one than for a true current transformer. This scheme is useful if you already have a small transformer to hand and don't want to buy something else. Linearity is likely to be quite good, and it's improved with a lower value for R1. There's no need to be too fussy about the value, because the output is adjustable. This is a technique I've used, but it's not optimum. The very high sensitivity can be useful, but it's not recommended for high current because the shunt resistor will dissipate possibly significant power. The voltage across the shunt should ideally not exceed one tenth of the rated secondary voltage (600mV for a 6V transformer). This is to prevent nonlinearity caused by transformer core saturation. Any voltage can be used, so a 9V transformer is perfectly alright, but it naturally has a lower step-up than a 6V version. A 9V, 230V transformer will increase the voltage by about 25 times.
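The step-up ratios and the one-tenth rule of thumb can be sketched as follows (Python; names are illustrative, and real gain will be somewhat lower than the ideal turns ratio):

```python
def reversed_tx_gain(primary_v, secondary_v):
    """Ideal step-up when a mains transformer is driven via its
    low-voltage winding (the real figure will be a bit lower)."""
    return primary_v / secondary_v

def max_shunt_v(secondary_v, fraction=0.1):
    """Rule of thumb from the text: keep the shunt voltage under one
    tenth of the low-voltage winding's rating to avoid saturation."""
    return secondary_v * fraction

g6 = reversed_tx_gain(230, 6)   # ~38.3 times for a 6 V winding
g9 = reversed_tx_gain(230, 9)   # ~25.6 times ('about 25') for a 9 V winding
v_lim = max_shunt_v(6)          # 600 mV maximum across the shunt (6 V winding)
```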
I tested a suitable candidate from my parts drawers. It has a 12.6V centre-tapped secondary, a 230/240V primary, and is rated for 150mA (1.89VA). The primary resistance is 1.07k, and 5Ω for the full secondary. With an input voltage of 105mV across a 0.22Ω resistor (a total resistance of 210mΩ) and 0.5A current, the output voltage was 1.49V RMS. There was no sign of saturation. However, you may find that there's a possibly significant phase shift when the transformer is used in voltage mode. This can be (mostly) eliminated by using it with a current output instead of voltage. See the previous section. Another problem with the reverse-connected transformer is that the winding resistance is much higher than a proper current transformer.
Do not be tempted to use one of the tiny 'audio' transformers you can get. These are available with a ratio of around 1.3k:8Ω (12:1), but they don't have insulation suitable for mains usage. These are extremely dangerous if there's 230 or 120V between primary and secondary, and if you were to use one, expect it to fail spectacularly, taking other circuitry (and possibly you, too) with it to the grave. Yes, I am being serious.
A current detector can be made using the voltage drop across power diodes, which triggers an optocoupler. Regardless of the load current, the diodes will each have a forward voltage of at least 550mV. The forward voltage is not a fixed value (0.65V is commonly [and often incorrectly] claimed to be 'standard'). A 100Ω resistor is used in parallel to ensure that the LED in the optocoupler is never 'floating', and it also reduces sensitivity a little. I tested the circuit thoroughly, and it will reliably detect as little as 1mA of mains current (without R1). The sensitivity can be varied by changing the value of R1, in parallel with the diodes. The default value is 100Ω, and lower values increase the detection threshold. Unfortunately, you can't use a trimpot because the power dissipation climbs rapidly at low values.
Figure 2.4.1 - Diode/ Optocoupler Current Detector
This scheme has a 'hidden' trap for the unwary, because LED current is always a concern. Optocouplers have what's referred to as a 'current transfer ratio' (CTR [ 1 ]), which is a measure of the transistor current vs. LED current. Most optocouplers (e.g. 4N28) have a fairly low CTR, which may be as little as 20%. That means that you need a fairly high LED current, which leads to gradual degradation of the LED. Achieving a sensible LED current isn't easy with a diode string, because the input voltage is low (between 1.65V and 2.1V with three diodes in series), and it's impossible to maintain the LED current at the optimal value (about 10mA) over the full operating current range.
If we are only interested in knowing whether current is flowing or not, the diode string and optocoupler might seem ideal, but power dissipation is a very real problem for high-powered appliances. If the detector is only used with a power amplifier (for example), the average dissipation will be fairly modest, but it's something that must be verified. Be aware that the diodes will dissipate power. If your monitored device draws 5A from the mains, each diode will dissipate about 3.5W. 5A is quite a lot, and for a 230V system that's over 1kVA (over 500VA at 120V). With high powered equipment you will need a heatsink for the diodes.
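A quick dissipation estimate for the diode string (a sketch; the 0.7V forward voltage is a round-number assumption, and P ≈ I × Vf matches the text's conservative figure - conduction on alternate half-cycles makes the true average somewhat lower):

```python
def diode_heat_w(load_a_rms, vf=0.7):
    """Rough per-diode dissipation, P ~= I * Vf.  Treat as a
    conservative (worst-case) figure for heatsink planning."""
    return load_a_rms * vf

p_diode = diode_heat_w(5.0)   # ~3.5 W per diode at 5 A: heatsink required
va_230 = 5.0 * 230            # 1,150 VA - 'over 1kVA' at 230 V
```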
Note that the diodes must be rated for a surge current that's greater than that produced by the load's inrush current (see Inrush Current Mitigation for details). The continuous current rating depends on the current draw of the equipment. I suggest 10A diodes (TO-220 package). All circuitry within the box in Fig. 2.4.1 must be protected from accidental contact, as it's all at mains potential!
The INA250 is one example of a dedicated IC current monitor. It has an optimised Kelvin (4-wire) layout for the shunt resistor internally, and is available in four different sensitivities - 200mV/A, 500mV/A, 800mV/A and 2V/A. The shunt resistor is 2mΩ, so power loss is minimal, even at maximum current. They can handle up to 10A, and are said to have better than 0.03% accuracy for the shunt and amplifier. They are bi-directional, and are often used as part of a battery management system (BMS) to monitor charge and discharge current. The shunt is independent of the supply voltage, and it can be at any voltage between 0-36V (the latter being the maximum rated voltage).
Figure 2.5.1 - Dedicated IC Current Monitor (INA250)
This is one of many - similar devices are made by several manufacturers, with many using an external shunt. An example of the latter is the NCS199 series from OnSemi, available with a gain of x50, x100 and x200. While this reduces the package size, it also means that the circuit designer must be very careful with the tracks going to and from the shunt resistor. A seemingly small PCB track routing mistake can lead to a very large output error. Most device datasheets have clear guidelines for track routing to obtain high accuracy. Low value shunt resistors (e.g. 20mΩ) are more critical than higher values (e.g. 100mΩ).
While it might look like you could use one of these ICs for AC, that won't work. The current-carrying supply voltage must be within the range of 0-36V, so you'd need an 18V DC offset, and the voltage cannot exceed 36V peak-peak (12.7V RMS). It can be done, but there would be no point, and the final circuit would be much more complex than necessary.
All of the techniques described above are suitable for on/ off monitoring or measurement, other than the diode + optocoupler. That makes it the least usable method, as it can only detect that current is flowing, but not how much. The others are linear within their operating range, so can be used to measure the current quite accurately. For AC, the current transformer is a very hard act to follow, and while Hall-effect devices such as the ACS712 will work (and have a fairly wide bandwidth), they are also noisy, making low-current measurements difficult.
A reverse-connected mains transformer is handy if you have one lying around, as it saves you having to purchase a current transformer, but they have limited bandwidth and may not show a complex waveform accurately. Their electrical safety is also of some concern, because the secondary (used as the primary) will rarely have very good insulation from the core and frame. The extra hassle of having to use a shunt resistor adds to its woes, but it's still a good option for measuring very low current. In theory, the transformer can be used as a 'true' current transformer, utilising the current from the primary rather than the voltage.
For DC, you have fewer choices. A resistive shunt is the standard method, which has been used almost forever. With a high-gain opamp, it's possible to get high sensitivity with low shunt resistance, but many other factors start to influence the design, such as opamp input voltage/ current offset, the requirement for a true Kelvin 4-wire shunt connection, and even the Seebeck (thermocouple) effect caused by dissimilar metals can affect the reading. Hall-effect current monitors are available for DC applications, but noise remains a problem at low current.
Most of the available ICs have a limited maximum voltage, typically from around 20V up to 40V or so. Because they have high gain, they are also somewhat noisy, and like all semiconductors have a limited upper frequency, which always falls with higher gain. The maximum allowable voltage can be restrictive, something that is not an issue with Hall effect ICs (these are fully isolated). Something to be aware of with Hall effect ICs is that any nearby magnets will cause errors. The Hall sensor can't differentiate between the magnetic field in the conductor, from a magnet or even the Earth's magnetic field (the latter is not considered to be an issue).
You must be aware of the claimed isolation voltage. Just because an IC claims 2,100V isolation, that doesn't mean that you can have that voltage between the sensing and output circuits. The Allegro ACS712 is rated for 354V (DC or peak AC) for equipment using basic insulation (earthed appliances), but only 184V (DC or AC peak) if used in double-insulated products. Failure to observe the allowable maxima can result in product failure, electric shock or even death, and it's not to be taken lightly.
Also, you need to know that there are two distinctly different Hall effect current measurement systems; open-loop and closed-loop. Open-loop designs rely on the Hall sensor being linear over its operating range, something that manufacturers have managed to do very successfully. A closed-loop system uses feedback to counteract the instantaneous flux in a 'concentrator' - typically a soft magnetic (usually circular) core with a slot cut out for the Hall sensor. These don't require the Hall sensor to be particularly linear, as the net output of the sensor is zero. The amount of voltage needed to counteract the load induced magnetic field becomes the output.
The current monitor/ detector circuit you select depends on the application. In some cases (e.g. test and measurement) you need high accuracy, low drift with time and temperature, and a range suitable for the task. Using a 50A current monitor to look at a few milliamps would be unwise, because you may not even be able to separate the wanted signal from the device noise. DC applications are usually comparatively easy, because you can use a shunt resistor selected for the expected current. For example, if you need to measure from 0-100mA, you can use a 1Ω shunt, and you only 'lose' 100mV across the shunt at maximum current. This is of no consequence for a 50V supply, but it's significant if the voltage is only 3.3V. A point I've made in several articles is that electronics design is all about choosing the 'ideal compromise'.
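As a worked example of that compromise (a sketch; the 1Ω value is chosen because it gives a 100mV drop at a 100mA full scale):

```python
def shunt_drop_v(shunt_ohms, i_a):
    """Voltage 'lost' across the shunt at a given current."""
    return shunt_ohms * i_a

def loss_fraction(shunt_ohms, i_a, supply_v):
    """Fraction of the supply voltage lost across the shunt."""
    return shunt_drop_v(shunt_ohms, i_a) / supply_v

drop = shunt_drop_v(1.0, 0.1)            # 100 mV at 100 mA full scale
f_50v = loss_fraction(1.0, 0.1, 50.0)    # 0.2% of a 50 V supply - negligible
f_3v3 = loss_fraction(1.0, 0.1, 3.3)     # ~3% of a 3.3 V supply - significant
```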
You have to decide which things are important, and which other things are less so. If a supply voltage is unregulated, then you know that the voltage will vary over a fairly wide range as the mains voltage can change by at least ±10%, and sometimes more. The voltage also varies with load current, so aiming for a very small voltage loss across a shunt isn't sensible. With a regulated supply, you can sense the current before the regulator, so the output voltage isn't affected. Even if there's ripple voltage present, with most regulators the current remains almost identical to that in the load. You may have to make changes to the regulator design to ensure that its quiescent current isn't included in the measurement. This is dependent on the design used and your expectations for accuracy.
There's no point achieving (say) 1% accuracy if the current meter used can't resolve a 1mA current change for a 100mA output. The same applies at any current, and most of the time you'll only be interested in general trends rather than exact values. This is especially true when testing audio circuitry, because there's always a current range provided for opamps and IC power amps. Exact values aren't needed, and small errors are of little consequence.
AC imposes some additional challenges. If you're making a wattmeter, phase shift between voltage and current is very important. A wattmeter has to be able to compute power based on the voltage, current, relative phase and waveform distortion. It doesn't matter if the processing is digital (e.g. Project 172) or analogue (using an AD633 analogue multiplier) as described in Project 189. Both use a current transformer, and these have almost no phase shift when terminated with the recommended burden resistor. If used with an opamp transconductance amplifier (Fig. 2.1.2), phase shift is reduced to (close to) zero.
If you were to attempt the same thing with the Fig. 2.3.1 reverse-connected transformer you'd almost certainly be very disappointed, as there is considerable phase shift (about 16° as simulated) when the output is used in voltage mode. Using current mode reduces the phase shift to 6°, but it's still there. This will cause the wattmeter to read low with most loads. So, while a transformer works as a current transducer, it has limited use if phase shift is important. Be warned that if you include a filter to remove or reduce noise, this will also cause a phase shift. The phase shift is greatest when the low-pass filter's cutoff frequency is close to the mains frequency.
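The reading error caused by channel phase shift is easy to quantify. For sinusoidal voltage and current, true power is proportional to cos(φ) and measured power to cos(φ + δ), where δ is the current channel's phase error. A quick sketch using the 16° and 6° figures from the simulation above (the function name is my own):

```python
# Sketch: percent error in indicated power caused by a fixed phase error
# in the current channel of a wattmeter (sinusoidal V and I assumed).
import math

def reading_error_pct(load_phase_deg, channel_error_deg):
    """True P ~ cos(phi); measured P ~ cos(phi + delta)."""
    phi = math.radians(load_phase_deg)
    delta = math.radians(channel_error_deg)
    return 100.0 * (math.cos(phi + delta) - math.cos(phi)) / math.cos(phi)

# Resistive load (phi = 0): 16 deg -> reads ~3.9% low, 6 deg -> ~0.55% low
for delta in (16.0, 6.0):
    print(f"{delta:>4.0f} deg channel error -> "
          f"{reading_error_pct(0.0, delta):+.2f}% reading error")
```

The error grows rapidly for reactive loads (larger φ), which is why the text notes the meter reads low "with most loads".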
Many switchmode power supplies (SMPS) use current sensors to detect a fault condition. The range of techniques include shunts and CTs, and while Hall-effect sensors are also usable I've not seen a circuit that employs them. The detection is always instantaneous, so if the current passes a preset maximum the circuit shuts down. Most then go into a 'hiccup' mode, and will attempt to re-start at intervals determined by the design. If the current is 'normal' the supply will operate, otherwise it will retry until disconnected from its input supply (mains or DC).
In the early days of transistor power amplifiers, some used a current trip that shut down the supply if the maximum was exceeded. Some of these presumably worked quite well, while others were a disaster. Mostly, the voltage across a low-value resistor was monitored, and if it reached a preset value it would trigger an SCR that shut down the power supply. Many early amps used a (crude) regulated supply, and turning it off this way was easy to do.
The heart of a safety switch, aka RCD, ELCB or GFCI (aka GFI) is a current transformer. The basic principle is no different from any other CT, except that both 'primary' conductors pass through the centre (the 'live' and 'return' conductors). When everything is functioning as it should, the CT's output is zero, because the magnetic fields around each conductor oppose and cancel each other. Should a path to earth (ground) present itself (such as a fault or a person contacting the live cable), the core is unbalanced, and an output is produced. A neutral to earth/ ground fault can also be detected, because any current that bypasses the sense coil causes an imbalance. Most are designed to trip with an imbalance of 30mA or less, and a fault will register within one half-cycle of the mains waveform.
Figure 5.1 - Earth Leakage Circuit Breaker (Conceptualised)/ 'Conventional' Coil Representation
These were once known as 'core balance relays', because the core's flux is balanced (to zero) by equal and opposite current flow in the two conductors. The test switch deliberately unbalances the circuit with the designated fault current. This is typically 30mA, although some are more sensitive. Note that the trip coil releases the contacts, and once activated it requires a manual reset. RCDs are an example of a non-linear current monitor. The only thing of interest is whether the 'residual' (leakage) current is greater than the threshold. If it is, the trip coil is operated and the circuit is de-energised.
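The core-balance principle described above reduces to a comparison on the difference between the two conductor currents. A minimal sketch (the threshold and names are illustrative, not from any real RCD design):

```python
# Sketch of the core-balance principle: the CT only sees the *difference*
# between live and return currents, so the trip decision is on that
# residual alone, regardless of load current.

TRIP_THRESHOLD_A = 0.030  # 30 mA, the common residual trip rating

def residual_current(i_live_amps, i_return_amps):
    # With no earth-leakage path the two cancel exactly: CT output is zero
    return abs(i_live_amps - i_return_amps)

def should_trip(i_live_amps, i_return_amps):
    return residual_current(i_live_amps, i_return_amps) >= TRIP_THRESHOLD_A

print(should_trip(10.000, 10.000))  # balanced 10 A load -> False
print(should_trip(10.031, 10.000))  # 31 mA leaking to earth -> True
print(should_trip(0.500, 0.467))    # load current is immaterial -> True
```

This is why the RCD is a non-linear monitor: only the threshold comparison matters, not the residual's exact value.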
The construction of the CT coil varies, and while some use a toroidal core, others don't. US 'GFCI' breakers generally have a second toroid to sense a grounded neutral in the protected circuits. It uses the same principle as the main coil. Many GFCI breakers for the US/ Canada market appear to be based on the Fairchild (now OnSemi) RV4141 IC, which has the power supply, detection, delay and trip circuitry built in, but it requires an external SCR (silicon controlled rectifier, aka thyristor) to operate the trip coil. The application circuit shown is adapted from the datasheet.
Figure 5.2 - OnSemi Application Circuit For RV4141 GFCI Breaker
The style used in the US is different from that used in Australia, the UK, Europe, etc. The term 'GFCI' is not used outside the US/ Canada (120V, 60Hz mains), but try as I might I was unable to find a representative schematic for an RCD. Most sources simply show a block diagram similar to that in Fig. 5.1, with no details of the circuitry. I did find a datasheet for one IC, but it doesn't appear to be common. Many of the first RCDs made available were electromechanical, with no electronics. The sense winding acted on the trip coil directly, without amplification. The mechanical parts (particularly the latching mechanism) are usually precision mouldings, to ensure that the minimal current available will trip the breaker reliably.
Not all safety switches use electronics. As unlikely as it may seem, a sensitive trip coil can be activated directly by the output of the 'core balanced' current transformer. I recently purchased a pair, and upon testing it was easy to determine that they are purely electromechanical. With an 'unbalancing' resistor of 2.2k, an input voltage of 55V AC was enough to trip the latching mechanism, with an audible 'buzz' at a slightly lower voltage before it tripped. The unbalance current was 25mA at 55V, well within the 30mA requirement. Despite the advantages obtained with an electronic circuit, they still need a fairly sensitive latch, and more parts means more to go wrong. Total reliability is expected from a safety switch, and an electromechanical system with no electronics has almost nothing to fail.
It's surprisingly easy to use a CT (I used an AC-1005) to detect a small fault current. With 27mA of 'fault' current I obtained an output of ~80mV RMS (secondary open circuit) in a bench test, and the load current is immaterial. It doesn't matter if it's 500mA or 10A, only the imbalance is detected. It's a relatively simple task to amplify that signal and use it to trip the breaker contacts. Building the mechanical parts would be a real challenge, and if you value your life I suggest that you buy an RCD from a reputable manufacturer, with all functions verified by a test lab. You can build one of course, but it has to be fail-safe. This isn't easy to achieve, and if someone were killed or injured because it failed to operate, you will almost certainly be held liable.
Note: Because of the serious risk to the health and safety of readers, I will never publish a construction circuit for a safety switch. There is a design that's been on-line for a while, but it's seriously flawed. Activation causes a pair of relays to interrupt the mains (mains load current flows via normally closed contacts), so if any part of the circuit fails, you're not protected at all. It wouldn't be so bad if the relays had to be engaged to enable current flow, but the person who designed it didn't think of that. Even if built exactly as designed, it would have a continuous current drain of about 50mA. The capacitor used for the transformerless power supply is an ordinary 400V DC type, not a Class-X type which is designed for mains voltages. Most constructors would be unaware of these serious errors. If you do come across it, stay well away!
Being able to monitor current has always been a requirement for electrical and electronic devices. One fairly common application is an electronic fuse ('e-fuse' - see Electronic Fuses), and of course electrical safety switches. In many cases, 'protection' is provided only by means of one or more fuses. These are still essential, because any electronic circuit can fail (including current monitors or e-fuses), and the fuse or circuit breaker is the last line of defence. In most cases, the required precision depends on the application. Some circuitry will require very accurate measurement of the current, others less so. Some don't require anything more than a preset threshold - this may (or may not) require an accurate trip-point, depending on the application.
The range of different techniques is fairly broad, with some methods more or less suited to a particular application than others. For AC, the humble current transformer remains one of the most versatile components available. Low noise and high sensitivity are easy to obtain, and they remain my preferred technique for monitoring or measuring. You can make anything from a wattmeter to an RCD using an off-the-shelf current transformer.
There's no direct equivalent to a current transformer for DC, because there's no variation in flux density, so no current is produced in the winding. Current shunts and Hall effect devices are the only choice, and these also work with AC. Although they offer good flexibility, shunts cause a power loss which is dissipated as heat in the shunt, and it reduces the voltage available to the powered circuitry. The value has to be selected carefully to obtain the required accuracy along with a low power loss, and the two can be conflicting if you want to use simple circuitry.
Hall effect devices are a good choice too, as they have almost no power loss, but have a limited dynamic range due to noise. Closed-loop designs are significantly better than open-loop for noise, but they are larger and more expensive. The final sensor choice depends on the system requirements, allowable space and budget, along with ease of calibration and overall functionality. There is no simple answer, other than "It depends ...".
Elliott Sound Products - Dangerous Plug-Packs
When you go online to find a plug-pack ('wall-wart') power supply (aka PSU - power supply unit), whether to power a project or to charge a phone, it's not at all unreasonable to expect that it meets regulatory requirements for electrical products. In Australia, we have a list of 'prescribed' or 'declared' products; these require mandatory type testing, and must carry an 'RCM' (regulatory compliance mark, formerly an 'A-Tick' or 'C-Tick') that indicates that the tests have been performed. The model number is listed with the ACMA (Australian Communications and Media Authority), and it's a requirement that declared products are registered with the ACMA. This requires the supplier to have a valid equipment test report, a 'Statement of Compliance Form' or Declaration of Conformity (DoC), and to maintain a compliance folder. Finally, the supplier labels the product with the RCM logo, after applying to the ACMA for permission to use it. The scheme is also overseen by the Electrical Equipment Safety System (EESS). See Regulatory Compliance Mark. I quote from the web page ...
For electrical safety, in-scope electrical equipment must not be sold unless the item is marked with the RCM in compliance with AS/NZS 4417.1 & 2 and the EESS.
'In scope' simply means any prescribed or declared product. Amongst these are external power supplies and battery chargers. An 'external power supply' means any power supply that connects to mains voltage (either with a mains lead [detachable or attached] or that plugs directly into a wall outlet).
RFI/ EMI may often be the least of one's concerns though, as many of the cheap supplies you can get are dangerous. There was a case in Sydney a few years ago when a young woman was killed by a fake 'name brand' phone charger, and I've seen quite a few that wouldn't pass even the most cursory examination, let alone a full lab test. One practice that's common in cheap 'knock-off' SMPS (switchmode power supplies) is the use of an 'ordinary' 1kV ceramic capacitor, where regulations worldwide call for a Class-Y component, with full certification and marked with multiple standards. Needless to say, Class-Y caps are more expensive, but they are specifically designed to ensure that a short-circuit failure is as close to impossible as you can get.
Many countries require that certain classes of electrical equipment must have been laboratory tested to ensure compliance with applicable directives or other laws or statutes. Unfortunately, there's very little co-operation between 'trading zones', so a single product may have to undergo several bouts of testing to meet the requirements of all countries where it's to be sold. This is a significant financial burden, and it's usually impractical for small-scale manufacturers. This is often 'circumvented' by allowing individuals from anywhere to purchase the equipment from its country of origin, and the purchaser then becomes the importer. If it's for personal use there's no real problem if it's safe and doesn't interfere with other equipment, but if it's sold to someone else (and this often includes hire, lease and even gifts) then the rules apply. Should a 'personally imported' product cause injury or death, the importer (which may be you) risks becoming liable.
Regulatory Compliance Mark (RCM) - Australia/ New Zealand
While you will almost always get a compliant (and therefore as safe as can be expected) product from reputable suppliers, the same cannot be said for products obtained from 'flea markets', eBay, Amazon, etc. Some may be perfectly alright, but others are either seriously dangerous, or can be expected to cause radio frequency interference (RFI) aka electromagnetic interference (EMI) that may interfere with the operation of other equipment (including Wi-Fi, Bluetooth, AM/ FM radio or TV reception). Just because a cheap phone charger doesn't try to kill you the first time it's used does not mean that it's safe. Some faults (including potentially lethal ones) may not manifest themselves for some time. The problem is that no-one knows when (or how) the device will become dangerous.
You'll often see power supplies (and other goods) that appear to have the CE (from the French 'Conformité Européenne') mark, which certifies that a product has met EU health, safety, and environmental requirements. However, the presence of the CE logo does not indicate that the product has been tested to ensure consumer safety. Manufacturers in the European Union (EU) and abroad must meet CE marking requirements where applicable in order to sell their products in Europe. To add to the confusion, there's a remarkably similar 'CE' logo which supposedly indicates 'China Export'. It's doubtful that this is a coincidence - it's just a trick to fool consumers. However, note that the CE logo is not necessarily an indicator for electrical (or other) safety, and other EU (European Union) rules may also apply - in particular the 'Low Voltage' and EMC (electromagnetic compatibility) Directives.
Real 'CE' (Left) and Fake 'China Export' (Right) CE Logos
The 'real' CE logo is made from letters formed within overlapping circles, and the spacing is as shown (the 'construction' lines in grey are not part of the logo). The 'China Export' logo is almost identical, but the letters are more closely spaced. Very few people would realise the implications, and would assume that the product meets European standards. If the logo is 'China Export', it is completely meaningless, but most people wouldn't notice the difference. It's safe to assume that was the intention from the outset.
NOTE: For a good explanation of all compliance marks, see Power Supply Safety Standards, Agencies, And Marks (CUI Inc.)
In Australia/ New Zealand, the US, Europe and almost certainly other countries, there's a requirement that external power supplies (plug-pack or 'brick' types, including battery chargers) must comply with the standards for no-load power consumption. In Australia/ New Zealand the scheme is called MEPS (minimum energy performance standards) although the acronym will be different elsewhere. There's an assumption (which IMO is badly flawed) that these power supplies will be powered on permanently (see The Humble Wall Transformer is the Latest Target for Legislators). Regardless of the reality, external PSUs must meet the performance standards that apply in any country with a similar scheme. A good example is shown at Efficiency Standards for External Power Supplies (CUI). As always, the full data is only available if you purchase the relevant standards documents, at considerable expense!
Goods intended for sale in the EU must also comply with RoHS (restriction of hazardous substances), meaning that only lead-free solder can be used. They must also comply with the LVD (low voltage directive) and EMC (electromagnetic compatibility) directive. These rules may also be applied elsewhere, but it's generally fairly haphazard outside of the EU. Like many people, I really dislike lead-free solder, but if that's required then the item should be marked as such. I found a rather amusing ad on an on-line site for a plug-pack supply, and the dopey seller included a photo of the PCB, showing that it had zero parts for EMI suppression. This ensures that it would fail any proper test procedure, because it will generate high levels of RF interference. At least anyone who knows about switchmode supplies would recognise its failings without having to buy one.
Figure 1.4 - PCB Image Shown On Seller's Listing
I freely admit that the image is basically 'stolen', something I would normally never do. However, this illustrates the issues discussed here and I considered it 'fair game'. Overall, the PCB looks rather like that shown in Fig. 4.3, and while it has provision for an output filter inductor, it's not fitted. The remnants of dashed red lines are where the seller pointed out 'special' features, such as "Large Capacitor" and "Third Generation Smart IC Chip", to which I would reply "Bollocks!". The fuse is referenced as (and I kid you not) "Circuit Safe Running Safety Tube". The transformer is claimed to be 'high quality', but I fear that's impossible to verify from a photo. Interestingly, the same seller shows the PCB for a 12V, 2A version which has proper EMI filtering and is a much better proposition, although there's no indication that it meets any applicable standards for Australia or anywhere else.
I don't propose to even try to cover the rules that apply in each country or trading zone, as there are far too many. Australia/ New Zealand use AS/NZS standards, the US/ Canada have UL and FCC requirements, Germany has VDE, Great Britain has British Standards (BS), etc. For a reasonable overview, see Electrical Safety Standards (Wikipedia). In almost every case, the requirements of any of the standards documents are not freely available, and the relevant documents have to be purchased at considerable cost. I consider this to be an abomination - all of us should have access to information that would help us to decide whether or not to return a device should it be found non-compliant, and DIY people should be able to access information needed to ensure that their project is compliant with applicable regulations.
With the almost universal adoption of switchmode power supplies for plug-packs/ wall transformers, the requirements for electrical safety are more important than ever before. Older types used a conventional mains frequency transformer, and these were pretty close to being intrinsically safe. Nothing can ever be 100% safe under all circumstances of course, but the difference between a mains transformer and a SMPS is chalk and cheese. When (almost always Chinese) manufacturers decide to cut corners to the extent they often do for cheap 'after-market' products, important safety (and other) requirements are bypassed, leaving the customer with a product that is dangerous.
Two of the mandatory requirements worldwide are electrical safety and radio frequency interference (RFI) and/ or electromagnetic compatibility (EMC). Products must have suitable circuitry to minimise risk and interference, but it's common to see these omitted in low-cost (and usually unmarked) products. In most countries, it's an offence to sell non-compliant products. I've seen several plug-pack supplies that have no interference blocking, generally a common-mode inductor plus one or more X-Class (mains rated, usually X3) capacitors. A further requirement is almost always a capacitor from the mains (hot) side of the PSU to the DC output, and if this isn't included few SMPS would ever pass 'radiated emissions' tests (a measure of how much RF interference is radiated into nearby equipment). This must be a Y1-Class capacitor. The IEC 60384-14 safety capacitor subclasses are ...
Class-X                               | Class-Y
                                      | Y4 - < 150VAC
X3 - peak voltage pulse of ≤ 1.2kV    | Y3 - 150VAC up to 250VAC
X2 - peak voltage pulse of ≤ 2.5kV    | Y2 - 150VAC up to 300VAC
X1 - between 2.5kV and ≤ 4.0kV        | Y1 - ≤ 500VAC
The only capacitor that may be connected between mains voltage and an isolated ('safe') output is a Class-Y1, and a standard 1kV ceramic cap does not comply. Most standard ceramic caps have a 5mm (0.2" or 5.08mm to be exact) pin spacing, where a Y1 cap will use 10mm spacing. This provides sufficient creepage distance across the PCB surface to withstand mains voltage, where a 5mm spaced cap only allows about 3mm once the PCB pads are included. The minimum is generally 5mm, but in most cases reputable manufacturers will allow at least 7mm creepage.
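The pin-spacing arithmetic above can be checked trivially; the ~2mm pad diameter is my assumption (the text only states the resulting ~3mm figure for 5mm-spaced parts).

```python
# Sketch of the pad-spacing arithmetic: creepage across the PCB is measured
# between the nearest pad edges, not the pin centres. Pad diameter assumed.

def creepage_mm(pin_spacing_mm, pad_diameter_mm):
    """One pad radius is lost at each end of the span."""
    return pin_spacing_mm - pad_diameter_mm

PAD_DIA = 2.0  # assumed ~2 mm pads, typical for a leaded capacitor

print(f"5.08 mm pins: {creepage_mm(5.08, PAD_DIA):.1f} mm creepage")  # ~3 mm
print(f"10 mm pins:   {creepage_mm(10.0, PAD_DIA):.1f} mm creepage")  # ~8 mm
```

Even with generous pads, the 10mm spacing of a genuine Y1 part leaves a comfortable margin over the ~5mm minimum, where a 5mm-spaced ceramic cannot.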
Clearance and creepage are two terms that most people do not understand. This is not surprising, because although they sound self-explanatory, the explanations themselves don't mean anything without context. Clearance is the distance, through air, separating hazardous voltage from phase to neutral, earth or any other voltage. The minimum value is typically 5mm, but there is wide variation depending on pollution categories (not normally applicable inside sealed equipment) and voltage. Using the minimum figure is not sensible for hobbyists, and it's preferable to ensure that the separation is as great as possible.
Creepage is the distance across the surface of insulating material, including printed circuit boards, plastic terminal blocks, or any other material used to separate hazardous voltages from phase to neutral, earth, or any other voltage. Again, 5mm is generally considered 'safe', but that depends on the material itself, pollution categories (again) and the voltage(s) involved. Note that the creepage distance is from the closest edges of PCB copper pads or tracks, and not the pins of the connector or other device. The following drawing shows the difference between creepage and clearance.
Creepage And Clearance Distance Measurement
In the above, creepage is shown between two transformer windings (only the layer adjacent to the primary/ secondary insulation is shown). The second drawing shows creepage across the PCB and clearance between the wire 'cups' on a barrier type terminal block. Creepage exists on both sides of the board. Where pollution is expected, partially conductive 'stuff' may bridge the creepage distance, possibly allowing sufficient current to cause fire or injury (including death) to the user. Be aware that burnt materials (such as PCB resins) can become carbonised (and therefore conductive) if heated beyond their rated maximum temperature. The lower drawing shows two Class-Y1 capacitors: one wired conventionally, and the other with a slot in the PCB to increase the creepage distance, so that clearance becomes dominant.
The above is adapted from Electrical Safety - Requirements And Standards on the ESP website. While that article covers similar material to that shown here, this one is specific to plug-pack power supplies. It was prompted when I stripped down a small PSU I obtained (knowing in advance that it was almost certainly non-compliant). It cost me less than AU$2.00 including post, so the seller made a loss on the transaction.
A potentially useful test you can perform (if you have the equipment) is a 'Megger' test. The Megger™ and similar insulation testers operate at either 500V or 1kV, and test insulation resistance. Mine isn't the original, but it tests at 1kV up to 2,000MΩ. If anything breaks down during the test (which is pretty severe) then the item cannot be used. 'Proper' lab tests usually include (literally) testing to destruction, with the required tests described in the applicable standards documents (which of course you cannot obtain without paying through the nose for them).
Unfortunately for everyone, the vast majority of these supplies are designed to be non-serviceable, and you can't see what's inside without breaking the case apart. Otherwise, a visual check could be done easily enough, so you can look at the creepage distances provided, check that the Class-Y1 cap is the 'real thing', and verify that the EMI filters are included. To save everyone at least some pain, I've included photos below that show both compliant and non-compliant PSUs. Some may appear alright, but still not have the appropriate safety certifications, and if that's the case you use it at your own risk.
Be aware that the presence of compliance marks is no guarantee that the product has actually been tested. It's not at all uncommon for these markings/ logos to be faked, because it's just a logo that can be printed on the sticker, incorporated into the injection moulding die or laser etched into the plastic body. It has to be understood that counterfeit transistors, ICs and even electrolytic capacitors are a serious scourge on the electronics supply chain (and it even happens with aircraft and aerospace products - NASA has been caught out).
The techniques used by suppliers are sometimes very crude and easily detected, but others are so good that they are very difficult to detect without specialised equipment. With this in mind, what hope is there for the 'ordinary' consumer? With small 'disposable' electronics like plug-pack power supplies, there can be anything inside, and you'll never know unless it's dismantled. This is why I suggest buying only from reputable sellers. You will pay more, but at least you have some certainty that it won't try to kill you.
Sometimes you find 'stuff' that almost defies belief. I have a couple of SMPS units that were once part of something else. The PCB (non-compliant) was liberated from its original housing, re-wired and re-packaged into a 'new' wall transformer housing. I have two (never intended to be used as plug-packs), one of which completely wipes out the FM radio in my workshop. The sale of these in Australia is prohibited because they have no compliance markings (one had a UK plug!), but some eBay sellers are oblivious to the rules, or perhaps they just don't care if someone dies because of their dodgy products.
Figure 3.1 - Basic SMPS Schematic #1
The drawing is adapted from the CSC7203 datasheet. As near as I can tell, the IC manufacturer is in China, and as shown it has a reasonable chance of passing both electrical safety and EMI/ EMC tests for most countries. However, I have a (Chinese, no certifications) SMPS using this IC, and every part needed to make it compliant is missing. There's no input fuse (just a 4.7Ω 'flameproof' resistor), no X-Class caps or TVS (transient voltage suppressor) and no common-mode inductor. The cap marked 'CY1' was a 1kV ceramic (not Class-Y1) and is dangerous. The minimum creepage distance is less than 5mm, and is only 3mm where the 1kV ceramic was placed. The isolation of the transformer is unknown, but it did pass a 1kV test from my insulation tester.
However, I would use this PSU only in a Class-I device (using a safety earth as the secondary barrier against electric shock). It would never be used in anything other than a piece of test gear, that I alone would use. All supplies of this type are supposed to be constructed (and tested) to Class-II (double-insulated) standards. Of real concern is the fact that the transformer's winding window is only half-filled, which indicates that the wire is thinner than it should be, and there's likely to be insufficient insulation between primary and secondary. None of this inspires confidence.
The output regulation is via a zener diode (not a voltage reference as shown in the drawing), and secondary filtering consists of a single electrolytic capacitor. There's no inductor to ensure low levels of EMI on the DC output. The same applies to countless other (very similar) ICs, which are available from multiple manufacturers. They nearly always show the EMI and safety components in the 'typical application circuit', but the SMPS will function without them.
Just because a circuit works, that doesn't mean that it complies with mandatory EMC or safety regulations. This is particularly true if the minimum creepage (and/ or clearance) conditions aren't met. Any PCB contamination (from a failed [usually explosively] electrolytic capacitor, for example) can bridge those distances and create a hazard. Many cheap SMPS use phenolic PCBs (the same stuff that Veroboard is made from). This is nowhere near as robust as fibreglass (FR4), and it is more sensitive to humidity. It's used because it's much cheaper than fibreglass, and it's usually reinforced with paper, so it's easier on tools (drills, routers, guillotines, etc.). I consider it to be marginal for mains usage, but it's usually alright if safety-critical parts of the PCB are routed through, or wider than 'normal' creepage distances are used.
In some cases, the minimum creepage distance is set by an optocoupler. These usually have the standard 0.3" (7.62mm) spacing, allowing a couple of millimetres for the pads. SMD types may have a narrower pin spacing, and a cut-out beneath the optocoupler is good practice.
Figure 3.2 - Basic SMPS Schematic #2 (Simplified)
In the second drawing, a different approach is used. Rather than regulating the secondary via an optocoupler, the primary side is regulated using the supply for the IC. This means there's only one thing that has to provide full isolation, and that's the transformer. The above is adapted from an application circuit for the TNY267, an IC made by Power Integrations. In the datasheet, they state that "The TinySwitch-II oscillator incorporates circuitry that introduces a small amount of frequency jitter, typically 8 kHz peak-to-peak, to minimize EMI emission. The modulation rate of the frequency jitter is set to 1kHz to optimize EMI reduction for both average and quasi-peak emissions."
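The jitter scheme in the quoted datasheet text can be sketched numerically. The nominal 132kHz switching frequency below is my assumption for illustration; the quote only gives the 8kHz peak-to-peak deviation and the 1kHz modulation rate.

```python
# Sketch: switching frequency modulated +/-4 kHz (8 kHz peak-to-peak) by a
# 1 kHz triangle, spreading the EMI energy. Nominal frequency is assumed.

F_NOMINAL = 132_000.0   # Hz (assumed nominal switching frequency)
JITTER_PP = 8_000.0     # Hz peak-to-peak, per the datasheet quote
F_MOD = 1_000.0         # Hz, modulation rate, per the datasheet quote

def triangle(t_seconds):
    """Unit triangle wave in -1..+1 at F_MOD, starting at its peak."""
    x = (t_seconds * F_MOD) % 1.0
    return 4.0 * abs(x - 0.5) - 1.0

def switching_freq(t_seconds):
    return F_NOMINAL + (JITTER_PP / 2.0) * triangle(t_seconds)

# Sample one modulation cycle (1 ms) at 0.1 ms steps
samples = [switching_freq(t / 10_000.0) for t in range(10)]
print(f"min {min(samples):.0f} Hz, max {max(samples):.0f} Hz")
```

Instead of one tall spectral line at the nominal frequency, the energy is smeared across an 8kHz band, which lowers both average and quasi-peak readings in an EMI test.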
Stated another way, the circuit is designed specifically to ensure it will pass EMI tests using the standard test procedures. Fortunately, that also means that it won't cause significant EMI, as the test process is wide-band and will pick up any 'errant' frequencies that may be generated. The above quote is intended for designers, so they know before formal testing that the SMPS is unlikely to fail (that means re-testing, at considerable extra expense). It's interesting that the application circuit does not include a Class-Y1 capacitor, which is unusual.
The point here is that I can find (as can you) any number of off-line (mains powered) SMPS circuits, and many of them are flawed beyond belief. I will not show these circuits, nor provide links, because they are a menace and should never have been published. Just about any PSU design can be simplified dramatically, but at the cost of high levels of interference, and deplorably low standards for safety. One I saw used a half-wave rectifier so the output could be referred to the mains Neutral. In theory this is 'safe', but regulations worldwide state that the Neutral is to be considered as 'live', because many power outlets are not polarised.
I have taken photos of a selection of compliant and non-compliant SMPS. If you're willing to dismantle the supply it's fairly easy to see if it's likely to be compliant or not, but otherwise you rely on external markings (CE, UL, RCM, etc.) which may or may not be real. There's no way to know by looking at the logos or other markings, but if any PSU is much cheaper than from a reputable supplier then assume the worst.
Figure 4.1 - Complete Rubbish #1 (Top View)
This supply has had its input filter cap removed, but is otherwise intact. It was installed in a housing that didn't use the two AC 'connectors' (bottom left), and had wires running to the AC pins. This was quite obviously 'recycled', which isn't an issue per se, but it would not comply with any EMI tests as there are zero RF filtering parts installed. There's space for an input inductor (L1, top left) and an output inductor (L2, right centre). At least the Y1 cap really is a Y1 cap, or so it's marked. It could also be a fake.
Figure 4.2 - Complete Rubbish #1 (Bottom View)
The bottom view shows that the isolation barrier is acceptable, but we don't know if the transformer would withstand any serious test voltage. A recycled supply is not what you expect when you buy it 'new', but I was fully aware that it wasn't the 'real deal' when I paid AU$1.80 for it (including postage!).
Figure 4.3 - Complete Rubbish #2 (Top View)
Here's another one, but this one originally had a 1kV ceramic capacitor instead of a Y1 cap (even though the PCB is marked 'CY1', top right). The separation between 'hot' and 'safe' sides of the PCB is inadequate, with the minimum being only 3mm. I replaced the 1kV cap with a Y1 cap, but this supply could not be trusted as a Class-II (double-insulated) device, because it doesn't comply.
Figure 4.4 - Complete Rubbish #2 (Bottom View)
The so-called 'isolation barrier' is the empty section. You can see where the original 1kV cap was (top mid-left), and I drilled a new hole to accommodate the wider pin spacing for a Y1 cap. This supply doesn't even have provision for input or output EMI filtering, so it was obviously designed by an idiot. The transformer winding window is half empty (not visible in the pix), which tells me that there is almost certainly nowhere near enough insulation.
Figure 4.5 - LED Driver Supply (Top View)
By way of comparison, Fig. 4.5 shows a LED driver board which has everything required. The X3 cap is installed as part of the EMI filter, it has a fuse, inductor and MOV for input protection and is built to a high standard. At the output, there are two filter caps and an inductor, so I'd expect this supply to be compliant.
Figure 4.6 - LED Driver Supply (Bottom View)
The bottom view shows a clearly defined isolation barrier (the dark vertical line), and it has compliant distance between the 'hot' and 'safe' sides of the supply. This is what we should expect to see in a SMPS, but if you buy on an extreme budget expect to see 'complete rubbish'.
Figure 4.7 - USB Charger Supply (Top View)
This USB charger failed, with a rather loud BANG right next to me at the time. However, nothing became dangerous, and you can see that the input electro (400V) has exploded. The shrapnel was contained, but as the next image shows there was considerable PCB contamination when the electro started to leak. Consider that this supply was no more than a couple of years old when it blew up. It has all the required markings allowing it to be sold in Australia. Note that the Y1 cap was removed to install into the Fig. 4.3 supply.
Figure 4.8 - USB Charger Supply (Bottom View)
PCB contamination is obvious on the right side of the PCB. However, it hasn't bridged the isolation barrier, so it never became dangerous. This is why the creepage and clearance distances are so important. A 3mm barrier could easily have been bridged by the electro's contents, but it's also important to note that the failed electro is as far from the isolation barrier as possible.
There is a limit to the number of photos I can provide, but I hope you now have the general idea. 'Budget' almost certainly means the supply is non-compliant with mandatory safety requirements, and as noted earlier if you purchase one of these from overseas you become the importer, and will probably be held liable if anyone is injured or killed. The risk is obvious, but only if you pull the supply apart. This means that it's no longer safe to use, even if it is (or appears to be) compliant with applicable standards.
If you need a plug-pack style power supply, it should be obvious that buying from random on-line sellers (on any platform) is unwise. Reputable suppliers will only sell compliant products, because the risk of selling anything else is not worth it. There are very heavy fines imposed in Australia, and no doubt much the same applies elsewhere - the EU is well known for strict safety requirements (as it should be).
Most people will be unaware that mains leads (power cords) have a mandatory approvals requirement to allow them to be sold in Australia. Many other countries have similar rules, and it is illegal to sell (including trade, swap or loan) unapproved cables. As a result, it's almost certain that not one of the 'high-end' power cables sold is legal. Any hi-fi shop selling them is liable for prosecution if found to be selling un-approved mains cables.
For the worst possible example of a non-compliant mains lead, see Electrical Safety - Requirements And Standards, Section 10. This is best described as a travesty, and it would not pass safety testing in any country on Earth. This was supplied by an eBay seller, along with a (non-compliant in Australia) 12V, 10A power supply (which was recycled). The supply may not have approvals, but I've determined that it is 'safe' - at least for the application where it will be used (which will be in an earthed metal case).
There are very good reasons for the requirements, with electrical safety being the most obvious. The cables (including distribution boards and extension leads) must have the RCM (or approval number for earlier products) moulded into the plug/ socket and/ or the lead itself. There are no exemptions, but people are permitted to make their own extension leads or mains leads, provided they are not offered for sale. All manufactured cables are covered by AS/NZS 3112 (Australian/ New Zealand Standards).
Similar requirements apply elsewhere, and most products need to comply with IEC requirements (IEC standard means an International Electrotechnical Commission standard). There's a lot of info at the EESS (Electrical Equipment Safety System) website. In one section, they state ...
A Responsible Supplier (on-shore manufacturer or importer) must meet all the requirements of the EESS, including:
Second or subsequent suppliers in the supply chain must ensure that electrical equipment offered for sale complies with the following:
The info above is directly from the EESS website.
The above apply equally to small power supplies, so local (Australian) eBay sellers are breaking the law if they sell unapproved power supplies, 'high-end' mains cables or any other products that don't comply with mandatory standards. Even a cursory look through any of the online 'market places' will reveal countless non-compliant products.
I expect that the ramifications of cheap (but definitely not cheerful) SMPS are fairly clear. If you buy on-line (eBay, Amazon, etc.) you usually have no idea if a power supply is approved for use where you live or not, until it arrives and you're willing to take it apart if it looks dodgy. Not everyone knows what to look for, and I hope this article helps readers to know what's important. In almost all countries (or trading zones like the EU) there are specific markings indicating approval, but they aren't necessarily genuine. The old adage "if it looks too good to be true then it probably is" applies in all cases for sellers who aren't 'reputable'. Retail outlets and well known suppliers will almost always ensure that they don't fall foul of the regulatory bodies, as the penalties can be severe, and doubly so if someone is injured by the product they sell.
On-line sellers think they can get away with it, and if they're in another country they are probably right. That's because you become the importer, and therefore you are responsible for the suitability of the product for use where you live. If you have no idea what to look for, this places you at considerable risk, and doubly so if you re-sell the product to others. While you can certainly purchase a budget supply, dismantle it, and test it thoroughly, very few people have the necessary equipment to run the electrical safety tests, and testing for EMI is even harder. Even if you do determine that the PSU is safe and free from any interference problems, you then have to put it back together so that the case is properly and securely joined. In most cases, getting it apart will cause damage that can't be repaired easily, and it becomes a potential death trap.
Unfortunately, on-line 'marketplaces' have little or no idea of what their sellers are permitted to sell, so unsafe power supplies are a major problem. It seems that there's no incentive (or no real disincentive) to prevent the sale of non-compliant and possibly dangerous products. I have attempted to alert NSW Fair Trading on a number of occasions (Fair Trading is one of the Australian bodies who look for prohibited or unsafe goods, amongst other things). To say that my attempts were generally unsatisfactory would be an understatement. The same will apply elsewhere, so the only option is to buy plug-pack power supplies from known and reputable sellers. These should ideally have a 'bricks and mortar' outlet (i.e. a physical shop) where you can examine products before buying to ensure they have the proper approvals.
For the most comprehensive documentation I've seen, I recommend the Mean Well User Technical Manual, which I found after this article was published. It applies mainly to larger ('frame' type) supplies, but it has much useful information that isn't available elsewhere.
Elliott Sound Products - DC Servos
A number of audio circuits use a DC servo circuit, with the idea being to remove all traces of DC from the output of a preamp or power amp. Apart from the (IMO) complete futility of making audio equipment DC coupled throughout, it's also potentially dangerous to loudspeakers in particular. Operating any audio gear with response to DC is asking for trouble, and it should be obvious that a DC servo will not (by definition) allow operation to DC. The idea that a DC servo removes DC but doesn't affect AC (at any frequency) is simply untrue. Unless the DC servo is set for an unrealistically low frequency (0.01Hz for example), it must and does affect the low frequency AC as well. The question here is whether this is more or less 'intrusive' than a couple of capacitors.
A good part of this has come from the stupid idea that "The best cap is no cap." The best cap is a cap that's been chosen to ensure that your loudspeaker drivers will never be subjected to DC or very low frequencies that aren't audible anyway. It will usually be polyester, sometimes people insist on polypropylene, and in many cases an electrolytic cap is used. Despite all the objections, provided the voltage across any capacitor is low enough, the distortion contributed is negligible. Phase shift is often stated as a 'good' reason to avoid using an input cap, but a DC servo can actually make it worse. It's easy to ensure that there is close to zero phase shift at any frequency of interest, simply by using a larger cap than normal.
When you include a DC servo system, it creates issues of its own, and these are rarely discussed by anyone. There is also additional complexity in the overall circuit, which is sometimes considerable. A power amplifier will run from fairly high voltage supplies (typically in excess of ±25V), but the DC servo needs an opamp, which requires a lower voltage (around ±15V maximum). That means additional regulation is needed, which may only include a couple of resistors and zener diodes, but may use regulator ICs instead. In a combined preamp and power amp, the DC servo(s) can be run from the preamp supplies, and now two supplies are needed for the power amp board(s) - operating voltages and servo supply voltages.
This all means that there are more parts, more connectors and (obviously) more things that can go wrong. If any part of the DC servo circuit fails, there's every chance that the circuit will develop a DC output as a result, and that may be sufficient to cause speaker failure in a fully DC coupled system. The chance of a capacitor failing in such a way as to cause the same problem is very small - so small as to be considered negligible in most cases.
For anyone who thinks that caps are 'evil' (hint; they aren't) the only way to ensure a low DC offset is to use a DC servo, but as you'll see these impose their own special constraints. In many cases, the servo may be more intrusive than using capacitors, and I can't see how this can be considered a sensible approach. However, DC servos definitely have their uses, and dismissing them out of hand would be just as silly as rejecting capacitors because they 'ruin' the sound (another hint; they don't).
It must be remembered that any DC servo system will be set up so that it can remove small amounts of DC offset - perhaps up to ±1V or so would be a sensible maximum. If a faulty preamp is connected with (say) 5V DC at its output, the DC servo system will not have enough range to remove that much, so the power amplifier will provide DC straight to the speakers (which will announce their displeasure by liberating 'magic smoke').
Consider that just about every piece of music you listen to has already passed through countless capacitors within the recording process. Not just coupling caps, but those used for equalisation (whether vinyl or CD - EQ is almost invariably used during recording), and even in microphones such as capacitor (aka 'condenser') mics or any other that has electronic circuitry. It's unrealistic to imagine that every piece of equipment used for recording only contains capacitors with the most advanced dielectrics available, because the vast majority will include no such thing. It's equally unrealistic to assume that if no capacitors are used in the playback audio chain it will make anything sound 'better'.
By definition, an amp or preamp using a DC servo cannot reproduce DC. The servo will operate and remove (or try to remove) the DC component, but if it's large enough to saturate the servo opamp then DC will get through anyway. Everything has its limits, and no ideal devices exist, so the end result will always be a compromise.
This is not to say that the DC servo is 'pointless'. There are countless pieces of equipment that rely on a DC (or other) servo for their operation, and the purpose of this article is to provide useful information, and not to dissuade anyone from adopting a DC servo if it suits their purpose. When used for some perceived benefit (such as eliminating capacitors from the signal path), then the actual benefit may be far less than expected. All circuit building blocks have their place in electronics, and it's up to the designer to determine what is necessary to achieve the desired goals. If this includes a DC servo, then that's what should be used.
Before continuing, not everyone will know what a DC servo is or how it's used, so some explanations are in order. If a circuit has a DC error (i.e. some amount of DC output when it should be zero), a servo is used to provide just enough input offset to correct the output and set it to zero with no signal. The servo is almost always a fairly simple integrator, most commonly using a FET input opamp to allow low values of capacitance and high resistances. Some practical examples are shown further below.
The integrator is set up so that it provides negative feedback, but with very high DC gain to maintain a low final error. Even a 'pedestrian' opamp such as a TL071 has a DC open loop gain of at least 100,000 (100dB) and often more. The primary error term in the final system is the opamp's input offset voltage (typically 2-3mV, but usually less in practice). The overall open-loop gain (i.e. before feedback is connected) of an amplifier and servo for DC and very low frequency signals can easily exceed 120dB (1,000,000).
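The gain figures quoted above are straightforward voltage-ratio to decibel conversions, which a short sketch can confirm (this just restates the article's numbers, nothing more):

```python
import math

def voltage_db(ratio):
    # Voltage (not power) ratio to decibels: dB = 20 * log10(ratio)
    return 20 * math.log10(ratio)

print(voltage_db(100_000))    # TL071-class open-loop DC gain: 100 dB
print(voltage_db(1_000_000))  # combined amp + servo DC loop gain: 120 dB
```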
The DC servo provides a very large open-loop gain improvement over the amplifier circuit by itself. This is (by design) limited to sub-audible frequencies, and the additional DC gain provided by the servo's opamp is able to remove DC offset almost completely. Few power amplifiers have a high enough open loop (or DC) gain to be able to effectively eliminate any DC offset by themselves. The opamp (and associated integrator) ensure that there is more than sufficient DC gain to reduce overall DC offset to negligible levels.
Note: The ultimate limitation of any DC servo is the DC input offset voltage of the opamp used for the servo itself. For an opamp such as the TL072, the 'typical' input offset voltage is 3mV, and unless you include a DC offset control for the servo opamp, the main amplifier's output DC offset can be no better than this. I mention the TL072 because it's ideal for this purpose, having very low input current which minimises errors due to this factor. The integrator's input DC offset has been assumed to be zero for the following discussion, but it will rarely be so in practice.
Figure 1 - Basic DC Servo Principle
The basics of a DC servo are shown above. The integrator ( ∫ ) essentially ignores AC, and produces the integral (in simple terms, the average) of the output. If it happens to be some value of DC, then the output of the integrator will be just that, provided of course that the AC component is at a frequency high enough to be 'ignored' by the integrator itself. Note that the integrator is inverting. The input and integrator are then summed ( ∑ ) so that any DC at the amplifier's output is effectively cancelled.
The circuit has been shown connected to a loudspeaker (this site is mainly about audio after all), but in reality it can be any transducer, as may be used for scientific, medical, industrial or other application(s). DC servos are used in some unlikely places, but the same principles apply regardless. Because they are DC servos, much of the complex feedback loop stability criteria may not be necessary, but as you'll see below, just the addition of an input capacitor can mess that up badly.
In Figure 2, there is an amplifier circuit (shown simply as 'Amp') and a DC servo circuit (shown as U1). If the amplifier shows any sign of DC at the output, this is integrated by U1, and that signal is applied to the amp's input to correct the offset. Let's say that the amplifier (for whatever reason) has an output DC offset of 620mV (corresponding to an input DC offset of around 27mV). While that won't hurt a loudspeaker (power into an 8 ohm driver is only about 48mW) it may cause a small but unacceptable shift in the speaker cone's static position. In some other applications, it may be catastrophic (for example, driving a transformer).
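The arithmetic above is easy to verify. Using the article's 620mV and 27mV figures, the implied closed-loop gain and the power delivered into an 8 ohm load fall straight out of Ohm's law:

```python
v_out = 0.620   # output DC offset, volts (from the article)
v_in = 0.027    # input-referred DC offset, volts (from the article)
load = 8.0      # loudspeaker load, ohms

gain = v_out / v_in                 # implied closed-loop gain, ~23
power_mw = v_out**2 / load * 1000   # P = V^2 / R, ~48 mW
print(round(gain, 1), "times gain,", round(power_mw, 1), "mW into 8 ohms")
```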
Figure 2 - Practical Inverting DC Servo
When the DC servo is connected, the initial DC is still 620mV, but the servo circuit reduces that to less than 1mV within a few seconds. After around 15 seconds (when the circuit has fully settled), the DC offset is about 100µV - a significant improvement. Any DC at the output of the amp is integrated by U1 (via R6 and the integration capacitor C2), and once settled the output of U1 applies exactly the right amount of DC offset to the input to force the amp's output to (close to) zero. With the values shown (and a DC offset of -620mV without the servo), the servo's output voltage will be +300mV, and it feeds just enough correction to the amp's input to force the offset down to only 100µV. The (passive) summing point is the junction of R1, R2 and R3.
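The settling behaviour described above can be illustrated with a crude discrete-time model of the loop. The integrator time constant and the summing-network attenuation below are assumptions chosen to give settling in the same ballpark as the article's description - they are not the actual Figure 2 component values:

```python
# Crude Euler simulation of an inverting DC servo correcting an amp's offset.
# tau_int and k are illustrative assumptions, not the article's circuit values.
A = 23.0        # amp closed-loop gain (620 mV out for 27 mV in)
tau_int = 1.0   # servo integrator time constant (R6*C2), assumed 1 s
k = 0.03        # fraction of servo output reaching the amp input, assumed
dt = 0.001      # simulation time step, seconds

v_servo = 0.0
v_out = A * 0.027                      # ~620 mV initial offset, no correction
for _ in range(int(15 / dt)):          # run for 15 s
    v_out = A * (0.027 + k * v_servo)  # amp amplifies offset plus correction
    v_servo -= (v_out / tau_int) * dt  # inverting integrator accumulates error

print(f"offset after 15 s: {v_out*1e6:.0f} uV")
```

The offset decays exponentially towards zero while the servo output settles near the value needed to cancel the input offset, mirroring the behaviour the article describes (the exact residual depends on the assumed constants).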
However, the circuit shown is now sensitive to the source resistance, which has to be in excess of 20k or the DC servo is unable to make the correction needed. U1 can supply a maximum output voltage of around 13V, and this can't force enough current through the bias network (R3, R2 and R1) to cope with low impedance inputs. This is obviously unacceptable, since most sources have an output impedance of close to 100 ohms, so the DC servo can't function. There's another problem as well, in that if the source is connected or disconnected while the amp is on, it takes time for the servo to reset itself to suit the changed conditions. With an audio system, the speaker will make a fairly loud 'thump' as the input is changed. You also can't use an input pot, because the DC will make it noisy (and it will cause more issues with source impedance).
One answer is to include C1 (shown greyed out) so the DC servo feedback path is isolated from the source. This has some unexpected consequences though, because there are two time constants involved in the feedback path, which cause some potentially serious issues. This means that we do need to concern ourselves with feedback loop stability. The graph below shows what happens if you use a 100nF, 1µF and 10µF cap for C1. With 10µF there's some bad ringing as the circuit settles, and this also shows up in the frequency response at very low frequencies. The frequency response shows a peak of more than 6dB at 0.36Hz, and although well below audibility, it will cause 'disturbances' when stimulated by the audio signal. If C1 is reduced to 100nF, settling time is as close to perfect as you'd ever need, but response is about 2dB down at 20Hz. This is almost certainly unacceptable.
Figure 3 - Effect Of Two Time Constants In Input Circuit
Using a 1µF cap for C1 gives a perfect response, with just the smallest overshoot and no low frequency boost where you really don't need it. Unfortunately, the servo makes the input capacitor value critical for proper circuit behaviour, something that isn't usually a problem. We've come to expect that altering the low frequency response is simply a matter of changing the input capacitor, but once a DC servo of the form shown is in place, the capacitor value becomes a critical part of the circuit. In particular, the response of the red trace is not simply undesirable, it's potentially dangerous! There's more on that further below.
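The dependence on C1 can be modelled (very roughly) as a second-order loop: the servo integrator contributes one time constant and C1 with its associated resistance the other, and the damping factor falls as C1 grows. The values below are illustrative assumptions, not the article's actual circuit values, but they reproduce the qualitative pattern described above (100nF overdamped with early rolloff, 1µF near-critical, 10µF ringing):

```python
import math

# Rough second-order model: characteristic equation tau1*tau2*s^2 + tau1*s + G = 0,
# so damping zeta = 0.5 * sqrt(tau1 / (tau2 * G)).  All values below are assumed.
G = 10.0       # effective DC loop gain factor (assumed)
tau1 = 1.0     # servo integrator time constant, seconds (assumed)
r_in = 22e3    # resistance seen by C1, ohms (assumed)

zetas = {}
for c1 in (100e-9, 1e-6, 10e-6):
    tau2 = r_in * c1                          # input-cap time constant
    zeta = 0.5 * math.sqrt(tau1 / (tau2 * G)) # damping factor of the loop
    zetas[c1] = zeta
    if zeta < 0.7:
        verdict = "underdamped (rings)"
    elif zeta < 1.5:
        verdict = "near-critical"
    else:
        verdict = "overdamped (early rolloff)"
    print(f"C1 = {c1*1e6:g} uF  zeta = {zeta:.2f}  {verdict}")
```

The point of the model is only that the damping scales as 1/sqrt(C1), which is why the capacitor value is so critical in this topology.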
While they are used sometimes, inverting DC servos are the least desirable way to achieve the goals expected. An input capacitor should be considered mandatory to prevent possibly serious interactions with the source impedance/ resistance. The capacitor value has to be selected with care, and extensive tests are needed to ensure that the circuit is absolutely stable. A damped oscillation or premature rolloff will result if the cap is too large or too small (respectively). Consider that many sources (e.g. preamps) have an output capacitor, and that may interact very badly with the power amplifier/ servo combination.
If the DC servo is non-inverting so its output is at the same polarity as the amp's output, the correction signal can be applied to the negative feedback point of the main amplifier to correct any error. This overcomes the problem of the input capacitor, because it's no longer part of the DC feedback loop. The value can be changed at will (or even left out if you are particularly brave) without affecting the response of the DC servo. Note that if there is any DC potential at the amp's input, that can cause issues, and the servo may not have sufficient range to change that.
The resistance of the DC feedback resistor now becomes part of the main amp's feedback circuit, so it has to be high enough that it doesn't adversely affect the desired gain. With the values shown below, the gain is affected only marginally, but it won't normally be a problem. You need to be aware that when used like this, the opamp's output noise (and any distortion that may be created) will be injected into the amplifier's feedback loop, so that needs to be considered in circuits designed for very low noise. The opamp's output is also part of the feedback loop, and by extension is also part of the signal chain.
Figure 4 - Non-Inverting DC Servo Connections
The input to the DC servo opamp must be constrained so that it's within the opamp's input voltage range. If the amp has supply voltages of ±50V, you can't apply that to an opamp's input because it will die. Now, we can either add an attenuator (which will badly affect performance) or get clever (the preferred choice whenever possible). If a passive integrator is used we can ensure that nothing below 1Hz can cause a problem, and the opamp's input can be protected easily because of the high impedance. An interesting point about this circuit is that the rolloff is 6dB/ octave, and not 12dB/ octave as you might expect. This is fortunate, because it means that only one time constant is involved (2.2MΩ and 100nF). The benefit of the circuit shown is that it has far greater gain at DC (and below 1Hz).
The diodes protect the opamp's input from fault voltages. Note that when diodes are connected in the 'preferred' position, leakage can cause the servo to adjust the output voltage to a few millivolts (rather than less than 1mV). This is minimised by using lower value resistors and higher value caps. For the circuit shown, 100k resistors and 2.2µF caps minimise any offset created by diode leakage. An alternative is to use two (or even three) diodes in series at each location.
Despite the capacitor from the servo opamp's output to input, this is not an integrator. The cap allows the opamp to run at maximum gain for DC voltages, but doesn't add any usable AC filtering. In theory it can use lower (or higher) values, but it's more sensible to maintain C2 and C3 at the same value. This ensures that the circuit is unconditionally stable and has no very low frequency response aberrations which will occur if the values are different. Likewise, R5 and R6 should also be the same value, both to maintain a stable circuit and minimise opamp input DC offset.
If the servo is configured any other way, it will reduce the available gain of the DC servo, and that affects the ability of the circuit to remove DC. With the arrangement as shown above, the servo can pull the offset back to well under 25µV (as simulated). No-one actually needs offset to be that low, but nor does it hurt anything. This is obviously a far better option, as it means that you can use any value of input capacitor you like (including no cap at all), but beware if part of the DC offset problem is actually caused by the input stage of the power amp. That will cause a pot to become noisy, and will also 'upset' the delicate balance achieved by the DC servo when the level is changed. It will correct for any change, but it's not instant (it will take up to 1.5 seconds to re-settle with the values shown).
The servo's settling time is an important consideration, and it should be at least twice the periodic time of the lowest frequency of interest. If you expect the amp to be flat to 10Hz, that's a period of 100ms, and the integrator requires a time constant of at least 200ms (2.2MΩ and 100nF gives 220ms). In a simulation, response was still flat to 2Hz with the circuit shown. Making the servo slower will allow lower frequencies, but there's no point because 2Hz is already well below any audible (or reproducible) frequency. The calculated (and simulated) -3dB integrator frequency for the values shown is ...
f = 1 / ( 2π × R × C )
f = 1 / ( 2π × 2.2M × 100n ) = 0.72Hz
It may be unexpected, but the integrator's -3dB frequency does not necessarily correspond to the amplifier's -3dB frequency. The value of the DC servo output resistor not only changes the amplifier's gain, but also the low frequency -3dB point. With R4 being 22k as shown, the amp has a -3dB frequency of 0.72Hz, as expected. If the value of R4 is increased, the -3dB frequency is reduced and vice versa. For example, if R4 is 100k, the -3dB frequency is 0.16Hz. Provided the integrator's frequency is low enough (aim for less than 1Hz), you don't need to worry too much about it. If you choose to worry anyway, the amp's -3dB frequency is inversely proportional to the value of R4. Double R4 to 44k, and the -3dB frequency is halved, to 0.36Hz. Below the integrator frequency, the amp's response falls at 6dB/ octave.
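The corner-frequency arithmetic above is easy to reproduce. The formula and the inverse scaling with R4 follow directly from the article's values (0.72Hz at R4 = 22k):

```python
import math

def corner_hz(r_ohms, c_farads):
    # Single-pole corner frequency: f = 1 / (2*pi*R*C)
    return 1 / (2 * math.pi * r_ohms * c_farads)

f_int = corner_hz(2.2e6, 100e-9)   # integrator corner: ~0.72 Hz
print(f"integrator corner: {f_int:.2f} Hz")

# The amp's -3 dB frequency is inversely proportional to R4 (0.72 Hz at 22k)
for r4 in (22e3, 44e3, 100e3):
    print(f"R4 = {r4/1e3:g}k  ->  {f_int * 22e3 / r4:.2f} Hz")
```

This reproduces the 0.72Hz, 0.36Hz and 0.16Hz figures quoted in the text.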
Note the connections for the two diodes. These are sometimes placed in reverse-parallel with C2 (shown as 'alternate connection', in light grey), but this is basically a very bad idea. The reason is distortion, and this is covered in the following section. It appears that many people seem not to have noticed that this can create measurable distortion with high-level, low-frequency amplifier output signals. The method shown (with diodes in black) is a far better option, provided the integrator frequency is low enough. No audio signal should ever be able to drive the opamp's input outside its linear range.
In general, the non-inverting DC servo is to be preferred in almost all cases. It's inherently stable and has no 'bad habits'. There may well be cases where it's not appropriate, but these are likely to be few and far between. It's essential that you know about both possibilities, because one never knows where a particular electronic building block will be used, and the idea is to pick the topology that works best in the final circuit.
In some cases people operate power amplifiers wired as an inverting amplifier. This isn't especially common, but it may be done for one amp in a BTL (bridge-tied-load) configuration. A DC servo doesn't really care whether the amplifier is inverting or non-inverting, provided the DC feedback applied is negative. It's easy to inadvertently connect the servo's output to provide positive feedback, which will result in the amplifier developing a very high DC output voltage (usually close to one or the other supply rail). It's obvious that this would not be good.
If the power amp is inverting, it may be tempting to use an inverting servo, supplying the necessary DC offset compensation to the unused non-inverting amplifier input. The signal to the non-inverting input will typically be bypassed so that it's at earth (ground) potential for AC, as is usually required to ensure proper amplifier function. This approach can produce some rather alarming (and undesirable) results, and it's very hard to recommend. Figure 5 shows an example circuit.
Figure 5 - Inverting Amplifier With DC Servo
This circuit is basically the same as shown in Figure 2, except that the input is now via R5, connected directly to the inverting input of the power amp. The resistors (R1, R2 and R3) have been adjusted to 'sensible' values for this topology. It appears that it should be quite alright, but as discussed with the inverting DC servo, there are some issues that make this approach unstable. The ringing waveform seen in Figure 3 is back in full force, due to the two time-constants (R6, C2, and R1, C1). Not only does this create ripple as the circuit settles, but it creates a resonant boost of 9dB at 3Hz. The only way to prevent both the settling-time ripple and dangerous low frequency boost is to use an input capacitor (C3) in series with R5. The value is critical (again), and with the values shown it has to be 47µF, which ensures complete stability. Alternately, C1 can be reduced to 1µF and C3 bypassed, which also results in stable operation. However, noise from the servo is not attenuated as well.
C3 is optimal at 47µF. Any other value for C3 (and especially no capacitor at all) will provide results that are entirely unacceptable unless C1 is also adjusted. The response with C3 shorted out is shown below, and it should be immediately apparent that this is not a good idea. By way of contrast, if the circuit is used with a non-inverting servo system, it makes no difference if the input capacitor is there or not, and the circuit is much better behaved. Any system that has critical capacitor and/ or resistor values is inherently unstable, and if there is any deviation 'bad things' will happen. By ensuring the servo is unconditionally stable, the potential issues are avoided.
+ +
Figure 6 - Inverting Amplifier Settling With DC Servo And C3 Shorted
The initial offset is 600mV as simulated. A damped oscillation such as that shown above is always a sign that something is wrong, and it will occur every time there is a change of impedance at the input. Adding the capacitor provides additional damping that removes the oscillation, but as noted the value is critical. It's also a large value, and the only viable part is an electrolytic capacitor. With the values shown (and including C3), the phase shift at 10Hz is 23 degrees. The circuit does behave itself if the input is left open circuit, so at least that's not something you'd need to be concerned about. One advantage of an inverting servo is that you don't need to be so concerned about protective diode leakage.
Figure 7 - Inverting Amplifier With Preferred DC Servo
The arrangement shown above is a far better proposition than that shown in Figure 5. It behaves itself without ringing or other misbehaviour regardless of whether you include an input capacitor or not, and is the circuit I'd recommend. A power amplifier is no place for any circuitry that's potentially unstable, because you can never know the exact specification of the preamp driving it, unless the driver circuit is within the same chassis. No performance graphs are shown simply because there's no need for them.
This circuit will increase the amplifier's noise floor very slightly, both because it's an inverting amplifier, which has inherently higher noise than a non-inverting circuit anyway, and because the servo opamp injects its output noise into the summing point (the junction of R2, R3 and R4). R1 is not used in this arrangement.
In the introduction, I stated that a DC servo can (and does) introduce low frequency phase shifts, and that this can be worse than using a capacitor. We need to examine the circuit to see how this is true, because a DC servo may be used by some people in the belief that it eliminates low frequency phase shift. A quick look at Figure 4 shows that there is feedback at DC, but importantly, low frequencies must also be affected. While a DC servo does remove the DC offset, it must pass some AC as well, because it's basically a fairly simple low pass filter. The only component that absolutely removes DC is a capacitor, which can be as large as you wish so it doesn't affect anything within the audio range.
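As a quick sanity check of the 'capacitor can be as large as you wish' point, the corner frequency of a simple RC high pass is easily calculated. The component values below are illustrative only - they are not taken from the circuits in this article:

```python
import math

def highpass_corner_hz(r_ohms, c_farads):
    """Corner (-3dB) frequency of a simple RC high pass: f = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# Illustrative: a 10uF input cap into a 100k input impedance.
fc = highpass_corner_hz(100e3, 10e-6)
print(f"corner = {fc:.3f} Hz")  # well below the audio band
```

Doubling either the resistance or the capacitance halves the corner frequency, so moving it below any frequency of interest is trivial.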
Looking at the Figure 4 circuit, you see that there are two integrators, with an effective (combined) turnover frequency of 0.72Hz. The output from U1 is fed back into the inverting input of the amp, and that has two effects. The first is that it increases the gain - not by very much, but it is increased because R4 is effectively in parallel with R3, giving an effective value of 990 ohms. Secondly, the output of U1 is mostly (but not entirely) DC - it also passes some low frequency AC back to the inverting input of the amp. That reduces the gain for low frequency AC, and in turn creates a phase shift. It cannot be otherwise!
Figure 8 - Amplitude And Phase Of Amp With Servo
The above graph shows amplifier frequency response, DC servo frequency response and amplifier phase, from 1Hz to 10kHz. C1 and C2 are shorted out, and the amplifier and DC servo shown in Figure 4 are used to remove the DC offset. It's quite apparent that the amp's output phase changes as frequency is reduced, and the drop of level below 4Hz is also visible. This graph was taken without an input or feedback blocking capacitor, yet there is still an obvious phase shift and a reduction of the low frequency signal.
You can't tell from the graph, but the frequency response is 1.8dB down at 1Hz. That's nothing to complain about of course, but the phase shift at the same frequency is 36°, rather spoiling the party for those who insist that a servo prevents phase shift. The only difference between the two circuits used is the gain - when the servo is in place, the AC gain is 24 rather than 23 as you'd normally expect, due to the 22k servo resistor (R6) which is in parallel with R3. When used with the servo, the input DC offset was set to 27mV, and DC output was 100µV.
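The gain change is easy to verify on paper. The sketch below assumes nominal values consistent with a non-inverting gain of 23 (a 22k feedback resistor and a 1k gain-setting resistor); the exact values aren't stated here, so treat them as assumptions:

```python
# Assumed values (not all are stated in the text): Rf = 22k feedback resistor,
# R3 = 1k gain-setting resistor, R6 = 22k servo output resistor.
rf, r3, r6 = 22e3, 1e3, 22e3

gain_without_servo = 1 + rf / r3            # non-inverting gain: 23
r3_effective = 1 / (1 / r3 + 1 / r6)        # R3 || R6, as seen by the feedback divider
gain_with_servo = 1 + rf / r3_effective     # rises to 24

print(gain_without_servo, gain_with_servo)
```

With these assumed values, the servo resistor adds exactly one to the gain (1 + Rf/R3 + Rf/R6), matching the 23 to 24 change quoted above.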
This should be enough to demonstrate that a DC servo does not ensure zero phase shift. In fact, if the input cap and a feedback cap are used, it's not difficult to get less phase shift than with a DC servo, without the added complexity. You don't get the very low DC offset at the output of course, but there's no good reason to aim for less than 1mV in a real power amplifier. It's generally acceptable to have up to 100mV offset (a power of less than 2mW into an 8 ohm driver).
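The 2mW figure follows directly from P = V²/R; a quick check:

```python
# DC power dissipated in the load by an offset voltage: P = V^2 / R.
def offset_power_watts(v_offset, r_load=8.0):
    return v_offset ** 2 / r_load

p_mw = offset_power_watts(0.1) * 1000   # 100mV into 8 ohms
print(f"{p_mw:.2f} mW")                 # 1.25 mW, under the 2mW quoted
```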
Figure 9 - Amp Circuit Without Servo
This circuit was used to evaluate non-servo amplitude and phase. With the servo disconnected and the caps as shown, the amp will have an output DC offset of 27mV, but this is well within the acceptable limit for a power amplifier. Most reasonably typical power amps have a DC offset of no more than about 20mV, and in some cases a trimpot is provided to allow it to be removed (almost) completely. Many people don't like trimpots, but they are never a problem if properly sealed multi-turn types are used, rather than cheap open-frame single turn trimmers.
No distortion figures are applicable because the circuit is simulated (including the input DC offset voltage). While the coupling and feedback caps are high values, they are low voltage types because there is almost no voltage across them. It's sometimes thought that electrolytic caps should always have a polarising voltage, but that's not true at all. Countless circuits (DIY and commercial) use electros without any polarising voltage, and they live a long and happy life provided the voltage across them remains below 1V at all times (although I aim for no more than 100mV, AC and/or DC).
Figure 10 - Amplitude And Phase Without Servo
The amplitude is down by 118mdB (0.118dB) at 1Hz, and the worst case phase shift is only 12° at 1Hz (vs. 1.8dB down and over 35° using the DC servo). This was achieved using a 33µF capacitor for C1, and placing a 1,000µF cap in series with R3. The capacitance values are a bit over the top, and I could easily have used lower values and achieved a good result, but it's still quite easy to beat the DC servo with appropriate capacitors, and there is no change to the 'settling time' (this is inevitable of course, because caps have to charge if there's any appreciable offset). With the values shown, steady state DC conditions are achieved in under 2 seconds. This is almost identical to the settling time with the DC servo in place. If R1 is reduced to 22k (which is a more sensible value), the phase shift is still only 21° at 1Hz, and is negligible (< 2°) for any frequency above 10Hz.
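These figures can be checked from the two high-pass time constants alone. The sketch below assumes R1 = 100k and R3 = 1k; the resistor values are assumptions (they aren't stated in this section), chosen because they reproduce the quoted figures with the stated 33µF and 1,000µF capacitors:

```python
import math

def hp_mag_db(f, fc):
    """First-order high pass attenuation in dB (negative) at frequency f."""
    return -10 * math.log10(1 + (fc / f) ** 2)

def hp_phase_deg(f, fc):
    """First-order high pass phase lead in degrees at frequency f."""
    return math.degrees(math.atan(fc / f))

# Assumed: R1 = 100k with C1 = 33uF (input), R3 = 1k with C2 = 1000uF (feedback).
fc1 = 1 / (2 * math.pi * 100e3 * 33e-6)    # ~0.048 Hz
fc2 = 1 / (2 * math.pi * 1e3 * 1000e-6)    # ~0.159 Hz

total_db = hp_mag_db(1.0, fc1) + hp_mag_db(1.0, fc2)
total_deg = hp_phase_deg(1.0, fc1) + hp_phase_deg(1.0, fc2)
print(f"at 1Hz: {total_db:.3f} dB, {total_deg:.1f} degrees")
```

With these values the result lands on roughly -0.119dB and 11.8° at 1Hz, agreeing with the 118mdB and 12° quoted.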
Remember, if there's virtually no (AC) voltage across any capacitor, then it can contribute virtually zero distortion, regardless of its 'credentials' or otherwise as discussed ad nauseam on internet forum sites. The capacitor values used are much higher than necessary, and it may appear that if the two time constants (C1, R1 and C2, R3) are made the same as those used for the servo (about 220ms) the response and phase should be identical. However, this isn't actually the case at all - they need to be larger. If C1 is 10µF and C2 is 330µF, then the servo and non-servo phase shift is virtually identical, but low frequency attenuation is less (at 1Hz, -1dB without servo, -1.8dB with servo).
It's safe to say that this is probably not what you expected, but before you scoff I recommend that you either run a physical test or a simulation using the values described so you can see it for yourself. The use of a DC servo has long been held as the 'solution' to using input and feedback capacitors in terms of phase response (which is actually inaudible). However, it can easily lead to a system that has turn-on noise, and the phase 'problem' isn't fixed despite the added complexity. The effects of having the opamp's output connected into the feedback path may easily undo any perceived benefit, although again, it's likely to be inaudible in practice if a competent opamp is used.
You have to be careful with any DC servo. If by some misadventure you end up with excessive gain and enough phase shift in the servo loop, it's possible for the entire circuit to oscillate at some very low frequency. It will take a serious error to accomplish this, but it most certainly is possible. I think I can say with some certainty that this is undesirable, so if you intend to use a servo circuit it must be tested thoroughly to ensure that it is stable under all possible operating conditions. The circuit shown in Figure 1 is likely to show a damped oscillation, but only if you try to filter the DC feedback from the amp with a resistor/capacitor filter. That isn't shown, and for a very good reason - with the wrong combination of input and bypass capacitance, it may be quite easy to create a low frequency oscillator. Any time you have three time constants in a circuit, you run the risk of creating an unintentional phase shift oscillator, so care is always necessary. Three time-constants is a recipe for disaster! All you need is a 2-stage 'post-servo' filter plus the servo itself, and oscillation is almost a certainty.
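The risk is easy to illustrate: each first-order section contributes up to 90° of phase shift, so three sections can pass through the 180° needed for oscillation. The sketch below finds that frequency numerically for three identical high-pass sections (the corner frequency used is illustrative):

```python
import math

def loop_phase_deg(f, fc, stages=3):
    """Total phase lead of 'stages' identical first-order high-pass sections."""
    return stages * math.degrees(math.atan(fc / f))

fc = 0.72  # Hz - illustrative corner frequency for each section

# Bisect for the frequency where the loop phase falls through 180 degrees.
lo, hi = 1e-6, 100.0 * fc
for _ in range(100):
    mid = (lo + hi) / 2
    if loop_phase_deg(mid, fc) > 180.0:
        lo = mid    # still more than 180 degrees: the crossing is higher up
    else:
        hi = mid

# Analytically, 3 * atan(fc/f) = 180 deg when fc/f = tan(60 deg) = sqrt(3).
print(f"180-degree frequency: {lo:.4f} Hz (theory: {fc / math.sqrt(3):.4f} Hz)")
```

Whether the circuit actually oscillates at that frequency also depends on the loop gain there, but the existence of a 180° frequency is exactly why a third time constant is so dangerous.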
The precursor to this peculiar (and most likely unexpected) problem can be seen in Figure 2 (red trace), where there is already a damped oscillation. If a third time constant (i.e. another filter) is added, an oscillator becomes probable. A damped oscillation is bad enough, but one that slowly but surely builds to full power output at a sub-audible frequency has little to commend it. Essentially, adding a third filter creates a phase-shift oscillator, with an unpredictable frequency and amplitude, but the ability to destroy any speaker.
All DC servo systems take time before the servo can correct any gross errors, but small errors are usually dealt with quite quickly. Regardless, it's a good idea to have a muting relay at the amp's output so speakers aren't connected until the system is stable. If this isn't done, there's a good chance that the amp will 'pop' or even 'thump' when turned on, because of the servo's time lag. This problem only becomes critical when a circuit naturally has a high DC offset, because that will be passed through the system until the servo circuit has had enough time to make the necessary correction.
Most of the time, amplifier circuits have a low enough DC offset that a servo isn't necessary. One of the main reasons that servos became popular in the first place was the desire for amplifiers that are flat to DC (or close to it). Claims that phase shift caused by the input (and/or feedback blocking) capacitor somehow 'ruins' the music are a fantasy, and have no place in engineering. The vast majority of such claims are made based on sighted tests, where the listener/tester knows which is which. Without the safeguard of a blind (or double blind) test, sighted tests give results based on the 'experimenter expectancy' effect - if you expect something to sound better or worse, then it will. Once the same test is conducted blind, 'obvious differences' vanish in an instant.
The idea that using a DC servo 'eliminates' the need for an input capacitor is true, but it comes at a cost. Not just the extra parts, but like it or not, the servo opamp will have some influence on the amplifier's performance. If done well the influence is minimal, but it's still a consideration for anyone who thinks that eliminating capacitors is a worthy goal. As with all things in life (and electronics) there are compromises. If you want the best performance with a minimum of influence on the amp, then the integrator must be very slow, but that means the amp isn't ready for use until the DC component has been removed. If it has a fast action, the low frequency end of the spectrum is affected, both amplitude and phase.
Something else that may come as a surprise is that at low frequencies, a DC servo may increase distortion. Looking at Figure 4 again, it should be apparent that at some low frequency, the diodes shown in the 'alternate connection' will clip the AC waveform going to U1. Although U1 appears to be configured as an integrator, that's an illusion - it acts as a voltage follower for AC. The capacitor provides AC feedback so the opamp doesn't clip the AC that gets past the 'real' integrator (R5 and C2), and is necessary to ensure very high DC gain so any offset can be cancelled. When the 'grey' diodes are used, they will clip low frequency AC waveforms, and the DC servo couples a distorted signal back into the amplifier's feedback network. This is now part of the amplifier's output. Even with an 'ideal' (completely distortion-free) amplifier, the distortion of the Figure 4 circuit with a 50V peak output signal (full power) is 0.07% at 10Hz, and around 0.05% at 20Hz. The distortion will increase with decreasing frequency, but at higher frequencies it's negligible. With four series-connected diodes as shown, there is no effect at any frequency, and the effects of diode leakage are minimised.
This particular issue can be eliminated by omitting the diodes, but the opamp's input stage may be damaged if an amplifier DC fault develops. While the diode 'alternate arrangement' shown in Figure 4 is common, it's better to use the diodes from the opamp's non-inverting input to each supply rail as shown. To prevent diode leakage from creating offset issues, use ultra-low leakage diodes, or two in series. Provided R5 and C2 are dimensioned properly, no audio signal can exceed the opamp's linear input range. Without proper testing and close attention to every voltage in the system, this potential problem can easily pass unnoticed. An amplifier fault may cause the opamp's input to be forced to just above/below the supply voltage, but this is allowed for in most opamps. The high value integration resistor limits the current to a safe value.
You also need to select the value of the servo's output resistor carefully. If it's too low, it will affect gain and may inject opamp noise into the amplifier. If it's too high, the servo opamp may not be able to deliver sufficient current into the summing point to remove the offset. The value used in Figure 4 (22k) is reasonable, but it can be increased if desired. However, in combination with the feedback network, this acts as an attenuator, reducing the total DC gain through the circuit. That means there may be a little more DC at the output. If the value is increased too far, the opamp may not have enough output voltage to reach equilibrium. In general, the opamp's output voltage should not exceed ±5V (assuming 15V supplies) once the system has stabilised, to ensure that there is sufficient range to cope with changes over time. The same caveat applies if you use an inverting servo.
Despite the comments made above, there are times when the use of a DC servo is either essential or at least highly desirable. For many commercial products, it's essential to ensure that the wrath of 'audiophiles' or reviewers is not incurred, as might be the case if there's any measurable offset. This is a small market, and a perceived 'deficiency' can be damaging in the marketplace, especially for 'high end' products commanding a premium price. Because of the unwarranted bad rap that capacitors have in some circles, it may be seen as desirable to eliminate them from the signal path. No mention shall be made of the electrolytic caps used in the power supply of course, as these are generally ignored, despite the fact that they are most certainly part of the signal path.
A very important application is for instrumentation, where DC offset may be not just troublesome, but may seriously impact the performance of the equipment. Naturally, this isn't easily solved if the measurement system has to include DC, because the servo will (attempt to) remove it. However, the ability to use small metallised film caps instead of bulky electrolytics can deliver an overall improvement, and there's no need for a manual 'set zero' control as might be necessary if there is no DC servo system. The use of film caps and high value resistors can easily extend low frequency response to 0.1Hz or less if need be; achieving the same response with conventional coupling/feedback capacitors would demand very large values.
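As a rough illustration of why film caps become practical, the required capacitance for a given corner frequency scales inversely with the resistance. The values below are examples only, not taken from any specific circuit in this article:

```python
import math

# Capacitance needed for a given low-frequency corner: C = 1 / (2*pi*R*f).
def cap_for_corner(r_ohms, f_hz):
    return 1.0 / (2.0 * math.pi * r_ohms * f_hz)

# Illustrative: a 0.1Hz corner with a 1M resistor needs ~1.6uF (easy in film),
# but with 10k it would need ~160uF (electrolytic territory).
c_high_r = cap_for_corner(1e6, 0.1)
c_low_r = cap_for_corner(10e3, 0.1)
print(f"{c_high_r * 1e6:.2f} uF vs {c_low_r * 1e6:.0f} uF")
```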
There are many applications for DC servos in test and measurement, scientific equipment and industrial processes, so it would be unwise to dismiss the technique. The purpose of this article is to ensure that the user understands that the DC servo is not a panacea, but it is a useful tool when applied sensibly. There are many systems in common use that rely heavily on the ability to remove DC offset, and reduce the remainder to a few microvolts at most. This may not be possible in some systems without the addition of an 'offset null' facility (usually a trimpot), which then demands that the presence of any DC be checked, and manually adjusted, before the equipment can be used.
Audio doesn't demand ultra-low DC offset in the majority of cases, and where DC is a problem (such as across pots, which can make them noisy), a capacitor is always the easiest and cheapest option. If the reader happens to believe that caps somehow 'ruin' the sound, I need only remind him/her that the music has already passed through countless capacitors in the recording and equalisation chain before it even gets onto a disc, so the point is moot.
In short, a DC servo uses the extraordinarily high gain (at DC and very low frequencies) and low input DC offset of an opamp to 'negate' any DC that appears at the amplifier's output. Because the circuit uses filters, there's a limit to the low frequency response, and pretty much by definition an amplifier fitted with a DC servo can't amplify DC. Should the DC input be high enough, the opamp will be forced outside of its linear range, meaning its output will be pushed to one or the other supply rail. The final result will not be a happy one.
Because even a 'pedestrian' opamp will have far greater open-loop DC gain than any power amplifier, it can maintain much better control of DC offset than the amplifier by itself. While it's certainly possible to include a DC offset trimpot in an amplifier, a servo will usually do a better, more consistent job of removing residual DC. However, it needs to be designed with care, and tested thoroughly to ensure that it doesn't do anything you wouldn't like (such as oscillate!). Ensuring that you have the optimum topology is critical to ensure unconditional stability. That means no hint of damped oscillations, at all, with any input device (whether DC coupled or not).
There is a persistent myth that using a DC servo means that there is no phase shift at (very) low frequencies, but this is simply untrue. If input and feedback caps are used, the DC offset from most amplifier designs will be well below 50mV, and if both are made larger than normal, it's easy to keep the phase shift below what you'd normally get with a DC servo. Because the capacitors are large, there is very little voltage dropped across them even at the lowest frequency of interest, and therefore there can be very little distortion contributed by the capacitor(s).
The point that's often missed is that if there is next to zero voltage across any component, then it can contribute next to zero distortion. Large value capacitors generally mean that electrolytic caps will be used, but even if the distortion of the cap is (say) 5% and the voltage across the cap is perhaps 1% of the input voltage, the worst case distortion can be 0.05%. I have never measured any (sensible) capacitor with 5% distortion (not even electrolytics with significant AC voltage across them), so the distortion will naturally be lower than the example given.
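The arithmetic is trivial, but worth making explicit:

```python
# Worst-case distortion contribution scales with the fraction of the signal
# voltage that appears across the capacitor.
cap_distortion = 0.05      # 5% - a deliberately pessimistic figure
voltage_fraction = 0.01    # 1% of the signal voltage is across the cap
contributed = cap_distortion * voltage_fraction
print(f"worst case: {contributed * 100:.3f}%")   # 0.050%
```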
A DC servo does pretty much eliminate any DC offset, but for most power amplifiers it's already low enough so as not to cause any problems. A DC servo is a very good idea if an amplifier is driving a transformer, but that's purely to ensure that there is no DC in the transformer winding. The low frequency content must be carefully tailored to ensure that the transformer doesn't saturate, so a low frequency filter must be considered mandatory. The filter will (of course) use capacitors. This particular topic is covered in detail in the article High Voltage Audio Systems, which discusses amplifiers connected to output transformers.
The preferred connection will use a non-inverting servo, because that minimises interaction with the input circuit (especially the input capacitor if used). Consider that a capacitor may be present without you knowing it, depending on the source, and that will create very unwanted interactions if you choose the wrong topology. However, and as noted above, it still comes with caveats, and you need to be aware of the potential interactions. The servo opamp is in effect part of the signal chain, and although its contribution is small, it's not negligible. With care and good design, it can be configured to have minimal effect on the signal while still being able to do its job properly.
The above comments notwithstanding, DC servos are a useful addition where very low DC offset is essential. If you like the idea of close to zero DC output from a power amp, then a DC servo will deliver, but it will not eliminate phase shift, and if not done correctly may increase distortion at low frequencies. As noted earlier, it's essential to check that all operating conditions are well within device capabilities, and that nothing 'bad' can happen if the DC servo dies (yes, opamps can, and do, fail).
References
1. Audio Power Amplifier Design Handbook, Douglas Self, 2012, ISBN 1136123660
2. Simple DC Servos, Wayne Stegall
3. Ask the Doctors: Servos, Dr. Dave Berners (Universal Audio WebZine, Volume 4, Number 9, December 2006)
Interestingly, I received an email from someone who claimed to be the inventor of the DC servo for audio applications, but as it came from a random email address (so my reply bounced) and provided no proof of any kind, I have chosen to ignore the request for attribution. Should the real inventor of the idea be prepared to contact me and provide acceptable proof, then I will include this information.
Elliott Sound Products - Subtractive Crossover Networks
A class of electronic crossover, variously described as a 'derived' or 'subtractive' filter, is hailed by some users as the ideal. They have (apparently) perfect transient response, in that the summed output is not only flat, but a squarewave is also passed intact. This implies that they are the 'Holy Grail' of electronic crossover networks. Almost by definition, no other crossover network should even be considered.
So, are they any good? Why aren't they used everywhere?
These questions are best answered by a full examination of the network, so that all the facts are available.
The general idea of the subtractive crossover is quite simple. If we have a filter, and subtract the filtered signal from the input, the result is a filter with the opposite effect (i.e. a low pass filter is 'derived' from a high pass filter and vice versa). Because of the subtraction process, the two outputs must be perfectly in phase, and their sum must (by definition) have a flat response.
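The subtraction identity can be demonstrated with a few lines of arithmetic on the transfer functions. A minimal sketch using a first-order high pass (for first order, the derived output turns out to be a textbook low pass):

```python
def hp1(f, f0):
    """First-order high pass: H = jf/f0 / (1 + jf/f0)."""
    s = 1j * f / f0
    return s / (1 + s)

f0 = 1000.0
for f in (20.0, 1000.0, 20000.0):
    hp = hp1(f, f0)
    lp = 1 - hp                          # the 'derived' low pass output
    assert abs(hp + lp - 1) < 1e-12      # the two outputs sum exactly to the input
    assert abs(lp - 1 / (1 + 1j * f / f0)) < 1e-12   # and it IS a first-order low pass
print("filter + derived = input, at every frequency")
```

The perfect summation holds for any filter order; what changes at higher orders (as shown later) is the slope and peaking of the derived output.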
There have been many variations on the general theme, some of which are claimed to provide better performance than others. Subtractive filters have been discussed in Elektor magazine [1] and some I have seen are quite complex. While the added complexities may suit a particular arrangement of specific loudspeakers, they generally don't add anything that changes the overall performance.
Figure 1 - Block Diagram of a Derived (Subtractive) Filter
While all circuits shown in this article are configured as shown in Figure 1, the filter itself can be a low pass section. No other changes are needed, but this connection may give rise to performance limitations that at the very least must be classified as undesirable (see below for more information).
In the circuit diagrams below, all buffers are unity gain, and all circuits are driven from a low impedance (voltage) source. This is a requirement for all filters, so the input buffer is not shown for clarity. The voltage source shown is an ideal voltage source - zero ohms output impedance.
Likewise, for clarity, the power supplies are not shown. All the results below are from the SIMetrix simulator, and while somewhat idealised, are representative of reality with any reasonable opamp in a real world circuit - especially within the audio band.
Within this article and for the simulations used to get the graphs shown, the same values were used for filter tuning, regardless of the filter order. While this does change the crossover frequency, as you will see it is actually of little consequence.
There is another method for creating a subtractive filter that cancels out the rather annoying fact that the derived section otherwise always has a 6dB/octave slope. By adding phase shift networks, the derived filter can have the same slope as the main filter. However, as soon as you do that you eliminate the main (supposed) benefit of a subtractive network - it will no longer pass a squarewave without changing the wave shape! In addition, the tolerance of the phase shift networks (all-pass filters) is such that very good component matching is needed or the summed response will not be flat. An example is shown in Section 6.
Note: In the following circuits, I used the same resistance and capacitance for each filter. This means that the crossover frequency changes, depending on the network. This is of no consequence, as the idea is to show the trend rather than complete working designs. It works out that with the values used, the 6dB/octave circuit crosses over at 1kHz, the 12dB/octave at 882Hz and the 24dB/octave at 708Hz. All frequencies are nominal except for the 6dB filter and the reference 24dB/octave Linkwitz-Riley circuit, as they are the only circuits that are well defined. Subtractive crossovers are somewhat 'undefined' because of the overlap region and frequency peak. The frequency is based on the output of the filter, not the 'derived' output.
While there is little point looking at a first order (6dB/octave) network, it is the simplest to examine, and this will make it easier to follow the more complex filters. A first order crossover is already 'phase perfect', so making a subtractive version should give an identical result.
As shown below, this is the case. The only advantage of the subtractive method is that only one reactive element (the capacitor) is used, and this is highly debatable as an 'advantage', especially given the increased overall complexity of the circuit.
Figure 2 - First Order Subtractive Vs. 'Conventional' Crossover Networks
Figure 2 shows the schematic of the subtractive filter, and for comparative purposes, a conventional filter is also shown. A conventional high pass first order filter is used, although a low pass filter can also be used and gives identical overall results. The subtraction circuit is simply a common balanced amplifier, which only amplifies the difference between its two inputs. The frequency and phase responses are shown below, and they are identical to a 'normal' 6dB filter response. The summed output is flat, having no peaks or dips at the crossover frequency. Since a straight line is hardly inspiring to look at, this has not been included for this or any of the graphs that follow.
Figure 3 - First Order Amplitude Response
The amplitude response is as one would expect, and requires little or no further comment. As stated above, this is identical to a conventional first order filter response.
Figure 4 - First Order Phase Response
Phase response again shows the normal behaviour for a first order filter. In all cases in this article, the red curve is for the high pass filter, and the green curve is the low pass. As noted earlier, there is no point using the subtractive method for a 6dB/ octave crossover - the above is by way of example only.
When we look at second and higher orders, we start to see a real difference between the subtractive filter and other more conventional crossover networks. Figure 5 shows the schematic, and it must be admitted that it is a little simpler than a standard Linkwitz-Riley filter (for example). While the difference in complexity is not great, the summed response is better, and unlike nearly all filter networks above first order, it is not only phase coherent, but the summed signal reproduces a perfect squarewave.
It is at this point that some people get excited - any filter that can pass a squarewave must be better than one that cannot, and in truth, virtually no conventional filter above first order can reconstruct a squarewave. The subtractive versions therefore must be better.
As we will see later on, this is not necessarily true, and the ability to reproduce a true squarewave is vastly overrated. Apart from anything else, we rarely listen to any audio signal that even approaches a squarewave, but there are other relevant factors that will wait until the conclusion of this article.
Figure 5 - Second Order Subtractive Crossover Network
Above, we see the schematic for a second order (12dB/ octave) derived crossover. A single second order Butterworth highpass section is used, with the difference amplifier subtracting the filter's response from the input signal. One would think that by doing this, the derived filter would match the rolloff characteristics of the filter, but this is thwarted by phase shift. It is phase shift that causes the derived rolloff slope to remain at 6dB/ octave, and although it is possible to include a phase shift network to equalise the slopes, this will no longer allow the filter to recreate a squarewave, and it will behave the same as any other filter network.
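The stubborn 6dB/octave slope of the derived output can be checked numerically. A minimal sketch, assuming a second-order Butterworth high pass tuned to 1kHz:

```python
import math

def butterworth_hp2(f, f0):
    """Second-order Butterworth high pass (Q = 0.707)."""
    s = 1j * f / f0
    return s * s / (s * s + math.sqrt(2) * s + 1)

f0 = 1000.0

def derived_db(f):
    # The derived low pass is the input minus the high pass output.
    return 20 * math.log10(abs(1 - butterworth_hp2(f, f0)))

octave_drop = derived_db(10 * f0) - derived_db(20 * f0)
print(f"derived slope well above crossover: {octave_drop:.2f} dB/octave")  # ~6, not 12
```

The residual first-order term in the subtracted response (1 - H has a numerator proportional to s, not s²) is what fixes the ultimate slope at 6dB/octave.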
Note: Various magazines (Elektor being one that I know of - thanks to a reader) have published projects that use a combination of a standard subtractive crossover with a phase shift (all pass) network. This does equalise the rolloff slopes, but the network behaves in the same way as a conventional crossover network. These are covered in Section 6 below, and they suffer from high sensitivity to component tolerance. The 'saving' is two capacitors, but you need more resistors and one additional opamp (not much of a saving). The circuit complexity is greater than a conventional filter because the repetition is replaced by a relatively complex phase shift network plus a summing amp, making it more likely that a mistake will be made while wiring the circuit. IMO there is absolutely no benefit, and it is far easier to build a conventional Sallen-Key (Linkwitz-Riley alignment) based filter network such as that shown in Project 09.
Look carefully at the graph below ... as explained above, while the high pass section certainly rolls off at 12dB/ octave, the derived low pass section is indeed only 6dB/ octave. This is one of the greatest disadvantages of the subtractive crossover. The derived filter is always 6dB/ octave, regardless of the rolloff slope of the filter itself. (However, see note above.)
Figure 6 - Second Order Amplitude Response
Potentially of some concern is the peak in the low pass response, just before it starts to roll off. This can be reduced by reducing the Q of the filter. While it is not serious, the expectation is that the tweeter will have sufficient output at this frequency to cancel the acoustic peak, thus restoring flat response. As discussed in greater detail below, this may be wishful thinking.
Figure 7 - Second Order Phase Response
The phase response is shown above. It seems impossible that two outputs with such frequency and phase responses could be summed to a flat response, but they can, and this filter (just like the first order network) can pass a perfect squarewave when the outputs are summed. Likewise, the summed response is completely flat, with no peaks or dips.
It is rather unlikely that the acoustic outputs from the drivers will be able to match an electrical summing network, so it is less likely that the acoustic output will be flat. Electrical and acoustic summing are not the same thing, and although electrical summing is an effective way to find out the ideal response of the system, what happens in reality is likely to be altogether different.
Finally, the circuit diagram below shows a derived 24dB/ octave (fourth order) network. While this should offer the best response, it is in fact the worst of the three shown.
Figure 8 - Fourth Order Subtractive Crossover Network
The amplitude response (below) shows that there is a substantial rise in the response of the low pass section (the derived part of the network). If (and that is a very big ask indeed) the drivers sum as flat as an electrical network, then there isn't much of a problem. It is highly unlikely that the drivers will be able to produce a flat response in reality.
Figure 9 - Fourth Order Amplitude Response
The response peak is 4.3dB, and that represents more than double (x 2.7 in fact) the power applied to the driver over that frequency band. The amount of frequency overlap is (IMO) completely unacceptable, and a system built using this crossover would have to use accurate time alignment. Great care would also be needed to ensure that the polar response of the drivers is very similar over at least 3 octaves across the crossover frequency. The high pass filter shown uses the Linkwitz-Riley alignment, because it has a relatively low Q. A more traditional Butterworth filter (Q = 0.707) increases the amplitude of the peak to over 5dB. To add insult to injury, it doesn't even reduce the overlap!
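These figures are straightforward to check. The sketch below (my own normalised model, not the actual circuit values) derives the low pass from a fourth order Linkwitz-Riley high pass, searches for the response peak, and converts it to a power ratio:

```python
import math

def hp_lr4(w):
    # 4th order Linkwitz-Riley high-pass: two cascaded Q=0.7071 Butterworth sections
    s = 1j * w
    biquad = s * s / (s * s + math.sqrt(2) * s + 1)
    return biquad * biquad

# Peak of the derived (subtractive) low-pass, searched across the crossover region
peak = max(abs(1 - hp_lr4(i / 100)) for i in range(1, 400))
peak_db = 20 * math.log10(peak)       # about +4.3dB, as stated in the text
power_ratio = 10 ** (peak_db / 10)    # about 2.7 times the power at that frequency
```

The peak lands near 1.3 times the crossover frequency, which is exactly where the driver is least happy about receiving extra power.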
Figure 10 - Fourth Order Phase Response
The phase response also shows a peak, but this is of less consequence than the amplitude peak. Subtractive filters are usually not phase coherent. That is to say that the phase of the signal applied to each driver varies, and the two are not in phase across the crossover region. Unless the phase response of the drivers is very predictable (no phase shift from voicecoil inductance or resonances) the two signals can no longer sum flat - even electrically. Acoustic summing will be worse than electrical summing in all cases.
So that everything can be seen in the one article, I have included a schematic of a 24dB/ octave crossover, along with the amplitude and phase response. The first thing you will see is that there is actually little additional complexity. Rather than a complex circuit, it is simply repetitious. This is the simplest of the topologies that will give the desired overall response.
Figure 11 - Fourth Order L-R Filter Schematic
The high pass section is at the top, with the low pass section at the bottom. This is identical to the circuit used in Project 09, which has been a popular project from the time it was first published.
Figure 12 - L-R Amplitude Response
Amplitude response is exactly as we would expect. A nice steep rolloff for both sections, and a clearly defined crossover frequency. Because of the Linkwitz-Riley alignment, the summed output is completely flat (just like the subtractive filters), but without any of the associated problems of excessive overlap. No, it won't pass a squarewave without changing the wave shape, but the summed output still contains every frequency that made up the original squarewave, and testing by ear reveals that it is not always possible to positively identify the squarewave from the modified version. While there is almost always a difference, it's often below the threshold of audibility, and the nature of the difference has more to do with the loudspeakers than the crossover.
The only valid test with something like this is what I call the "walk out of the room" test. You listen to music, a squarewave or some other audio stimulus, then walk out of the room while an assistant either swaps out one network for the other - or not. When you return, you can then decide if there is a difference - or not. Naturally it's important that your assistant maintains a 'poker face' and provides no clue one way or the other. This is a hard test, and you might be surprised how many things you thought you could identify easily magically disappear when you use this method.
Figure 13 - L-R Phase Response
In case you wondered, no, I didn't leave out the high frequency phase response. It is simply overwritten by the low frequency graph - they are perfectly overlaid. That means that the two drivers remain in phase over the entire frequency range. This filter network relies on proper filtering, rather than hoping that the acoustic outputs of the drivers will complete the job that in reality is only half done by a subtractive filter.
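Both properties are easy to confirm numerically. In this sketch (normalised to 1 rad/s - my own model, not the exact Project 09 component values), the summed magnitude of the two fourth order Linkwitz-Riley sections is unity everywhere, and the ratio of high pass to low pass is a positive real number at every frequency, i.e. zero phase difference between the two outputs:

```python
import math

def lr4_pair(w):
    # 4th order Linkwitz-Riley low-pass and high-pass sharing one denominator
    s = 1j * w
    d = (s * s + math.sqrt(2) * s + 1) ** 2
    return 1 / d, s ** 4 / d            # (low-pass, high-pass)

ws = [i / 50 for i in range(1, 500)]    # roughly two decades around crossover

# Summed amplitude is flat everywhere
flat_ok = all(abs(abs(lp + hp) - 1) < 1e-9 for lp, hp in map(lr4_pair, ws))

# hp/lp = (jw)**4 = w**4, which is real and positive, so the two phase
# curves are perfectly overlaid - the drivers stay in phase at all frequencies
in_phase_ok = all(abs((hp / lp).imag) < 1e-6 * abs(hp / lp)
                  for lp, hp in map(lr4_pair, ws))
```

This is why the low and high frequency phase plots sit exactly on top of one another in Figure 13.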
There have been many subtractive/ derived crossover designs that use a phase shift network to make the filter's rolloff symmetrical. This approach certainly works, and gives results that are identical to a 'traditional' filter network. One small point that is rarely mentioned by the authors is component sensitivity - how much the response will deviate from the ideal when component tolerances cause the (mainly capacitor) values to vary a little.
A 'conventional' 4th order filter as shown above can be built using caps that are simply removed from the bag - they do not need to be selected. Measuring and selecting the caps gives a better result, but it's not essential. I've built a great many 24dB/ octave xover networks, and tests have shown that the deviation from ideal is well within normal expectations without having to select the parts. This is not the case with a phase corrected subtractive network! A small variation of one or more values can have a large effect on the overall response, because the final circuit relies on a perfect phase shift to derive the second output.
The circuits that have been published also use more parts overall than a Sallen-Key filter as shown in Project 09, with the majority requiring an additional opamp. This rather defeats the whole purpose, which is to make a crossover network that's less complex and allegedly has 'better' performance. It doesn't happen.
Figure 14 - Phase Corrected Subtractive Crossover
The circuit shown above is fairly typical of a phase corrected subtractive/ derived network. Several versions (virtually identical) have been published by several authors, and it's hard to see why anyone would bother. The necessary phase shift is created by the bandpass filter based on U3, and the summing network (U4) creates an all-pass filter (i.e. phase shift only). Its tuned frequency must exactly equal the -6dB frequency of the main filter network (U1, U2), 707Hz with the values shown. This matches the Figure 11 circuit, which is tuned to the same frequency.
There are more parts overall, an extra opamp, and it has identical frequency and phase response to the circuit shown in Figure 11 (if everything is exact). Some alternatives use a conventional (Sallen-Key) low-pass filter, and derive the high pass. The net result is still the same - greater complexity for no net benefit. There's no symmetry, the circuit is harder to build, and there are quite obviously no advantages.
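The component sensitivity claim can be illustrated with a simple model. The sketch below is not the exact Figure 14 circuit - it assumes the textbook form of the derivation, where the high pass is obtained as (all pass) minus (low pass), everything normalised to 1 rad/s - but it shows the principle. A 3% error in the all pass tuning produces a far larger response error than the same 3% error in one section of a conventional Linkwitz-Riley filter:

```python
import math

RT2 = math.sqrt(2)

def lp4(s):  return 1 / (s * s + RT2 * s + 1) ** 2           # LR4 low-pass
def hp4(s):  return s ** 4 / (s * s + RT2 * s + 1) ** 2      # ideal LR4 high-pass
def ap2(s):  return (s * s - RT2 * s + 1) / (s * s + RT2 * s + 1)  # 2nd-order all-pass

def hp_subtractive(s, err):
    # Phase-corrected subtractive high-pass: all-pass minus low-pass.
    # 'err' models a component-tolerance error in the all-pass tuning frequency.
    return ap2(s / err) - lp4(s)

def hp_conventional(s, err):
    # Conventional LR4 high-pass with the same tuning error in one biquad section
    b = lambda x: x * x / (x * x + RT2 * x + 1)
    return b(s) * b(s / err)

ws = [0.2 * 1.05 ** i for i in range(48)]        # ~0.2 to ~2 rad/s

def worst_error(f, err=1.03):                    # 3% tuning error
    return max(abs(abs(f(1j * w, err)) - abs(hp4(1j * w))) / abs(hp4(1j * w))
               for w in ws)

err_sub = worst_error(hp_subtractive)     # huge: the stopband relies on exact cancellation
err_conv = worst_error(hp_conventional)   # small: the corner frequency just shifts a little
```

In the stopband the subtractive version depends on two nearly equal vectors cancelling, so a small phase error leaves a large residue; the conventional filter merely moves its corner by 3%.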
There is clearly nothing to be gained by using more parts in a circuit that has far greater component sensitivity to produce a circuit that (if all goes well) simply mimics the results obtained from a simple 24dB/ octave high and low pass filter network. The best that can be said for this approach is that it's a flawed concept. At worst, it's just a waste of components.
Just in case you might imagine that the version shown in Figure 14 can pass a squarewave - wrong! Because it has identical frequency and phase response to the standard 24dB/ octave filter, it follows that overall characteristics must also be the same. With any piece of electronics, the frequency and phase response determine what it will do to the incoming signal. If two circuits (however different they may be) have the same response in the frequency domain, their effect on the signal in the time domain must be the same.
If you have guessed by now that I really don't like this approach, then you'd be 100% correct.
The first - and possibly the most important - thing that must be understood is that electrical and acoustical summing are not the same thing. Just because a crossover network sums flat electrically, this does not imply that it must also sum flat acoustically. With subtractive crossovers, the very worst scenario is presented to the drivers, where there is considerable frequency overlap between the adjacent loudspeaker drivers, and unless they have identical polar response over the entire overlap region (and at least an octave either side), the combined acoustic output will be anything but flat. This seems to have been missed by many of the proponents of these filters.
Unlike conventional filters, where higher order sections have less overlap than lower order ones, subtractive networks present the opposite case. The derived section using a 24dB/ octave high pass section has the greatest overlap, and we can see from the above that the 6dB network is actually the best in this respect. Let us simply say that this is less than desirable (note careful use of understatement).
The next issue is the derived filter section's rolloff slope - 6dB/ octave. All the circuits above derived the low pass section, because that gives the greatest protection for tweeters (and midrange drivers) against excessive excursion. However, the midrange (or mid-bass) driver gets a significant boost at the highest frequency it is expected to handle, and this can lead to distortion due to cone breakup. Adding a phase shift network with an additional filter can make the slopes symmetrical, but the resulting circuit has high component sensitivity and uses more parts than an equivalent circuit using 'conventional' filter networks.
Quite a few published circuits over the years have derived the high pass section, and this places extreme demands on the drivers because of the power delivered below the crossover frequency. In addition, there is the peak at the very frequency where it is least desirable - at the lowest frequency the driver is meant to handle. It gets additional power at that frequency, increasing excursion and hence intermodulation distortion. If used for a tweeter, failure is likely because it gets too much power at frequencies it can't handle properly.
Speaking of crossover frequency, it is almost impossible to predict exactly where it is. It is obvious in the first order example, but as the filter order is increased, so too is the overlap region. One might want to use the -3dB frequency of the actual filter as a guide, but that's all it really is - a guide.
So, it should now be obvious that subtractive crossovers are most certainly not the 'Holy Grail', and in my opinion are virtually useless. Increased overlap at crossover may cause excessive beaming because the drivers are working as a mini-array, and the poor rolloff slope of the derived filter section can allow cone breakup (or if reversed, will probably cause excessive intermodulation) - all so that they can reproduce a squarewave. I think not.
The phase shifts caused by conventional crossover networks may seem extreme, but they are generally inaudible. Provided the phase of each driver is controlled and maintained (such as with a Linkwitz-Riley crossover), there are no audible effects. While phase anomalies may be audible if two different speaker systems are operated alongside each other, this is not a problem for home audio systems. The subtractive crossover network still has overall phase shift between drivers, so it doesn't solve that particular problem anyway.
Early in my exploits into electronics I did experiment with the idea, and have done so since as well. The measured results match the simulations pretty much exactly. While there is no doubt that the end result can be acceptable in a non-demanding application and at relatively low power levels, it's simply not good enough for a high grade system. I'd be happy enough to use a subtractive crossover for a background music system if that were the only option (although I'm unsure how that could come about), but it only takes a bit more effort to do the job properly. For the sake of a few more parts (or fewer parts than a phase-corrected subtractive version), you can have a 24dB/ octave filter that works properly.
So, if anyone was ever mildly curious, now you know why I have not (and will not) publish a project based on what I consider to be a seriously flawed design.
Elliott Sound Products - Distortion & Feedback
Let's make something completely clear before we continue. Yes, negative feedback can increase the level of higher order harmonics. Low order harmonic content is reduced, but harmonics that were previously below measurement thresholds may suddenly raise their ugly little heads to annoy and frustrate the designer. This generally only happens when small amounts of feedback are used around amplifiers that have limited gain and often rather poor performance to start with, but there might be exceptions (I've not found any so far). In many cases, while you may see distortion products that didn't exist before feedback was applied, you need to consider their level with respect to the primary signal. In many cases, it will (or should) become apparent that the level of any harmonics you see is below the noise floor, which makes them largely irrelevant.
The point of this article is to show that when properly implemented, negative feedback will invariably reduce distortion to levels that are well below audibility. Not just harmonic distortion, but the much more intrusive intermodulation distortion. If done incorrectly the results can be awful. There are many exciting possibilities that generally employ overly simplified circuitry, often in the mistaken belief that 'simple is better'. Albert Einstein is credited with saying that "Everything should be as simple as possible, but not simpler." Some attempts at amplifiers violate this rule, being either overly complex or too simple to be effective. Neither is useful.
However, it's important to understand that negative feedback does not necessarily reduce all harmonics by the same amount. There are myriad reasons for this, but one of the underlying issues is that amplifiers (or opamps) simply don't have the same gain at all frequencies, with the gain generally falling at high frequencies, so the amount of feedback that's actually applied is not constant. In Class-AB amplifiers, the overall (open loop) gain may also fall at levels where the output transistors are carrying too little current to maintain a worthwhile overall open loop gain for the whole amplifier.
As with many things in electronics, we often make assumptions that don't necessarily hold true for a real-world circuit, and failure to understand this can lead to unexpected outcomes. In the material that follows, I've taken an admittedly 'simplistic' approach, not because I believe that feedback is a 'cure-all', but to make the information easy to understand at a fairly elementary level. Having said that, there is little doubt that the level of performance achieved from modern amplifiers (including opamps) cannot be achieved without the use of feedback. It doesn't matter much whether you like it or not, it's used in almost every circuit that demands linearity.
The last Reference shown is by Bruno Putzeys, and he explains that there is no such thing as "too much feedback". Many people may disagree, but that doesn't change the fact that he's right.
Needless to say, this article seems to have annoyed some people. One who posted anonymously on the ESP forum raised the issue (and even went to the trouble of 'proving' his point) and insists that established wisdom is correct, and therefore I am mistaken. Established wisdom is indeed correct if one approaches the problem the way it has been described (in great detail by Boyk and Sussman [ 5 ] for example). However, this is not the way amplifiers are designed, and is not the way they are normally used. While interesting, the findings are (IMO) rather pointless, because they do not describe a real-world use of the amplifying devices. Using 0.4mV input to a BJT amplifier with little or no feedback is not a normal application in a modern high fidelity system. One place this is sometimes used is with moving coil 'head' amplifiers, and with such low levels feedback can be omitted. However, the gain will change with supply voltage, temperature and (perhaps) whim. Feedback prevents gain variations and makes the circuit usable.
As for the criticisms raised, the first of these is terminology - degeneration vs. feedback. Although it is commonly accepted that emitter (source or cathode) degeneration is feedback, this is only partially true. It reduces gain and raises input impedance (as does negative feedback), but it has no effect on effective bandwidth or output impedance. Harold Black invented negative feedback, not degeneration (which pre-dated his invention). Degeneration is a form of feedback because it injects a portion of the output signal in series with the input (thus improving linearity), however, it provides no error correction facilities.
Harold Black's invention incorporated the error amplifier concept, although the term was not used at the time. It is worthwhile to examine the actual patent (U.S. Patent 2,102,671 filed in 1932, issued in 1937). Prior to Black's invention, a usually tiny amount of negative feedback was used to stabilise amplifiers against oscillation caused by positive feedback - this is more commonly known now as 'neutralisation'. It was (is) applied locally, not globally, and is mainly used with RF amplifiers.
The second criticism is based on the impossible - the perfect square-law device does not exist other than in mathematics. No real amplification device can produce a waveform with only second harmonic distortion. Using a simulation to prove a point and testing with something that does not exist in nature is at best pointless, and proves nothing. This point was covered (but ignored) in the initial version of this article, and obviously requires emphasis.
Of the possible options, using degeneration with a FET or BJT can introduce harmonics that did not appear before degeneration was applied. There have been some exhaustive examinations of this effect [ 5 ], but in general it only occurs at extremely low levels. Once the device is used in a real-world application, the effects generally become insignificant. This is something that has to be physically tested - throwing maths at it to get the result you first thought of is not helpful. The tests described apply to degeneration, not global negative feedback, and are not representative of most modern amplifiers.
Much of this work has been purely theoretical. In practice, any additional harmonics created by degeneration are likely to be below the noise floor, and are of limited significance.
The focus of the article is on 'true' negative feedback, not degeneration. The general principles described for negative feedback are not something I pulled out of my hat - I have seen countless claims that global feedback recirculates the signal (including Cheever [ 4 ], whose 'findings' are suspect at best). The feedback loop recirculates an instantaneous voltage - not the 'signal'. The (true analogue) signal consists of an infinite number of instantaneous voltages, and it is the designer's responsibility to ensure that the loop reacts quickly enough to be able to treat the input signal (at the highest frequency of interest) as an infinite number of instantaneous voltages.
In reality, this will never really be the case, but for the audio range one can come remarkably close. At no time does the 'signal' (assuming a discrete portion of a continuous waveform) pass through the feedback loop, as is often assumed. DIY audio critics have cited square waves, and these are dealt with in the article. Unless slow enough to remain within the amp's bandwidth, of course they will cause problems. Tests, claims or assertions based on irrelevant signals are equally irrelevant - not a difficult concept to grasp I would have thought.
In most cases where additional harmonics are realised by test or simulation, the feedback ratios are very low. That this is unrealistic and rather useless should be obvious, but that is exactly what the person who complained on the ESP forum did to 'prove' his point. The whole idea of negative feedback is that the circuit should have the highest practicable open loop gain. While performing tests where the open loop gain is only marginally higher than the closed loop gain will certainly prove the point (yes, additional harmonics can be produced under some conditions), the end result is not representative of the way that we use feedback. This is as meaningless as demanding that an amplifier should respond perfectly to signals that have components well outside the audio band (fast risetime squarewaves, for example).
The circuit shown in Figure 3 of this article is real. It works exactly as described, and this has been verified by simulation and experiment. This is probably one of the most compelling tests, yet has been ignored because 'conventional wisdom' has been challenged. If you doubt that it can be so, build it! I did, and it does just what I say it does. You don't need to worry about multiple synchronised oscillators, just inject any signal into the second opamp and watch it disappear as the feedback ratio is increased.
Just because something is taught at university or technical college, this does not make it so. I was taught that a common emitter/cathode amplifier had 'medium' output impedance, and common base/grid amps had 'high' output impedance. This was almost universally accepted (and probably still is in some cases), and is simply false. In both cases, the output impedance is the same as the collector/plate resistor - no more, no less. Only by testing, working with the devices and taking careful measurements will you find out what really happens. Relying on maths formulae (regurgitated ad nauseam) or 'common wisdom' is not always the best way to get to the truth.
The whole idea of the article was to debunk some of the more preposterous claims (Cheever, et al), and to stimulate further thought. Posts such as that by the anonymous poster show clearly that further thought has not been stimulated at all, but the same old claims are simply being repeated. Until such time as people look beyond the mantra and examine the situation in real-life, no progress is made. Negative feedback will never make a 'silk purse from a sow's ear' and it's not a panacea that can be used to cover up poor design. It's a tool that when used wisely, does what we expect and improves performance.
Now, you can either go back to what you were doing, or read the article (again), do some experiments (making sure that they represent real life), and then make comments. Nothing is set in stone, but I feel that the details given represent a shift from the way the issue is normally approached - hopefully for the better.
Claims abound regarding how 'bad' negative feedback is, how it ruins the sound, and how zero feedback amplifiers with comparatively vast amounts of distortion sound so much better with music. Entire papers have been written on the topic, new methods described to quantify the audibility of different harmonics, and new measurement techniques are suggested and described ad nauseam.
Of those papers, articles and semi-advertisements, many make completely incorrect assumptions as to how feedback actually functions in an amplifier, and some extrapolate these false assumptions to arrive at a completely nonsensical final outcome. Before continuing, we need to clear up one very important point ...
Feedback does not - repeat does not - cause the signal to travel from the output, back into the inverting input, and continue through the amplifier several (or multiple) times. At any instant in time, only a single voltage level is of interest.
Feel free to re-read that statement as many times as you need to. This is a claim that has been made on numerous occasions, and it is simply false. The whole idea of feedback is that it is as close as possible to instantaneous - feedback is applied to the input of an amplifier in direct proportion to the signal at the output, and for all intents and purposes at exactly the same time. (This means that the amplifier must be fast enough to keep up with the input signal at all times.) Only a voltage exists at any point in time, not a 'signal' and not 'audio', and the feedback works to make the instantaneous output voltage as close as possible to a replica of the instantaneous voltage at the input. With CD (for example) this happens 44,100 times per second, but with analogue it's a continuous process.
Once you have grasped the logic of how feedback actually works (as opposed to the way some people think it works), you are a long way towards understanding that many of the evils attributed to feedback are due to a lack of understanding, and have nothing to do with feedback itself. It has been claimed that applying feedback can actually increase the levels of higher order harmonics [ 1 ], however, this claim does not stand up to scrutiny (at least for any practical application). It is reasonable to expect that measurement errors or flawed assumptions are almost certainly the cause of this 'problem', but some parts of the industry will never let the truth get in the way of a good story. While it is true that in some (rather specific) cases application of feedback (or degeneration) can cause an increase of higher order harmonics [ 5 ], this is not (or should not be) the way the semiconductor (or valve) devices are generally used, so relevance is very limited.
Application of negative feedback (i.e. from output back to input, as opposed to degeneration) on single stage amplifiers with (often very) limited open loop gain and relatively high distortion will reduce the amplitude of low-order harmonics. With the small amount of feedback available and often with limited open-loop bandwidth, such circuits may indeed increase the levels of higher order harmonics. Sometimes they may not do anything of the sort.
However, it must be understood that such a circuit has very poor performance to start with. If a circuit has perhaps 3-5% THD without feedback, and has a gain of maybe 20 times, this cannot be considered a good start. Such a circuit will sound bad whether feedback is used or not - it's immaterial if some higher order harmonics are increased slightly. If you start with a bad circuit, you'll end up with a bad circuit. Feedback cannot (and does not) cure all ills, and expecting it to do so is unrealistic in the extreme. In such cases, it may be better not to use feedback - perhaps zero feedback makes such an amp sound 'less bad'. No amplifier with inherently poor linearity and low gain will ever sound good, even if measured distortion is reduced by adding small amounts of feedback.
For this article, it is expected (at least for the most part) that the circuit we start with has reasonably good linearity, and in particular has sufficient open loop gain at all frequencies of interest and at all signal levels for the feedback to be effective. Adding small amounts of feedback applied to already poor circuits is simply not sensible, and is not generally the way feedback is intended to be used. On occasion, feedback might be added just to reduce output impedance, and while this does work with low gain circuits, it's still comparatively ineffective. Just like distortion reduction, sufficient gain must be available to ensure that the circuit's parameters are determined by the feedback components rather than the amplifying devices.
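To put numbers on this, here is a small static simulation (a deliberately crude model of my own, not any real amplifier). A soft-saturating stage with an open loop gain of 1000 is given enough feedback for a closed loop gain of 20, and its THD is compared with the same stage run open loop at the same output swing. The improvement is roughly the loop gain (1 + A x beta), exactly as feedback theory predicts; all values are illustrative only.

```python
import math

A_OL = 1000.0       # open-loop gain of a deliberately non-linear stage
BETA = 0.05         # feedback fraction -> ideal closed-loop gain of 20

def stage(e):
    # Soft-saturating amplifier: incremental gain falls as the signal grows
    return math.tanh(A_OL * e)

def closed_loop(x):
    # Solve y = stage(x - BETA*y) by bisection (static analysis, one sample at a time)
    lo, hi = -1.5, 1.5
    for _ in range(60):
        mid = (lo + hi) / 2
        if stage(x - BETA * mid) - mid > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

N = 2048
def thd(samples):
    # Harmonic magnitudes by direct correlation; one fundamental period = N samples
    mags = []
    for k in range(1, 8):
        re = sum(s * math.cos(2 * math.pi * k * n / N) for n, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * n / N) for n, s in enumerate(samples))
        mags.append(math.hypot(re, im) * 2 / N)
    return math.sqrt(sum(m * m for m in mags[1:])) / mags[0]

# Input amplitudes chosen so both runs swing to about +/-0.4 at the output
open_out = [stage(0.0004236 * math.sin(2 * math.pi * n / N)) for n in range(N)]
closed_out = [closed_loop(0.0204 * math.sin(2 * math.pi * n / N)) for n in range(N)]

thd_open, thd_closed = thd(open_out), thd(closed_out)
improvement = thd_open / thd_closed     # roughly the loop gain, 1 + A_OL*BETA
```

The open loop stage measures well over 1% THD; closing the loop drops it by more than an order of magnitude without touching the stage itself.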
When low gain circuits are used, applying feedback does not reduce the gain or output impedance by the expected amount. Gain is not a simple ratio defined by a pair of resistors, but becomes a complex interaction between the amplifying device and the feedback ratio.
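The standard feedback relationship makes the point directly. With a feedback fraction (beta) of 0.05 - an ideal closed loop gain of 20 - a high gain amplifier lands almost exactly on the resistor-defined value, while a gain-of-100 stage falls well short, because the device (not the feedback network) still dominates. The figures below are illustrative only:

```python
def closed_loop_gain(a_ol, beta=0.05):
    # Acl = A / (1 + A*beta): with large A this approaches 1/beta
    return a_ol / (1 + a_ol * beta)

high_gain = closed_loop_gain(100_000)   # ~19.996, within 0.02% of the ideal 20
low_gain = closed_loop_gain(100)        # ~16.7, an error of roughly 17%
```

Only when the open loop gain is vastly greater than the closed loop gain do the feedback components truly set the circuit's parameters.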
For the majority of the tests described, the effects were simulated rather than measured. There are some very good reasons for this, with the primary reason being that the simulator has access to 'ideal' amplifiers. These have infinite bandwidth, infinite input impedance, zero distortion and zero output impedance. Being perfect, they also contribute zero noise. This enables one to perform experiments that simply cannot be done in the real world, and provide a level of accuracy that is also unattainable using real circuits. Likewise, the signal sources have zero distortion, so resolution exceeds anything attainable using actual circuitry. In addition, the simulator allows multiple different tests that are very time-consuming to perform on the test bench, and require all circuits to be built and analysed.
It is useful to understand what distortion is, and how it is produced. The generation of harmonics is not a weird function of a valve, transistor or MOSFET, but is a physics phenomenon that occurs whenever a waveform is not a pure sinewave. A pure tone contains only one frequency - the fundamental. By definition, this pure tone is a sinewave - no other waveform satisfies the criterion for purity. As soon as a sinewave is modified, the waveform that now exists is created by adding harmonics. Likewise, anything that adds harmonics changes the waveform - the two are inextricably intertwined. Amplifying devices do not add harmonics per se! Amplifying devices modify the waveshape, and this requires that harmonics are added to create the 'new' waveform. The creation of harmonics is a physics requirement, and has nothing (directly) to do with the type of device that caused the modification to the waveform. Devices with high linearity modify the sinewave less than devices with lower linearity, so fewer harmonics are created in the process.
+ +Because the sinewave is a pure tone, it has long been used as a measure of the amount of non-linearity for amplifying devices. Even very small wave shape modifications can cause a large amount of distortion (and hence harmonic generation), and it is for this reason that sinewave THD (total harmonic distortion) tests are still used. Despite many claims to the contrary, a sinewave is not an 'easy' test - quite the reverse. Less than 1% distortion of a sinewave is easily heard (depending on the exact type of distortion), and it may be completely inaudible with some music or barely audible with others. Any device that amplifies will also distort, and the purity or otherwise of the output signal shows non-linearities very clearly. Interpretation of the test results does take some background knowledge though, and simply quoting a percentage with no qualifying parameters is completely useless.
+ +Strictly speaking, simply turning a sinewave on or off causes distortion, because a truly pure tone is not only without harmonics, but has existed (and will continue to exist) for eternity. While this is real, no-one will ever take it to that extreme. If you doubt that this can be so, try measuring the distortion of a sinewave that's been fed through a tone burst generator (such as Project 143). Even with a perfect sinewave, the distortion will be over 5% THD (10 cycles on, 10 cycles off). The spectrum contains frequencies that are directly related to the switching frequency (on and off timing, in this case, 50Hz).
Because a non-linear device modifies the waveshape, and the modified waveshape is what contains the harmonics, it should be obvious that the amplifying device does not generate the harmonics directly - it only modifies the waveshape. The harmonics are the result of the modified waveform - nothing more. To explain how a device modifies the waveform it is necessary only to look at the device's transfer function, and understand the process of amplification.
Amplification is an (almost) instantaneous process. An amplifier does not 'see' a complex waveform any more than we can experience all of last week simultaneously. As the Compact Disc medium has demonstrated, time can be separated into discrete fragments, and digital data can be derived that describes the instantaneous voltage at that point in time. This process is repeated 44,100 times each second. Compared to an analogue amplifier, this is very slow. The analogue domain does not use time fragments - all processing is done on a continuous basis - but the amplifier is only capable of processing one instantaneous voltage level at any one time, and that's all it needs to do. The input voltage is a moving target, and the output signal follows it as closely as possible.
If an amplifying device has a gain of 10 when its (instantaneous) input voltage is 100mV, the output voltage will be 1V. If the device is non-linear, the gain may fall to 9.5 when the input voltage is 1V, so the output will be 9.5V instead of 10V. This is distortion! That's it! The amplifying device does nothing more than change its gain slightly depending on the amplitude of the voltage or current it has to deal with at any instant. It doesn't matter what device is used to create the non-linearity - bipolar transistors, junction FETs, MOSFETs and valves (vacuum tubes) are all non-linear, although in subtly different ways.
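This can be demonstrated numerically. The sketch below is my own illustration, not from the article: it applies a hypothetical level-dependent gain (10 at very low levels, drooping at the rate implied by the 10-at-100mV / 9.5-at-1V example) to a pure sinewave, and the FFT shows harmonics at the output that were not present at the input.

```python
import numpy as np

fs, f0, n = 48000, 1000, 4800            # chosen so f0 lands exactly on FFT bin 100
t = np.arange(n) / fs
vin = 0.1 * np.sin(2 * np.pi * f0 * t)   # 100mV peak test tone

# Hypothetical non-linear transfer: gain 10 nominal, falling slightly with
# instantaneous level (0.5 gain units per volt - an assumption for illustration)
gain = 10 - 0.5 * np.abs(vin)
vout = gain * vin

spec = np.abs(np.fft.rfft(vout)) / (n / 2)        # peak amplitude per bin
fund = spec[100]                                  # 1kHz fundamental
harm = spec[200:1000:100]                         # 2kHz .. 9kHz harmonic bins
thd = np.sqrt(np.sum(harm ** 2)) / fund
print(f"THD = {thd * 100:.3f}%")                  # non-zero: harmonics were created
```

Even this tiny gain variation (10 down to 9.95 over a 100mV swing) produces measurable distortion - the waveshape change and the harmonics are the same thing.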
Intermodulation distortion (IMD) is another very interesting (and far more intrusive) effect of non-linear circuits. While this is covered in some detail below, it's still worth noting that this is another physical phenomenon. It doesn't matter if the non-linearity is caused by a transistor, valve, diode or corroded wires twisted together - the effect is the same for a given degree of non-linearity. Wherever there is harmonic distortion, there is also intermodulation distortion. The two cannot be separated, and if harmonic distortion is reduced, so too is intermodulation distortion (and of course, vice versa).
Of the forms of distortion that might be discussed, intermodulation is by far the worst. There simply is no 'nice' sounding intermodulation distortion, regardless of the topology of the amplifier. In very small amounts, and with some programme material, some listeners may nonetheless like the effect, as it imparts a 'wall of sound'. High levels of IMD just sound dreadful with any recorded or reinforced music source.
Let's look at a common bipolar transistor as an example. The primary (but by no means only) form of distortion is caused by the internal emitter resistance of the transistor. Figure 1 shows a simple single transistor amplifier. A bias resistor is shown - it must be pointed out that this biasing method is never used in practice, because it is too dependent on device gain, temperature and supply voltage. Proper biasing that allows for thermal effects, device parameter spread, etc. is beyond the scope of this article.
Figure 1 - Basic Single Transistor Amplifier
This is a very basic amplifier, but it embodies all the issues that face other amplifying devices as well - valves, JFETs and MOSFETs all have similar non-linearities, but for different reasons. It just happens that with a transistor the effect is easy to describe in simple terms. The output waveform is also shown, and distortion measures 12%, being second (-18.5dB), third (-52dB) and fourth (-56dB) harmonics. All others are more than 90dB below the fundamental. It is generally taken that ...

    re = 26 / Ie    where re is the internal emitter resistance (ohms) and Ie is the emitter current (mA)
The gain is determined by the ratio of the collector resistance to the total emitter resistance, and is approximately ...

    Av = Rc / ( Re + re )    where Av is voltage amplification, Rc is collector resistance, Re is external emitter resistance, and re is as above
Re (the external emitter resistance) has not been included in the circuit of Figure 1, which has a gain of about 390. As we shall see, this varies over the output voltage range, so the measured value gives a false impression because of waveform modifications. Table 1 shows how much the circuit of Figure 1 will vary the emitter current and hence the (theoretical) gain, depending on signal level. The base current has been ignored, but this also has an influence - albeit rather small.
Vc (Volts) | Ie (mA) | re (Ohms) | Voltage Gain |
29 | 1 | 26 | 38 |
25 | 5 | 5.20 | 192 |
21 | 9 | 2.89 | 346 |
17 | 13 | 2.00 | 500 |
13 | 17 | 1.53 | 654 |
9 | 21 | 1.24 | 807 |
5 | 25 | 1.04 | 962 |
1 | 29 | 0.89 | 1115 |
You can see from the table how the waveform of Figure 1 comes about. When the collector voltage is high, the current and gain are lower, and the waveform is flattened. When the collector voltage is low, the current and gain are much higher, so the waveform becomes elongated. As is obvious, the gain varies over a wide range, and any voltage waveform applied to the base must become distorted. Transistors show a logarithmic response when the base to emitter junction is driven from a voltage source, and table 1 shows this effect quite clearly.
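Table 1 can be reproduced directly from the two formulas above. The only assumptions are Rc = 1k (stated later in the text, where the '1k resistor' is replaced by a current source) and a 30V supply, which is implied by the tabulated currents.

```python
# Reproducing Table 1 from re = 26 / Ie (mA) and Av = Rc / re (Re = 0 here).
rc = 1000.0                                # collector resistor, ohms (from the text)
vcc = 30.0                                 # supply voltage, volts (assumed from the table)

for vc in range(29, 0, -4):                # collector voltages, as tabulated
    ie_ma = (vcc - vc) / (rc / 1000.0)     # emitter (~ collector) current, mA
    re_int = 26.0 / ie_ma                  # internal emitter resistance, ohms
    av = rc / re_int                       # voltage gain with no external Re
    print(f"Vc={vc:2d}V  Ie={ie_ma:4.0f}mA  re={re_int:5.2f} ohms  Av={av:4.0f}")
```

The printed values match the table to within a count or two of rounding, and make the gain variation over the output swing (roughly 38 to 1115) very obvious.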
Because the transfer function is non-linear, it must alter the wave shape. If the wave shape is altered, harmonics are produced. To reduce distortion (of all forms), the application of negative feedback will make the amplifier more linear, and this results in fewer harmonics. There is no mystery and no magic. It doesn't matter if the feedback is global (applied around a complete circuit) or local (applied to each device individually). In general, global feedback gives better results than local feedback, but only if the amplifier has high open loop gain (i.e. gain without feedback).
Prior to adding feedback, it is advantageous to improve the circuit's linearity by other means if possible. Since the gain of a transistor varies widely with emitter current, maintaining a constant current (via the collector) will help. Since transistors are current controlled, using a variable current for the input will also help - distortion can be halved by this alone, but voltage gain is reduced. In the case of the above circuit, using a 15mA constant current source instead of the 1k resistor increases the voltage gain to 3227, and reduces distortion to 4% - using current input (via a series resistor) reduces gain, but also reduces distortion even further.
The additional gain from the use of a current source load allows us to apply feedback - if the gain is set at 400 (close enough to the 390 measured before), distortion is reduced to 0.7%. The second harmonic is now -43dB, the third is -70dB and the fourth is at -95dB (all with respect to the fundamental). Compare these figures with those obtained for the circuit as shown - no comparison! This is covered in more detail in Section 5.
Alternatively, Re (the external emitter resistance) can be added to create 'local feedback'. By adding an external resistor, we actually do nothing more than (partially) swamp the variation of re with emitter current. While this makes the circuit more linear, it is not really feedback at all - the correct term is degeneration. Gain variation (and hence distortion) is reduced because Re + re is much greater than re alone, and base current is also more linear, but one of the benefits that feedback (as opposed to emitter degeneration) gives is reduced output impedance. Emitter, cathode or source degeneration does not lower output impedance.
There is a great deal of information that was compiled a long time ago that seems to have been forgotten, dismissed, or simply neglected. Of particular interest is the section on distortion in the Radiotron Designer's Handbook [ 2 ]. Since some (many) of the detractors of negative feedback advocate single ended triode operation, one would expect that they would have examined what was considered 'high fidelity' back in 1957, rather than claim that amplifiers that were considered low fidelity back then represent high fidelity today. This is not a tenable position!
Of some interest is a table of harmonics based on a fundamental of C - taken for convenience as 250Hz. The table is reproduced below. It shows the musical relationship of each harmonic up to the 25th with respect to the fundamental, based on the natural or just musical scale (as opposed to the equally tempered scale that is used for most instrument tuning).
Harmonic | Frequency | Note | Comment |
1st | 250 | C | Fundamental |
2nd | 500 | C1 | |
3rd | 750 | G | |
4th | 1000 | C2 | |
5th | 1250 | E | |
6th | 1500 | G | |
7th | 1750 | - | Dissonant |
8th | 2000 | C3 | |
9th | 2250 | D | |
10th | 2500 | E | |
11th | 2750 | - | Dissonant |
12th | 3000 | G | |
13th | 3250 | - | Dissonant |
14th | 3500 | - | Dissonant |
15th | 3750 | B | |
16th | 4000 | C4 | |
17th | 4250 | - | Dissonant |
18th | 4500 | D | |
19th | 4750 | - | Dissonant |
20th | 5000 | E | |
21st | 5250 | - | Dissonant |
22nd | 5500 | - | Dissonant |
23rd | 5750 | - | Dissonant |
24th | 6000 | G | |
25th | 6250 | G# | Dissonant |
Obviously, harmonic distortion that extends to the 7th or beyond is to be avoided. It is (or was) well known to guitar amp manufacturers that the seventh harmonic and above should not be reproduced if possible (even during overdrive conditions) because of just this issue - discordant (or dissonant) harmonics simply don't sound nice.
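The note assignments in the table above can be generated mechanically: fold each harmonic down into a single octave and look the resulting ratio up against the just scale. The sketch below is my own illustration - the ratio table is a standard just-intonation major scale, which is an assumption on my part (the article only says 'natural or just').

```python
from fractions import Fraction

# Just-intonation ratios for the major scale on C (assumed, for illustration)
just = {Fraction(1, 1): 'C', Fraction(9, 8): 'D', Fraction(5, 4): 'E',
        Fraction(4, 3): 'F', Fraction(3, 2): 'G', Fraction(5, 3): 'A',
        Fraction(15, 8): 'B'}

for n in range(1, 26):
    ratio = Fraction(n, 1)
    while ratio >= 2:              # fold down into one octave
        ratio /= 2
    note = just.get(ratio, 'dissonant')
    print(f"{n:2d}  {n * 250:5d} Hz  {note}")
```

The 7th, 11th, 13th, 14th and so on come out 'dissonant' exactly as in the table; the 25th (25/16, the table's G#) also falls outside the scale, matching its 'Dissonant' comment.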
Another table shows the levels of distortion that were considered objectionable, tolerable and perceptible for various frequency limits and triode or pentode valves. This table is also reproduced, but I have only included the 15kHz bandwidth results - other bandwidths were listed, but no-one would consider a bandwidth of 3,750Hz acceptable these days.
Source | Mode | Distortion | Comment |
Music | Triode | 2.5% | Objectionable |
Music | Pentode | 2.0% | Objectionable |
Speech | Triode | 4.4% | Objectionable |
Speech | Pentode | 3.0% | Objectionable |
Music | Triode | 1.8% | Tolerable |
Music | Pentode | 1.35% | Tolerable |
Speech | Triode | 2.8% | Tolerable |
Speech | Pentode | 1.9% | Tolerable |
Music | Triode | 0.75% | Perceptible |
Music | Pentode | 0.7% | Perceptible |
Speech | Triode | 0.9% | Perceptible |
Speech | Pentode | 0.9% | Perceptible |
These figures are interesting compared to amplifiers of today. Both triode and pentode amplifiers used in the test had an output of 3W, and the tests were conducted in a 'typical' listening environment. While modern (competent) transistor amps will invariably beat the distortion criteria by a wide margin (at any level or frequency), some modern SET amps seem to be considerably worse than one would hope, many having distortion that rates as objectionable - and this table was compiled a very long time ago indeed.
For those who have access to the complete text of the Designer's Handbook (or at least Chapter 14 which concentrates on fidelity and distortion) I strongly recommend that it be read in its entirety. There is a great deal more to it than I have the space to reproduce here, and the fundamental principles have not really changed, despite the passing of the decades since it was written.
There is an informative section covering intermodulation distortion, in which it is pointed out that there is no direct correlation between THD and IMD. It is also pointed out that no actual amplifier has only second or third harmonic distortion - every form of distortion is accompanied by multiple harmonics, although either even or odd harmonics can be the most dominant. Note that it is now possible to build a circuit where odd-order harmonics are several orders of magnitude greater than even-order harmonics, and for all intents and purposes there are no even-order harmonics present. This wasn't possible when the book was written.
Negative feedback (or just feedback) has been used for many years to linearise amplifiers. Between 1935 and 1937, Harold Black of AT&T received three U.S. patents relating to his work on the problem of reducing distortion in amplifiers by means of negative feedback. The invention caused little controversy for many years, but eventually this happy situation had to end - at least in the hi-fi industry. Feedback is used extensively in medical, military, aerospace and industrial applications and seems not to cause any problems there, despite its bad reputation amongst some audiophiles.
Although many of the early attempts were less than perfect, it must be understood that the results without the feedback would have been many times worse. Negative feedback cannot make a dreadful amplifier sound good, but may make it sound acceptable. There is no possibility that the use of feedback will make a good amplifier sound bad. Not only are distortion components reduced, but negative feedback also increases the input impedance, reduces output impedance, and linearises frequency response. It is not a panacea, but it does come very close.
So, let us examine what feedback really does. Figure 2 shows the basics of a gain block - in this case, an operational amplifier (opamp). It may be comprised of any number of devices, and the active components can be valves (tubes), transistors, FETs, MOSFETs or any combination thereof. The gain block will be assumed to have infinite gain and infinite bandwidth for the initial analysis - we all know this is not possible, but it makes understanding the principle easier.
Figure 2 - Basic Feedback Analysis Circuit
An amplifier (power amplifier of conventional topology, opamp, etc.) consists of three discrete stages. These are ...

   1.  The error amplifier (input stage), which compares the input signal against the feedback signal
   2.  The voltage amplifier stage (VAS), which provides most of the voltage gain
   3.  The current amplifier (output stage), which provides the current needed to drive the load

Each of these may be as simple or complex as desired or needed, and each can use a different technology. The functions of each stage are (or will become) self explanatory, and a quick look at any of the project amplifiers (e.g. P101, P3A, etc.) will show that the same basic stages are used in most amplifiers.
If you have read the article Designing With Opamps, you will know the two rules of opamps (a typical semiconductor power amplifier may be thought of as an opamp for all intents and purposes). These rules are ...

   1.  An opamp will attempt to make both inputs exactly the same voltage (via the feedback path)
   2.  If it cannot do so, the output will swing towards one or the other supply rail

In any linear circuit, rule #2 is inapplicable unless there is a fault or overload condition, so only rule #1 needs be considered for this discussion. As shown below, a voltage of 1V is applied to the non-inverting input - the normal input for an audio amplifier. I will state at the outset that only one thing is important - the value of the voltage presented. We need not concern ourselves with frequency - indeed, time is utterly inconsequential (at least for a basic theoretical discussion).
Referring to the practical circuit shown in Figure 9, in order to fulfil rule #1, the amplifier's output voltage must be exactly 11V. This assumes that the open loop gain (without feedback) is at least 100 times greater (but preferably more) than the desired gain with feedback. The figure of 11 is simply derived from the voltage divider formula ...
    Vout = Vin × ( R1 / R2 + 1 )    where Vin is the voltage at the non-inverting input and Vout is the voltage at the amplifier output

Therefore, at the inverting input we should measure ...

    V-in = Vout / ( R1 / R2 + 1 ) = 11 / ( 10k / 1k + 1 ) = 11 / 11 = 1V
The first rule is satisfied, and the system is stable. The error amplifier is the critical element here. If the input voltage changes, the error amplifier simply detects that its two inputs are no longer the same, so commands the VAS to correct the output until equilibrium is restored. This is not an iterative process, which is to say that the amplifier does not keep feeding the input signal (meaning a significant part of the input waveform) into the inverting input to be re-amplified, re-distorted and re-compared. This is where some of those who criticise negative feedback have made their first error.
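The divider arithmetic is trivial to verify numerically. R1 = 10k and R2 = 1k are assumed here from the stated gain of 11.

```python
# Feedback divider check: R1 from output to inverting input, R2 to ground.
r1, r2 = 10_000, 1_000

closed_loop_gain = r1 / r2 + 1        # non-inverting gain set by the divider
vout = 11.0                           # output required for a 1V input
v_inv = vout * r2 / (r1 + r2)         # voltage fed back to the inverting input

print(closed_loop_gain)               # → 11.0
print(v_inv)                          # → 1.0  (equal to the input: rule #1 satisfied)
```

With the inverting input sitting at exactly the input voltage, the error amplifier has nothing left to correct - the loop is at equilibrium.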
The output of the amplifier simply keeps changing in the appropriate direction until the error amp detects that the voltages are again identical, at which point the output of the error amp ideally just stops where it is, and so does the rest of the chain. In reality, there will be a small amount of instantaneous correction as the two voltages approach equality, but this must happen much faster than the input signal can change with normal programme material.
The fact that the correction is usually done well before the input voltage has even changed significantly clearly means that no part of the feedback signal is fed through the amplifier over and over again - that just doesn't happen. In our ideal device, the change is instant; in a real device it is possible to measure the time it takes for the correction to be made. For an audio amplifier, the correction must be completed faster than the highest frequency of interest can change - how much faster is open to some conjecture, and that will be looked at later in this article.
All amplifying devices have some distortion. Desirable though it may be, a distortion free amplifier doesn't exist - other than in a simulator. Some opamps come very close (with feedback), but inherent non-linearities within the amplification chain are inevitable. Without feedback, the distortion components tend to be low order (i.e. second, third, fourth, etc.), with diminishing amplitudes as the order increases. The application of negative feedback reduces the amplitude of these harmonics (hence the term harmonic distortion), in direct proportion to the amount of feedback applied.
A common claim is that, because the feedback signal is re-amplified, the distortion components are subjected to additional distortion. This supposedly creates high order harmonics that did not exist as a result of the original distortion mechanism in the amplifier. Since the feedback acts as an ultra-high-speed servo system, it is difficult to imagine why it is assumed that high-order harmonics are 'generated'. They are not generated at all, but simply become more easily measured because all the lower harmonic clutter is removed (in part at least).
However, if simple (single amplifying device) amplifiers are analysed carefully, it will be found that additional harmonics are generated when feedback is applied. The issue is generally that only a small amount of feedback can be used because the device gain is not high enough to allow more, and it's often 'degeneration' (using a resistor in the cathode/ source/ emitter circuit) rather than global feedback. This is a fairly complex area, and because such simple amplifying stages have largely fallen from favour, I don't propose to go into any detail on this. It usually doesn't happen with high gain circuits such as opamps or power amplifiers unless the designer does something unwise.
Also notable is that any signal that is created within the feedback loop (most commonly noise) is also cancelled (within the open-loop gain constraints of the amplifier) by global feedback. Because noise (or distortion) generates signals that did not exist at the input, the error amplifier 'sees' any such extraneous signal as a deviation from the input signal, and cancels it to the best of its ability. Note that input device noise is not cancelled, because the error amplifier cannot differentiate between noise it has created and the input signal.
That this works was amply demonstrated many years ago when the only cheap opamp was the venerable uA741 and a few others of similar noise performance. These are (still) notoriously noisy, so many designers added an external input stage using low noise transistors. This addition reduced the noise to acceptable levels, even for sensitive high-gain amplifiers as used for phono preamps and tape head amplifiers. The external transistors formed the error amplifier, and being low noise types were able to cancel out much of the opamp's internally generated noise - the additional gain also improved distortion performance.
This ability of the feedback loop to cancel internally generated signals (be it noise or distortion products) is so critical to your understanding of feedback that I have included a circuit and simulation results. These probably show more clearly than any other method how feedback works to remove anything that is not in the original input signal, by using the error amplifier to correct the output by applying an 'anti-distortion' component to the amplification stages within the feedback loop.
Figure 3 - Injection of Harmonics Into Feedback Loop
All signal sources have the frequency indicated, and all are set for an output of 1V peak (707mV RMS). In a simulator there is no frequency drift, so the distortion waveform remains stable - this test can be run easily with real opamps, but attempting any harmonic relationship is pointless because the frequencies will drift. If you have access to synchronised oscillators it's not a problem, but I don't, and I doubt many others will either.
Figure 4 - Output Waveforms vs. Open Loop Gain
The first waveform is with VCVS1 set for unity gain. There is some degeneration, but no feedback as such. If the feedback loop is disconnected, the waveform remains the same, but at a slightly higher amplitude. As the gain of VCVS1 (the error amplifier - the only stage whose gain is changed) is increased, the distortion is reduced in direct proportion to the error amplifier's gain. There is no point reproducing a spectrum for this test, as the relationships are fixed by the 2, 3 and 4kHz signal sources. Only the total amplitude of the 'harmonics' is reduced with respect to the fundamental.
Although the circuit shown is configured as a unity gain buffer, adding feedback resistors to give the circuit gain makes no difference to its ability to remove the injected harmonics. To verify this, the error amp was set to a gain of 10, and the gain of the whole stage was increased to 10 by means of a 9k resistor from output to inverting input, and 1k from inverting input to ground. There was a significant gain error (Av = 5 rather than 10 as set by the resistors), but the rejection of the extraneous signals was just as effective.
Likewise when the error amp's gain was 100 (Av = 9.09) and 1000 (Av = 9.9). This is normal behaviour for an opamp - the open loop gain ideally needs to be 1,000 times greater than the required gain to achieve gain accuracy of 0.1%. While interesting and useful to know, that is not relevant to this article.
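The gain errors quoted (Av = 5, 9.09 and 9.9 for error-amp gains of 10, 100 and 1000) follow directly from the standard closed-loop gain formula Acl = A / (1 + A·β), with the feedback fraction β = 1/10 for a nominal gain of 10:

```python
# Closed-loop gain versus open-loop (error amplifier) gain for beta = 1/10
beta = 0.1

for a in (10, 100, 1000):
    acl = a / (1 + a * beta)
    print(f"error-amp gain {a:5d} -> closed-loop gain {acl:.2f}")
# gain 10 -> 5.00, gain 100 -> 9.09, gain 1000 -> 9.90 (matching the text)
```

As the open-loop gain rises, the closed-loop gain converges on 1/β = 10, which is why roughly 1,000 times excess gain is needed for 0.1% gain accuracy.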
The above circuit will work with opamps too. Voltage controlled voltage sources are convenient in the simulator because their gain can be changed where one has no control over the open loop gain of an opamp, and some changes are needed to make a 'real' opamp work. However, the same distortion reduction is clearly evident - this has been tested and verified using real opamps.
Sorry, but yes. A negative feedback system may be thought of as a servo, but that won't help anyone who is not familiar with servos. A toilet cistern is another matter - everyone has seen one, although not everyone has looked inside. I encourage you to do so. The cistern is a good example of a simple negative feedback system. Unlike an amplifier (which is bipolar - it can generate positive and negative output voltages), a cistern is more like a regulated power supply - these also use negative feedback to maintain a stable voltage.

When water is let out of a cistern, the water level falls, and this in turn opens a valve. The water is replaced until such time as the level is restored to its original preset level. If water is allowed to escape at a low but variable rate, the float valve (ball cock) will regulate the water level (more or less) perfectly (Note 1), maintaining the same level even as you allow more or less water to escape. This is a simple example of negative feedback at work in your bathroom. For expedience, I have neglected the uncertainties of the mechanical linkages and valves (as well as the inertia of the water itself), but you knew that already.
Figure 5 - Water Analogy of Feedback System
Should the water be allowed to escape faster than it can be replenished, the system is in an overload condition. This is no different from an amplifier where the input signal changes faster than the output can follow - the system cannot keep up, so the output is 'distorted'. I am unsure if this will help, but if it does improve your understanding of negative feedback, then it was worth it.
Note 1 - In any such case (whether water or electrons), the accuracy/ regulation of the system depends on the loop gain of the feedback system used. There is always a requirement for stability, and that affects the high frequency performance because high gain at high frequencies may cause instability. So, it's not 'perfect', but can be made to be vanishingly close if the system has enough gain.
For those in Australia, be aware that the above analogy cannot be used because our water reserves are too small to allow the luxury of playing with water. We will just have to imagine that it works.
Establishing that the output signal is not re-amplified over and over again instantly removes one of the criticisms of negative feedback - that it creates frequencies that didn't exist before feedback was added (at least for high gain circuits with global feedback). Since there is no re-amplification of the signal, there will normally be no new frequencies created, other than the distortion of the waveform caused by device non-linearity.

Figure 6 shows a simulation circuit, using a diode to create distortion [ 3 ]. Note that the voltage across the diode is dramatically reduced - it's less than 5mV RMS because the diode is conducting, and the VCVS with a gain of 300 is used only to restore the level. The distorted signal is enclosed within the feedback loop (Feedback) of a pair of VCVS (voltage controlled voltage sources - 'perfect' amplifiers in the world of the simulator). A second circuit (Open Loop) applies the same distortion, but simply amplifies the distorted signal to obtain the same RMS voltage. C1 and C2 provide DC blocking to remove the diode's forward voltage.
Figure 6 - Distortion Analysis Circuits
The applied input signal is 2V peak at 200Hz + 500mV peak at 7kHz, so we can see both harmonic and intermodulation products as generated by the non-linear element - a forward biased diode, passing ~15mA. This attenuates the signal greatly, and applies a controlled amount of distortion, measuring at 8.5% for a single frequency. In each case (feedback and open loop) the input voltage to the distortion cell was maintained at as close as practicable to the same level, although quite wide variations do not cause significant changes to the distortion level.
Figure 7 - Distortion Analysis Spectra (Red = Feedback, Green = Open Loop)
Looking closely at the FFT analysis of both the feedback and open loop circuits shows clearly that the distortion is reduced by the application of negative feedback. There is no evidence that any individual harmonic frequency is at a greater amplitude when feedback is applied, but you can see some signals that are not affected either way - these are simulation artifacts, and should be ignored. Note that the base level is -240dBV - this can never be achieved in reality, so you can ignore any value below -120dBV. Even this is rather adventurous, and -100dBV is more realistic.
Note the peaks at and around 14kHz, 21kHz, 28kHz and 35kHz. These are highly affected by feedback because they are harmonics and intermodulation products of the 200Hz and 7kHz input frequencies, and are virtually eliminated by applying feedback.
The spikes at 26.92kHz and 40.92kHz are not affected, because these are artifacts of the sampling rate (a simulator works in a manner similar to any digital system, and uses sampling to convert the 'analogue' signal into digital for processing).
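The mechanism demonstrated in Figures 6 and 7 can also be sketched numerically. This is not the article's simulation - the quadratic 'distortion cell', the error-amplifier gain of 100 and the relaxation solver below are my own assumptions - but it shows the same result: the non-linearity enclosed in the loop is strongly suppressed, and no new harmonics appear.

```python
import numpy as np

def nl(v):
    """Hypothetical distortion cell: a mild second-order kink (illustrative only)."""
    return v + 0.05 * v * v

fs, f0, n = 48000, 1000, 4800                    # f0 lands exactly on FFT bin 100
x = np.sin(2 * np.pi * f0 * np.arange(n) / fs)   # 1V peak test tone

def thd(sig):
    s = np.abs(np.fft.rfft(sig)) / (n / 2)
    return np.sqrt(np.sum(s[200:1000:100] ** 2)) / s[100]   # harmonics 2..9

open_loop = nl(x)                                # distortion cell, no feedback

a = 100.0                                        # error-amplifier gain
y = x.copy()                                     # loop output, solved per sample
for _ in range(200):                             # damped fixed-point iteration to
    y += 0.01 * (nl(a * (x - y)) - y)            # the equilibrium y = nl(a*(x - y))

print(f"open loop THD:     {thd(open_loop) * 100:.3f}%")
print(f"with feedback THD: {thd(y) * 100:.4f}%")
```

The closed-loop distortion comes out roughly two orders of magnitude lower - the reduction tracks the loop gain, just as the VCVS experiment above showed.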
For reference, I have also included a spectrum analysis for a single 1kHz sinewave. This makes the picture clearer, and is the way THD is measured using spectrum analysis. The harmonics are seen clearly, and it is notable that a circuit that one may assume would produce only even harmonics also produces odd harmonics. There is a school of 'thought' that is convinced that single-ended triode amplifiers (for example) produce only even ('nice') harmonics, while yucky push-pull amps produce only odd harmonics. This is not the case. While it is true that push-pull amps do indeed cancel the even harmonics, if the first claim were true, a push pull amp using triodes would cancel the even harmonics (which they do), leaving no distortion at all at the output (which they don't).
Even-order harmonic distortion in isolation does not happen - it is invariably accompanied by odd-order harmonics, as demonstrated by the open loop response shown below. Taking the 'even order distortion only' argument to extremes, in order to obtain only even order harmonic distortion, the first harmonic (the fundamental) cannot be present because it is an odd number! While a bridge rectifier can achieve this, the sound is unlikely to gain wide acceptance.
Figure 8 - Harmonic Distortion - 1kHz (Red = Feedback, Green = Open Loop)
Note that the open loop distortion products show diminishing amounts of both odd and even harmonics. Only those up to the seventh harmonic (7kHz) are relevant - all others are more than 100dB below the fundamental. When feedback is applied, all of the distortion products are greater than 114dB below the fundamental. Also, note that not one distortion product is at a greater level than in the open loop circuit. The spectra shown only extend to 10kHz because there are no significant harmonics above that frequency. Reducing the gain of E1 reduces the feedback ratio and increases the level of the harmonics as one would expect. Changing from 100k to 10k (20dB) increases the amplitude of the harmonics by 20dB. If E1 is reduced to a gain of 1k, the second harmonic is increased to -74dB with respect to the fundamental. This effect is quite linear over a significant range.
As with the intermodulation test above, there are artifacts of the simulation and FFT process. The small peaks at 4.44kHz and 6.44kHz are not related to the 1kHz input signal, but are so far below the noise floor that it wouldn't matter if they were real. These signals exist in both cases (and at the same amplitude).
Having looked at some examples using ideal amplifying devices with no real-world limitations, it is now time to examine real circuits. Unlike their simulated counterparts, real amplifiers have finite bandwidth and slew rate (maximum rate of change), finite input and output impedances, and are not free of distortion. For the audio frequency range, this makes very little difference, despite claims that these limitations lead to Transient Intermodulation Distortion or 'TIM' - now pretty much universally discredited, but still quoted by some [ 4 ].
An amplifier simply needs to be somewhat faster than required for the highest frequency of interest. Just as in the explanation given above, real amplifiers don't care if the input is AC, DC, or a mixture of multiple frequencies. The only things of interest are the instantaneous voltage level, and the highest frequency of interest and its amplitude. These determine how quickly the output must change to prevent it from losing control.
One major limitation in any amplifier is propagation delay - how long it takes for a signal applied to the input to reach the output. Propagation delay depends on actual semiconductor delays, as well as phase shift introduced by the dominant pole capacitor. This component is almost invariably needed to maintain stability, because the amplifier must have less than unity gain when the total phase shift through the amp is 180°, otherwise it will oscillate.
+ +Without the dominant pole compensation, propagation delays will be sufficient to cause a 180° phase shift while the amp still has significant gain. For example, if an amplifier has a propagation delay of 1µs, this causes the phase to be reversed at 500kHz, so the amp will oscillate strongly unless the gain is reduced to slightly less than unity for any frequency of 500kHz or above.
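The 500kHz figure follows from the fact that a pure delay contributes 180° of phase shift at the frequency whose half-period equals the delay. A one-line sketch (the function name is mine, not from the article):

```python
def phase_inversion_freq_hz(delay_s: float) -> float:
    """Frequency at which a pure time delay contributes 180 degrees of
    phase shift: half a period equals the delay."""
    return 1.0 / (2.0 * delay_s)

# A 1us propagation delay reverses the phase at 500kHz, as quoted above
print(phase_inversion_freq_hz(1e-6))  # 500000.0
```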
Figure 9 - Practical Feedback Amplifier
In order to obtain approximately equal slew rate for positive and negative going signals, the circuit of Figure 9 was used. Q1, Q2 and Q3 form the error amplifier, Q4, Q5 and Q6 make up the VAS, and Q7, Q8 form the current amplifier. Open loop gain is 20,000 (86dB), and the HF compensation caps (220pF) cause the open loop frequency response to be 3dB down at 2.4kHz. As is typical with such circuits, there is less feedback available at high frequencies because of the requirement for the dominant pole capacitor. This is not needed for open loop operation, but all linear (audio) applications will use the amplifier as a closed loop (feedback) circuit.
At an output of 3.7V RMS at 1kHz, open loop distortion is 2.3%, showing that the circuit is fairly linear with no feedback. Input impedance is about 7k, with output impedance at about 200 ohms. The distortion components are low order as expected, with only second and third harmonics at significant levels. The fourth harmonic is at -85dB relative to the fundamental.
When feedback is added, but the output is maintained at the same voltage, things change much as we would expect. The gain is set to 11 (set by the feedback resistors Rfb1 and Rfb2). Distortion at 1kHz now measures 0.0014%, and only the fundamental is above -98dB (the level of the second harmonic with feedback). What happened to all the high order harmonics 'generated' by the addition of feedback? As fully expected from previous tests, they simply don't appear - all harmonics are suppressed to much the same degree, but with some dependence on the open loop gain (and hence feedback ratio).
With feedback, frequency response is -3dB at 4.3MHz (no, I don't really believe that either), input impedance a more respectable 5.8MΩ at low frequencies, falling to a bit under 1MΩ at 20kHz. Output impedance is well under 1 ohm. Apart from the rather optimistic frequency response reported by the simulator, the figures are pretty much what I would expect.
The slew rate is 11.5V/µs positive and 18V/µs negative - not exactly equal, but it will have to do. The maximum slew rate for a sinewave occurs at the zero-crossing point, and is determined by ...
    Slew Rate ( Δv / Δt ) = ( 2π × Vpeak × f ) / 10^6 V/µs
So, if we want to get 10V RMS output at 100kHz, the required slew rate is ...
    Vpeak = Vrms × 1.414 = 10 × 1.414 = 14.14V
    Slew Rate = ( 2π × 14.14 × 100k ) / 10^6 = 8.9 V/µs
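The same arithmetic can be wrapped up as a small helper (a sketch only; the function name is mine, not from the article):

```python
import math

def required_slew_rate(v_rms: float, freq_hz: float) -> float:
    """Worst-case slew rate (V/us) of a sinewave, reached at the zero crossing."""
    v_peak = v_rms * math.sqrt(2)          # Vpeak = Vrms * 1.414
    return 2 * math.pi * v_peak * freq_hz / 1e6

# 10V RMS at 100kHz, as in the worked example above
print(round(required_slew_rate(10, 100e3), 1))  # 8.9
```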
Despite the gain rolloff after 2kHz and the relatively low slew rate for the desired frequency (it's not even double that needed for a positive going signal), the distortion measures 0.038%, and no harmonic exceeds a level of -70dB (with respect to the output of 10V RMS). The fifth harmonic is at -85dB. Remember that this is for a frequency of 100kHz.
The concept of TIM (Transient InterModulation distortion) aka TID (Transient Induced Distortion) was first proposed in the 1970s by Otala, and although it created a stir for a while, most designers realised fairly quickly that it does not happen in any sensibly designed amplifier. The 'dark side' of the industry seized upon TIM / TID as their 'proof' that feedback was bad, and the debate has raged ever since. Some supposedly objective works on the topic have glaring errors, or have completely ignored other factors [ 4 ], such as amplifier output impedance and its effect on the response of a loudspeaker. It is notable that almost without exception, driving a speaker with higher than normal impedance sounds 'better'. Frequency response is less linear, damping factor is (much) lower, but somehow it sounds really good - at least in the short term. However, it is a grave error not to eliminate this variable from a test, because the sound difference is usually unmistakable.
According to the theory, when an amplifier has feedback around it, the delays between the input and output changes will be such that huge amounts of TIM will be produced. Naturally, a sinewave will never show the effect (at any frequency), and traditional measurement techniques will be useless for identification of this mysterious distortion mechanism. A useful test is to apply a squarewave at (say) 1kHz, with a sinewave superimposed upon it. This test will certainly let you know if there is a problem, but although I have used the test many times on amplifiers that should have vast amounts of TIM, no problems have ever been seen.
Figure 10 - TIM Test Waveform
Figure 10 shows the output waveform of the Figure 9 amplifier, which consists of a 10kHz squarewave whose slew rate is limited by the amplifier, with a 100kHz sinewave superimposed. This combined signal forces the amplifier into slew rate limiting, where the output cannot keep up with the input. The rise and fall times for the input squarewave are set at 1ns - many times faster than the amplifier can accommodate. Regardless of that, the sinewave shows very little modification - certainly there is a small section that is simply not reproduced at all, but this is with input frequencies and rise times that do not occur in any type of music!
Although a CD is capable of full output level at 20kHz (a slew rate of 5V/µs for a 100W / 8 ohm amplifier), such a signal will never occur in music. This is a good thing, because tweeters cannot take that much power anyway. An examination of the maximum level of any music signal vs. frequency will show that the level at 20kHz is at least 10dB below that in the mid band - 10W for the amplifier above, or a slew rate of 1.6V/µs. No sensible designer will ever limit an amplifier to that extent, but allowing 5V/µs is easy, and will let the amplifier match the maximum rate of change of the CD source. In case you were wondering, vinyl can't hope to match a CD for output level at high frequencies, because even at the first playing, with the best cartridge and stylus available, high amplitude, high frequency grooves would be damaged forever. That vinyl can reach higher frequencies than CD is not disputed, but the level is very low. Fortunately, very high frequencies are never present in music at very high amplitudes.
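The 5V/µs and 1.6V/µs figures can be verified from power, load and frequency. A sketch using the article's 100W / 8 ohm example (the helper name is mine):

```python
import math

def full_power_slew_rate(power_w: float, load_ohm: float, freq_hz: float) -> float:
    """Slew rate (V/us) needed for a full-power sinewave into a given load."""
    v_peak = math.sqrt(power_w * load_ohm) * math.sqrt(2)  # Vrms -> Vpeak
    return 2 * math.pi * v_peak * freq_hz / 1e6

print(round(full_power_slew_rate(100, 8, 20e3), 1))  # 5.0 - full CD level at 20kHz
print(round(full_power_slew_rate(10, 8, 20e3), 1))   # 1.6 - real music, 10dB down
```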
As for claims that local feedback is 'good' and global feedback is 'bad', this is generally false. Global feedback around a competently designed amplifier will generally give much better results than multiple local feedback loops. Remember that waveform modification causes distortion, so a number of low gain stages with local feedback will generate additive distortion because each stage applies its own amount of modification to the signal! This is real, and the exact opposite of what may be claimed by local feedback proponents.
An amplifier with a single gain block and one global feedback loop will, provided it has reasonably good open loop linearity, simultaneously remove a significant amount of distortion from all stages at once, and there is no additive effect due to cascaded stages. This point is rarely (if ever) mentioned.
It is obvious that nothing in life is instantaneous. When a signal is applied to the input of an amplifier, there is a delay before the amplifier can react to the change, and this is determined by the speed of the devices used. Logic circuits typically have nanosecond delays from input to output, and this is also the order of delay one can expect before an amplifier as shown in Figure 9 will react to a change of input. According to the simulator, it takes about 5ns for the amp to respond to the fact that the input has changed - this is still using the very fast squarewave as an input. The output then swings in the appropriate direction at its maximum slew rate until the voltage at the inverting input again equals that at the non-inverting input. Once the voltages are equal, it takes about 220ns for the output to stabilise, settling so that the two input voltages are exactly the same. These times are very short - it takes the output 1.3µs to change from +11V to -11V, so the 'reaction' time is close to negligible. It would be pointless to try to reproduce all the waveforms, so I suggest that you download the simulations. The files are in SIMetrix format, and are ready to run.
Note that any delay has nothing to do with electrons 'slowing down' - there is typically nothing in an amplifier circuit that does any such thing. The delays are simply the result of the devices taking a finite time to turn on or switch off after a signal has been applied or removed, an issue that affects all amplifying devices. While painstaking engineering is needed to minimise these delays (especially for very high speed switching), it is generally not needed for audio - not because audio is slow (although it is very slow compared to the logic in a fast micro-processor), but because analogue amplifiers are not switching, so are normally inherently fast. We actually have to slow them down deliberately with a capacitor (the Miller or dominant pole cap) to prevent oscillation.
However, the above test was done with a signal that is much faster than the amplifier can handle (and much faster than any signal it is expected to handle for music reproduction), and it is more useful to examine what happens when the input slew rate is limited to something sensible. By adding a filter to the squarewave signal, the rise time can be limited to a somewhat more realistic value. A 32kHz, 24dB/octave filter was used, and this limits the output signal from the amplifier to 1.85V/µs - well within its range, but still a great deal faster than any real music signal will create. Everything is now within the linear capability of the amplifier. The output is delayed by 46ns compared to the input, but this is inconsequential. Of more importance is how the amplifier reacts to the combined sine and square wave signal. It is not immediately apparent from the output, but in fact the sinewave is almost completely unaffected - the portion that would otherwise be cut off due to slew rate limiting now simply 'rides' the slope of the squarewave - if compared (after correcting for the level difference), the input and output are virtually identical - there is no evidence whatsoever of anything that could be classified as transient distortion - even with a 100kHz signal.
Figure 11 - Realistic TIM Test Waveform (Expanded)
There are two graphs in Figure 11 - green is the scaled input (increased in level to match the output) and red is the output signal. They are perfectly overlaid, indicating that the difference between them is very small indeed. Differences can be seen if the graph is expanded far enough, but the resolution of any oscilloscope will be such that the two waveforms will appear identical. The simulator can resolve details that are imperceptible with real test equipment. It is worth pointing out that the ESP sound impairment monitor (SIM) will detect the difference in real time using real world signals. Even the modified waveform of Figure 10 does not represent any signal that can be recorded or produced by any musical instrument (or combinations thereof).
Once the combined input signal is made sensible, the difference between the input and output signals can be seen, and it is primarily the result of the time delay (mainly phase shift) through the amplifier circuit. By using the SIM technique (measuring the voltage difference between the two inputs), all that remains is a residual signal that correlates with the gain of the amplifier at the frequencies used. The residual signal contains no non-linearities whatsoever, and is shown in Figure 12. The input stimulus this time is a 5kHz squarewave, filtered at 24dB/octave with a filter having a -3dB frequency of 32kHz. Superimposed on this is the same 100kHz signal used for the previous tests. The signal shown is the difference between the inverting and non-inverting inputs of the amplifier. Some of the signal shown is the result of the amplifier's error correction stage (the long-tailed pair) and VAS over-reacting slightly, and is also affected by the amplifier's total propagation delay and phase shift.
+ +
Figure 12 - Residual Signal Voltage From ESP SIM Circuit.
The important point here is that the amplifier must be maintained within its linear range. All amplifiers, including 'zero feedback' designs, can be forced outside their linear range. The whole idea of an amplifying circuit is that it should be linear, so no test signal should be used that dramatically exceeds the parameters of a normal source (such as music). To do so highlights 'problems' that do not exist in reality, so their inclusion is pointless at best, and grossly misleading at worst. The test signal used to obtain the above waveform is still a savage test - far more so than any music signal will produce, and deliberately much closer to the amplifier circuit's own limitations.
One can also measure the difference between an amplified version of the input signal, and that passing through the real circuit. In this case, the error signal is ~58dB down from the amplifier output, but is mainly the result of phase shift and very small gain errors - it is not a non-linear (distortion) component. At the upper test frequency of 100kHz, the amplifier has an open loop gain of only 470. With a design gain of 11 and an open loop gain of 470, the actual gain works out to be about 10.75 - this (as well as phase shift and DC offset) will always cause some error. It is important to understand that this is simply a small gain error, and does not contribute towards non-linear distortion.
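The ~10.75 figure follows directly from the closed-loop gain formula A / (1 + Aβ), with β set by the feedback network to 1/11. A minimal sketch (the function name is mine):

```python
def closed_loop_gain(a_open: float, design_gain: float) -> float:
    """Actual gain with finite open-loop gain, assuming beta = 1 / design gain."""
    beta = 1.0 / design_gain
    return a_open / (1 + a_open * beta)

# Open-loop gain of only 470 at 100kHz gives a small, purely linear gain error
print(round(closed_loop_gain(470, 11), 2))  # 10.75, not 11
```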
These same tests have been performed (using test equipment, not the simulator) on various amplifiers shown in the project pages, with very similar results to those described above. There remains no evidence that any sensibly designed amplifier cannot keep up with recorded music, regardless of genre. The most common real amplifier fault one is likely to encounter in the listening room is clipping. Since clipping forces an amplifier out of its linear region, the main concern is how long the amp takes to recover from the overload.
This is a test I always perform, and a well behaved amp should recover almost instantly. The simulated circuit of Figure 9 recovers in less than 500ns for both positive and negative peaks, when clipped with an input signal 4.5dB above the maximum level at 10kHz. Normal maximum level is 1.75V, and the input was driven with 3V (both are peak input levels). Recovery from clipping is not substantially affected by the input level. The recovery time is substantially less than one sample period of a CD (44.1kHz = 22.675µs), so the loss of information is only a fraction of one sample. Most amplifiers should recover in a few microseconds. If they do not, then there is a problem with the design.
It's worth noting that even very slight and momentary clipping moves the amplifier out of its linear range, and the loss of some signal material is at least an order of magnitude worse than the effects of TID/ TIM. Clipping is real, and can happen with any amplifier, whereas TID/ TIM usually only occurs with unrealistically high slew rates on the input signal. Most TIM/ TID effects (assuming they actually exist with normal programme material) can be removed by the simple expedient of using a low pass filter before the amplifier, so fast risetime signals cannot affect the amp. Since musical instruments aren't terribly fast anyway, you needn't bother.
I must point out here that I have used the term 'local feedback', even though it is more correctly called degeneration. The difference is subtle, and the distinction between the two is not usually explained. Degeneration only provides some of the benefits of true feedback - while input impedance is increased and gain and distortion are reduced, there is no effect on output impedance. 'Real' feedback will reduce output impedance as well. Degeneration may also have the opposite effect from feedback on noise performance with valves in particular. In such circuits, degeneration can increase the noise level - the cathode resistor must be bypassed for best noise performance.
There is a constant argument regarding the benefits of local rather than global feedback. The following two circuits show an essentially similar design, but one uses two stages with only local feedback, while the other has been optimised for global feedback. The value of the feedback resistor was adjusted to give identical overall gain, in this case 40 (32dB). Conventional transistor current sources were used in the second circuit, the only difference being the use of a voltage source instead of a pair of diodes. The difference is minimal.
The strange resistor values in the global feedback circuit were a matter of expedience, and were used to set the gain and collector currents so that both circuits were run with the same current and collector voltage. Normally, one would not go to so much trouble, but for this experiment it was important to eliminate as many variables as possible.
Figure 13 - Test Circuits for Local & Global Feedback
Even though the circuits shown are far too crude to be genuinely useful (although they will function perfectly as shown), there are some quite surprising results. The global feedback circuit has less than half the distortion of the local feedback version (0.035% vs. 0.082%), but there are many other advantages as well. Input impedance is higher (now limited by the bias resistors R1 & R2), output impedance lower, and global feedback makes the circuit faster and with better frequency response. The full listing is shown in Table 4, and it is obvious that global feedback is superior to local feedback in every respect.
Table 4 - Local vs. Global Feedback Comparison

Parameter        | Local FB | Global FB
Distortion       | 0.082%   | 0.035%
Input Impedance  | 17kΩ     | 37kΩ
Output Impedance | 1kΩ      | <26Ω
-3dB Bandwidth   | 10.4MHz  | 24.7MHz
Open Loop Gain   | 40       | 286,000
Rise Time        | 28.8ns   | 11.9ns
Fall Time        | 32.3ns   | 10.6ns
One would think that there must be a down side. Something so simple can't possibly be that much better without a sacrifice. Can it? Yes, it can. Figure 14 shows the spectrum of the two circuits. As you can see, global feedback reduces all the harmonics, and the 'nasty' third harmonic is reduced far more effectively by global feedback than local. Not what you might expect, but there it is.
Figure 14 - Distortion Spectra for Local (Red) & Global (Green) Feedback
On the basis of this, global feedback wins on all counts. If you were to build the two circuits, you would find that the overall situation will not change, although some of the parameters will. This is due to component tolerance, variations in actual (as opposed to simulated) transistors and temperature, but will not materially affect the final outcome.
It is notable that global feedback works best when there is lots of it. The claims that global feedback should be used in moderation are just silly, and have never considered the reality of good circuit design. The higher the open loop gain the better, but eventually you will run into stability issues, so some form of frequency compensation becomes essential.
Designing for stability and high open loop gain can be a challenge at times - especially for power amplifier circuits. However, when it is done (and done properly), there is no doubt that global feedback lives up to all the claims for it, with virtually no down side at all.
Well, there is a down side, but we have to look for it and know what we are looking for. Because nearly all opamp style amplifiers require a dominant pole capacitor to prevent oscillation, this causes a loss of open loop gain as frequency increases. Less open loop gain means less feedback, so upper harmonics are not attenuated as well as low order harmonics.
This could lead one to imagine that the application of feedback has indeed increased the level of the high order distortion components, but in fact it has done no such thing. What has happened is that the feedback at higher frequencies is insufficient to reduce the upper harmonics as effectively as those at lower frequencies. Their amplitudes have been reduced, but not by as much as the low order harmonics. High order distortion products can therefore be seen extending out well past the audio band, at a similar level to the lower order components. For example, we may find that the tenth harmonic is reduced to perhaps -80dB, but the eleventh is only at -81dB, the twelfth at perhaps -81.5dB and so on.
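This behaviour falls straight out of a single-pole open-loop model. The sketch below plugs in the Figure 9 numbers (86dB open-loop gain, pole at 2.4kHz, closed-loop gain of 11), but the single-pole model and the function name are my simplification, not the article's simulation:

```python
import math

A0 = 20_000      # open-loop gain (86dB), from the Figure 9 description
F_POLE = 2.4e3   # dominant pole frequency (Hz)
GAIN = 11        # closed-loop gain

def suppression_db(freq_hz: float) -> float:
    """Harmonic suppression (dB) at a given frequency, single-pole model."""
    a = A0 / math.sqrt(1 + (freq_hz / F_POLE) ** 2)  # open-loop gain magnitude
    return 20 * math.log10(1 + a / GAIN)             # 1 + A*beta, in dB

# Higher harmonics of a 1kHz fundamental see progressively less feedback,
# so they are reduced less - without any new harmonics being created.
for h in (2, 3, 5, 10):
    print(f"harmonic {h}: {suppression_db(h * 1e3):.1f} dB")
```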
Examining the spectrum may show that the relative levels of all subsequent harmonics remain at much the same level, well beyond the audio band. In this respect, the addition of feedback can easily be blamed for all the upper harmonics. The problem really lies with the gain of the amplifier, which rolls off the frequency response at a lower frequency than we may desire. Regardless of claims you may see, there is no evidence to support the notion that harmonics outside the audio band are audible, or somehow create audible artifacts. Consider that very few tweeters extend much beyond 20kHz - some do go higher, but there's again no evidence that this improves anything (or is even audible to the majority of listeners).
The limited effect of feedback to remove crossover distortion can be seen plainly with an unbiased P101 MOSFET power amp. At 1kHz, there is virtually no visible crossover distortion, even when the output stage has zero quiescent current. At 10kHz, the distortion is clearly visible on the oscilloscope, even though it is not audible with a single tone (the 3rd harmonic being at 30kHz). Needless to say there is zero visible (and almost zero measurable) crossover distortion at 10kHz once the amp is biased correctly, but this highlights the open loop gain issue. At 10kHz there isn't enough feedback to be able to correct the crossover distortion, but there is enough gain at 1kHz to reduce it. There is more about crossover distortion in the next section.
The solution is simple enough - make sure the amp is as linear as possible before feedback is added (which in the above case means setting the bias current correctly). While there is no doubt that a wider open loop bandwidth is beneficial, this must never be at the expense of amplifier stability. A small amount of distortion at the uppermost frequency range is far better than an amp with marginal stability - oscillating amplifiers definitely don't sound very nice at all.
One area where there seems to be some misunderstanding is with crossover distortion. It always seems that no matter how much feedback you apply, crossover distortion will still be evident. The problem is that this is 100% true. The output stage of any amplifier must be linear before you apply feedback, or there will always be vestiges of distortion remaining.
Consider the case where the output transistors have no bias at all, so they cannot conduct until the base-emitter voltage reaches ~0.65V. When the output from the drive circuits (input stage and voltage amplifier stage - VAS) is less than 0.65V, the amplifier has no overall gain. None at all! If an amplifier has a gain of zero, feedback can't do anything to correct the output, so there is no feedback until the output of the VAS is greater than the forward voltage of the output devices.
This is one reason that the VAS is almost always designed to have a very high output impedance. This makes it a VCCS - voltage controlled current source. Having a high output impedance means that the voltage from the VAS will make an almost instantaneous transition from the base of the upper (NPN) transistor to that of the lower (PNP) device, dramatically reducing the amount of measured (and heard) crossover distortion. However, there will still be measurable distortion because nothing in life is really instantaneous, and the overall gain at zero volts output is still zero.
Figure 15 - Crossover Distortion Test Circuit
The above shows the general idea, and is a good test circuit to demonstrate crossover distortion. The circuit gain is set by the feedback resistors, in this case for a gain of two. The VCVS (voltage controlled voltage source) is initially set for a gain of 10, which is unrealistically low but serves to demonstrate the idea. With such a low open-loop gain, the circuit cannot achieve a gain of two, and only manages an overall gain of 1.6 - the crossover distortion measures just over 2% with a 2V peak (1.414V RMS) input. This increases as the input level is reduced.
When the VCVS gain is increased to 100, distortion falls to 0.2% - exactly as expected. But it's still there, and will remain no matter how far the gain of the VCVS is increased. With a VCVS gain of 10,000 the open loop gain still falls to zero with very low input, and while distortion is reduced to 0.002% with a 1.4V RMS input, it's still 'pure' crossover distortion. With this combination, if the input voltage is reduced to 20µV the output will be around 6µV - exactly as anticipated, the voltage gain is less than unity because the output transistors are not conducting. Yes, this is an extreme demonstration (20µV is -94dBV), but it shows that crossover distortion can never be eliminated by feedback alone.
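A hard dead-zone model shows the same collapse of gain at very low levels. This is my own idealisation of the Figure 15 arrangement, not the article's simulation: because the dead zone here is perfectly abrupt, the tiny-signal output is exactly zero rather than the few microvolts the simulated transistors deliver.

```python
def amp_out(vin: float, a_ol: float, beta: float = 0.5, vbe: float = 0.65) -> float:
    """Output of a VCVS (gain a_ol) driving an unbiased output stage with a
    +/-0.65V dead zone, inside a feedback loop set for a gain of 1/beta.
    Positive inputs only; the loop equation is solved analytically."""
    v = (a_ol * vin - vbe) / (1 + a_ol * beta)  # assume conduction, solve loop
    if a_ol * (vin - beta * v) > vbe:           # check the assumption held
        return v
    return 0.0  # VAS drive never leaves the dead zone: overall gain is zero

print(round(amp_out(2.0, 10_000), 3))  # 3.999 - almost the design gain of 2
print(amp_out(20e-6, 10_000))          # 0.0 - no conduction, feedback helpless
```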
What we need to do is to add a bias circuit to ensure that the transistors conduct in the absence of signal (this is called quiescent current). While this ensures that the open loop gain never falls too far, it's still very important to use output devices whose gain doesn't fall to nothing at very low current. This was a problem with many of the early transistor amps - the output transistors had significant gain 'droop' at low current, so it was often still difficult to minimise crossover distortion.
Modern devices are very much better, and few modern amplifiers will have crossover distortion that is even close to the limits of audibility at any level. Most commonly, it should be almost impossible to measure it if the output stage is sufficiently linear without feedback. You can easily verify that even the most linear output transistors have very low gain at low current. Try measuring a power transistor with the transistor 'tester' that's built into many multimeters - they all operate at very low current, and a perfectly good output device might show a gain of less than 5 (some might even show zero gain).
The problem isn't the transistor, it's the tester. Transistor gain must always be measured at a realistic collector current. For output transistors, the minimum collector test current will be around the same value as the amplifier's designed quiescent current, typically between 10 and 50mA. Now you know why amplifiers aren't set up for a quiescent current of 2mA (for example) - that current is too low to ensure reasonable current gain with no (or very low) signal.
In most designs, the output stage is configured so that the driver transistors also provide some of the output current. This helps to ensure that the output stage always has at least some conduction to prevent the overall gain from falling too far.
In many of the examples shown above, I used a VCVS (voltage controlled voltage source) as the amplification device. This was done to ensure consistency of results under different conditions, but it's not particularly realistic when compared to an opamp or a power amplifier. These (almost) always include a Miller (aka 'dominant pole') capacitor to ensure closed loop stability. This plays an important role with distortion, because as the frequency increases, the amount of feedback decreases, at 6dB/ octave (20dB/ decade). The dominant pole must ensure that the amp/ opamp remains stable at the design gain. Most opamps are compensated for unity gain operation, and the -3dB frequency of the open-loop response (with zero AC feedback) is often only 100Hz, sometimes less.
That means that at 1kHz (one decade) the circuit's open loop gain has fallen by 20dB, 40dB at 10kHz and 60dB at 100kHz. If the circuit has an open-loop gain of 80dB (a gain of 10,000) up to 100Hz or so and rolls off as described above, there's only 20dB of 'reserve' gain at 100kHz. If the stage gain is 20dB (×10), then there is no feedback at all at 100kHz. With no feedback, harmonics cannot be affected, and they will be visible using FFT or discrete frequency analysis.
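The arithmetic can be checked with a one-line gain model (an assumed single-pole response; the default numbers are the ones used in the text, and the function name is mine):

```python
import math

def reserve_gain_db(freq_hz: float, a0_db: float = 80.0,
                    f_pole: float = 100.0, stage_gain_db: float = 20.0) -> float:
    """Feedback ('reserve' gain) in dB for a single-pole open-loop response
    falling at 20dB/decade above f_pole."""
    open_loop_db = a0_db - 20 * math.log10(max(freq_hz / f_pole, 1.0))
    return open_loop_db - stage_gain_db

for f in (1e3, 10e3, 100e3):
    print(f"{f/1e3:.0f}kHz: {reserve_gain_db(f):.0f}dB of feedback")
```

At 100kHz the result is 0dB: no feedback is left, exactly as described above.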
+ +When the bandwidth is severely limited, the circuit will almost certainly be affected by slew-rate distortion, caused by the dominant pole capacitor. Incoming (high frequency) sinewaves are converted to triangle waves simply because the circuit isn't fast enough to handle high levels at high frequencies. As already discussed, this is not usually a problem with audio, because high-level, high-frequency signals don't exist in music. The idea that 'fast' transients must create high levels at high frequencies is flawed, with one exception. With vinyl, you can get a fast, high-level transient if there's a scratch on the disc surface, but A) it's attenuated by the RIAA equalisation network, and B) who wants major flaws to be reproduced accurately anyway?
+ +The use of a dominant pole within an amplifier (or opamp) circuit is claimed to guarantee that TID/ TIM (transient intermodulation distortion) will result, but that's a fallacy with programme material. Any power or op amp can be driven with a fast rise/ fall time impulse and will be limited by the slew rate, and this includes zero-feedback circuits! Realistically, these waveforms simply don't occur in music, so it's a moot point. Even a single transistor (BJT, JFET or MOSFET) has a finite maximum switching speed (effectively slew-rate), and a very fast transient can cause problems whether feedback is applied or not. It's obvious that the circuit must be able to handle the amplitude of the audio waveform (at all frequencies and levels of interest), but it's not at all obvious (or necessary) to ensure that frequencies that are ten times or more than the accepted maximum of 20kHz can be dealt with transparently.
+ +Apart from anything else, providing much wider bandwidth than necessary makes the circuitry more responsive to RF interference. In extreme cases, it can even make a circuit prone to RF oscillation if there's even the slightest capacitive coupling between the input and output. Using feedback doesn't change this for better or worse, as it's simply a matter of gain and frequency response. I recently tested a simple design of a rather fast small power amp, and without a dominant pole capacitor it would oscillate cheerfully with as little as 2pF between input and output! The cap was essential just to ensure that the amp could not respond to stupidly high frequencies.
+ +There are many opamps today that have such low distortion that 'trick' circuitry has to be used so it can be measured, even with the best equipment available. No-one can convince me that they can hear the distortion of any competent opamp, or that there are 'blindingly obvious' differences between them. The tests that allegedly 'prove' the hypothesis are subjective, never double-blind, and the experimenter expectancy effect (aka confirmation bias) simply makes the listener think there's an audible difference, when in fact probably none exists. The only subjective test methodology that can be trusted is double-blind!
+ +As a side-issue (but just as relevant), if anyone claims that one opamp has 'better bass' than another, then you can pretty safely ignore everything else they claim. All opamps have response to DC, and no bass signal will cause any opamp to sound even slightly different from any other. This is one of the most puzzling claims I've ever heard, and it doesn't stand up to even the most rudimentary scrutiny! The same applies to many other claims. Don't believe me ... do your own research, take your own measurements and conduct properly designed double-blind tests.
Always consider a simple fact that applies to almost all recorded music. It's already passed through a great many stages of amplification, attenuation, equalisation, 'effects' (which may add distortion) and compression, any or all of which may be analogue or digital. Regardless, it's been processed by countless opamps (using lots of feedback of course), DACs, ADCs and the like, and may include valves (often also with feedback) in some cases. To imagine for an instant that the producers used 'zero feedback' designs throughout because they somehow knew that you don't like feedback is pure folly (and perhaps a wee bit naive). Likewise, to believe that by some magical process, your 'zero feedback' design undoes all the alleged 'damage' caused during the production of your favourite music is high-order self-delusion.
Read any articles about distortion and feedback you may come across (including this!) with care. Like death and taxes, distortion is inevitable; however, it can be minimised with careful design and a proper understanding of how feedback can be used most effectively to ensure that distortion doesn't spoil your listening experience.
Loudspeakers contribute far more distortion than the vast majority of amplifiers, but it's low order and surprisingly subtle. Some forms of (electronic) distortion can be very intrusive - especially crossover distortion in transistorised amps. Fortunately, it is a simple matter to design an amp using sensible circuitry and modern transistors where crossover distortion is (for all intents and purposes) non-existent. Total harmonic distortion figures of well below 0.1% at any normal power level from a few milliwatts to several hundred Watts are easy to obtain. The distortion of most modern amps will contain only a few low order harmonics at all power levels up to the onset of clipping.
Of far more concern is the addition of distortion to the recording, either deliberately or by accident. Nothing that you do in your home system can eliminate that - once a signal is distorted, you are stuck with it. It is possible to use an 'anti-distortion' circuit that reverses the distortion process, but that can only work if you know the exact nature of the distorted signal, can produce its inverse, and can operate it at exactly the same input level. Needless to say, this is not possible in any stand-alone system.
A big trap is to measure THD using a conventional distortion measuring set, but without monitoring the distortion residual either through a speaker or with an oscilloscope (preferably both). Early transistor amps gained a very bad reputation, because although the distortion measured much better than the valve amps they tried to replace, many had audible crossover distortion. Had the residual signal been examined with an oscilloscope, the designers of the day would have seen the problem immediately. Regrettably, this didn't happen (either by accident or intent is unknown), and this has provided endless ammunition for anti-solid state and anti-feedback proponents for well over three decades.
Avoiding the use of global feedback based on some of the so-called 'research' is most unwise. As demonstrated above (and by many others), correctly used, global feedback is as close to a panacea as we are ever likely to find. The idea of any hi-fi system is to reproduce the source material as faithfully as possible, and to deliberately add distortion to everything you hear (due to amplifier deficiencies) because it sounds 'nice' is simply not high fidelity. If that is what you want to hear then there's no problem, but by adding so much additional material (by way of harmonics and intermodulation) you have a tailored sound system, not a hi-fi.
Harmonic distortion and intermodulation are linked together (although not in any mathematically predictable manner), so much so that it is virtually impossible to have one without the other. By ensuring that each element in the amplification chain is as linear as possible, you minimise both THD and IMD, both of which are easily demonstrable. This is a far better option than trying to minimise TIM, the very existence of which has been called into question countless times since it was 'discovered'.
Finally, I have included a pair of simple circuits that can be used to create distortion. Testing these using my workshop speaker system, the distortion of a 400Hz sinewave was (just) audible at < 0.5%. This same level would be inaudible on most music, being primarily low order as seen on the residual of my distortion meter. It is probable that had I used headphones or a better speaker system, low order distortion would be found to be audible at lower levels, but this simple test shows just how revealing a sinewave really is. While those trying to 'prove a point' will claim that a sinewave test is too simple and reveals little, this is obviously not the case.
As noted earlier, a sinewave is not an easy test at all, and anyone who claims otherwise is seriously mistaken. One only needs to see just how difficult it is to build a sinewave generator with very low distortion [ 6 ] to realise that anyone claiming a sinewave is 'simple' is unaware (blissfully or otherwise) of the reality. Good, very low distortion sinewave oscillators have been almost a 'holy grail', with many complex designs developed over the years in an attempt to get distortion well below the levels expected from modern opamps and power amps. Several sinewave generators are featured in the ESP projects section, and these show clearly how hard it is to create a low distortion sinewave.
Figure 16 - Distortion Test Circuits
The circuits shown will need to be carefully tweaked to suit your test equipment and amplifier, so consider them to be more of a general idea than definitive test circuits. The amount of distortion for both symmetrical and asymmetrical is adjusted by varying the input level, and no attempt has been made to level match the distorted and undistorted signals. The distortion itself is sufficiently prominent that full blind AB testing is not needed to get a general idea, but would be essential for a scientific study. The day after I did these tests, a friend came to my place, and I repeated the test with him. The distortion meter was disabled so we had no visual cue, and we arrived at almost exactly the same result with both test circuits.
Be aware that you may find that you can't hear any distortion until it is greater than the 0.5% I measured. Try moving around (even a few centimetres or so will be enough). Why? When listening to a steady tone, standing waves and reflections can combine to make a single frequency much louder than it should be, or almost inaudible. This effectively changes the distortion spectrum, making it sound much greater or less than the actual value. While this effect may have contributed to my hearing only 0.5% distortion on a sinewave, I did move around to make sure that the distortion was audible in more than one position. I neglected to measure the sound level when the test was done, but it would have been around 75dB SPL - any louder becomes very irritating.
For around 0.5% distortion measured and using an asymmetrical diode clipping circuit, the harmonic levels will be pretty close to the following (note that all harmonics are referenced to the level of the fundamental, and all voltages are peak) ...
Fundamental  | 400Hz  | 448mV  | 0dB (reference)
2nd harmonic | 800Hz  | 1.35mV | -50dB
3rd harmonic | 1.2kHz | 942µV  | -53dB
4th harmonic | 1.6kHz | 599µV  | -57dB
5th harmonic | 2.0kHz | 348µV  | -62dB
6th harmonic | 2.4kHz | 167µV  | -68dB
7th harmonic | 2.8kHz | 68µV   | -76dB
It is probable that only the first couple of harmonics would have been audible. Those above the fifth are approaching my hearing level threshold, and anything above the third is below the ambient noise floor in my workshop.
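As a cross-check, the tabulated harmonic levels can be summed with the standard RMS formula. A minimal sketch (the variable names are mine); the result comes out a little under the meter's ~0.5% reading, which is expected since the meter also includes noise in the residual:

```python
import math

# Harmonic amplitudes (mV, peak) from the table above; the fundamental is 448mV.
fundamental_mv = 448.0
harmonics_mv = [1.35, 0.942, 0.599, 0.348, 0.167, 0.068]  # 2nd..7th harmonics

# THD = sqrt(sum of squared harmonic amplitudes) / fundamental, as a percentage.
# Peak vs RMS doesn't matter here because the ratio cancels the scaling.
thd_percent = math.sqrt(sum(h * h for h in harmonics_mv)) / fundamental_mv * 100
print(round(thd_percent, 2))  # ≈ 0.4%
```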
Sine waves are 'too simple' to use as a test? We think not!
¹ Much of the material is best considered bollocks, with some of it taking a giant leap into the overall category of bullshit. Read it by all means, but I recommend that you ignore 99% of what you come across.
SIMetrix Simulation Files - right click, and select 'save link as' from the menu.
To view or run these simulations, you need the SIMetrix simulator on your PC. The freeware version of the simulator can be downloaded from SIMetrix. Other simulators can also be used, but you will have to reconstruct the schematics.
Elliott Sound Products - Distortion Measurements
Please note that this is a long article, so I suggest that you allocate enough time to read it all. Because it covers a wide range of concepts, there's a lot to take in. It's not just a description of one technique, so I've tried to ensure that the reader will understand each of the concepts before moving on to the next.
A complete design for a system is described in Project 232 - Distortion Measurement System, and it has many options that can be added. It relies on a 'high-end' external PC sound card and uses FFT (fast Fourier transform) to extract the distortion components from the applied signal. It's the most accurate distortion measurement system that I've published, and it allows you to measure much lower levels of THD and intermodulation distortion than the other distortion meters I've described.
Distortion is a fact of life, because nothing can reproduce an original signal perfectly. This applies in all areas of electronics, and it doesn't matter if the source is audio, video, mechanical or anything in between. Expecting any amplifying device - however it's engineered - to be perfectly linear over its entire operating range (and independent of the load within preset limits) is going to disappoint. Naturally enough, I won't be looking at the other applications (although similar principles apply) - this is about audio.
Firstly, distortion needs to be defined. In this article, we are looking at non-linear distortion, caused by active electronic devices. These can be valves (vacuum tubes), transistors (including all FETs) or ICs. Of these, high-quality IC opamps are without doubt the best, but even they have non-linearities (albeit at almost undetectable levels). The conventional way to minimise non-linear distortion is to use negative feedback, but even if the device is reasonably linear to start with, feedback can't cure all ills.
Feedback always works better when the device is already linear, and some forms of distortion cannot be eliminated with feedback (crossover distortion in particular). Why? Because at the crossover point, the DUT (device under test) has very low gain (it may even be zero in an extreme case), and without 'excess' gain you can have no feedback. Most of the distortion we measure is simply the result of non-linearity, something that is unavoidable in any practical circuit.
If an amplifier produces 1V output with 100mV input, but only gives 9.99V with an input of 1V, that's distortion. The amplification is not linear from input to output. The difference may only be 10mV, but that will show up on a distortion meter. The output for negative-going peaks may increase to 10.01V at the same time, and the distortion is therefore asymmetrical. Distortion is indicated whenever an output signal is not a perfect replica of the input. Perfection may not be possible, but many devices come remarkably close.
Another form of distortion is frequency response error - if the response isn't flat from DC to daylight, then technically that's a form of distortion. However, this is not a non-linear function, and it's not included in the general definition. Negative feedback will also improve response characteristics, but that's (mostly) a linear function and is not counted as distortion per se. You can look up the definition of distortion - anything that is altered from its original form is technically distorted.
We (mostly) measure distortion by observing the non-linearity introduced onto a sinewave. A pure sinewave comprises one frequency only - the fundamental. It is a mathematically pure signal source, and it's easy to measure the effects of any changes, notably the addition of harmonics. It is notoriously difficult to generate a pure sinewave. This is discussed at some length in the article Sinewave Oscillators - Characteristics, Topologies and Examples, and while we can get close, the laws of physics will always intervene to thwart perfection. The lowest distortion oscillator I've published is Project 174 (Ultra-Low Distortion Sinewave Oscillator), which was a contributed design. It has a 'typical' distortion of less than 0.001%, which is approaching the limits that can be achieved other than by high-resolution digital synthesis.
Any changes to the waveform show up as harmonics. Symmetrical distortion produces only odd-order harmonics (3rd, 5th, 7th, etc.). Asymmetrical distortion is claimed by some to 'sound better', and that contains both even and odd harmonics (2nd, 3rd, 4th, 5th, etc.). A very few circuits may produce only even-order harmonics, but the vast majority create both odd and even order harmonics.
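The claim that symmetrical distortion produces only odd-order harmonics (and asymmetrical distortion both even and odd) is easy to verify numerically. A minimal sketch using hard clipping as the non-linearity - my choice purely for illustration, not a circuit from this article:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs                  # one second of samples (1Hz per FFT bin)
x = np.sin(2 * np.pi * 1000 * t)        # 1kHz sinewave, an exact number of cycles

symmetric = np.clip(x, -0.9, 0.9)       # clips both peaks equally
asymmetric = np.clip(x, -1.0, 0.9)      # clips the positive peak only

def harmonic_level(signal, harmonic, fundamental=1000):
    """Amplitude of the nth harmonic, read straight from the FFT bin."""
    spectrum = np.abs(np.fft.rfft(signal)) / (len(signal) / 2)
    return spectrum[harmonic * fundamental]

# Symmetric clipping: the 2nd harmonic is (numerically) zero, the 3rd is prominent.
# Asymmetric clipping: the 2nd harmonic appears as well.
```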
At any given point in time, there is one and only one voltage present at any node in a circuit. Complex waveforms such as music may contain many frequencies, but there's still only one instantaneous voltage present at any moment. We can see that on an oscilloscope - the voltage variations may be 'all over the place', but there is still only one voltage present at any point on the composite waveform. The amplifier's job (be it a valve [vacuum tube], transistor, opamp or any combination thereof) is to increase the voltage present at its input by a fixed amount. If the instantaneous input voltage is 100mV and the circuit has a gain of ten, the output should be 1V. Change the input voltage to 1V and the output should be 10V. If it's not, the amplifier has contributed distortion.
The first layer of (non-linear) distortion occurs at the source. Some is intentional (an overdriven guitar amplifier for example), while other distortions are not. A microphone placed too close to a very high SPL (sound pressure level) device (e.g. drums) may distort, and so will a mixing desk if everything isn't set up properly. Mastering equipment may add some distortion (sometimes deliberately) and even the recording medium isn't blameless.
Early tape recorders often had significant distortion, and although performance was improved over the years, tape distortion never 'went away'. There were many innovative techniques used to minimise both distortion and noise, but limitations remained. To this day there are mixing and mastering engineers who will record some tracks (or perhaps the final mix) on an analogue tape recorder to get the 'warmth' associated with vintage electronics. That 'warmth' is largely due to distortion.
The final stage is our playback equipment, much of which is now very close to the ideal 'straight wire with gain'. The loudspeakers remain the weakest link, having distortion that's typically orders of magnitude greater than the electronics used to drive them. There are people who prefer comparatively high distortion electronics (e.g. single-ended triode [power] amplifiers), and others who seek the lowest possible distortion from everything.
In many respects, it's better to think of distortion components in dB rather than as a percentage. Stating the THD+Noise (commonly just referred to as THD or THD+N) as a percentage is standard, but when it's stated in dB you know the relative level compared to the original signal. This is quite useful, as it's far easier to estimate the audibility of distortion if you compare the SPL from the system and the relative SPL of the distortion products.
Traditionally, it's been uncommon for the distortion waveform to be provided. This is a real shame, because the waveform tells you a great deal about the nature of the distortion, and can be very helpful in working out the likely audibility. Very low numbers don't necessarily mean low audibility, especially if the distortion is 'high-order' (i.e. predominantly upper harmonics). Viewed on an oscilloscope, this type of distortion is characterised by sharp discontinuities (e.g. a spiky waveform), whereas low-order distortion will show a fairly smooth waveform at twice or three times the input frequency. There will be other harmonics present, but if they are also low-order the distortion is likely to be 'benign' - provided it's at a low enough level. 5% third harmonic distortion may be 'smooth', but it is most certainly not benign. Nor is 5% second harmonic distortion (which will contain some 3rd harmonic as well as 4th, 5th, etc.).
Note that in the drawings that follow, the opamp power supplies, bypass capacitors and pinouts are not shown. This is not a construction article, and all manner of opamps have been used, including discrete types. For metering amplifiers in particular, a discrete option may be preferable because it can be optimised for speed. Gain stages will usually use opamps, as they are now readily available with equivalent input noise of less than 3nV/√Hz (the AD797 is 0.9nV/√Hz). When there are opamps within the measurement loop, it's very important that they don't add distortion of their own. The LM4562 (for example) has a distortion of 0.00003% (-130dB) according to the datasheet.
The voltages used depend on the instrument. The most common is ±15V, but many early meters used higher voltages along with discrete amplifier circuits. For example, the Hewlett Packard 334A used ±25V. More recent (or perhaps less ancient) instruments used opamps and ±15V supplies.
If distortion (THD+Noise) is said to be 0.1%, that equates to -60dB referred to the signal level. -60dB is a ratio of 1:1,000 (or 1,000:1 for +60dB). Using dB by itself lets you work out the SPL of the distortion compared to the signal. If you listen to music at 90dB SPL and distortion is -60dB, that means the harmonics are reproduced at 30dB SPL. This is the same as background noise in a very quiet listening room.
Refer to Table 2.3.1 below for the relationships between dB and %THD. The table uses a reference voltage of 1V RMS, and provides percentage THD, parts per 'n' (from 100 to 1 million), dB and the residual distortion signal. Once the measured 'distortion' is below 0.001% (100μV residual), mostly you are measuring noise. Any distortion that may exist is effectively masked by the noise and the original signal.
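The dB/percentage relationships used throughout are simple to compute. A minimal sketch (the function names are mine, chosen for illustration):

```python
import math

def thd_percent_to_db(thd_percent):
    """Convert a THD figure in percent to dB relative to the fundamental."""
    return 20 * math.log10(thd_percent / 100)

def thd_db_to_percent(thd_db):
    """Convert a THD figure in dB (relative to the fundamental) to percent."""
    return 10 ** (thd_db / 20) * 100

def residual_voltage(thd_percent, reference_v=1.0):
    """Residual distortion voltage for a given THD and reference level (1V RMS here)."""
    return reference_v * thd_percent / 100

print(thd_percent_to_db(0.1))   # 0.1% THD is -60dB
print(residual_voltage(0.001))  # 0.001% of 1V RMS is a 10µV residual
```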
The phenomenon called 'masking' occurs where low-level sounds are rendered inaudible by nearby (i.e. closely spaced frequency) louder sounds, so many low-level details are not heard. The MP3 compression algorithm used this feature of our hearing to discard sound that we wouldn't hear. Unfortunately it also misinterpreted much of this, and supposedly 'inaudible' material was discarded and subtle stereo effects were lost. This is commonly referred to as "throwing out the baby with the bath water". Some instruments cannot be reproduced properly by MP3, notably cymbals and the harpsichord.
Of course, this doesn't mean that we cannot hear a signal just because it's at a low level. It's easy to discern the presence of a tone (if it lasts long enough), even if it's more than 10dB below the noise floor. Some tones are easier to hear than others, but the principle is not changed. To hear this for yourself ...
The tone is 12dB below the peak noise level (-6dB) and the average signal level is deliberately at about -20dB (ref 0dBFS). The level of the 550Hz Morse code is 18dB below the noise. One thing that isn't taken into account by this is our hearing sensitivity. As youngsters, we could generally hear down to a few dB SPL (frequency dependent), and at frequencies up to 20kHz (sometimes a little more). As we age our threshold increases (sound must be louder before we can hear it) and high-frequency response falls progressively. At age 20, the maximum audible frequency falls to about 18kHz (give or take 1-2kHz or so). By age 50, most people will be limited to around 15kHz or less [ 1 ]. The threshold of hearing increases from a few dB SPL to 20dB SPL or more as we age, and the amount and type of degradation depends on how much loud noise we subject ourselves to over our lifetime.
For some perspective, consider a 10kHz frequency that's subjected to 0.1% distortion. If the distortion is symmetrical, the first harmonic generated is the third (30kHz). With asymmetrical distortion, the first generated harmonic is at 20kHz (2nd harmonic), the next at 30kHz, etc.
We know these are at around -60dB, and it should be apparent that they are inaudible to any listener beyond the early teens, even in a very quiet room. However, there's something else at work - intermodulation distortion (IMD). This is more serious than 'simple' THD, as it causes frequencies to be generated that are not simple harmonics. If a 1kHz tone is mixed with a 1.2kHz tone in a non-linear circuit, you get new frequencies that are multiples (or sub-multiples) of the original frequencies, as well as (perhaps) 2.2kHz and 200Hz (the sum and difference frequencies). However, there are complex interactions that are discussed in detail in the article Intermodulation - Something 'New' To Ponder.
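The sum and difference products are easy to demonstrate numerically. The sketch below applies an assumed (and deliberately mild) second-order non-linearity to a 1kHz + 1.2kHz pair and inspects the FFT; the 1% squared-term coefficient is an arbitrary choice for illustration:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs     # one second of samples, so each FFT bin is 1Hz
x = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 1200 * t)

y = x + 0.01 * x ** 2      # a mildly non-linear 'amplifier' (assumed, not a real circuit)

spectrum = np.abs(np.fft.rfft(y)) / (len(y) / 2)

# The squared term produces sum and difference products as well as harmonics:
print(spectrum[200])       # difference: 1.2kHz - 1kHz
print(spectrum[2200])      # sum: 1.2kHz + 1kHz
print(spectrum[2000])      # 2nd harmonic of the 1kHz tone
```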
In this article, I will mainly concentrate on harmonic distortion. Whenever there is harmonic distortion, there is also intermodulation distortion - you can't have one without the other. Despite the points made in the above-referenced article, there is normally no condition where distortion is 'perfectly' symmetrical, because music presents a waveform that's rarely (if ever) completely symmetrical other than for the odd brief period. Measurement systems are another matter, and they cannot rely on the generation of sum and difference signals. IMD is covered later in this article.
The traditional way to measure THD (actually THD+N) is to use a notch filter. This removes the original frequency, and everything left is distortion and noise. To get 'pure' THD (without the noise) requires the use of a wave analyser - a tunable filter that passes a very narrow band of frequencies (ideally just a single distortion frequency by itself). The analyser is tuned to the harmonic frequencies and the amplitude is measured. Most modern digital spectrum analysis uses the fast Fourier transform (FFT) method to isolate the harmonics. THD is calculated using the following formula ...
THD = ( √( h2² + h3² + ... + hn² ) / V ) × 100 (%)
Where V = signal amplitude, h2 = 2nd harmonic amplitude, h3 = 3rd harmonic, hn = nth harmonic amplitude (all RMS)
For those who don't like playing with maths, there's a handy spreadsheet (in OpenOffice format) that you can download. Click Here to download it. You only need to insert the level of the fundamental and as many harmonics as you feel like adding (up to the 10th) and it will calculate the THD. Note that noise is not included. All measurements are in dB, referred to 0dBV (1V RMS), and can be read directly from a fast Fourier transform.
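For those who prefer code to a spreadsheet, the same calculation is straightforward. This is a hypothetical helper of my own, not the downloadable spreadsheet itself; it accepts either raw RMS amplitudes or harmonic levels in dB relative to the fundamental (as read from an FFT):

```python
import math

def thd_from_harmonics(fundamental, harmonics):
    """THD in percent from RMS amplitudes: sqrt(sum of squared harmonics) / fundamental.
    'harmonics' lists the 2nd, 3rd, ... nth harmonic amplitudes (same units as the fundamental)."""
    return math.sqrt(sum(h * h for h in harmonics)) / fundamental * 100

def thd_from_db(harmonic_dbs):
    """THD in percent from harmonic levels in dB relative to the fundamental,
    e.g. [-50, -53, -57] for the 2nd, 3rd and 4th harmonics."""
    amplitudes = [10 ** (db / 20) for db in harmonic_dbs]
    return math.sqrt(sum(a * a for a in amplitudes)) * 100

print(thd_from_db([-60]))  # a single harmonic at -60dB is 0.1% THD
```

Note that, as with the spreadsheet, noise is not included - this is THD, not THD+N.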
In contrast, a notch filter removes the fundamental, and everything left over is measured. This includes all harmonics, intermodulation artifacts, noise (including that generated in the measurement system), hum, buzz, and anything else that is not the original frequency. If the notch isn't deep enough, some of the fundamental will get through, but looking at the residual on a scope will show that clearly.
Now we have to decide how good the notch filter needs to be. If the fundamental is reduced by 40dB, it's not possible to measure less than 1% distortion, because 10mV/V of the input signal (the fundamental) sneaks past the filter. A low distortion amplifier will show a distortion residual that's at the test frequency, and it may show an almost perfect sinewave on an oscilloscope.
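The relationship between notch depth and the achievable measurement floor can be sketched as follows (a helper of my own, assuming the fundamental leakage is read as 'distortion'):

```python
def measurement_floor_percent(notch_depth_db):
    """Fundamental leakage past the notch, expressed as an apparent THD floor in percent."""
    return 10 ** (-notch_depth_db / 20) * 100

print(measurement_floor_percent(40))   # a 40dB notch cannot resolve less than 1%
print(measurement_floor_percent(100))  # a 100dB notch gives a 0.001% floor (10µV from 1V)
```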
Any measurement of THD should include the ability to look at the output waveform after the notch filter, as the shape of the residual is often a very good indicator of the distortion's nature and audibility. A smooth waveform with no rapid discontinuities indicates low-order distortion, but if the waveform is 'spiky', it's likely that the DUT (device under test) has either clipping or crossover distortion. A simple measurement of the RMS or average level may give a seemingly satisfactory reading (e.g. 0.1%) even though the distortion is clearly audible. Another DUT with the same THD but without the sharp (spiky) waveform will sound very different.
This is (supposedly) one of the reasons that some early transistor amplifiers were disliked, even though their distortion measurement was far lower than the valve (vacuum tube) designs they tried to replace. This dislike (sometimes extending to hatred) appears to continue to this day among some 'audiophools', who insist that only valve amps can provide true audio nirvana. This isn't an argument that I intend to pursue further. However, consider that valve amps very rarely undergo the same level of scrutiny as opamps or other transistorised circuitry. It's probable that most valve amps would prove to be 'disappointing' if subjected to the same intense analysis.
There's a school of 'thought' that maintains that testing with a sinewave is pointless, because real audio is far more complex. The proponents of this philosophy utterly fail to understand just how difficult it is to produce a high-purity sinewave, and how the tiniest bit of distortion is easily measured. There's no doubt whatsoever that a sinewave is 'simple' - it's a single tone which has one (and only one) frequency - the fundamental. However 'simple' a sinewave may be, producing (or reproducing) it perfectly is impossible. Just as there is no such thing (in the 'real-world') as a perfect sinewave, an amplifying device with zero distortion doesn't exist.
It is possible to test an amplifier with a 'complex' stimulus, including music. However, it's quite difficult to do, because real amplifiers introduce small phase shifts, propagation delays and tiny level deviations that make it very hard to null the output with any accuracy. It has been done though, with the method first described by Peter Baxandall (of tone control fame) and Peter Walker (QUAD).
The technique was used 'in anger' when critics claimed that the QUAD 'current dumping' amplifier couldn't possibly work well, so a test was set up that used the output from the amplifier, mixed with the input signal in a way that the two cancelled. Once the two signals were perfectly matched in level and phase, any residual was the result of distortion in the amplifier. The results silenced (most of) the critics. This is a very difficult test to set up, and it requires very fine adjustments of phase and amplitude over the full audio band. It should come as no surprise that it's not used very often.
Cancellation also relies (at least to some extent) on the music being played for the test. Material with extended high frequencies may require more exacting HF phase compensation, with a similar requirement for particularly low frequencies. The specific compensation will also be affected (at least to some degree) by the load, as no amplifier has zero output impedance. These requirements all conspire to make the setup process very demanding. The output level will also be very low with a high-quality amplifier, so it will need to be amplified to make it audible. If the distortion products are at -60dB referred to a 1V signal, you'll only have one millivolt of residual distortion. Wide-band cancellation techniques are not covered further here.
Distortion meters come in two main types - continuously variable (with switched ranges) or 'spot' frequency types. Common frequencies for spot frequency meters are 400Hz and 1kHz. This type lacks flexibility, but for a DIY meter you can have several separate notch filters to look at the frequencies you want. If you wanted to use three frequencies, a reasonable choice might be 70Hz, 1kHz and 7kHz. If the tuning resistance is 10k, you'd use 220nF caps for 72Hz, 15.9nF caps for 1kHz (12nF || 3.9nF) and 2.2nF caps for 7.2kHz.
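The capacitor values quoted follow from the standard RC tuning formula, f = 1/(2πRC). A quick check (the function name is mine):

```python
import math

def notch_frequency(r_ohms, c_farads):
    """Centre frequency of an RC-tuned notch: f = 1 / (2 * pi * R * C)."""
    return 1 / (2 * math.pi * r_ohms * c_farads)

# 10k tuning resistance with the capacitor values quoted in the text:
for c in (220e-9, 15.9e-9, 2.2e-9):
    print(round(notch_frequency(10e3, c), 1))
# gives roughly 72Hz, 1001Hz and 7234Hz - hence the '72Hz' and '7.2kHz' spot frequencies
```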
Continuously variable meters are much harder to build, and will almost always use something other than Twin-T filters so that the tuning pots are not a 'special order'. Variable capacitors are better, but make everything more difficult (and noisier) because of the high resistances needed. With continuously variable frequency, any mismatch between the gangs (pots or capacitors) requires careful adjustment of the null, usually with series (low value) pots. For example, if you use 10k tuning resistors/ pots, 10-turn 200Ω wirewound pots are ideal for fine tuning.
The most common method for measuring THD+N is single-frequency cancellation, where the only frequency that's rejected is the fundamental. We measure what's 'left over', and that becomes our measurement. The original input signal must be as close to a 'perfect' sinewave as we can get, and it's then a simple matter to determine the non-linearity contributed by the DUT. Measuring below 0.01% THD+N is not easy using this technique, because noise often becomes the dominant factor.
Despite the different implementations shown below, all notch filters rely on phase cancellation. At the test frequency, the signal is effectively divided into two 'streams', with one having 90° phase advance and the other having 90° phase retard. The other method is to allow one signal to pass unaltered, and retard (or advance) the phase of the other by 180°. Perfect cancellation can only occur at one frequency, where the two signals at the selected frequency are exactly 180° apart. This creates a notch, which in an ideal case would be infinitely deep at the tuning frequency. In practice, it's unrealistic to expect more than 100dB rejection of the fundamental, which leaves the original signal (the test frequency) reduced such that a 1V input results in an output of 10μV (0.001% THD). As noted above, a major part of the residual signal will be random noise.
Whether it's for simulations or bench testing, a method of generating distortion is useful. The example shown is a simple but effective way to generate distortion, with the ability to select even-order or odd-order distortion products. It assumes that the sinewave generator's output impedance is 600Ω. The distortion I measured was roughly 0.083% (588μV) for odd-order and 0.117% (828μV) for even-order. It's not 1% as you might guess from the ~1:100 ratio between the oscillator's output impedance and R1, because the diodes don't start to conduct until the voltage across them is about 0.65V, and the AC peak voltage is only 1V (707mV RMS). If you have an oscillator with a different output impedance, then change the value of R1 proportionally. The distortion that's generated is sensitive to level changes.
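The behaviour described - essentially no distortion until the diodes begin to conduct, then distortion that rises with level - can be approximated with an idealised soft clipper. This is only a rough stand-in for the Fig. 2.1 circuit, not a model of it; the 0.65V knee and the compression slope are assumptions for illustration:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs

def soft_clip(x, knee=0.65):
    """Crude stand-in for diode conduction: linear below the 'knee' voltage,
    progressively compressed above it (knee and slope are assumed values)."""
    return np.where(np.abs(x) < knee, x, np.sign(x) * (knee + 0.3 * (np.abs(x) - knee)))

def thd(signal, fundamental=400):
    """THD (percent) from the 2nd..10th harmonic bins of a one-second capture."""
    spectrum = np.abs(np.fft.rfft(signal))
    fund = spectrum[fundamental]
    harmonics = spectrum[2 * fundamental : 10 * fundamental + 1 : fundamental]
    return np.sqrt(np.sum(harmonics ** 2)) / fund * 100

low = thd(soft_clip(0.6 * np.sin(2 * np.pi * 400 * t)))   # peaks below the knee: clean
high = thd(soft_clip(1.0 * np.sin(2 * np.pi * 400 * t)))  # peaks past the knee: heavy THD
```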
The residuals have no particularly sharp discontinuities, so the waveforms are relatively smooth. That doesn't mean that they are exclusively low-order, because any (real world) distortion may extend to at least 10 times the fundamental frequency. However, most will be so far below the system noise level that they won't be audible. Simulating a distortion waveform with sharp discontinuities is not as easy, because an active device with (for example) crossover distortion has to be simulated (or constructed) as well. A crossover distortion 'generator' is shown in Fig. 2.1.1 (A).
The scope capture shown was with a single diode in series with 100k across the oscillator's 600Ω output. The distortion measurement was 0.085%, and is largely 2nd harmonic. The presence of higher order harmonics is indicated by the (comparatively) rapid transitions seen on the most positive peaks. I used 4 averages on the scope - not because the trace was particularly noisy, but to show a 'cleaner' waveform. Compare this trace to the red trace in Fig. 2.2, which is also even-order distortion, but simulated. The waveforms are not identical, but are very close. This demonstrates that the simulations are very close to reality (but only when all factors are included in the simulation). With two diodes the measured THD increased to 0.1%.
+ +Being able to look at a detailed spectrum is something I've just recently got working again, after a lengthy hiatus (mainly due to a number of PC reassignments). I have a TiePie HS3-100 PC scope which has better resolution than a stand-alone digital scope. The level into the Fig. 2.1 distortion generator was adjusted to get exactly 0.1% THD+N, and the spectrum shows the harmonics. The 2nd, 3rd and 4th harmonics are visible above the noise. Overall noise is at about -95dBV, with the 400Hz tone at -5dBV (562mV RMS). The harmonics are at -67dBV (447μV), -70dBV (316μV) and -76dBV (158μV). Using the formula shown above, that gives a THD (without noise) of 0.076%. There is 50Hz mains hum visible, along with its harmonics (up to the 5th). To be able to get lower overall noise and better resolution requires far better equipment than I can afford.
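THD (excluding noise) is just the root-sum-square of the harmonic voltages divided by the fundamental. A quick sketch of the arithmetic, using round illustrative values rather than the exact levels measured above:

```python
import math

def thd_percent(v_fundamental: float, v_harmonics: list[float]) -> float:
    """THD (noise excluded): root-sum-square of the harmonic voltages,
    expressed as a percentage of the fundamental."""
    rss = math.sqrt(sum(v * v for v in v_harmonics))
    return 100.0 * rss / v_fundamental

def dbv(volts: float) -> float:
    """Convert volts RMS to dBV (dB relative to 1V RMS)."""
    return 20.0 * math.log10(volts)

# Illustrative only: 1V fundamental with a 1mV 2nd and 0.5mV 3rd harmonic
print(round(thd_percent(1.0, [1e-3, 0.5e-3]), 4))   # 0.1118 (%)
print(round(dbv(1e-3)))                              # -60 (dBV)
```

The RSS sum means the largest harmonic dominates - halving the smaller components barely moves the total.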
+ +It's also worth noting that the 600Ω source resistance has a noise contribution of 3.16nV/√Hz, which works out to a total noise level of 1μV for a 100kHz bandwidth (-120dBV). For the audio range (20kHz bandwidth), that falls to 0.438μV. Most of the noise seen is from the PC scope adapter, with a small contribution from the 400Hz filter used at the output of the signal generator.
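Those figures follow from the standard Johnson noise relation v = √(4kTRB). A short check (assuming a temperature of ~300K; the exact temperature shifts the result slightly):

```python
import math

K_BOLTZMANN = 1.380649e-23  # J/K

def johnson_noise_v(r_ohms: float, bandwidth_hz: float,
                    temp_k: float = 300.0) -> float:
    """RMS thermal (Johnson) noise voltage of a resistor: sqrt(4*k*T*R*B)."""
    return math.sqrt(4 * K_BOLTZMANN * temp_k * r_ohms * bandwidth_hz)

# 600 ohm source: spectral density, then the two bandwidths from the text
print(johnson_noise_v(600, 1.0))     # ~3.15e-9  V/sqrt(Hz)
print(johnson_noise_v(600, 100e3))   # ~1.0e-6   V (about -120dBV)
print(johnson_noise_v(600, 20e3))    # ~0.45e-6  V for the audio band
```

Noise voltage scales with the square root of bandwidth, which is why the 20kHz figure is √5 (not 5) times lower than the 100kHz one.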
+ +One way you can improve the resolution of measurements is to use a good notch filter to remove (most of) the fundamental, then use FFT to examine the harmonics. You will need a very good preamp to boost the level sufficiently to allow the PC scope (or high-quality sound card) to resolve the distortion components, remembering that if you start with 1V and measure 0.1% THD, you only have 1mV of signal coming out of the notch filter. A good opamp can raise the level enough to make it easy to measure with suitable software. The preamp has to be low-noise, but ultra-low distortion isn't a requirement.
+ + +This topic deserves its own sub-heading, because it's so often referred to, and poorly understood - or so it seems from forum queries and the number of articles on-line. There are countless websites where the author(s) still claim it's a common problem. It isn't. Most people with sufficient electronics knowledge know what crossover distortion is, and a few may also know what it sounds like. The point that seems to be missed is why negative feedback doesn't cure it. An 'ideal' amplifier (having effectively infinite gain) will reduce crossover distortion to negligible levels, but a broad-band gain of more than 100dB (100,000 V/V) is needed just to get it down to 0.001% - and due to its nature, it may still be audible! No 'real life' circuitry can provide enough gain at all frequencies to overcome the crossover distortion caused by an un-biased output stage.
+ +Crossover distortion was referred to in the introduction as one type of distortion that cannot be removed by feedback. This requires further explanation, because it probably doesn't make sense. Other forms of distortion are reduced, so why not crossover? The answer lies in the cause of crossover distortion in the first place. Refer to Fig. 2.2.1 (A) showing a basic amplifier that will have crossover distortion because the output transistors are unbiased. The second amp (B) has bias. Both amps have a gain of 10 (20dB), and were simulated with a 10mV (peak) input, resulting in an output of 100mV (peak), or 70mV RMS. The test frequency was 1kHz. Close to identical results will be obtained if you build the circuit. An 'ideal' opamp reduces the distortion simply because it has nearly infinite gain and slew rate, but it is still unable to eliminate it.
+ +Crossover distortion (sometimes called crossover 'notch' distortion) is generated to some degree in any Class-B or Class-AB output stage, as the output devices conduct on alternate half-cycles (see Fig.2.2.2). There is always some discontinuity during the changeover, but it hasn't been a major concern for a very long time. In reality, 'true' Class-B is almost unheard of, with all common designs using biasing to ensure that neither output transistor turns off with no (or a zero-crossing) signal. Despite innumerable websites (forum sites in particular) complaining about it, crossover distortion has not been a major failing of any passably sensible power amplifier.
+ +Real opamps have real limits, and the opamp's output voltage must swing ±650mV before the transistors can conduct. This takes time with an AC input. A good opamp may have a slew rate of 20V/μs, so it will take 50ns to change by 1V. When the input signal level is zero, the opamp has nothing to amplify (and it's operating 'open-loop'). The circuit's overall gain is zero because the output transistors are both switched off. The transistors won't turn on until the opamp's output is at ±650mV. A few microvolts of input will be enough to create this, but the opamp is operating open-loop (no feedback) until either Q1 or Q2 starts to conduct. By applying (just enough) bias using R6, R7, D1, D2 and C1, the transistors will conduct (about 1.3mA in a simulation), so the overall gain no longer falls to zero at zero volts output. If the zero bias amplifier is tested with a notch filter, you'll see an output similar to that shown next. Crossover distortion gets worse with reduced signal levels.
+ +There is an expectation that some non-linearity must exist in any real circuit, and to obtain good performance it must be minimised. The Fig. 2.2.1 (B) circuit does that by ensuring that Q1 and Q2 pass some current, so the output stage gain cannot fall to zero. This is a simplification, because power transistors used to have very low hFE at low current. Most of the ones we use now have very good gain linearity (even down to a few milliamps for the best of them). By applying enough bias to ensure the output devices are within their linear range, crossover distortion is all but eliminated in the output stage. Transistors such as the MJL3281/1302 (NPN/PNP) have almost perfect gain linearity down to 100mA collector current or less. The optimum bias current is determined by testing the final amplifier for lowest distortion at ~1W output.
+ +Fig. 2.2.2 shows exaggerated crossover distortion. The 'notch' is the point where neither Q1 nor Q2 is conducting, so there is no output. The amount of distortion is far greater than that shown in the residual below, simply because the lower levels are invisible, even on the simulator. Reality is no different, and even quite unacceptable levels of crossover distortion may not be visible on an oscilloscope trace. Because this applies to the simulator too, I made it a great deal worse for this trace than was used to produce Fig. 2.2.3. Just because you don't see it on a scope doesn't mean it's not there! The simulator tells me that the distortion is over 2.6%, and the harmonics are all greater than 10mV to beyond 20kHz, with the third harmonic being at 100mV for a 5.5V peak output.
+ +To understand why the measurement is often inaccurate, consider a 700mV output signal subjected to crossover distortion. The residual (seen above) has peaks of 47mV, but an RMS measurement shows only 5.6mV. An average-reading meter (used in most distortion meters) will indicate ~2.15mV, which is an even worse underestimate! So, while the distortion measurement may show less than 1% THD, the output will sound dreadful. The simulator I use will tell me that the rough-and-ready 'amplifier' I created for the simulation (Fig. 2.2.1 (A)) has only ~0.8% THD at 700mV RMS output, but a fast Fourier transform (FFT) shows harmonics extending to over 100kHz with not much attenuation (the harmonics are odd-order, and are all greater than 2mV [with a 1V peak output signal] up to 20kHz). The spiky nature of the waveform shown is typical of distortion that may measure alright, but is easily distinguished by ear as being inferior to another amplifier with the same measured THD but without crossover distortion. This is why it's so important to understand how these measurements work.
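The gap between peak, RMS and average readings is easy to demonstrate with a toy dead-band nonlinearity - a crude stand-in for an unbiased output stage, not the circuit used for the figures here:

```python
import math

def crossover(v: float, dead: float) -> float:
    """Toy dead-band nonlinearity: zero output inside the dead band,
    shifted toward zero outside it (illustrative model only)."""
    return 0.0 if abs(v) <= dead else v - math.copysign(dead, v)

N = 100_000
sine = [math.sin(2 * math.pi * i / N) for i in range(N)]   # 1V peak, one cycle
out = [crossover(v, 0.05) for v in sine]                   # 50mV dead band

# Remove the best-fit fundamental - roughly what a perfectly tuned
# notch filter leaves behind (harmonics are orthogonal to the sine)
g = sum(o * s for o, s in zip(out, sine)) / sum(s * s for s in sine)
resid = [o - g * s for o, s in zip(out, sine)]

peak = max(abs(r) for r in resid)
rms = math.sqrt(sum(r * r for r in resid) / N)
avg = sum(abs(r) for r in resid) / N   # what an average-responding meter sees

print(avg < rms < peak)   # True - the spikier the residual, the bigger the gap
```

With any spiky residual the rectified-average reading is the lowest of the three, which is exactly why an average-responding meter flatters crossover distortion.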
+ +If the same 'rough-and-ready' amplifier's output is increased to 7V RMS output, the distortion component increases to 10mV (RMS) (3.45mV average) but 150mV peak! The calculated or measured distortion is reduced to 'only' 0.14% (0.048% average-reading), but the spiky waveform is still easily heard with a single tone or audio, and intermodulation products are very audible. This may have been the situation with some early transistor amplifiers, which measured 'better' than equivalent valve amps of the day, but sounded worse. An oscilloscope shows the problem very clearly. You can also use a monitor amplifier to hear the output. Be very careful, as the notch filter is very sharp, and even a tiny frequency change will cause the output to increase by anything up to 60dB (instant monitor amp clipping and very loud). The difference between valve and transistors is more complex than just distortion performance, with output impedance causing audible frequency response changes.
+ +We expect that any modern amplifier using good (linear) output transistors will have undetectable levels of crossover distortion. Most integrated circuit power amps (e.g. LM3886, TDA7293) are also very good. Few (if any) commercial amplifiers will be found wanting either. It used to be that only Class-A amps could be counted on to lack crossover distortion, but this is no longer the case. One point that is entirely missed by nearly everyone is that crossover distortion and clipping distortion are essentially identical, but the phase is changed. For a given percentage of crossover or clipping distortion, the harmonics have the same amplitude and frequency distribution. In reality, this is not something we worry about - we expect distortion when an amplifier is overdriven, but not at low power levels.
+ +As an example of a 'typical' IC power amplifier, I ran a test on my Project 186 workbench amplifier. At 1W output (2.82V RMS/ 8Ω) the distortion was 0.004%, most of which was noise. There was zero evidence of crossover distortion. When the output level was raised to 5V RMS (3.5W), the measured distortion was below my residual of 0.002% THD+N. With such low harmonic distortion, that also means that IMD (intermodulation distortion) is low, and again (probably) below my measurement threshold. The same applies to the other project amplifiers on my site where a PCB is offered. None has any evidence of crossover distortion if set up according to the instructions. There is one exception - Project 68. It's designed specifically as a subwoofer amp, and the small amount of distortion is inaudible with any subwoofer loudspeaker. While it's measurable, no-one has ever said it's audible (and I've run many tests on it, including full range audio).
+ +The measured distortion at 1W output is less than ~0.2% based on the peak distortion residual. For reference, a spectrum of the distortion is shown above, with an output of 40W at 100Hz. The only visible distortion products are 70dB or more below the peak, so despite the crossover distortion, it still sounds clean. With bass only (as intended), the loudspeaker is unable to reproduce the higher frequencies anyway. While it's possible to bias the output stage for no visible crossover distortion, there's no reason to do so. The design of P68 was specifically to ensure high power and complete thermal stability, without any adjustments.
+ +Crossover distortion is not limited to transistor amplifiers - it happens with valve (vacuum tube) amps as well. There's usually a fine balance between getting clean output at low power and not pushing the valves past their limits at high power. This has become harder because the valves you can buy today are nowhere near as good as those from the 1960s and 1970s. Amps designed to use RCA, Philips, AWV (Australian Wireless Valve company), Sylvania etc., etc. will often stress the valves, but the old ones could take it (and they were cheap then as well). Modern valves are generally more easily damaged by overloads, so the grid bias voltage on the output valves may be set a little more negative to reduce dissipation. This can lead to crossover distortion. It's a lot 'softer' than transistors though, and usually isn't as objectionable. Almost all valve amps will develop crossover distortion when the output is driven to hard clipping - this is discussed in the Valves section of the ESP website.
+ +A technique for minimising crossover distortion is to use a small bias current from the output to either supply rail. However, this 'crossover displacement' technique simply moves the 'notch', but it does not eliminate it. The technique may be referred to as 'Class-XD' (for crossover displacement) in some texts. The offset current forces one of the output transistors into Class-A for very low-level signals. It might be possible to move the notch far enough from zero to make measurements look better, but it's a band-aid, and doesn't solve the problem.
+ +When we look at notch filters, you'll see that even with feedback around the filter circuit, the notch depth is barely affected. This is a very similar phenomenon - when the notch is perfectly tuned, the circuit has no gain, and feedback is unable to restore flat response. From this we can deduce that feedback can only work when the circuit doesn't have (close to) zero gain. This should be self-evident, but I've not seen this aspect of feedback and distortion covered elsewhere.
It's often hard to know what the signal levels of distortion represent in real terms. Sometimes, distortion may be described as a level in dB rather than a percentage. While quoting distortion in dB is not conventional, it actually tells you more about the likely audibility of distortion products than a simple percentage. It's easy enough to convert from one to the other once you know how to do it. Measuring very low distortion levels means that you either have to use a metering circuit that can resolve very low residual voltages, or the input voltage has to be raised to a sufficiently high voltage that you don't have to be able to measure a few microvolts.
+ +Percentage | 1 Part Per ... | dB | Residual (1V reference)
1.0% | 100 | -40 | 10mV
0.316% | 316 | -50 | 3.16mV
0.1% | 1,000 | -60 | 1mV
0.0316% | 3,162 | -70 | 316μV
0.01% | 10,000 | -80 | 100μV
0.00316% | 31,623 | -90 | 31.6μV
0.001% | 100,000 | -100 | 10μV
0.000316% | 316,228 | -110 | 3.16μV
0.0001% | 1,000,000 | -120 | 1μV
0.0000316% | 3,162,278 | -130 | 316nV
0.00001% | 10,000,000 | -140 | 100nV
Based on Table 2.3.1, intermediate percentages are easily worked out once you know the base ratios. While it was clearly demonstrated above that we can hear signals well below the noise floor, it stands to reason that if the distortion is at least 60dB down, it's very unlikely that it will be audible. However, that does not mean that 0.1% THD at full power will be maintained at lower levels. An amplifier with crossover distortion (in particular) will show that the distortion increases as the output level is reduced. If you look at any opamp distortion graph (along with many power amplifiers and other electronics), you'll see that the distortion (to be exact, THD plus noise) increases at very low levels. This is almost always not distortion, but residual noise. The smallest figures in the table are of academic interest only - anything below 0.001% THD can generally be ignored.
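Converting between the two notations is a one-liner each way: dB = 20 × log10( % / 100 ). A sketch:

```python
import math

def thd_percent_to_db(pct: float) -> float:
    """Distortion percentage expressed as dB below the fundamental."""
    return 20.0 * math.log10(pct / 100.0)

def db_to_thd_percent(db: float) -> float:
    """Level in dB (below the fundamental) expressed as a percentage."""
    return 100.0 * 10 ** (db / 20.0)

print(thd_percent_to_db(0.1))    # -60.0
print(db_to_thd_percent(-80))    # 0.01
print(round(thd_percent_to_db(0.5), 1))   # -46.0 (1 part per 200)
```

Note that 20 (not 10) is the multiplier, because these are voltage ratios rather than power ratios.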
+ +Building a metering circuit that can measure 10μV is challenging (a rather serious understatement). Below that, the task becomes even more difficult. Likewise, having a preamp circuit that can boost the input level to a worthwhile degree without adding noise and distortion can be no less challenging. Fortunately, we have opamps available that have vanishingly low distortion and low noise, but the best of them will be expensive. Ideally, the input reference level should not be less than 10V RMS, so instead of trying to measure 10μV we'll have 100μV. However, this is still difficult to achieve. It's no accident that even the best 'old-school' distortion meters have a lower full-scale limit of 0.01% THD+N, as it's possible to build a metering circuit that can measure between 100μV and 1mV without too many compromises. Most have a minimum full-scale reading of 0.1%, and the minimum distortion that can be reliably read on the meter is about 0.02%.
+ +Having an oscilloscope output that lets you see the distortion waveform isn't just 'nice to have' - IMO it should be used every time you look at distortion. You'll sometimes see 'artifacts' that are clearly the result of a sharp discontinuity; most commonly this will be remnants of crossover distortion. Even with an oscilloscope, the residual can be extremely hard to see clearly due to noise. For thermal (white) noise, modern scopes have the ability to use averaging, which eliminates most of the noise component and leaves only the distortion waveform.
+ +Other things that can cause havoc include 50/60Hz hum (or 100/120Hz buzz), caused by ground loops or power supply ripple. This might be from the DUT, but is often the result of a ground loop created between the oscillator, DUT and distortion measuring system. If you're sure that any hum is the result of an external ground loop, a high-pass filter can be used to remove it. A standard frequency is 400Hz, but that is intended for measurements of 1kHz and above. Some analysers also include a low pass filter (typically at 80kHz) to remove excess noise, without seriously impacting the measurement accuracy. An additional 30kHz low-pass filter may be provided on some analysers.
+ +Measuring (and quantifying) distortion is not an easy task. The equipment has to be made to a very high standard for accurate measurements below ~1%, and those requirements get harder to meet as you attempt to measure lower levels. It shouldn't come as a surprise that distortion analysers are expensive, but if you know how to do it, getting reliable measurements down to around 0.02% is within reach for the dedicated DIY constructor.
+ + +The notch filter technique is still the most common for distortion meters. Complex (and very expensive) test systems such as those by Audio Precision now perform most of their processing digitally, but this is not an option for anyone who doesn't have a spare US$20k or more to buy one. There are still many notch filter based distortion measuring sets available, both new and second-hand. The Project 52 Distortion Analyser has been on-line since 2000, and it's based on a Twin-T notch filter.
+ +There are many different ways to make a notch filter. All of them rely on phase cancellation at a single frequency to make the fundamental 'disappear', leaving behind the harmonics and noise (including hum or buzz from the power supply). The filters have (theoretically) infinite notch depth, but this is impossible to achieve in reality. The notch is extremely sensitive to any frequency drift from the oscillator, or small changes to capacitance and/or resistance due to temperature variations. If you have a notch depth of 80dB, the measurement threshold is 0.01%, and the frequency only has to change by perhaps 0.01Hz for the signal amplitude to rise by 6dB. The +3dB bandwidth of the notch is extremely small - less than 0.01Hz (10mHz) depending on the depth (maximum rejection).
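To see just how sharp the skirts are near the null, take the transfer function of an ideal, perfectly balanced Twin-T, H(jω) = (ω0² − ω²) / (ω0² − ω² + 4jωω0), and evaluate it slightly off-tune. This is the idealised filter - a practical meter with feedback has different skirts - but the scaling near the null is similar:

```python
import math

F0 = 1061.0   # notch frequency in Hz (the 10k/15nF example used later)

def twin_t_db(f: float) -> float:
    """|H| in dB for an ideal, balanced Twin-T:
    H(jw) = (w0^2 - w^2) / (w0^2 - w^2 + 4j*w*w0)."""
    w, w0 = 2 * math.pi * f, 2 * math.pi * F0
    num = w0 * w0 - w * w
    mag = abs(num) / math.hypot(num, 4 * w * w0)
    return 20 * math.log10(mag) if mag else float("-inf")

for df in (0.01, 0.02, 0.1, 1.0):
    print(f"{df:5.2f}Hz off-tune -> {twin_t_db(F0 + df):7.1f} dB")
```

Close to the null the residual amplitude is proportional to the detune, so every doubling of the frequency error raises the leaked fundamental by 6dB - which is why a -100dB notch demands oscillator stability in the milli-Hertz range.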
+ +Fig. 3.1 shows the response of a notch filter that can achieve a -100dB reduction of the fundamental frequency. The response is based on a Twin-T filter, and I've used the combination of 15nF capacitors and 10k resistors in most examples, giving a notch frequency of 1.061kHz. The majority of these filters use the 'standard' R/C filter formula ...
+ +f = 1 / ( 2π × R × C )
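Plugging in the values used throughout (10k and 15nF) confirms the quoted 1.061kHz:

```python
import math

def notch_frequency(r_ohms: float, c_farads: float) -> float:
    """Standard R/C notch-tuning formula: f = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# The 10k / 15nF combination used in most of the examples
print(round(notch_frequency(10e3, 15e-9), 1))   # 1061.0 Hz
```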
Some filters make tuning easier than others. Fewer precision tuning components makes it easier to locate the parts necessary, but more (active) electronics may be required to achieve a good result. Compromise is an important part of the design process, and all notch filter topologies have strengths and weaknesses. It's almost always necessary to use opamps in the notch filter circuit, either to make it work at all, or to improve its performance.
+ +Some manufacturers (notably Hewlett Packard) have used variable capacitors in place of variable resistors. These have the advantage of almost zero electrical noise, but capacitor 'tuning gangs' (as used in early radio tuners) have low capacitance, so very high value resistors are necessary for measuring low frequencies. Stray capacitance also causes problems, but they're not insurmountable.
+ +It is possible to build a notch filter using an inductor and capacitor, but the series resistance of the inductor will seriously limit the notch depth. One variation that can work is to use a NIC (negative impedance converter) based gyrator (see Active Filters Using Gyrators - Characteristics, and Examples). 'Section 11 - Impedance Converters' covers this type of gyrator, and a simulation shows that better than 50dB attenuation is possible. However, this is nowhere near as good as any of the circuits that follow, and tuning isn't simple.
+ +I've shown AC coupling and a series resistor (10Ω) in the Twin-T filter, but these are not included in the others. However, the cap is always necessary to prevent any DC offset from affecting following stages - especially distortion amplifiers and metering circuits. The resistor prevents oscillation if the load is capacitive (shielded cable for example).
+ +Most of the notch filters shown below are capable of a -100dB notch when tuned perfectly. One possible exception is the MFB (multiple feedback) notch, which is limited to about -90dB (opamp dependent to some degree). For some applications this may still be sufficient, but you can't measure distortion below about 0.04% because too much fundamental frequency will get through the notch filter. Alternatively, any two notch filters can be used in series, providing close to an infinite notch depth. I leave it to the reader to work out how to measure 1μV without noise swamping any distortion residual.
The first notch filter to be covered here is the venerable Twin-T (aka parallel tee). It's been a mainstay of distortion meters for a long time, because it's relatively easy to implement for 'spot' frequencies. Once, one could get 30k+30k+15k wirewound pots that made tuning over a decade a fairly simple task. I have one from a distortion meter I built around 30 years ago (perhaps more - I don't recall exactly). These are no longer obtainable, and even when I got mine they were fairly uncommon.
+ +A passive Twin-T notch can achieve -100dB quite easily, but the Q is fairly low, which causes the 2nd and 3rd harmonics to be attenuated. The error is over 8dB for the 2nd harmonic, and about 2.6dB for the 3rd. Most equipment has low levels of even-order distortion, but if one is measuring a valve amplifier, that can be very different. The solution is to add feedback. The feedback cannot eliminate the notch, but it will reduce the maximum depth. It does minimise the response errors for low-order harmonics. The maximum permissible error depends on the designer (it may or may not be specified), but anything over 1dB is unacceptable. It's not difficult to keep the maximum error under 0.25dB while retaining a notch depth of -100dB or more. That allows distortion measurement down to 0.001%. Excellent opamps are needed if that's to be achieved.
+ +To make tuning easier for manual adjustments, the Q can be made variable. Initial (rough) tuning is done with a low Q, and it's increased as the operator gets close to a complete null. The Twin-T filter should be driven from a low impedance, but it's not particularly sensitive to the source impedance. It's not a good idea to use anything greater than 100Ω, but it's not as critical as some other topologies. The tuning is determined by ...
+ +f = 1 / ( 2π × RT × CT )    (where CT2 = 2 × CT, and ½RT = RT / 2)
The Twin-T has been popular for a long time, because it's so easy to implement. Feedback is provided by followers, so even 'pedestrian' opamps can give good performance. The insertion loss (loss of signal at frequencies other than the notch) is 0dB, and it's fairly easy to trim the resistance to get a very good null. The notch shown in Fig. 3.1 was derived from a Twin-T circuit. The greatest disadvantage of the Twin-T is that to make it fully variable, you need a 3-gang pot with one gang half the value of the others. You can use a 4-gang pot with two elements in parallel to get the half value, but both options are very limited now.
+ +Note that the Q is shown as variable (VR1) in Fig. 3.1.1, but it's normally fixed. All other notch filters shown use fixed Q. For unity gain buffer feedback systems, a ratio of between 1:8 and 1:10 is usually optimum. For a ratio of 1:8, if VR1 is replaced with a 10k resistor, R1 becomes 1.25k. My preference is to use 1k and 10k, which for most filters results in an error of less than 0.5dB for the second harmonic.
+ +In practice, RT (either one) and ½RT will be a slightly lower value than required, and variable resistors (pots) used in series to allow the notch to be tuned. Most distortion meters that use the Twin-T circuit have two pots in series for each location, with a resistance ratio of ~10:1. If the nominal value for RT is 15k, you'd use perhaps 13k (12k + 1k), with a value for VR3 of 5k, and VR4 as 500Ω. ½RT could be 5.1k, with VR1 a 5k pot. A 500Ω pot (VR2) is used in series for fine control. All pots are very sensitive when you have notch depth of -100dB. Just 1Ω added to RT (either one) will affect both frequency and notch depth.
+ +All notch filters are similarly afflicted.
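The sensitivity is easy to quantify: from f = 1/(2πRC), a proportional resistance change produces the same proportional frequency change (|df/dR| = f/R). A sketch, assuming both tuning resistors move together (changing only one also unbalances the null itself):

```python
def detune_per_ohm(f0_hz: float, r_ohms: float) -> float:
    """Frequency shift caused by a 1-ohm change in the tuning resistance,
    from f = 1/(2*pi*R*C): |df/dR| = f/R (magnitude returned)."""
    return f0_hz / r_ohms

# 15k tuning resistors at the 1.061kHz example frequency
print(round(detune_per_ohm(1061.0, 15e3) * 1000, 1))   # 70.7 mHz per ohm
```

Set against a notch bandwidth measured in milli-Hertz, a shift of ~70mHz per ohm shows why even 1Ω of pot movement is very noticeable at a -100dB null.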
+ + +Bridged-T filters normally have low Q and are more likely to be found in equalisers and other 'mundane' audio circuits. If all values are optimised, the Bridged-T circuit can be used as a deep notch filter. The values of R1 and R2 are critical, with a ratio of 5:1 (10k and 2k as shown). Unlike the Twin-T, the circuit must be driven from a low impedance.
+ +The capacitance values are 'interesting'. The value (CT) is calculated by the standard formula shown above, but the bridging cap is divided by √10 (3.162) and the 'tee' cap is CT multiplied by √10. The net result is that the 'tee' capacitor is 10 times the value of the bridging capacitor, and this produces a good notch. This isn't a common arrangement, but several manufacturers have used it over the years.
+ +If RT is 15k as with the other filters, CT × √10 is 31.6nF and CT / √10 is 3.16nF. It would be easier to use 47nF and 4.7nF caps with 10k resistors to get a frequency of 1.0708kHz. This gets harder if you can't set the input frequency very accurately.
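Because the two capacitors sit √10 either side of CT, the tuning frequency is set by their geometric mean. A quick check of the 47nF/4.7nF suggestion, assuming equal 10k timing resistors:

```python
import math

def bridged_t_frequency(r_ohms: float, c_bridge: float, c_tee: float) -> float:
    """Bridged-T notch frequency from the geometric mean of the two
    capacitors, per the CT*sqrt(10) / CT/sqrt(10) scheme in the text
    (equal timing resistors assumed)."""
    c_eff = math.sqrt(c_bridge * c_tee)
    return 1.0 / (2.0 * math.pi * r_ohms * c_eff)

# 47nF and 4.7nF with 10k resistors, as suggested
print(round(bridged_t_frequency(10e3, 4.7e-9, 47e-9), 1))   # ~1070.8 Hz
```

The convenient 10:1 capacitor ratio is what makes standard E12 values (47nF/4.7nF) land so close to the 1.0708kHz target.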
+ + +The Wien bridge has one advantage over the Twin-T in that there are only two tuning elements. This means that a readily available dual-gang pot can be used for tuning, so it's possible (even today) to get suitable tuning pots. While the Wien Bridge is simpler than a Twin-T by itself, the number of support components means that the final filter is quite a bit more complex. The performance is (close to) identical with the values shown.
+ +The circuit needs quite a bit of feedback to get flat response down to the 2nd harmonic, and that's controlled by R9. Reducing the value provides little benefit, as the response at 2kHz (for the 1.061kHz fundamental) is less than 1dB down. With 3.3k the pass-band response is close to unity, but that changes if the Q is altered. Wien bridge notch filters always need a gain stage, and it can be easy to cause an overload if you're not careful. The Fig. 3.2.1 version has a gain of ×1.6 (4.1dB) at the output of U2B, but as that's the output, it won't go unnoticed. However, it must be considered when setting the level prior to a THD measurement.
+ +A particular disadvantage of the Wien bridge is its insertion loss. It's typically 10dB, and this has to be compensated by using extra gain. With the extra gain comes noise and an inevitable upper frequency limit that's caused when opamps are used with gain greater than unity. It's not insurmountable, but you have to use much better opamps than expected.
+ +Another disadvantage of all Wien bridge notch filters is that it's not easy to set the reference level. With a Twin-T you simply disconnect the 'tail' or short between input and output, but Wien bridges are trickier. Because they have gain stages, the overall level isn't unity. Variable Q is possible (but it's likely to be difficult). Because the gain changes when the Q is changed, additional compensation would be required. These issues haven't stopped anyone from using the Wien bridge though - everything can be solved with a little ingenuity.
+ +The version shown in Fig. 3.3.2 is the most practical. With no input amplifier to contribute distortion, it's not affected by the source impedance (within reason). This means that an attenuator can be used to measure high levels, while low levels can be amplified after the notch filter. The input level must not exceed the opamp's maximum input voltage limits, and the insertion loss is only 1.5dB. Tests show that a source impedance of even 2.2k has only a minor effect, but you need very good opamps for U1 and U2 to get a good result. Anything below the specs of an NE5532 will probably be disappointing.
+ +Layout is important with any notch filter, as stray capacitance can play havoc with the tuning frequency. Realistically, R1 and one of the timing resistors (RT) will need a 200Ω multi-turn wirewound in series with each fixed resistor to allow tuning. Tuning caps should ideally be polystyrene, but polypropylene will probably be alright. Polyester has a noticeable temperature coefficient that will cause drift. If you can get a notch depth of 100dB, the bandwidth is measured in milli-Hz, so low drift and a very stable test oscillator are very important.
+ + +The (bi-quadratic) state-variable filter is one of the most flexible filters ever designed. It's based on a pair of integrators, but has feedback paths that provide simultaneous high-pass, band-pass and low-pass outputs. If the high-pass and low-pass outputs are summed, a notch filter is obtained. It has variable Q, and can produce a notch depth of at least 100dB with good opamps. There are only two networks that have to be tuned, so dual-gang pots will work for coarse tuning, with series (lower value) pots for fine-tuning. It's reasonably tolerant of component tolerances, but it requires four opamps for the filter and summing stage. It must be driven from a low impedance source to ensure the desired gain and Q are achieved.
+ +The Q is adjusted by varying R2. With 1.8k as shown, the 2nd harmonic is attenuated by less than 0.5dB. As with other circuits shown, the frequency is changed using RT and CT. When combined with an input buffer the circuit uses five opamps. Ideally, all of them need to be 'premium' devices, with wide bandwidth and low noise. Using 'ordinary' opamps will result in a loss of performance. While the state-variable and 'biquad' (below) filters are both bi-quadratic, they behave very differently.
+ +The other bi-quadratic filter looks like a state-variable, but it's quite different. The filter is commonly called a biquad, and unlike the state variable with low-pass, high-pass and bandpass, the biquad only provides bandpass and low-pass. The biquad comprises a lossy integrator (U1) followed by another 'ideal' integrator (U2) and then an inverter (U3) - the last two can be reversed with no change in performance. This subtle change provides a circuit that behaves differently from the state variable filter. For a biquad, as the frequency is changed, the bandwidth stays constant, meaning that the Q value changes (Q remains constant with a state-variable filter).
+ +I'm not going to describe the tuning formula, as it depends on too many variables. The values shown in the circuit provide a frequency of 715.3Hz, with a notch depth of 70dB as simulated. Because the Q is not constant, the biquad filter is not suitable for a variable-frequency notch filter. Both the state-variable and biquad filters are comparatively insensitive to component tolerance. Of the two, the state-variable is (IMO) a better filter with fewer interdependencies.
+ + +A notch filter using phase shift networks (aka all-pass filters) looks superficially similar to the state-variable, but if implemented well it can have higher performance. There are two identical all-pass filter stages (U1 and U2), with U3 summing the phase-shifted and direct signal, as well as providing feedback to prevent the 2nd harmonic from being attenuated. R1 must be fed from a low-impedance source. If the value is reduced, the gain of the circuit is increased, and the Q is reduced.
+ +This filter is used in the Sound Technology 1700B distortion analyser, which can measure down to below 0.01% THD full-scale (distortion can be measured as low as 0.002%). Like any filter that uses opamps in the notch filter itself, they must be 'top shelf' devices, with wide bandwidth and very low distortion. The 1700B used Harris Semiconductor HA-2605 and NE5534 opamps in the notch filter. Fine-tuning was achieved using LED/LDR optoisolators, in common with most other auto-nulling distortion meters.
+ +Please Note: There was an error in Fig 3.5.1 that has been corrected. R9 was meant to go to the -ve input of U3 as shown now. I apologise for any confusion this may have caused.
+ + +There are other topologies that you may come across apart from those described above. They are far less common and generally don't have a notch depth that's suitable for low distortion measurements. Two that I came across during research into this article are a multiple-feedback band-pass filter with subtraction to provide a notch, and the rather uncommon Fliege notch filter. Neither of these filters can achieve much better than 70dB attenuation, but two can be used in series to get effectively infinite attenuation of the fundamental. I'm not entirely convinced that this is such a great idea, but it does mean that 'pedestrian' opamps can be used while still getting a very good end result. Of course, the same can be done with any of the other filters shown.
+ +Using a pair of notch filters will complicate the tuning process, but can produce a notch depth of close to 200dB. Anyone who thinks that measuring less than 10μV distortion residual is even possible probably needs to brush up on the noise contribution of every component in the circuit. Tuning will also be a nightmare, as the bandwidth of the notch is incredibly narrow. At maximum notch depth, the frequency has to be within ~20mHz (that's milli-Hertz, or 0.02Hz!). The tiniest amount of drift from the oscillator or tuning components (caps and resistors) will reduce the notch depth substantially.
+ +No commercial distortion analyser has ever attempted to measure distortion at such high resolution, and very expensive equipment (e.g. Audio Precision) is needed to even attempt such measurements. These invariably measure the spectrum, not a 'simple' THD+N measurement. That's not to say it can't be done, but I certainly wouldn't bother. A simpler method is to use a decent notch filter to remove most of the fundamental, then use PC spectrum analysis software to look at the residual signal and its harmonics. With 16-bit resolution, most PC sound cards will actually do a fairly decent job. This idea has been published as Project 232, and it can rival the expensive kit - at least for distortion measurements.
+ +The first 'miscellaneous' design shown is based on a Fliege notch filter. This is not a common topology, but it is reasonably economical. With particularly good opamps it can achieve a notch depth of -90dB or more, but the values of R1 and R2 are critical. They must be identical to get a good notch, so one of them should be reduced, with a trimpot added in series to allow adjustment. Even a mismatch of just 10Ω will reduce the notch depth dramatically.
+ +The Fliege filter is one example of several that use a negative impedance converter (NIC). This typically (but not always) 'synthesises' an ideal inductor as part of the tuned circuit. The performance can be very good, but you must treat any circuit that uses negative impedance with caution. A seemingly minor resistance variation may cause oscillation.
+ + +The second alternative is a multiple-feedback (MFB) bandpass filter, with a summing amplifier to mix the input voltage with the filter's output. This can achieve a deep notch with no significant attenuation of the 2nd harmonic. The notch depth is acceptable, with a reasonable expectation of better than 70dB. The values needed for a 1kHz notch are different from all the other circuits, because of the way an MFB filter works.
+ +With the values shown, the frequency is 996Hz, assuming exact values for the resistors and capacitors. It will almost certainly be necessary to tweak one or more values to get the frequency you need. The tuning range is limited, and if the frequency is changed it may not be possible to get a deep enough null. The input frequency (from a suitable sinewave generator) forms part of the nulling process, and that can make accurate readings very difficult.
+ +The optimum null is achieved when the currents through R4 and the series connection of R5, VR1 and VR2 are exactly equal with an input signal at exactly the frequency of the tuned circuit (the MFB filter). With the values shown, the frequency is 996.018Hz, the gain is 1.063 and Q is 3.127, based on the calculations below.
+ +The formulae are somewhat daunting for these filters, as everything depends on everything else. The gain, frequency and Q are interdependent, so calculations are not straightforward. To start, you need to know the gain, frequency, Q and decide on a suitable capacitance. From these, you can calculate the three resistors that determine the circuit's performance. An easy way to work out the values is to use the ESP MFB Filter Calculator.
+ +R1 = Q / ( G × 2π × f × C )
+ R2 = Q / (( 2 × Q² - G ) × 2π × f × C )
+ R3 = Q / ( π × f × C )
+ f = 1 / ( 2π × C ) × √(( R1 + R2 ) / ( R1 × R2 × R3 ))   (Sanity check)
Based on these formulae, the optimum values are R1 = 47.75k, R2 = 2.808k and R3 = 95.49k. The values used change the frequency very slightly, and at 996Hz the error is less than 1%. The input frequency needs to be set so it's right at the tuning frequency. The null control is very sensitive, and will require two pots in series, with a resistance ratio of ~10:1. If R5 is 8.2k, you'd typically use a 5k pot with a 500Ω pot in series. Based only on a simulation using TL072 opamps, the notch depth can reach about 76dB, although this might not be achievable in reality.
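The design formulae can be checked numerically. The capacitor value isn't quoted in this section, but C = 10nF together with assumed design targets of unity gain, Q = 3 and f = 1kHz reproduces the quoted 'optimum' resistor values, so those figures are used below. Substituting the nearest standard values into the sanity-check formula returns the quoted 996Hz.

```python
import math

def mfb_design(G: float, Q: float, f: float, C: float):
    """Resistor values for a multiple-feedback bandpass filter,
    using the three design formulae given above."""
    w = 2.0 * math.pi * f * C
    r1 = Q / (G * w)
    r2 = Q / ((2.0 * Q ** 2 - G) * w)
    r3 = Q / (math.pi * f * C)
    return r1, r2, r3

def mfb_frequency(r1: float, r2: float, r3: float, C: float) -> float:
    """Sanity check: centre frequency recovered from the resistor values."""
    return (1.0 / (2.0 * math.pi * C)) * math.sqrt((r1 + r2) / (r1 * r2 * r3))

# Assumed design targets: G = 1, Q = 3, f = 1 kHz, C = 10 nF
r1, r2, r3 = mfb_design(1.0, 3.0, 1000.0, 10e-9)  # ~47.75k, 2.808k, 95.49k

# Nearest standard values (47k, 2.7k, 100k) shift the frequency to ~996 Hz
f_actual = mfb_frequency(47e3, 2.7e3, 100e3, 10e-9)
```

This also illustrates the interdependence the text describes: changing any one resistor moves the gain, Q and frequency together.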
+ +There's no easy way to change the frequency without it affecting the gain and Q of the filter. Making R3 part-fixed, part-variable (e.g. 82k + 50k and 5k pots) will allow a small frequency change, but because that also changes the gain it will interact with VR1 and VR2. Control interaction is common with all notch filters though, so that's not a serious limitation.
+ +The MFB version shown above isn't the only option. Any high-Q bandpass filter can be used, and the output from the filter can be summed (or differenced) by an opamp stage to get a reasonably good null. Ultimately, it all depends on how much circuit complexity you're willing to accommodate, whilst understanding that this approach will rarely get a null of better than -70dB. If the output of the notch filter is used as an input to a spectrum analyser (e.g. suitable PC software that provides FFT capability), then even a 40dB notch is still capable of giving very high resolution.
+ +One filter that lends itself well is described in Project 218. The filter circuit is also shown below, in Fig. 8.1 (only one filter is required). The circuit is easily tuned over a reasonable range, while being able to reduce the fundamental by at least 40dB. This is sufficient to allow high resolution FFT analysis even with a basic sound card and suitable PC software. However, most FFT analysers have enough dynamic range that a notch filter is probably redundant (and it needs more circuitry and adjustment to take a measurement).
+ + +One distortion meter (Meguro MAK-6571C) that I've used for some time is based on LC (inductor-capacitor) filters. The filter banks are quite complex, and there's one for 400Hz and another for 1kHz. There is no tuning function, and the filters are designed as high-pass. There is no notch, so the filters need a particularly sharp cut-off. Any hum or low-frequency noise is naturally excluded. There are a couple of other meters that I suspect use the same technique, and it seems likely that they are clones of the MAK-6571C.
+ +The meter I have always seems to give fairly representative results, so this scheme certainly works (within its limitations). To be fully effective, the filters need a slope of at least 50dB/ octave. This can be done with opamps as shown above, and the filter shown is 80dB down at 400Hz (0.01% THD). A simplified design (produced by the Texas Instruments 'FilterPro' software) is shown, and while it certainly works, it's an uncommon way to measure distortion. All such filters need odd-value resistors and capacitors, and are very sensitive to component tolerances. The level at 400Hz is at -80dB, and response is passably flat (±3dB) at and above 800Hz (the second harmonic). This isn't a recommended method due to the difficulty of construction.
+ +Other filter topologies can also be used, which may simplify (or complicate) the circuit. Design will never be easy with a 9 or 10-pole filter (54dB/ 60dB/ octave nominal respectively), but a multiple-feedback filter will use more parts overall than the Sallen-Key filters shown.
+ + +Not all distortion meters include filters, but 400Hz and 80kHz filters are provided on some so that noise can be removed from the signal without materially affecting the distortion reading. Hum (mains and/ or rectified AC buzz) can be (mostly) removed with the 400Hz filter, which may be cheating unless the hum is due to ground loops in the measurement setup (i.e. not from the DUT). Some instruments use a differential input to minimise ground loops, but mains hum can still get through in some cases. There's rarely a good reason to measure beyond 80kHz (the 4th harmonic of 20kHz), and the reduction of noise (especially that from the measurement system itself) makes it easier to see distortion residuals that may otherwise not be visible.
+ +The filters are usually 18dB/ octave (3-pole), and are usually 'traditional' Sallen-Key types. The examples shown provide the response needed, and they would normally be equipped with the facility to bypass the filter(s) that aren't required. Whether these are in the distortion output circuit or before the metering amplifier depends on the design. In most cases they will be before the metering amplifier and the distortion output.
+ +The 400Hz filter will suppress mains hum (50Hz) by 55dB, or 50dB with 60Hz. As always, the filter frequency is a compromise, and while better low-frequency attenuation is possible with a steeper filter, that adds more parts, and isn't (or shouldn't be) necessary. Some instruments include a 30kHz low-pass filter as well, which is used primarily for testing and verifying broadcast equipment.
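The quoted 55dB/50dB figures can be sanity-checked by assuming a maximally-flat (Butterworth) alignment for the 3-pole 400Hz high-pass. The actual Sallen-Key alignment isn't specified here, so this is only an approximation, but it lands within about 1dB of the quoted numbers.

```python
import math

def hp_butterworth_atten_db(f: float, fc: float, order: int = 3) -> float:
    """Stopband attenuation (dB) of an order-N Butterworth high-pass
    with -3dB frequency fc, evaluated at frequency f."""
    return 10.0 * math.log10(1.0 + (fc / f) ** (2 * order))

a50 = hp_butterworth_atten_db(50.0, 400.0)   # ~54 dB at 50 Hz mains
a60 = hp_butterworth_atten_db(60.0, 400.0)   # ~49 dB at 60 Hz mains
```

Each extra pole buys 6dB/octave, which is why a steeper filter improves hum rejection at the cost of more parts.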
+ + +Most distortion meters use average metering, calibrated for RMS. A few use 'true-RMS' meters, which are more accurate, and the results of a test on a given piece of equipment will differ depending on the metering system used. The average value of a rectified sinewave is 0.637 of the peak value, while RMS is 0.707 of the peak. Average reading meters may be calibrated to show RMS, but that works only with a sinewave. Other waveforms (particularly distortion residuals) will have errors, and the amount of error depends on the waveshape. The discrepancy can be extreme, with some waveforms producing an error of almost 90%. You will encounter this when measuring crossover distortion, and there is always a difference between the true RMS value, the average reading, and what you see on a scope.
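The form-factor error is easy to demonstrate. A minimal sketch: an average-responding meter calibrated for sinewave RMS multiplies the rectified mean by the sine form factor π/(2√2) ≈ 1.111, which mis-reads any other waveshape (a squarewave, whose true RMS equals its peak, reads about 11% high).

```python
import math

def avg_reading(samples):
    """What an average-responding meter calibrated for sine RMS indicates:
    the mean of the rectified signal, scaled by the sine form factor."""
    mean_rectified = sum(abs(s) for s in samples) / len(samples)
    return mean_rectified * math.pi / (2.0 * math.sqrt(2.0))

N = 100_000
sine = [math.sin(2 * math.pi * 5 * i / N) for i in range(N)]   # 5 whole cycles
square = [1.0 if s >= 0 else -1.0 for s in sine]               # unit squarewave

indicated_sine = avg_reading(sine)       # ~0.707, matches true sine RMS
indicated_square = avg_reading(square)   # ~1.111, but true RMS is 1.000
```

Distortion residuals are far spikier than either waveform, which is why the error on real measurements depends entirely on the waveshape.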
+ +Some meter rectifiers measure the peak, and calibrate that to RMS. A 1V peak sinewave will be scaled to 0.707 so again, it looks like the RMS value. Again, this creates serious errors with some waveforms. All meter rectifiers are full-wave. Half-wave rectifiers will create large errors with asymmetrical waveforms. It's not an option, and I know of no test instrument that uses a half-wave meter rectifier. I also know of no distortion meters that use a peak-reading meter amplifier.
+ +In the early days of transistor/ 'solid state' circuits, many meter amplifier/ rectifiers were made using discrete parts. No early opamps were good enough, and a discrete design would have wider bandwidth and lower noise than opamps of the day. We're spoiled for choice now, but very low noise opamps are still expensive. One of the best is the AD797, but it comes at a cost (around AU$33.00 at the time of writing). There are cheaper ways to get low noise. To ease the burden on the metering amp (U1B), a preamp should be used to increase the signal level first. The meter circuit shown has a sensitivity of 5mV using a 100μA meter movement. The two opamps need to be high-speed types to get good high frequency response.
+ +The meter amp shown is more-or-less 'typical', and it's average reading due to the meter movement itself. The rectified output isn't smoothed or filtered, so the pointer always responds to the average of the rectified waveform. This can be a problem at very low frequencies, because the meter almost certainly won't have enough electromechanical damping to prevent the pointer from responding to signals of less than 5Hz (effectively 10Hz due to full-wave rectification). Meter amplifiers are covered in some detail in AN002 (Analogue Meter Amplifiers).
+ +The diodes are particularly important. In an ideal case they'd be germanium because of their lower conduction voltage. When the diodes aren't conducting, the opamp is operating close to open-loop, and the output has to swing across the diode voltage drops (0.7V with silicon diodes). This imposes an upper frequency limit based on the opamp's slew rate. U1 has to be as fast as possible, and the slew rate should be at least 20V/μs. The opamp doesn't necessarily have to be very low noise, as it's easier to include a preamplifier to get the required sensitivity. It's not immediately obvious, but the metering amp has the same constraints as the Fig. 2.1.1 (A) amplifier - there is no feedback until the diodes conduct. There's no option to apply bias, as that would show as a meter reading. A linear wide-band metering amplifier is a much greater challenge than it first appears.
+ +Note the diode in parallel with the meter movement. It's used to limit the maximum current through the meter, as the opamp can deliver far more current than a high-sensitivity movement can handle safely. This diode must be a 'standard' silicon type, such as 1N4148 or similar.
+ +This is just one example, and there are many different approaches taken by various manufacturers. The version shown is not one that I've used in any projects I've built, but it's well behaved at audio frequencies. Don't expect to get above ~50kHz before rolloff becomes apparent. In some cases, it may be necessary to use an offset balance control to ensure that positive and negative peaks are exactly even. Some distortion meters have used digital displays, but in general they are not as user-friendly as an 'old-fashioned' analogue meter movement.
+ +No common metering amplifiers have particularly high gain, because their primary function is to drive the meter movement. If you use two opamp gain stages (two of the stages shown in Fig. 5.1), each with a gain of 10, it's not difficult to get flat response to well over 200kHz, but the opamps all need to be very fast. The common TL072 is not good enough, and even a 3-stage meter amp will struggle to get to 100kHz driving a 100μA movement. You need opamps with a unity gain bandwidth of at least 12MHz. You might expect better with something equivalent to an LM4562 (55MHz, ±20V/μs), but tests I've performed show that it's not fast enough. Reasonably low noise is also a requirement, since you need to be able to get full-scale with no more than 5mV RMS input. If you're trying to measure down to 0.001% THD, you only have a residual signal of 10μV with a 1V RMS input.
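A rough way to see why at least ~12MHz of gain-bandwidth is needed: model each closed-loop gain-of-10 stage as a single pole at GBW/10. This is only a first-order approximation (real stages add phase shift and slew limits), and the 3MHz figure used for the TL072 is a typical datasheet value, not from this article.

```python
import math

def cascade_droop_db(f: float, gbw: float, gain: float, stages: int) -> float:
    """Approximate HF droop (dB) of cascaded opamp gain stages,
    modelling each stage as a single pole at GBW / closed-loop gain."""
    pole = gbw / gain
    per_stage = 10.0 * math.log10(1.0 + (f / pole) ** 2)
    return stages * per_stage

# Two gain-of-10 stages at 200 kHz:
droop_fast = cascade_droop_db(200e3, 12e6, 10, 2)   # ~0.24 dB with 12 MHz opamps
droop_tl072 = cascade_droop_db(200e3, 3e6, 10, 2)   # ~3.2 dB with ~3 MHz TL072s
```

The 12MHz parts stay essentially flat to 200kHz, while the TL072 pair is already rolling off audibly in measurement terms.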
+ +To be able to measure 0.01% THD with a 1V input, the meter circuit needs an input sensitivity of 100μV, so even with a 3-stage meter amp with 5mV full-scale sensitivity, additional gain is required. Even 0.1% full-scale requires 1mV sensitivity for a 1V input. The challenges are fairly obvious, especially since the required gain and bandwidth far exceed the requirements for audio reproduction. Most distortion meters have several gain stages before the metering amp, especially if distortion below 0.1% is to be measured. Note that the metering amps do not need to be particularly low distortion, as their only job is to amplify the residual distortion signal. If they add a little distortion to the residual it's usually of little consequence. I'd consider anything up to 1% to be quite acceptable, and a simple discrete meter amp is often preferable to using an opamp. The higher the sensitivity of the meter movement, the easier it is to drive, so if you have a choice, use a 50μA or 100μA movement rather than 1mA.
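The gain budget described above is simple arithmetic, sketched here using the 5mV full-scale sensitivity quoted in the text:

```python
def residual_volts(v_rms: float, thd_percent: float) -> float:
    """Distortion residual left after a perfect notch removes the fundamental."""
    return v_rms * thd_percent / 100.0

def gain_needed(v_rms: float, thd_percent: float, meter_fs: float = 5e-3) -> float:
    """Extra gain required ahead of a metering amp with the given
    full-scale sensitivity (5 mV, per the text)."""
    return meter_fs / residual_volts(v_rms, thd_percent)

# 0.01% of 1 V leaves 100 uV, so a 5 mV meter needs a further gain of 50;
# 0.001% leaves only 10 uV, needing a gain of 500.
g_001 = gain_needed(1.0, 0.01)
g_0001 = gain_needed(1.0, 0.001)
```

This is why several wide-band gain stages sit in front of the metering amp in any instrument that resolves below 0.1%.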
+ +The simple meter amplifier in Fig 5.2 uses just three cheap transistors, but can provide 100μA at up to 100kHz. With the x10 opamp circuit in front, the full-scale sensitivity is only 3mV. Calibration is done by changing R7 (it should be a trimpot). It's difficult to beat this even with a very good IC opamp. The version shown isn't optimised, and it can be improved by reducing the gain of the meter amp (thereby providing more feedback), and adding the necessary gain externally. The opamp will be the limiting factor for frequencies over 100kHz. Sometimes a discrete circuit is the best choice, although it might not seem like it at first. Simple circuits like that shown may not be particularly linear though, and are unlikely to provide good accuracy at low input voltages (e.g. 10% of full scale). According to the simulator, the Fig. 5.2 circuit reads low by about 0.65dB at 10% of full scale (300μV input). Making the whole circuit sensitive to 10mV (rather than 3mV) improves linearity somewhat.
+ +If you'd rather use a true RMS meter, Project 140 shows the complete details, and a suitable example is shown in Fig. 5.3. The AD737 True RMS-to-DC Converter is not a cheap IC, but it will handle most waveforms well, and it's said to be accurate to ±0.3% of the reading. The full-scale sensitivity is 200mV, so an input preamp will be necessary to allow measurement of lower voltages. The metering section should ideally have a full scale sensitivity of no more than 5mV, so an external gain of 40 is required (ideally provided by two stages to obtain wide bandwidth). Alternatively, you could use the Project 231 discrete opamp, which can provide a gain of 100 (40dB) to over 1MHz.
+ +While you may be tempted to use a digital multimeter as the 'readout', be aware that most have poor high frequency response. True RMS meters are likely to be better, but trying to measure less than 1mV with response to (at least) 50kHz is beyond the ability of most digital meters. I've tested my meters and found some of them to respond to no more than about 5kHz, but many are worse. A cheap (non RMS) meter will likely struggle to get beyond 2kHz before the response falls to an unusable degree.
+ + +There are several ways to measure intermodulation distortion (IMD), with the most common now being spectrum analysis with specialised equipment. All IMD tests involve the use of two signals, with varying standards used. The 'traditional' way to measure IMD is to use the SMPTE (Society of Motion Picture & Television Engineers) standard RP120-1994 method, which uses a 60Hz signal and a 7kHz signal, with the ratio normally 4:1 (60Hz and 7kHz respectively). When these tones are provided to an amplifier (or other device), the presence of IMD will cause amplitude modulation (AM) of the high-frequency signal. To measure the amount of AM, the low frequency is filtered out with a steep-slope high-pass filter, the modulated 7kHz 'carrier' signal is rectified (in the same way as is done in an AM broadcast receiver), and the high frequency is then filtered out with a high-slope low-pass filter. The recovered signal is a direct representation of the amount of IMD. It's a distorted 60Hz tone that can only be present if there is intermodulation distortion. A completely linear circuit will have zero output, or perhaps a few nanovolts at most, mainly due to imperfect filters.
+ +By measuring the amount of the recovered AM signal, the amount of intermodulation is revealed. In most cases, an amplifier with low THD will also have low IMD, since the process of amplitude modulation requires non-linearity in the amplifier. If it has low THD, then (by definition) it has high linearity. However, there's no fixed numerical relationship between the two forms of distortion. IMD is without doubt the most objectionable form of distortion, because many of the frequencies produced are not harmonically related to the input signal.
+ +One technique that might work is to look for sum and difference frequencies, and that test might use a 10kHz signal along with an 11kHz signal. IMD would be revealed by looking for a 1kHz signal, which is the difference between the two input signals. However, as noted in the article Intermodulation - Something 'New' To Ponder, if the DUT has purely symmetrical distortion, sum and difference frequencies are not generated. The generation of these frequencies can only occur if the distortion is asymmetrical. In most modern amplifiers and preamplifiers (excluding most valve-based designs), the distortion is symmetrical, so sum and difference frequencies are not generated. AM is produced regardless of the symmetry (or otherwise) of the distortion.
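The symmetry argument can be demonstrated numerically. This is a hedged sketch, not the article's test setup: the two tones above (10kHz and 11kHz) are passed through a purely symmetrical (cubic) and an asymmetrical (quadratic) non-linearity, and the 1kHz difference component is measured with a single-bin DFT.

```python
import math

def tone_amplitude(samples, f, fs):
    """Amplitude of the component at frequency f via a single-bin DFT.
    Exact when f completes an integer number of cycles in the window."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * f * i / fs) for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * f * i / fs) for i, s in enumerate(samples))
    return 2.0 * math.hypot(re, im) / n

FS = 200_000   # sample rate (Hz); 0.05 s window
N = 10_000     # all tones of interest complete whole cycles in the window
two_tone = [math.cos(2 * math.pi * 10e3 * i / FS) +
            math.cos(2 * math.pi * 11e3 * i / FS) for i in range(N)]

sym = [x + 0.1 * x ** 3 for x in two_tone]    # symmetrical (odd-order) distortion
asym = [x + 0.1 * x ** 2 for x in two_tone]   # asymmetrical (even-order) distortion

diff_sym = tone_amplitude(sym, 1e3, FS)       # essentially zero: no difference tone
diff_asym = tone_amplitude(asym, 1e3, FS)     # ~0.1: the 1 kHz difference tone appears
```

The cubic case still produces intermodulation, but at 2f1-f2 (9kHz) and 2f2-f1 (12kHz) rather than at the sum and difference frequencies, matching the point made above.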
+ +The 60Hz/ 7kHz test method requires two oscillators, preferably with less than 1% distortion. The signals are mixed in the desired ratio (most commonly 4:1 respectively) and fed to the DUT. A loop-back test should be used to ensure that the meter is working, and in the absence of non-linearity the meter should read zero. Anything in the DUT that causes amplitude modulation will be measured, so if a valve amplifier has excessive HT hum that will register as well (excess hum can [and does] often cause amplitude modulation). This can be verified by turning off the 60Hz tone, something that should be allowed for in the oscillators used. Commercial test sets have the ability to turn either tone on and off as needed, or adjust the relative levels.
+ +There's rather a lot of circuitry needed to measure IMD, so rather than detailed schematics, the process will only be shown as a block diagram. The way the filters are implemented is immaterial, provided they eliminate the 7kHz 'carrier' frequency, the original 60Hz modulation signal, and leave only the decoded AM as a residual. The circuits involved aren't especially difficult to realise with opamps, but the AM detector has to have high linearity. Fortunately this isn't particularly difficult, because we're working with audio frequencies and not RF (as with an AM broadcast receiver). Despite any reservations you may have, the AM detector (which is a rectifier) only needs to be half-wave, and a standard 'precision rectifier' using an opamp is all that's necessary. There is a bit of trickery needed to ensure high gain at high frequencies (7kHz) but that's not especially difficult, even with many early opamps.
+ +The diagram above shows the filters and rectifier, and includes the distortion generator circuit. The ratio of the generator's impedance and R1 is such that the harmonic distortion is around 0.038% with just the 60Hz signal. The sum of the 60Hz and 7kHz input voltages is 1.455V RMS, and the output of the IMD measurement system is 1.7mV RMS. This equates to an IMD of 0.116% when both diodes are used. With one diode (even-order), the IMD rises to 0.235% (3.42mV). While these results were simulated, I'd expect the simulation to be very close to reality, and should be easily replicated with any simulator.
+ +The subject of IMD is covered in depth in the article Intermodulation Distortion (IMD). This covers just about everything you need to know about the subject, but doesn't go into great detail about exactly how it's measured. With just the block diagram, you can devise the basic circuitry without too much difficulty. As an alternative to a 7th-order high-pass filter, you can use a basic Twin-T notch filter (without any feedback) to remove most of the 60Hz component from the modulated 7kHz carrier frequency. However, the 'residual' of the 60Hz signal will have high output at 120Hz, which makes a notch filter far less effective. You can follow the notch filter with a basic 3rd-order filter, saving a couple of opamps and quite a few resistors and capacitors.
+ +Likewise, the low-pass filter can be simplified by using a 7kHz Twin-T filter (also without feedback) and a simpler low-pass filter. There's still a possible issue with the 2nd harmonic of the 7kHz signal, but it's not hard to get a residual of less than 12μV when there's no distortion. Ultimately, the actual method used doesn't matter, provided that only the amplitude modulated part of the waveform is passed through to the metering circuits, and the gain of the filters and AM detector are maintained at unity.
+ +In the above (simulated) spectrum, you can see the sidebands around the 7kHz signal. This is from the input waveform, which looks just fine with no visible distortion, but quite obviously all is not well. The 60Hz sinewave by itself has a THD of 0.038%, which doesn't seem too bad. IMD measurements are described above (0.116% [odd-order], 0.235% [even-order]) using a simulated version of the Fig. 6.1 block diagram. One thing that's important to understand is that even-order distortion does not provide only even-order harmonics or intermodulation products. Odd-order harmonics (and IMD) are also generated, so the idea of 'nice' even-order distortion is a mathematical impossibility. There are a (small) few 'demonstration' circuits that produce only the second harmonic, but these generally don't apply to real circuits.
+ +The capture shown above is from the simulator, and shows intermodulation of the standard 60Hz waveform added to a 7kHz 'carrier' (4:1 ratio). This shows clearly that even-order intermodulation creates more sidebands than odd-order. With even-order, the sidebands are spaced at 60Hz intervals, and that's extended to 120Hz with odd-order. The first set of even-order sidebands is only 31dB below the fundamental of 7kHz, vs. 41dB for the first set of odd-order sidebands. The distortion circuit adds 0.62% THD to the 60Hz signal alone (even-order) and 1.9% THD for odd-order.
+ +This shows (yet again) that even-order distortion is not the 'holy grail' of audio. Even with far less THD, the even-order spectrum is not only of greater amplitude, but it's more cluttered, with sidebands spaced at 60Hz intervals vs. 120Hz intervals. Even-order distortion also adds odd-order harmonics and sidebands to the spectrum, so the common claim that it's 'musical' or 'benign' must be viewed with some suspicion. However, the context also has to be considered, and in this case it's based on a circuit that deliberately introduces distortion, rather than looking at a specific device (valve [vacuum tube], bipolar junction transistor, JFET or MOSFET).
+ +How these devices are used in a real circuit is an important factor, so a simple test such as that described here cannot (and must not) be used as the final arbiter of 'audio quality'. It's generally considered that low-order distortion products (typically less than the 5th) are less objectionable than higher order products (5th and above). However this depends on the level, and anything below -120dB is purely academic.
+ +Despite everything that should indicate otherwise, there is no doubt at all that a small amount of primarily even-order harmonic distortion can sound 'nice' for some listeners. My preference is for distortion to be below audible limits, but it's often a personal choice. My main gripe with those who proclaim that their system (distorting) sounds 'better' than those with no audible distortion is that they proclaim this to be a fact, when it's quite clearly an opinion. The two are not the same!
+ + +While this form of distortion (aka SID - slew-induced distortion) has largely been shown to be bollocks, you can test for it if that makes you happy. It was a rather hot topic back in the late 1970s and early 80s, but it's not brought up very often any more. The general idea was to inject a filtered squarewave and a low-level sinewave simultaneously into an amplifier, and see how much of the sinewave went 'missing' because the amp's slew rate was (supposedly) too low. There is absolutely no doubt that the technique described works, however, the effective slew rate (voltage change over time) of a squarewave is far greater than any musical instrument can attain. There's more info on this topic in the Intermodulation Distortion (IMD) article.
+ +It was claimed in one of the original articles that a power amplifier had to be capable of a slew rate of 100V/μs to avoid the alleged 'problems'. There's an issue with this, in that an amplifier delivering a 35V peak output at 20kHz only needs a slew rate of about 4.4V/μs - assuming that full power is required at 20kHz with normal programme material. This is quite clearly never the case, and an amp will normally deliver (much) less than 10% of the full output voltage at frequencies above ~15kHz. For a 35V peak output, that means around 3.5V peak (less than 800mW into 8Ω) at 20kHz. The slew rate for that is less than 0.5V/μs. If we allow a safety margin of 10, that's 5V/μs. These figures are all conservative - in reality the levels and slew rate requirements will be lower. That doesn't mean it's not important, but the original claims were wildly exaggerated.
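The arithmetic in this paragraph is easy to verify: the peak slew rate of a sinewave is 2·π·f·Vpeak.

```python
import math

def slew_rate_v_per_us(f_hz: float, v_peak: float) -> float:
    """Peak slew rate of a sinewave (2*pi*f*Vpk), expressed in V/us."""
    return 2.0 * math.pi * f_hz * v_peak / 1e6

full_power = slew_rate_v_per_us(20e3, 35.0)  # ~4.4 V/us for 35 V peak at 20 kHz
realistic = slew_rate_v_per_us(20e3, 3.5)    # ~0.44 V/us for 3.5 V peak at 20 kHz
```

Even with the 10x safety margin suggested above, the realistic requirement is around 5V/μs, a long way short of the claimed 100V/μs.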
+ +There is no argument that slew rate limiting will cause problems. It can, but with modern semiconductor devices and sensible topologies, it's not possible for any music signal to push a competent amplifier into slew rate limiting. All amplifiers have a limit, but trying to extend the maximum rate-of-change to silly extremes means that you have an amp that's liable to be marginally stable. Ultimately, trying to eliminate TIM by increasing the slew rate of the amplifier is a fool's errand. It's easy in a simulator because we have access to 'ideal' amplifiers, filters and signal sources. None of these is available in the real world, but real world music doesn't need them. Problem solved.
I don't intend to provide much additional detail here. Most people realised fairly quickly that the idea of TIM/TID was perfectly true, but it very rarely caused any audible degradation. There are still a few folk who insist that there's a problem, and they generally fall into the same category as people who claim they can hear a difference between mains cables (for example). As described above, we can measure down to -100dB fairly easily, and many modern 'top-shelf' measurement systems can get down to below -120dB. Any claim that we can hear things that cannot be measured is simply untrue, and in reality the reverse is the case. It's easy to measure 'stuff' that's completely inaudible, for example the difference between two opamps (all other things being equal) where one has 0.001% THD and another has 0.0003%.
+ +There's never been a universally accepted test method for TIM, and while I've experimented with the idea (with some success), it's largely irrelevant and won't be covered any further here. There's a lot of info on the Net (of course), but not all of it is useful. One thing that will reduce TIM (if you still believe it to be an issue) is to add a low-pass filter in front of the power amp. A rolloff of ~18dB/ octave with a -3dB frequency of perhaps 30kHz should do nicely. It won't affect the audio because there's little or no signal at 30kHz, and even if some signal is present, we can't hear it. A 10kHz, 40V peak squarewave (40V RMS) has a slew rate of ~6V/μs after it's been through a 30kHz 3rd order Butterworth filter (< -0.4dB at 20kHz). Such a signal will never occur with music.
For the adventurous, you can make a filter that lets you see the amplitude of harmonics. Using the filter described in Project 218, it's not especially difficult to make a filter that isolates an individual harmonic, with the next harmonic attenuated by at least 60dB. You can just use a single tuned filter, but performance is greatly improved with two as shown. The values for CR, RT and CT are correct for 1kHz. It's essentially an LC tuned filter, with U1 and U2 forming the first gyrator (simulated inductor). The second gyrator is identical.

The gyrator shown is one of the very few that can achieve extremely high Q. You could use a pair of multiple feedback (MFB) bandpass filters, but tuning them accurately will be almost impossible. I selected this circuit because I know how well it works. Unlike most 'ordinary' gyrators, this version cannot be configured for series resonance, so it can't create a notch filter. You could (of course) use 'real' inductors, but the Q will be far lower and the cost considerably greater. Accurate tuning will be somewhere between difficult and impossible.

As simulated, resonance is 1kHz, and each gyrator has a gain of two. R11 and R12 reduce that to unity, so there is no overall gain. The amplitude of the harmonic is the actual value. If you use a 333.33Hz input, 1kHz is the 3rd harmonic, or it's the 2nd harmonic of 500Hz. The input frequency has to be exact, so you need to be able to adjust it very accurately. While it's possible to make the filter tunable, that comes with some difficulty. RT1 & RT2 can be made variable over a small range, but you'd need a 10-turn pot to set the frequency accurately.
L = RT × CT × ( ½ R3 ) = 168.75mH (with the values shown)

Frequency is determined by ...

f = 1 / ( 2π × √( L × CR ))
f = 1 / ( 2π × √( 168.75mH × 150nF )) = 1kHz
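The arithmetic is easy to verify. A minimal sketch (function names are mine) that plugs the simulated inductance and CR into the resonance formula:

```python
import math

def gyrator_inductance(rt_ohm, ct_farad, r_ohm):
    """Simulated inductance of the gyrator: L = RT * CT * R,
    where R is the effective damping resistance (half of R3)."""
    return rt_ohm * ct_farad * r_ohm

def resonant_freq(l_henry, c_farad):
    """Resonance of the simulated LC pair: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2 * math.pi * math.sqrt(l_henry * c_farad))

# The text's values: L = 168.75 mH resonating with CR = 150 nF
print(round(resonant_freq(168.75e-3, 150e-9)))  # 1000 (Hz)
```

Scaling CR (or RT1/RT2, which scale L) moves the resonance, which is why a small variable range on RT1 & RT2 gives fine tuning.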
I tested this with a single filter stage, and it works better than I expected. However, don't expect to be able to see worthwhile results with THD levels much below 1%. From Table 1, we know that means the distortion is 40dB below the signal level, so with 1V input you'll see less than 10mV of the harmonic. Unless you use a number of these filters all tuned to exact harmonics of the input signal, it tells you far less about the distortion characteristics than a notch filter. Early commercial 'wave analysers' (aka frequency-selective wave analysers or frequency selective voltmeters) used a sweep filter and sometimes a CRT (cathode-ray tube) to display the amplitude of each harmonic. Modern instruments use fast Fourier transform (FFT) to display the amplitude of each harmonic in more-or-less 'real time'.
Although the 'simple' filter shown above works, selective voltmeters are very different beasts. They use a mixer stage to convert the incoming audio signal to an IF (intermediate frequency) in the RF (radio frequency) range, and that's where the narrow-band filters are implemented. Since all input frequencies are converted to the same IF, selectivity is easy to achieve. A tunable RF oscillator provides the necessary offset to allow the input frequency to be converted to the IF. The IF amplifier has very high selectivity, with a bandwidth typically down to ~25Hz. The ultimate selectivity determines the minimum frequency that can be measured.

These are complex instruments, and the (highly simplified) block diagram doesn't quite do justice to the real thing. It's neither appropriate nor possible to show more detail, and it's safe to say that however desirable a selective voltmeter may be, they are now well and truly obsolete. The complexity was such that regular maintenance and calibration were necessary, and both were fairly major undertakings. The manual for the HP 312 extends to 160 pages. Many had provision for SSB (single-sideband) and AM detection, allowing them to be used as very 'high-tech' communications receivers.

Wave analysers, frequency-selective voltmeters and other similar instruments were very expensive when they were available, and modern equivalents are no less so. To get the selectivity necessary to isolate each harmonic is always going to be expensive. If it's done digitally, the DSP (digital signal processing) involved is very demanding. 16-bit resolution is good enough for measurements down to about -100dB. One example of a selective voltmeter (and possibly the best known) is the Hewlett Packard 3586B Selective Level Meter. These command a premium price even today, and they cost around US$12,000 when new in ca. 1992. With a range from 50Hz to 32.5MHz, they were largely used for RF and multiplexed telecommunications analysis. The voltage at the selected frequency is displayed on a digital readout or analogue meter, so the level of each harmonic has to be written down and the final level calculated using the formula shown in Section 2.
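That final calculation is the usual root-sum-of-squares combination of the individual harmonic amplitudes, referred to the fundamental. A small sketch (assuming that is the formula being referenced; the function name and example meter readings are mine):

```python
import math

def thd_percent(fundamental_v, harmonic_v):
    """THD as a percentage: RSS of the harmonic amplitudes over the fundamental."""
    rss = math.sqrt(sum(v * v for v in harmonic_v))
    return 100 * rss / fundamental_v

# 1 V fundamental; 2nd, 3rd and 4th harmonics as read from the level meter
print(round(thd_percent(1.0, [0.008, 0.006, 0.001]), 3))  # 1.005 (%)
```

Note that the largest harmonic dominates: the 1mV 4th harmonic barely changes the total.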
When the harmonics are located by this method, noise is excluded from the measurement because the filters are narrow-band. This lets you see harmonics that are below the noise floor, although their significance is unlikely to be of much interest. Anything at -90dB or less is almost certainly going to be inaudible.

There are quite a few USB oscilloscope units that are used with a PC, and they offer fairly advanced FFT capabilities. Many of these are better than you might expect, but they usually aren't cheap. There's the requirement for a PC to control the unit and display the output, which means they are less convenient than a stand-alone oscilloscope. Many of the 'latest and greatest' audio analysers require a PC as well, and measurements are often a mixture of analogue and digital techniques.
The most common distortion measurement approach now is spectrum analysis. There are countless PC-based software packages that provide FFT capabilities, usually with an oscilloscope interface as part of the package. To get good results you need a good sound card, but even those generally provided on the PC motherboard can work surprisingly well. The greatest limitation is the bandwidth, which is almost always limited to 20kHz. This is adequate, but there may be issues beyond 20kHz that affect the sound but can't be seen when using 44.1kHz sampling. 16-bit resolution also limits the noise floor, but it's still good enough to be able to detect harmonics at -100dBV.
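The principle is easy to demonstrate. The sketch below (all values are illustrative; no particular sound card or software package is implied) generates a 1kHz tone with a deliberate -80dB third harmonic, applies a Hann window, and reads the harmonic level straight off the FFT:

```python
import numpy as np

fs = 44100              # sample rate (Hz)
n = 44100               # one second of samples -> 1 Hz bin spacing
t = np.arange(n) / fs

# 1 kHz fundamental plus a third harmonic at 1e-4 of the level (-80 dB)
signal = np.sin(2 * np.pi * 1000 * t) + 1e-4 * np.sin(2 * np.pi * 3000 * t)

# Hann window reduces spectral leakage; both tones fall on exact bins here
spectrum = np.abs(np.fft.rfft(signal * np.hanning(n)))
db = 20 * np.log10(spectrum / spectrum[1000])   # 0 dB = fundamental

print(round(db[3000]))  # -80 (dB) at 3 kHz
```

With a real sound card the noise floor and ADC distortion set the limit, but the method is exactly this: the harmonic's bin level relative to the fundamental's.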
There are quite a few external (USB) audio interfaces (you can't really call them 'sound cards') that offer up to 192kHz sampling and 24-bit resolution. How well (or otherwise) these interface with the various FFT software packages is not known. Some are fairly expensive, and they offer many functions that will not be used if all you need is a measurement system. They are available from the usual manufacturers, such as Focusrite, Sound Blaster, PreSonus, Behringer, etc. Of the 'better' interfaces, I can't comment on anything other than the Focusrite 2i2, as I don't own a multiplicity of expensive audio interfaces. There's no point discussing any PC's internal sound card, because there are so many differences. Using an audio interface and FFT software, there's no need for a notch filter because the dynamic range is already more than good enough to see harmonics.

The Focusrite 2i2 is a high-performance audio interface, but (like internal sound cards) it's not designed to be a measurement tool. As a result, the input impedance is not ideal for measurement applications. The line inputs have a claimed input impedance of 60k, with a distortion of less than 0.002%. The 2i2 supports most sample rates (including 192kHz) and up to 24-bit resolution. This is not a cheap interface, but it still costs far less than a dedicated audio measurement system. If it's going to be used for measurements, I suggest an external panel with BNC connectors. This will let you use 1:1 scope probes, particularly for inputs (phone jacks are decidedly sub-optimal IMO). I've also tested a Behringer UCA222 - it's not as good as the Focusrite, but it can still give good results.

This idea will be expanded shortly, as I plan to produce a project for high-resolution distortion measurement. A link to the project will be placed here when it's ready. The idea is to make it easy to use and flexible enough to get good results with free software. One that can't be ignored is REW (Room EQ Wizard - see the REW website). We'll ignore the conventional use for REW, which is to analyse room response so acoustic treatment can be optimised. When used just to monitor the frequency spectrum, the learning curve is greatly reduced.
Distortion remains a contentious topic. It shouldn't be, but misleading 'specifications' in the early days of transistor amplifiers certainly did no-one any favours. While it's pretty much irrelevant today, the myths have persisted for over 50 years, often reinforced by snake-oil vendors. Our ability to use test instruments that are at least 100 times more sensitive than our hearing should dispel any doubts, but charlatans keep pushing ideas that range from just silly to serious fraud.

One thing that should be fairly obvious is that most modern amplifiers are rather extraordinary. To achieve distortion below 0.1% is almost trivial, which shows just how good most amplifiers really are. One part distortion to 1,000 parts signal (-60dB) isn't much, but less than 0.01% (1 part in 10,000, or -80dB) is no longer difficult to achieve. This is done with a handful of resistors, capacitors and transistors, and shows the precision that can be attained - even with relatively simple circuitry. Many digital measurement systems are so good that it's hard to find fault with them technically, but they need a PC, which can be a real nuisance. However, the PC removes the need for a great deal of expensive circuitry, so it's a reasonable compromise.

It's interesting that so many articles have been written about the alleged 'ills' of opamps (amongst other modern electronics), along with some rather extraordinary claims. The same extreme scrutiny has not been applied to valve (vacuum tube) equipment, possibly because it doesn't fit the writer's agenda. Much the same applies to many early transistor stages (typically using 2-3 transistors, with a combination of local degeneration (e.g. emitter degeneration) and overall negative feedback). I grew up with both valve circuitry and the early transistor stages, and I know from many years' experience that even a TL072 opamp beats the pants off any of them.

When a decent opamp such as the NE5532 or LM4562 is used, there is pretty much nothing to see that isn't buried in noise (even using averaging on a digital scope). The distortion of an LM4562 (for example) is so low that the opamp must be connected in a way that forces it to have very high 'noise gain', which also increases the distortion. The technique is described in the datasheet. This is one of several devices available that are so good that no commercial equipment can resolve the distortion without resorting to 'trickery'.

Distortion isn't the be-all and end-all of course, as other factors influence our perception of sound quality. Noise is present in all amplifying devices, and that can make things better (digital dithering) or worse (audible hiss). The nature of the distortion is important. It's generally accepted that high-order harmonics sound worse than low-order, but the absolute level has to be considered too. 10mV of distortion (with a 1V output) at 20kHz and above will be inaudible (assuming no IMD is generated), but the same level at the second and third harmonic may be very audible - it's only 40dB down, and IMD will result. Many digital systems use 'noise shaping' to force most of the noise components, including digital artifacts such as quantising noise (a form of distortion), to be above 20kHz. There may be a great deal of it, but it's raised to be above the audible range.

There's no denying that circuitry has to be auditioned. Not listening to a design you've just built isn't sensible, but making simple comparisons isn't advised either. To be useful, a comparison must be 'blind', so you don't know which unit you're listening to until the test is finished. Levels (and tone controls if fitted) must be matched to within 0.1dB, and the switching system has to be arranged so you don't hear any tell-tale noises (for example, a relay may make a different noise when it opens than when it closes). Some tests may be so obvious that you'll hear the difference quite clearly, but then you have to be careful that you avoid the common mistake of equating 'different' with 'better'. 'Different' can be better or worse, and our hearing and preconceptions can easily lead to the wrong decision.

We need measurements, because they help to validate a design. No-one would attempt to sell an amplifier with a frequency range from 300Hz to 3.5kHz (never measured, just 'auditioned'), and frequency response will be tested and verified with an oscillator and AC voltmeter (or oscilloscope). I've not heard of anyone eschewing a simple frequency response test, yet the responses can become rather 'excited' when the topic switches to distortion. Both are important, and both are needed to characterise an amp or preamp. Comparatively high distortion is sometimes preferred with some equipment by some people. That's not a problem as such, but the presence of distortion never makes an amplifying circuit 'better'.

There's a great deal of very detailed information available on-line, although some of it will be behind 'pay walls' or simply not able to be located with a normal search. I urge anyone who's interested enough to find out more to do so. Material that's properly researched and peer-reviewed is a far cry from that found in random web pages or forum sites, and I have no intention of trying to cover the topic to the same level of detail as you'll find on 'scholarly' websites. However, I do hope that this article gives readers a few ideas, or de-mystifies the topic so the way measurements are performed makes some sense.

One thing that happens on the Net is that some people tend to seek out information that aligns with their preconceptions (called 'confirmation bias'). If something that's blatantly false is repeated often enough, there are those who will assume that it must be true, because they see it 'everywhere'. Repetition of an invalid proposition doesn't magically make it true. There are forum sites where criticising (for example) a cable - speaker or interconnect - will get you banned. That's not how science works, and audio as we know it owes everything to science and physics, and nothing to belief and dogma.

It always amuses me when I see claims that one opamp supposedly has 'better bass' than another. All opamps (yes, every one) work perfectly to DC, so any allegedly perceived difference at (say) 40Hz is obviously imagined. Any change to low-frequency response is created with external parts, and the opamp can't alter that. This is why we measure things, because without objective confirmation, imagination becomes 'reality'. The idea of confirmation bias (aka experimenter expectancy effect) is apparently unknown to many hobbyists, so if they've been told that opamp 'A' has better bass than opamp 'B', a non-blind listening test is likely to confirm the belief. It's safe to say that any two circuits that measure the same (in all respects) will sound the same. When differences are (for example) 1 part in 10,000 (-80dB) it's purely an academic exercise - our hearing simply isn't that good.

Much as I'd like to be able to cite more detailed material, most of it is only available if you pay (and it's usually fairly expensive). Service manuals are a great way to see how various test equipment manufacturers approach the different tasks, including impedance conversion, notch filters, metering amplifiers and any other circuitry needed for the equipment to work. Many of the older manuals are fairly easy to follow, but as equipment gets more advanced, the circuitry is far less useful. A schematic of a digital system isn't much use without the microprocessor system's source code, and that's never provided.
Elliott Sound Products - DIY Heatsinks
Note: The basis for this article was originally written by John Inlow, and was available on his website up until 2002 (when the entire site disappeared). Drawings have all been substantially revised (and colour added to make them clearer), and are based on John's originals. The photo in Figure 1 is a cropped and cleaned version of John's original picture. Parts of this document may still retain original copyright. The text has been almost completely re-written.
Build your own heatsinks? Absolutely - this page shows you how to build a heavy duty heatsink and chassis, suitable for Class-A or high power Class-AB amplifiers. While there is no claim that the end result will be cheap (nor are store-bought heatsinks), the performance can be as good as or better than anything you can buy.

At only 20% efficiency for a Class-A amplifier, the wasted heat is enormous! Out of 100 watts of input power, only about 20 of those watts are available as useful sound. The rest of the power (80 watts) is dissipated as heat, and must be removed - all the while keeping the transistor junctions at a safe operating temperature. It takes big heatsinks to remove the excess heat from the power transistors. It can also be unbelievably difficult to locate affordable, large heatsink stock (and this seems to be worse in the US). So, this page describes a solution to the problem.
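The arithmetic above is trivial, but worth stating explicitly (the function name is mine):

```python
def heat_dissipated(input_power_w, efficiency):
    """Power that must be removed as heat: everything not delivered as sound."""
    return input_power_w * (1 - efficiency)

# A Class-A amp at 20% efficiency drawing 100 W from the supply
print(heat_dissipated(100, 0.20))  # 80.0 (watts of heat)
```

At idle a Class-A amp dissipates essentially the full supply power, so the heatsink must be sized for the worst case, not the average.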
Figure 1 - Photo of a Completed Chassis
The pictured chassis (John Inlow's photo) will easily dissipate well over 160 watts of wasted power - all as heat. If this meets your requirements, head for your workshop and get ready to make a significant amount of mess. The design is totally adaptable, so any size of heatsink can be produced, designed to fit any opening size or application. The chassis, although time consuming to build, is easily reproduced by anyone with basic construction skills. Use a drop saw (also known as a 'chop' saw in the US) with a carbide tipped blade designed to cut soft metals.
The saws and blades are readily available from most hardware stores, and are relatively inexpensive - actually, one can purchase the power saw for less than the blade (a very odd situation indeed). Be sure to use plenty of cutting fluid designed for machining aluminium. Denatured alcohol (methylated spirit) is excellent, but is highly flammable. On the positive side, there is no oily residue that can spontaneously combust if incorrectly stored. Take extreme care with all cutting fluids, as there are risks with all of them - especially anything that works well with aluminium.
Before going into great detail, it is important to look at the following drawings so you can see what is involved. There is no doubt that you will need some fairly serious tools to be able to tackle something of this nature. Apart from the drop saw and metal-cutting blade mentioned above, you will need a drill press - a power hand drill could be used, but the results will be disappointing or very, very time consuming (or both). You also need taps (for cutting threads, not whatever you might have been thinking). A decent sized work area is also needed, and be aware that you will create a vast amount of aluminium chips, powder and swarf. The kitchen table is not recommended.
You also need the usual array of hand tools and miscellany - a hacksaw, files, drill bits, G-cramps, and various bits of scrap material that can be used to construct drilling and assembly jigs, etc., etc.
Figure 2 - Rear View Drawing
Looking from the rear, we see the two heatsinks, top and bottom covers, as well as the front panel. If you insist on using imperial measurements, then divide all measurements shown by 25.4 to obtain inches. The dimensions are not critical for the most part - adapt to suit your specific application.
Figure 3 - Top (Plan) View Drawing
The plan view gives you the rest of the picture. Again, adapt the dimensions to suit your application. The heatsink sections can be as large (or as small) as you need, limited by available funds and your patience (as always).
It is suggested that you use 6 x 25mm (0.25" x 1.0") bars for the spacers and 2 x 75mm (0.08" x 3.0") bar or cut sheet for the heat dissipation fins. The length depends on your intended height - all drawings here are based on a 150mm (6") heatsink height. Starting with a spacer section on the outside, the two sections are repeated (spacer-fin-spacer-fin ... spacer) until you have the length you need. To achieve the desired depth for the chassis, the plates are bolted together onto four (two for each heatsink) 10mm (3/8") threaded rods. Drilling the holes is a tricky and time consuming job - they must be aligned perfectly. The difficulty of the process is reduced if you create a jig to position the material as you drill the holes. Although it adds time to the whole process, partially pre-drilling each section with a centre drill (a special drill bit with a thick shank and small stubby tip) will ensure that the drill bit does not wander, causing holes to be off-centre. It is very important to clean up the holes after the drilling is completed. Using either a larger drill bit or a de-burring tool, chamfer the holes to remove the jagged edges that form when drilling. Don't even think about avoiding this step - otherwise the plates will not touch one another evenly so heat transfer will suffer, and your assembled heatsink will never be straight.
If you use 10mm threaded rod, the holes should be drilled to 12mm. This allows for the occasional hole that is slightly off centre, and also lets you align the assembly perfectly before it is finally clamped up tight.
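With the suggested 6mm spacers and 2mm fins, the stack depth follows directly from the spacer-fin-...-spacer pattern (n fins need n+1 spacers). A small sketch, assuming those stock thicknesses:

```python
def stack_depth_mm(n_fins, spacer_mm=6, fin_mm=2):
    """Depth of a spacer-fin-spacer-...-spacer stack: n fins, n+1 spacers."""
    return (n_fins + 1) * spacer_mm + n_fins * fin_mm

# How deep is a 36-fin heatsink section?
print(stack_depth_mm(36))  # 294 (mm)
```

Working backwards from the chassis depth you want tells you how many fins (and how much bar stock) to buy before you start cutting.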
Figure 4 - Fin and Spacer Detail Drawing
Keep repeating the above pattern until you achieve the desired depth for your chassis. The same procedure is used for both sides of the chassis. The threaded rods must be long enough to pass through all of the spacers and heatsink fins, and still have sufficient left over for nuts. Optionally, the rods may be extended further to create a mounting for handles at the front of the chassis.
Make sure that all surfaces are flat after drilling. De-burr all edges before assembly to get the best possible contact between the spacers and fins.
When everything is machined, cleaned and ready to assemble, begin by screwing a half nut onto one end (to become the front) of the four threaded rods. Make sure that the distance from the end of the rod to the face of the half-nut is correct for your handle sections (allow for front panel, tube (or long nut if you prefer), handle section and acorn nut). Then slide on the first spacer, followed by a fin, then a spacer (etc.) in alternating fashion. When completed, screw a full nut onto the remaining end (rear) of the rods - this should only be finger tight at this stage! You need to get the distance at the front dead right if you are planning on adding handles - the final acorn nut does not have much depth. Feel free to add washers at the front as well as the back, but make sure they are allowed for in the length of the threaded rod.
Alignment is critical. Because your heatsink is made from many separate pieces of aluminium, it is possible for it to assume many different and entertaining shapes, within the constraints of the threaded rod. None of the potentially entertaining shapes is useful - you need flat and square. Period.

To achieve this, you'll need a flat surface with a piece of scrap angle the same length as the heatsink assembly screwed to one edge. You also need a small hammer, a square, a scrap piece of aluminium and a piece of timber. First, lay the assembly onto the surface, with fins pointing upwards. Slide it across so that it contacts the angle. The alignment is by nature repetitive - each step will need to be repeated - possibly several times ...
Once completed and the nuts are firm, you can clamp a piece of wood on top of the assembly, thus clamping the heatsink to your flat surface. Be careful that you don't disturb any of your alignment during this process. Now the rear nuts can be tightened fully. They should be tight, but not ridiculously so - a stripped nut or threaded rod is not a bonus at this stage.
Upon removal from the assembly jig, the heatsink should be flat and square, requiring the minimum effort to obtain excellent thermal contact with the reinforcing bar or mounting plate (see below). If the nuts are tight enough, firm effort on your part to bend or twist the completed assembly will result in nothing more than heatsink imprints on your hands - the heatsink itself should remain nice and flat, with no bending or warping. If it does warp, you will have to repeat the final step, after loosening the nuts just enough to allow the assembly to be made flat once again. Although this is a frustrating step if things move, it is highly recommended - you will find out for yourself how rigid the heatsink is, and whether (or not) you need to add a reinforcing bar (or perhaps just tighten the nuts a little more).

If it is possible to distort the assembly (or it comes out pre-warped), then you must use reinforcing bars screwed to the back of the heatsink (all drilling and tapping must be into the spacer strips only). Alternatively, a full-length flat plate will also provide reinforcement, but at somewhat greater expense. These options are discussed below. Note that one or the other needs to be used anyway - at issue is how long it must be to achieve rigidity and good thermal contact.
After assembly and prior to fitting the heat spreader (either a reinforcing bar and/or flat spreader plate), it is advantageous to mill the rear of the heatsink to present the flattest possible surface. Given that few hobbyists have access to a milling machine, an alternative is to carefully file the entire rear surface - this will be an extremely tedious job, but will improve performance. A linisher (belt sander) can also be used, but the surface must be finished with a fine grit so it is as smooth as possible. The photo in Figure 6B shows the rear surface of a small heatsink I made, and the linishing marks are visible. I deliberately did not complete the job so the anomalies could be seen clearly.
Note Carefully - Once the heatsink is fully assembled and the back has been linished (or milled) flat, you must not disassemble it. If you do, you will have to re-surface the back of the heatsink (where heat-spreaders and then transistors are mounted) because it will be impossible to re-assemble the fins and spacers to obtain the original surface finish. You must make sure that everything is the way you want it to be before the base is machined, and after that it's no longer possible to make changes unless you are prepared to re-surface the back again.
Although the surface shown is very flat (better than 25um / 50mm), the surface finish will almost certainly not be good enough for direct mounting of semiconductors. This makes the heat spreader mandatory. You can check surface flatness by trapping a thin hair (for example) at various places on the surface with the edge of a steel rule. It should not be possible to pull the hair from beneath the edge of the rule at any position under the heat spreader location.
Figure 5 - Transistor Mounting Options
Before assembly of the heat spreader to the heatsink itself, apply heatsink compound between the spreader and the heatsink back. Apply a thin layer, and check that the two surfaces mate well by pressing the bar onto the heatsink. Remove it, and check that the heatsink compound is evenly distributed and shows signs of full contact. This will be immediately obvious upon inspection.
The transistors may be mounted directly to the reinforcing bar as shown. Normal transistor mounting procedures apply. Alternatively, attach a section of flat bar as shown in the right-hand drawing. This method produces a nice flat surface, ideal for mounting boards where the transistors (or MOSFETs) are under the PCB.

It is imperative that all holes for the bar or plate line up with the centres of spacer sections - drilling and tapping such that a hole is part way between a fin and a spacer will cause deformation of the assembly, resulting in a potentially dramatic loss of performance.
Figure 6 (A and B) - Photo and Scan of Small Demonstration Heatsink
The above photos are of a small demonstration heatsink I built. The left side (A) shows the general construction, while B shows the surface finish on the underside (the latter was scanned to get the best image of the surface). I deliberately didn't complete the machining process so you could see the aberrations that you will get when the heatsink is assembled. The dark areas are actually the shiny (not sanded) areas of the fins. This was despite my following the instructions listed above, so you will also have the same problem. Attempting to remove every imperfection is futile unless you have access to a milling machine - it will be too boring and frustrating for words. The last 5% of the surface could easily take 90% or more of the total construction time.
This is the reason for using a heat spreader - it distributes heat over a much larger surface area than you will get with any transistor, but the surface still needs some basic attention or heat transfer between spreader and heatsink will be badly affected. This will lead to an excessive temperature on the spreader, and an even higher temperature for the transistors. This machining is the most important part of the exercise!

I used a linisher (essentially an upside-down large belt sander), but careful filing will also work very well. Yes, it will be tedious and hard work, but the results will be worth your efforts.

You will also note that I didn't follow the procedure, in that I have fins (rather than spacers) at each end of the heatsink. Feel free to break the rules too, provided you work out exactly what you need. The whole idea is that this process allows you to make a heatsink that exactly fits your needs. In case you were wondering, the overall size of the heatsink pictured in Figure 6 is 100mm (h) x 60mm (w) x 52mm (d). There are 8 fins, each 32mm deep, giving a thermal resistance of about 1.16°C / W (if black enamelled) or 2.0°C / W (polished aluminium, as pictured).
Once the heatsink sections are drilled and tapped to accept the top and bottom covers, the heat spreader has been drilled and tapped for the transistors, and no further work needs to be done in that area, you may start the final assembly. The bottom needs to be drilled for external feet and for all internal hardware that will be attached to it. As shown in Figure 3, the heatsinks (as well as top and bottom covers) are drilled to accept mounting screws. The heatsink holes are tapped to suit the screws you will use, as is the heat spreader.
The front panel simply attaches to the two heatsink sections as shown in Figure 3, and the top and bottom panels attach to the heatsink and front and rear panels. Although no additional mounting for the rear panel to the heatsinks is shown, this can be added if desired. You can use 12mm square section, or a piece of angle if you prefer. The extra is not really needed, but without it the rear panel will flex a little when the top is not in place.

If you find that you need the reinforcing bar(s), first assemble the heatsink as described above, then drill and tap the holes for the reinforcing bar (in both heatsink spacers and bar). Assemble the heatsink as described, carefully turn it over and file, sand or otherwise machine the rear surface so it is as flat as possible. Then attach the reinforcing bar, and screw it down lightly. You will quickly see if there is any misalignment, and this must be corrected before you permanently attach the bar to the heatsink. In general, it is expected that if the assembly process is carried out carefully, the heatsinks will be very rigid indeed - almost as if they were one solid piece of aluminium.

The unit as described is very heavy, so file and sand all edges and outside corners of the ribs to reduce the risk of being cut. It will be much easier to assemble if you drill and tap all the heatsink holes for mounting the reinforcing bars and / or heat spreader before final finishing.

Paint the finished heatsink with a spray can of flat (matte) black enamel. It should be a self-etching type designed for aluminium finishing for best results. Don't be tempted to try for a perfect finish between the fins, as you will end up applying too much paint, which will reduce performance. You could get all the sections black anodised before assembly, but that would be an expensive option because of the number of pieces. Don't even contemplate pre-finishing the heatsink components - any paint between fins and spacers will degrade performance dramatically, and may cause the heatsink to bend or twist when it is assembled.
+ +The front panel arrangement may look rather suspect, with the 12mm bar attached using screws into blind holes in the panel. With a 6mm panel, this allows 5mm hole depth, and about 4mm of this can be threaded. Having done exactly this on many occasions, I can assure the reader that it works perfectly. You do need to be extremely careful to ensure that the holes don't go all the way through, but other than that, the process is straightforward. Tapping blind holes is a cow, but you can cheat and use self-threading screws. The latter must be square-ended - conventional tapered self-tapping screws will not work! If you are concerned, use a good epoxy resin (24 hour setting type - not 5 minute) as well as the screws for a permanent bond.
+ + +As described, each heatsink section will have a thermal resistance of around 0.12°C / W. This is assuming a middle-of-the-road value for emissivity of about 0.8 - typical of a flat black surface finish you will be able to apply at home. This is a very good figure, but it will be degraded if the heat spreader is too small, or makes ordinary (as opposed to excellent) thermal contact with the heatsink, etc. The distribution of transistors will also have a bearing on the thermal performance.
+ +However, even if we were to assume the worst case and the thermal resistance is effectively doubled, it is still very good at 0.24°C / W. The 'typical' figure (0.12) was calculated using the ESP heatsink calculator program (see the downloads section of the ESP website). Heatsink temperature was taken as 60°C, at an ambient temperature of 25°C. Even using the worst case figure, this means that when dissipating 80W (as discussed at the beginning of this article), the heatsink temperature will stabilise at around 20°C above ambient (45°C at 25°C ambient) ... and that's worst case!
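The temperature-rise arithmetic above is simple enough to sketch in a few lines of Python. The figures (80W dissipation, 0.12°C/W typical and 0.24°C/W worst case, 25°C ambient) are the article's; the function name is illustrative only.

```python
def temperature_rise(power_w, rth_c_per_w):
    """Temperature rise above ambient = dissipation x thermal resistance."""
    return power_w * rth_c_per_w

ambient = 25.0   # °C
power = 80.0     # W, as discussed at the beginning of the article

for rth in (0.12, 0.24):   # typical and worst-case °C/W from the text
    rise = temperature_rise(power, rth)
    print(f"Rth {rth}°C/W: +{rise:.1f}°C -> {ambient + rise:.1f}°C absolute")
```

Even the worst-case figure gives a rise of just under 20°C, which is where the ~45°C stabilised temperature quoted above comes from.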
+ +Because of the large thermal mass (as well as actual mass), this heatsink will take a significant time to reach full operating temperature. Each heatsink will weigh in at about 5.9kg for aluminium alone, and even the baby one I made (see Figure 6) weighs 480 grams (admittedly, that's with the steel threaded rod, washers and nuts). Both heatsinks will weigh about 12kg (over 26 pounds in the old measurements), and you have yet to add the panels, transformer(s) and other components.
+ +The basis for this article was originally written by John Inlow, and was available on his website up until 2002. Drawings have all been substantially revised (and colour added to make them clearer), and are based on John's originals, as is the photo in Figure 1.
+ +![]() | + + + + + + + |
Elliott Sound Products - DSPs and Audio

Introduction
The DSP has become part of so much equipment in the last few years that it is hard to imagine many products without at least one digital signal processor involved. One area where DSPs have not yet gained full acceptance is audio. While there are many audio products that use digital signal processing, few are considered high end applications.
Studios are usually more than happy to use a DSP based effects unit to provide echo, reverb, phasing, flanging, pitch shift and many other functions. The end result may easily find its way into a high end audiophile's collection, and the presence of the DSP may or may not go unnoticed. One of the products that certainly uses current DSP chips to their utmost is made by DEQX [1], and the capabilities of the system are very impressive indeed.
Other products that make extensive use of DSP chips include digital crossovers, equalisers and other equipment that is (in general) more likely to be found in studios and sound reinforcement than in home systems. This is changing though, and we are starting to see more home equipment utilising DSPs to decode multiple DVD formats, including the likes of DivX. That the world of the DSP is encroaching on traditional analogue territory is undeniable, but it is important to understand that a DSP is not a panacea, and cannot perform miracles.
+ +Figure 1 shows the internal (simplified) block diagram of a DSP chip, based on that in the SHARC® data sheet. This article is not intended to explain exactly how a DSP works (and I don't know enough about the programming of them to be much use in that area), but rather to give the reader a brief overview of the DSP before explaining what they can't do.
+ + +A digital signal processor is a very sophisticated processor chip, whose architecture has been specifically optimised for the task of high speed 'real-time' data processing. Speed is of the essence, because although audio may not seem that fast, real-time manipulation requires that the processor be fast enough to deal with every sample as it is received. It is not possible to slow the processing down, as might happen with a PC performing DSP functions on a file or block of data in memory. Nor is it possible to ignore samples if the DSP can't keep up. As sample rates increase, so too does the requirement for DSPs to be able to keep pace.
One of the things that slows down the whole process of executing DSP algorithms is transferring information to and from memory. This includes data, such as samples from the input signal and the filter coefficients, as well as program instructions, the binary codes that are loaded into the program sequencer. For example, suppose we need to multiply two numbers that reside somewhere in memory. To do this, we must fetch three binary values from memory, the two numbers to be multiplied, plus the program instruction describing what to do. In a traditional microprocessor, this requires three clock cycles just for the fetches.
+ +The Analog Devices SHARC processor (one of the more popular DSPs for audio work), uses what AD call 'Super Harvard Architecture', and this is the origin of the name. By using separate memories and buses for program instructions and data, a piece of data and a program instruction can be fetched simultaneously. There is a lot to it, as Figure 1 shows - and this is a simplified block diagram.
+ +
Figure 1 - Simplified Block Diagram of DSP Integrated Circuit
High speed I/O is a key characteristic of DSPs. The ultimate goal is to move the data in, perform the maths, then move the data out again before the next sample is available. If a DSP can't do that, then it's of no use to anyone.
+ +Much of what a DSP has to do for end-user audio applications is based on filters (crossover networks and equalisation). There are two filter types that are commonly used, and while neither would seem very challenging on the surface, when the time constraint is included it becomes critical. The two main filter types are known as FIR (Finite Impulse Response) and IIR (Infinite Impulse Response). Analogue active filters are equivalent to an IIR digital filter, as they use feedback. FIR filters cannot oscillate, but IIR filters can (as can analogue filters).
+ +An FIR filter has no feedback, but uses a finite number of previous samples for calculation. Its response to a given sample ends when the sample reaches the end of a circular buffer. In contrast, an IIR filter uses recursion - computer terminology for a function that calls itself (so is sometimes called 're-entrant'). The output of an IIR filter is a weighted sum of input and output samples.
+ +To give you an idea of the process steps for an FIR filter, have a look at the following ...
+ +A traditional microprocessor would usually require one clock cycle to perform each instruction from 1 to 14. A DSP may be able to execute the entire block from 6 to 12 in a single clock cycle, resulting in a significant speed increase. These tasks may be performed many times for each input sample, to handle multiple coefficients and work with previous samples as needed for the required filter response. This is how the DSP is capable of working in real time, without the significant expense of using an extremely high clock speed.
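The multiply-accumulate loop at the heart of an FIR filter can be sketched in plain Python. This is only an illustration of the structure (circular buffer of recent samples, one coefficient per sample, accumulate the products) - a real DSP collapses the inner loop into single-cycle MAC operations. The 4-tap moving-average coefficients are an arbitrary example.

```python
from collections import deque

def fir_filter(samples, coeffs):
    """Weighted sum of the last len(coeffs) input samples - the
    multiply-accumulate loop a DSP executes for every sample."""
    # circular buffer of the most recent samples, newest first
    history = deque([0.0] * len(coeffs), maxlen=len(coeffs))
    out = []
    for x in samples:
        history.appendleft(x)
        acc = 0.0
        for c, s in zip(coeffs, history):   # the MAC inner loop
            acc += c * s
        out.append(acc)
    return out

# 4-tap moving average: smooths a step input over four samples
print(fir_filter([1, 1, 1, 1, 1], [0.25, 0.25, 0.25, 0.25]))
# -> [0.25, 0.5, 0.75, 1.0, 1.0]
```

Note that there is no feedback path: the output depends only on input samples, which is why an FIR filter cannot oscillate.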
+ +The total delay between the input and output is typically no more than a few milliseconds. This is a major difference between analogue and digital processing - analogue filters have a total time delay of a few microseconds at most, but a DSP needs time to accumulate enough samples to work with. A single sample is useless for a filter, because it only has information about the instantaneous level. To obtain frequency data, a number of samples are needed, with the total number tending to increase as frequency is reduced. While 1ms is enough to capture 10 cycles at 10kHz or one complete cycle at 1kHz, it is less useful at 20Hz, because 1ms only describes 1/50 of a cycle. A 20Hz waveform has a period (time for a complete cycle) of 50ms, but the DSP does not appear to need a complete cycle to enable a filter to function as it should.
MP3 - the application of DSP doesn't always need a dedicated IC. There are many PC/ Mac/ Linux programs that allow you to convert CD files to MP3, and they use the PC to perform the processing. There is a great deal of processing needed to make the conversion, as the processor has to determine which parts of the audio stream are 'inaudible' so they can be removed. The compression algorithms used are very complex, and some encoders do a much better job than others by careful refinement of the maths functions to get the best result. It's still MP3 at the end, but there are significant differences. Note that no lossy compression algorithm can process pink noise so it sounds like the original. Because pink noise contains energy spread continuously across the full frequency range (equal energy per octave), if anything is 'discarded' the sound is changed (and instruments such as a harpsichord are equally afflicted - they never sound 'right' with MP3).
+ +ProTools - DSP functionality is also found in applications such as ProTools, a very popular sound editing and manipulation suite of professional recording/ editing software. ProTools allows the user to modify sounds, change the pitch of vocals, remove unwanted background noise, replace dialogue, alter the tempo of audio files, and much more. It is even possible to make an extremely average (or even bad) singer sound like a diva - which is cheating the public IMO, but there's probably not much we can do to stop that.
+ + +There are now many digital crossovers available, and I have one that I use to evaluate speakers and determine the optimum crossover frequencies for different drivers. It includes parametric equalisation, time delay to account for driver offset and many other features that were almost unthinkable only a few years ago. Many such systems are available, primarily aimed at professional sound reinforcement - although some people do use these systems for domestic systems as well. Dedicated boards are available to OEMs (original equipment manufacturers) from a number of vendors, and we are seeing them start to form an integral part of new designs.
+ +Systems such as the DEQX (Digital EQ and Xover - pronounced dex) are very powerful, and can determine the optimum crossover frequency, filter slopes (which can be asymmetrical) and EQ for a given set of drivers. All this from a few measurements taken with a microphone plugged straight into the unit itself. It not only performs the crossover functions, but is a complete digital analysis package as well. By no means is the DEQX alone, although it was one of the first to offer such a complete package with such a high degree of functionality.
+ +The equalisation functionality is extraordinary, so much so that it may seem that we at last have a foolproof means of turning the proverbial sow's ear into a silk purse - despite the old adage claiming that it is not possible to do so. (See footnote.) With the capacity to handle equalisation tasks that are simply impossible with analogue equipment, nirvana seems so close we can almost touch it.
+ +Bang and Olufsen use DSP in the BeoLab 5 speaker system to provide their Adaptive Bass Control, which "will listen and analyse the sound of the room" at the press of a button. DEQX can also provide room equalisation, as can many other systems. Dolby systems are used extensively in cinema installations, providing a wide range of equalisation functions - all in the digital domain.
+ + +The term 'time alignment' refers to the use of sloped baffles, baffles with steps or the use of an electronic delay to ensure that the acoustic centres of the drivers are aligned in such a way as to ensure that the signal from all drivers in the enclosure arrive at the listener's ears at the same time. Each method can be arranged to achieve the desired result, but there may be inherent problems. For example ...
+ +In general, time alignment will theoretically produce a better result than a non-aligned system, but in reality most people won't be able to hear any difference - especially if fast rolloff filters are used in the crossover. See Phase Correction - Myth or Magic for some background information on the basics of time alignment and/ or phase correction.
+ +In the majority of home hi-fi systems, it's the tweeter signal that needs to be delayed, because the tweeter has a much shorter mechanical structure than the midrange (or mid/ bass) driver. If the acoustic centre of the tweeter used is (say) 35mm closer to the listener than that of the midrange driver, you need to apply a delay of 100µs. This is calculated based on the speed of sound and the acoustic centre offset. Naturally, you have to use a median value for the speed of sound, since it varies depending on temperature and humidity.
+ +Assume the speed/ velocity of sound to be 343m/s (this is at a temperature of 20°C, 50% humidity). Air pressure has not been included because it has almost no effect. That means that sound will travel 343mm in one millisecond, or 34.3mm in 100µs. Needless to say you can calculate the delay needed for any driver offset using the info above. The velocity of sound depends heavily on temperature, and while it is certainly possible to include a temperature sensor to adjust the delay, that would probably not be considered sensible.
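As a sketch of the arithmetic just described, the delay for any acoustic-centre offset is simply distance divided by the speed of sound (function name illustrative only):

```python
SPEED_OF_SOUND = 343.0   # m/s at 20°C, 50% humidity, as assumed above

def delay_for_offset(offset_mm):
    """Delay in microseconds for a given acoustic-centre offset."""
    return (offset_mm / 1000.0) / SPEED_OF_SOUND * 1e6

# 35mm tweeter offset -> ~102µs (the article rounds this to 100µs)
print(f"{delay_for_offset(35.0):.0f} µs")
```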
+ +Before worrying about adding delays to create a time aligned system, you also need to consider the wavelength. That's determined by the speed of sound and the frequency. At 343Hz the wavelength is exactly 1 metre, and there is no point trying to correct for a phase shift of less than a few degrees (a 90° phase shift causes a 3dB change in level). Even as much as a 30° phase shift only causes a level change of 0.3dB, so it's important to understand that attempting time alignment at frequencies much below 500Hz or so is fairly futile. You'll most likely be able to measure the difference with the right equipment, but it will almost certainly be inaudible with programme material.
+ +It is beneficial to establish the relationships between frequency and wavelength, distance and time, and this may be determined by ...
wavelength = velocity / frequency
period = 1 / frequency
time (seconds) = distance (metres) / velocity (343m/s)
A useful thing to remember is that a 1µs delay is equivalent to 0.35mm (close enough). So for any given frequency we can determine the wavelength and period (the time for one complete cycle). From that, you can work out the time delay for each degree of phase shift. For example, at a crossover frequency of 3.0kHz, the wavelength is ...
wavelength = 343 / 3000 = 114mm
period = 1 / 3000 = 333µs
time / degree = period / 360° = 333µs / 360 = 925ns
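These three relationships are easily checked with a few lines of Python (the 3kHz crossover example from the text is used below; function names are illustrative):

```python
C = 343.0   # speed of sound, m/s

def wavelength_mm(freq_hz):
    return C / freq_hz * 1000.0

def period_us(freq_hz):
    return 1e6 / freq_hz

def time_per_degree_ns(freq_hz):
    return period_us(freq_hz) * 1000.0 / 360.0

f = 3000.0
print(f"wavelength {wavelength_mm(f):.0f}mm, period {period_us(f):.0f}µs, "
      f"{time_per_degree_ns(f):.0f}ns per degree")
```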
At a crossover frequency of (say) 300Hz between bass and midrange, the wavelength is 1.15 metres. You can have up to 30° phase shift (a delay of about 278µs at 300Hz), and the level of the combined electrical waveforms is down by only 0.3dB. It should be obvious that any delay caused by misaligned acoustic centres is negligible (perhaps 100-200µs at most), and will create far less than 30° phase shift. Time alignment is normally only ever required between the midrange and tweeter unless the bass-midrange crossover is at a much higher frequency than normal. As an example, with a 300Hz crossover between bass and midrange and an acoustic-centre offset of 100mm (285µs - more than you'll ever get with most driver combinations), the cancellation should not be more than 0.32dB and can be ignored.
+ +Now you have everything you need to be able to work out if you are likely to gain any real advantage of a time aligned system. Consider the frequency response of the individual drivers (especially peaks and dips in their response at or near the crossover frequency), the driver offset, the distance between the drivers and the listener, room effects (see below for more) and just how well you can hear small variations in response with your favourite programme material.
+ +Using a high-slope crossover (24dB/octave) minimises the width of any notch that's created, and you may well find that the difference between time aligned and 'normal' is inaudible other than by direct and immediate comparison. It goes without saying that any test must be double-blind. If you know which configuration you are listening to, you will hear a difference, even if one doesn't exist at all.
+ + +The term 'room EQ' is very misleading, especially if you assume that all the anomalies within the room can be dealt with, without having to resort to room treatment. In the old days (pre DSP), if a room had a problem, you had to make or do physical 'things' to correct it. Absorbers, resonators, diffraction gratings, heavy curtains, thick carpet and speaker placement being just a few. (The simple reality is that the 'old' methods are still required - nothing has changed except for marketing hype.)
+ +Now, all we have to do is set up a measurement microphone and let the system loose. All the problem areas will be cleaned up and we will have "perfect sound forever". Right?
+ +Wrong! This is one of the major misconceptions that people have of digital EQ systems. A simple statement of absolute fact is warranted ...
You cannot correct time with amplitude
An equalisation system cannot compensate for acoustic effects that are time related. No-one would attempt to create a 'time-aligned' speaker system by applying equalisation - it wouldn't work, and the creator of such a travesty would be the butt of a great many jokes - and rightfully so! Reflections within a room are an effect of time, and no amount of messing around with the amplitude (level) of a signal can fix a problem that is a direct result of a time delay - even if done at specific frequencies. In fact, there is absolutely nothing you can do at the source that will have an effect. If an acoustic signal reflects off a window, the only thing that will stop it is to turn the signal off or open the window. Naturally, any other acoustic signal from any source will also reflect off the window. Can EQ fix this? Of course not. One would be quite mad to imagine that it could.
+ +I have seen claims that a DSP can 'fix' room modes and other anomalies at a fixed position, but the claimants fail to point out that such a fixed position may only encompass a few cubic centimetres. Also missed is the fact that our hearing (ear-brain combination) ignores (at least to a degree) many of the peaks and dips that can be detected by a microphone, and if one were to equalise based on the mic response, the result would sound worse than one could possibly imagine. However, there is an exception ... see below.
+ +A time delay will cause problems over a wide range of frequencies, but is likely to be most troublesome where the time is in direct relationship to the wavelength of the affected frequencies. It is because specific frequencies are affected that it may be assumed that a filter circuit might help, but this approach has neglected to consider the real problem.
Just imagine how we would all laugh at a motorist refilling the petrol tank because his car had stopped, having completely failed to notice that the car stopped because it crashed into a tree.
For some unknown reason, people take the application of EQ (which changes the amplitude of specific frequencies) to correct time issues quite seriously, in much the same manner as the motorist just described. Hmmmm!
The velocity of sound in air (at sea level and 20°C) is 343m/s, so the wavelength (λ) of a 343Hz signal is 1 metre. If a bidirectional sound source is positioned 500mm from a wall (as shown in Figure 2), any signal at 343Hz will be reinforced by the reflection of the rear radiation from the wall, because the reflection has travelled an additional metre and is in phase with the forward radiated signal, causing a peak. The reflected signal adds to the direct radiated signal. At 171.5Hz, the reflection has still travelled an extra metre, but the reflected wave will now partially cancel the original signal because it is 180° out of phase, and will create a notch. The same effects occur at all frequencies whose wavelengths are multiples or sub multiples of 1 metre (2m, 500mm, 333mm, 250mm, etc.). How can this be equalised? Quite obviously, it can't be. As the frequency increases, the number of peaks and dips/ notches also increases.
+ +
Figure 2 - Bi-Directional Loudspeaker, 1 Metre from Wall
If we analyse the end result of such a reflection, we see a comb filter effect. Distance between comb notches is determined by the time delay, and the relative amplitude of each notch depends on the losses the reflected signal encounters. If the rear wall just absorbs the sound then no reflection is created - the problem does not occur (this is one correct way to deal with such issues). So far, we've only looked at one reflection, but in reality there will be many more.
+ +Note that for the sake of discussion, the speaker is assumed to be acoustically transparent. The idea is to show the basics rather than to become bogged down with the complexity of any room reflection. Even with this simple analogy, the number of anomalies created by a single reflection is already at the limit or beyond the capabilities of any equaliser, whether analogue or DSP based. In reality, there will not be one but a multiplicity of reflections from ceiling, side walls, floor, rear wall, etc., etc. ... and all with different frequency response characteristics. The end result becomes so complex that it is impossible to equalise such a large number of problem frequencies - even assuming for a moment that it would be sensible to do so.
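The comb-filter arithmetic for the single reflection described above can be sketched as follows. The direct signal is summed with an equal-amplitude copy delayed by the extra 1 metre of path; surface losses and the high-frequency rolloff shown in Figure 3 are ignored for simplicity, so the notches here are deeper than any real room would produce.

```python
import math

C = 343.0        # speed of sound, m/s
tau = 1.0 / C    # delay for the extra 1 metre of path, ~2.92ms

def combined_level_db(freq_hz):
    """Level of direct + equal-amplitude reflected signal, relative to the
    direct signal alone: |1 + exp(-j*2*pi*f*tau)| = 2*|cos(pi*f*tau)|."""
    mag = 2.0 * abs(math.cos(math.pi * freq_hz * tau))
    return 20.0 * math.log10(mag) if mag > 1e-12 else float("-inf")

print(f"{combined_level_db(343.0):+.1f} dB")   # in-phase: +6.0 dB peak
print(f"{combined_level_db(171.5):+.1f} dB")   # 180° out: very deep notch
```

Sweeping the frequency reveals the alternating peaks and notches of the comb, spaced ever closer together (on a logarithmic frequency axis) as frequency rises.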
Some readers may recall a time when "direct - reflected" was not only an advertising slogan for one manufacturer, but the speakers were set up more or less as shown in Figure 2. Let the reader make of this what s/he will.
To make matters worse when room reflections are involved, every location in the room will be affected differently. It is quite obvious that application of multiple different EQ settings simultaneously to a single driver is not possible. In Figure 3 (based on the example in Figure 2), the single reflection has been rolled off at 6dB/octave above 1kHz to account for the fact that high frequencies are easily absorbed. This may be over-optimistic for some reflective surfaces, but is sufficiently realistic for the purpose of demonstration. Without the rolloff, the deep notches continue up to the highest frequencies, and get closer together as frequency increases.
+ +
Figure 3 - Comb Filter Created by Loudspeaker & Wall
Note in particular the depth of the notches at 172Hz and 500Hz. Believe it or not, these are achievable using a microphone. When a system showing such deep notches is auditioned, we hear nothing of the sort. We will hear notches that are created by an incorrectly set up crossover network or out-of-phase drivers (often referred to as a 'suck-out'), but we tend not to hear deep notches caused by room reflections.
+ +Fortunately for us, it is our hearing that comes to the rescue with reflected sounds - at least to some extent. While a microphone will pick up the effect shown above, we will hear only a colouration to the sound, which can still be quite disturbing. Equalisation does little to help, because the colouration is caused by time delays, not amplitude variations within the driver itself. We will not hear the full (dramatic) effects of the comb filter because our hearing has evolved to reject early reflections (to a degree at least). We don't start to hear a reflection as an echo until it is delayed by 30ms or more. If a system were (somehow) equalised based on the measurement microphone's data, the end result would sound nothing like we may have imagined it should - it will be a disaster.
+ +Herein lies the problem, and while still uncommon in home systems, it has been repeated in countless cinemas worldwide (another topic, another article) - this is definitely not something to aspire to. Microphones and ears respond very differently to sound, so to equate what a microphone 'hears' with what we hear is simply wrong. Any recording engineer will tell you how critical microphone placement can be to get the sound you want from an instrument. The very idea that a room can be 'equalised' with a microphone, a few test signals and DSP based system is flawed in the extreme.
+ +Even very basic loudspeaker measurements need to be conducted with great care. Ideally, a loudspeaker should be measured under completely anechoic (no echoes) conditions to ensure that reflections do not 'create' problems that don't exist. The topic of loudspeaker measurement has been covered in any number of books, such is the difficulty of the task. The designer also needs to know what measured effects should be ignored because they are not relevant to reality (microphone artifacts as opposed to what is audible). A microphone is pretty stupid the truth be known, and automated measurement systems use many compromises to eliminate (as far as possible) room reflections. These compromises have varying degrees of success, but none can compete with our hearing for rejection of extraneous reflections.
+ +While many people will still claim that (full range) room EQ is possible, it must be understood that ...
+ +It is not practical to have to sit in one rigidly fixed position to listen to music, nor is it practical to re-equalise the room because you moved the coffee mug on the table. Even a small re-arrangement of furniture or other items in a room will create new peaks and dips that can be measured if the system has sufficient resolution. I've never heard anyone complain that someone moved their coffee mug and ruined the sound. A microphone hears the difference, we don't. At any frequency above 100Hz or thereabouts (a wavelength of 3.45 metres), any attempt at room EQ will create an overall frequency characteristic that is optimised for a microphone, not our hearing. The two will usually be very much at odds with each other.
+ +Interestingly, it is possible to perform some degree of EQ for sub-bass, at least within a typical home listening room. Why? Because the wavelengths are large compared to distance within a room. The room's standing wave patterns can cause extreme 'one note' bass, but this can often be tamed enough by EQ to obtain a very satisfactory end result - at the listening position. Other locations in the room will have a 'hole' at the frequency that has been equalised out, but this is usually not a major issue. The listening position is usually sufficiently large for a number of people to experience an acceptable balance with most material. It is invariably better to experiment with alternate locations for a subwoofer before applying any EQ at all. The location that requires the least equalisation is the ideal, but by Murphy's law that means the sub will be in the middle of a doorway or some other equally non-sensible location. You will always have to compromise somewhere, but to assume that a DSP will fix everything is naive and misguided.
+ +By applying EQ to reduce the level at a troublesome frequency (or perhaps two frequencies), we can often obtain a system that may not be perfect, but will give good performance down to around 20Hz. There may also be dips in the response, but any attempt to apply EQ to boost those frequencies is ill advised. In general, applying boost does not help sound quality, but can require an astonishing amount of power (see note). If 10dB of boost is applied at one frequency, this will demand 10 times as much power as the unequalised system at that frequency. Few subwoofer amplifiers have enough power to accommodate this. A modest amount of boost can be used to extend the bottom end of sealed enclosures, but boost must never be applied below the tuning frequency of a vented (or passive radiator) box.
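The boost-to-power relationship quoted above is the standard decibel power ratio, sketched here for reference:

```python
def power_ratio(db_boost):
    """Power ratio corresponding to a boost (or cut) in dB."""
    return 10.0 ** (db_boost / 10.0)

print(power_ratio(10.0))   # 10dB of boost -> 10x the power
print(power_ratio(3.0))    # 3dB of boost -> ~2x the power
```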
+ +There is also a point where room propagation changes from a travelling wave to 'pressure mode' (also known as 'room gain'). The room itself becomes pressurised in sympathy with the bass frequencies, and this effect is very prominent with high power car systems. As a first approximation, a room will enter pressure mode when the longest dimension of any boundary wall is about 1/2 wavelength [3]. For a room with a largest dimension of perhaps 5 metres, pressure mode can be expected below about 35Hz. Once a room is in pressure mode, it can be equalised with no problems. Although a side issue, it is important when discussing room EQ.
Note: When applying EQ to a subwoofer, the system may not require a vast amount of power, but a great deal of voltage swing from the amplifier. To correct an anomaly close to a subwoofer's resonant frequency uses almost no power at all, but still requires the voltage swing that would produce that power into the nominal load impedance. This is actually a surprisingly difficult area to explain to those who don't see it from the basic description here. Unfortunately, it is outside the scope of this article, so for the sake of simplicity we can simply assume that 10dB of boost needs 10 times the power.
Of course, one must be prepared to experiment with an idea, no matter how bizarre it may seem. Quite some time ago, I equalised my system to get a nice flat response at my listening position. This was done very carefully, and the end result looked pretty damn good. The sound seemed 'better' (i.e. different) for a while, but the EQ remained in the system for only one day. It was wrong! It sounded wrong, and rapidly became irritating. It did help the sub-bass (and that is equalised to this day), but everything else just didn't make the grade.
During the EQ process, I identified an anomaly with the right speaker. This was caused by a reflection from a coffee table, and although completely inaudible, the microphone picked up the reflection and the analyser thought there was a peak at that frequency. There wasn't then, there isn't now, and there never was a peak. A far greater change in general tonality is easily obtained by clasping one's hands behind one's head while listening, but no-one complains about that. Should we add an EQ setting for that just in case we want to clasp hands behind our heads while the hi-fi is on? No, I didn't think so either.
A small amount of equalisation can often be used with great success to compensate for a minor deficiency in a loudspeaker driver. However, any driver that needs radical EQ to perform satisfactorily simply should not be used. Likewise, no amount of EQ will compensate for severe driver deficiencies such as cone break-up or high levels of intermodulation distortion. If the speaker enclosure isn't rigid enough, there will be panel resonances at various frequencies. Such resonances can be in or out of phase, are almost always distorted (not a perfect representation of the source signal) and can have a significant negative impact on sound quality. The issues discussed here are all physical effects, and cannot be 'corrected' by equalisation.
Ultimately, the performance of a loudspeaker is determined by the laws of physics. No amount of EQ can make a 100mm (4") driver perform like a 380mm (15") unit or vice versa. Cone surface area determines the lowest frequency where a driver can move enough air to create a useful sound wave, based on the size of the outer enclosure - the room itself.
As an example, a 380mm driver with 10mm of cone travel can move about 1.13 litres of air - not very much (I have assumed the entire diameter for radiating surface for the sake of explanation). A 100mm driver with the same 10mm of cone travel can only move 76 ml (millilitres or cc). To be able to move the same amount of air as the 380mm unit, the 100mm driver would need a cone travel of 150mm! Even if this were possible (which it isn't), the cone area is so tiny compared to wavelength that the radiation efficiency is extremely low. While there are no definitive tables relating to cone area vs. lowest frequency for direct radiating loudspeakers, I have verified that a 200mm driver cannot reproduce useful bass in a half space environment below about 40Hz - regardless of added bass boost.
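The swept-volume figures above can be checked with a simple piston approximation - a minimal sketch, treating the full cone diameter as the radiating area exactly as the text assumes (the pure piston model gives roughly 79ml and 144mm rather than the article's rounded 76ml and 150mm, but the conclusion is identical):

```python
import math

def swept_volume_litres(diameter_m, travel_m):
    """Air volume displaced by a piston of the given diameter over the given travel."""
    area = math.pi * (diameter_m / 2) ** 2   # radiating area, m^2
    return area * travel_m * 1000            # m^3 converted to litres

v380 = swept_volume_litres(0.380, 0.010)     # ~1.13 litres
v100 = swept_volume_litres(0.100, 0.010)     # ~0.079 litres (~79 ml)

# Travel a 100 mm cone would need to displace the same volume as the 380 mm unit
travel_needed = (v380 / 1000) / (math.pi * (0.100 / 2) ** 2)   # ~0.144 m

print(round(v380, 2), round(v100 * 1000), round(travel_needed * 1000))
```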
A small diaphragm can reproduce very deep bass if the outer enclosure is small. Headphones are a good example, where the outer enclosure is only the small air-space between the diaphragm and your ears. Bandpass speaker enclosures also make use of a small space for the driver to radiate into, and system tuning is then used to obtain the best compromise between bandwidth and efficiency.
The larger the area to be filled at a given low frequency (and sound level), the greater radiating surface is needed. Reproducing 25Hz in a large venue demands that a huge amount of air is moved, and this can only be achieved with horn loading, large diameter drivers, or high velocity air using a bandpass enclosure (for example). If the latter makes noise (not uncommon), no DSP can prevent or even reduce that noise - it can only be dealt with by physical intervention.
At the other end of the scale, a 100mm driver cannot reproduce 20kHz with any degree of usefulness. The cone diameter is many wavelengths at this frequency, so even if the cone were infinitely stiff and light, its diameter is such that it will cause severe lobing, with the on and off-axis levels being radically different. A conventional loudspeaker simply doesn't work well if the diameter exceeds one wavelength. Some drivers use an auxiliary tweeter ('whizzer') cone to obtain improved high frequency dispersion.
In any of the cases described above, application of equalisation to make a driver work outside its physical limits simply cannot work, and attempting it is pointless at best. Of the other effects, no DSP system is capable of the instantaneous correction needed to make a poorly designed driver perform well. For example, the amount of computation needed to correct intermodulation distortion is astonishing. The DSP system would need to know the exact position of the cone at any given time, and would need to be programmed with every characteristic of the driver at every cone position. Magnetic path saturation, voicecoil instantaneous temperature, cone breakup modes, applied signal level and frequency all influence the way a loudspeaker performs. Papers have been written on this topic, and it is claimed that some success has been achieved. While certainly possible, it is no doubt far cheaper to use a better driver in the first place. If I sound less than convinced, there is probably a good reason for it.
To return to the car mentioned earlier, adding DSP functionality in the form of anti-lock brakes, traction control and active suspension cannot compensate for a set of raggedy old tyres. The automatic systems will do their best to maintain stability, but ultimately the raggedy tyres will lose grip and the car ends up wrapped around a tree (again). If there is anything wrong with the tyres, cheap and nasty suspension components are used, or if the suspension/steering geometry is wrong, all the DSP in the world won't help. Again, the laws of physics come into play. Any system can only be as good as its worst component, and this is especially true with loudspeakers (and cars).
If the loudspeaker itself is not up to the task or the enclosure design is wrong, throwing DSP systems at it won't help. While it may appear to improve the system, a careful listen will reveal that all the problems that existed before still exist. Some may be masked to a degree, but in general you simply create new problems that are worse (but more subtle) than the originals. When distortion is analysed, the DSP will make it worse if boost is added at low frequencies. The extra cone travel needed to reproduce the boosted low frequencies simply increases intermodulation distortion.
While it becomes possible to produce a loudspeaker that appears to be completely flat from DC to daylight (as measured), the DSP cannot compensate for the defects that we would have heard on the raw driver(s). I mention all of this because there seems to be a school of thought that the DSP truly is a panacea, and that silk purses can now be freely fabricated from sow's ears. (See footnote.) I have equalised drivers during any number of experiments, and it is almost universally true that any driver that needs drastic intervention to achieve acceptable response sounds like crap. Using EQ may make it look alright, but it still doesn't sound any better.
In the end, it is completely pointless to expect a (relatively expensive) DSP system to compensate for poor driver selection or inadequate enclosure design in a loudspeaker. Increasing the amount of digital processing to attempt to compensate for bad drivers or poor design is false economy. Good performance is an end in itself, and if you have good drivers in well designed cabinets you should get very good performance from the system regardless of how it's driven. The DSP can then be used to perform time alignment, optimise the crossover and perhaps add a small amount of EQ to make the system as close to perfect as it can be.
It should be fairly obvious that using a DSP with cheap and/or poorly designed drivers, an incorrectly aligned enclosure, or other fundamental design issues cannot achieve the results obtained if everything is right beforehand. Simply failing to use the right amount of acoustic damping material in a speaker box will create issues that the DSP cannot 'fix'. Like wall reflections, internal box reflections are a function of time, and cannot be corrected with EQ. How can a DSP be expected to compensate for cone breakup effects, for example? These effects vary (in some cases unpredictably) with level and frequency, and are a physical manifestation of an inherent problem in the driver. DSP cannot correct this, as the complexity of breakup artifacts is more than can be handled by any current DSP.
According to the opinions of some, using DSP allows one to disregard the physical loudspeaker, and simply use the DSP to get whatever result you desire. This is a fool's paradise - it completely ignores the laws of physics, and relegates reality to a secondary position. An untenable position at best.
Explanation from The New Dictionary of Cultural Literacy, Third Edition, 2002:
"You can't make a silk purse from a sow's ear" - it is impossible to make something excellent from poor material.
There is no doubt at all that DSPs can achieve wonderful things for us in the world of audio. However, we must always remember that there are limitations. There are some things that the DSP cannot do - regardless of claims to the contrary. Always keep in mind that external time related issues can never be corrected by the DSP - they are outside the influence of the DSP, and nothing can change that (other than DSP controlled active wall surfaces - could be a tad expensive).
Starting with excellent components and an accurate initial design will give very good results indeed - usually far better than can be achieved using passive crossovers. A DSP based system may also beat an analogue-based fully active system using the same drivers, although the difference will usually be fairly subtle if the analogue design is done correctly. Any intended loudspeaker that will implement a DSP should be engineered to be as good as it can be using conventional design practices. If the results are unsatisfactory, they will remain unsatisfactory after the DSP is added. Sure, it might sound impressive during an initial audition, but the faults will reveal themselves longer term. The most common complaint about systems that aren't right is that they cause listener fatigue.
The DSP cannot, ever, make cheap undersized drivers sound as good as an equivalently priced system using high quality components in a well designed enclosure. A 100mm driver can never be made to perform like a 380mm driver, nor can a 380mm driver be made to work as a tweeter - while both examples are extreme, I wouldn't be at all surprised if such claims are eventually made to tout the superiority of DSP systems.
Along similar lines, you must not accept that a 200mm mid-bass driver with a tweeter can be made to sound like a fully active 4-way system. As with the other examples, the laws of physics dictate what is achievable, not the DSP, not the loudspeaker manufacturer's marketing department and not the magazine reviewer's self proclaimed 'expert' opinion. This is not to say that (using good drivers and enclosure design) a 200mm driver with tweeter can't sound very good, but it remains a 200mm driver with tweeter (along with all the limitations of this arrangement), and cannot be made to sound like a larger system.
Having used a number of DSP based products, I can attest to how well they work, and the wonderful things you can do with them. The DEQX in particular is a spectacularly good product. It can actually make ordinary drivers almost sound good, but the key word there is 'almost'. Any deficiencies in the driver will remain, and any DSP can only ever do so much. The deficiencies may reveal themselves with increased distortion (especially intermodulation), beaming, cone breakup or poor transient response ... or a combination of any two or more of all the possible loudspeaker problems.
The DSP is a useful tool, and one that will become the standard in a few years. As performance improves, more things will be possible. However, modification of signals in the time domain by manipulation of the frequency domain will not become possible. Not even a DSP can break the laws of physics - despite the claims of hi-fi websites, salesmen, reviewers or other enthusiasts who may not fully understand what they are doing, or why.
Current trends in interior design and architecture don't help. While stark rooms with tiled or polished timber floors, masses of glass, brick walls, concrete ceilings and almost zero furnishings may look appealing to many, such a room is totally and absolutely incapable of being used for high quality audio reproduction. No amount of EQ will make any worthwhile difference, and even attempting it is futile. A room intended for quality audio reproduction needs to have a minimum of reflections, which means carpet, heavy curtains or drapes, soft furnishings, and absorbers/diffusers. Bookcases (full of books, not ornaments) make excellent diffusers.
Some rooms - especially where walls, floors and/or ceilings are concrete, brick or other non-absorbent material - will probably need absorbers, either as panels, wall hangings or resonators. Tuned resonators are sometimes used to reduce especially troublesome peaks. Speaker placement is also important, but no speaker can sound good in a bare room. Our hearing can only do so much, and the colouration added by excessive reverberation remains audible and severely reduces intelligibility.
Most of the things that make a good listening room go against modern trends - you should have heard the comments when I had the polished floorboards in my lounge room carpeted. These same things also have a generally poor SAF (spousal acceptance factor), unless one's spouse also shares a passion for music and appreciates good sound. Given that the loudspeakers, amplifiers (or equipment racks), subwoofers and collections of vinyl, CDs, DVDs etc. (not to mention cables, remotes and other paraphernalia) are less than handsome in the first place, the end result may oppose everything that an interior designer might want to do. (While I have heard crocodiles mentioned as a method of taming recalcitrant interior designers, such practices are not generally acceptable in society, so an alternative is suggested).
There are many sites on the Net that give a great deal of information on room treatment. This is a difficult subject at best, and requires a very good understanding of acoustic principles. While many people have no doubt had some success at DIY room treatment, this is not a topic I intend to cover.
Elliott Sound Products - Audio Designs With Opamps - 3
Copyright © 2006 - Rod Elliott (ESP)
Probably the best known common mode circuit is the single opamp balanced receiver circuit. While it has a number of perceived problems in real life, it is nonetheless a good place to start, and the problems with the circuit are normally not a limitation. The schematic is shown below, and there are two circuits shown. The first shows the circuit the way it is normally used, with the input (source) connected in differential mode to the opamp inputs. The second shows how the circuit connects to a noise input - the noise is coupled (ideally) equally to both inputs at once.
Figure 22 - Differential Input Opamp Circuit
While the wanted signal is passed directly through to the output (but is now a simple ground referenced signal at that point), any noise is presented to both inputs at equal level. This causes the cancellation of the noise, while allowing the signal to pass without alteration. This is shown in Figure 23, where we have a microphone with 10mV (differential) output, in the presence of a 100Hz 1V common mode noise signal. This is a ratio of 1:100 of signal to noise (or 100:1 noise to signal).
Although the results I obtained are simulated, the reality is not much different. At the output, the interference signal was measured at 27µV, while the signal level remains at 10mV. This means that the wanted signal is 51dB greater than the interference after the opamp, where the external noise level is 40dB greater than the signal at the input. The common mode rejection is therefore 91dB (1V common mode input, 27µV noise output). This is under ideal conditions, but in practice it is usually possible to get performance that exceeds the ability of the cable to maintain a perfect balance.
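The decibel figures in this paragraph follow directly from the quoted voltages - a quick check using only the numbers above:

```python
import math

def db(ratio):
    """Voltage ratio expressed in decibels."""
    return 20 * math.log10(ratio)

signal_in = 10e-3   # 10 mV differential microphone signal
noise_in  = 1.0     # 1 V common mode interference at the input
noise_out = 27e-6   # 27 uV residual interference at the output (simulated)

cmrr       = db(noise_in / noise_out)    # common mode rejection, ~91 dB
out_margin = db(signal_in / noise_out)   # signal above noise at the output, ~51 dB
in_deficit = db(noise_in / signal_in)    # noise above signal at the input, 40 dB

print(round(cmrr), round(out_margin), round(in_deficit))
```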
Figure 23 - Microphone Input Amplifier Circuit
Although there are a number of claimed problems with this circuit, in reality it works very well. There are better alternatives (especially for microphones), and these are well represented on the ESP site. The general principle of all balanced circuits remains the same as that shown above.
The biggest limitation of the Figure 23 circuit is that the input impedance for common mode (noise) signals is only equal if the signal applied to each input is equal. This is often claimed to cause major deficiencies in use, but in reality the common mode noise signal usually is very close to equal levels at each input, so the circuit works as described. The greatest limitation of this circuit for microphone use is opamp noise and reduced performance for high frequency common mode signals. For example, at 10kHz, the common mode rejection ratio is 6dB worse than at 100Hz.
Resistor tolerances, cable asymmetry and internal wiring will generally cause more error than the circuit limitations will impose. For optimum CMRR, the resistors should be 0.1% tolerance, and these may be selected from a batch of 1% components.
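To put the 0.1% recommendation in perspective, a widely used worst-case approximation (not from this article) for a single-opamp differential stage is CMRR ≈ (1 + G) / (4 × tolerance). A minimal sketch under that assumption:

```python
import math

def worst_case_cmrr_db(gain, tolerance):
    """Worst-case CMRR of a single-opamp differential stage limited only by
    resistor matching. Common rule-of-thumb: CMRR ~ (1 + G) / (4 * tol)."""
    return 20 * math.log10((1 + gain) / (4 * tolerance))

print(round(worst_case_cmrr_db(1, 0.01)))    # ~34 dB worst case with 1% resistors
print(round(worst_case_cmrr_db(1, 0.001)))   # ~54 dB worst case with 0.1% resistors
```

Selected (matched) resistors do much better than these worst-case figures, which is why picking 0.1% values from a 1% batch works well.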
Although these are discussed in depth elsewhere on the ESP site, a brief look at these essential circuits is worthwhile here. Since we have balanced input circuits, it makes sense to have a matching output circuit, allowing equipment to provide a balanced output to other (often remote) gear. The basic circuit shown below is the starting point, and is the basis of all other (often much more complex) circuits.
Figure 24 - Balanced Line Driver Circuit
The general idea is quite simple. A single-ended (unbalanced) signal is applied to the input, and it is buffered by the first opamp, and inverted by the second. You should recognise the inverting buffer from Part 1 of this series. The tolerance of the resistors around the inverting stage is again critical, and they should be as closely matched as possible to ensure that the two output signals are exactly the same, but with the signal from the second opamp inverted.
This is a true balanced output, but it has an implied reference to ground. It behaves like a transformer with a grounded centre tap. While it is possible to approximate a fully floating output, in general it is not necessary to do so. It is usually essential to place a small resistance (typically around 100 ohms) in series with each output to prevent oscillation - R4 and R5. The cable connected to the opamp output acts as an unterminated transmission line at high frequencies, and this can cause the opamps to become unstable because of the reactive load. By including a resistance, the opamp's output is isolated from the reactive load and stability is usually unaffected, regardless of load.
While there is a small error caused by operating two opamps in series (therefore adding the propagation delay of each opamp), the circuit still maintains extremely good balance well above the audio band. Operating the inverting and non-inverting buffers in parallel (the inputs of both joined together) gives a much lower input impedance, and improves performance so marginally that it's not worth doing (IMO).
Note that the information is (deliberately) simplified. I strongly suggest that you read the article Balanced Inputs & Outputs - The Things No-One Tells You, as this tells you the things that are usually avoided in most discussions and articles.
Summing amplifiers are based on the inverting buffer. Although we examined this earlier, one aspect of the circuit that was not covered at the time was just how it works. The inverting buffer is also called a virtual earth (ground) circuit, and is very common in mixing consoles and analogue computers of old. When it was described earlier, we worked in terms of voltage, but the virtual earth amplifier is really a current to voltage converter.
Figure 25 shows the general idea. If a voltage of 1V (AC or DC) is applied to the input, that will cause a current of 0.1mA to flow through R1. Remembering the first Rule of opamps, the opamp itself will attempt to maintain both inputs at the same voltage. Since the positive input is earthed (ground, zero volts), the negative input has to stay at the same potential to satisfy the first Rule. This means the output will have the same voltage as the input, but with the opposite polarity. This is necessary because R2 must also pass exactly 0.1mA to maintain the -ve input at zero volts.
Figure 25 - Summing Amplifiers
By changing the value of R2 (relative to R1), we can modify the gain, making the output voltage larger or smaller in absolute magnitude than the input. It's all done with current - no smoke or mirrors required. Now, if we add another input as shown in the diagram to the right, we can apply another signal, and the opamp will give us a result that is the sum of the two input currents (or voltages, if fixed value resistors are used).
As shown, if one input has an instantaneous voltage of 2V, and the other is -0.5V, the output voltage will be the (inverted) sum of the two - in this case -1.5V. If both inputs were the same voltage and polarity, they are simply added together. At the other extreme, two input voltages of equal magnitude but opposite polarity will result in an output voltage of zero.
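The current-summing behaviour described above can be modelled in a few lines - a minimal sketch of the ideal virtual earth summer, assuming equal 10k input and feedback resistors as in the text:

```python
def summing_output(inputs, r_in=10e3, r_fb=10e3):
    """Ideal virtual-earth summing amplifier: the output is minus the total
    input current multiplied by the feedback resistor."""
    total_current = sum(v / r_in for v in inputs)   # each input contributes V/R_in
    return -total_current * r_fb

print(summing_output([2.0, -0.5]))   # -1.5 (the inverted sum from the example)
print(summing_output([1.0, -1.0]))   # 0.0 (equal and opposite inputs cancel)
```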
There is a drawback to this circuit though, and it is important to understand what happens when you have a large number of inputs. Think back to the inverting amp as originally described. The voltage gain is described by the formula ...
    Av = -RFB / RIN     (the minus sign indicates inversion)
We may decide to use several (let's assume 10 for the moment) inputs, all 10k, and all being fed with voltages from different sources (think in terms of a multi-channel mixer). For each input individually, the voltage gain is as described - i.e. -1 (unity, but inverted). What does the opamp see though? The total value of RIN is 10 x 10k resistors in parallel ... 1k. The opamp therefore acts exactly as if it had a gain of 10, so input transistor noise is multiplied by 10, offset current is multiplied by 10, and bandwidth is reduced accordingly.
A small DC voltage shift caused by input offset current (the difference in current needed by each opamp input) is fine for audio. Any DC error is easily removed by adding a capacitor in series with the output. For instrumentation, the DC value may be critical, and this is why some opamps have 'offset null' pins. The designer can use a pot to adjust the offset to ensure that any DC error is 'nulled' out.
While 10 inputs is not going to cause a major problem in most cases, there is often a need for a great many more - a 32 channel mixer will need to be able to sum 32 channels, so the opamp will have a 'noise gain' of 32, even though the gain for each input individually is -1. Note that there is no polarity for noise gain - noise is random in nature, and not correlated to the input signal. Noise is noise.
The opamp also acts as if it were operating with a gain that is equal to the noise gain, even though each input individually has unity gain. As a result, the bandwidth may be (apparently) inexplicably limited, but by knowing the noise gain, we can treat the circuit as if it has a voltage gain that equals the noise gain - and indeed, this is exactly the case.
The inverting amplifier stage is actually noisier than a non-inverting stage with the same gain. For a non-inverting amplifier, the noise gain is equal to the voltage gain, but with an inverting stage, noise gain is equal to voltage gain + 1. When a large number of inputs is needed, a summing amplifier needs to have very low noise and wide bandwidth, or performance will not be as expected.
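The noise gain of a multi-input virtual earth mixer follows from the paragraphs above: all input resistors appear in parallel, and (as just noted for the inverting stage) a '+1' term is added. A minimal sketch, assuming equal 10k input and feedback resistors; the article's round figures of 10 and 32 omit the '+1':

```python
def noise_gain(n_inputs, r_in=10e3, r_fb=10e3):
    """Noise gain of an inverting summer with n equal inputs:
    1 + R_fb / (parallel combination of all input resistors)."""
    r_parallel = r_in / n_inputs    # n equal resistors in parallel
    return 1 + r_fb / r_parallel

print(noise_gain(10))   # 11.0 - a 10 input mixer (article rounds this to 10)
print(noise_gain(32))   # 33.0 - a 32 channel mixer (article rounds this to 32)
```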
Because filters are so important in audio, it is necessary to examine a few more variations. Two additional functions will be examined - integration and differentiation. Although Part II did not make it clear, these are the building blocks of some of the most interesting filters that can be made using opamps. A simple integrator (low pass filter) and differentiator (high pass filter) are shown in Figure 26. These are conceptual - the real world nature of opamps means that neither will work as a 'perfect' device, but must be limited to a defined frequency range. In reality, this is rarely an issue, since ideal versions of the circuits are not needed to cover the audio bandwidth.
Figure 26 - Ideal Integrator and Differentiator
The differentiator will actually work as shown (albeit with less than ideal performance), but the integrator will not. The reason is simple - it has no DC feedback path, and will drift towards one supply or the other (depending on the opamp characteristics). To combat this, a resistor is needed in parallel with C1 (R2, shown dotted), with a value sufficiently low to provide DC feedback, but not so low as to cause the lowest frequency of interest to be affected. As always, compromises are a part of life.
With an input waveform of a square wave as shown, an integrator provides an output that is dependent on the input current (via R1) and the value of C1. Over a useful frequency range, the output of an integrator is an almost perfect triangle waveform with a squarewave input. The DC stability resistor (R2) ultimately limits the lowest usable frequency. While there are methods to get around this limitation, this level of complexity is not normally needed for audio.
The differentiator as shown is also supplied with a squarewave. The output is now based entirely on the rate of change of the input signal, so a squarewave with relatively slow rise and fall times will give a lower output than another with very fast transitions.
The integrator is a low pass filter, with a theoretical 6dB/octave rolloff starting from DC (where it has infinite gain). The 6dB/octave rolloff continues for as far as the opamp characteristics will allow. At an infinite frequency, gain is zero.
A differentiator is the exact opposite of an integrator. It has zero gain at DC, and an infinite gain at an infinite frequency. The filter slope is again 6dB/octave. With ideal opamps, if the two circuits were connected together the output is a squarewave - identical in all respects to the applied signal. With the values shown, the unity gain frequency is 159Hz ( 1 / ( 2π × R1 × C1 ) ).
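The unity gain frequency quoted above comes from the standard RC expression. The actual R1 and C1 values aren't repeated in the text here, so 10k and 100nF are assumed (any pair with the same product gives the same answer):

```python
import math

def unity_gain_freq(r_ohms, c_farads):
    """Unity gain frequency of an integrator or differentiator: 1 / (2*pi*R*C)."""
    return 1 / (2 * math.pi * r_ohms * c_farads)

print(round(unity_gain_freq(10e3, 100e-9)))   # ~159 Hz, as quoted in the text
```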
To see integrators at work in a real circuit, you need look no further than the Project 48 subwoofer equaliser. Differentiators abound in all audio circuits, but in a rather crude form. Every capacitor that is used for signal coupling is a very basic differentiation circuit, in conjunction with the following load impedance. Note that the frequency where coupling circuits will actually function as a differentiator is well below the lowest frequency in audio.
A rather annoying problem exists with integrators. Opamps have input current (even those with FET inputs), and the circuit cannot tell the difference between an actual input current and that caused by the opamp's input stage. Left to their own devices, all integrators will suffer from drift, which is usually temperature dependent. This issue was 'solved' in Figure 26 by the addition of a resistor (shown with dashed connections). However, this is only a partial fix, and if you need a true integrator you'll need to apply a reset at regular intervals. In this case, the reset is achieved by discharging the capacitor. This can be a physical switch, a JFET used as a switch, or some other mechanism. It will usually be triggered by a level detector.
This means that the process of integration is often a very basic analogue to digital converter, with the frequency of reset pulses providing the information about the input current. High input current (or voltage if fed via a resistor) causes rapid resets, and low currents cause them to be less frequent. Opamp selection is critical, and there are highly specialised devices that are designed to provide usable performance. Without a reset, all integrators will eventually drift to one or the other supply voltage. This may take minutes, hours or days, depending on the opamp and capacitor. Even PCB leakage can cause issues when trying to create a long-term integrator operating from low input currents.
In contrast, differentiators are usually well behaved, provided the input rate-of-change (ΔV/Δt, or dV/dt) is within the bandwidth of the opamp. If you have a very fast voltage change, the opamp also needs to be very fast to keep pace with the signal. If the input changes faster than the opamp's reaction time, the output is inaccurate, and fails to show the rise/fall time of the input signal.
These functions are mentioned here only because a basic understanding of the principles is a part of understanding filters - most are based on one or the other function, and bandpass filters use both integration and differentiation. Integration in particular is used in the State-Variable filter (described next), and the problem of drift is solved by the nested feedback loops. This isn't normally applicable for 'true' (or 'stand-alone') integrator circuits.
The State Variable filter is probably one of the most interesting of all opamp filters. It uses nested feedback loops and a pair of integrators to define the filter Q, frequency and gain. Gain and Q are independently adjustable in the version shown - changing R1 will change the Q, but leave the overall gain unchanged. The gain from the bandpass output does change with the Q, something that must be taken into consideration for high Q implementations. A standardised version is shown in Figure 27, and with the values shown, passband gain is unity, Q is 0.707 and the centre frequency is 159Hz (set by R6, C1 and R7, C2). There is a complete article discussing this topology - see State Variable Filters.
Figure 27 - State Variable Filter
As shown, the filter outputs low pass, band pass and high pass responses simultaneously. R1 sets the filter Q and the amplitude of the bandpass output, and is selected as ...
    R1 = 4.98k (Butterworth - Q = 0.707)
    R1 = 11.2k (Linkwitz-Riley - Q = 0.5)
To calculate for a different Q, use the formula ...
    D = 3 × ( R1 / ( R1 + R5 ))     where 'D' is damping
    Q = 1 / D
The damping is set at 1.414 in Figure 27, giving a Q of 0.707, so the filter response is Butterworth (maximally flat amplitude). The frequency is changed by varying R6 and R7, or C1 and C2. A very common use for the state variable filter is in parametric equalisers, where a filter with variable frequency and Q is a requirement. The state variable filter allows the gain to be changed without affecting Q (and vice versa), so it is an ideal variable filter for audio use. Filter Q remains constant with frequency, so altering the frequency has no effect on the filter's overall (summed) response. Note that the high and low pass sections have opposite polarities, with the high pass shifted by -90° and low pass shifted +90° at the tuned frequency (fo). The bandpass output is in phase with the input at fo.
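The damping and Q formulas above can be checked numerically. R5 is not quoted in the text, so 5.6k is assumed here - that value makes both quoted R1 figures come out as stated:

```python
def damping(r1, r5):
    """Damping of the Figure 27 state variable filter: D = 3 * R1 / (R1 + R5)."""
    return 3 * r1 / (r1 + r5)

def q_from_damping(d):
    """Filter Q is the reciprocal of damping."""
    return 1 / d

R5 = 5.6e3   # assumed - not stated in the text above

print(round(q_from_damping(damping(4.98e3, R5)), 2))   # ~0.71 (Butterworth)
print(round(q_from_damping(damping(11.2e3, R5)), 2))   # 0.5 (Linkwitz-Riley)
```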
Figure 28 - State Variable Filter Frequency Response
The red trace is the band pass response, the blue trace is low pass, and green is the high pass. Remember that all of these are produced simultaneously, making the state variable filter one of the most interesting and versatile filters available. The standard state variable filter is second order, having rolloff slopes of 12dB/octave. It is also possible to make these filters with third order (18dB/octave) or fourth order (24dB/octave) response, but that is beyond the scope of this article. A first order state variable filter is also possible, but as far as I'm aware I have the only published version of this circuit (see State Variable Filters). The only other version I know of is from THAT Corporation, who published a voltage controlled version - VCA-Controlled 1st Order State Variable Filter, using the THAT2180 VCA (voltage controlled amplifier).
A variation of the state variable is called a Bi-Quad (aka biquad or bi-quadratic). The difference between the two is subtle. With a bi-quad, as frequency changes, the bandwidth remains constant, which means that the Q must change: Q increases as frequency increases, and vice versa. Outputs are low pass and bandpass (no high pass), but the Q can be higher than available with the state-variable. It's not as useful as the state-variable filter and is not shown here.
The bandpass version of this class of filter has its own page on the ESP site - see Multiple Feedback Bandpass Filter for more information about this category. There are also low and high pass versions of the MFB filter, and these will be covered briefly here. The chief advantage of the MFB filter is that the demand on the opamp's gain bandwidth product (GBP) is relaxed somewhat compared to the Sallen-Key topology. This is not normally a problem at audio frequencies and for audio applications, because very high Q values are rarely used.
Where the Sallen-Key filter requires a minimum open loop gain of 90Q² at the filter frequency, the MFB version requires only 20Q². To put this into perspective, a Sallen-Key filter with a Q of 0.707 at 20kHz requires an opamp open loop gain of at least 75 at 20kHz, while the equivalent MFB filter only needs a gain of 17 (close enough). While these figures are easy to obtain at low Q values, they become difficult or impossible if a high Q filter is needed.
Figure 29 - Low-Pass and High-Pass Multiple Feedback Filters
The first thing to notice with these filters - the resistor (and to a lesser extent the capacitor) values are decidedly non-standard. The design formulae are also rather complex, and I eventually settled on the values shown based on a simulation. The calculations are tedious, and will invariably yield non-standard values. Simple parallel or series connections often cannot be used to get the values you need (based on others in the circuit).
There are several articles on the Web covering the low pass multiple feedback filter, but few that cover the high pass version. As a result, I didn't even try to calculate the values, but figured them out based on the low pass version. As shown, the low pass filter has a cutoff frequency of 503Hz, and the high pass filter has a cutoff frequency of 538Hz. Both have a Q of 0.707 (Butterworth response) and unity gain in the passband.
However, the high pass multiple feedback filter has a fatal flaw, and it's very hard to recommend it for anything. The input impedance is capacitive, which may cause many source opamps to oscillate. In addition, the capacitive load on the inverting input can give rise to instability of the filter opamp. One solution is to add resistance in series with C1 and C2, but this needlessly increases the component count for a circuit that's already only marginally useful. See Active Filters, Section 4 for a more complete description of the issues created by the high pass MFB filter.
As filters, they function exactly the same as any other topology with the same cutoff frequency and Q, but as noted above are less demanding of the opamp performance (low pass only!). Whether this ever becomes a problem for most audio frequency circuits is debatable. Note too that they are inverting, so if absolute phase is your goal, you will need to re-invert the outputs.
There is an almost infinite range of filter types; some are useful, others less so. An interesting idea was described on an online design website. The article, entitled "Bandpass filter features adjustable Q and constant maximum gain", showed an active notch filter followed by a difference amplifier (balanced input stage). It was originally published on the EDN website, but that link is now broken (there is now a PDF version here). The circuit detail is reproduced below.
Figure 30 - Notch Filter Followed By Difference Amp
Note that all resistors and capacitors with the same designation are the same value. The 'BP' output is bandpass, and the 'BR' output is band reject (notch).
The notch filter is formed by R1, R2, C1 and C2, where R2 = R1 / 2 and C2 = C1 × 2. The output is buffered by U1, and U2 provides a low impedance feedback path to the notch filter, based on the level set by VR1. U3 is a conventional balanced amplifier, in this case being used as a difference amp. By subtracting the notch signal from the input signal, the result is a bandpass response. The output of the filter can never reach the same Q as the notch however, because phase shift reduces the attenuation of out of band signals.
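Working backwards from a target notch frequency is straightforward once the R2 = R1/2 and C2 = 2 × C1 ratios are fixed, because the null then falls at 1/(2πR1C1). A small sketch, assuming a chosen standard capacitor value for C1; the 1kHz/10nF figures are illustrative only:

```python
from math import pi

def twin_tee_values(f_notch, c1):
    """Component values for a twin-tee notch using the ratios quoted in
    the text (R2 = R1/2, C2 = 2*C1). With those ratios, the notch falls
    at 1/(2*pi*R1*C1)."""
    r1 = 1.0 / (2.0 * pi * f_notch * c1)
    return {'R1': r1, 'R2': r1 / 2.0, 'C1': c1, 'C2': 2.0 * c1}

# A 1kHz notch with C1 = 10nF needs R1 of roughly 15.9k (and R2 of half that)
parts = twin_tee_values(1000.0, 10e-9)
```

As noted earlier in the article, the depth of the notch depends critically on how closely the 2:1 ratios are held, which is why twin-tee filters usually need selected or trimmed parts.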
Figure 31A - Bandpass Response
The response with the pot (VR1) set at three levels is shown. The red trace shows the response with VR1 set to 80%, green at 90% and blue at 98%. Useful values of Q (for this type of filter) are only available with close to maximum feedback, but the circuit works as described.
Figure 31B - Band Reject (Notch) Response
The notch filter response (shown in Figure 31B) is as one expects from the twin-tee circuit when feedback is applied. The notch itself is almost infinitely deep, and extremely narrow because of the feedback. The response shown was taken with VR1 set to 98% of its travel - almost maximum feedback applied. Applying more feedback is rather pointless, and the simulator decides that it cannot resolve the notch at all at higher pot settings. The extreme sharpness of notch (band stop) filters in general cannot easily be matched by band pass types. This is largely due to the laws of physics and the way phase shifted signals add together.
The above is another example of the huge range of possibilities for filters - there are obviously a great many more, but space (and usefulness for audio applications) precludes me from going any further in this direction. A web search will reveal many more, and there are also switched capacitor filters, digital filters and perhaps even others not yet invented.
Opamps have limited output current, usually only around 20mA or so. While this is usually more than enough for preamps and filters, a great deal more is needed to drive a loudspeaker. There are several power amplifiers that are configured in exactly the same way as an opamp, and these can be classified as 'power opamps'. While this is often a good way to get the extra current needed, in some cases it may not be considered appropriate or convenient.
Opamps also have a limited voltage swing, and operating them at the voltages typical for power amplifiers will cause failure. While there are already several opamp based amplifiers on the ESP site (as headphone amps, a small power amp, and two power opamps), the following design is different.
Figure 32 - Power Amplifier Concept
The circuit shown in Figure 32 is conceptual (that's why there are no component values). At one stage, this circuit was quite common, but it had some major high frequency issues, as well as DC stability problems and a host of other shortcomings. Although I do not recommend that anyone even attempt to build it, the circuit is nonetheless interesting. By using the opamp supply current to modulate the transistors' base current, the opamp could operate with voltages well above the normally allowed supply voltages, because the zener diodes reduced the voltage to within the allowable range. R9 was included because many years ago (when I actually contemplated using the design), it was found to work better with this in place.
Although my simulator has difficulties with the circuit, it is possible to get it to run with low output voltages. It shows low distortion, but I know from experience that this is dubious - measured (versus theoretical) distortion is higher than expected, and if I recall correctly, somewhat load dependent. Bias current stability can be very poor. This can lead to thermal runaway in the output stage, and it is very difficult to ensure good thermal stability without additional circuitry.
For other power amplifier designs that do work, just look through the ESP projects pages.
All in all, the specifications for even a fairly basic opamp can be daunting. There are so many terms used that it is difficult to understand what they all mean. The easiest are the absolute maximum values - these are simply a set of parameters that should never be exceeded. Supply voltage, input voltage, operating and storage temperatures are just a few of the figures quoted.
Let's have a look at the data for the TL072, chosen because it is still a good general purpose opamp ...
Abridged electrical characteristics, VCC = ±15V, TA = 25°C

Parameter | Test Conditions | Value | Units
---|---|---|---
VIO Input offset voltage | VO = 0, RS = 50Ω | 3 typ, 10 max | mV
αVIO Temp. coefficient of input offset voltage | VO = 0, RS = 50Ω | 18 typ | µV/°C
IIO Input offset current | VO = 0 | 5 typ, 100 max | pA
IB Input bias current | VO = 0 | 65 typ, 200 max | pA
VICR Input common mode voltage range | | ±11 min | V
VOM Maximum peak output voltage | RL = 10kΩ | ±12 min | V
AVD Large signal voltage amplification | VO = ±10V, RL ≥ 2kΩ | 25 min, 200 typ | V/mV
B1 Unity gain bandwidth | VO = 10V, RL ≥ 2kΩ | 3 typ | MHz
rIN Input resistance | | 10¹² | Ω
CMRR Common mode rejection ratio | VIC = VICRmin, VO = 0, RS = 50Ω | 86 typ | dB
kSVR Power supply rejection ratio | VIC = VICRmin, VO = 0, RS = 50Ω | 86 typ | dB
ICC Supply current (each amplifier) | VO = 0, no load | 1.4 typ | mA
VO1/VO2 Crosstalk attenuation | AVD = 100 | 120 typ | dB
SR Slew rate at unity gain | VI = 10V, CL = 100pF, RL = 2kΩ | 13 typ | V/µs
tr Rise time / overshoot factor | VI = 20mV, CL = 100pF, RL = 2kΩ | 0.1 typ (rise) | µs
Vn Equivalent input noise voltage | RS = 20Ω, f = 10Hz - 10kHz | 4 typ | µV
THD Total harmonic distortion | VI(rms) = 6V, AVD = 1, RL ≥ 2kΩ, RS ≤ 1kΩ, f = 1kHz | 0.003 typ | %
Well, it certainly does look rather daunting, so we'll look at each parameter in turn to see what it means. It must be understood that there are many different ways to specify an opamp, and the table above is intended to be representative only.
VIO Input Offset Voltage: This is a measure of the typical voltage difference that may exist between the inputs when the first Rule is applied. So, while an ideal opamp will try to make both inputs exactly the same voltage, in a real opamp it may differ by this amount. A TL072 connected as a unity gain buffer with zero input could have between 3 and 10mV between the two inputs (plus or minus). This is measured with zero volts at the output, and a source resistance of 50 ohms.
αVIO TempCo of Input Offset Voltage: All real devices are affected by temperature, so the input offset voltage may vary by the amount shown as the operating temperature changes. This figure can be ignored for most audio applications. It is important for instrumentation amplifiers, high gain DC amplifiers and other critical applications.
IIO Input Offset Current: The input current will change with temperature, and the two input devices may not match. Offset current is primarily caused by gate leakage. The value is very small for FET input opamps, having a maximum value of only 100pA for the TL072.
IB Input Bias Current: Even though the TL072 is a FET input opamp, it has some input current, primarily caused by gate leakage. The current is very small, ranging from 65pA to 200pA.
VICR Input Common Mode Voltage: This is often an important parameter, and is very much so with the TL07x series. If exceeded, the output state becomes undefined. In the case of the TL07x devices, there can be a change of state of the output voltage if the input common mode voltage is exceeded. A signal that should just clip has a sudden transition to the opposite polarity during the period where the common mode voltage is exceeded. A few other opamps have a similar problem, but most do not.
VOM Maximum Peak Output Voltage: Few opamps can swing their outputs to the supply rails, and the amount of current drawn from the output affects this further. This limitation is only apparent when using low impedance loads, although there is still some loss with no load at all. By using the highest (sensible) supply voltage, this is not normally a problem.
AVD Large Signal Voltage Amplification: This is usually the open loop (no feedback) condition, and is a measure of the maximum gain of the opamp for high level outputs. At a minimum of 25V / mV, this represents a gain of 25 / 0.001 = 25,000 (it is typically as high as 200,000 or 106dB at low frequencies). This is rarely a limiting value in any audio circuit.
B1 Unity Gain Bandwidth: The frequency at which the opamp's open loop gain falls to unity is the unity gain bandwidth. When feedback is applied, it is usually desirable to have at least 10 times the gain that you specify with the feedback components. This represents an error of ~10% at the upper frequency. Unity gain bandwidth limits the maximum gain you can use for a given upper frequency.
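The 10-times rule of thumb can be turned into a quick sizing check. The helper below is illustrative (the function name is mine, not from any datasheet); the 3MHz figure used in the example is the TL072's typical unity gain bandwidth:

```python
def max_closed_loop_gain(gbw_hz, f_max_hz, margin=10.0):
    """Rule-of-thumb limit on closed-loop gain: keep at least 'margin'
    times excess open-loop gain at the highest frequency of interest."""
    return gbw_hz / (f_max_hz * margin)

# With 3MHz GBW and a 20kHz upper limit, gains much above 15 start to erode accuracy
g_limit = max_closed_loop_gain(3e6, 20e3)
```

If more gain is needed than this allows, the usual fix is to split it across two cascaded stages rather than reach for an exotic opamp.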
rIN Input Resistance: The input voltage divided by the input current (Ohm's law). So with an input voltage of 10V and an input current of 65pA, the input resistance is 153GΩ. The exact derivation of the value claimed in the table is unclear, since the measurement conditions are unspecified. Many datasheets state the input resistance of the TL07x series to be 1TΩ (1,000GΩ).
CMRR Common Mode Rejection Ratio: A measure of how well the opamp rejects signals applied to both inputs simultaneously. See the description of the balanced input stage for more information. Common mode rejection depends on the available open loop gain, and deteriorates at higher frequencies. This is graphed in most data sheets.
kSVR Power Supply Rejection Ratio: All opamps can have some noise on the supply lines without serious degradation of the signal. PSRR is a measure of how well the opamp rejects (ignores) the supply noise or other unwanted signal(s) that may be carried by the supply buses.
ICC Supply Current (Each Amplifier): Needless to say, some current is drawn by all opamps. This is simply the typical current you expect to draw from the supply for each opamp in the package. At 1.4mA per opamp, a dual (TL072) will typically draw 2.8mA with ±15V supplies.
VO1/VO2 Crosstalk Attenuation: When there are two or more opamps in a package, it is inevitable that some signal will pass from one to the other. As specified, this will be -120dB if the amplifiers are operating with a gain of 100. In general, PCB layout and circuit wiring causes far more crosstalk than the IC itself.
SR Slew Rate At Unity Gain: The slew rate is simply how fast the output voltage can change. The specification says that this is measured with the opamp connected as a non-inverting unity gain buffer. In this case, the opamp is specified for a typical slew rate of 13V/µs, meaning that in one microsecond, the output voltage can change by 13V. There are faster and slower opamps of course, but for audio work it is actually difficult to exceed the slew rate of any but the slowest opamps.
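The slew rate sets the highest frequency at which a full-amplitude sinewave can be reproduced without distortion, via f = SR / (2π × Vpeak). A quick sketch using the TL072's typical 13V/µs figure (the function name is illustrative):

```python
from math import pi

def full_power_bandwidth(slew_rate_v_per_us, v_peak):
    """Highest frequency at which a sinewave of the given peak voltage
    can be reproduced without slew-rate limiting: f = SR / (2*pi*Vpeak)."""
    return (slew_rate_v_per_us * 1e6) / (2.0 * pi * v_peak)

# 13V/us driving a 10V peak sinewave is slew-limited only above roughly 207kHz
f_limit = full_power_bandwidth(13.0, 10.0)
```

This is why the text notes that audio work rarely exceeds the slew rate of any but the slowest opamps: even a full-level 10V peak signal is safe to roughly ten times the top of the audio band.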
tr Rise Time Overshoot Factor: When any electronic circuit is subjected to a very sudden change of input voltage, the opamp will often not be fast enough to maintain control via the feedback loop. Once control is lost, there is a finite time before the opamp 'catches up' to the input signal. This causes the output to overshoot the steady state level for a brief period, and the overshoot is measured as a percentage of the voltage change.
Vn Equivalent Input Noise Voltage: This is a theoretical noise voltage that 'lives' at the opamp's input. This noise is amplified by the gain that is set for the circuit. In this case, if the amp is designed to have a gain of 10, the output noise will be 40µV over the frequency range of 10Hz to 10kHz. Noise is also expressed as nV/√Hz (see separate article).
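The arithmetic above, plus the conversion from a spectral density to a band figure, can be sketched as follows. The function names are mine, and band_noise assumes flat (white) noise; real opamps add 1/f noise at the low end, so a wideband figure that includes low frequencies will be somewhat higher than this estimate:

```python
from math import sqrt

def output_noise(vn_input, gain):
    """Equivalent input noise referred to the output of a gain stage."""
    return vn_input * gain

def band_noise(density_v_rthz, f_low, f_high):
    """Total noise over a band from a spectral density (V/sqrt(Hz)),
    assuming the density is flat across the band."""
    return density_v_rthz * sqrt(f_high - f_low)

# The 4uV input figure with a gain of 10 gives 40uV at the output, as above
vo_noise = output_noise(4e-6, 10.0)
```

For example, an 18nV/√Hz density integrated from 100Hz to 10kHz gives roughly 1.8µV of band noise before any gain is applied.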
THD Total Harmonic Distortion: This is pretty much self explanatory. The test conditions are not always representative of real world applications, but in this case they appear to be reasonably sensible. The use of a unity gain amplifier is not the choice I'd make, but I didn't design the test specification.
The data above is based on National Semiconductor's data sheet and terminology. Other manufacturers may choose to use different terms for the parameters, use different test methods, or specify different operating conditions for the same test. The only way you will get to understand the terminology used is to read it - don't just look at it as gobbledygook and ignore it - you will never learn anything that way.
Comparators are so important in electronics that they have their own page on the ESP site. See Comparators, The Unsung Heroes Of Electronics for a lot more information on these essential building blocks. Note that comparators always follow my opamp Rule #2 (which states that 'the output will produce a voltage that has the same polarity as the most positive input'). If the inverting input is the most positive, the output will be at the negative supply voltage (which may be the same as earth/ground - zero volts).
The poor comparator is the (almost) forgotten first cousin of the opamp. Although comparators are very similar to opamps, their design is based on the fact that they will never have negative feedback applied (although positive feedback is not uncommon!). Consequently, there are no constraints set by the necessity for stability with gain reduction caused by applied feedback. Although opamps can be used as comparators in low frequency applications, they are totally unsuitable at high frequencies. This is because of the frequency compensation applied in opamps, necessary to maintain closed loop (negative feedback) stability.
Because this restriction is removed for true comparators, they can be much faster than opamps, although it can still sometimes be a challenge to find a comparator that is fast enough if the operating frequency is high. Class-D amplifiers rely on a fast comparator to convert the analogue input signal into a pulse width modulated switching signal, and they need to be very fast indeed if timing errors are to be avoided.
Figure 33 - PWM Comparator
Figure 33 shows the general idea for a pulse width modulator. The input sinewave is compared to the reference signal - usually a very linear sawtooth waveform. Since typical PWM amplifiers operate at a switching frequency of 250kHz or more, even a few nanoseconds of switching delay becomes significant, but there are other applications where even faster operation is desirable.
Figure 34 - PWM Comparator Waveforms
The PWM waveforms are shown above. A perfect example of the speed limitation is visible as the input signal approaches the peak value of the reference signal. The comparator is simply not fast enough to switch from one extreme to the other, resulting in the loss of pulses. This is a real phenomenon, and occurs with all PWM amplifiers as they approach clipping.
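The comparison process can be sketched numerically. This toy model (the names are mine, not from any real PWM controller) shows how the duty cycle tracks the input, and why inputs that reach the extremes of the reference ramp pin the output and drop pulses, as described above:

```python
def pwm_duty(v_in, ramp_min=-1.0, ramp_max=1.0):
    """Duty cycle produced by comparing an input voltage against an ideal
    linear sawtooth spanning ramp_min..ramp_max. Inputs beyond the ramp
    pin the output high or low - the region where real comparators (and
    real PWM amplifiers) start losing pulses near clipping."""
    duty = (v_in - ramp_min) / (ramp_max - ramp_min)
    return min(max(duty, 0.0), 1.0)

# A mid-ramp (0V) input gives 50% duty; inputs at the ramp extremes pin the output
duties = [pwm_duty(v) for v in (-1.0, 0.0, 0.5, 1.0)]
```

An ideal model like this switches instantly; the lost pulses seen in Figure 34 come from the real comparator's finite switching time, which the model deliberately omits.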
Achieving very high speed is a compromise, because high speed means that all circuit impedances must be low, thus increasing supply current. This means that the parts will get hot, unless the supply voltage is limited. For exactly the same reasons, microprocessor ICs are now using 3.3V instead of 5V, and high speed digital logic chips are all limited to 5V supplies. We can expect to see the voltage decrease even further as the speed of digital systems increases.
Needless to say, PWM amps are not the only place comparators are used in audio. LED meter ICs have a string of comparators, they may be used for basic (or precision) timing applications, clipping detectors, etc. Most analogue to digital converters use comparators as well. While you don't see them much in linear audio circuits, they are very common in industrial process control systems, and they are a very important 'building block' for many circuits.
Because they have no negative feedback, comparators always obey the second rule of opamps ... the output takes the polarity of the more positive input. Look carefully at the PWM waveform shown above, and you will see exactly that. It is not immediately apparent, but it is visible.
Figure 35 - Comparator Circuits
Comparators can be absolute, meaning that the output will change state whenever the signal passes the threshold. While this is needed for analogue to digital conversion and many other applications, it is often preferable to arrange the circuit so it has 'hysteresis'. This is a rather odd concept, but means that once the signal has caused an output change, it needs to change further (in the opposite direction) to cause a reversal. For example, a comparator may change its output state (e.g. from low to high) when the voltage reaches 5V, but it may have to fall to 4.5V before it changes state again (from high back to low). Hysteresis can be likened to the snap-action of most switches, and indeed, these use mechanical hysteresis.
Both circuits are shown above, and the waveforms are shown below. Note that both comparators are shown as inverting, because this connection provides the highest input impedance when hysteresis is added. If a non-inverting connection is used, the input would be applied to the +ve input (via R2 for the version with hysteresis), and the -ve input connected to the reference voltage - in both cases this is ground (zero volts). In a non-inverting connection, the positive feedback will create distortion on the input waveform - this can be a problem if the signal is intended to be used as an analogue waveform elsewhere in the circuit.
The polarity of any comparator can be reversed with a digital logic inverter, another comparator, or the input can be buffered to prevent positive feedback artifacts from being added to the signal. As always, there are many, many ways to achieve the same result, and the final circuit depends on your specific needs.
Figure 36 - Comparator Waveform Without Hysteresis
As you can see from the above diagram, if a noisy signal is applied to the input of an absolute (no hysteresis) comparator, the output shows multiple state changes as the input signal passes through zero. This happens because the noise amplitude is enough to cause the instantaneous input amplitude to pass through zero multiple times at each zero crossing of the input signal. This is usually undesirable, because comparators are commonly used to convert input waveforms into a digital representation, based on zero crossings of the input. Multiple triggerings as shown will cause an erroneous output. However, hysteresis is generally not used when converting a waveform to PWM, because the hysteresis may cause unacceptable distortion in the demodulated output waveform.
Figure 37 - Comparator Waveform With Hysteresis
Figure 37 shows the input and output waveforms of the comparator with hysteresis. The noise is ignored, because once the comparator changes state, the signal must go negative by more than the hysteresis voltage before it will change state again. The amount of hysteresis is determined by R2 and R3, and may be calculated to give a specific (and exact) level before the output will change state. This circuit is also known as a Schmitt trigger (often misspelled 'Schmidt'), and is a very common circuit. It is used in the PCB version of P39 (power transformer soft-start) to ensure accurate timing without any possibility of relay chatter as the timing voltage reaches the trigger point.
Calculating the trigger voltages for the inverting case is easy (see below), but it is somewhat more irksome for the non-inverting configuration, because the input and output voltages interact, even when the signal source has very low impedance. Only the inverting version is described here. R2 and R3 form a simple voltage divider. When the output is high (+3.5V), the voltage at the +ve input is ...
Vd = ( R3 / R2 ) + 1 = ( 4.7k / 1k ) + 1 = 5.7   (where Vd is the voltage division ratio)
Vin = Vout / Vd = 3.5 / 5.7 = 614mV
The signal therefore must exceed +614mV before the output will swing negative. When it does, the input then has to exceed (become more negative than) -614mV before the output will change state again, because the circuit is symmetrical. Any signal that does not reach the ±614mV thresholds will not cause the output to change state - such signals are completely ignored.
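The worked example above can be wrapped into a small helper (an illustrative sketch, using the Figure 35 component designators):

```python
def schmitt_thresholds(v_out, r2, r3):
    """Symmetrical trip points of an inverting Schmitt trigger, where the
    divider R2/R3 feeds a fraction of the output swing back to the +ve
    input, exactly as in the worked example: Vd = (R3 / R2) + 1."""
    vd = (r3 / r2) + 1.0
    vt = v_out / vd
    return vt, -vt   # upper and lower trip points, symmetrical about zero

# A 3.5V output swing with R2 = 1k and R3 = 4.7k trips at about +/-614mV
upper, lower = schmitt_thresholds(3.5, 1e3, 4.7e3)
```

Increasing R3 (or reducing R2) reduces the hysteresis, making the circuit behave more like an absolute comparator; the trade-off is reduced noise immunity, as Figure 36 shows.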
It is also possible (and not uncommon) to make the trigger thresholds asymmetrical, and I shall leave it as an exercise for the reader to work out how this can be done (hint: diodes are commonly used to do just this).
This is but a small sample of comparator applications. They may well be the almost forgotten first cousin of opamps for the beginner or novice, but are extensively used in all kinds of circuitry - not necessarily audio, but they are very common there too. Most comparator applications are easy enough to understand once you have the basics, and this section is intended to provide just that - the basics.
Although the three articles in this series have only scratched the surface, hopefully you will have sufficient information as to how opamps work to be able to analyse any new circuit you come across. There is no doubt that there will be some applications that will cause pain, and it is completely impossible to make it otherwise.
Even though there are three fairly large pages devoted to the topic here, there are countless other applications for opamps - not only in audio, but in instrumentation, medical applications, and any number of industrial processing systems. There is almost no analogue application these days that does not use opamps, although in many cases you may not be aware they are there. Opamps and/or comparators are embedded in many other devices, from analogue to digital converters (ADCs) and digital to analogue converters (DACs), digital signal processing (DSP) ICs, switchmode power supply controllers - the list is endless.
I have used various references while compiling this article, with most coming from my own accumulated knowledge. Some of this accumulated knowledge is directly due to the following publications:
National Semiconductor Linear Applications (I and II), published by National Semiconductor
National Semiconductor Audio Handbook, published by National Semiconductor
IC Op-Amp Cookbook - Walter G Jung (1974), published by Howard W Sams & Co., Inc. ISBN 0-672-20969-1
Active Filter Cookbook - Don Lancaster (1979), published by Howard W Sams & Co., Inc. ISBN 0-672-21168-8
Data sheets from National Semiconductor, Texas Instruments, Burr-Brown, Analog Devices, Philips and many others
The TL07x data sheets from National Semiconductor were extensively referenced in the basic specifications section
Cirrus Logic, Application Note 48
Bandpass filter features adjustable Q and constant maximum gain - EDN (Local Document)

Recommended Reading

Opamps For Everyone - by Ron Mancini, Editor in Chief, Texas Instruments, Sep 2001
Elliott Sound Products | Electronic Fuses
Fuses have been used to protect electrical and electronic circuits since the very beginning of electrical equipment. Mostly they do a pretty good job, but they are rarely fast enough to protect electronic parts such as transistors or MOSFETs from a serious overload. A semiconductor will almost always fail well before the fuse has had time to act - pretty much a perfect example of Murphy's law in action. There's a great deal of useful information in the article How to Apply Circuit Protective Devices. This article was contributed back in 2009, and shows most of the things you need to know about fuses and miniature circuit breakers.
Circuit breakers are (sometimes) an improvement over a simple fuse, but usually only if they have a magnetic trip mechanism that acts very quickly in the presence of a severe overload. All protective systems introduce some loss in the circuit, and the resistances of a variety of fuses are shown in the article and in the following section. This includes the resistance when cold (25°C) and at rated current. Fuses below 3.15A dissipate about 1.6W at full rated current, and higher rated fuses dissipate up to 2.5W. This means that they run fairly hot if used at their design current rating continuously, but this is rarely the case in most circuits.
Many circuits draw significant 'inrush' current (when power is first applied), as transformers settle down to steady state conditions and/ or as filter capacitors charge to their operating voltage. Because of this, it's often necessary to use a 'slow blow' or delay fuse, that is designed to handle a much higher than normal current for a short period. Like all fuses, if the current is only fractionally above the fuse rating, it can take a long time before it 'blows' by going open circuit. It's unrealistic to expect a wire fuse to fail if the current is (say) only 1.1 times the rated value (1.1A for a 1A fuse). In general, expect any fuse to fail within around 1-2 minutes with a current of 1.5 times the rated value.
This is quite alright for equipment with a high thermal mass, such as a transformer or motor, but it's inadequate to protect a transistor that's already at its peak operating power (so the die temperature is at or near the maximum allowable). As a result, most electronic equipment that uses fuse 'protection' is not actually protected at all. The fuse(s) only ensure that a serious fault condition won't cause additional serious damage, including the possibility of fire.
Enter the electronic fuse (or e-fuse). These can be set up to trip at a very specific current, and if it's exceeded (even by a few milliamps) the e-fuse will disconnect the load. Ideally, it will remain 'open' even if power is disconnected and re-applied, but that requires battery backup, which is very uncommon. Electronic fuse ICs are available (often with a host of additional features), but most are only available in SMD (surface mount device) packages.
The idea of this article is to show a few options that can be used at (almost) any voltage and current, and with a fairly well defined trip current. Some are much better than others in this respect, and using a resistive 'current detector' is more reliable than utilising the RDS-on of a MOSFET. The latter approach is commonly used in dedicated 'e-fuse' ICs, but these include temperature compensation to ensure predictable results.
Despite the very accurate detection thresholds available with electronic fuses, the 'traditional' wire fuse is far from being 'dead' technology. They are still the most cost-effective option where you need to manage the risk of fire (or further destruction of electronics), but their limitations have to be understood. This is why projects such as ESP's Project 33 Loudspeaker Protection circuit exist. While a pair of e-fuses could easily detect a fault condition and disconnect the load if the current exceeds a preset limit, it's not a simple alternative.
The circuits shown here are intended to provide examples, and are not construction projects. There are countless other circuits (including many specialised ICs) that can be used, but not all of them are useful for DIY (some are not useful at all IMO). An e-fuse that requires you to press a button to turn the circuit on is not helpful, and doubly so if it has no turn-off mechanism. There is one very common circuit (it's all over the Net) that uses this, and it has been deliberately left out because it's not a good idea (and many people who built it found it doesn't work). An e-fuse should be active when the power is turned on, and if tripped it must remain so until a power-on reset. If the fault still exists, it will trip again.
Electronic fuses can be a viable alternative to a thermal-magnetic circuit breaker. They are a great deal faster, and can be set for an accurate upper limit that may be well below anything offered by circuit breakers. For example, if your circuit needs 100mA, but even 120mA indicates a fault, an electronic fuse (set for 110mA for example) is the only option. No wire fuse or circuit breaker offers the same level of precision and speed.
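A shunt-based design reduces to choosing a sense resistor that develops the detector's threshold voltage at the desired trip current. A minimal sketch, assuming an illustrative 50mV threshold (commercial e-fuse ICs use various thresholds; this value is not from any particular device):

```python
def shunt_for_trip(trip_current_a, threshold_v=0.05):
    """Sense resistor for a shunt-based e-fuse: the shunt develops the
    comparator's threshold voltage at the trip current. The 50mV default
    keeps dissipation low while staying well above typical comparator
    offset voltages; it is an illustrative choice only."""
    r_sense = threshold_v / trip_current_a
    p_at_trip = threshold_v * trip_current_a   # dissipation in the shunt at trip
    return r_sense, p_at_trip

# A 110mA trip with a 50mV threshold needs roughly 0.45 ohm, dissipating ~5.5mW
r_sense, p_sense = shunt_for_trip(0.110)
```

The low dissipation is the point of a small threshold: unlike a wire fuse, the sense element barely heats, so the trip point doesn't drift with load current.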
Because they are so widely used (and the table will be referred to several times in this article), I've included the table shown in the How to Apply Circuit Protective Devices article. This is a useful reference, and some of the values can be directly transferred to an 'equivalent' electronic fuse.
Rated Current (A) | Interrupt Current (Max) | Resistance @ 0A | Resistance @ Rated Current | Voltage Drop @ Rated Current | Power Dissipation @ 150% Current (W)
---|---|---|---|---|---
0.315 | 35A @ 250Vac | 880 mΩ | 4.13 Ω | 1.300 V | 1.6
0.4 | 35A @ 250Vac | 277 mΩ | 3.00 Ω | 1.200 V | 1.6
0.5 | 35A @ 250Vac | 206.5 mΩ | 2.00 Ω | 1.000 V | 1.6
0.63 | 35A @ 250Vac | 190 mΩ | 1.03 Ω | 650 mV | 1.6
0.8 | 35A @ 250Vac | 120.3 mΩ | 300 mΩ | 240 mV | 1.6
1.0 | 35A @ 250Vac | 96.4 mΩ | 200 mΩ | 200 mV | 1.6
1.25 | 35A @ 250Vac | 70.1 mΩ | 160 mΩ | 200 mV | 1.6
1.6 | 35A @ 250Vac | 52.8 mΩ | 119 mΩ | 190 mV | 1.6
2.0 | 35A @ 250Vac | 41.6 mΩ | 89.5 mΩ | 170 mV | 1.6
2.5 | 35A @ 250Vac | 33.4 mΩ | 68.0 mΩ | 170 mV | 1.6
3.15 | 35A @ 250Vac | 22.4 mΩ | 47.6 mΩ | 150 mV | 2.5
4.0 | 40A @ 250Vac | 16.5 mΩ | 32.5 mΩ | 130 mV | 2.5
5.0 | 50A @ 250Vac | 13.7 mΩ | 26.0 mΩ | 130 mV | 2.5
6.3 | 63A @ 250Vac | 9.5 mΩ | 20.6 mΩ | 130 mV | 2.5
The table shown above is adapted from a Littelfuse datasheet (Axial Lead & Cartridge Fuses 5 × 20 mm > Fast-Acting > 217 Series) for fast-blow glass fuses. I've shown the values that are most likely to be used in typical electronic projects, but the complete table has a lot more information and covers fuses from 32mA to 15A. I added the column that shows resistance at maximum current (copper wire is assumed), and it works out that the fuse wire temperature is around 300°C at full rated current.
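The ~300°C figure can be cross-checked from the table itself, using copper's temperature coefficient of resistance (~0.00393/°C) and the ratio of hot to cold resistance. This is only a rough sketch under that assumption; the function name and the 20°C reference temperature are mine.

```python
# Rough cross-check of the fuse wire temperature claim, assuming the
# element behaves like copper (tempco ~0.00393/degC, referenced to 20degC).

ALPHA = 0.00393          # copper temperature coefficient, per degC (assumed)
T_AMBIENT = 20.0         # degC, assumed reference temperature

def wire_temp(r_cold, r_hot):
    """Estimate wire temperature from the cold/hot resistance ratio."""
    return T_AMBIENT + (r_hot / r_cold - 1.0) / ALPHA

# 1A fuse row from Table 1: 96.4 mOhm cold, 200 mOhm at rated current
print(f"{wire_temp(0.0964, 0.200):.0f} degC")   # ~293 degC, consistent with ~300 degC
```

The same sum applied to other rows gives broadly similar results, which is why a single 'around 300°C' figure is reasonable.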
While there are many different topologies, the basic principles are usually very similar. We need a way to detect the actual current flow, and if it exceeds a preset threshold, the load should be disconnected with as close to zero delay as possible. Additional circuitry can be included to allow very brief excursions beyond the preset value (analogous to a delay (slow blow) fuse), or in some cases, the fuse is designed to limit the current to a preset maximum for a few milliseconds. If the over-current condition is maintained for longer than the programmed maximum, the fuse then disconnects the load from the power source. This is a feature of some e-fuse ICs.
For the most part, this article will concentrate on discrete circuitry rather than 'COTS' (commercial off-the-shelf) products. This is mainly because the internal circuitry of commercial ICs is very complex, and most are not designed for high voltages, although there are exceptions. Because these ICs are virtually all SMD parts, they are difficult to experiment with - breadboards and 'lash-up' circuitry are usually not possible without a PCB designed for the specific device. Some even use LCC packages (leadless chip carrier), and they don't play well with any experimenter construction methods.
The most obvious requirement is a way to monitor the current. While a resistor seems like a bad idea, remember that fuses have resistance too - if it were otherwise the wire wouldn't be able to get hot and melt! (See Table 1.) The resistance should be as low as possible, but there are limits - if it's too low, the voltage across the resistor will be too low to be useful for anything. Likewise, if it's too high, the voltage drop may be excessive. If we aim for a voltage drop of 100mV at rated current, that won't upset most circuits and it's enough to be able to get reliable detection. This is actually less than most wire fuses, so it's not a bad compromise.
Once the voltage across the detector (current transformer, Rogowski coil (not covered here), resistor or MOSFET channel resistance for example) exceeds the predetermined value, the circuit must disconnect the load, and just as importantly, not re-connect it when the load current falls to zero. This happens when the load is disconnected, so a latch is required to keep the circuit turned off. Unfortunately (and unlike a traditional wire fuse), the circuit will re-connect the load if power is cycled (turn it off and back on again). If the fault still exists, it will turn off again almost instantly, but doing this may cause further damage. Unlike a wire fuse, the user cannot substitute one with a higher rating, so that affords a safeguard not otherwise available. It's not especially difficult to get a turn-off time of under 10µs with only a 10% overload, something that cannot be matched by any wire fuse. However, this does come with some caveats! All circuitry takes some time for conditions to stabilise, and in some cases the circuit may not be able to turn off at all if the fault occurs before the circuit is ready.
There are countless different ways to configure an e-fuse. There are many circuits shown on-line, and (as always) some are good, and some are completely useless. Unless you build (or simulate) the results, it's very difficult to know if a particular circuit will work or not. There are many interdependencies in all electronic circuits, and if something goes wrong it can do so with undesirable consequences (note careful use of understatement!). If your circuit relies only on an electronic fuse with no backup, you can make matters far worse than they would be if you just stayed with a wire fuse in the first place.
An e-fuse is not the same as a current limiter. Some e-fuse ICs combine the two functions, and a few circuits that claim to be an e-fuse are current limiters, not fuses. A current limiter is a very different application, and while it may save some electronics from short-term problems, the current limiting circuitry is often subjected to very high dissipation if the load develops a short circuit. Current limiting isn't covered here, because it's not equivalent to a fuse (electronic or otherwise).
A resistor is simple, but you do need to ensure that it's sized appropriately, both in value and power dissipation. Ideally, the resistor will have a 'burden' (voltage drop) of no more than 100mV, but this too depends on the application. A 100mΩ (0.1 ohm) resistor drops 100mV with a current of 1A, and dissipates 100mW. This is less than a 1A wire fuse at full current (200mV burden, 200mW dissipation). A resistor is suitable for AC or DC, but with AC there's a need for full-wave rectification, which is a problem at very low voltages. It also makes the circuitry more complex.
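The resistor sizing is just Ohm's law, and can be sketched in a few lines (the function name is mine, and the 100mV burden is simply the target suggested above):

```python
# Illustrative sketch: size a current-sense resistor for a given trip
# current and a target 'burden' (voltage drop) of ~100mV at rated current.

def sense_resistor(trip_current_a, burden_v=0.1):
    """Return (resistance in ohms, dissipation in watts) at rated current."""
    r = burden_v / trip_current_a    # R = V / I
    p = burden_v * trip_current_a    # P = V * I
    return r, p

# The 1A example from the text: 100 mOhm drops 100mV and dissipates 100mW
r, p = sense_resistor(1.0)
print(f"R = {r*1000:.0f} mOhm, P = {p*1000:.0f} mW")   # R = 100 mOhm, P = 100 mW

# For the 5A examples used later in the article
r, p = sense_resistor(5.0)
print(f"R = {r*1000:.0f} mOhm, P = {p*1000:.0f} mW")   # R = 20 mOhm, P = 500 mW
```

Note that both resistance and dissipation scale with the trip current, so high-current designs benefit from an even lower burden if the detection circuitry can tolerate it.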
A better option for AC electronic fuses is a current transformer (see Transformers, Section 17 for information on these). Because the output impedance of a current transformer is very high, rectification can be done with four low current diodes (e.g. 1N4148) with very little loss of accuracy, making the rest of the circuitry much simpler. Disconnecting AC loads is more problematical, and ideally you'd want to disconnect at the instant an overload condition occurs. If the mains switch is a TRIAC, it won't disconnect until the mains waveform passes through zero, which may allow a half-cycle to exceed the limit by a great deal. Very short duration high current pulses are permissible, but if beyond the limits of the TRIAC used it will fail - short circuit!
For example, a BT139F-600 TRIAC has a steady state current capability of 16A RMS, and can handle up to 145A for one cycle at 50Hz. You can (usually) expect a dead short across the mains to be within that limit, but that depends on the circuitry, the impedance of the mains wiring, etc. If you expect the continuous (or peak) current to be higher, you'll need a higher current TRIAC (e.g. BTA25, 25A, 600V, 250A peak). A MOSFET or hybrid (MOSFET plus electromechanical) relay can also be used, but that's a more expensive option. Most AC circuits are protected by circuit breakers (ideally thermal-magnetic types), and an e-fuse would only be used for particularly sensitive circuits (but you still need a fuse or circuit breaker in case of internal failure).
An alternative detection scheme is to use a Hall effect sensor. These come in two versions, with the most common being sensitive to a magnetic field that's perpendicular to the plane of the IC body. There are also 'planar' versions, which can simply be placed above a PCB track, and these are sensitive to a magnetic field that's parallel to the IC body. Most are designed for high current, although 'conventional' sensors with a magnetic circuit (iron, ferrite, etc.) can be used to detect low currents accurately. An example is shown in Project 139 (Mains Current Monitor). These sensors can be used with AC or DC, unlike a current transformer that only works with AC.
Although these sensors are a great option for very high currents or where little or no voltage drop across the fuse is permissible, it's not an option that will be covered here. This is because the sensors are generally fairly specialised, and some are too expensive. In addition, there is still a requirement for a switching circuit, which will nearly always impose a small voltage drop. The exception is a relay, but that makes the system electro-mechanical, which barely qualifies it for the name 'electronic fuse'. In such cases, a thermal-magnetic circuit breaker may be a better option.
As stated above, a resistor is a reliable and versatile detector. However, there are some methods that may come as a surprise, such as the reed switch shown below. I selected a reed switch at random from a box of them I acquired for next to nothing, and wound 10 turns of telephone/ bell wire around the middle. It trips at almost exactly 2A, and the result is 100% repeatable. This has the advantage that there is very little resistance in the circuit (heavier gauge wire would be used for higher current), but there is a small delay because it's a mechanical contact. Since (at least with the switch I tested) it requires 20 ampere-turns (2A, 10 turns) to operate, it can be configured for almost any current you like. Anything over 20A would be a challenge though, as that implies less than one turn. Positioning the coil along the body of the switch will provide some control over the trip current.
The advantage of this technique is that the electronic fuse circuitry is isolated from the voltage source being monitored, although we still have to provide a mechanism to turn off the supply. Once disconnected, it also has to remain off until the circuit is reset or the supply is cycled off and on again. Supply interruption methods are described further below. This information is provided more for interest's sake than any suggestion that a reed switch is the ideal sensing method. As always, it depends on the application, and the optimum detection method varies accordingly. I've only seen one circuit that used a reed relay during my search, but it's completely different from the circuit I've shown.
You will find very little anywhere about using a reed switch as a current sensor, but they are quite precise and very fast. The sensitivity is adjusted by the number of turns and the position of the coil along the length of the reed switch glass body. In general, this approach is ideal for moderate current, as anything over 5A starts to get tricky because of the low turns count. With those I've tried, 5A operation is reliable with four turns (20A/T), and adjustment (e.g. for 6A) is obtained by moving the coil along the body so it's not centred over the contacts. The coil should be fixed in place with tape or a suitable adhesive (UV-cure adhesive is ideal).
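Since the switch I tested operates at 20 ampere-turns, the turns count for any other trip current follows directly. A trivial sketch (figures from my measurements above; other reed switches will need a different ampere-turn figure):

```python
# Turns needed on a reed switch that operates at ~20 ampere-turns
# (as measured in the text). Other switches will differ.
import math

def reed_turns(trip_current_a, ampere_turns=20):
    """Whole turns so the switch operates at the desired trip current."""
    return math.ceil(ampere_turns / trip_current_a)

print(reed_turns(2.0))   # 10 turns, as wound for the 2A test
print(reed_turns(5.0))   # 4 turns, matching the 5A example
print(reed_turns(20.0))  # 1 turn - the practical upper limit noted above
```

Fine adjustment between whole-turn steps is done mechanically, by sliding the coil along the glass body as described.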
This is an application that has received almost no attention from anyone else, which I find a little strange. With the addition of a small SCR to latch the over-current 'event' and a suitable switch (which could be 'solid state' [e.g. MOSFET] or electromechanical relay) you have a simple, reliable overload detector that has close to zero dissipation in the sensor itself. This is almost the equivalent of a magnetic circuit-breaker, but can be expected to be somewhat faster and with a precise detection current.
If a relay is used as the 'disconnect' device, it should be powered from a higher voltage than its coil rating for faster operation. Similar networks are commonly used as 'efficiency' circuits, designed to reduce the coil current once the relay has activated. As shown next, you can have both - faster activation and improved efficiency.
If the relay has a 5V coil as shown, you can expect its resistance to be around 50Ω for a more-or-less typical 10A relay. If you look at the specifications for any relay, you'll see that the 'must release' voltage is typically less than 20% (sometimes only 10%) of the rated voltage. A sensible compromise is 30%, so after the capacitor has charged the relay coil will have 1.5V across it. The current is reduced to 30mA, and the relay will operate much faster than normal. Where the specified operation time may be 15ms, we can halve that by using a momentary 12V supply.
Using this arrangement, the peak current will be around 180mA (normal operating current is 100mA), and the coil will get 100mA within 2ms, ensuring rapid actuation and disconnection of the load. With the continuous coil current reduced to 30mA, the power while activated is only 360mW. The activation time is determined by both mechanical inertia and coil inductance, and the 'speed-up' circuit helps to minimise the effects of both. The normally closed contacts open faster than the quoted activation time, but the difference isn't huge. This technique can be used with any of the circuits described below that use a relay.
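The steady-state sums for the speed-up network can be verified quickly. This is a sketch only: the 350Ω series resistor value is derived here from the figures above (12V supply, 5V/50Ω coil held at 30% of rated voltage), not quoted from a specific relay datasheet.

```python
# Steady-state arithmetic for the relay 'speed-up' network: a series
# resistor bypassed by a capacitor, fed from 12V. Values per the text;
# the derived series resistor is an assumption of this sketch.

supply_v   = 12.0
coil_v     = 5.0
coil_r     = 50.0        # ohms - 'more-or-less typical' 10A relay
hold_ratio = 0.30        # hold the coil at 30% of rated voltage

hold_v = coil_v * hold_ratio               # 1.5V across the coil once settled
hold_i = hold_v / coil_r                   # 30mA holding current
r_series = (supply_v - hold_v) / hold_i    # resistor that sets the holding current
p_total = supply_v * hold_i                # power drawn while activated

print(f"holding current : {hold_i*1000:.0f} mA")   # 30 mA
print(f"series resistor : {r_series:.0f} ohms")    # 350 ohms (derived)
print(f"power activated : {p_total*1000:.0f} mW")  # 360 mW, as in the text
```

The capacitor across the series resistor is what provides the brief full-voltage 'kick' at turn-on; its value sets how long the coil sees (nearly) the full 12V.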
However, you must always consider the supply and load characteristics. Electromechanical relays are not suitable if you need to break DC at more than 30V (current dependent), while MOSFET and other 'solid state' relays can fail (almost invariably short-circuit). The switching device must be selected according to the supply and load.
Be aware that the circuit shown cannot operate if the 12V supply is turned off. A 'fail-safe' circuit would use the relay's normally open contacts, so the relay can't close unless power is available. This creates a new set of constraints, but the general principle isn't changed much. Many of the latching circuits shown below can also be used with a reed switch detector, so I leave the details to the reader. There is no 'Reset' facility in the above circuit, so to reset the 'fuse', turn the 12V supply off then on again.
There are some quite specific precautions that may be required, depending on the circuit used. For example, if a latching circuit requires a power-on reset (POR) to ensure it's in the proper state to be triggered, it's important that the mains power cannot be applied before the POR has completed. This is critical, because the circuit may be unstable if it's told to turn off while the POR signal is still present. Things like that can cause an otherwise reliable circuit to create much grief, and it's important to look at every possibility, however unlikely it might seem. Some potential issues are not easily covered by simulations or bench tests unless you are aware of the possibility in the first place. For example, one circuit I've seen on the Net will only trigger when the current first passes the threshold. If that point is missed (for whatever reason, including the POR issue mentioned), the circuit is inactive and your 'protected' circuitry burns up.
For particularly sensitive applications, it may be necessary to provide an auxiliary 'always on' supply to power the e-fuse. It will turn off if the mains lead is disconnected, but remain on whenever the device is connected to the mains (independent from the mains switch). Of course, the circuitry used has to be protected itself, lest a failure causes the auxiliary supply to fail. This will almost always be a wire fuse, or perhaps a fusible resistor. It's also important to ensure that the main circuitry cannot be powered on unless the auxiliary supply is operating normally.
The use of a wire fuse as an 'emergency backup' is mandatory, because there's always the possibility that a fault can occur within the electronic fuse itself. The switching device may fail, and being (usually) a semiconductor, it will fail short circuit. Relying on the die bonding wires to fail is not a good approach, as this is highly variable and in some cases may require far more current than the voltage source can provide.
By now it should be apparent that electronic fuses are not as simple as they seem. There are quite a few that rely on SCRs (silicon controlled rectifiers) or TRIACs (bidirectional AC switches), but these have problems as well. Once an SCR has turned on, it can only be turned off again by reducing the current to below that required to maintain conduction (the holding current). Most also have a minimum current to turn on (and stay on), called the latching current. If these are not properly thought out, the circuit may not function properly or may not work at all.
While using a resistor to sense the current is somewhat 'old hat', it's far more reliable than (for example) measuring the voltage drop across a MOSFET while it's turned on. This may be alright inside an IC where temperature compensation can be applied, but in a discrete circuit, RDS-on varies with temperature, which is itself a function of the load current. Ideally, the MOSFET needs to have the lowest possible RDS-on to minimise power loss (and MOSFET heating), but that makes it much harder to monitor the tiny voltage drop across the device when it's powering the circuit.
The selection of the switch is very important. Relays have the advantage that they are extraordinarily reliable and provide complete isolation of the switched circuit from the electronics, but they aren't suitable for high voltage DC. For most, this means anything over 30V DC with a current of more than a couple of amps. At higher voltage or current, there's a risk that the DC will simply arc between the contacts. Relays are also fairly slow (compared to electronic switches). If you wish to use a relay as the switch, I suggest that you also read the Relays, Selection & Usage articles (it's a two-part article). If closed contacts are used to provide current to the load, they will generally open within 5ms (longer if a diode is used and coil voltage is removed). For DC, MOSFETs are the preferred choice for a continuous current of up to 50A or so, but a heatsink is essential. SCRs and TRIACs are fine for high current AC applications, but they also need a heatsink. Anything that requires a heatsink starts to get rather large (depending on the current).
In the circuits that follow, I've specified a floating 12V supply. This isn't always essential, but it ensures that there is no interaction between the protected circuit and the power supply used for the electronic fuse circuitry. As a result, any of the DC circuits can be used with either polarity of the main supply, provided the polarity of the switch is observed. The floating 12V supply ensures that there are no polarity conflicts that could cause a short circuit under some conditions. A simple solution for a floating supply is a miniature isolated DC-DC converter. These are described further in the next section.
Because electronic fuses are usually capable of very high speed, the following circuitry should not have any large capacitors that need to be charged. Although the typical risetime of the DC is around 4-5ms if provided by a transformer and rectifier at mains frequency, some circuits may have a much faster risetime. The charge current into a capacitor is determined by the capacitance, risetime of the applied voltage, and any series resistance. The latter includes diode dynamic resistance, transformer winding resistance, and even the resistance of the mains (from the powered device back to the power station). In 230V countries, expect the mains resistance/ impedance to be around 0.8-1 ohm, or 0.2-0.25 ohm for 120V outlets. To give you an idea, a 1,000µF capacitor, charged from a 50V source with a risetime of 2.5ms and a source impedance of 1 ohm, will draw a peak of 18A as it charges. This will trip all of the circuits shown below, although the Figure 5.1.2 'slow-blow' circuit can be tailored to handle capacitor charge current.
If the risetime is increased to 5ms, the peak current is 10A. This is still more than enough to trip the circuits shown. You must either minimise the capacitance on the load side of the e-fuse, increase the risetime, or use a delay system to ensure that instantaneous high peak currents don't trip the circuit. This rather defeats the purpose of a very fast electronic fuse. Some circuits are more amenable to applying a delay than others. It's easy with the Figure 5.1.1 circuit (see Figure 5.1.2), but harder (and less predictable) with the circuits shown in Figures 5.1.3, 5.1.4 and 5.1.5.
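The worked inrush figures are easy to sanity-check numerically. This is a first-order sketch (ideal ramp source, single series resistance, simple Euler integration), not a full supply model:

```python
# Numerical check of the inrush example: 1,000uF charged through 1 ohm
# total series resistance from a 50V supply with a 2.5ms linear risetime.
# First-order model, Euler integration - illustrative only.

C, R = 1000e-6, 1.0            # farads, ohms
v_final, t_rise = 50.0, 2.5e-3 # volts, seconds
dt = 1e-6                      # 1us time step
vc, t, i_peak = 0.0, 0.0, 0.0

while t < 10e-3:               # simulate the first 10ms
    v_src = v_final * min(t / t_rise, 1.0)  # linear ramp, then flat
    i = (v_src - vc) / R                    # current through the series R
    vc += i / C * dt                        # capacitor integrates the current
    i_peak = max(i_peak, i)
    t += dt

print(f"peak charge current ~ {i_peak:.1f} A")   # ~18 A, as in the text
```

Doubling `t_rise` to 5ms in the same sketch brings the peak down to roughly the 10A quoted above, confirming that risetime (not just capacitance) governs the inrush.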
AC 'Solid State' relays based on TRIACs or SCRs cannot turn off until the current falls to zero. That means that you could have a serious over-current for up to 10ms (50Hz, or 8.3ms for 60Hz). This may or may not be a problem, but it is something you need to understand. Should any SSR fail, it will become a short (possibly in one direction only, causing half-wave rectification of AC). It may seem silly to protect an e-fuse with a wire fuse, but it's there as a backup. If it's omitted, a fault cannot be contained should the SSR (or even an electromechanical relay) fail. The results could be catastrophic!
In this section, I have concentrated on circuits that I've been able to simulate or bench test, and that can be trusted to function reliably. This has reduced the possible examples to only a few. There are some elsewhere that might work, but without the physical parts or simulator models, it's impossible to know for certain. If I can't ensure that a circuit does what is expected of it, it doesn't get published - there are way too many examples of circuits that will not perform as claimed, and I'm not about to add to their numbers. It's quite obvious that some circuits that claim to be 'electronic fuses' are nothing more than current limiters - they are not the same thing!
Note: The circuits described are fuses and are not current limiters. None is suitable for providing a fixed limited current.
I show examples of both AC and DC electronic fuses, but for AC the only sensor I would consider seriously is a current transformer. There are ICs that can sense current, but most are only available in SMD packages, and they aren't covered in any detail. Ideally, if you build an electronic fuse, you need to be able to repair it if anything goes wrong, and many SMD parts have a very short sales cycle. The IC you buy today may not be available after only a few years. My articles span over 20 years, and I won't suggest anything that's obsolete or may become so in the foreseeable future. There are several suppliers of current detector ICs, but current transformers have been around for over 100 years and are more common now than ever before.
For the examples, I'll base the circuit on a trip current of around 5A (AC or DC). It's fairly easy to adapt any design for higher or lower current, usually with nothing more than a sense resistor change. Note that most designs require a separate power supply, since that provides for more consistent operation. However, it's also a nuisance to add, although a good option is the Mornsun B1212S-1W or an equivalent (similar tiny supplies are made by Murata and Traco Power, amongst others). These are miniature DC-DC converters, with a 12V (nominal) input and an isolated 12V output. The isolation is not necessarily sufficient for mains voltages, but is fine for any circuitry powered from the secondary of a transformer. While the specifications state that the isolation voltage is 1kV, that's the test voltage - the supplies should not be operated with anywhere near that voltage differential. I've used these supplies in some 'special' projects, and I always keep a few on hand because they are so useful. They can be as small as 12 × 6 × 10mm (length × width × height), and can be purchased for as little as AU$2.20 each.
The basics are shown above. You need a way to monitor the current, suitable circuitry to detect an over-current fault, and a switch to disconnect the load from the power supply. The detector is shown as a resistive shunt, but you can use a Hall sensor, a current transformer (AC only) or even a reed switch as shown in Figure 3.1. The control circuit should latch, so that the switch does not close again until power is cycled. Many e-fuse circuits you may see have a 'reset' button, but this is not included in the above, nor in any of the other examples. Sometimes it's a good idea, but mostly it's not. You'll also see that a fuse is included - this is intended as a fail-safe. If the e-fuse fails to operate, you still have some protection against catastrophic failure and/ or fire.
The DC supply that powers the detector and latch circuitry should normally be floating (not referred to earth/ ground) as it will usually connect to the protected supply, which itself may or may not be referenced to ground or some other voltage. By floating the DC supply, it can be connected to any other voltage source without fear of creating a short circuit or other problem. Small DC-DC converters are available readily for under AU$10.00 each, and a single 'master' 12V supply can provide power to as many DC-DC converters as you need. Mostly, the detector will only need a few milliamps, and a 1W, 12V converter can supply 80mA. It's unlikely that more will ever be needed. All following examples show only the floating supply - the DC-DC converter isn't included.
Mostly, the switching device will be a MOSFET for DC or a TRIAC for AC. There may be situations where an IGBT (insulated gate bipolar transistor) or back-to-back SCRs are preferable, but that doesn't apply for circuits where the current is only a few amps. Even if an e-fuse is used to protect a power amplifier (for example), the average current is usually quite low. Note that if any circuit uses dual polarity supply rails, there should be a mechanism in place so that if one trips, it automatically trips the other. Dual rail circuits generally misbehave if one supply goes missing but the other remains.
This is particularly true for dual-supply audio circuits (preamps and power amps). It's essential that both supplies are turned off simultaneously, or the resulting DC offset can damage your loudspeakers. It's up to the reader to determine if the circuit selected will do that reliably, as it's not possible to cover every eventuality in this article.
These are likely to be the most common requirement, but they can still create some 'interesting' challenges. Of these, providing a reliable power-on reset (POR) is one that cannot be overlooked. Schemes that look fine (and simulate exactly as expected) can be very deceptive, and it's an area that gets scant attention in most latching circuits that you'll see. As noted earlier, ideally a circuit that includes latching will be used as a matter of course, since circuits that can 'automatically' reset are liable to cause more damage. This isn't necessarily a problem, and it depends on the application. It's also important to keep dissipated power as low as possible, as this reduces wasted power and (hopefully) means that you don't need to add a heatsink to the switching device. At high currents, there will nearly always be a need for a heatsink, but it should be as small as possible or the circuit is a nuisance to incorporate into a design.
Figure 5.1.1 shows an electronic fuse that has all the required safeguards for reliable operation. U2C and U2D form a 'set-reset' latch, which is initialised by the charging of C1 when power is applied (power-on reset). The 12V floating power supply needs to turn on quickly enough (within 5ms to full voltage) to ensure that the reset works, otherwise the circuit may not turn on the load. While the POR is active, Q1 is turned on to ensure that no load current flows until the circuit is ready to function. The load can be in the positive or negative side of the switching MOSFET, and the only thing of importance is that the polarity is correct.
If the load current exceeds the preset maximum (as set with VR1), the output from U1A will exceed the threshold for U2A, which sets the latch and turns off the power (Q1 shorts the gate supply to the MOSFET's source). For a 5A load, R1 (current sense) can be as low as 25mΩ, and U1A needs a gain of about 50 to trigger U2A. The LM358 is used because they are cheap, readily available, very tolerant of supply voltage issues, and can operate normally with both inputs at (or even slightly below) the negative supply voltage. They aren't particularly fast, but a simulation shows that the circuit will trip (turning off power) within 10µs - even if the current is less than 10% above the threshold.
While the circuit may look fairly complex, there are only two low-cost ICs and a small handful of other parts. The MOSFET is selected based on the supply voltage used for the load. The IRF540N is shown as an example only, and because it has a low RDS-on, power dissipation is only 1.1W at full current (5A). There are many MOSFETs that can handle either much higher voltage or current, so that needs to be chosen to suit the application. Likewise, for lower currents, R1 should be a higher value to ensure that U1A doesn't need too much gain (around 50 is the maximum I'd recommend). The circuit's current draw from the floating 12V supply will be around 1mA or so when the load is on, rising to about 6.5mA when tripped (because Q1 has to pull the gate voltage to zero).
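The detection arithmetic for Figure 5.1.1 follows directly from the sense resistor and amplifier gain. A quick sketch (the 6.25V output figure is derived here, not quoted from the circuit description):

```python
# Detection arithmetic for the Figure 5.1.1 circuit: sense voltage across
# R1, amplified by U1A, compared against the VR1-set threshold.

def trip_threshold(trip_current_a, r_sense_ohm, gain):
    """Return (sense voltage, amplifier output) at the trip current."""
    v_sense = trip_current_a * r_sense_ohm   # voltage across R1 at trip
    return v_sense, v_sense * gain           # U1A output at the trip point

# 5A trip, 25 mOhm sense resistor, gain of ~50 as described in the text
v_sense, v_out = trip_threshold(5.0, 0.025, 50)
print(f"sense voltage : {v_sense*1000:.0f} mV")  # 125 mV across R1 at 5A
print(f"U1A output    : {v_out:.2f} V")          # 6.25 V - the VR1 setting
```

For lower trip currents, the same sum shows why R1 should be increased: keeping the gain at 50 or below means the sense voltage must stay above roughly 100mV at trip.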
The configuration above has another useful feature. If you include the resistor/ capacitor network before U2A, the combination of R9 and the capacitor (C3) provides a 'slow-blow' characteristic. The capacitor will integrate the DC, so it can handle high peak current, provided the average is below the trip current. With R9 at 10k, you can use 33µF or more, which will allow up to 10A for one second (under 100ms with 10µF). A sustained overload of two hundred milliamps (based on a 5A cut-off) will disconnect the power in just over one second with 33µF for C3. No wire fuse or circuit breaker can match that. Most of the other circuits shown can't match that either - with most, 'slow-blow' operation isn't possible.
You can adjust the time delay section over a wide range. Both R9 and C3 can be increased in value to get a longer delay, but make sure that C3 is a low-leakage capacitor and isn't subjected to any heat source. This will increase its leakage and may adversely affect the timing. An electronic fuse that can't do its job is worse than useless. A zener diode (~3.9V, cathode to U1A.1) can be used in parallel with R9 so a severe overload will trip the circuit almost instantly. This will require some experimentation to get it right for your application.
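The 'slow-blow' timing can be estimated with a single-pole RC model: the comparator trips when the voltage integrated by R9/C3 crosses the level corresponding to the 5A threshold. This is a first-order approximation only (real capacitor leakage and the zener option above are ignored):

```python
# First-order estimate of the 'slow-blow' trip time set by R9 and C3.
# Assumes a simple single-pole RC integrator and a sharp comparator.
import math

def trip_time(i_load, i_thresh=5.0, r=10e3, c=33e-6):
    """Seconds for the integrated signal to cross the trip level.
    Returns infinity if the load current never reaches the threshold."""
    if i_load <= i_thresh:
        return math.inf
    tau = r * c
    return -tau * math.log(1.0 - i_thresh / i_load)

# The worked example: a sustained 200mA overload (5.2A on a 5A fuse)
print(f"{trip_time(5.2):.2f} s")   # ~1.08s - 'just over one second'
print(f"{trip_time(10.0):.2f} s")  # a hard 2x overload trips much sooner
```

As the model shows, small overloads take progressively longer to trip (approaching infinity at the threshold itself), which is exactly the inverse-time behaviour expected of a slow-blow fuse.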
The circuit shown above appears on several websites, and its origin is unknown. It's been modified to remove the (IMO) redundant LED indicator, and it uses a sense resistor as well as the MOSFET's channel resistance to set the current. Using RDS-on may save an extra part (R1), but the trip point changes with MOSFET temperature. As the current approaches the trip point (about 4.9A as shown), MOSFET dissipation starts to increase because the gate voltage falls. Because the detector is just a transistor, it doesn't have a well defined on/ off state; as the load current approaches the maximum, the transistor starts to turn on, removing gate voltage. Once tripped, it will not restart unless the load voltage is reduced to near zero. D1 is used to 'offset' the base-emitter voltage of Q1. Several parts of the circuit are temperature sensitive, namely Q1, Q2 and D1, so expecting accurate current tripping over a wide temperature range is unrealistic.
Once the current has increased enough to turn on Q1, that turns off Q2, so the base of Q1 gets more current, turning off Q2 further. It's a simple positive feedback loop that ensures that Q2 turns off completely, and Q1 gets base current from the positive source voltage. If the voltage is too high, the base current limits of Q1 may be exceeded, so R4 may need to be increased in value. The circuit is reset simply by turning the main supply off and back on again.
The circuit is latching, but simply maintaining the +12V supply won't keep the circuit turned off. I've included it because it does appear on several sites, but just how much detailed analysis has been done is unknown. Like the previous circuit, the load can be in the positive or negative output circuit, and only the polarity (and supply voltage) is important. Also, like the Figure 5.1.1 circuit, the MOSFET must be selected depending on the supply voltage and load current. While the circuit is simple, it also has high dissipation, especially when close to the trip current. If set for around 5A as shown, it will provide very good protection for a circuit that normally draws up to 3A maximum. It will disconnect with a fault condition (5A or more) very quickly.
The issues in the Figure 5.1.3 circuit are expected. A simple circuit will often have inferior performance compared to a more complex one, provided the complex circuit is designed properly. Just because a circuit uses lots of parts, that does not automatically mean that it will work 'better' (it may not work at all). The biggest problem with the simple circuit shown is power dissipation, which not only makes the circuit get hot, it also causes a higher than normal voltage drop in series with the load. MOSFET dissipation is increased a great deal more if you omit R1 and only rely on the MOSFET's RDS-on (as shown in other versions on the Net). R3 can't be reduced substantially, as it will pass too much current to the base of Q1, and will dissipate significant power. With a 100V supply, R3 dissipates nearly 1.5W and passes 14mA. Both are reduced at lower voltages.
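The R3 figures can be checked with simple Ohm's law arithmetic. The value of R3 isn't stated here, so 6.8k is assumed below (it reproduces the quoted figures); substitute the value used in your version of the circuit.

```python
# Feed-resistor dissipation in the Figure 5.1.3 circuit.
# R3 = 6.8k is an assumption that matches the ~14mA / ~1.5W quoted at 100V.
def feed_resistor(v_supply, r3):
    i = v_supply / r3          # current through R3 (Q1's ~0.6V base voltage ignored)
    p = v_supply ** 2 / r3     # power dissipated in R3
    return i, p

i, p = feed_resistor(100.0, 6.8e3)
print(f"{i*1000:.1f} mA, {p:.2f} W")   # approx 14.7 mA, 1.47 W
i, p = feed_resistor(24.0, 6.8e3)
print(f"{i*1000:.1f} mA, {p:.2f} W")   # far lower at 24 V
```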
An alternative that addresses some of the issues with Figure 5.1.3 is to use an SCR to switch off the MOSFET's gate voltage. While very low current SCRs do exist (e.g. BT169 series), most require about 800mV gate voltage to trigger. This can be addressed by using a discrete SCR, made with a pair of low power transistors (see Appendix). However, a standard low-current SCR will still work, although it's not easy to get a well-defined cutoff current.
Unlike the Figure 5.1.3 version, once it's triggered, the MOSFET will remain turned off for as long as the auxiliary 12V supply is present. The SCR needs around 6mA to ensure that it remains on once triggered. The circuit also doesn't rely on the MOSFET's on resistance, only that of the current shunt. Once the SCR is triggered, it doesn't require any voltage from the main supply. Note that the MOSFET must be a 'standard' type and must not be logic compatible (turned on with 5V). An SCR is unable to reduce the voltage to much below 1.7V, which may provide enough gate voltage to cause current to flow in the MOSFET if it's a low threshold (logic compatible) type. SCRs are temperature sensitive, so if it gets hot, the detection current will fall. It's unlikely to cause a problem in most circuits.
The reed switch detector shown below can also be used with the above circuit (DC only). There are a few other changes needed. Replace the sense resistor with the reed switch coil, and include the 1k gate series resistor.
The next circuit is very adaptable, but in its simplest form it's flawed. With a few enhancements you can do things that would otherwise be impossible or inadvisable. The detector (reed switch and sense coil) and the relay do not need to be part of the same circuit - the reed switch can sense DC, using the relay to disconnect AC. This makes it highly versatile. While there are ways that other circuits shown can be wired in a similar manner, the following version is the easiest to adapt.
The above e-fuse uses a reed switch with a sense coil, which minimises the voltage drop across the sensing circuit. It's not adjustable, other than by varying the number of turns around the reed switch. As noted above, the switch I tested responded to 20A/T (ampere-turns), so ten turns provided detection at 2A. The load and its supply must be DC to prevent reed vibration and possible metal fatigue. Provided the reed switch terminals are at least 5mm from the coil, the circuit will be safe with mains derived voltages. AC through the sense coil is not recommended, because if the current is high enough the reed will be vibrating constantly.
When DC is applied, the relay is de-energised, and the NC (normally closed) contacts provide DC to the load. When the trip current is exceeded, the relay is energised via SCR1 and disconnects the load. In common with all DC relays, make sure that the voltage is within the relay's DC voltage limit. This is typically only 30V DC for most relays, but it can be doubled by using two sets of contacts in series. I suggest that you read the article Relays, Selection & Usage (Part 1) and Relays (Part 2), Contact Protection Schemes to gain an understanding of the issues faced when switching DC with electromechanical relays.
Note: Using a normally closed relay contact is less than ideal, but it's the only way to get zero dissipation when the system is on standby. 24V operation is quite ok - use a 24V relay and increase the value of R1 to 1.8k. However, there is no protection against contacts that weld closed due to high inrush current into the load. Relays can weld their contacts and MOSFETs can fail (always short-circuit) - no connection/ disconnection scheme is immune from failures.
The same basic arrangement can be used with a MOSFET switch (the gate draws zero quiescent current), and that may be preferable. The SCR simply pulls the gate voltage down to about 1.8V, which turns it off. The gate requires a feed resistor from the +Ve supply (1.8k is fine for 12V) and a 15V zener to ground to protect it against voltage spikes. When triggered, the circuit will draw only 6mA, which is just enough for the SCR to latch and remain turned on. The test button is highly recommended if you use a semiconductor switch.
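The 6mA figure follows directly from the feed resistor and the SCR's on-state voltage. A quick check, using the values described above:

```python
# Gate feed-resistor check for the MOSFET-switch variant: when the SCR
# fires and holds the gate at ~1.8V, the current through the feed resistor
# must exceed the SCR's holding current (a few mA for a BT169-class part).
def scr_hold_current(v_supply, v_scr_on, r_feed):
    return (v_supply - v_scr_on) / r_feed

i = scr_hold_current(12.0, 1.8, 1.8e3)
print(f"{i*1000:.2f} mA")   # approx 5.7 mA - enough to keep the SCR latched
```

For a 24V supply the feed resistor is simply scaled up so the latched current stays in the same region.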
Although the reed switch I tested required 20A/T, yours will likely be different. Once you know how many ampere-turns are needed, it's easy to calculate the number of turns needed for any given current. For example, the switch I used would need 40 turns to respond to 500mA, 20 turns for 1A, or 5 turns for 4A. The reed switch can be used with many of the other circuits as well, but it will not work with the Figure 5.1.3 version, because that circuit needs the main supply to be present so it can latch.
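The turns calculation is trivial, but worth writing down, since you'll repeat it for whatever ampere-turn figure your reed switch measures:

```python
import math

def turns_for_current(amp_turns, trip_current):
    """Turns needed around the reed switch for a given trip current,
    given the switch's measured ampere-turn sensitivity."""
    return math.ceil(amp_turns / trip_current)

AT = 20  # the switch the author tested operated at 20 ampere-turns
for i in (0.5, 1.0, 4.0):
    print(i, "A ->", turns_for_current(AT, i), "turns")
# 0.5 A -> 40 turns, 1.0 A -> 20 turns, 4.0 A -> 5 turns
```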
There are inherent problems with an e-fuse that uses the main supply to power the fuse circuit. The kindest thing we can say about a short circuit is that it's brutal. If you consider the circuit shown in Figure 5.2.5 you may see the problem. A 'dead short' removes all power, because the supply is shorted. With no DC available, the relay cannot activate, and the circuit will only work if the main supply is capable of more current than the short circuit can absorb. There is no way to guarantee this, but you might get away with a circuit like that shown next.
D2 charges C1, which is made much larger than would normally be needed. It has to be able to provide current for the relay coil during the time it takes to disconnect the shorted load. The final value of C1 depends on the relay's coil current, the applied voltage, and the relay reaction time. To be safe, it should be able to hold enough charge to provide the relay with enough current to operate for at least 5 times the expected activation time.
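C1 can be estimated from the coil current, the relay operate time and the voltage sag that can be tolerated. The figures below are illustrative only - use your relay's datasheet values:

```python
# Hold-up capacitor sizing for C1: it must keep the relay coil energised
# while the shorted load is being disconnected.
def holdup_cap(i_coil, t_operate, v_headroom, margin=5.0):
    """C = I*t/dV, scaled by a safety margin (the text suggests holding
    the relay for at least 5x its expected operate time)."""
    return i_coil * (t_operate * margin) / v_headroom

# 30mA coil, 10ms operate time, allow the cap to sag by 4V (12V -> 8V)
c = holdup_cap(0.030, 0.010, 4.0)
print(f"{c*1e6:.0f} uF")   # approx 375 uF - use the next standard value up
```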
The voltage from the supply will collapse while the short is present, but it should disconnect within around 10ms (based on 'typical' 12V relays). Current to the load is interrupted, but the relay remains energised, keeping the faulty load from causing damage. Of course, a really serious fault current may cause the wire fuse to open, but if that happens, the e-fuse is redundant.
A MOSFET is a good switch - no moving parts and very fast operation. However, like all semiconductors, MOSFETs can be damaged by a variety of abuses. Because we are switching the 'high side' (i.e. the positive supply), a P-Channel MOSFET is needed to eliminate the requirement for a supplementary voltage to drive the gate. The output from the SCR has to be inverted, hence Q2. The voltage divider (R4 and R5) is required because the voltage across the SCR when it's on is about 0.8V - enough to keep Q2 conducting. ZD1 may appear to be redundant, but the gate of a MOSFET must be protected from 'unforeseen' conditions, as may occur if the load is inductive. As you can see, the circuit is more complex, but the added parts are all low-cost.
As long as no overload is detected, the SCR remains off, so Q2 is on, pulling the gate of Q1 to ground and turning it on. When an overload triggers the reed switch, the SCR turns on, removing base drive from Q2. Its output (collector voltage) goes to +12V, and Q1 turns off, disconnecting the load. The load remains disconnected until power is cycled (so the SCR can turn off). A push-button can be used in parallel with the SCR as a 'Reset' switch. The circuit turns on again when the switch is released, so the e-fuse is instantly re-armed. If the fault is still present, it will promptly turn off again.
No part of any e-fuse circuit should ever be taken for granted, and everything has to be tested (many, many times) to ensure that it performs as expected every time. Before you commit to a MOSFET, check that RDS (on) ('on' resistance) will not reduce the voltage too much, and/ or subject the MOSFET to excessive power dissipation. The IRF5305 has a claimed RDS (on) of 60mΩ, whereas a more-or-less typical 10A relay will have (quoted) contact resistance of less than 50mΩ. If you choose to measure it, you'll usually find it's less than that. I've measured around 10mΩ for some relays - there are MOSFETs that can beat that easily, but there are comparatively few low RDS (on) P-Channel types. Lower RDS (on) can be obtained by paralleling two or more MOSFETs. R3 may be reduced so it can discharge the gate capacitance as quickly as possible.
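The check is simple arithmetic, shown here for the IRF5305's claimed 60mΩ at a 5A load:

```python
# Quick sanity check before committing to a MOSFET: voltage drop and
# dissipation at full load current for a given RDS(on).
def mosfet_check(rds_on, i_load):
    v_drop = i_load * rds_on        # series voltage loss
    p_diss = i_load ** 2 * rds_on   # dissipation in the MOSFET
    return v_drop, p_diss

v, p = mosfet_check(0.060, 5.0)     # IRF5305 (claimed 60 milliohms) at 5A
print(f"{v*1000:.0f} mV drop, {p:.2f} W")   # 300 mV, 1.5 W

v, p = mosfet_check(0.030, 5.0)     # two in parallel halve RDS(on)
print(f"{v*1000:.0f} mV drop, {p:.2f} W")   # 150 mV, 0.75 W
```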
Sometimes, one sees a circuit that looks too good to be true. This is almost always because it is - the 'designer' has completely failed to see the inherent flaw(s). The next drawing is just such a circuit, but I won't say where it came from other than it was a website in India. It has been simplified (LEDs etc. are not included) and redrawn. The demonstration unit used a 9V battery to power a small motor (already a bad combination because of the very low capacity of 9V batteries). The circuit relies on the resistance of the source voltage, and if the supply is capable of 10A or more, the circuit will only operate if the short-circuit impedance/ resistance is very low.
The idea is that you press the 'On' button to turn on your device, and that energises the relay, which bypasses the pushbutton switch. The relay remains activated as long as power is present. The premise is that if the load develops a fault or is short-circuited, the incoming voltage will be pulled to zero (or close enough) and the relay will release. The load (and short circuit) is then disconnected, and everything is (allegedly) protected. Unfortunately, there is no current sensor of any kind, so operation depends only on the fault being able to pull the supply voltage down to less than 1V (for a 12V relay). Any overload that can't reduce the supply voltage to near zero will not de-activate the relay, so the load may burn out, the supply may be damaged, or (probably) both.
As shown with a supply having an output impedance of 100mΩ (implying 10% regulation at 10A which is pretty poor), the 'short' must have a resistance of 10mΩ or less to pull the voltage down to below 1V so the relay will release. Most 12V relays will remain activated with a voltage of as little as 1.2V ¹. The short-circuit current will be around 110A for at least 10ms (the relay release time). Operation is not 'automatic' in that the 'On' button must be pressed to turn the device on. It is possible to make its start-up automatic upon application of power, but the design is so badly flawed that there's no point. Relying on a fault to reduce the supply voltage to zero is not an 'electronic fuse' under any known definition.
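The arithmetic behind those figures is a simple voltage divider formed by the supply's output impedance and the fault resistance:

```python
# Why the 'relay-only' circuit can't work: the fault resistance needed to
# pull a 12V supply (100 milliohm output impedance) below the relay's
# ~1V release voltage, and the fault current that implies.
def fault_analysis(v_supply, r_source, r_fault):
    i = v_supply / (r_source + r_fault)   # fault current
    v_out = i * r_fault                   # voltage remaining at the relay
    return i, v_out

i, v = fault_analysis(12.0, 0.100, 0.010)
print(f"{i:.0f} A fault current, {v:.2f} V at the relay")
# approx 109 A and 1.09 V - barely enough to release a typical 12V relay
```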
I've included this as an example of what not to do. When you see circuits on the Net, understand that many are flawed, some won't work at all, and some are downright dangerous. The circuit shown is a perfect example - it's badly flawed, and is potentially dangerous unless the voltage supply has a comparatively high output impedance (preferably at least 1Ω) and can withstand a shorted output without being damaged. Note that the 'On' button must be rated for the full load (or short-circuit) current, and there is no protection while the button remains pressed. The only thing that can save this circuit is a relatively high-impedance source, which renders it useless and dangerous. The wire fuse is my addition - it was not included in the original.
¹ Most relays will pick-up (activate) with 0.8 of the rated voltage (9.6V for a 12V relay) and release at 0.1 of the rated voltage (1.2V for a 12V relay). This is fairly consistent, but always check the datasheet.
There's one circuit that you'll see in any search for 'electronic fuse'. Unfortunately, those who build it quickly discover that they have wasted time and parts on a circuit that simply does not work for its intended use. The 'theory' is that at a current determined by the base-emitter voltage of Q1 and the value of R2, the transistor will conduct and short out the SCR, which will turn off. These things do not happen. Q1 certainly turns on, because the base is joined to the collector via the SCR. There is nothing that will cause the SCR to turn off!
You'll see that I haven't included any component values, simply because the circuit does only one thing - it wastes parts (and your time). It also reduces the output voltage by around 2-3V so you also need to waste more money on a heatsink for Q1. Over the years, I've seen many forum posts from people who have built this circuit, and they were asking for advice because it didn't work. The only useful advice is don't build it in the first place, because it doesn't work.
There are a few other circuits that may appear similar, but they are current limiters, not fuses. A fuse disconnects the load (and whatever fault exists) from the supply. A current limiter set for (say) 5A with a 12V supply will dissipate 60W if the output is shorted, and obviously a great deal more with a higher voltage or current. Current limiters can be useful at low current (less than 1A), but they don't fully protect the load. If the load is a motor, its internal fan (if fitted) will cool the windings when it's running, but if it stalls (and draws the maximum current allowed for), there is no cooling because the motor is stopped. How long it can survive depends on the power dissipated. This is why true electronic fuses are used!
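The dissipation figure is just V × I, but it's worth making explicit, because it's the fundamental reason a limiter can't replace a fuse:

```python
# Why a current limiter is not a fuse: with the output shorted, the pass
# element dissipates the full supply voltage times the limit current,
# continuously, until something fails.
def limiter_dissipation(v_supply, i_limit):
    return v_supply * i_limit

print(limiter_dissipation(12.0, 5.0), "W")   # 60 W, as quoted in the text
print(limiter_dissipation(48.0, 5.0), "W")   # far worse at higher voltage
```

An e-fuse disconnects instead, so the steady-state dissipation after a fault is essentially zero.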
Finally, this section shows a commercial IC designed for 'hot swap' applications where control is required over everything. This isn't a specific recommendation, but is included to show a sample of what's currently available. There are many others from multiple manufacturers, but this one caught my eye as one of the most ingenious and capable devices I've come across. It's only available in SMD packages.
The TPS2663x has just about every bell and whistle you can think of, then adds a few extras. It's primarily designed for 'hot swap' boards in digital systems, and it also features a drive circuit for an external N-Channel MOSFET for reverse polarity protection (this is not included in the above). Every parameter can be programmed using resistors or a capacitor (which controls the ΔV/ΔT (dV/dT) rate of change of voltage over time). It operates using a supply voltage from 4.5V to 60V, and the current limit can be set from 600mA to 6A. TI describes it as a '60-V, 6-A Power Limiting, Surge Protection Industrial eFuse'. It can be set up to automatically re-try or latch off after an overload has been detected. Overvoltage and undervoltage protection are also provided.
This has been included so you can see that e-fuses are now a mainstream requirement, and have abilities far beyond anything described here. However, it comes at a cost. The IC shown is very complex, and is available in LCC (leadless chip carrier) and a 'traditional' SMD package. Both include provision for heatsinking which is necessary when it's in current limit mode. At the time of writing, an evaluation module was available for US$99, with the IC itself costing a little over AU$9.00 for one-off quantities. The datasheet has countless formulae to allow the user to program the various functions. If you need to know more, see the datasheet (and no, I won't answer questions about it - just to save you from asking).
The LTC4249 is another example of an electronic fuse. The details aren't shown here, but it's a dual-channel IC in an LQFN package, which makes it very hard to use for DIY. With 6V to 65V operating voltage (the second channel will work down to 1.5V) at up to 1.2A (which can be doubled by paralleling the two internal e-fuses), it's a very capable IC. The trip current is easily adjusted with a single resistor (per channel), and it's designed to do one job, and do it well.
AC electronic fuses are less common than DC versions. While a few examples exist on the Net, some are best described as ill-conceived, with others that simply will not work as intended. There is no point whatsoever publishing a circuit that doesn't do what's claimed. In some cases, a small change can make all the difference, but most people won't know what to change or why. AC circuits are also tricky, because many common loads have a high inrush current. Because electronic fuses are so fast, the first half cycle can trip the fuse and disconnect the load, even though there's nothing wrong.
This is one reason why fuses (in equipment) and circuit breakers (in the switchboard) remain popular - a slow blow fuse can handle the inrush current easily, but will blow if there's a fault. Circuit breakers (thermal-magnetic types) are available with what's known as a 'D-Curve', which is essentially a delay. A true fault current will trip the breaker, but loads within the D-Curve profile won't. Electronic fuses generally don't allow much leeway, so they will trip at the instant the current exceeds the threshold. I'll only show a couple of AC types, with one using a current transformer and the other a shunt resistor.
Switchboard circuit breakers are almost always thermal-magnetic types. A small overload causes a bimetallic strip to bend as it gets hot, and if the overload is maintained the bimetallic strip will trip the breaker, opening the contacts. If there's a severe overload, the magnetic circuit operates almost instantly, and opens the contacts. Because short circuit current can be very high, an arc is created across the contacts, and this is dissipated with an arc quench system - commonly a series of flat metal strips mounted in an insulating material, that break the arc into smaller segments that are more easily extinguished. This is known as an arc chute. Industrial circuit breakers often use alternative methods that can handle higher voltage and current.
Most AC electronic fuses will be designed to operate if the current is only marginally higher than the required value. Because a short circuit will almost inevitably cause damage or failure, it's important to ensure that an electronic fuse is only ever a secondary system, with the system as a whole protected against catastrophic damage by a fuse or a thermal-magnetic circuit breaker. Relying on electronic circuitry alone is unwise (in the extreme).
As with the DC version shown in Figure 5.1.1, U2B is used to lock out the output until the power-on reset is complete. If this isn't done, the first cycle can easily be well beyond the threshold, but won't be detected. The current set pot (VR1) may allow the current transformer core to saturate, but that won't affect the ability of the circuit to detect the current reliably. The MOC3020 is a dedicated TRIAC driver, and provides around 7.5kV isolation between the input (diode) and output (photo-TRIAC). These have been with us for many, many years, and are still available for less than AU$1.00 each. The TRIAC is selected based on the voltage and current required. A common device is the BT139F-600E (TO-220, full pack - no insulator required). These are rated for 600V at up to 16A RMS. You may need some additional circuitry if the load is highly inductive, and the MOC3020 datasheet shows what's needed.
This circuit can be adapted for a slow-blow characteristic using the time delay circuit shown in Figure 5.1.2. As it's shown here, the circuit responds to the peak current, and adding the R/C network means it responds to the average. The time delay can be selected to suit your application, and requires thorough testing to ensure that it can allow for inrush current, but operates as expected with the normal AC load.
For the current transformer, it's hard to beat the AC-1005, a 5A, 1:1000 ratio transformer. It's capable of working at up to 60A, and I've used them in quite a few projects. Of course, there are others, with some small ones (18 x 10 x 18mm) from eBay that are as cheap as chips. I've tested them, and they work perfectly. The sensitivity of any current transformer can be increased by winding two or more turns through the centre. For example, using two turns doubles the sensitivity. In conjunction with the current setting trimpot, this gives a very wide trip range.
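The sense voltage from a CT is easily calculated. The 100Ω burden resistor below is an assumption for illustration - the actual burden depends on the detector circuit:

```python
# Current-transformer sense voltage, assuming a 1:1000 CT (e.g. AC-1005)
# with a burden resistor across the secondary. Extra primary turns
# through the core multiply the sensitivity.
def ct_sense_voltage(i_primary, primary_turns, ratio, r_burden):
    i_secondary = i_primary * primary_turns / ratio
    return i_secondary * r_burden

# 5A primary, one pass through the core, 1:1000, 100 ohm burden (illustrative)
print(ct_sense_voltage(5.0, 1, 1000, 100.0), "V")   # 0.5 V
# two primary turns double the output
print(ct_sense_voltage(5.0, 2, 1000, 100.0), "V")   # 1.0 V
```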
The Figure 5.2.2 circuit is included primarily to show another option, but it's absolutely not recommended for mains unless you know exactly what is required to ensure safety. All circuitry (including the 12V supply) would be at mains potential, so the circuit could be lethal if any part is touched while it's connected to the mains. Simply ensuring that the 12V supply is isolated to full mains insulation requirements is difficult, so my recommendation is not to mess with it. However, it can be used safely at lower voltages, such as in the secondary side of a mains transformer. The diode bridge and MOSFET need to be capable of handling the full load current and peak voltage.
There is no provision for delayed action, so if the load current exceeds the trip current, it will turn off. The excursion only needs to be very brief (less than 1 millisecond is more than enough), so it can't handle inrush current as capacitors charge. Neither AC electronic fuse is recommended unless you have a definite requirement for a switch that operates at relatively low current, and you know exactly how your load behaves when AC is applied.
Finally, I'll end this with another version of the detector shown at the beginning - a reed switch and a power relay. It's suitable for low voltage, particularly automotive or marine applications. Once the reed switch closes, that triggers SCR1, which connects the bottom of the relay's coil to ground. D1 absorbs the relay coil back-EMF when power is interrupted. The difference between this version and the one shown in Figure 5.1.5 is that the relay is normally not energised, and the normally closed contacts are used for the load.
As long as the power is available, the circuit is ready to operate. This may be useful for circuitry that's on continuous standby, as the trigger circuit draws zero power unless a fault is detected. This is often particularly important for battery powered equipment, where continuous drain will discharge the battery. Once triggered it will draw power, and ideally it will also be hooked up to an alarm of some kind that will alert you to the fault. Unfortunately, you don't have confirmation that the circuit is functional because it draws no power unless tripped. However, it's a very simple circuit and there isn't much that can go wrong. Adding a test button would be a good idea, with R2 selected to draw about 1.5 times the normal load. The button must be able to handle the current!
Figure 5.2.3 is a hybrid, in that it detects a DC fault, but disconnects the AC supply. The sense coil is simply wound around the outside of the reed switch as shown in Figure 3.1. The number of turns needs to be determined for the reed switch you use. With just a reed switch, a standard relay that can handle the voltage and current, an SCR, capacitor, resistor and a diode, you have a latching electro-mechanical fuse. It will be far more predictable than a wire fuse, and will operate almost instantly if there is a problem. I would expect that this arrangement would be useful where everything can run from a 12V battery, but it can be used with mains provided the reed switch and relay contacts are fully protected from contact. Do not use AC across the sense coil, because continuous vibration will shorten the life of the reed switch.
Normal operation is started when the 12V supply is turned on, which energises the relay (RL1), closing the contacts and providing AC power to the circuit. Should the load current exceed the limit determined by the number of turns around the reed switch, it closes and SCR1 shorts the supply to Q1, turning it (and the relay) off. The 12V supply does not need to be floating because both the input (detector) and output (relay) are isolated, both from each other and from the 12V supply.
R1 provides sufficient latching and holding current for the BT169 SCR, while R2 and R3 form a voltage divider to ensure that Q1 will turn off when the SCR turns on. R4 provides the gate current for the SCR, and delivers more than enough to ensure it turns on reliably. I've included this because it's interesting, and because it's one of the few e-fuse configurations that can use different sense and control circuits.
When the load current causes the reed switch to close, that triggers SCR1 which energises RL1. The contacts disconnect the load. Because the reed relay will release almost the instant current stops flowing, SCR1 is required to ensure that RL1 latches. You must test this thoroughly, as it's important to ensure that operation is 100% reliable. The backup wire fuse is essential!
While not technically an electronic fuse, a so-called crowbar circuit ensures that a wire fuse operates almost instantly. These circuits get their name from the analogy of dropping a crowbar across a pair of wires, creating close to a dead short. They've been around almost as long as electronics, but became affordable once high-current SCRs were available. One of the first transistor amplifier designs I built had a crowbar protection scheme, but the original designer (who shall remain nameless) failed to run proper tests. When the crowbar circuit operated, it short circuited the (single) supply, so the speaker coupling capacitor had to discharge through a reverse biased output transistor. The results were predictable - the amp blew up, killed by its own 'protection' circuit.
Although I managed to get the amplifier working as it should, the design was flawed elsewhere as well, and was quickly abandoned. It was that which started me on amplifier design, and I haven't stopped since. However, this doesn't detract from the fact that crowbar circuits can be extremely useful. If you use a crowbar, you must ensure that the remainder of the circuit is protected from the crowbar itself. This isn't always as simple as it may seem.
Crowbar protection is often found as part of power supplies expected to operate complex and expensive circuitry that cannot tolerate any significant over-voltage. For example, a processor and support ICs may operate at 5V, but with an absolute maximum of 7V (typical of TTL for example). The crowbar trip voltage may be set for 5.5V, so if the power supply regulator fails and tries to provide a higher voltage, the crowbar circuit operates and protects the circuitry.
The above shows a basic crowbar system, using an SCR. The control circuitry (shown in block form) can be triggered by any event that puts the protected circuitry at risk. This may include damaging overvoltage, over current, or any other risk factor that may exist. A crowbar circuit is brutal and totally unforgiving, so it's a technique that should only ever be used where the overriding requirement is to protect the load equipment. One example of this is to protect a loudspeaker system from an amplifier fault. If accidentally triggered, the crowbar will most likely cause additional damage to the amplifier, but this may be considered 'trivial' compared to protecting the speaker system. There is one (and only one) project that uses this technique, namely Project 120.
The resistor marked as RLIM is optional, and may be necessary to limit the peak current to something that may not be especially sensible, but at least is not destructive. A very heavy-duty wirewound resistor would be used, which must be capable of handling the peak current without going open circuit. Typical values would range from 0.1Ω up to perhaps 1Ω for high voltage circuits. For example, with 1Ω and a 100V supply, the peak current is limited to 100A. I know through testing that many 'cement' block type resistors cannot handle that much current (they go open, sometimes splitting the ceramic case!), so something designed for the purpose will be required.
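The RLIM figures are straightforward, but note the peak power involved, which is why an ordinary 'cement' block resistor won't survive:

```python
# Sizing the optional crowbar limiting resistor: peak current and the
# instantaneous power R_LIM must survive until the fuse clears.
def crowbar_peak(v_supply, r_lim):
    i_peak = v_supply / r_lim
    p_peak = v_supply ** 2 / r_lim   # instantaneous power in R_LIM
    return i_peak, p_peak

i, p = crowbar_peak(100.0, 1.0)      # 1 ohm with a 100V supply
print(f"{i:.0f} A peak, {p/1000:.0f} kW peak in R_LIM")   # 100 A, 10 kW
```

Even though the event lasts only until the fuse opens, a pulse-rated (surge-withstand) resistor is essential.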
The SCR has to be able to handle the normal operating voltage (with some headroom to allow for transient events), and a peak current handling capacity that depends on the supply and the fuse rating. For high current applications, the fuse should be an HRC (high rupturing capacity) type, as it may need to interrupt a significant current. HRC fuses use a ceramic tube instead of glass, and contain sand or ceramic powder which extinguishes the arc much faster than an ordinary glass fuse. It can be instructive to see a glass fuse that's been blown by a mains short circuit - the inside of the glass is covered in a metal film created when the fuse wire vaporises as the arc is drawn. Sometimes the glass will shatter due to localised heating. HRC fuses remain intact even after the most serious short circuit.
Overall, this isn't a technique I'd advise for general usage, because it is so brutal and unforgiving. However, no discussion of electronic fuses would be complete without including it. A crowbar may not be a fuse per se, but it is designed to protect equipment from damaging outside problems. These can include high voltage spikes on the AC line that may damage equipment if not caught quickly, and a crowbar can be designed to be very fast indeed. An SCR like a CS45-08 is rated for an instantaneous current of over 500A and 800V peak, with a dv/dt rating of 1kV/µs. They cost less than AU$10.00 each, so you get a great deal of protection for the money. The hard part is in the innocuous little box that says 'Detection & Control Circuits'. What's inside that box depends on the application.
There are countless SCRs available, and there's bound to be one that suits your application. It's essential that you understand the datasheet and the specific limitations that apply to all thyristors. In particular, the gate current must be maintained until the rated holding current is reached. This is rarely a problem with crowbar circuits.
If your application is AC you'll use a TRIAC, and there are additional considerations. In particular, beware of the polarity of the trigger pulse. The recommendation is that if you can only provide a single polarity trigger pulse, it should be negative, as this avoids the potentially troublesome '3+' quadrant, where MT2 is negative and the gate is positive. (TRIAC terminology refers to 'MT1' (at the gate end of the device) and MT2 - 'MT' stands for 'main terminal'. The remaining terminal is the gate. These are shown in Figure 5.2.1.) Further discussion of this is outside the scope of this article. AC crowbar circuits are relatively uncommon.
Where equipment is powered from dual supplies, it usually gets annoyed if one supply rail disappears but the other remains. This depends on the circuit itself, but it's common for a power amplifier circuit to 'go DC' if one supply rail should disappear. DC can easily damage speakers, hence the need for projects such as Project 33. With an electronic fuse, it's usually possible to arrange the circuitry so that if one supply draws more than the rated current and trips, it will trip the opposite polarity (or even another voltage of the same polarity if both are needed for normal operation).
In the circuit shown, there are two optocouplers, U1 and U2. The output sides of each connect to the opposite polarity supply. If either SCR operates, it activates the optocoupler, and the output trips the other supply. It doesn't matter which one operates first, both will trip within a few milliseconds of each other. Note that the negative supply cannot share the 12V auxiliary supply - it must be separate from the supply used for the positive e-fuse circuit.
While the above is shown using the Figure 5.1.4 circuit, the same principle can be applied to most of the other circuits. The negative supply connections might look 'odd', but remember that the N-Channel MOSFET doesn't care about the external polarities, only that its drain must be more positive than its source. Provided that the voltage polarity is correct, the circuit works as expected.
Where (low current) SCRs are shown, there's no real reason that you can't use a 'discrete' version, using one NPN and one PNP transistor and a resistor plus diode (or two resistors). The arrangement shown below is almost identical to a 'real' SCR. However, it has one important (and potentially useful) difference - it can be turned off by applying a negative gate voltage. This doesn't work with standard SCRs - once turned on, the only way to turn them off again is to reduce the current below the SCR's holding current. The turn-off pulse needs to be significantly greater than that used to turn the device on. It only needs about 1mA to turn on, but requires around 5mA to turn off with the values shown. Higher anode current means that more current is required to turn it off. While marginally interesting, the ability to turn it off is not particularly useful. This type of device is known as a GTO (gate turn-off) thyristor, and they are available as a single component (albeit not particularly common).
In the discrete SCR circuit, D1 is used to convert the upper transistor to a current mirror, and that reduces the 'on' voltage, as well as the base current in both Q1 and Q2. You can use another 1k resistor instead of D1, but the circuit doesn't work as well, with the base current being far greater in both transistors (by a factor of at least five). The circuit operates as a regenerative feedback amplifier. Once either transistor starts to conduct, it supplies base current to the other, turning on very quickly. As simulated, the turn-on time is not much over 200ns, which is pretty fast by any standard. As tested on the workbench, that time fell to just over 8ns, and the circuit could be triggered from an ohm meter (whose test current was measured at under 1mA).
The discrete SCR is included for two reasons. Firstly, it demonstrates the internal structure of an SCR rather well, and secondly, it provides the experimenter with an easy way to play with the circuit to see how it works. It can also be used if you don't have any low power SCRs in stock, but it is not designed to handle more than a few milliamps. I'd suggest that anything more than 100mA is being 'adventurous', largely because it pushes the base circuits close to their current limits. The real SCR and its discrete counterpart are similar in performance at low current. Both require an input current on the gate (G) terminal, and both require a defined holding current.
The discrete version is more sensitive, but can handle less voltage and current than the BT169A shown. D1 can be replaced by another 1k resistor, but that forces the base current to be a great deal higher than with the version shown. It will also have a higher saturation ('on') voltage, by about 100mV at 12mA or 750mV at 100mA. Be warned that the discrete SCR is very sensitive, and simply connecting the supply will often cause it to trigger. The risetime of the applied DC should be controlled to prevent false triggering, or use a 10nF cap in parallel with the 'SCR' itself.
The two transistor latch (bistable) circuit is another alternative. It's useful anywhere you need to latch a condition (such as an over-current 'event'). The standard approach is to connect a capacitor from the supply to the base of one transistor, which forces a reset. This sets the circuit to a known condition when power is applied, and it will remain in that state until a pulse is applied to the 'G' terminal (actually 'trigger', but I used the same terminology as used in the other circuits). Unfortunately, the capacitor also slows down the switching speed, but not usually to a significant degree.
This circuit has been around since before the earliest days of transistors (implemented with valves/ vacuum tubes), and could be used to replace the pair of cross-connected NAND gates shown in Figures 5.1.1, 5.1.2 and 5.2.1. Despite its simplicity, it will use more PCB space than the IC, and will probably cost more as well. However, it's an important building block in electronics. In logic terminology, it's called a 'set/ reset' (or just S/R) latch (aka 'flip-flop'). It's one of a 'family' of what are known as multivibrator circuits. The other two are the astable (no stable states, an oscillator) and monostable (one stable state, commonly used as a timer).
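The behaviour of the cross-coupled NAND latch is easy to model. The sketch below is a logic-level illustration only (no timing, and the illegal both-inputs-active case is not handled); the inputs are active-low, as they are for a real NAND-based S/R latch:

```python
# Logic model of a cross-coupled NAND S/R latch (active-low inputs).
# Illustration only - a real latch settles via propagation delay, which
# is approximated here by iterating the feedback a few times.

def nand(a, b):
    return not (a and b)

def sr_nand_latch(q, set_n, reset_n):
    """One settling pass of the latch. set_n=False sets Q,
    reset_n=False clears it, both True holds the previous state."""
    for _ in range(4):              # iterate until the feedback settles
        q_bar = nand(reset_n, q)    # second gate, cross-coupled
        q = nand(set_n, q_bar)      # first gate
    return q

q = False
q = sr_nand_latch(q, set_n=False, reset_n=True)   # set    -> Q = True
q = sr_nand_latch(q, set_n=True,  reset_n=True)   # hold   -> Q = True
q = sr_nand_latch(q, set_n=True,  reset_n=False)  # reset  -> Q = False
print(q)
```

The 'hold' case is the whole point of the circuit: with both inputs inactive, the feedback keeps whichever state was last forced, which is exactly what an e-fuse needs to remember an over-current event.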
In several of the circuits described above, a reed switch was used as a current sensor. These are fine for DC, but not with AC, as the reed will vibrate constantly and metal fatigue will eventually cause mechanical failure. There are Hall-effect sensors that are designed to provide a linear output with applied current, and these can be used to detect AC or DC. These have an isolated output (no electrical connection to the current-carrying conductor), and they almost always use a 5V supply with a no-load output of 2.5V (allowing positive and negative output swing, depending on the direction of the measured current).
One example is the Allegro ACS712. Different models are available to detect current from ±5A to ±30A. The 5A version has an output voltage (referred to 2.5V) of 185mV/A, so a 5A load current will cause the output level to be ±925mV. Higher current versions have a lower sensitivity to ensure that the output voltage doesn't exceed ±2V, as it must remain within the 5V supply limit.
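The scaling described above is simple arithmetic, and is easy to verify. The sketch below uses only the 185mV/A figure for the ±5A part quoted in the text; check the datasheet for the sensitivity of other versions:

```python
# Output scaling of a bidirectional Hall-effect current sensor such as
# the ACS712 (5A version). Figures are from the text above; verify
# against the actual datasheet before relying on them.

V_REF = 2.5          # no-load output with a 5V supply
SENS_5A = 0.185      # 185mV/A for the +/-5A version

def acs712_output(current_a, sensitivity=SENS_5A, v_ref=V_REF):
    """Output voltage for a given (signed) load current."""
    return v_ref + current_a * sensitivity

print(acs712_output(5.0))    # ~3.425 V, i.e. +925mV referred to 2.5V
print(acs712_output(-5.0))   # ~1.575 V, i.e. -925mV referred to 2.5V
```

This also shows why higher current versions must have lower sensitivity - at 185mV/A, a 30A swing would demand ±5.55V, well outside a 5V supply.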
These devices are effective and convenient, with a claimed isolation voltage of 2.1kV RMS, so mains voltage detection is allowable, with the detection and trigger circuitry at low voltage. They are comparatively expensive (you can get current transformers for less money), but are a great deal smaller. Unfortunately, they're only available in SMD packages, and while they have wide bandwidth they are also rather noisy.
The wide bandwidth makes them suitable for instantaneous monitoring of switchmode power supplies operating at less than 80kHz. Because the output requires further processing (amplification and rectification for AC), they are more complex to implement, and I've not included any example circuitry in this article. The datasheet shows a number of application circuits, and these can be used as the starting point for a complete e-fuse solution.
The circuits shown above quite deliberately specify through-hole parts wherever possible, and use well established parts that have been around for a long time. As a great believer in making things that can be repaired should something go wrong, I avoid SMD parts because most people find them very difficult to work with, and they make the end product very hard to work on later should it ever need fixing. The idea of 'throw it away and get a new one' doesn't sit well with me, and I firmly believe that if something is still capable of doing its job, it should be fixed if it ever fails.
The circuits shown will all work as claimed, even though this is not intended as a collection of projects. The idea is to show budding constructors their options, and stimulate thought about how a circuit functions. Not all of the circuits have been built and tested, but all have been successfully simulated, and function as intended in the simulator. Of course, 'real life' can throw up some potential glitches, but these have been addressed where misbehaviour is a possibility.
Each circuit shown has parts that can be mixed and matched to suit the application. For example, a current transformer can drive an opamp with the output used to trigger a small SCR. Likewise, you can use the opamp and 4093 CMOS latch circuit (Figures 5.1.4, 5.1.5 and 5.2.2) where an SCR is shown. When opamps are used with a single supply, I suggest the LM358, because it's available almost anywhere, is low power, it can function with the inputs at the negative supply voltage, and the output can get to (almost) zero volts. Most opamps can't. There are other alternatives to the LM358, but most are likely to be less readily available and more expensive.
In most cases there is no requirement for electronic fuses. While it is a technique that can be applied to particularly sensitive systems, in the field of audio it's rarely necessary. An electronic fuse using a reed relay or current transformer may seem an easy way to detect excess output current from a power amplifier (indicating a short circuit or a load impedance that's below the optimum), but the reed relay won't respond to high frequencies, and a current transformer won't respond to a DC fault. You can use both of course, and the extra impedance in the speaker output won't affect the amp's output impedance.
However, music is dynamic, and the impedance of a loudspeaker is rarely a 'simple' load. Amplifier protection circuits (e.g. VI limiters) are more likely to protect the amplifier from an unfriendly load, but they aren't without their problems either.
It should be obvious that if you need a really good electronic fuse, it's not a simple undertaking. There are many different e-fuse ICs available that are designed to protect sensitive equipment, although I've only shown a single example. There is no way that I could cover them all as there are so many. The one shown gives you an idea of the capabilities that users expect.
The DIY circuits shown are also only examples. I've shown MOSFETs in most cases, but you can also use IGBTs or bipolar transistors for switching DC. In most cases, a TRIAC is the easiest way to switch AC, but they will not turn off partway through a half cycle, only when the current falls below the holding current. While this may allow a very high peak fault current, it's brief and (probably) won't cause further damage. For AC switching, a MOSFET relay (see Project 198) may be a good option, but I do not recommend using it with mains voltages.
In all cases, the switching device has to be selected based on the voltage and current that is to be controlled. Suitable devices are readily available, with many capable of very high voltage or current. Expecting high voltage and high current usually means the switching device will be expensive, but this isn't often a requirement for DIY projects. The nice thing about an electronic fuse (apart from its well defined cutoff current) is that it is much faster than a wire fuse, and can operate even at very low currents. While a 10mA fuse isn't a common requirement, it's easy to do with electronics, but a great deal harder with a wire fuse (try buying a 10mA fuse - they exist, but the price will probably scare you away).
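To see why very low trip currents are easy electronically, consider the usual sense-resistor arithmetic. The 100mV comparator threshold below is an assumption for the sketch, not taken from any circuit in this article:

```python
# Illustrative arithmetic for a low-current electronic fuse. The 100mV
# trip threshold is an assumed comparator reference voltage.

V_TRIP = 0.1   # assumed comparator threshold (volts)

def sense_resistor(i_trip_a, v_trip=V_TRIP):
    """Sense resistance needed for a given trip current."""
    return v_trip / i_trip_a

def power_dissipation(i_trip_a, v_trip=V_TRIP):
    """Dissipation in the sense resistor at the trip point."""
    return v_trip * i_trip_a

print(sense_resistor(0.010))      # ~10 ohms for a 10mA 'fuse'
print(power_dissipation(0.010))   # ~1mW at the trip point - trivial
```

A 10 ohm resistor and a comparator make a 10mA 'fuse' with a well defined threshold, where a 10mA wire fuse is an exotic (and expensive) part.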
The advantages of electronic fuses are that they are much faster than wire fuses, and can be made to trip quickly with even a small overload. The disadvantages are greater cost and complexity, so they will not be an economical proposition for anything but the most demanding of applications. While the level of protection is far greater than a wire fuse can provide, the wire fuse is still essential in most cases, simply because an e-fuse uses electronic components which can fail. All circuits (other than Figure 5.1.6) are shown with wire fuses, which act as a final backup should a fault develop in the e-fuse. It would (IMO) be most unwise to leave these out, because you may end up with no protection at all.
I've not referenced any of the circuits I found on-line. While there are a couple that appear to be well thought through, most are a motley mixture of continuously regurgitated circuits of unknown origin, and in some cases just won't work at all.
Elliott Sound Products - Electrocution
I have no accreditation as an electrical safety expert, and this article is based on common sense, personal experience and basic research. The material is for your information only - it contains sensible advice, but the information here is not to be considered as wholly reliable or complete. For complete safety information, please consult appropriate professionals in electrical hazard reduction, victim rescue and CPR techniques.
Electrocute - To kill or be killed by electricity (This is the proper definition, but many people use the term to mean electric shock)
There is much confusion about electricity and its ability to send you to your ancestors, and while much of the information is sensible, it is not always easy to find, and usually doesn't cover your area of interest - especially if that includes audio (or electronics in general).
An electric shock may be variously referred to by survivors as being zapped, bitten ("That f'ing microphone just bit me!"), "copping a belt" (probably uniquely Australian), electroplated (that's one I use), or simply by a string of expletives. However it's expressed, the experience is never pleasant, and almost always signals your body to release adrenaline - our bodies react as if we are under attack - which is true enough. All installations, whether fixed or mobile, should be properly connected, all equipment with an earth pin must be connected to an earthed outlet, and safe wiring practices must be used.
This article is not about wiring practices, because there are so many variations worldwide that it is impossible to cover everything. If you are involved with setting up, building or repairing audio or lighting equipment, make sure that you know and follow the regulations that apply where you live. Failure to do so can result in death or serious injury, possibly followed by expensive litigation. If the worst does happen though, the material here should be useful - plus you will hopefully learn something new.
Terminology: Unfortunately, different terms are used in many countries for the same thing. Active, phase, hot, line, live - they all mean the live mains wire, but this is not always clear - especially to those who may not know the terminology. I can make no guarantees that I have covered all possibilities, since there are many foreign (to me) languages that will use different terms again. The table below covers those that I know of - that there are others is certain.
Wire Name (Oz) | Wire Colour ¹ | Also known as ...
Active | Brown, Red, Black | live, line, phase, hot, plus, positive (these last two are wrong, but I have heard them used)
Neutral | Blue, Black, White | cold, common, grounded conductor (US), minus, negative (as above for the last two)
Earth | Green/Yellow, Green | ground, protective earth, earth ground, safety earth, grounding conductor (US)

Note 1 - Be careful with wire colours. The standards are gradually changing worldwide to the Brown, Blue, Green/Yellow scheme, but a great deal of older equipment will use one of the old standards - and it might not be one ever used in your country! Make sure that you treat all incoming mains wires that are not connected directly to the chassis as hostile.
While most terms are reasonably easy to get right, take great care with the US terms grounded and grounding conductor. They are not the same thing. The neutral lead is earthed (grounded) in almost all installations - this is done at the switchboard of every connected premises. Many people have claimed that the mains would be "safer" if neither conductor were earthed, but this is simply not the case. Anyone, anywhere in your street or neighbourhood could have an undetected earth fault that shorts one conductor to earth. A single such fault (which could go undetected for weeks or months) converts the system back to the one we have now - except that no-one knows about it. Where everyone would assume that both wires were "safe", only one would meet this expectation. The other would be live, with the full potential from conductor to earth. See below for more information.
Standards: The European standards (and those of many other countries) can be particularly confusing, and some of the information is either marginally wrong or is incorrectly used with respect to audio and audio-video systems. As an example, I have included section 7.16.2 from TLC-Direct in the UK. This applies mainly to fixed installations, but is included primarily for reference. Fixed installation SELV circuits are not intended to be handled, and they are required to be insulated against accidental contact.
7.16.2 - Separated extra-low voltage (SELV)

The safety of this system stems from its low voltage level, which should never exceed 50 V AC or 120 V DC, and is too low to cause enough current to flow to provide a lethal electric shock. The reason for the difference between AC and DC levels is shown in (Figs 3.9 & 3.10).

It is not intended that people should make contact with conductors at this voltage; where live parts are not insulated or otherwise protected, they must be fed at the lower voltage level of 25 V AC or 60 V ripple-free DC although insulation may sometimes be necessary, for example to prevent short-circuits on high power batteries. To qualify as a separated extra-low voltage (SELV) system, an installation must comply with conditions which include:

- it must be impossible for the extra-low voltage source to come into contact with a low voltage system. It can be obtained from a safety isolating transformer, a suitable motor generator set, a battery, or an electronic power supply unit which is protected against the appearance of low voltage at its terminals.
- there must be no connection whatever between the live parts of the SELV system and earth or the protective system of low voltage circuits. The danger here is that the earthed metalwork of another system may rise to a high potential under fault conditions and be imported into the SELV system.
- there must be physical separation from the conductors of other systems, the segregation being the same as that required for circuits of different types (see 6.6)
- plugs and sockets must not be interchangeable with those of other systems; this requirement will prevent a SELV device being accidentally connected to a low voltage system.
- plugs and sockets must not have a protective connection (earth pin). This will prevent the mixing of SELV and FELV (Functional Extra-Low Voltage) devices. Where the Electricity at Work Regulations 1989 apply, sockets must have an earth connection, so in this case appliances must be double insulated to class II so that they are fed by a two-core connection and no earth is required.
- luminaire support couplers with earthing provision must not be used
I dispute the claim that 50V AC or 120V DC is "too low to cause enough current to flow to cause a lethal electric shock" - IMO this is bollocks. While it is impossible to cover every possibility, statements like that make people complacent (self-satisfied and unconcerned), and complacent people and electricity do not mix.
Note the comment that contact with SELV is not intended, and that lower voltages apply where uninsulated terminals are accessible for contact. This would include plug-pack (wall-wart) transformers with a standard DC connector, and loudspeaker terminals (both on amplifiers and speaker boxes). Based on these recommendations, any power amplifier (or loudspeaker) capable of greater than 75W (approx.) should have insulated terminals designed to prevent contact by the user.
I don't have a full copy of the latest A/NZ standards, and unfortunately this information is almost impossible to find on-line (for any country - not just Australia). One is expected to purchase the standards (this applies almost everywhere), all are copyright, and none will allow any re-publication without a hefty fee (assuming that such re-publication is allowed at all - usually it is not). So much for keeping people informed about electrical safety matters, and making sure that hobbyists and DIY people have access to essential safety information. Ultimately, it is left to individuals like me to provide this data, with no ability to legally disclose the relevant sections of the rules. One section of the Australian/NZ standard to which I have access covers double insulation requirements. Since laboratory testing is mandatory to ensure that all double insulation requirements are met, this is an impractical approach for DIY.
After much description, the standard states ...
It will be apparent that double-insulated appliances built according to these principles must not be earthed.
One wonders how it will be apparent (and to whom) when the item is fitted with a bunch of connectors at the rear. Putting a string of words together is all well and good, but the above is almost impossible to maintain. There is an inevitable mix of earthed and unearthed A/V equipment in almost all households, and these will be interconnected. Quite clearly, this violates the basic principle of double insulation for A/V equipment, and I'd be interested to meet the lunatic who included this rule. I have never seen (or heard about) any 'new' sub-ruling within the standards that allows such interconnection, so can only conclude that it is technically illegal to connect my DVD player to the hi-fi.
Likewise, it is technically illegal to connect my TV set to an antenna (outdoor antennas must (by law) be earthed as they are a definite safety hazard otherwise). Pity the poor person who tries to maintain the rules (that no-one will tell him about) and not earth double insulated appliances. He's not allowed to connect the TV to an outside antenna, can't connect the CD or DVD player to an earthed amplifier, so is forced to watch and listen to ... what, exactly? I suppose that one can listen to the sound through the TV speakers, but they usually sound like a goat pooping on a tin roof.
While most of the rules make some kind of sense, there are countless pieces of A/V equipment that are double insulated (suitably marked, and no earth pin on the mains plug), and these will almost always be connected to another piece of equipment that is earthed (grounded).
This defeats the SELV and/or double insulation isolation requirements, and while it will not usually cause a problem there is definitely a risk involved - mainly to the sanity of the regulations. It is not at all uncommon that regulations are at odds with reality (just look at the RoHS directive), but the SELV requirements only consider new equipment, and do nothing to address the vast amount of older gear that is still in use. One risk that does exist is if the earthed amplifier (for example) has its earth connection removed - either due to a fault or deliberately. If this amplifier subsequently develops a fault that makes the chassis live, then all connected equipment becomes live too - including all the double insulated gear that is supposedly safe.
To make matters even worse, it seems that it is now alright to use a type Y2 capacitor between the internals and the non-earthed metal case of double insulated equipment. Why? Because they have to pass electromagnetic interference tests, and will fail if the case is floating. The metalwork of such equipment can give you a tingle because of the capacitor, but this is perfectly acceptable according to the regulations. This (IMHO) is unadulterated madness! No mains powered appliance should ever give you a tingle. An Australian electronics magazine recently published a project to prevent the tingles, however its use is probably (technically) illegal. How insane have things become when it is theoretically illegal to prevent your DVD player from giving you a tiny zap whenever you touch the metalwork?
It is also assumed that Y2 capacitors will never fail as a short, and that no-one will ever use a counterfeit Y2 capacitor (for example, no-one will ever fraudulently re-label X-class caps as Y2 to make a quick buck). Counterfeiters have never been known for their moral fibre, so at some stage there's every likelihood that fake Y2 caps will surface. The biggest problem is that no-one will even know about it until someone is killed or injured as a result, and that could be many years after manufacture. Not one regulatory body seems to have thought about this probability, and any attempts to convince government bodies are sure to fall on very deaf ears indeed. I have tried, and got exactly nowhere.
Note that I have already seen switchmode plug-pack / wall-wart / etc. supplies where the manufacturer (from guess where) decided that a Y2 cap was too much hassle, so used a very ordinary 1kV ceramic cap instead (these are far cheaper than Y2 caps as you would expect). These supplies have fraudulent CE markings, and would not pass electrical safety tests in any first-world country. They are readily available from any number of sellers operating on-line auction accounts.

Expect this trend to continue, and expect that unknowing (or unscrupulous) sellers will import equipment and sell it without regard to mandatory electrical safety or electromagnetic interference tests that may apply in your country. I've already caused one to be shut down because almost every product he sold required safety tests or other mandatory certification, none of which was done. This is rife on a well-known on-line auction site, where international sellers (and also some locals in many different countries) are selling goods that require approvals, but have none. It's common to see the CE logo on everything that comes from the East, but most will not have a single test report to prove that the goods actually meet regulations.
I could recount many other tales of sheer stupidity that have come about as a direct result of the application of ill conceived standards and the blind following of "the rules" by the electrical safety test laboratories and regulatory bodies. To do so will not achieve anything though, so I shall refrain.
Suffice to say that it is almost certainly safer overall if all hi-fi, TV and other home theatre equipment is earthed, so a fault in one cannot cause anything else to become live. This isn't going to happen though - more and more equipment will be double insulated (or at least claim to be) as time passes.
It is sometimes claimed that some people are 'immune' to electric shock. For electricity, in a word ... this is bullshit! There is no immunity and no single reason that one person survives where another dies. Someone who has had countless electric shocks over many years might not panic and may therefore be able to apply reason and disconnect before dying, but don't count on it. While panic certainly helps people die, it's the electric current that kills them, not the surprise! People who have worked with electrical systems all their lives are killed regularly.
It is generally considered that a current of around 50mA is deadly. This is true if that current passes through the chest cavity, and the likely outcome is a nasty condition known as ventricular fibrillation. If this occurs, the heart muscles are all working, but are out of synchronisation with each other - no blood is pumped and the host dies (about 4-6 minutes before irreversible brain damage to carbon based lifeforms such as humans). If your heart is in fibrillation, it is hard to stop - hospitals have machines called defibrillators that are designed to provide such a powerful shock that the heart stops. Once the heart is stopped, it may re-start by itself, or a smaller controlled shock may be needed. A stopped heart may be able to be re-started by external heart massage (CPR - cardiopulmonary resuscitation) and assisted breathing is almost always needed in such situations. If the victim's heart is fibrillating, you probably won't know this is happening, but there will be no detectable pulse. CPR should be started immediately. A comment on one website is worthy of repeating ...
Good CPR is better than bad CPR, but even bad CPR is infinitely better than no CPR at all.
It used to be considered essential to check for a pulse before administering chest compressions (the 'cardio' part of cardiopulmonary resuscitation). If the victim has a heartbeat, inappropriate application of chest compression may cause damage (and may even conceivably stop a weakened heart). This notwithstanding, the latest CPR recommendations suggest that there is less chance of damage than death through delayed action, as explained by a reader (who is a physician, advanced cardiac and advanced pediatric life support instructor, and practices emergency and family medicine) ...
The old recommendations (checking for a heartbeat) are superseded because of the difficulty faced by lay people trying to find a pulse. The delay can cause far more damage than 'inappropriate' chest compressions.
I strongly suggest that you do a course in CPR - you may need it one day yourself, and the more people who know how to perform CPR properly, the better. A useful fact sheet is available from The American Heart Association that explains the new techniques, and why the recommendations have been changed. From the AHA ...
2005: After delivering the first 2 rescue breaths, the lay rescuer should immediately begin cycles of 30 chest compressions and 2 rescue breaths. The lay rescuer should continue compressions and rescue breaths until an AED arrives, the victim begins to move, or professional responders take over.

Why: In 2000 the AHA stopped recommending that lay rescuers check for a pulse because data showed that lay rescuers could not do so reliably within 10 seconds. Lay rescuers were instructed to look for signs of circulation. There is no evidence that lay rescuers can accurately assess signs of circulation, however, and this step delays chest compressions. Lay rescuers should not check for signs of circulation and should not interrupt chest compressions to recheck for signs of circulation.
(Above section updated 11 Feb 2007 to include latest recommendations).
Should the current be less than 50mA (or be present for only a very short time), you will probably only enrich your vocabulary with a few suitable phrases and get on with what you were doing (after a short break to allow the adrenaline to dissipate - highly recommended!). A current above 50mA may stop your heart completely - while it may re-start by itself, I wouldn't count on it.
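The importance of current (rather than voltage alone) can be illustrated with simple Ohm's law arithmetic. The body resistance figures below are rough assumptions for illustration only - real values vary enormously with skin condition and contact area, and none of this should be treated as safety data:

```python
# Illustrative only: Ohm's law applied to electric shock. The body
# resistance figures (1k ohm for wet skin through to 100k ohm for dry
# skin) are assumptions for the sketch, not safety data.

def body_current_ma(volts, body_ohms):
    """Current through the body in milliamps, from Ohm's law."""
    return 1000.0 * volts / body_ohms

for r in (1_000, 10_000, 100_000):
    print(f"230V across {r} ohms -> {body_current_ma(230, r):.1f} mA")
```

At the assumed 'wet skin' figure, the current is several times the commonly quoted 50mA danger threshold; at the 'dry skin' figure it is a painful but survivable few milliamps. This is why wet conditions are so much more dangerous.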
An electric shock across one hand will not kill you. For example, if two fingers of one hand are in contact with the active and neutral conductors but there is no circuit to earth, it will hurt, it may burn you, but death is highly unlikely except by a secondary effect (falling, heart attack (cardiac arrest), etc.).
An electric shock that passes through your legs will throw you against the rear wall if your knees are bent! The muscles that straighten your legs are much stronger than those that bend them, so an electric current will cause your legs to straighten violently, possibly causing serious injury or even death. This is not uncommon, so always wear insulated shoes when working with electrical appliances, amplifiers, etc.
You can get a tingle from a telephone circuit, which is at -48V with respect to earth when the line is not in use. Ring voltage is about 90V RMS - that will give a good tingle, but I don't know of a case where it's killed anyone, because it's both current limited, and has 'cadence' - i.e. it is pulsed with a repeating sequence that varies from one country to the next. This is done deliberately. Not only is a continuous ring really annoying, but it is less safe than a pulsed waveform. The pulsed waveform gives you a chance to let go of a wire with ring current present, because it stops after a short time (typically a couple of seconds maximum) before starting again.
A conventional power amp usually has zero volts at the output when there is no signal. There is no AC or DC (maybe a few millivolts at most). In the case of even a very large PA rig with no signal, it is usually safe, but some Class-D (PWM) power amps will have a continuous DC voltage even with no signal. This could range from about 30V up to maybe 90V or so. The same signal exists on both leads, but if you contact either lead and earth you may get a tingle. Even at full power, it is unlikely that an amplifier will kill you if you get yourself across the speaker leads. This is not to say that it won't though, and safe working practices would suggest that you keep yourself away from such temptations.
Any power source capable of supplying more than 30mA has the potential (sorry) to kill you - regardless of voltage. While electrocution is highly unlikely from a 1.5V alkaline cell, 12V may be more than sufficient with the right combination of unfortunate circumstances. One of the nastiest electric shocks I have ever received was from a 12V car battery (and there have been a few in the 40-odd years that I've been messing with electrical stuff). I had a small cut on one hand, and a strand of wire stabbed me in the other. This provided a low resistance path because of direct contact with the bloodstream, and I have never forgotten the experience. Definitely not recommended. This was a freak accident, in that the circumstances needed are not common - it has never happened since, and that was back in the 1960s.
There are many tales (and many of them are at least based on fact) of people sustaining horrific burns as a result of tools dropped across telephone exchange bus bars. The phone system uses 48V DC, and the current available in a large exchange (central office) may be hundreds of amps. Quite literally, tools will simply vaporise if they contact the positive and negative bus bars. If you are nearby, you can be badly burnt from the arc and flying molten metal. Most modern exchanges do not use exposed bus bars, and the current requirements are generally lower now because of electronic switching, rather than the relatively power hungry electro-mechanical switching systems of old. Still, a bank of massive cells or batteries (almost always lead-acid) supplying 48V is a force to be reckoned with.
Of the world's mains supply voltages, that used in Australia, New Zealand, Europe, the UK and South Africa is sometimes claimed to be the worst. 220 - 240V (now 230V nominal) delivers a lethal shock quite readily, since the combination of voltage and typical skin resistance is just about ideal. 220V may seem marginally safer, but there's very little difference in reality. Even 120V is quite capable of causing a lethal shock, but it is generally considered to be reasonably 'safe' - especially when compared to 220-240V. There are still a lot of people killed by 120V supplies though, so complacency is not an option. Higher voltages (for example 415V is used for three phase systems in Australia) can deliver a mighty wallop (personal experience again!). I have heard it said that in some respects higher voltages may be 'safer' - the shock will either throw you across the room (and away from the source), stop your heart or both. A stopped heart is marginally better than fibrillation - at least there is a chance that it can be re-started relatively easily. Don't count on this though - it isn't much more than a passing thought on my part, and high voltages are most certainly not safe.
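As a rough illustration of why 230V mains is so dangerous, shock current is simply Ohm's law applied across body resistance. The resistance figures in the sketch below are my own ballpark assumptions (skin resistance varies enormously with moisture, contact area and current path), not measured data:

```python
def shock_current_ma(voltage_v, body_resistance_ohm):
    """Rough shock-current estimate from Ohm's law: I = V / R, in mA."""
    return voltage_v / body_resistance_ohm * 1000

# Assumed ~2k ohm hand-to-hand with damp skin (illustrative only):
print(shock_current_ma(230, 2000))  # 115.0 mA - well into the lethal range
print(shock_current_ma(120, 2000))  # 60.0 mA - lower, but still dangerous
```

With dry skin the resistance may be fifty times higher and the current correspondingly smaller, which is why the outcome of any given shock is so unpredictable.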
Many cases of electric shock are accompanied by burns. These can be very nasty, and if an appreciable arc is drawn, it has the same temperature as an electric arc welder. This can not only cause extreme burns, but can also cause eye damage because of the intense ultra-violet emitted by an electric arc. DC is generally worse, especially where massive current is available. A car battery can supply 300A quite easily, and this can create a very intense arc. Metal watch bands or rings can easily be melted into your flesh by the heat generated if they form part of a short circuit across a high current source such as a car battery. DC arcs much more readily than AC because there is no momentary break in the supply and no polarity reversal as with AC.
There are many tales of people surviving electric shocks that should have killed them several times over. One that I know of (for a fact, but it wasn't me this time) happened a long time ago, in an electrical sub-station adjoining a water pumping station. One of the 1,100V 3-phase pump motors caused the mains fuses to blow, so the maintenance chap went to the sub-station to replace them. Normally, only two fuses fail in a 3 phase system, and that's what he took with him. In this case, all three had failed, so the ladder (timber of course) was removed from the switchgear as required and he went to get another fuse. Unfortunately, someone else was working in the sub-station at the time, and moved the ladder! (You can guess what's coming.) As expected, our man forgot to verify that he had the right switching point, and replaced the ladder against a live switch. The arc almost destroyed his elbow, and burnt a hole about 8mm diameter through 6mm thick steel angle. He survived (and apparently got most of his arm movement back), but the hole remained as a warning to any who might start to think they were invincible.
In short, almost any electrical supply capable of supplying over 50mA can kill. Most don't - you feel a tingle or even perhaps a proper wallop that sends you reeling, but that's it. At low voltages, you probably won't feel anything at all. It is wise to remember that even 'safe' voltages can be dangerous - if you are up high, on a roof or perhaps a flown PA speaker system, even a mild shock may cause you to lose balance. Deceleration trauma from a reasonable height is definitely life threatening.
Many tales from survivors tell of how they grabbed a tool, wire, light fitting, etc., received a major electric shock, and couldn't let go. The reason for this is very simple - the muscles that close your hand and fingers are much stronger than those that open them. Muscles are triggered by minute electrical impulses in the body, and the external electrical current is many times greater than those we generate. The 'closing' muscles will almost certainly win. As a result, you will genuinely be stuck - unable to let go. Panic is the mind's instant reaction to something like that, but if you panic, you can't think. If you can't think you will probably die.
It's really easy to say "Don't Panic" (the technique seemed to work well enough in the Hitch-Hikers' Guide to the Galaxy). It's not quite so easy in real life, so I suggest that you follow every possible precaution to prevent the shock that causes the panic that causes the death. If you do find yourself in that situation, the best option is usually to try a different means of getting rid of the current source. I once saved myself by smashing an electric drill on the ground. It was desperately trying to kill me, but when I smashed it that caused a major short circuit internally that blew the main fuse. It also caused some embarrassment for the workshop manager, because the mains socket I used wasn't earthed! - I didn't know this at the time. I was lucky, and have had to use similar techniques on several occasions - yes, I've been zapped many times from all manner of things. To some extent, it comes with the territory - if you are building and fixing mains powered things for long enough, an electric shock is almost inevitable. We all get a bit complacent when it's something we do every day.
With anything that you suspect, never touch it with your finger tips - if it's live, you may grab it and be unable to let go. If no test equipment is available, use the back of your hand. Because the skin is softer, you can feel quite low voltages this way, but if it's potentially lethal, your hand will pull away from the faulty appliance. You may find that you can detect as little as 1mA quite reliably by using the back of your hand - this is considered to be about the minimum we can feel, although some people will be more or less sensitive.
If someone else is stuck, never simply rush to their aid. If their body is live, you may find yourself stuck too - it may bring to mind comical scenes of hundreds of helpers all jiggling about wildly, but it's not funny. The first thing you must attempt is to remove the source of power. If this cannot be done for any reason, you may try to pull the person clear by holding onto dry clothing or by using a dry stick or similar (nothing metal or wet for obvious reasons) to pry the person from danger, or to pry the danger from the person. Don't be too concerned about using force and causing minor injury - no-one ever died of a broken finger (probably not strictly true, but you know what I mean).
There are innumerable possibilities, and it is obviously impossible to try to explain a method for each case. If you are the rescue party, then your first responsibility is to yourself - you can't rescue anyone when you are dead. Attempting a foolhardy rescue may mean that not only does your rescue mission fail and the person dies, but you die too. This is not a good outcome, and is best avoided.
There are quite a few websites that discuss electrocution, what to do and what not to do. The vast majority will give good advice, and although it might be a bit over-cautious in some cases, it is better to be safe (and alive) than sorry (and dead).
If there is any doubt about the victim's condition whatsoever, call for an ambulance. Elderly people in particular may suffer cardiac arrest (heart attack) or even a stroke as the result of the often violent electric shock. In many cases, the electric shock itself may not kill the victim, but can easily be a trigger for some other life-threatening condition.
Of all the possible sources of electric current, most do have the capacity to kill you, either directly or by causing a fall, heart attack, etc. There are many that are considered safe, but that doesn't mean that you should be complacent. Electrocution from low voltage sources (< 32V AC or about 48V DC) is extremely uncommon - I couldn't find any references to a death from such sources in my searches. This doesn't mean they can't kill you, and sensible precautions are still needed.
A list of do and don't items is always difficult, because one must generalise. However the following may be helpful ...
You should see the power supply or case fan spin for a few seconds and then stop. The power switch LED may also light for a few seconds as well. This will discharge the capacitors in the power supply and make the PC safe to work on.
This is not a complete list, nor is it intended to be the last word on electrocution from any source. The purpose of this article is to give the reader a few basics, and to encourage further study on the topic. There are over 3 million sites on the Net that discuss electrocution alone. You can also search on many other specific areas within the topic - this I leave up to you.
Even with relatively mild shocks, anyone with a heart pacemaker or a chronic heart condition is at risk of suffering cardiac arrest as an indirect result of an electric shock. I was unable to find any statistics on this, but I'm sure they are out there somewhere.
If you are working with mains powered items (such as audio equipment), use a safety switch. These are variously known as RCDs (residual current detectors/devices), ELCBs (earth leakage circuit breakers), core balance relays, or just safety switches. In the US, you may see them referred to as Ground Fault Circuit Interrupter (GFCI) or an Appliance Leakage Current Interrupter (ALCI). Regardless of what it's called, test it regularly (they have a self-test button), and make sure that you use it. Always. No excuses.
Your entire workbench should be protected, but be aware that a safety switch will not work if you get yourself across the active and neutral wires but have no path to earth (ground). For this reason, never disconnect the safety earth pin or wire on any piece of equipment - especially while you are working on it.
Safety switches operate by comparing the current in each mains conductor - active (live, hot, line, etc.) and neutral. Provided the two currents are exactly equal, the safety switch will not operate. When a person contacts the live conductor, some current passes through the person's body. This unbalances the current (because that current is not returned via the neutral conductor) and the power is interrupted. RCDs do not protect against overloads, short circuits between active and neutral or any fault condition other than a current imbalance between active and neutral. Normal trigger conditions may be as little as 5mA, and the RCD should disconnect the power within as little as 25ms. Actual specifications vary, but are usually regulated by the electrical authorities for each country. Most RCDs are less sensitive than indicated above, because such a high sensitivity and fast switching will cause nuisance tripping - a small amount of leakage (or even capacitance) can cause the RCD to interrupt the supply.
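The comparison described above can be sketched in a few lines of code. This is a hedged illustration only - the threshold and trip-time figures are the typical values quoted in this article, not the specification of any real device:

```python
def rcd_should_trip(i_active_ma, i_neutral_ma,
                    threshold_ma=30.0, imbalance_duration_ms=50, max_trip_ms=40):
    """Sketch of residual-current detection: trip when the imbalance between
    active and neutral currents exceeds the threshold for longer than the
    permitted trip time. All values are illustrative, not a device spec."""
    residual_ma = abs(i_active_ma - i_neutral_ma)
    return residual_ma > threshold_ma and imbalance_duration_ms > max_trip_ms

# A 1A load with 40mA leaking to earth through a person trips a 30mA RCD;
# a 10mA imbalance (e.g. normal leakage) does not.
print(rcd_should_trip(1000.0, 960.0))  # True
print(rcd_should_trip(1000.0, 990.0))  # False
```

Note that a dead short from active to neutral produces no imbalance at all, which is exactly why an RCD offers no protection in that case - only the fuse or overcurrent breaker does.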
Typical portable RCDs will have a sensitivity of between 15-30mA, and will switch off if this condition is maintained for more than around 40ms. While the current may seem a little high, it's a reasonable balance between safety and lack of nuisance tripping. If the unit switches off the power for no apparent reason, people are less likely to use it, and then have no protection at all.
Just because you have an RCD installed, this does not mean that normal safety precautions can be neglected. Remember that anything beyond a transformer is not protected - regardless of voltage, so power supplies still present a risk. Small though it may be, the risk is still there, and ignoring it is not recommended.
This applies especially to the use of isolation transformers (sometimes erroneously called 'safety' transformers). Use an isolation transformer only when absolutely necessary, such as working on a switchmode power supply or hot chassis equipment. These are potential killers even with an isolation transformer, so don't think for an instant that you are 'safe' - you most certainly are not. Remember that your safety switch will not operate if there is a transformer in the circuit and you contact the transformer secondary!
Remember - Even with a safety switch, there is still a risk of electrocution as noted above. There is no technology that will keep you completely safe while working on mains powered equipment. Your survival depends on you ... employ safe working practices, and never assume anything!
In many workplaces, it would seem that electrical safety is compromised by the use of conductive wristbands (or sometimes ankle bands or similar). These are used to prevent ESD (electrostatic discharge) damage to sensitive components. If the ESD protection equipment is either not made to a high standard or not tested regularly, there is indeed a risk.
Most ESD protection systems use a resistor to limit the current. 1MΩ is the most common, and this limits the current to 250µA at the maximum recommended working voltage of 250V. Where voltages above 250V are encountered, wristbands or other methods of connecting the technician to ground should not be used! Alternative methods of static reduction must be employed to minimise the risk to those working on the equipment.
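The 250µA figure follows directly from Ohm's law, and is easy to sanity-check (values from the paragraph above; the function name is my own):

```python
def esd_strap_current_ua(voltage_v, series_resistance_ohm=1_000_000):
    """Worst-case current through an ESD wrist strap's series resistor,
    in microamps: I = V / R, converted from amps to microamps."""
    return voltage_v / series_resistance_ohm * 1e6

print(esd_strap_current_ua(250))  # 250.0 µA at the 250V recommended limit
```

At 250µA the current is well below the perception threshold for most people, which is the whole point of the series resistor: the strap drains static charge but cannot pass a dangerous current if the wearer contacts a live conductor at or below 250V.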
The table below provides the US DOD (Department of Defence) guidelines, based on MIL-STD-454. Bear in mind that these are guidelines, and different people may react differently. These figures may be somewhat higher than those accepted by many other authorities - perhaps defence personnel are tougher than the rest of us.
Current in mA

AC (60Hz)  | DC        | Effect
0 - 1      | 0 - 4     | Perception
1 - 4      | 4 - 15    | Surprise
4 - 21     | 15 - 80   | Reflex Action
21 - 40    | 80 - 160  | Muscular Inhibition
40 - 100   | 160 - 300 | Respiratory Block
Over 100   | Over 300  | Usually Fatal
60Hz is referenced because that is the frequency in the US (where the data originated). Expect the results for 50Hz to be fairly similar though. As always, different countries will have differing regulations and requirements, although the anti-static wristbands (and other anti-static equipment) are fairly standard these days. If you do need to use any form of anti-static device, ensure that it is tested regularly and maintained in good condition. Damaged insulation or a shorted safety resistor (for example) could place you at serious risk.
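For illustration, the AC column of the table can be expressed as a simple lookup. The boundary handling (upper bound inclusive) is my own choice - the standard only gives ranges:

```python
# Upper bound of each AC (60Hz) current band in mA, per the table above.
AC_BANDS = [
    (1, "Perception"),
    (4, "Surprise"),
    (21, "Reflex Action"),
    (40, "Muscular Inhibition"),
    (100, "Respiratory Block"),
]

def ac_shock_effect(current_ma):
    """Classify an AC (60Hz) shock current using the MIL-STD-454 guideline bands."""
    for upper_ma, effect in AC_BANDS:
        if current_ma <= upper_ma:
            return effect
    return "Usually Fatal"

print(ac_shock_effect(0.5))  # Perception
print(ac_shock_effect(30))   # Muscular Inhibition
print(ac_shock_effect(150))  # Usually Fatal
```

As the surrounding text stresses, these are guidelines only - individual reactions vary, and the DC bands are roughly four times higher than the AC ones.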
Many people wonder why the neutral conductor (aka grounded conductor in the US) is earthed. Surely it would be safer if both AC lines were floating, so that contact with either one (but not both at once) would not give anyone an electric shock. This is the same reasoning behind using an isolation transformer when working on equipment with a live chassis - it becomes 'safe'. This 'safety' should extend no further than one individual piece of equipment, and only while it is being repaired. It should always be tested by connecting it to the mains as normal, because some faults may not be apparent while the isolation transformer is being used.
There is a major problem with a floating mains supply, and it turns out that leaving it floating is actually incredibly dangerous. It must be remembered that a great many houses and business premises will be connected to the same circuit. Should a fault develop in the wiring or in any piece of equipment in any of the connected premises such that one conductor contacted earth, then the situation is as it is now.
The problem is that no-one knows about the fault, because nothing happens. Fuses/ circuit breakers don't blow, residual current devices can't be used effectively with a floating supply, and now one conductor is 'hot' and the other is 'cold'. BUT WHICH ONE?
Unlike the situation now where everyone knows which mains lead is active ('hot', which can kill you) and which is neutral ('cold', which is generally 'benign') because they are colour-coded, we have a condition where no-one knows about the fault, no-one knows that one lead is now 'hot', nor do they know which one! Because circuit breakers/ fuses remain operational, there is nothing to warn anyone or disconnect the fault. This situation could remain for quite some time, without anyone being aware there was a problem.
Some time later, a similar fault may develop in another piece of equipment in another house, but with the other lead now connected to earth. This is a short circuit, with both 'floating' mains leads connected to earth, but in different premises. Circuit breakers may or may not operate, depending on the resistance of the earth connection, and meanwhile we may have a voltage gradient across the ground itself, between the two faults.
Likewise, the exact same AC line may be connected to earth in several places due to faults. It would be incredibly difficult for any electrician to try to isolate the faults, because they could be in any house(s) on that section of the distribution grid. It should be clear that this is a nightmare scenario - in every significant respect. Even when everything is ok (no faults) the capacitance to earth from the distribution transformer and all distribution and household wiring will be significant, and may be enough to create an artificial 'centre-tap', so both mains leads are 'hot' with respect to earth, albeit at half the normal voltage and fairly high impedance. This doesn't sound very safe to me!
The current system is used in one form or another everywhere. I don't know of any country that allows the use of floating mains supplies, because everyone in the industry knows that the risks are unacceptably high if one mains conductor is not designated as a neutral, and securely connected to earth/ ground via water pipes, dedicated grounding stakes, or other means that ensure that the neutral conductor remains at (or near) 0V AC with respect to earth.
Australia and New Zealand use what's known as the 'MEN' system (multiple earth(ed) neutral, defined in AUS/NZ 3000:2007 Clause 1.4.66). At one point in each installation (at the main switchboard), the neutral and protective earth conductors are joined, so the neutral connection will be earthed at multiple individual premises. I have seen claims that this is somehow 'dangerous', but that would only apply if it weren't a specific requirement under what are commonly known as the 'wiring rules'. Because of the multiple connections, the neutral conductor remains earthed/ grounded even if one or two installations on a distribution feed are faulty. It's possible to find fault (at least in theory) with any wiring scheme, but ours has been in use for a great many years, and no specific hazards (compared to other systems) have been identified that are related to the MEN system (although a broken neutral from the 'grid' may cause problems, but this is rare).
A large number of sites were scanned for information, and it is not possible to list them all. Some have minimal basic information, others go into great detail about specific incidents. Those sites that had material that was used in this article are listed within the text. As noted in the warning panel in the introduction, the material presented in this article is largely common sense, with much based on personal experience. It is definitely a topic worthy of your own further research.
If anyone has information they would like included, please let me know. While I have taken every care to ensure that the material is correct, there will be errors and omissions. I welcome further input, but no anecdotal 'evidence' please.
Several suggestions have been included already, either adding to existing information or providing new details. This is a serious topic, and it is my intention to add updates or additional warnings when they are received. Any material submitted should have references if possible, although this isn't always necessary.
Information on electrostatic discharge protection was obtained from 'ESD Around High Voltage' (August 1996) by Ryne C. Allen, NARTE certified ESD Control Engineer, Desco Industries Inc. This document was supplied to me by a reader.
Online references ...
1. Multiple Earth Neutral Wiring System
2. Earthing Systems - Wikipedia
Elliott Sound Products - Loudspeaker Cabinet Design
One of the most popular pastimes in the DIY audio world is building loudspeaker systems. A web search will reveal literally thousands of different designs, a great many of which are (at least superficially) quite similar. It's highly likely that most of these will sound different from the others, although it's almost guaranteed that the designer will claim that his/ hers is 'better' in some way. We can be fairly certain that some of the published designs will sound very good, and others awful.
This is reflected in commercial offerings as well. There aren't many 'real' audio brands that will be truly awful, but there will be differences. This is often despite the fact that many will show frequency response (both on and off axis) to be very similar, with many sharing one or more of the same speaker drivers as used by other manufacturers. It can be very difficult to work out just why (and how) two apparently near identical designs can sound different.
Loudspeakers are the most subjective component in any audio system. Amplifiers and preamps are routinely so close to being a 'straight wire with gain' that measurements can be difficult. While CD and SACD players (as well as DVD players) definitely sound different from vinyl, blind testing the preamp-amp combinations will most commonly result in a 'null' outcome - it's usually not possible to correctly identify amp 'A' from amp 'B' with a statistically significant result. This is often not the case with loudspeakers.
There are several things that can change the sound of a loudspeaker system, even when using driver components that are identical to another system. Some of these differences will be due to the way a system has been 'voiced' - a term that means adjusting the response so the system sounds balanced and 'right' from the designer's perspective. Very few loudspeakers have a truly flat frequency response, and the way the system interacts with the listening room also changes the sound.
Few hobbyists today would dispute that enclosure panels need to be rigid and acoustically 'dead'. How you get there depends on the philosophy of the designer. At one stage, hollow, sand filled panels were popular. These are certainly likely to be acoustically dead, but are difficult to make. It may or may not be possible to refill the panels after the sand has settled or the panels have expanded as the sand compacts and tries to force the panels apart.
Concrete has been used, sometimes with tiny pellets of expanded polystyrene foam to reduce the mass (so the box can actually be moved), sometimes only the baffle may be concrete. Different types of plywood are used (and no, birch ply (for example) should not sound different from some other tree species). If one box sounds different from another (identical other than material), then the material is not damped properly. Once something is acoustically dead, it doesn't matter what it's made from - dead is dead. Differences are due to resonance(s), meaning that one or more panels are not dead at all.
The cabinet shape can make a difference, even if the enclosed volume is exactly the same. While the panels may be acoustically dead, the air space within is not. The enclosed volume should never have two internal dimensions the same (such as top to bottom and front to back) as that will usually reinforce standing waves at certain frequencies. The air within the box can be made somewhat acoustically dead by adding damping material - fibreglass 'wool', or any number of proprietary filling materials that are designed to absorb the sound inside the enclosure. You'll find claims (well, perhaps not quite) that only virgin yak's wool should be used, because man made fibres 'sound bad'.
When these materials are added loosely, the effect is to make the enclosure acoustically larger. If packed in tightly, the enclosed volume is smaller. Both of these change the way the loudspeaker driver reacts with the enclosed volume, mainly at or near the speaker's resonant frequency. In many (but I suspect by no means all) commercial designs, it's expected that the driver interaction with the filled volume will be modelled and measured, and the filling adjusted to get the right amount of absorption, while minimising internal reflections to the point where they can 'do no harm'. Even this term will be variable - some will claim that -40dB is ok, others may insist on at least -60dB, while others might be content with -20dB.
Some insist that any enclosure is bad, and the speaker drivers should be free, allowed to show their naughty bits to the world should anyone peek around the back. Open baffles create a dipole effect - the sound will be (generally) equally loud directly in front or behind the speakers, with (theoretically) zero output from the sides and top. This won't be the case, but again, should the side response be -20dB? -40dB? More? Less? This is almost impossible to answer.
Such systems interact with the walls, floor and ceiling of the listening space very differently from a 'conventional' enclosure. Positioning will usually be fairly critical, but there are many who are firmly convinced that this is a better way to build a speaker. There are (of course) others who claim exactly the opposite, that enclosures are essential and that the open baffle idea is flawed.
Many people design cabinets with the deliberate aim of avoiding all parallel surfaces. This prevents (or helps to prevent) standing waves from developing within the enclosure, and is generally a good idea. However, it's not easy to do without dedicated machinery that can cut precise odd angles so that it all fits together. In some cases, you may find that adding an internal baffle at an angle within the enclosed space will work, and if it's well perforated (to ensure that the total internal volume is available to the rear of the speaker cone) it may be enough to prevent major standing waves. Acoustic damping material is still needed, no matter how irregular the interior volume. The idea in most cases is to absorb the rear radiation from the speaker completely, because any sound that re-emerges through the cone will not be in phase (or in time) with the original.
The 'acoustic labyrinth' type of speaker is a (fairly serious) extension of this principle, with the length of the 'tunnel' often used to create a transmission line to reinforce bass frequencies. These cabinets used to be very popular amongst DIY constructors, but seem to have fallen from favour over the last decade or so. Part of the reason is that they are difficult and expensive to build, and the results may be rather disappointing after you've gone to all that trouble.
This article is not about specific designs. You won't find any cabinets, dimensions, crossover circuits or anything else that is available in countless books, magazine articles or websites here. What you will find is general guidelines, many based on 'ancient' knowledge, and others that are more-or-less common sense. The idea is to provide some basic information that can be used in the design of any cabinet, regardless of the drivers used.
In general, the guidelines are intended for domestic hi-fi applications, not commercial, public address or sound reinforcement systems. These have many other constraints, in particular weight and cost. When building your own systems, these are generally secondary, and the extra cost of adding an extra brace or more damping material is small compared to the overall cost of the project.
There is only a little about specific materials that could/ should be used for cabinet construction. See Section 10 for a bit more on this topic. Some people loathe MDF but love plywood (whether exotic or otherwise), and others are exactly opposite. Some materials may be difficult to get (or very expensive) in many places. One recommendation I will make is to avoid 'chipboard'. While it is still a popular material for some applications, it's generally not robust enough for a speaker cabinet. Veneered chipboard is somewhat better, but the material's structure is such that it's not easy to make a rigid box, and radiused edges expose the coarse grain structure which is time-consuming to fill to get a good surface finish.
Some cabinet shapes can be fabricated using fibreglass, but that requires a mould that is used to form the cabinet shapes. Unless you are experienced in the use of fibreglass (or carbon fibre), it's hard to recommend for hobbyist enclosures. The glass fibres and resins used are potentially dangerous without a proper face mask to prevent inhaling the fumes and/ or glass fibres. Fibreglass panels can also be quite flexible, which allows the panels to radiate sound as they flex, and bracing can be difficult to change if it's moulded into the structure. Attaching anything inside from the outside surface is generally impossible because the outer surface is usually the final finish, and external fastenings can't be concealed.
I suggest that prospective builders look at Project 181, an easy to build accelerometer intended for measuring the movement of speaker enclosure panels. It's highly recommended, because without an accelerometer you have no idea how much the panels are flexing or their resonant frequencies. Lacking this, you may end up with an enclosure that just doesn't sound 'right', even when you think you've done everything correctly. Panel resonance is always difficult to assess unless you have a way to measure it, and then take new measurements to see if the issue is fixed or not.
I do not suggest or recommend commercial software used to design speaker enclosures, with the one exception of the free program WinISD (you can find it on the Net). There are countless programs that either do (or purport to do) complete designs, based on the drivers you are using. These omissions are not because the software doesn't work, but simply because I operate as an independent individual, and I do not make specific recommendations for anything, other than components used in project articles.
+ +Other than a few general hints here and there, I also won't be discussing general woodworking methods, choice of adhesives or finishes, the use of power tools or anything else that's well catered for all over the Net. It goes without saying that you need an area where you can generate copious amounts of sawdust, and another area (completely free of sawdust) if you plan on any high quality paint finishes. For example, classic 'piano black' tends to look a bit tatty if it has dust particles embedded all over it. Various power tools are essential, although simple enclosures can be made using only hand tools for the truly masochistic constructor .
In all cases, there will be loudspeaker driver parameters that differ from those claimed by the manufacturer, and in some cases the necessary data (in particular the Thiele-Small parameters) are either quite wrong or missing altogether. If this is the case, I suggest that you read the article Measuring Loudspeaker Parameters, as this requires no specialised equipment and gives good results. Fairly obviously, this also extends to recommendations for particular vents or passive radiators.
This article also (deliberately) avoids making any recommendations for drivers. There are so many, and they often have a very short manufacturing period. The driver that one person loves may well be hated by others (often for obscure and illogical reasons for both 'love' and 'hate'). There's also the issue of availability - there's no point recommending a particular driver that's only available in one country, because no-one else will be able to get it easily (if at all). I will suggest that you avoid drivers that show sharp discontinuities on the impedance curve. I've run tests on a few such drivers, and the impedance discontinuity usually corresponds to a response anomaly which can be such that it simply cannot be ignored (nor equalised!).
Given two drivers that are otherwise identical (or sufficiently close over the required frequency range), the one with higher efficiency is usually the better choice. However, this is not an absolute position, as there can be other things that influence your final decision. This may simply come down to appearance - a driver that sounds great but looks ugly usually doesn't rank highly, unless it's hidden beneath grille cloth. Not everyone likes grilles, and for many people appearance is a major factor - disguising an 'unappealing' driver's basket-front can be an expensive and difficult undertaking.
There are many different types of enclosures, and it's not possible to cover them all in any detail. Of those listed, they are shown (more or less) in order of complexity, from the simplest to the most challenging to build. Some are very common (simple sealed and vented enclosures for example), with others used primarily by hobbyists and a few 'boutique' manufacturers. While most of the drawings are shown with a single driver, in the majority of cases there will be at least one other (a tweeter), and in some cases there will be a secondary enclosure containing a midrange driver.
No 'esoteric' enclosures are covered here. It's assumed that anyone who wishes to undertake something that's quite out of the ordinary will have the necessary skills to ensure that everything is done correctly. This isn't always the case of course, so anyone who does want to make cylindrical or spherical enclosures (or anything else with a 'weird' shape) should still find many of the suggestions helpful.
Remember that the volume occupied by the speaker driver(s) needs to be added to the total volume calculated, and if a port is used, the volume of that must be included as well. The same applies to bracing materials - they all occupy space in the enclosure and need to be accounted for. You may find that you need to add extra bracing once the enclosure is (almost) finished, so a bit of extra volume can be added just in case. You can usually change the internal volume by a small amount without it having a serious impact on performance, and remember that the listening room will have much greater effects on overall sound quality than any small miscalculation of internal volume.
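The bookkeeping is simple addition, as the sketch below shows. The displacement figures are invented round numbers for illustration, not taken from any real driver or port - you'd substitute measured or datasheet values.

```python
def gross_internal_volume(net_litres, *displacements_litres):
    """Gross internal volume needed so the net (air) volume still matches
    the design target after everything inside the box is accounted for."""
    return net_litres + sum(displacements_litres)

net_target = 40.0    # litres of air the alignment calls for
driver = 1.5         # basket/ magnet displacement (assumed figure)
port = 0.8           # vent tube displacement (assumed figure)
bracing = 1.2        # bracing timber displacement (assumed figure)

gross = gross_internal_volume(net_target, driver, port, bracing)
# gross comes to 43.5 litres - the box is built a little 'oversize'
```

The same function accepts any number of displacement terms, so extra bracing added later is just another entry.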
Speaker parameters are not absolute numbers, and in some cases they can be way off. It's always wise to measure the Thiele/ Small parameters yourself. There's an ESP article on this topic - see Measuring Loudspeaker Parameters for all the details. There are many other articles on the Net that describe speaker parameter measurements, so use the one with which you are most comfortable.
The open baffle or dipole speaker is favoured by some, most notably the late Siegfried Linkwitz. An open baffle (or open-backed box) was used from the earliest days of amplified sound, and is by far the easiest to build. Ideally, the baffle should be large compared to wavelength (the 'infinite' baffle), but this is very difficult to achieve at low frequencies. So, while they are easy to build, they are not so easy to design (or even produce) in sizes that suit low frequencies. One wavelength at 100Hz is already 3.43 metres, so the size rapidly gets out of hand.
Figure 2.1 - Dipole 'Enclosure' ('Infinite Baffle'/ Open Backed)
For higher frequencies, it can be argued that dispensing with the box prevents internal reflections. This is quite true, but of course the rear radiation is introduced into the room, which has its own reflections, most of which are completely unpredictable and can be a lot harder to deal with than an enclosure's internal reflections. Open backed speakers are very common for guitar amplifiers, where the open back provides a stage sound that most guitarists prefer. An open backed box can be likened to a flat baffle that's been 'folded' to reduce its size. Of course, this also protects the rear of the speaker from damage in transit - especially important for guitar systems.
The sealed enclosure is very common, and can work very well if the internal volume is calculated to match the speaker's characteristics. The Thiele-Small parameters of the driver will show that optimum performance requires an enclosure of just the right size. If it's too small there will be a pronounced bass peak, followed by a sharp rolloff at 12dB/ octave. Of anything that would qualify as an 'enclosure', this is the simplest.
Figure 2.2 - Sealed Enclosure
Rather than being radiated into the room, the sound from the rear of the speaker cone is absorbed, using proprietary fibre mats, felt, carpet, fibreglass, or a combination of these materials. Ideally, no rear radiation will be reflected back through the cone, something that becomes critical at midrange and higher frequencies. Bass can be very good (often with equalisation), but this requires drivers with a larger than normal maximum excursion (Xmax). Sealed cabinets are common for instrument amplifiers (guitar, bass, keyboards).
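The 'just the right size' calculation follows directly from the Thiele-Small parameters, using the standard small-signal closed-box relations. A minimal sketch - the woofer figures below are hypothetical, chosen only to show the shape of the calculation:

```python
import math

# Standard small-signal closed-box relations:
#   fc  = fs  * sqrt(1 + Vas/Vb)   (system resonance in the box)
#   Qtc = Qts * sqrt(1 + Vas/Vb)   (total system Q)
# The driver figures used below are invented for illustration only.

def sealed_box(fs_hz, qts, vas_litres, vb_litres):
    ratio = math.sqrt(1.0 + vas_litres / vb_litres)
    return fs_hz * ratio, qts * ratio

# Hypothetical woofer: fs = 30Hz, Qts = 0.4, Vas = 60L, in a 30L box
fc, qtc = sealed_box(30.0, 0.4, 60.0, 30.0)
# fc works out close to 52Hz, Qtc close to 0.69
# (near the 'maximally flat' Qtc of 0.707)
```

Shrinking Vb raises both fc and Qtc, which is exactly where the pronounced bass peak and early rolloff of a too-small box come from.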
This is probably the most common enclosure in use today. It was used in very early speaker systems, but it was basically a 'trial-and-error' design until the loudspeaker parameters were properly quantified by Neville Thiele and Richard Small. This allowed mathematical calculation of the enclosure and port sizes, and it was then possible to design a system, build it, and have it perform as expected. Many of the early 'tuned' boxes were what's now commonly referred to as 'boom boxes', because they had excessive and often 'one note' bass. Countless programs have been written to allow users to design an enclosure, based on the Thiele-Small parameters. This has removed much of the guesswork, but by themselves, the programs are (mostly) unable to provide a complete design. Most provide the necessary internal volume and port (vent) diameter and length, but further 'tweaking' is nearly always needed.
Figure 2.3 - Bass Reflex Enclosure
In these enclosures, the rear radiation is utilised to boost the bass response below the loudspeaker driver's resonant frequency. The combination of the enclosure volume and the vent length and diameter form a Helmholtz resonator, which (when done properly) reinforces the low frequency response without creating excessive bass and/or poor transient response. It's important to understand that the Thiele-Small parameters are 'small signal', meaning that the performance is not necessarily the same at high power levels. Only the bass region is affected by a bass reflex enclosure, and mid to high frequencies still need to be absorbed within the enclosure.
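The box tuning can be estimated with the standard Helmholtz resonator formula. This is only a sketch - the end correction used here (0.73 × port diameter) is an approximation that depends on how the port terminates, and the box and port dimensions are invented for illustration:

```python
import math

# Helmholtz resonance of a box/ vent combination:
#   fb = c / (2*pi) * sqrt(A / (V * Leff))
# where A is port area, V is box volume and Leff is the port length plus
# an end correction. The 0.73 * diameter correction and all dimensions
# below are assumptions for illustration, not design values.

def vent_tuning_hz(box_litres, port_dia_mm, port_len_mm, c=343.0):
    v = box_litres / 1000.0               # box volume in m^3
    d = port_dia_mm / 1000.0              # port diameter in m
    length = port_len_mm / 1000.0         # physical port length in m
    area = math.pi * (d / 2.0) ** 2       # port cross-section in m^2
    l_eff = length + 0.73 * d             # add approximate end correction
    return (c / (2.0 * math.pi)) * math.sqrt(area / (v * l_eff))

fb = vent_tuning_hz(50.0, 70.0, 200.0)    # roughly 30Hz for this combination
```

Note how a longer port or a bigger box lowers fb, while a larger port area raises it - which is why design programs trade port diameter against length (and why air velocity in a too-narrow port becomes a problem).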
A variation on the 'traditional' bass reflex enclosure uses a passive radiator. This is pretty much a loudspeaker with no magnet or voicecoil, and it's generally tuned for a resonant frequency somewhat below that of the woofer. Some have weights that can be added or removed to tune the resonant frequency of the radiator. These have some advantages over a port, in that there is no possibility of 'chuffing' or other noises that a ported enclosure can create if the air velocity is too high.
Figure 2.4 - Passive Radiator Enclosure
Fairly obviously, a passive radiator takes up more space on the baffle than a port, but some people prefer them for a variety of reasons. This is a configuration that seems to be somewhat 'seasonal', gaining or losing favour for no apparent reason. There used to be many passive radiators on the market, but they appear to be less common than they once were.
An aperiodic enclosure is (kind of) halfway between a sealed and vented box. The vent is deliberately restricted, so it's either a leaky sealed box, or a 'constricted' bass reflex. There's quite a bit of information on the Net, but not all of it is useful, and design equations are hard to come by.
Figure 2.5 - Aperiodic Enclosure
The above is one of many different ways that an aperiodic enclosure can be configured. This isn't a technique that's widely known, and it's also not one I've experimented with. Many claims are made, and there are many variations - in some cases, just a small hole or a series of narrow slots is used, with appropriate damping material covering the openings. There appears to be little consensus from designers, so the technique is somewhat experimental. It's claimed that with an appropriate aperiodic 'vent', the enclosure is made to seem much larger than it really is, and it's not uncommon to see aperiodic enclosures that appear much too small for the driver used. As I said, I've not tried this approach, but may do so when time (and motivation) permit.
Isobaric speakers are not particularly common, and are only ever used for the bass region. The benefit is that the required cabinet size is halved compared to a single driver, allowing a more compact system. The disadvantage is that the efficiency is also halved, because the same power is fed to the two drivers, but output level is not increased. Although the drivers are shown 'nested', with the front driver partially inside the rear driver, they can also be mounted face-to-face. The enclosed volume between the drivers must be small to ensure optimum coupling.
Figure 2.6 - Isobaric Enclosure
Isobaric enclosures can be used with or without a vent, depending on the desired outcome. Most speaker design software can accommodate isobaric configurations, but the mechanical details can be awkward to produce. There are some commercial isobaric enclosures, but they aren't especially common in the market. This is a good design to use if the driver you wish to use requires a box that's larger than you can accept, but no isobaric enclosure should normally be operated above around 300Hz or so. The cost, weight and relative inefficiency of isobaric enclosures limit their usefulness for commercial systems.
These are probably one of the most challenging to build, but can produce a great deal of bass over a relatively narrow bandwidth. They are used only for bass, as the dimensions are not suitable for higher frequencies. As the name suggests, these enclosures are an acoustic analogue of an electrical bandpass filter. They can have very high efficiency, but the enclosure is sensitive to variations of driver parameters. If a driver fails, it must be replaced by one with near identical parameters, or the response will not be as expected.
Figure 2.7 - Fourth Order Bandpass Enclosure
Although a fourth order box is shown, the sixth order enclosure is also used. These have an additional vent between the speaker's rear enclosure and the front resonant chamber. Bandwidth is usually fairly narrow, so they cannot reproduce a wide range of frequencies. Fourth order systems are fairly common for large sound reinforcement applications, where it's (apparently) more important to create a vast amount of noise than to consider fidelity. This isn't always true of course, but it does seem to be the case in many of the systems used for very large audiences. Some care is necessary to ensure that the effect isn't 'one note bass', where the bandwidth is so narrow that they sound as if only one note is audible (many automotive installations suffer the same problem).
These are without doubt the hardest to design, and even small variations from the 'ideal' can cause serious response anomalies. Because of the acoustic filter, some people will say that this enclosure type is responsible for 'day late' bass - there is often a significant delay from the application of a signal before the resonance is stimulated sufficiently to produce output. The delay is usually somewhat less than a full day, but you get the idea. This configuration can be extended to eighth order, but this is less common (and has a very narrow bandwidth).
Finally, there's the transmission line. In theory, the idea is that the line is infinitely long, but this is a little impractical for most listening spaces. Mostly, the line is designed for ¼ wavelength at the speaker's resonant frequency, and there will be some reinforcement from the open end of the line. These are notoriously difficult to get 'just right', and the process usually involves experimenting with stuffing within the transmission line until the desired outcome is achieved. An optimally set up transmission line should reduce the resonant frequency of the driver, something that no other enclosure type can achieve.
Figure 2.8 - Transmission Line Enclosure (Shorter Than Normal)
The line shown above is much shorter than normal, only because I didn't want a huge image to show one in full. The general principles are unchanged, and it's usual practice to taper the line so it gets narrower along its length. Some constructors will insist that sheep's wool is the only material that should be used, and others will use a combination of different materials to get the desired results. It's important that the stuffing within the 'line' cannot move, disintegrate or compress over time, as it's very hard to get to once the enclosure is finished and sealed. Unlike a more traditional enclosure, the internals of the transmission line can't be accessed by removing the speaker.
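The ¼ wavelength starting point is easy to calculate, even though the stuffing and taper mean the final line is always tuned by experiment. A minimal sketch, assuming c = 343m/s and a hypothetical 40Hz driver:

```python
# Quarter-wave line length for a given driver resonance. This is only the
# textbook starting point - stuffing density lowers the effective velocity
# of sound in the line, so the built length is refined by measurement.

def quarter_wave_m(fs_hz, c=343.0):
    """Physical line length (metres) for a quarter wavelength at fs."""
    return c / (4.0 * fs_hz)

line_len = quarter_wave_m(40.0)   # about 2.14m for a (hypothetical) 40Hz driver
```

The 2+ metre result for even a modest 40Hz target shows why transmission lines are almost always folded, as in the figure above.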
Although I don't intend to provide much info about horn systems here, they have to be mentioned - if only in passing. A horn acts as an acoustic transformer, reducing the high acoustic pressure at the diaphragm (mounted at the throat) to a low pressure (at the mouth) that matches the air. Horn systems can be 10dB more efficient than direct radiators, but for low frequencies the mouth (and length) need to be very large, making them impractical for home systems. The original Klipschorn™ was one of very few 'domestic' systems that used horn loading for the full frequency range. Developed in 1946, they are large and very expensive.
Fully horn-loaded systems used to be common for sound reinforcement, and when done properly are very efficient and provide sound that is/ was (IMO) vastly superior to that obtained from modern line arrays. There are several domestic and studio monitor systems that use either a horn or a waveguide (a similar principle) for the tweeter. Waveguides are becoming very common, and can be used with a 'conventional' dome tweeter to provide a small increase in efficiency and a better controlled dispersion than a dome tweeter by itself. See Practical DIY Waveguides on the ESP site for more information.
Because the design of horns is so specialised, this is the limit of what is shown here. However, construction methodology, the need to ensure that panels are not resonant and other general comments apply to any enclosure, regardless of the type of system. Panel resonances in a folded bass horn can be particularly troublesome, due to the high pressure at the throat of the horn.
While making an enclosure with no parallel sides is possible, it's very difficult for the home constructor making only a pair of enclosures. The vast majority of speakers use conventional parallel sides, front and back, top and bottom. This can still produce a very good box, but there is one thing that can make it 'better'.
There's something known as the 'Golden Ratio', signified by the Greek letter φ (Phi). There are many claims as to its inherent advantages (including aesthetics), but it does have an important characteristic ... no side is a multiple or sub-multiple of any other, so a box using the golden ratio cannot set up single-frequency standing waves across more than two panels. The ratio is defined as ...
φ = (1 + √5) / 2 = 1.61803398875...
For example, if the baffle is 400mm high, the width (or depth) should be 247mm, with the remaining dimension being 153mm. Note that these are all inside dimensions. These dimensions are not harmonically related, so there is less chance of reinforcement of particular frequencies or overtones. In reality, it probably doesn't make a great deal of difference one way or another, and it's just as easy to build a box using the 'golden ratio' that sounds bad as any other box shape (excluding a perfect cube with the driver smack in the centre of one face of course).
The ratio can also be described as 0.618 : 1 : 1.618. Which side you choose for the baffle is largely irrelevant, but ideally it would be the narrowest side, so for the example above the baffle would be 400mm high by 153mm wide (internal). However, this does limit the size of speaker that can be mounted on the baffle - typically to no more than 150mm (6"). If the enclosure has a sub-enclosure (for a midrange driver for example), the problem gets a bit harder. There are probably far more commercial speaker boxes that don't use the golden ratio than there are that do, so to some extent it's always going to be a moot point.
Figure 3.1 - Golden Ratio And √2 Ratio
As always, the room dimensions will have a far more profound effect on the sound, and bracing, internal damping and sound absorbing materials are just as essential as with any other enclosure shape. It's expected that very few rooms will adhere to the golden ratio, and using it for a loudspeaker doesn't guarantee anything. Overall construction methodology, with particular emphasis on bracing, can give excellent results provided some care is taken to ensure that the panels are different sizes, and that bracing is not symmetrical. Braces should always be off-centre on a panel so that the two 'sub-panels' have different resonant frequencies, but this can be hard to achieve while maintaining reasonably simple construction techniques.
While there will likely be some people who insist that the golden ratio always makes boxes sound 'better', it's not a 'magic bullet', and if it results in an inconvenient box size then by all means feel free to deviate. Another ratio that again isn't 'magic' but can work well is √2 (1.414213562...), which is also an irrational number, as is π (Pi - 3.141592654...). √2 is useful and can provide a 'better' aspect ratio than φ (in particular, you can get a wider baffle assuming the box is deeper than it is wide), but you will usually stay out of trouble provided dimensions are not direct multiples (or sub-multiples) of each other. If at all possible, try to use irrational multipliers, rather than 'simple' ratios such as 1.5 (etc.). The drawing above shows the two ratios superimposed so you can see the difference easily.
While it might seem that 1/3 is irrational (0.33333...), it's not, at least in terms of sound. A panel that has a ratio of 1:3 may excite the third harmonic. All 'simple' ratios can create problems.
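Deriving golden-ratio dimensions for a given internal volume is straightforward, because 1/φ × 1 × φ = 1 and the middle side is therefore just the cube root of the volume. A small sketch (15.1 litres is simply the volume of the 153 × 247 × 400mm example given earlier):

```python
# Internal sides (mm) of a golden-ratio box for a target internal volume.
# Since (1/phi) * 1 * phi = 1, the middle side is the cube root of the
# volume, and the other two sides follow by dividing/ multiplying by phi.

PHI = (1 + 5 ** 0.5) / 2          # 1.61803398875...

def golden_ratio_sides_mm(volume_litres):
    v_mm3 = volume_litres * 1.0e6  # 1 litre = 1,000,000 mm^3
    mid = v_mm3 ** (1.0 / 3.0)
    return mid / PHI, mid, mid * PHI

small, mid, large = golden_ratio_sides_mm(15.1)
# roughly 153 x 247 x 400 mm - the worked example in the text
```

Swapping PHI for 2 ** 0.5 gives the √2 alternative mentioned above, with a usefully wider baffle for the same volume.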
There are quite a few things that people do for appearance, that usually cause a speaker system to be less 'perfect' than the builder may have hoped. One of these is placing the drivers in a neat row, exactly in the centre of the baffle. While this means there's no 'left' or 'right' speaker (they are interchangeable), it also means that diffraction effects are magnified. Diffraction happens when a sound wave reaches a discontinuity. This is commonly the edge of the cabinet, but it includes adjacent speaker drivers as well. Some people consider that diffraction effects are inconsequential, but IMO it's better to err on the side of caution.
When the drivers are equidistant from each edge of the cabinet, the diffraction effect is magnified. It was shown many years ago by Harry Olson that a circular baffle with a driver in the centre is the worst possible arrangement. A square baffle (speaker driver centred) is almost as bad, and the best results are obtained when the driver is mounted on a sphere. For more conventional systems, all drivers should be a different distance from each edge of a rectangular baffle. Ideally, the edges will be well rounded - not quite to the extent of producing a partially spherical baffle perhaps, but lovely square edges should be avoided.
In some cases, a diffusing or absorptive material around the driver can help, but to be effective at lower frequencies it needs to be unrealistically thick. It's not difficult to ensure that all drivers are a different distance from the edge of the cabinet though, and you only need to be concerned with midrange and treble - bass is more-or-less omnidirectional, because the diameter of the driver is small compared to wavelength.
The ideal loudspeaker would have equal dispersion at all frequencies, so that sound reflected from walls, floor and ceiling would have the same spectral energy as the direct sound. This is easier said than done, although there are a few speakers that do manage to come close. This is something that some designers strive for, while others ignore it almost completely. Even dispersion does have some major benefits of course, especially if you listen (or are forced to listen) off-axis of the system. The so-called 'sweet spot' needs to be wide enough so that everyone listening hears the same (hopefully) well balanced sound. This is achieved in only a few designs, and for the high frequencies it generally means using a well designed horn or waveguide. It's harder at the lower end of the treble range (around 2-3kHz) because in most systems, this is provided by the midrange (or mid-bass) driver, which will have a diameter that's a significant fraction of the wavelength. Some midrange drivers use a 'phase plug', which is intended to provide more even coverage at higher frequencies than a similar driver without one.
For 'bookshelf' speakers or any enclosure that will be placed against (or near) a wall or other large surface, a rear-facing tuning port is ill advised, because it won't be able to radiate into 'free space'. Likewise, placing a vent right next to the tweeter isn't sensible either. I was unable to locate any definitive papers on this topic on the Net, but it doesn't seem wise to create relatively high velocity, low frequency air movement close to the high frequency driver. The air movement is likely to cause some degree of high frequency modulation, which may be similar to so-called Doppler distortion. I have no proof one way or another, but IMO it's not ideal. The port opening may also create diffraction effects, but I've found no information one way or the other on this.
Deep bass reproduction ideally needs a fairly large diameter driver, or high (sometimes unrealistic) linear excursion. When a single driver has to cover from bass all the way up to the tweeter's crossover frequency, there are inevitable compromises. Bass needs a larger driver than midrange, and once the diameter of the driver is 'significant' compared to wavelength, the off-axis response suffers. Ideally, the driver used for midrange shouldn't exceed around 125mm (5"), but if it has to handle bass as well that's somewhat on the small side of the ideal.
This isn't to say that a 125mm driver can't produce good bass - some are surprisingly good. However, one also needs to ensure that the excursion remains within the linear range at all times! That means a fairly large Xmax or comparatively low listening levels, otherwise there will likely be excessive intermodulation distortion. Expecting response below around 40-50Hz with small drivers is unrealistic, because their radiating area is too small. Multiple drivers can work, and will ideally be configured as a '2.5-way' system, where two drivers are in parallel for low frequencies, but the driver farthest from the tweeter has a rolled-off top end. The D'Appolito arrangement (named for Joseph D'Appolito, aka MTM - midrange-tweeter-midrange) is preferred by some, but it may cause issues when the listener is not in-line with the tweeters. It's always important to keep the distance between the midrange and tweeter as small as possible to avoid phasing errors in the vertical plane (sometimes referred to (by me at least) as the 'sit-down, stand-up' effect, where the 'tone' of the speaker changes when you sit or stand).
We also need to look at what 'significant' means in terms of wavelength.
In general, the diameter of any loudspeaker driver should ideally be less than ½ wavelength at the highest frequency of interest, but that can be extended at the expense of dispersion. The driver's cone diameter should always be smaller than 1 wavelength. Wavelength is determined by ...
λ = c / f     where c is the velocity of sound (nominally 343m/s at 20°C), λ is wavelength, and f is frequency
From the above, it's apparent that smaller drivers are ideal for the midrange. A 65mm cone (nominally a 90mm driver) will have almost perfect directivity up to 2kHz, and is generally acceptable up to 3-4kHz. Some drivers include a phase plug which is intended to improve the directivity at higher frequencies. Some can be effective, others not so good - it depends on whether the manufacturer has included it solely for aesthetics or performance (the latter is more expensive, because it requires many tests to get it right). While it's common for people to use 150mm mid-bass drivers with a tweeter, it's hard to get a tweeter that can cross over at a low enough frequency to prevent poor off-axis response.
A very common crossover frequency is 3kHz, at which frequency a complete wavelength is only 114mm. The midrange cone should ideally be no more than half that (57mm) but simple reality dictates that it will almost always be larger. A 100mm (4") driver is a reasonable compromise, with the cone being pretty close to the optimum diameter. This almost always means that the system will be a 3-way, since a 100mm speaker isn't going to be very useful for bass. In general, 3-way systems can perform very well, and there's rarely any need to exceed that - other than adding a subwoofer of course. While technically that makes the system 4-way, the sub is usually mono, so only one is used in most systems.
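The wavelength arithmetic used above is trivial to verify. A minimal sketch, assuming the nominal c = 343m/s from the formula:

```python
# Wavelength and the 'half wavelength' cone-diameter guideline from the text.

def wavelength_mm(freq_hz, c=343.0):
    """Wavelength in millimetres at a given frequency."""
    return c / freq_hz * 1000.0

lam = wavelength_mm(3000.0)    # about 114mm at a 3kHz crossover
ideal_cone = lam / 2.0         # about 57mm ideal maximum cone diameter
```

Running the same function at 2kHz gives roughly 171mm (half wavelength 85mm), which is why a 65mm cone behaves well up to that frequency.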
In some cases, it may be possible to use a waveguide to load the tweeter and allow operation to a lower frequency, but these can be difficult to design and build for the hobbyist constructor. The secondary advantage of using a waveguide is that it moves the tweeter back from the baffle, and can help to 'time align' the woofer and tweeter. Waveguides are discussed in the contributed article Practical DIY Waveguides (a three part article). Designing a waveguide that does the things you want (and none of the things you don't want) is not a trivial undertaking.
Of course, the points made above are suggestions, and are not intended as 'rules'. Many very successful commercial systems use a larger mid-bass driver, and can still perform very well. There will be 'disturbances' in the off-axis response (especially with low-order crossover networks), but not everyone agrees that the polar response has to be perfect over the full frequency range. For example, if your listening room is acoustically treated to eliminate most reflections, the off-axis response is only important if you listen off-axis. Room treatment can have a far greater influence on what you hear than most people realise, and while important, that's not an area where I have significant experience, and no products (whether commercial or DIY) will be discussed.
Ultimately, while perfection is always nice to have, I don't think that any commercial loudspeaker has actually achieved 'perfection' as such. The same can be said for room treatment and (although to a far lesser extent) electronics. It's not at all difficult to design and build preamps and power amps that have distortion so far below the audible limits that they contribute little or no degradation of the sound. However, this has never stopped people from going 'one better', to the point where it can be difficult to measure any anomalies with the best equipment available.
Note: There will be an article coming soon that discusses time alignment in some detail. In the meantime, while the info below is more-or-less accurate, there's a lot more to it. As a starting point, the top plate (voicecoil polepiece) is a good 'first guess'. When the voicecoils are aligned you are close to the ideal, but there will be a small phase shift caused by any voicecoil inductance - this will be in the mid-bass/ midrange driver, as there is usually significant semi-inductance above 500Hz or so. This delays the output of the midrange driver a little, but it shouldn't normally cause a serious error. The new article is due mid September.
Ideally, all loudspeaker drivers in a system will reproduce the energy of a transient simultaneously from the listener's perspective. This nearly always means that the tweeter should be set back on the baffle, or its output will be slightly ahead of the midrange driver - the sound from the tweeter will reach your ears first, closely followed by that from the midrange. The time difference may only be 75µs or so (up to around 150µs is not uncommon with larger mid-bass drivers), but that small offset can make a surprisingly large difference to the frequency response. It's often affected more off axis too, because of the relatively large area of the midrange driver.
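Converting these delays into physical offsets is simple - the path difference is just the delay multiplied by the velocity of sound (assumed 343m/s here):

```python
# Physical setback corresponding to an acoustic time offset.
# distance = delay * c; delays below are the figures quoted in the text.

def setback_mm(delay_us, c=343.0):
    """Path-length difference (mm) for a given delay in microseconds."""
    return delay_us * 1.0e-6 * c * 1000.0

d_75 = setback_mm(75.0)      # about 26mm for a 75µs tweeter lead
d_150 = setback_mm(150.0)    # about 51mm for 150µs
```

A 26-51mm setback is of the same order as the depth of a typical waveguide, which is why a waveguide can double as a crude time-alignment device.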
+ +There is a fairly extensive look at time alignment (Phase, Time and Distortion in Loudspeakers), but it's largely from a purely theoretical standpoint. In reality, people often go to great lengths to set the tweeter further back than the midrange driver to ensure that the acoustic centres of the drivers are aligned properly, but this can cause other issues - especially diffraction. If a horn or waveguide is used for the tweeter, this might be sufficient to move the acoustic centre so it's in line with the midrange, but doing so does not automatically mean the system will sound any better.
+ +Before embarking on time alignment, you need to determine the acoustic centre of each driver. This is rarely as simple as aligning dustcaps or voicecoils (or any other part of the motor structure), and it usually varies with frequency. To get results that are useful, you must measure using the time domain (using an impulse test rather than a frequency sweep). By definition, a frequency sweep measures in the frequency domain. The impulse can be a short tone-burst, or just a single impulse generated by measurement hardware/ software or some other means of creating a repeatable pulse stimulus. In reality, you'll probably have to measure in both the time and frequency domains. If this is done carefully (and with the crossover network you plan to use in place), it should be possible to get results that will be entirely satisfactory.
+ +Time alignment between bass and midrange drivers is generally not important, because any offset is (usually very) small compared to wavelength. Since bass frequencies are (pretty much by definition) comparatively slow, a short impulse of (say) 100µs is simply not possible, as that corresponds to a frequency of 10kHz or more. Consequently, if there's a 100µs time difference between the bass and midrange (assuming a crossover frequency of around 300Hz) it will not cause any audible variation. There most certainly is an effect, but at less than 0.03dB it pales into insignificance compared to normal speaker variations (and the room hasn't been considered yet).
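The size of that error is easy to verify with a few lines of arithmetic: a 100µs offset at 300Hz amounts to only about 11° of phase, and summing two equal-amplitude signals with that phase difference costs a few hundredths of a dB - the same order as the figure quoted above. A minimal sketch (the function name is mine, purely for illustration):

```python
import math

def summed_level_db(delay_s: float, freq_hz: float) -> float:
    """Level of two equal-amplitude sources summed with one delayed by
    delay_s seconds, relative to perfectly in-phase summation (dB)."""
    phase = 2 * math.pi * freq_hz * delay_s        # phase offset in radians
    # Two unit phasors separated by 'phase' sum to a magnitude of 2*cos(phase/2)
    return 20 * math.log10(math.cos(phase / 2))

# 100 µs offset at a 300 Hz crossover: a few hundredths of a dB
print(f"{summed_level_db(100e-6, 300):.3f} dB")
```

The same function shows why the offset matters at a 3kHz crossover: the phase error is ten times larger, and the dip is no longer negligible.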
In some cases, the relative alignment of drivers can be improved by adding a very short delay - perhaps digital, or using phase shift networks to achieve the same end. Again, doing so will not necessarily make anything sound 'better'. It might be different, but 'different' is not the same as 'better', although our ear-brain mechanism will often conflate the two. It's common for us to hear 'better' when the result is merely slightly 'different'.

All sorts of delay ideas are used for time alignment, but they are mostly not applicable to passive crossovers. One technique that has been proposed is an L/C (inductor/ capacitor) 'ladder' network, but this is not something to be approached lightly. The cost is likely to be considerable, and it's very difficult to get a flat response. Yes, you can obtain phase shift, but there are usually much easier ways to go about it. In an active crossover, a time delay can be created by an all-pass filter (usually several in series), but this isn't without issues either. Phase shift networks are a common solution to obtain short time delays, but the delay is not consistent - it varies with frequency. So, even if the offset is perfect at the crossover frequency, it will not remain 'perfect' over a wide frequency range. This causes ripples in the frequency response, and wide-bandwidth phase shift networks are hard to design and require many opamps.
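The frequency-dependence of an all-pass 'delay' is simple to demonstrate. A first-order all-pass with time constant RC has a group delay of 2RC at low frequencies, falling as frequency rises. The values below are illustrative only (RC chosen here for a nominal 50µs low-frequency delay):

```python
import math

def allpass_group_delay_us(f_hz: float, rc: float) -> float:
    """Group delay (µs) of a first-order all-pass with time constant RC (seconds):
    tau = 2RC / (1 + (2*pi*f*RC)^2)"""
    w = 2 * math.pi * f_hz
    return 2 * rc / (1 + (w * rc) ** 2) * 1e6

rc = 25e-6   # illustrative value only: 2RC = 50 µs at low frequencies
for f in (100, 1000, 3000, 10000):
    print(f"{f} Hz: {allpass_group_delay_us(f, rc):.1f} µs")
```

The delay is near-constant at low frequencies but falls away well before 10kHz, which is exactly why several staggered stages are needed to approximate a constant delay over a wide band.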
Sometimes, designers use different crossover slopes for the midrange and tweeter to achieve the phase shift necessary for time alignment. Anyone can do this of course, but it requires a good measurement system to ensure that the results are as expected, and is usually difficult to get 'just right'.

If the baffle is sloped backwards to achieve time-alignment, you will be listening to the drivers off-axis, so their off-axis response has to be good enough to allow this without causing response errors. Some constructors (including manufacturers) have used a stepped baffle (usually with the 'step' at a 45° angle), but this means that the midrange and tweeter drivers can't be located as close to each other as they should be. It's no accident that some midrange drivers (as well as some tweeters) have flat sides or a curved profile on the tweeter surround so the two can be located as close to each other as possible. This isn't done for fun - the two sound sources need to be as close as possible to ensure minimal destructive interference (combing effects).

If the drivers are separated by a true step (i.e. 2 × 90°) then you risk creating what I like to call a 'diffraction engine'. The output from both drivers will be subjected to potentially extreme diffraction, which again will cause combing (a situation where the response varies widely depending on the listening or measuring position). Using separate enclosures stacked one above the other (with offset to 'time-align' the drivers) can have much the same effect. This can even extend to loudspeakers that cost more than a mid-priced luxury car, but there is no suggestion here that they are somehow 'no good'. This is merely an observation.

Attempting to ensure that everything is physically 'perfect' in terms of the impulse response (time alignment) doesn't necessarily result in a system that is better than one where the drivers are mounted on the baffle in a conventional manner. Every small aberration can be measured, but often it will not be audible in situ. You may hear a difference, but again, being different doesn't necessarily mean better. Despite the claims of some, measurements are far more revealing than our hearing ever will be, as hearing evolved primarily to keep us alive ... music is wonderful to have, but we don't need it to survive in the world.
Time alignment is not necessarily essential, and there are countless well regarded commercial loudspeaker systems that don't use anything fancy to correct for minor time delays. If you're lucky, the time difference may be such that reversing the phase of the tweeter may be sufficient to ensure that there is very little disturbance in the frequency domain. The time delays involved are usually short (less than 200µs is likely to be typical). In some cases, a minor tweak to a passive crossover (shifting its nominal frequency a little for example) can achieve good results. While it's certainly possible to calculate the shift needed, it's usually simpler to do it experimentally (some might call this 'voicing' the system - a fancy name for a bit of trial and error).
While we humans can't resolve very short time delays, we will easily hear any destructive interference caused, which typically manifests itself as a notch at the frequencies where the phase is altered by the delay. Although sound will travel a mere 34mm in 100µs, its effects can still be audible. Whether the small notch or ripple is audible or not depends on the resolution of the drivers used, although the room acoustics will always have a far more significant effect overall.
Figure 6.1 - 145µs Displacement, Phase Shift Network Vs. Polarity Reversal
As an example of the topics discussed above, a 24dB/ octave Linkwitz-Riley crossover was simulated. The crossover frequency is 3kHz (2.83kHz to be exact), and a three stage phase shift network was compared to reversing the polarity of the tweeter. For what it's worth, this is almost identical to the arrangement my speakers use, and the larger than normal offset is because I use a ribbon tweeter. The phase shift network gives the response shown in red, and the green trace is the result when the polarity of the tweeter is reversed. It's pretty obvious that reversing the phase of the ribbon tweeter gives a significantly better response than the phase shift network.
A phase shift network used as a delay is optimal when the dips are of equal amplitude (peaks are more audible and are nearly always unwelcome), and that's the case here. The phase network was staggered, using different value caps to spread the delay over a wider range. A two stage phase-shift network was worse than the three stage staggered version, and no phase network could compare to the simple phase reversal. Time alignment is (or can be) very tricky, and sometimes the least obvious method gives the best result.

It's worth noting that locating the acoustic centre is not a simple process. I set up an experiment in my workshop, and it's fair to say that the results were inconclusive at best. I used a 25mm dome tweeter and a 100mm mid-bass driver, wired in parallel. The pair was pulsed by discharging a 33µF capacitor into them, and the tweeter was moved from having the magnets in-line (both on the bench top) to having the two mounting surfaces in line. The total distance was about 40mm, and while there were differences, they were not pronounced. Part of the problem is that the mid-bass is slow compared to the tweeter, so there was no possibility of seeing separate impulses. The best response was obtained with the rear of the magnets in-line, and the impulse response is shown below.
Figure 6.2 - Magnets Aligned
The above trace was obtained with the magnets aligned (roughly aligning the acoustic centres for the drivers used). This is a 'better' response, but without performing a frequency scan it's hard to be certain. I have a pair of almost identical drivers in a small box that I use as my secondary workshop monitor, and (predictably) their mounting surfaces are on the same plane. This box was (many years ago) designed by the late Richard Priddle, and was well regarded at the time.
Figure 6.3 - Mounting Surfaces Aligned
The above looks pretty much ok, and the impulse is reproduced fairly accurately. However, the positive and negative peaks are a bit lower than they should be, and there's a small 'ripple' in the second positive peak. I couldn't hear the difference between this and the first plot shown above, but the microphone picked it up easily. The time delay from the mid-bass is 117µs, equivalent to a distance of 40mm.
You can calculate the time delay and/ or distance travelled with the following formulae, which take the time in µs and the distance in mm (so don't use suffixes to imply mm or µs) ...
c = d / t
d = t × c
t = d / c

where c = velocity of sound (nominally 0.343mm/µs), d = distance in millimetres, t = time in microseconds
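The formulae are trivial to apply. A quick sketch (function names are mine), using the 117µs delay measured above:

```python
C = 0.343  # velocity of sound in mm/µs (≈343 m/s)

def distance_mm(time_us: float) -> float:
    """d = t × c : distance (mm) that sound travels in a given time (µs)."""
    return time_us * C

def time_us(distance: float) -> float:
    """t = d / c : time (µs) for sound to travel a given distance (mm)."""
    return distance / C

print(f"{distance_mm(117):.1f} mm")   # 117 µs corresponds to ~40 mm
print(f"{time_us(34):.0f} µs")        # sound takes ~99 µs to travel 34 mm
```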
Ultimately, and despite the offset that usually exists with most drivers, the effects are never as drastic as a simulation might indicate. Simulations work in the electrical domain, where it's possible to get almost infinitely deep notches if drivers are 180° out of phase at some frequency. Acoustically, this doesn't happen. While there is every chance that you will get a notch due to phasing of adjacent drivers, what's important is whether it's audible or not. Remember that the response of every driver you look at is never flat, but can vary by up to ±5dB in many cases. This is particularly true at higher frequencies, and depends on many factors. Cone drivers of 100mm diameter or more can have some fairly serious variations above 1kHz, and these variations are exacerbated off-axis. Mostly, the 'disturbances' caused by non-aligned acoustic centres will be less than those from the driver, so it may be a moot point.
It's a fairly easy matter to run a simulator to see the effects of any time misalignment, but simulators operate in the electrical domain. For example, you can use an 'ideal' transmission line, which lets you set the characteristic impedance and delay time to anything you like. The results of an electrical simulation are always extremely pessimistic, because the electrical domain (and the simulator) is close to exact, and completely fails to account for the signals mixing in the air, rather than electrically. The differences are so significant that you'll nearly always get an answer that's not only pessimistic, but often quite wrong. Simulator packages are designed for circuit simulation, and the results do not apply to the acoustic response, other than by accident. That's not to say that such experiments are useless, but you need to be aware of the differences between electrical and acoustical summing.
Ideally, your speaker will be a point source, so that all frequencies emanate from the same place in space. Tannoy has (for a long time) made speakers that are as close to a real point source as you're likely to find. Their coaxial speakers have a horn-loaded tweeter that's concentric with the woofer/ mid-bass, and it uses the main cone as part of the horn. Tannoy is not alone - there are several other makers of dual-concentric drivers. While this can work well, it's not recommended if the 'main' driver has significant excursion, as that will change the horn parameters. There are also other concentric drivers that use a sectoral horn mounted to the centre polepiece of the main driver, so cone excursions will have little effect. Celestion makes a coaxial driver that uses only one magnet, but has separate voicecoils for the HF and LF sections. While they claim that they are phase coherent, this may or may not be the case in reality. Seas has a similar driver (L12RE/XFC), but I don't have any details other than what's shown on the website.

Others (especially car speakers) claim to be 'concentric', but mount a small tweeter in front of the main driver, either on a sub-frame or an extension of the woofer's centre pole. While these are likely a good choice for a car, few people will find them to be satisfactory for hi-fi. Tweeter diffraction is likely to be fairly extreme, and due to the small tweeter they usually have to be crossed over at an unrealistically high frequency. This doesn't mean that good results can't be obtained, but they will rarely compete well against separate drivers selected for their performance.

These coaxial designs are not universally loved (many hate them with a passion), but they remain the closest to a true point source as you are likely to find. The main point is that all loudspeaker drivers are a compromise, and coaxial/ concentric designs are no different. Ultimately the driver selection comes down to cost, and the designer deciding what they can or cannot live with. Audio is very personal, and what works depends on what you prefer listening to. If you find that a single wide-range, high-efficiency driver suits the music you like, then that's what you'll probably use. There are several wide-range drivers that are commonly used in DIY projects, and they tend to be used predominantly by those who imagine that the key to 'good sound' is simplicity. This may be true in some cases, but if this approach is aligned with your wants/ needs, then you have to be prepared to spend serious money for anything 'decent'.

One of the issues with wide range drivers is that the cone area is large compared to wavelength at high frequencies, so they often have a very small 'sweet spot', and might not sound so good off axis. They also tend to be rather expensive, and like many of the different arrangements mentioned here, they aren't something I've worked with. My primary work in audio is on the electronics rather than speakers, and there are so many different speakers on the market that it would be impossible (and impossibly expensive) to test even a small percentage of them.

The simple fact is that most commercial loudspeakers use individual drivers for mid-bass and treble (plus 'super treble' if you think you can hear above 20kHz). Many of these receive rave reviews (is there any other kind?), and all speakers I've built (both for hi-fi and sound reinforcement) have used individual drivers. Some were disappointing and didn't last, others are anything but, and are still in use. I've not tried to build a true point source speaker, and the only coaxial driver I have is sub-par in most respects, and hasn't been used for anything other than a few experiments.

Of course, this does not mean that coaxial drivers shouldn't be used. If you find one that suits your needs and sounds good, then you get the benefit of well controlled dispersion, very little lobing, and a true point-source - at least for the mids and highs. Large coaxial drivers (e.g. 300mm (12") or greater) become a compromise, and the horn tweeter isn't to everyone's liking in any size. With many of the smaller drivers, the 'horn' is more of a waveguide than a true horn, potentially minimising the oft-complained-of 'horn sound'. Choosing drivers that have a tweeter suspended in front of the main driver might work for you, but I know of no commercial loudspeakers that use that arrangement. It's not uncommon for in-wall, ceiling and car speakers to use this approach, but these are (usually) not regarded as 'hi-fi'.

As most readers will be very aware, I recommend active crossovers wherever possible. This means that each loudspeaker driver has its own amplifier, and in the DIY world this is not especially difficult or expensive to do. Passive crossovers (using capacitors, inductors and resistors) take the full-range signal from the power amp, and divide the frequency range so that each driver gets only those frequencies it can handle. There is no doubt whatsoever that a very well designed and executed passive network can sound very good indeed, but unless every precaution is taken there will be interactions that may make excellent drivers sound dreadful.
There are several articles that cover passive crossover design, and these should be read through thoroughly so you know what to expect. Simple (e.g. 6dB/ octave, preferably series) passive crossovers can work much better than you may expect, but they are limited to relatively low power use. Because the slope is so gradual, it's easy to get excessive tweeter power at frequencies below the crossover point, and where the tweeter is least able to cope with the dissipation and/ or excursion. For example, a 3.1kHz, 6dB/ octave series crossover has reduced the tweeter voltage by only 17dB at 310Hz. To put that into perspective, if you have a 50W/ 8Ω system, the power at 310Hz is over 500mW - that might not sound like much, but it's probably more than the tweeter was designed to handle at that frequency. First order (6dB/ octave) crossovers can be used if you have well behaved drivers, and don't intend to use amplifiers of more than around 30W or so. If you plan to use a 6dB/ octave network, a series configuration is preferred (see Series Vs. Parallel Crossovers).
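The first-order numbers above are easy to check. A 6dB/octave high-pass is roughly 20dB down (relative to the full signal) one decade below the crossover frequency, i.e. about 17dB below its level at the -3dB crossover point, and from a 50W amplifier that leaves roughly half a watt at 310Hz. A quick sketch (function names are mine):

```python
import math

def hp1_atten_db(f: float, fc: float) -> float:
    """Attenuation (dB, relative to passband) of a first-order
    (6 dB/octave) high-pass at frequency f, with crossover frequency fc."""
    ratio = f / fc
    return 20 * math.log10(ratio / math.sqrt(1 + ratio ** 2))

def power_w(p_full: float, atten_db: float) -> float:
    """Power remaining after the given attenuation, from full power p_full."""
    return p_full * 10 ** (atten_db / 10)

att = hp1_atten_db(310, 3100)              # one decade below the crossover
print(f"{att:.1f} dB")                     # about -20 dB relative to passband
print(f"{power_w(50, att) * 1000:.0f} mW") # roughly half a watt from 50 W
```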
Passive crossovers should ideally be at least 12dB/ octave, but to get them to work well, impedance compensation is essential for both the mid-bass and tweeter. This makes the crossover network fairly complex, and if good quality parts are used it will be expensive. Higher order passive networks can be used, but anything above 18dB/ octave (3rd order) becomes a very costly undertaking. As the filter order is increased, so too is the need for accurate component values and impedance compensation. Even a small variation of impedance across the crossover region can have serious effects on the accuracy of the network. Likewise, the tolerance of the parts used in higher order networks becomes more critical, and even a small variation of voicecoil resistance (due to the power dissipated) can have serious effects on the network's performance.

Unfortunately, even getting everything 'right' doesn't always mean that the speakers will sound any good. A very useful tool for optimising the crossover frequency is the Project 148 State Variable Crossover, which was designed with this very application in mind. I've been using variable frequency electronic crossovers for many years (nearly 40 at the time of writing!) to find the 'sweet spot' between drivers. While crossover frequencies are often dictated by the driver parameters, sometimes you need to go a little outside of the recommended parameters unless you have drivers that are specifically designed to work well together. This is regrettably uncommon, even when the drivers are made by the same company.

This is one of the factors that has led some people to believe that crossover design is a 'black art', whose intricacies are known only to a select few. This is not the case at all, but there's a lot more to it than buying a generic crossover network from a hobbyist supplier, wiring it to the loudspeakers and considering the job completed. Unless the drivers have impedance compensation, the results can be mediocre at best, but rarely 'horrible' unless you do something seriously wrong.

Many 'modern' systems are using DSP, with a fully digital signal processing chain. Unfortunately, this involves some serious processing power, and is not without its problems either. Several people who have bought the Project 09 analogue Linkwitz-Riley crossover board have done so after deciding that the analogue to digital and digital to analogue converters (along with the DSP itself) created too many 'artifacts' for their liking, and reverted to an analogue solution. As far as I'm aware, no-one who changed back to analogue has decided that the DSP is 'better'. While it makes it easy to add delay (and optionally equalisation) if necessary, the process can degrade the signal unless the very best DSP chips are used (along with high quality ADCs and DACs).
This doesn't mean that they are no good - some are very, very good indeed, but probably not if you only pay a few hundred dollars for the complete setup. The digital process also has very limited headroom, since most run with only a 5V supply, so the absolute maximum signal level is generally below 2V RMS. Both the sample rate (sampling frequency) and bit depth (the number of bits available for processing) are important. When complex filters (often with equalisation) are performed, the system has to use at least 24 bits or low-level detail may be lost as it passes through the processing chain.
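The bit-depth point is easily quantified: each bit adds roughly 6dB of theoretical dynamic range, so 16 bits provides about 96dB and 24 bits around 144dB. The extra range is what stops low-level detail from being truncated as it passes through successive filter and EQ stages. A minimal check:

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of an ideal converter (≈6.02 dB per bit)."""
    return 20 * math.log10(2 ** bits)

for bits in (16, 24):
    print(f"{bits} bits: {dynamic_range_db(bits):.1f} dB")
```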
Designing loudspeaker systems is not a 'black art', but it is full of traps for the unwary. Probably one of the most common mistakes (see note below) is to align drivers down the centre of the baffle, which has the advantage that you end up with speakers that can be swapped - there is no 'left' or 'right' speaker. The diffraction effects aren't always readily audible, but they will exist and can make getting a flat response very difficult (within the abilities of the drivers used of course). If you're making a set of 'utility' speakers then it doesn't matter, because no-one expects them to be perfect. On the other hand, if you shell out several hundred dollars for good speakers, then it's worth the effort of making separate baffles for each enclosure. Mostly, they will be mirror images, so the extra effort needn't be that great.
Note: It's highly debatable as to whether aligning drivers down the centre of the baffle is a 'mistake' or not. There are many well regarded speakers that do just that, and they don't seem to have engendered the wrath of reviewers for doing so. My preference has always been to offset the drivers to ensure that no dimension from the tweeter (or midrange) to the edge/ top of the baffle is the same, as that minimises diffraction problems. A popular (and comparatively recent) technique is to use a waveguide for the tweeter, making diffraction effects (almost) a non-issue.
There is considerable difference of opinion as to whether the tweeters in an asymmetrical baffle should be on the 'inside' (closer together by maybe 100mm or so) or the 'outside'. My preference has always been for the inside, but it's something that you need to try for yourself and decide which way sounds better. It may be that neither is actually better, just slightly different. There seem to be as many opinions on this as there are people writing about it, so it's up to the builder/ listener to decide.
One thing that is important is to ensure that the tweeter is directly above the midrange (not offset). This can often be at odds with keeping drivers at different distances from cabinet edges, especially with very narrow enclosures. If there is an offset, you will get uneven dispersion around the crossover region, with the radiation pattern tilted [ 5 ]. With a system using an MTM (mid-tweeter-mid) arrangement, you might find that a small offset improves directionality in the preferred direction, but this means that left and right speakers cannot be interchanged, and extensive testing will be necessary to ensure that dispersion is properly controlled at all frequencies.

It's generally agreed that the tweeter should be at ear level while sitting in your preferred position. For speakers that aren't tall enough, stands should be considered mandatory. Many households can't readily accommodate 'true' floor-standing speakers, as most tend to be rather imposing. While smaller cabinets are often described as 'bookshelf' designs, actually locating them on a bookshelf is usually a bad idea - especially those with a rear vent which would be obstructed, reducing bass output. It's also likely to be difficult to use sufficient toe-in (pointing the boxes towards the listening position). A lack of toe-in can often result in a 'hole-in-the-middle' sound, where the central position (which is supposed to be the prime listening position) has a pronounced response dip, and often some odd phasing issues. There are a few people who prefer 'straight-out' speakers, and some even prefer toe-out (speakers splayed), but this rarely (if ever) improves the sound stage or imaging.
Figure 9.1 - Baffle Cross-Section
In general, the baffle should not be recessed to allow for a grille cloth or protective cover. It's tempting to do so, but it can cause considerable disturbances due to diffraction. The speaker drivers should be recessed into the baffle though, so there are no discontinuities across the face of the baffle itself. For small 'near-field' speakers (as might be used with a computer for example) it probably doesn't matter, but I have been able to measure a small 'glitch' in the response of speakers that are surface mounted. Minimising discontinuities is almost always beneficial, but some will always remain because of the way most speakers are manufactured. Just the surround and cone of a midrange driver can create small but measurable response anomalies, but reality tells us that there's not much that can be done to avoid this.
Rounding (or chamfering) the edges of the baffle (and in general, the 'rounder' the better) minimises edge diffraction, but there will always be limitations due to the material's thickness and the rounding bits that we have available. While taking things to extremes (such as a cylindrical enclosure) can reduce edge diffraction to the minimum possible, that's not always practical and the drivers always need a flat mounting surface anyway. Such enclosures can be (and have been) made, with some constructors going to the extreme of using spheres. It may be the optimum shape acoustically, but it's rarely practical (and a sphere is a cow to build unless it's made from fibreglass or similar).

Some speaker enclosures have tapered, angled tops and upper sides [ 8 ], to keep the baffle area around the tweeter as small as possible. In a few cases, the baffle is trapezoidal, with the tapered sections extending from the top to the bottom of the enclosure (or a significant part thereof). These are usually difficult to build, but this is taking the idea of 'rounding' the edges/ corners to extremes, and it can produce a good result if done properly.

With a double-thickness baffle, it will be stiffer and less resonant if the two layers are different materials. For example, plywood may be preferred for the outer surface, and MDF is then ideal for the second layer. Because the two materials are so different, resonance is minimised. If a slightly flexible adhesive is used between the two, they will be decoupled to some extent, which reduces the Q of any resonance that does exist. Tee-nuts or other metal threaded inserts can be placed between the two layers, and the cutout for the woofer/ midrange drivers can be radiused on the inside. This reduces internal diffraction.
There are innumerable materials that can be used for loudspeaker enclosures. Many commercial systems (especially small PA powered boxes) use ABS or a similar thermoplastic material. While the original setup cost is very high, enclosures can be produced rapidly and for relatively low cost. Most of these boxes include the horn flare (or waveguide) for the high frequency driver, as well as appropriate cutouts for the crossover or amplifier module. Their low cost is complemented by good appearance, and usually little or no finishing is required, other than removing any adhesive residue after the two halves are glued together.
However, while they are cheap to build and usually look quite good, the plastic almost invariably lacks rigidity, despite the curved surfaces. While these are fairly strong, they are anything but stiff, and I've not come across one that could be called rigid (regardless of the definition that the maker may use for the term). Bracing is difficult, and while most do have ribs moulded into the interior, they are nowhere near strong enough to prevent panel resonances. Fibreglass (with or without carbon fibre) is very strong, but is also very difficult to repair if the box is damaged.

There are many plastic composites available, but few (if any) are suitable for home construction. The requirement for a mould means that it's not economical for building just a couple of enclosures, and thermo-set plastics require an autoclave or large oven to cure the resin. This is clearly not practical for home construction for the vast majority of hobbyists.

A favourite for many is plywood. It's a very good material, and offers high stiffness for its weight, but it is usually also very poorly damped. If a panel resonates, it will usually do so with some vigour, and good bracing practices are essential (see next section). There are many constructors who think that MDF is 'no good', but that may be due to poor construction techniques, or even because they are used to the panel resonance(s) produced by plywood. MDF has been the material of choice for many major manufacturers for some time, and it's now possible to powder coat MDF given the proper (and very expensive) equipment.
Particle board (aka chipboard) is one material that was once used extensively by low-cost manufacturers and hobbyists. It is sub-optimal in almost all respects, and in particular the structural integrity is found wanting, so corner braces are almost always needed so the box doesn't disintegrate during handling. Particle board can be obtained with high-grade real wood veneer, and while this makes the finished item look better, it's still fairly low-strength. If a veneered finish is used, this pretty much eliminates the possibility of using radiused edges, so edge diffraction will be greater than desirable. In general, particle board has very little to commend it, veneered or not. Attaching drivers and connection panels is irksome, because the screw holes will become useless after only a few insertion/ removal attempts. The use of Tee nuts or similar is essential, and even they should be glued in place or they may fall out during assembly (or disassembly should changes be needed).
Many manufacturers are now using advanced composites, which let them create any shape relatively easily. Cellulose reinforced resins and 'exotic' plastic resins are common, but the requirement for moulds to create the finished shape (and an autoclave to cure the resin) means that these are generally not suited to the DIY approach. It's not impossible of course, but even getting the materials in small quantities may prove difficult. 'Traditional' materials will almost always be the best choice for DIY, because an enclosure can be built using only basic hand and power tools.
Finishing is another matter entirely, and that is not covered here. A proper spray booth and an extremely dust-free environment are essential for the classic 'piano black' finish, or any other high gloss finish. Less labour (and equipment) intensive finishes are more common in the DIY sector, although there are no doubt some who will be able to achieve very high quality surfaces with a well equipped workshop. Ultimately, the final finish depends on what you can achieve within your budget and with the tools you have to hand.
Bracing and damping are the most critical parts of any enclosure. Bracing is necessary to increase the stiffness of the panels, and it has the side benefit of forcing any resonances to higher frequencies. High frequencies have less acoustical power, and are easily absorbed by the damping material used - provided that it is selected for its effectiveness. Fibreglass (as used for home insulation) is very good, but it shouldn't be used in vented enclosures because tiny glass fibres may be ejected from the port, and these should not form part of the atmosphere of the listening room.
+ +As noted earlier, braces should be asymmetrical, and not simply pieces of timber placed neatly around the inside of the cabinet. The greater the asymmetry, the less chance there is of creating sub-panels with the same resonant frequency. The effectiveness of the bracing can be tested with an accelerometer (see Project 181 - an audio accelerometer for speaker box testing). This will tell you just how much a panel vibrates, and at what frequency (or frequencies). If you find a panel that appears to have too much vibration, additional bracing will be necessary to reduce it to a level you find acceptable.
+ +Note that the P181 article also shows screen captures of vibrations measured on a test box I have in my workshop, and includes a number of ideas that you can use to create strong, non-resonant enclosure panels. It's not particularly detailed, and it was my own experiments measuring panel resonance that led to this article being written. It's a complex field, and while some resonances are 'benign', many others are quite the opposite.
Braces should be made from rectangular hardwood (e.g. 50 × 25mm/ 2 × 1 inch), and always glued firmly in position with the short edge against the panel. This provides much greater stiffness than the alternative mounting. Ideally, they will also be screwed (or nailed if you must) to ensure a firm bond while the glue sets. Other than the normal bracing that you'll often use where panels meet, the braces should ideally be at an angle (with no two angles the same), and they don't need to extend right into the corners, as the corners are extremely stiff already.
+ +More substantial bracing is needed for the baffle, which should ideally be double thickness to withstand the momentum of the diaphragm (for the bass driver in particular). Braces between the baffle and the rear of the enclosure also help to prevent vibration. Many construction articles show 'window frame' bracing, which can certainly work, but these are incredibly difficult to install at the odd angles that can be very helpful in ensuring that no two panels (or sub-panels) have the same resonant frequency. The left drawing shows asymmetrical bracing, where each sub-panel is a different size. The right drawing has four more-or-less identical sub-panels, and they will all resonate at roughly the same frequency. This is usually unwise, but it may be alright for smaller enclosures where the resonant frequencies are all well above the highest output from the midrange.
+ +
Figure 10.1 - Bracing, Right And (Usually) Wrong
Remember that it's the outside of your enclosure that needs to look good. The internal construction with angled braces and odd shapes is not visible, and should be designed for rigidity and performance, not appearance. Deadening materials (e.g. bitumen tiles, heavy felt or other mass-damping treatment) need to be very well bonded to the interior of the treated panels so they cannot move, rattle or fall off. All internal wiring has to be secured properly to prevent rattling as well, because it can be very difficult to correct after the box is sealed up and you only have access via the speaker cutouts.
+ +If the back (for example) is made removable, then I strongly recommend that it be secured with metal thread screws, with 'tee nuts' or some similar threaded metal insert. Wood screws can't be inserted and removed more than a few times before the thread cut into the timber starts to disintegrate (especially true with MDF or particle board!). This also applies to the driver mounting screws - wood screws are generally a poor way to mount the drivers. You can also use a metal bar, ring (for speaker drivers) or angle with threaded holes, provided it's well attached and doesn't vibrate or rattle. Suitable gasket material is essential to stop whistling noises as air passes through any small gaps. These gaps (if present) can also adversely affect the performance of tuned enclosures, because they represent losses that reduce the effectiveness of the tuning. It may seem counter intuitive, but metal thread screws work very well in holes tapped into hardwood, provided it really is hard! Some Australian hardwoods are so hard that they can destroy a drill bit, and these take a tapped thread very nicely indeed.
The type of acoustic damping material used is a matter of personal choice. Fibreglass is very good, but isn't suitable for vented boxes, as glass fibres may be ejected from the vent. Most suppliers stock damping materials, and to be effective they should be coarse to the touch, so there is considerable friction between the fibres to absorb as much energy as possible. Foam is generally not suitable, because a) it doesn't usually work very well, and b) it tends to disintegrate after a few years. Foam surrounds were once common for woofers, but eventually the foam gives up and the surround has to be replaced or the driver scrapped. Damping materials need to be as acoustically absorbent as possible.
+ +There is much disagreement as to whether vented boxes should have damping material or not. My view is that it's essential, because without it there will be excessive upper bass and midrange energy bouncing around inside the cabinet. This can often be heard through the vent, so if you can hear anything that isn't at the tuned frequency coming out of a vent, the cabinet needs damping. Another technique that can be used is to set up diffusers within the box. These will be different heights and widths, and spaced at 'irrational' intervals. Damping material is still necessary, but you may find that you need less of it if you have effective diffusion. Upper bass and midrange can also be deflected with an internal angled brace, so that the energy is directed towards a well damped section of the enclosure. Bass (which has long wavelengths) will not be affected. Remember that anything approaching ¼ wavelength at any frequency can be your friend or your enemy, depending on how it's implemented.
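To put a number on the ¼-wavelength point made above, a dimension of L metres behaves as a quarter-wave resonator at f = c / 4L. The sketch below is a minimal illustration - the function name and the 650mm example depth are my own, and c = 343m/s assumes air at roughly 20°C:

```python
# Quarter-wave frequency for an internal enclosure dimension (illustrative only).
# A dimension of L metres is 1/4 wavelength at f = c / (4 * L).

C_AIR = 343.0  # speed of sound in air, m/s (approximate, at ~20 degrees C)

def quarter_wave_freq(dimension_m: float) -> float:
    """Frequency (Hz) at which 'dimension_m' equals a quarter wavelength."""
    return C_AIR / (4.0 * dimension_m)

# Example: a 650 mm internal depth acts as a quarter-wave resonator at ~132 Hz,
# well within the upper-bass region that damping material must deal with.
print(round(quarter_wave_freq(0.65)))  # -> 132
```

The same arithmetic, run for each internal dimension, quickly shows which panel spacings will support audible standing waves.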
+ +It's almost always necessary to add braces from the baffle to the rear of the enclosure, and also between the sides. These need to be attached very firmly, because the stresses can be quite high at high power levels. While it would be 'nice' if these braces could be angled so that remaining panel resonance(s) are different frequencies, this is usually impractical for a number of reasons. Some people have used braces from the rear of the woofer (or mid-woofer), but this isn't easy to get right, and can't be used easily if the driver has a vented rear polepiece.
+ +I suggest that you also have a look at the Small Satellite Loudspeaker System design that was described back in 2007 (it's a 3-part article). The bracing is done with aluminium 'U' section (25 × 25mm, 3mm thick), which is much stiffer than MDF and most timber. It also uses very little of the internal volume, but a very reliable method of gluing is necessary because aluminium has an annoying habit of oxidising itself (much like anodising, but thinner), and the layer of oxide can creep under the adhesive and it may 'let go' after a few years. If done well (with a proper two-part epoxy - not the 5-minute stuff) it should stay put for longer than you'll use the speakers.
Finally, you have to decide whether you will use spikes (for floor standing enclosures or at the base of the stand), or whether you will use a stand for smaller speakers. It's usually preferred that the tweeter should be at eye level when seated and listening (actually ear level, but ears and eyes are at close to the same level on most people's heads). Some people love spikes and consider that any loudspeaker not so equipped must sound dreadful, while others have the opposite view. Spikes are obviously not suitable for polished floors unless they sit in little cups, and these are also an area of controversy amongst many audiophiles (just like almost everything else in the signal chain). Use what you feel suits your system the best, and you don't need to spend a fortune - gold plating does not make spikes sound better!
The price range for spikes/ isolators is quite astonishing, with prices from AU$20 for a set of eight, up to over a thousand dollars for a set of four! There are some fairly outrageous (IMO) claims made for the expensive types, but the claims are usually not backed up by any science. I (naturally) will make no recommendations one way or another, and there are so many conflicting opinions that I can only suggest that you do your homework, and decide for yourself which way you want to go.
Stands will usually be selected to suit your tastes, decor and budget. Heavy stands add mass to the system, making it less likely to move with woofer excursion, and it's important that the stands don't have any audible resonance. While it's unlikely that resonance will be audible (it will be hard to excite any resonant mode unless the boxes are flimsy to start with), sturdy and acoustically 'dead' stands will give some peace of mind. Some provide cable management and/ or the ability to be sand-filled to eliminate (or at least damp) resonant frequencies.
+ +Ideally, there will be a layer of felt, non-slip rubber or sound deadening material between the cabinet and the stand to ensure there can be no rattles at any frequency. Beware of systems that are top-heavy if you have small children - no-one wants to see their offspring crushed by a 100kg speaker box! Some have the ability to be permanently attached to the loudspeaker, while others are just intended for the enclosure to sit on the stand with no attachment. Personally, I'd avoid that, but it depends on the stand, the loudspeaker and your circumstances.
As with so much in audio, there are as many opinions as there are authors, and just because a few people agree with one idea or another does not make it reality. Something that works well in one environment isn't necessarily suitable for your needs, and in some cases the 'product' offered is nothing more than snake-oil, and won't achieve anything useful at all. It's up to the constructor to work out what works in the specific environment where the speakers will be used. For example, using ultra-hard (perhaps tungsten tipped) spikes on a tiled floor is probably unwise. Along similar lines, titanium spikes won't 'transform' the sound, despite the (considerable) cost - a set of three can be obtained for as little as €2,199 (about AU$3,530 at the time of writing!), although some are available at an ever-so-slightly less insane price. Personally, I completely fail to see the point of buying a set of spikes that cost more than a set of very decent drivers (but they do come in a padded carry-case). I'll let you be the judge as to whether this qualifies as snake-oil.
One of the more vexing problems you'll face is to determine the (inside) dimensions based on a known volume. For example, your loudspeaker design program may indicate that an internal volume of 14.5 litres is optimum, and you allow an extra 500ml (0.5 litre) for bracing. This gives a total of 15 litres. You can try messing around (for quite a while) to work out the dimensions by trial-and-error, but there's an easier way. If you obtain the cube root of 15 (³√15) you get 2.466 (2.47 is quite close enough), with the answer in decimetres - something I generally avoid. 1 decimetre is 100mm or 0.1 metre. Obtaining the cube root is a bit of an issue in itself, since most references omit the simple way to calculate it. You can get an idea of the gyrations that most 'maths' sites put you through from Cube Root Calculator at CalculatorSoup®.
Alternately, use my method, which is a great deal easier. '^' simply means 'raised to the power of', and for a cube root we raise our number to the power of ⅓. On most calculators, this is shown as x^y or (confusingly) y^x. Remember to include the brackets shown below or the answer will be (very) wrong! In the following, 'X' is the volume in litres ...
³√X = X^(1/3), so ...
³√15 = 15^(1/3) = 2.466
When the volume is stated in litres, the cube root is in decimetres (10cm or 100mm). I normally avoid decimetres (and centimetres) completely, but when used for volume the result is litres, which is quite convenient. Having determined the base (the cube root), which can be the height, width or depth, the next measurement is obtained by multiplying the root by the ratio (e.g. 1.618), and the final dimension is obtained by dividing the root by the ratio. We simply multiply by 100 to get back to millimetres. As a worked example, we may need the following ...
Volume      15 litres
Cube Root   2.47 dm (base dimension)
Side 1      2.47 dm                    247 mm
Side 2      2.47 / 1.618 = 1.526 dm    153 mm
Side 3      2.47 × 1.618 = 3.996 dm    400 mm
Note that 'dm' means decimetres (10 dm/ metre). A typical enclosure using these dimensions may therefore be 153mm wide, 247mm deep and 400mm high. Of course you can use other ratios if you prefer, and remember that these are inside measurements and don't account for volume taken up by bracing, crossover networks, ports or the speakers themselves. You can use the same technique if you work with the imperial system (inches, feet, etc.) but it's far less convenient. The revised calculations are up to you, as I do not intend to provide calculations in feet and inches. Using √2 (1.414) rather than 1.618 may produce a somewhat better aspect ratio, with a slightly wider baffle (assuming that the first dimension is used for depth) ...
Volume      15 litres
Cube Root   2.47 dm (base dimension)
Side 1      2.47 dm                    247 mm
Side 2      2.47 / 1.414 = 1.746 dm    175 mm
Side 3      2.47 × 1.414 = 3.493 dm    349 mm
After calculating the dimensions, make sure that you do a 'sanity check' by multiplying the three dimensions together (in decimetres). You should get very close to the number you started with; in both of the cases shown above the answer is very close to 15 litres. Provided the answer is within ≈200ml of the target you should be fine. Before you start, make sure that you add up the volume of braces, ports, etc. These are added to the required volume before you take the cube root.
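The whole procedure is easy to automate. The sketch below (function names are mine, not from any published tool) takes the target volume in litres and a ratio, returns the three internal dimensions in millimetres, and performs the sanity check. Results differ from the worked example by a millimetre here and there only because the cube root isn't pre-rounded to 2.47:

```python
def box_dimensions(volume_litres: float, ratio: float = 1.618):
    """Internal dimensions (mm) for a rectangular box of the given volume.

    The cube root of the volume (litres) gives the base dimension in
    decimetres; the other two sides are base/ratio and base*ratio.
    """
    base = volume_litres ** (1.0 / 3.0)               # decimetres
    dims_dm = (base / ratio, base, base * ratio)      # e.g. width, depth, height
    return tuple(round(d * 100.0) for d in dims_dm)   # dm -> mm

def check_volume(dims_mm) -> float:
    """Sanity check: recovered volume in litres (1 litre = 1,000,000 mm^3)."""
    w, d, h = dims_mm
    return w * d * h / 1.0e6

dims = box_dimensions(15.0)           # golden-ratio box
print(dims)                           # -> (152, 247, 399)
print(round(check_volume(dims), 2))   # -> 14.98  (within ~200 ml of target)
```

Passing `ratio=1.414` reproduces the √2 variant in the second table.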
+ +If you use imperial measurements (feet & inches), you'll have to work out the measurements differently. If the enclosure is in cubic feet, convert to cubic inches (1ft³ = 1,728in³). All dimensions are in inches. For example, 15 litres is about 0.53 cubic feet (~915 cubic inches), and the base dimension is 9.7". The metric system is (as always) far simpler.
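For imperial working, the same cube-root trick applies once the volume is expressed in cubic inches. A sketch using standard unit conversions (the function name is my own):

```python
LITRES_PER_CUBIC_FOOT = 28.3168        # 1 ft^3 = 28.3168 litres
CUBIC_INCHES_PER_CUBIC_FOOT = 1728     # 12^3

def base_dimension_inches(volume_litres: float) -> float:
    """Cube root of the volume expressed in cubic inches -> base dimension in inches."""
    cubic_feet = volume_litres / LITRES_PER_CUBIC_FOOT
    cubic_inches = cubic_feet * CUBIC_INCHES_PER_CUBIC_FOOT
    return cubic_inches ** (1.0 / 3.0)

# 15 litres is ~0.53 ft^3, or ~915 in^3, giving a 9.7" base dimension.
print(round(base_dimension_inches(15.0), 1))  # -> 9.7
```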
+ +The side that's used for the baffle is usually the smallest for most modern enclosures, but you decide which is the width, height and depth, as they all work. The aesthetics of the final enclosure is often the determining factor, but you may need to change that if you use a driver that's too large to fit onto the narrowest panel. A 200mm speaker doesn't fit on a baffle that's 150mm wide, but if the sides are 30mm thick it might just work out for you.
+ + +There really aren't many 'conclusions' that apply, because nearly everyone has differing opinions on what is 'good', 'bad' or indifferent. Ultimately, if you are building your own speakers for your use, then it only matters that you are happy with the results. There are many conflicting needs, including what you can (or cannot) deposit in your lounge room lest you incur the wrath of your 'better half'. Aesthetics always plays an important role, and if you have small children then even greater limitations may apply. Having 100kg speakers on nice stands may look great, but not if they can fall over and squish a child. You may not like using a grille (I don't), but keeping small fingers away from delicate tweeters becomes a priority.
+ +As I've mentioned in several articles, electronics (as with almost everything else) is a compromise - the 'art' is in making the compromises in such a way that the end result isn't ruined. This is almost always easier said than done, unfortunately. No-one wants to have to re-build speaker cabinets because of some fundamental error (of judgement or construction), especially since construction usually represents a considerable effort and cost. In some cases it may be possible to 'rescue' an enclosure by adding bracing or damping materials, but if you don't get the basics right, then the time, effort and materials may be wasted (and I freely admit that this has happened to me a couple of times). Consider that major manufacturers may build a number of prototypes before they get the performance they expected, but this isn't something that most hobbyists can afford. Mostly, we 'mere mortals' have to try to get it right the first time, and can't afford to generate vast amounts of scrap material in the pursuit of 'perfection'.
+ +Speakers are without doubt the most compromised of all the components that go together for a complete hi-fi system. The individual drivers are a compromise, and not always due to cost - even very expensive drivers are still compromised by the materials and the laws of physics. When multiple drivers are used together, the compromises are simply magnified, but they are even greater if you try to get everything from a single driver. Passive crossover networks are always a major compromise because they use inductors, which are the most imperfect passive components made. Yes, you can have them wound with flat silver wire to minimise resistance (and your bank balance), but they will still have self-capacitance that can cause issues. Not everyone likes electronic crossovers, even though there are far fewer compromises involved, and changes are easily made (both to frequency and level for each driver). However, you need a dedicated amplifier for each of the individual drivers.
+ +The goal of this article is (if nothing else) to give you some pointers towards reducing the (sometimes significant) effects of an enclosure that is (of course) yet another compromise. There is no such thing as a 'no-compromise' speaker box - without compromise, you have nothing at all. Even if you use the very best materials, that doesn't mean that they are without flaws. The same goes for bracing, damping and deadening materials. If you get everything right, you should end up with speakers that sound good - musical and suitable for the material that you listen to, but even then you will not get perfection. Electronics can be made easily with response that is dead flat from DC to daylight (well, not quite daylight perhaps), with distortion of all types that's difficult to measure. No loudspeaker, however expensive, can come close. Then there's the room ... getting that right is a major undertaking.
+ +To give you an idea of the time and effort that can go into building a 'nice' pair of speakers, see New Speaker Box Project - Part 1. Not that they are 'new' any more - they were built in 2001, and upgraded to ribbon tweeters about 5 years later. They are in daily use to this day, and have failed to disappoint in any way. Are they 'perfect'? Not at all, but they do sound very good with all types of music (along with video sound tracks, etc.). This is certainly not something I'd want to tackle again, especially since I'm more than 20 years older (they were made when I was in my early 50s). That doesn't mean that I won't build any more speakers, but they will be (probably considerably) smaller and less complex overall.
One particularly troubling 'claim' I saw was that "we can hear everything we can measure, but we can't measure everything we can hear". Reality is exactly the opposite. Measurement systems are accurate to fractions of a dB, and are far more revealing than our hearing. The often neglected part of our hearing is our brain, and it lies to us. If we expect to hear a difference, then there's every chance that we will hear a difference, even when there is none at all. There are several different names for this, with one being the 'experimenter expectancy effect', and it applies to everything. This is why medical tests are double blind, so neither the experimenter nor the 'victim' (experimentee) knows whether they have received the drug being tested or a placebo. Audio tests also need to be double-blind, although this is very difficult with loudspeakers. Some of the major manufacturers have set up very advanced systems to ensure that the test is as close to true double-blind as possible, but this isn't an option for most hobbyists.
An issue that's not discussed nearly often enough is the difference between a microphone and our ears (actually our complete hearing mechanism). A microphone is dumb - it cannot distinguish between the direct sound and a reflection, which is why anechoic chambers are used by some major manufacturers. As a result, microphones in a room will never give a true indication of a system's response, because the direct sound and early reflections cause 'wobbles' in the output graph that don't necessarily exist. I once ran a test and was able to measure when a coffee cup was moved - naturally, this was completely inaudible, but the mic picked up the difference quite easily. What should be equally clear is that this was an anomaly - we simply do not hear such tiny differences, because our brain knows they are unimportant!
+ +Even though this article is far longer than I intended, I trust that it helps. By necessity, it's an overview - the idea was never to describe a complete system, but to provide guidelines that I've applied based on my own constructions, tests and measurements using an accelerometer, and acoustic measurements of a great many drivers over the years. Loudspeaker construction is one of the most labour intensive (and expensive) undertakings for DIY people, and anything that helps prospective constructors to get it right has to be useful. I hope I've succeeded.
+ +Over time, our hearing will accommodate even serious response errors (this is called 'breaking in' by many snake-oil purveyors), but if the response is restored to flat (perhaps by using equalisation), it will sound completely wrong for some time - until our ear-brain combination 'breaks in' to the new response. If you have access to a decent equaliser, I offer the following challenge ...
Notch out a frequency around the middle of the frequency range (600-700Hz for example). The sound will be quite wrong for a while, but after perhaps 30 minutes you will adjust your expectations. Next, restore the response to flat, and hear the huge peak in the midrange. It will initially sound dreadful, with a 'honky' sound that quite obviously can't be right. However, keep listening for a while and that sensation goes away, and everything sounds normal again.

To say that this is confronting is putting it mildly. If you have never performed such a test it's unlikely that you'll believe it possible, which is why you must do it. Until you experience this for yourself, you are 'sucker bait' for snake oil of all kinds. People tend to think that they can remember what something sounded like 'before' and 'after', but in reality our auditory memory is limited to a few seconds! Naturally, there can be response anomalies that are so gross that we do remember them for much longer, but subtle changes are not in that category.
In closing, the hobbyist must consider that even the best speaker in the world may sound dreadful in some rooms. Even with typical furnishings, moving your head by 100mm is generally enough to affect the frequency response by as much as ±10dB [ 6 ]. This is measured by a microphone, which is completely lacking our brain's processing facilities and takes a reading at a fixed point in space. It's important to understand that we humans do not hear these extreme variations, because our ear-brain combination removes much of the interference that causes the measured response to vary so wildly. However, this does not mean that we won't hear such radical variations if they are created by the sound source - the loudspeaker. Over time we will adapt, though, and even seemingly outstanding differences can become the 'new normal'.
+ +There is one thing that we don't become accustomed to (at least not to the same extent), and that's distortion. More specifically, intermodulation distortion. If great enough, this turns everything you hear to mush - there is no definition and all clarity is lost. This is why most of the 'best' loudspeakers use multiple drivers, so the frequencies are separated into individual drivers and intermodulation is reduced. However, a system using multiple drivers must be designed properly, or it may cause more harm than good. Using more than a 3-way system is unlikely to improve matters (excluding a subwoofer, which creates a 4-way system).
+ + +In addition to the above, there are a few brand names mentioned and quite a bit of 'general research' that doesn't warrant a direct reference. Before embarking on your next speaker project, I recommend that you do your own research, and ensure that you get a balanced overview - relying on one opinion (or forum thread!) is unlikely to give reliable answers.
Elliott Sound Products - Equalisers
Equalisation (EQ) is one of the most contentious areas of hi-fi. For many years, it was expected of any preamplifier that it would have (at the minimum) bass and treble controls. There were untold variations of course, but the general scheme that ended up being used by almost all manufacturers was the 'Baxandall' topology, named after its inventor Peter J Baxandall. This arrangement is used to this day, but for audio production (as opposed to reproduction) the equalisation available is much more complex and comprehensive.
+ +The term 'equalisation' probably came from the requirements of various operators (phone, motion picture, broadcast, etc.) to get their systems back to a flat frequency response - in other words to make it 'equal' to the intended signal.
In reality, equalisation (or simply 'filtering' as it was known in the early years) has been part of recording and PA equipment from the beginning of the technology. Western Electric (whose engineering department later became part of Bell Labs) described filters (equalisers) for the telephone system to adjust the frequency response and correct high frequency rolloff in the telephone lines. Early 'tone' controls were in evidence not long after the advent of AM radio ('wireless' as it was known at the time). These were typically only able to roll off the high frequencies to make the sound more 'mellow' and reduce extraneous noise.
+ +While audiophiles the world over eschew any form of EQ, at least 99% of the recordings they listen to have already been processed with individual EQ on each channel, as well as overall EQ, compression, limiting, and other 'effects' as may be deemed appropriate by the recording and mastering engineers. However, in this article, I will discuss mainly 'user adjustable' equalisation ('equalization' for North American readers).
+ +Mixing desks for recording and live production provide extensive EQ, and no-one would be silly enough to build a mixer without it. Each channel has a comprehensive tone control network, almost always with at least two bands of parametric equalisation. The term 'parametric' refers to the fact that all the parameters of the circuit are adjustable - frequency, bandwidth (Q) and boost/ cut.
+ +Daniel Flickinger introduced the first parametric equaliser in early 1971 (US Patent 3752928 A). His design used opamps to create filter circuits that were not viable with other techniques. Flickinger's patent ("Amplifier system utilizing regenerative and degenerative feedback to shape the frequency response") shows the circuit topology that was used, and it forms the basis of parametric EQ used to this day.
+ +An earlier form of comprehensive tone control was the graphic equaliser - so-called because the slider pots described a 'graph' of the final frequency response. To be useful, a graphic EQ system needs a lot of separate filters. Octave band graphic EQ systems used 10 slide pots, with one for each octave. More expensive units had 20 sliders (1/2 octave) or 30 sliders (1/3 octave). It was common for these to use ferrite-cored inductors prior to the development of integrated opamps and the invention of the 'gyrator' circuit. A gyrator uses an opamp, resistors and a capacitor to simulate an inductor (hence the generic name 'simulated inductor').
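As an illustration of the gyrator idea: for the common single-opamp simulated inductor, the effective inductance is approximately L = R1 × R2 × C (R1 being the large resistor, R2 the small one). The component values below are arbitrary examples chosen to give round numbers, not taken from any specific design:

```python
import math

# First-order design equations for a single-opamp gyrator ('simulated
# inductor').  L ~= R1 * R2 * C, where R1 is the large resistor, R2 the
# small series resistor and C the gyrator capacitor.

def gyrator_inductance(r1: float, r2: float, c: float) -> float:
    """Approximate simulated inductance in henries."""
    return r1 * r2 * c

def resonant_frequency(l: float, c_tune: float) -> float:
    """Resonance (Hz) of the simulated inductor with a series tuning capacitor."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l * c_tune))

L = gyrator_inductance(r1=100e3, r2=1e3, c=100e-9)   # -> 10 H
f0 = resonant_frequency(L, c_tune=2.2e-6)            # ~34 Hz, one graphic-EQ band
print(L, round(f0, 1))
```

Scaling the two capacitors moves the band centre, which is exactly how the individual filters of a 10, 20 or 30 band graphic equaliser are placed.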
+ +It's often been stated that "tone controls are provided so the user can mess up the sound". In many cases this is certainly true, but it has to be considered that the end-user is perfectly entitled to mess up the sound if s/he wants to do so. This article is not about ultimate sound quality, but the various types of equaliser that are available, and how they work.
+ +It's also worth your while to browse the various circuits from the ESP projects list. There are quite a few different types of equaliser described, ranging from simple bass and treble controls through to quasi-parametric designs, graphic equalisers and fixed EQ systems for low frequency response extension for loudspeakers and subwoofers.
Note that all the circuits shown below rely on a low or very low impedance source. This can be an opamp (best), transistor emitter follower (ok) or a valve cathode follower (worst), depending on the other circuitry used. So, although input buffers are not shown, they are essential in all cases. This still applies where the input uses an inverting opamp stage, because the insertion loss of the circuit depends on a low source impedance.
The circuits below are not for construction (although you can do so if you wish, but don't expect assistance). Because they are not projects, none has been built as shown, and although all have been simulated no other tests have been done. Likewise, there's been no attempt to optimise the circuits for any particular task, so they may not be found suitable as described. I will respond to queries about projects, but I will not provide assistance to anyone to build any of the circuits shown here.
The most common fixed EQ circuit is that used for RIAA vinyl phono playback from magnetic pickups. Although there is a vast number of different topologies, the end result is pretty much the same. RIAA playback EQ provides bass boost and treble cut to match the disc cutting process. This (by design) cuts the bass response so the grooves aren't so wide as to cut into adjacent grooves, and boosts the treble as a form of pre-emphasis. Upon playback, the treble cut reduces the disc's surface noise sufficiently to produce a fairly quiet end result.
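The RIAA playback curve can be expressed directly from its three standard time constants (3180µs and 75µs poles, 318µs zero), with gain conventionally quoted relative to 1kHz. A sketch (function names are mine):

```python
import math

# RIAA playback (de-emphasis) response from its three standard time
# constants: 3180 us and 75 us (poles), 318 us (zero).

T1, T2, T3 = 3180e-6, 318e-6, 75e-6

def riaa_gain_db(freq: float) -> float:
    """Unnormalised RIAA playback magnitude in dB at 'freq' Hz."""
    w = 2.0 * math.pi * freq
    num = math.hypot(1.0, w * T2)
    den = math.hypot(1.0, w * T1) * math.hypot(1.0, w * T3)
    return 20.0 * math.log10(num / den)

def riaa_relative_db(freq: float) -> float:
    """Gain in dB relative to the 1 kHz reference."""
    return riaa_gain_db(freq) - riaa_gain_db(1000.0)

print(round(riaa_relative_db(20.0), 1))     # -> 19.3  (bass boost)
print(round(riaa_relative_db(20000.0), 1))  # -> -19.6 (treble cut)
```

The ~40dB swing from 20Hz to 20kHz shows why any RIAA preamp needs both plenty of gain and good noise performance at the bottom end.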
+ +Other common fixed equalisers are or were used with recording tape, FM broadcast, long phone lines used for radio or television distribution and a multitude of other systems. Pre-emphasis (treble boost) and de-emphasis (complementary treble cut) increase the apparent signal to noise ratio (SNR) and these have been used for many years. Pre-emphasis is used in FM broadcasts, and the receivers have a complementary de-emphasis circuit that gives an overall flat response.
+ +Fixed equalisers can also be used to allow a loudspeaker to achieve (or attempt) 'full range' from single loudspeaker drivers. One of the best known is probably the Bose 901, which uses 9 × 100mm (4") drivers and has a 'line level' equaliser that supposedly produces flat response (although it also has some tone control available). Many subwoofers use a fixed equaliser to get as low as possible even in a small enclosure.
Modern systems using DSP (digital signal processing) may also qualify as 'fixed' EQ, because after the setup process is complete there is usually no facility to adjust the relative levels. There's also a movement to apply EQ to 'correct' the speakers for the room, but this is a flawed concept for the most part, other than for frequencies below ~100Hz or so. In a nutshell, you cannot equalise a room, because most of the problems are caused by anomalies in time, and you cannot correct time with amplitude.
Fixed EQ is also used in smartphones, tablets and laptops, usually both for the inbuilt microphone and speakers. The amount and type of EQ depends on the manufacturer, but it's safe to say that it will usually be done using DSP. Some may allow applications to disable the microphone EQ (and compression) for wider frequency and dynamic range. Another form of fixed EQ is a notch filter, and these can be extremely narrow and used to remove an unwanted frequency. An example is the 19kHz notch filter used in FM receivers to suppress the 19kHz pilot tone that's used for stereo broadcasts. Notch filters can also be used to remove 50/60Hz hum from a signal without greatly affecting nearby frequencies.
The primary purpose of this article is to describe user adjustable controls, not fixed EQ systems. Therefore I shall not delve into the realm of fixed equalisers other than in passing.
The early forms of boost/cut tone control circuits were passive, and had a significant insertion loss. Because there was no active circuitry in the circuit itself, in order to be able to boost the bass or treble, the overall signal was attenuated. Simple filter circuits allowed the end user to independently set the bass and treble controls to obtain a sound that was pleasing to the listener. Accuracy was never a consideration, and the setting used was purely subjective.
Probably one of the earliest uses of equalisers for audio was to try to get decent (and intelligible) sound from early movie soundtracks [ 3 ]. It's not known if there were any equalisers used for radio broadcast, but I'd be surprised if at least some form of (perhaps fixed) filtering wasn't applied to compensate for deficiencies in the transmitter modulators and other parts of the transmission chain. There was definitely a requirement to limit the bandwidth, because AM transmission cannot be allowed to be full frequency range due to the problem of potential adjacent station interference. These don't qualify as tone controls though, because they had fixed frequency response. The same applies to 'equalisers' used to correct phone-line transmissions.
The top-cut style of tone control was standard on most mantel radios and even record players up until the late 1960s. In the valve era, it wasn't possible to include 'proper' tone controls in budget equipment because valves were expensive, and at least one triode was needed to bring the signal back to normal level. Although there were many 'high end' hi-fi systems and construction projects published in Wireless World (UK, now Electronics World), R,TV&H in Australia (Radio, Television & Hobbies) and Practical Electronics (US) and many other magazines, only the more affluent enthusiasts could afford the off-the-shelf equipment that had the latest and greatest tone controls (and other specifications to match).
There was a period where the best equipment available was expected to have tone controls. The Quad 22 preamp was an example, and that had quite sophisticated controls, featuring bass and treble as expected, but also having a switchable low pass filter (5kHz, 7kHz and 10kHz) to help reduce noise from the signal source. At that time (1950s to 1960s and beyond), nearly all preamps had tone controls, and many innovative new topologies were developed to provide more control over how the controls functioned. Some allowed for quite radical amounts of boost and cut. Up to ±20dB wasn't unheard of, but most were limited to a more sensible ±12dB or so.
When graphic equalisers were first introduced to home hi-fi systems they were usually very basic. Some had as few as five bands (each covering two octaves), and although quite limited they gave the home listener plenty of scope to mess up the sound. However, if the end result made the owner happy then that's all that really mattered. With most systems today, the inclusion of DSP (digital signal processing) allows the user to select any number of 'effects' that can ruin everything with far greater ability than anything that has come before.
Most simple tone control circuits use the simplest type of filter - resistance/capacitance (RC) networks that provide a theoretical maximum slope of 6dB/octave. Those using capacitors and inductors (real or simulated) can achieve far greater slopes, but are configured as band-pass or band-stop (depending on the pot position). Graphic equalisers come in two major formats too, with the most common types providing a variable Q (bandwidth) depending on the amount of boost and cut. The other type is 'constant Q', patented by Ken Gundry of Dolby Laboratories and further developed by Rane. These have a (more or less) constant bandwidth regardless of the amount of boost or cut.
The Langevin Model EQ-251A was the first equaliser to use slide controls, but in this case they were slide switches, not pots as we expect today. It used only passive sections, and each filter had switchable frequencies and used a 15-position slide switch to adjust cut or boost. The first true graphic equaliser was the type 7080 developed by Art Davis's Cinema Engineering. It featured 6 bands with a boost and cut range of 8dB. It used a slide switch to adjust each band in 1dB steps. Davis's second graphic equaliser was the Altec Lansing Model 9062A EQ. In 1967 Davis developed the first 1/3 octave variable notch filter set, the Altec-Lansing 'Acousta-Voice' system.
It's important to understand that there is a vast difference between the tone controls that may be used on a hi-fi or mixing console and those used in guitar or other musical instrument amps. In hi-fi or mixers, it is essential that a flat response is available, simply by setting the boost/cut controls to centre. The circuit then has no effect on the response, so what goes in comes out without change. With musical instrument amps, the situation is very different. The tone controls work in conjunction with the instrument, pickups and the loudspeakers, and the overall effect is to provide a wide range of 'tones' through the speaker that are pleasing to the musician.
For example, a guitar amp is not intended to reproduce sound, it's intended to create (produce) sound. The amplifier and speaker system form part of the instrument - one without the other is pretty much useless. Try playing a well-liked recording through a guitar amp - you will never get it to sound right. Much the same happens if a guitar is played through a hi-fi system. Even if it has tone controls, it will be difficult or impossible to get 'the sound' that a guitarist is used to hearing, and you'll probably end up with blown tweeters to add injury to insult (as it were).
Early guitar amplifiers often had no more than a 'top cut' tone control, but users wanted more. The 'tone stack' as it's generally known now was developed fairly early, but despite much searching I was unable to find out who designed the first version. The guitar amp style tone stack is only capable of providing bass and treble boost (which equates to a midrange cut). The midrange control only lifts the average level across the frequency range, and is deliberately limited so it doesn't render the bass and treble controls inoperative. In most designs, there is no setting that has a flat frequency response - all you can do is vary the amount of bass and treble boost. These circuits are always passive, and have an insertion loss of 20dB or more. Insertion loss simply refers to the amount of signal you get at the output vs. the input, with the controls set to flat or the closest to 'flat' that the circuit can provide.
A few designers over the years have used Baxandall (feedback) tone controls in guitar amps (often as magazine projects), and most qualify as bloody awful at best, unusable at worst. This isn't to say that they can't be used, but in general guitarists will not be at all happy with the end result. To anyone who has designed a guitar amp or two (or three, or ...) this comes as no surprise. Music production and reproduction are very different, and cannot be considered equal in any way. While electric guitar can be especially hard to get right, bass guitar and acoustic guitar with magnetic or piezo pickups can also be very demanding.
The simplest, most basic and least useful tone control simply provides bass and/or treble cut. These are easily created and were very common in many earlier wireless sets. Bass cut wasn't so common, but nearly all mantel radios from the 1940s onwards featured a 'tone control', which was nothing more than a variable treble cut. By varying a pot, the high frequency response could be rolled off to allow the user to obtain a 'mellow' sound that had a very restricted top-end. Even from an early age, I found that setting the tone control to the position that gave the most treble (such as it was with an AM mantel radio or similar) was far more satisfying than the muffled sound that my parents seemed to prefer.
The general principle is shown below. No boost was possible for bass or treble, simply because early radios and record players barely had enough gain to reach full volume even without any tone control, so reducing the gain to allow boost for separate bass or treble controls wasn't an option. Gain was expensive, because it required another valve stage. The important part here is that if you want to be able to boost bass or treble with a passive network, the entire signal has to be reduced so the filters can be adjusted to provide an apparent boost. Simple bass and treble cut controls are shown below, as these are the most basic of all.
These controls have the minimum possible effect on the rest of the signal, so they could be added without any gain penalty. This meant that an additional valve or transistor wasn't needed, so the cost of including them wasn't great. A couple of potentiometers, knobs, resistors and capacitors was all that was needed. With both controls set for maximum cut, the effect was to provide a signal that was all midrange - no bass, no treble, only the mid frequencies. However, if the two are combined there will be some interaction.
Note that as the controls are adjusted, they can only cut - there is no facility to boost the signal at any frequency. The treble cut control reduces the level by 6dB/octave from a turnover frequency determined by the pot position, and the bass cut control does the same. The treble control could also use a variable capacitor, but that was never practical because of the physical size of a variable capacitor with enough capacitance to be useful. It can be done easily with a capacitance multiplier, but these were never used in the valve era and remained uncommon until opamps became readily available. With the values shown, the -3dB frequency response with both controls set for maximum cut is from 177Hz to 2kHz. With the pots set for minimum cut the response is essentially flat from 30Hz to 20kHz. The circuit must be followed by a high impedance stage and fed from a low impedance.
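The turnover frequency of each first-order RC section follows directly from the familiar corner-frequency formula. As a minimal sketch (the resistor and capacitor values below are hypothetical, not the parts in the drawing):

```python
import math

def rc_corner_hz(r_ohms: float, c_farads: float) -> float:
    """-3dB turnover frequency of a first-order RC filter: f = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# Illustrative only: a 10k pot (at maximum resistance) with a 10nF cap
# rolls treble off above roughly 1.6kHz; reducing the pot's resistance
# moves the turnover frequency upwards, out of the audible range.
corner = rc_corner_hz(10e3, 10e-9)   # ~1592 Hz
```

The same formula gives the bass-cut turnover, with the capacitor in series rather than to ground.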
If you need to apply boost at any frequency, you need to accept a loss that's slightly greater than the boost allowed or incorporate a gain stage. This can be a valve, transistor, FET or opamp, depending on the era of the design. Early cut/boost tone controls were passive and could introduce a loss of as much as 20dB with the pots centred (flat response). This loss had to be made up by adding a gain stage.
The general scheme seen below is often referred to as a 'James' EQ, so called because it was first published by E.J. James [ 1 ]. You may also see it referred to as a 'passive Baxandall', but that's not correct. The design published by Peter Baxandall is active, and uses feedback to get symmetrical boost and cut. The Baxandall tone control requires an inverting amplifier stage with low output impedance to drive the filter circuits. The James circuit requires a low source impedance and high impedance load, or performance will suffer.
There are countless variations on this basic circuit. As shown, it's one of the more common arrangements and allows a nominal cut and boost of around +18dB and -20dB (it's not perfectly symmetrical). The bass and treble turnover (±3dB) frequencies are changed by using different capacitor values. Smaller caps work at higher frequencies. The bass section can use one capacitor (in parallel with VR1) or two as shown. The treble section may also use two caps as shown, vs. a single cap in series with the wiper of the treble pot.
There is a slight difference between the circuit variations. Tone control circuits must be driven by a low output impedance (cathode or emitter follower), and there is some interaction between the controls with most passive versions. A true flat position is difficult to achieve with the Figure 2 controls, and a frequency deviation of up to ±2dB is not uncommon. Note that the pots are logarithmic - linear pots do not work, but log tapers are rarely good enough to ensure front panel calibration for flat response. Insertion loss is about 20dB. The following stage must have a high impedance input, and direct coupling to the grid of the following valve (with no additional grid resistor) was not uncommon. A gain of 10 is needed to restore the level with the controls set for a nominally flat response.
The above shows a very simple EQ circuit that I devised a great many years ago for simple stage mixers and 'pre-mixers'. The idea was to provide some control, but not so much that it would get inexperienced users into trouble. The basic scheme is superficially the same as that shown in Figure 2, but the components are the same value for the 'top' and 'bottom' parts of the circuit (compare this with Figure 2). The insertion loss is small (6dB with the controls centred), and the maximum boost is limited to a little under 6dB. There is more cut available, but that only becomes apparent with the control(s) set for minimum bass or treble cut.
Response of the bass pot is shown in green, and treble in red. The pots are linear, and graphs are shown at 25% increments. Unlike the version shown in Figure 2, when the pots are centred the response is completely flat, with almost no deviation at all. There is a small deviation that can be measured, but it's below audibility (about 0.3dB with a 100k load, or 0.03dB if loaded with 1 megohm).
Interestingly, the Figure 3 circuit is almost exactly what you'd expect to see used with an inverting gain stage in a Baxandall control circuit [ 2 ]. The same values used with an inverting gain stage give perfectly symmetrical boost and cut, with a maximum of ±15dB with the values shown. This type of control is shown next, and was very common in home hi-fi systems and mixing consoles. The circuit is seen below, using the exact same component values as shown in Figure 3, but with the addition of an opamp gain stage.
This type of circuit is possibly the most popular of all time. Some manufacturers have provided switchable capacitors so the response can be tailored to the user's preferences. There are variations with a midrange control, which is achieved by adding a third pot that has a cap in parallel (like the bass control) and another (smaller) cap in series with the wiper (like the treble control). When a midrange control is included, it's almost always fixed - to make it variable requires switched capacitors.
You will often see the version shown in Figure 5B, using a pair of caps for both bass and treble. The response is similar to the version shown in Figure 5, but there are some subtle differences. There's a little more 'disturbance' in the midrange with the 5B circuit, and it has a little more boost for both bass (~1dB at 28Hz) and treble (~3dB at 20kHz). Cut is (almost) identical, but the frequencies are shifted slightly because the caps aren't exactly half/double those shown in Figure 5. The alternative 5B circuit uses twice as many capacitors, and IMO is inferior to the Figure 5 circuit. Essentially it's a symmetrical version of the Figure 2 network, enclosed in a feedback loop.
The generic term for equalisers with the type of response provided by James and Baxandall tone controls is 'shelving EQ', because the bass or treble is boosted by a set amount, but then returns to being almost flat above or below a frequency that's determined by the setting of the control pot. You can see this in Figure 4: the boost and cut level out below 200Hz and above 4kHz. Because the Figure 3 circuit is passive and has no feedback, at maximum cut the bass doesn't level out until about 60Hz, and the treble doesn't really level out at all. Once feedback is applied, this changes as shown in Figure 6.
Colours and pot increments are the same as used for Figure 4. You will notice that boost and cut are now (almost) perfectly symmetrical. Remember that these plots used the exact same tone filters as shown in Figure 3, and the only difference is the addition of feedback.
The full performance and symmetry of Baxandall circuits was difficult to realise with valve circuitry, because getting a very low output impedance from the drive and feedback stages was extremely difficult. As is common with all valve circuits, the tone control networks were high impedance, using 100k or higher for pots, and with other components scaled to suit. It became easier when transistors were used, and was virtually automatic when opamps were used as the source and amplifying devices. Perhaps some of the nostalgia for valve circuitry was the rather 'sloppy' response obtained due to relatively high impedances. This can be restored (why?) by adding resistors in series with opamp outputs.
It's to be expected that some people will insist on passive controls, because they imagine that applying feedback somehow ruins the sound. This is complete nonsense of course, and there seems little point in using a vastly inferior tone control system that has no real flat setting just to avoid the 'evil' of feedback. If this approach is taken, only the Figure 3 circuit is really suitable, because a flat setting is possible and dubious (at best) log pots are not needed.
In the interests of completeness, the above shows the general arrangement used to add a midrange control to a Baxandall network. The Q is low (about 0.5) and you can't adjust the frequency easily, but it does add some extra functionality that might be useful for a musical instrument amp. While you may see it added in many circuits on the Net, it's of somewhat dubious value. Because it's not easily adjusted for frequency (C2 and C3 can be changed, optionally with switches), due to the low Q most users are likely to find it doesn't really do what they need. To increase the 'midrange' frequency, reduce the value of C2 and C3 and vice versa. The values will normally be the same, but that's not essential.
Calculating the component values to set specific frequencies is possible, but it's far from precise. The controls are always somewhat interactive, and because they use pots the resistance is variable. Texas Instruments has shown some formulae in various datasheets, and while they work they aren't particularly accurate and are simplifications. For the most part, it's far easier to use the data from an existing design and just scale the capacitor values. If the capacitance is doubled, the frequency is halved and vice versa. Intermediate values can be estimated quite well. For example, if the capacitance is increased/reduced by a factor of 1.5, the frequency is changed by the same factor.
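The scaling rule above can be captured in a few lines. This is a sketch only - the function name and the capacitor values are illustrative, not from any particular design:

```python
def scale_caps(caps_farads, f_old_hz, f_new_hz):
    """Shift a tone network's frequencies by scaling its capacitors.
    Frequency is inversely proportional to capacitance, so every cap
    is multiplied by f_old / f_new (doubling C halves the frequency)."""
    return [c * (f_old_hz / f_new_hz) for c in caps_farads]

# Hypothetical example: move a network's corners down an octave (1kHz -> 500Hz).
# Every capacitor is simply doubled; the resistors and pots are left alone.
new_caps = scale_caps([10e-9, 100e-9], 1000, 500)
```

Scaling every capacitor by the same factor preserves the shape of the response, which is why working from a known design is so much easier than deriving values from scratch.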
These filters all have low Q (generally less than 0.5), and the frequency for ±3dB of boost/cut is not fixed. It varies with the amount of boost/cut, so attempting to create a formula is more trouble than it's worth. If you use a simulator you'll be able to get accurate results, but ultimately it's about the sound. If you get the sound you want then that's all that matters. This is particularly true for guitar (and other musical instrument) amps, but it also applies for hi-fi.
While the basic shelving filters described above are fine for controlling bass and treble, they cannot affect the midrange or a troublesome frequency elsewhere in the audio band. In many cases bass and treble controls don't even work well for bass and treble. For example, if you want to get a 'fat' kick drum sound you might add some bass, but you don't want or need to keep boosting all the way down to a few Hertz. Look at Figure 6 - if you have 10dB of boost at 70Hz, you have slightly more than that at 40Hz and it's still there at 20Hz. A peaking filter can be tuned to 70Hz (for example) to give a satisfying 'thump' from the kick drum, but the level returns to normal (towards 0dB gain) as the frequency increases or decreases.
Graphic equalisers have a series of bandpass filters, with each frequency band controlled by a slide-pot. Each frequency can be cut or boosted, and uninformed fiddling can cause problems. There was a brief period where stereo graphic EQ was considered a 'must' for what's probably better known as 'low-end hi-fi' - comparatively cheap systems that made up for the lack of overall quality by including extras that made the buyer believe s/he was getting a good deal.
This general form of equaliser was developed in the early 1970s, and inductors were used as part of the frequency selective networks. Inductors are comparatively large, require many turns of wire and a magnetic core (steel laminations or ferrite). They are expensive to make, and nearby magnetic fields can induce hum into the windings.
Graphic EQ was therefore expensive and quite bulky until the invention of the gyrator (a 'simulated' inductor, using an opamp to invert the action of a capacitor). Although the gyrator was proposed in 1948 (by Bernard Tellegen, a Dutch engineer who also invented the pentode valve), practical realisation wasn't possible until opamps became readily available. Very basic gyrators can be made using only a transistor, but their performance is sub-standard. I don't know of anyone who has tried to make a gyrator using valves because it would not be sensible. The active element of a gyrator is a non-inverting unity gain buffer, which should have high input impedance and low output impedance.
Gyrators allowed designers to create large numbers of 'inductors' very cheaply compared to true inductors, and gyrators are unaffected by magnetic fields so induced hum was no longer a major problem. The general form of a graphic equaliser is shown below, but using inductors for clarity. It doesn't matter if the inductor is 'real' or simulated, it has exactly the same effect. Note that the value of the resistor (R2, R3, etc.) is often the winding resistance of the inductor, and/or an external resistor used to ensure that the series resistance of each tuned circuit is identical. In the following drawing, only the first 5 octave band filters are included. The remainder follow the standard octave frequencies. Industry standard frequencies for the three most common equalisers are ...
Octave (10 band):
31 | 63 | 125 | 250 | 500 | 1k0 | 2k0 | 4k0 | 8k0 | 16k

1/2 octave (20 band):
31 | 44 | 63 | 87 | 125 | 175 | 250 | 350 | 500 | 700 | 1k0 | 1k4 | 2k0 | 2k8 | 4k0 | 5k6 | 8k0 | 11k | 16k | 20k

1/3 octave (30 band):
25 | 31 | 40 | 50 | 63 | 80 | 100 | 125 | 160 | 200 | 250 | 315 | 400 | 500 | 630 | 800 | 1k0 | 1k2 | 1k6 | 2k0 | 2k5 | 3k2 | 4k0 | 5k0 | 6k3 | 8k0 | 10k | 12k | 16k | 20k
The frequencies shown above are pretty much agreed upon worldwide, and have been adopted by all manufacturers making graphic equalisers. The 1/2 octave and 1/3 octave frequencies are often extended above and below those shown, and may include 20Hz and/or 25Hz, as well as 20kHz. The drawing below shows ideal values rather than those readily available, purely for convenience. The Q of each filter is about 2; extreme accuracy is not really possible, and fortunately isn't necessary. The circuit below must be driven from a low impedance. Normally, there would be a unity gain buffer to drive the input. It isn't shown but must be included unless the previous stage is an opamp or other very low impedance source.
Without the frequency selective networks (C1, L1, etc.), the pot sliders simply vary the gain of the circuit and unity gain is achieved when the slider(s) are centred. When the pot wiper is close to the input (+ve input of U1), the incoming signal is attenuated (cut), and at the opposite end (shown with a + sign) the opamp has gain (boost). When each pot connects to a tuned circuit, only the frequencies passed by the tuned circuit are affected. In circuits developed after ca. 1970 or so, the inductor is replaced with a 'simulated inductor' - a gyrator.
The tuned circuit filters have a minimum impedance at a particular frequency as shown, so the pot affects only those frequencies passed by the filter. A series resonant circuit has a minimum impedance at the resonant frequency, and this forms the basis of most simple graphic equalisers. The scheme shown above gives an equaliser whose actual Q (as opposed to the theoretical value) varies with the slider setting. At low boost or cut settings the bandwidth is much wider than expected.
One thing that is fairly difficult to find explained in simple terms is just how to determine the inductance and capacitance needed for a specific Q. The values depend on the load (series) resistance, which in the above circuit is 470 ohms. The impedance (X) of the cap and inductor must be scaled to the load resistance (RL), and the following formulae apply ...
X = RL × Q
C = 1 / ( 2π × X × f )
L = X / ( 2π × f )
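The formulae above are easily turned into a small helper. As a sketch (the function name is mine, and the 1kHz band with the 470 ohm load and Q of 2 is taken from the circuit description above):

```python
import math

def rlc_for_band(f_hz, rl_ohms, q):
    """Series L and C for one graphic-EQ band, from the formulae above:
    X = RL * Q, then C = 1/(2*pi*X*f) and L = X/(2*pi*f)."""
    x = rl_ohms * q
    c = 1.0 / (2.0 * math.pi * x * f_hz)
    l = x / (2.0 * math.pi * f_hz)
    return l, c

# The 1kHz octave band with the 470 ohm load resistor and Q = 2:
l, c = rlc_for_band(1000, 470, 2)          # roughly 150mH and 169nF
f0 = 1 / (2 * math.pi * math.sqrt(l * c))  # sanity check: resonates at 1kHz
```

Note that the resulting inductance (around 150mH for the 1kHz band, and far more for the low bands) shows exactly why real inductors were abandoned in favour of gyrators.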
The Q (which determines the bandwidth) of each filter depends on the number of sliders used (10, 20 or 30). A 1/3 octave graphic EQ needs higher Q filters than a 1 octave band type. Q is defined as the centre frequency divided by the bandwidth, and a 1 octave filter requires a Q of 2. A 1/3 octave EQ system needs filters with a Q of 4.31 (4 is close enough for an equaliser). You may well ask why the Q isn't constant, and the answer is quite simple.
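Using the common definition of Q as centre frequency divided by bandwidth, with the band edges placed half the band's width either side of the centre on a logarithmic scale, the 1/3 octave figure quoted above falls straight out. This is a sketch under that assumption; practical designs may use somewhat different values (such as the Q of 2 quoted above for octave-band types):

```python
def band_q(octave_fraction):
    """Q = f0 / bandwidth for a filter spanning 'octave_fraction' of an
    octave, with band edges at f0 * 2^(n/2) and f0 * 2^(-n/2), so that
    bandwidth = f0 * (2^(n/2) - 2^(-n/2)) and f0 cancels out."""
    n = octave_fraction
    return 1.0 / (2.0 ** (n / 2.0) - 2.0 ** (-n / 2.0))

q_third = band_q(1.0 / 3.0)   # ~4.32, in line with the 4.31 quoted above
```

As the text notes, 4 is close enough in practice, since the effective Q varies with the slider position anyway in this type of circuit.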
When the pot is near the centre position, the load on the tuned circuit is no longer 470 ohms, it's 470 ohms plus the equivalent resistance of the pot and the feed resistors (2.7k as shown). As the pot position varies, so does the Q, and therefore the bandwidth changes as well. This type of circuit cannot provide a constant loading on the tuned circuit, so cannot provide a constant Q. The effect can be reduced slightly by using lower value pots, but that increases the noise gain of the opamp, so the circuit will become noisy even with a low noise opamp. The opamp inputs are not virtual earth types, and both have a relatively high impedance.
A constant-Q graphic equaliser suitable for subwoofer equalisation is described in Project 75, and this arrangement was first published by Bob Thurmond [ 5 ] and is shown next. Commercial units were pioneered by Rane [ 6 ], but using a different circuit.
It's important to understand how this circuit differs from the previous version. The most obvious difference is that the opamp inputs are both virtual earth (close to zero impedance), and the band pass filters are not RLC types as shown above. You may see a variety of different active bandpass filters used. Typical types are multiple feedback, twin-tee or bridged-tee, or even state-variable. It is possible to use RLC filters (resistance, inductance, capacitance), and gyrator based filters can be used with some extra circuitry, but the other filter types remain a simpler and better choice.
You can see that the pots control the output from each band-pass filter (BPF), and the multiple outputs are summed along with the input signal at the input to U1 (signal cancellation or cut) or U2 (signal augmentation or boost), depending on the pot position. When the pot is centred, the signal to U1 and U2 is identical, so it cancels and there's no boost or cut for that frequency. The Q remains constant because only the output level from the appropriate BPF is varied, and the filter's load doesn't change.
Above you see the response of two equalisers, one configured as a traditional graphic EQ and the other configured for constant Q. Assuming equal bandwidth for each, both will have the same response at maximum boost or cut, but the situation is quite different at any setting below maximum. The setting for constant Q vs. variable Q is shown for a pot setting of 75% (50% boost). Cut response is the same (but results in a dip of course).
The general topology of a gyrator and band-pass filter are shown above. The effective inductance of a gyrator is simply the product of the three components (R1, R2 and C1). When the three are multiplied together, the answer is the inductance in Henrys. Equivalent winding resistance is the value of R1. Gyrators are covered in greater detail in the article Active Filters Using Gyrators - Characteristics, and Examples. The multiple feedback bandpass filter is a simple and fairly straightforward design, although calculating the values can be very irksome. They are described in detail in Project 63, and there's even a calculator program available that you can use to work out the component values for you. The multiple feedback filter is not easily tuned, and when variable frequency is needed the choice is between the various bandpass filters covered in the section about parametric equalisers.
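The gyrator inductance rule is simple enough to show directly. The component values below are hypothetical, chosen only to illustrate the product:

```python
def gyrator_inductance_h(r1_ohms, r2_ohms, c1_farads):
    """Effective inductance of a gyrator: the product of R1, R2 and C1
    (component names as in the text), giving the answer in Henrys.
    The equivalent 'winding' resistance is simply the value of R1."""
    return r1_ohms * r2_ohms * c1_farads

# Illustrative values: 1k, 100k and 100nF simulate a 10H inductor
# (with ~1k equivalent winding resistance) - far larger than is
# practical to wind, and completely immune to hum fields.
l_sim = gyrator_inductance_h(1e3, 100e3, 100e-9)
```

This is why gyrators made 10 and 30 band equalisers economical: a large 'inductance' costs one opamp section and three passive parts.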
While a graphic EQ is certainly a very flexible way to control the frequency response, they need a large number of slide pots and therefore take up a lot of room. This makes their use on mixing consoles (for example) limited to perhaps a couple of graphic EQs for the main or 'FOH' (front of house) outputs. There simply isn't enough space on each channel strip to include one for each channel, and simple bass and treble controls are too limited.
There is one other type of graphic equaliser that deserves at least a mention. As far as I'm aware, only a few manufacturers ever produced them, one being IRP (Industrial Research Products). This form of equaliser uses an analogue delay line, typically made up of a number of all-pass filters (phase shift networks). The delayed outputs are then fed to a very complex resistor matrix, and finally to summing amplifiers for each band.
These are superficially simple, but in practice are very complex. A 31 band (1/3 octave) version needs well over 100 opamps, and a resistor matrix using hundreds of resistors of different values. Even if I had a complete circuit, it would be so large as to be impractical for publication (and I'd need permission to do so). I don't have much useful information on these, but the technique is certainly interesting, based on the small amount of information I have available.
To get EQ that can be tailored to exact needs without occupying too much space on a channel strip requires a parametric equaliser, discussed below.
Simple bass and treble controls can benefit from having adjustable frequencies. It's no longer possible to use the Baxandall topology, so it's done using various other techniques. The easiest is to use the same basic arrangement as used in common graphic equalisers. There have been many schemes used, but most use a variable frequency high and low pass filter in a feedback network. A few (including some that I designed) use an opamp to create a variable capacitance (a capacitance multiplier), and others have used a variety of circuits. It would be silly to try to include them all, so only two variants are shown.
+ +The first is fairly conventional, and there are quite a few references to very similar circuits on the Net. The circuit consists of two inverting gain stages and two unity gain buffers. The latter isolate the boost and cut controls from the frequency networks, and are essential to prevent unwanted interactions. VR1 changes the bass frequency from 200Hz to 740Hz at the ±3dB point. VR2 does the same for the treble, from 460Hz to 1.4kHz, again at the ±3dB frequencies with full boost or cut.
+ +The above circuit works well and is not critical, and component values can be changed to increase the frequency range or provide more (or less) boost and cut. As a general purpose tone control, it's far more flexible than the standard Baxandall circuit, but gives almost identical results for any given frequency setting. A better solution uses a variable gyrator for the low frequencies and a variable capacitance multiplier for high frequencies. This has the advantage that the bass control can be switched from shelving to peaking, and additional sections can also be added.
+ +The gain and boost/cut circuitry is identical to that used for a conventional graphic EQ, and is easily expanded as described in the Project 28 page. As a parametric equaliser it's not wonderful, but it's still surprisingly effective. If you only want variable frequency bass and treble controls it's better than the circuit shown in Figure 12 because you get the option of shelving or peaking for the bass control. As noted above this can be especially useful for percussion (kick drum, toms and kettle drums for example). The circuit uses three unity gain buffers and one gain stage. The input buffer is not needed if the source has a low output impedance, such as from another opamp in the circuit.
+ +Be aware that the variable capacitance multiplier (U3) can be temperamental. On occasion, it may not settle properly to normal quiescent conditions (output at zero DC voltage), and it might need to be powered off and on again before it settles down. I've been unable to replicate this on the workbench, so it seems that the circuit knows when test equipment is nearby. Mostly it works perfectly - I have one in an equaliser I use for my workshop system that's never missed a beat in over 20 years.
In shelving mode, the circuit works almost identically to that shown in Figure 12. The range of each frequency control can be changed by using a higher (or lower) value pot, and the frequencies are changed by replacing C1, C2 and/or C3 with values that provide the desired ranges. For the peaking filter section (C1 in circuit), the ratio of C1 and C2 determines the resonant circuit Q (C2 determines the inductance of the gyrator). Normally there is an optimum ratio (typically around 10:1) for C1 and C2, but because the inductance is variable via VR3, the optimum ratio can't be maintained.
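The gyrator arithmetic is easy to sketch. A basic opamp gyrator simulates an inductance of L = R1 × R2 × C2, which resonates with the series capacitor C1. The component values below are assumed purely for illustration (the schematic's actual values aren't reproduced here), so the numbers are indicative only:

```python
import math

# ASSUMED example values - not the values from the Figure 13 schematic
R1 = 1000       # ohms, gyrator 'series' resistor
R2 = 22000      # ohms, gyrator resistor to ground (varied by VR3 in practice)
C2 = 100e-9     # farads, gyrator capacitor (sets inductance)
C1 = 1e-6       # farads, series resonating capacitor

L = R1 * R2 * C2                             # simulated inductance (H)
f0 = 1 / (2 * math.pi * math.sqrt(L * C1))   # resonant frequency (Hz)
Q = math.sqrt(L / C1) / R1                   # series-resonant Q, R1 dominant

print(f"L = {L:.2f} H, f0 = {f0:.0f} Hz, Q = {Q:.2f}")
```

Varying R2 (as VR3 does) changes L, and with it both the resonant frequency and the Q - which is exactly the behaviour described for the peaking mode.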
+ +There is one thing that the Figure 13 circuit does that is not especially desirable. When in peaking mode, the Q changes depending on the setting of VR3. At very low frequencies the Q is higher than at higher frequencies. This variable Q is either a benefit or a curse, depending on what you want to do. With the values shown, the Q ranges from 9.5 to 2.0 (at maximum boost or cut, and at 35Hz and 150Hz respectively). At settings below the maximum cut or boost the Q is reduced. It's normal for this type of equaliser, and if you need a circuit that has consistent Q you need a proper parametric EQ as described next.
+ + +The most flexible EQ that occupies the least space is a parametric equaliser. Provided the bass can be switched from shelving to peaking mode (and many can), you can insert a peak or dip anywhere you like to get the sound you want. Parametric EQ ranges from simple fixed bandwidth types (such as the one shown in Project 28) through to fully variable 'true' parametric equalisers based on state-variable filters. Simple versions like the P28 circuit provide no control over the bandwidth (Q), but are nonetheless very flexible and can perform most 'sound-shaping' EQ tasks very well. Variable Q is needed if you happen to have a requirement to notch out a particular troublesome frequency. High Q (narrow bandwidth) peaks are rarely needed and if used can create problems with the final mix.
+ +It's almost unheard of to use parametric EQ for a home system. These equalisers are not easy to use well, and should only be used by those who understand how they work and what they do. If adjusted incorrectly, an inexperienced user can not only mess up the sound, but may also kill tweeters if substantial boost is applied close to the crossover frequency, forcing the tweeter to handle too much energy at its lowest recommended frequency. Home systems aren't just operated by adults, and kids like to experiment! Hi-fi manufacturers assume (not unwisely) that the average user would be confused by all the options provided, and most 'high-end' equipment offers no form of tone control at all.
+ +As with graphic equalisers, a parametric EQ can be configured for variable or constant Q. Each requires a different approach to the circuit. There are countless variations for parametric equalisers, but the best all-round filter network is the state-variable topology. This is a relatively complex circuit, but has the advantage of being easily adjusted both for frequency and Q. Demands on the opamps are fairly modest and comparatively cheap opamps can perform well.
+ +A simpler version uses a Wien bridge as the variable frequency element. These really qualify as 'quasi parametric' EQ, because the Q is fairly low (around 1.3) and can't be changed. However, they are well behaved and easily tuned. A variable frequency stage needs only one opamp in its simplest form, and the tuning network is completely passive. It might seem unlikely that this would be useful, but the Q is actually much greater than that of a 3-band Baxandall tone control (which only manages a Q of about 0.5), and it can be tuned with a dual-gang pot. This network is not suitable where high levels of boost or cut are needed, as the circuit will oscillate if the gain is too high (set by R6). Without R6 the maximum boost and cut is 9dB and with the values given the maximum is 12dB. The performance can be improved a little by adding a buffer between the pot and the Wien bridge network, but in general the benefit does not outweigh the added expense. There's a lot more info on this topology in Project 150. The Wien bridge network consists of VR2 (A & B), R2, R3 and C1, C2.
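The tuning range of an equal-C Wien network follows directly from f = 1 / (2πRC), where R is the fixed end-stop resistor plus one gang of the pot. As a rough sketch (the values below are assumed for illustration, not taken from the schematic):

```python
import math

# ASSUMED example values - an equal-C Wien network tuned by a dual-gang pot
C = 22e-9        # farads (C1 = C2)
R_min = 1000     # ohms: end-stop resistor alone, pot at minimum
VR = 10000       # ohms: one gang of the dual pot

f_max = 1 / (2 * math.pi * R_min * C)          # pot at zero resistance
f_min = 1 / (2 * math.pi * (R_min + VR) * C)   # pot at full resistance

print(f"tuning range {f_min:.0f} Hz to {f_max:.0f} Hz "
      f"(ratio {f_max / f_min:.0f}:1)")
```

With these values the range is roughly 660Hz to 7.2kHz, an 11:1 ratio set entirely by the pot-to-resistor ratio; the end-stop resistor is what keeps the top frequency finite.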
+ +Most 'true' parametric equalisers use a state-variable filter (see State-Variable Filters for a detailed analysis). Although comparatively complex, the state variable filter gives independent control of Q and frequency. There are many variations on the scheme, but the end result is fairly similar. In the following drawing, the control section is identical to that shown in Figure 14, and the filter is simply changed from a Wien bridge to a state-variable.
+ +VR3 controls the filter Q without affecting the gain, and VR2 (A & B) controls the frequency. With the values given, the frequency range is exactly the same as the EQ in Figure 14, because the values that determine the frequency are the same. The Q can be varied from 5.3 down to 0.5, which gives a very wide control range. Note that VR1 (cut/boost) operates opposite to the way it does with the Wien bridge circuit. As shown, boost and cut are limited to 9.5dB, but this can be extended by adding a resistor from the inverting input of U1 to earth. If a 2.7k resistor is added, boost and cut are increased to 12dB.
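For the standard state-variable band-pass response, the -3dB bandwidth is exactly f0/Q, so the Q can be verified numerically from the response itself. A quick check (centre frequency assumed at 1kHz for illustration, with the maximum Q of 5.3 quoted above):

```python
import math

# Band-pass response of a state-variable filter:
#   H(s) = (s*w0/Q) / (s^2 + s*w0/Q + w0^2)
f0, Q = 1000.0, 5.3        # f0 ASSUMED; Q is the maximum quoted in the text
w0 = 2 * math.pi * f0

def mag(f):
    s = 1j * 2 * math.pi * f
    return abs((s * w0 / Q) / (s * s + s * w0 / Q + w0 * w0))

def find_3db(lo, hi):
    """Bisect for the frequency where the response crosses -3 dB."""
    target = mag(f0) / math.sqrt(2)
    for _ in range(60):
        mid = (lo + hi) / 2
        if (mag(mid) > target) == (mag(lo) > target):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

f_lo = find_3db(f0 / 10, f0)    # lower -3 dB point
f_hi = find_3db(f0, f0 * 10)    # upper -3 dB point
print(f"measured Q = {f0 / (f_hi - f_lo):.2f}")
```

The measured Q (centre frequency divided by -3dB bandwidth) comes back as 5.30, confirming the response shape matches the set Q.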
+ +Parametric Equalisers come in multiple types, and usually include variable frequency bass and treble controls, along with one, two or sometimes three bands of true parametric. Frequency ranges usually overlap, and care is needed to ensure that boost isn't used with two sections tuned to the same frequency. There is always a chance that the equaliser will clip, or the output at one frequency will be so high as to place horn compression drivers (in particular) at risk of damage.
+ +When very high Q is used, it's generally only needed to cut a troublesome frequency. High Q boost is rarely needed other than for a special effect. Because the parametric EQ is so flexible, it takes some time to get used to using it properly. Most DAW (digital audio workstation) software includes digital parametric EQ, and there are some on-line tutorials [ 7 ] that explain how the EQ should be used. One of the general tenets of parametric EQ is to "cut narrow, boost wide", referring to the Q or bandwidth of the filter(s). A high Q notch can be very useful, but boost should normally be low Q and kept to the minimum whenever possible. A high-Q boost will almost certainly cause feedback in a live sound system, and can easily damage high frequency drivers.
+ + +The 'tone stack' as it is commonly known is only suitable for guitar or other musical instrument amps. It's very difficult to know where it came from (opinions abound, but proof is hard to come by), but tone stacks are used by most guitar amp manufacturers almost exclusively. The arrangement is quite different from a traditional passive control network, and the control pots are wired in series to form a 'stack' (hence the name). They are very economical, and use the minimum possible number of parts, but the controls are usually highly interactive and there is almost always a significant midrange 'scoop' (essentially a broad notch).
+ +Since electric guitars in particular usually have a quite prominent midrange with little bass or extended treble, the midrange scoop makes up for that by boosting the bass and treble and suppressing the midrange. Varying the bass and treble controls shifts the notch or 'scoop' centre frequency and its depth. Where a 'midrange' control is included, the closest to flat response is obtained with bass and treble at zero, and midrange at maximum. A true flat response is usually impossible though. The controls are used to get a guitar sound that suits the player, and the tone controls (as well as the speaker, cabinet and power amp) are used to create sound. The amp has to be considered as part of the instrument, as most guitarists will choose an amp based on the overall sound they can get from the pairing of guitar and amplifier, and linked to their playing style.
+ +There are very wide differences between tone stacks, not only between different manufacturers but even between different models from the same maker. Most are high impedance and are designed for use with valve stages. For best performance they should be driven from a cathode follower, but in some cases even that is abandoned. While guitarists will think that the tone stack in their favourite amp is a work of art, they are really very basic and usually don't work well with sources other than guitar. From a manufacturing perspective, these are the cheapest possible options for tone control, but it just so happens that the characteristics are pretty close to ideal for the purpose. Only a few designers have strayed, and those who were silly enough to try using Baxandall (active) tone controls have never been well received. An Australian magazine once published a guitar amp using an active tone control, and it didn't go down at all well with most experienced guitar players.
+ +The two tone stacks shown above are typical only. There are many variations between different models, but all have fairly similar overall characteristics. There is no true flat setting, and the midrange control only reduces the depth of the notch, the frequency of which varies with control settings. The controls are interactive, so changing the treble will change the notch frequency and affects the bass to some degree. The bass control has less interaction with treble and mid, and the mid control is fairly subtle.
+ +Insertion loss with all controls centred is much greater with the Fender style (average about 15dB) than the Marshall (about 8dB), and the Fender circuit has more boost for both bass and treble. The notch (which varies from around 300Hz to 1kHz for both) is deeper in the Fender circuit. There is no doubt that the two circuits will sound quite different, but that doesn't mean that a good guitar sound can't be obtained from the two. Many guitarists have a preference, but that's often because a particular amp brand is preferred. There are many other guitar amps, and they nearly all use variations of the two circuits shown. It would be silly for me to even try to show all the different circuits because there are so many.
+ +The two response graphs shown above are with the controls set at 50%. Because there's often a mixture of linear and log pots, this doesn't relate directly to any knob setting. The midrange scoop is clearly seen in both traces, and this is one of the main features of tone stacks in general. I don't know of any stack that has eliminated the midrange scoop. Only the frequency and depth change.
+ +These controls are easily modified by changing cap values. There is no design process involved, it's purely a case of trial and error, and ultimately it's all about getting the desired sound. What the controls actually do to the response is secondary to what it sounds like. If it does what the player needs then it's good, if not ...
+ + +This type of equaliser is almost only ever used by DJs, and it's quite common in DJ mixers. You will rarely see it elsewhere, but if you were to build a 4-way active system based on Project 125 (a 4-way active crossover) you get this ability free. A frequency isolator is usually simply a 3-way crossover network with its outputs summed to return to a flat response. Project 153 describes a 3-band, 12dB/octave, variable frequency isolator, and if you want to see the full version please refer to the project article. The version shown below has fixed frequencies, and although this may seem quite limiting it's often as much as you are likely to need. The term itself is something of a misnomer, in that you can't really isolate the frequency bands because they have a finite rolloff. You can use 24dB/octave filters (as found in Project 125), but that's generally not necessary to get the effects needed.
+ +To be able to get a flat response without having to bypass the equaliser, the filters must use a Linkwitz-Riley alignment. If Butterworth filters were to be used, there would be +3dB peaks at each crossover frequency (410Hz and 4.1kHz) when the pots are all set the same.
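The difference is easy to demonstrate by evaluating the summed transfer functions right at the crossover frequency. The sketch below compares 2nd-order Linkwitz-Riley (Q = 0.5) and 2nd-order Butterworth (Q = 1/√2) low/high pass pairs, with one output inverted as is usual for 2nd-order crossovers:

```python
import math

w0 = 2 * math.pi * 410.0          # crossover at 410 Hz (as in the text)
s = 1j * w0                       # evaluate exactly at the crossover

# Linkwitz-Riley 2nd order: two cascaded 1st-order sections
#   LP = w0^2/(s+w0)^2,  HP = s^2/(s+w0)^2, summed as LP - HP
lr_sum = (w0**2 - s**2) / (s + w0)**2

# Butterworth 2nd order: same numerators, Q = 1/sqrt(2) denominator
den = s**2 + math.sqrt(2) * w0 * s + w0**2
bw_sum = (w0**2 - s**2) / den

print(f"LR2 sum at f0: {20 * math.log10(abs(lr_sum)):+.1f} dB")
print(f"BW2 sum at f0: {20 * math.log10(abs(bw_sum)):+.1f} dB")
```

The LR2 pair sums to exactly 0dB (in fact it sums to a pure all-pass, flat at every frequency), while the Butterworth pair shows the +3dB crossover peak mentioned above.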
+ +The circuit is simply a 3-way crossover, with the outputs summed. When all pots are set to the same level, the summed output is flat, and the pots let the user turn the level of any band up or down. As shown the frequencies are 410Hz and 4.1kHz, but they are easily changed by changing cap values. The multiple feedback filter (U2) used in the midrange circuit reduces the opamp count, because it's an inverting stage and a separate inverter isn't needed for the midrange. It's also a nuisance though, because the caps are not the same as those used in the high pass filter (U1), which makes changing frequencies more difficult. The alternative is to use a Sallen-Key filter like all the others, followed by an inverter.
+ +With the circuit shown, the gain is 0dB with the pots all set to 50%, and the summed response is flat to better than 0.1dB. The summed response is shown above with bass and treble at 50% and midrange at zero. This is just an example using fixed frequencies, but of course there are many other possibilities. This type of equaliser is not intended to correct the frequency response; it's used by DJs as an effect.
+ + +Finally, there's one last tone control arrangement that was popular for perhaps 5 minutes or so, sometime in the 1970s. It was used in at least one Quad preamp as well as a couple of others, but it died out fairly quickly because it's not really very useful. The effect was to literally tilt the frequency response, so if the bass is boosted, the treble is simultaneously reduced and vice versa. I'm not entirely sure why anyone thought this was a good idea, but it's part of tone control history, so it's included. There are many possible tweaks that can shift the centre frequency or provide asymmetrical response, but these are generally as useless as the circuit itself.
+ +The circuit is straightforward, and uses a frequency selective network wired in reverse phase for high and low frequencies. When one end of the spectrum is boosted, the other end is cut. When the pot is centred, the response is flat. The following response graph shows the response at 25% intervals of the pot. Despite not being very useful overall, there are quite a few different versions on the Net. All behave more-or-less equally, but the Quad version was limited to ±3dB unlike most you will see (including the one shown). To reduce the range, resistors are used in series with each end of the pot (VR1).
+ +The circuit would be more useful (or maybe less useless) if the range was restricted to perhaps 6dB of maximum boost or cut, but the same thing can be done with more conventional tone controls, and that allows bass and treble to be boosted (or cut) by different amounts to balance the overall sound. As noted, only a few manufacturers decided to use this type of EQ, and it was short lived - presumably because the buying public didn't like it. I expect it seemed like a good idea at the time, but it's really a rather pointless waste of parts. As you may have gathered, I don't recommend it.
+ +I have seen it suggested for a reverb tank, but it's still not as versatile as a set of 'proper' tone controls. For a system with the minimum of knobs it might be alright, but IMO it's still a waste of parts.
+ +In the early days of electronics, it wasn't possible to make a 'gyrator' with any pretense of a decent Q factor - indeed, the gyrator hadn't been invented during the valve era. Back then, inductors were much more readily available than they are today, and would have been cheaper than a valve circuit which probably wouldn't have worked as well anyway. Even into the late 1960s and early 1970s, graphic equalisers often used inductors and capacitors to provide the filter networks. While they worked (and often very well indeed), the inductors were very sensitive to external magnetic fields. If the equaliser wasn't located well away from anything with a power transformer, hum was inevitable.
+ +The drawing shown above is a simplified version of one made by White Instruments (Model 4220). This type of equaliser is intended for cut only - allowing 'offending' peaks to be removed. Note the inductor values - the largest (63Hz) is 25.6H - that's a lot of inductance, and it will need a fairly large core to prevent saturation. The load resistor (R1) is critical, and with the design shown it's 10k, which includes the input impedance of the following equipment. If that had an input impedance of 20k, then R1 would have to be changed to 20k (the two in parallel give 10k).
+ +With the values shown, the Q of each stage is about 0.74, more-or-less as required for an octave band equaliser. With any pot set for maximum resistance, the response dip is 6dB, although this can be increased by reducing the value of R1. However, this changes the Q of the filters! Likewise, increasing R1 means less maximum cut at any frequency. The circuit must be driven from a low impedance source, ideally less than 1kΩ. The original design allowed 10dB cut for each filter, which can be achieved by increasing the pot values (VR1 to VR9) to 22k. However, that will increase the ripple in the frequency response when two or more adjacent filters are both set for maximum cut.
+ +The complete design process for the filter networks is outside the scope of this article. As they are simple L/C filters, the inductive and capacitive reactance at resonance will be equal, in this case both equal to the pot value (10k). The formulae shown earlier (under 'Graphic Equalisers') work, but the final Q is affected by the pot in parallel with the tuned circuit. If you use the formulae shown, 'X' is equal to one (1).
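Since X_L = X_C = 10k at resonance, the L and C for each octave band fall straight out of L = X / (2πf) and C = 1 / (2πfX). A quick tabulation for nine octave bands starting at 63Hz:

```python
import math

X = 10e3    # reactance at resonance = pot value (10k)
for f in (63, 125, 250, 500, 1000, 2000, 4000, 8000, 16000):
    L = X / (2 * math.pi * f)          # henries
    C = 1 / (2 * math.pi * f * X)      # farads
    print(f"{f:>6} Hz : L = {L:7.3f} H, C = {C * 1e9:8.1f} nF")
```

The 63Hz band works out at about 25.3H (the article quotes 25.6H, which will reflect the exact design frequency used) - a big inductor by any standard, as noted above.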
+ +In some cases, passive 'notch' filters may be used, especially for guitar, where a reduction of midrange is provided. While the same can be done with hi-fi, it no longer qualifies as 'hi-fi' because so much is missing. A common approach is a bridged-tee filter, which is somewhat less radical than the twin-tee filter used for distortion measurements.
+ +The drawing shows the general configuration of a bridged-tee filter. R1 and R2 don't need to be the same value, but as shown the notch frequency and depth depend on the setting of VR1. At maximum resistance, there's around 1.7dB reduction of the midrange, centred on ~300Hz. As the resistance of VR1 is reduced, the notch gets deeper and the frequency increases. At 50% (25k), the frequency is 400Hz, and the notch depth is 4dB. Things get serious at minimum resistance, with a frequency of about 1kHz and a depth of 28dB.
+ +All parameters can be changed by adjusting resistor and capacitor values. It would not be sensible to attempt to show all possibilities because there are so many. With a fixed resistance for VR1 (say ~3kΩ), R1 changes the notch depth with little effect on the centre frequency, and R2 alters the frequency with little effect on the notch depth. If this arrangement appeals to you, you'll have to experiment with the values - you can use pots in place of R1 and R2 to experiment. C1 and C2 can also be changed, with C1 affecting high frequency performance, and C2 affecting low frequencies. Changing either also affects the notch frequency. It's safe to say that everything affects everything else. No component can be changed that doesn't affect the overall response, but some are subtle, others not.
+ +The bridged-tee circuit must be driven from a low impedance, and the following stage must be high impedance. An input impedance of 1MΩ is recommended for the following stage. This isn't a circuit that you'll see in an equaliser very often, but some guitar/ bass amplifiers include a 'contour' control which is often a variation on the basic scheme shown.
+ + +It's fair to say that with the ready availability of opamps, tone controls with greater flexibility and more usable features became possible than were ever available before. When a modern design is set for flat response, there is virtually no change to the signal at all, other than a truly tiny amount of added noise and distortion which is inevitable with any active circuit. Earlier designs could also be fairly flexible, but at the expense of many components (including inductors), and frequencies that could only be switched rather than continuously variable.
+ +When it comes to wide range, flexible EQ, opamp circuits simply cannot be beaten by any earlier technology, despite any contrary claims you might hear. Using DSP is the next level, but there are still many people who prefer to keep signals in the analogue domain if possible. Controlling a complex filter using a touch-screen may be 'high tech', but it's often very hard to beat the feel of knobs on high quality pots. Rotary encoders can be used with digital systems, but you usually lose the ability to see the settings by looking at the knob pointers.
+ +Analogue circuits have another major benefit - they can be built by anyone who can use Veroboard and a soldering iron, or mount parts on a PCB. This isn't even an option for most digital systems unless the person building the circuit can not only solder surface mount parts, but also knows how to program a DSP. There's another disadvantage to the digital approach, and that's IC continuity. Many modern digital ICs (DSPs, ADCs, DACs, etc.) have a short production life, so if the IC fails after a few years it may be impossible to replace. In contrast, opamps have been with us for many years, and there's no indication that any of the popular devices will disappear. Even if an opamp does become unavailable, you can be sure that a suitable replacement with equal or better performance can be found easily.
+ +Whether you like the idea of EQ or not, it's inevitable that it has been used during the production of the original recording. There may be a very small number of tracks that have been created as direct to tape or hard disk without any processing, but they are few and far between. If such material is not a genre you like, then there's no point at all. In general, EQ will hopefully be applied only where necessary, and preferably with as little change as possible. However, many producers will abuse your senses and the recording by manipulating frequency response so that even more compression can be added without turning the music to mush. Regretfully, this seems to be a popular pastime.
Phase response wasn't even mentioned in any of the descriptions, because it's extremely variable. All equalisers cause phase shift, and the change of phase is much more rapid with a high Q circuit. We can hear the frequency response variation caused by any equaliser, but the phase shift is not audible. Any number of people claim that phase is audible, but the claim ignores the fact that most programme material has had at least some equalisation, and therefore has phase that differs from the original, either for particular instruments and/or for the complete mix. No double-blind test has ever shown that phase shift is audible, provided it's static. Varying phase shift is used to create vibrato (cyclically varying pitch) which is audible, and is used as an 'effect' with many electric musical instruments.
+ + +![]() ![]() + |
![]() |
Elliott Sound Products - ESD Protection
Most circuits require a minimum of input protection, especially conventional preamps, power amps and other audio circuits. However, for test instruments or other circuitry that will be used in potentially hostile environments, protection is essential. The same applies for electronics used in sound reinforcement, as there could be up to 100m of cable that can be charged to 48V by phantom power. The phantom supply is limited to 7mA (each wire of the balanced pair), but the cable capacitance is such that several amps may be available for a few microseconds. It will usually be less, but that's not something you can count on.
Most CMOS digital ICs (including microcontrollers, PICs and CMOS opamps) have inbuilt protection diodes, but they are tiny (being located on the die), and they are incapable of handling any appreciable current. Without at least a limiting resistor in series with the input, it's not hard for a static discharge or other 'event' to cause internal diode failure. Few CMOS datasheets specify the ESD (electrostatic discharge) capabilities of the device, and you have to look for 'generalised characteristics' data for the device family (which may or may not be readily available). The limits are generally very low.
The 'human body model' is one test criterion, where a 100pF capacitor is charged to the desired test voltage, and discharged into the IC via a 1.5kΩ resistor. The test voltage depends on the level of protection claimed, and can range from 2kV to 8kV. Like so many things in electronics, it can be very difficult to find definitive data for anything even slightly esoteric, and ESD testing regimes are no exception. Even when located, the data are not always easy to comprehend unless you have experience in this area. The risetime of the discharge determines the worst case peak current. Depending on the source, this may range from a few nanoseconds up to perhaps 1μs or so. Again, definitive information isn't easy to find, but I suggest that you assume the worst. A 100pF cap charged to 2kV will create a peak current of 570mA with a 1μs risetime, increasing to 1.33A with a 1ns risetime. The peak current increases in direct proportion as the test voltage is increased.
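The peak-current figures are easy to verify for the fast-risetime case, where the peak is simply the test voltage divided by the 1.5kΩ series resistor. The stored energy and discharge time constant fall out of the same values:

```python
# Human body model: 100pF charged to the test voltage, discharged via 1.5k.
C = 100e-12     # farads
R = 1.5e3       # ohms

for V in (2000, 4000, 8000):
    i_peak = V / R                   # amps - fast (ns) risetime limit
    energy = 0.5 * C * V * V         # joules stored in the capacitor
    print(f"{V} V: peak {i_peak:.2f} A, energy {energy * 1e6:.0f} uJ, "
          f"tau {R * C * 1e9:.0f} ns")
```

At 2kV this gives the 1.33A quoted above, with a 150ns discharge time constant; slower risetimes reduce the peak because the capacitor has partially discharged before the current maximum is reached.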
In general, it would be unwise to expect the internal protection diodes to provide adequate protection for any circuit used in a hostile (or potentially hostile) environment. This doesn't only apply to inputs, and outputs can be just as easily damaged if subjected to ESD. We tend to think that outputs are 'immune' from damage because they are low impedance, but expecting a CMOS logic circuit's output stage to withstand a pulse of over 1A involves a high level of wishful thinking.
The ESP app note AN-015 - Input Protection Circuits provides some basic information that has been available for some time, but this article is far more complete, and has more information for the protection of digital logic circuits. It's not often that CMOS inputs are exposed to the forces of nature, but with microcontrollers and PICs becoming more common for even trivial tasks, there are more opportunities for things to go wrong.
Almost all CMOS logic ICs have internal protection diodes or protection networks, but the current rating is very limited. The same applies to microcontrollers and PICs, along with CMOS opamps. JFET and bipolar opamps generally do not have protection diodes, but they are not immune from ESD damage. Some CMOS logic ICs are rated for a diode current of up to ±20mA, but this is easily exceeded by even 'small' ESD events. External diodes can handle far more current, but at the expense of relatively high capacitance (around 4pF for a 1N4148 diode). Peak current for the 1N4148 is 1A for 1 second, or 4A for 1μs.
+ +Something that is rarely considered is what happens if (when) a high-voltage, high-current source is applied to a diode-protected input, using the 'standard' protection scheme shown in Fig. 1.1. The upper diode will conduct when the input is 0.6V greater than the supply voltage, and if enough current is available (which may only be a few milliamps), the positive supply is forced high. The regulator doesn't help, because almost all regulators are designed to source (supply) current, but they cannot sink (absorb) current if the output voltage is forced high. It's easy for an improperly wired piece of external equipment (such as a power amplifier) to supply 30V or more with considerable current. If that happens by accident and 30V AC is fed into the input of a DAC or other logic-level circuit, it's quite apparent that it will be destroyed fairly quickly (if not instantly).
The arrangement shown is seen in countless circuits, both DIY and commercial. Provided the value of Rlim (current limiting resistor) is high enough, it provides adequate protection for many circuits. However, if ESD 'events' are expected, Rlim needs to be a fairly high value, and that can cause the circuit to suffer from poor high-frequency response. With analogue circuits it can also introduce noise. A value of between 1k and 10k is fairly common. As with so many things in electronics, a compromise is required. Beware of positive input voltages! If Rlim is 1k, a 30V input will force 25mA into the positive supply (VCC), which can cause the supply voltage to rise to a destructive level. Higher voltages are worse.
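The current forced into the supply is simple to estimate. The sketch below includes the diode drop (my addition - the 25mA figure above ignores it, which is a reasonable worst-case simplification):

```python
def fault_current(v_in, v_cc, r_lim, v_diode=0.6):
    """Current (A) pushed into the supply rail through the upper clamp
    diode; zero if the input isn't high enough to make it conduct."""
    return max(0.0, (v_in - v_cc - v_diode) / r_lim)

# 30V applied to a 5V-supply circuit through a 1k limiting resistor
i = fault_current(30.0, 5.0, 1000.0)
print(f"{i * 1000:.1f} mA into VCC")
```

About 24mA flows into VCC - far more than a lightly loaded circuit will draw, so without the zener clamp the rail will be dragged up towards a destructive voltage.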
The VCC overvoltage protection zener is uncommon, but it should always be included. The zener voltage should be slightly higher than the supply voltage, selected so that VCC cannot rise above the device's absolute maximum voltage rating.
For a single-supply circuit as shown, negative voltages are not a problem, as D2 can carry up to 200mA (1N4148), so if Rlim is 1k, the circuit is protected for up to a -200V input. The protected device may or may not be able to withstand the -1.5V or so that will appear at its input. The data should be included in the datasheet. Rlim should generally be the highest value you can use for the IC in use, while considering transition speed.
Zener diodes can provide very robust protection, but they are far from perfect. Leakage current is a problem that can show up as unexpected distortion, so in some cases you may have a difficult decision. The power supply may need to incorporate a parallel zener to absorb any voltage above the nominal regulated supply (e.g. 16V zeners in parallel with ±15V supplies, or an alternate scheme devised). By using parallel zeners for each supply rail, while the regulators cannot sink any fault current, the supply rail(s) can only be increased by 1V before the zeners conduct. This extra protection is essential if equipment is liable to be subjected to abuse.
It's not foolproof of course, and if a sufficiently powerful input signal is provided, something will fail. However, the remainder of the circuit should be saved. One of the issues we face is a compromise between protection and noise. If the input resistor were increased to (say) 10k, even a 100V input can only produce 10mA, but that much resistance would be quite unacceptable for a low-level preamp (phono, tape head, microphone, etc.). The resistor alone will generate a noise voltage of 1.8μV (20kHz bandwidth), limiting a 1mV input to a signal-to-noise ratio of only 55dB. Even a 100Ω resistor will generate 180nV of noise (75dB signal/ noise with 1mV), so for very low noise circuits any series resistance is very limiting.
Noise is (usually) not a major issue with logic circuits, because they operate with fixed levels (e.g. 0-5V, 0-3.3V, etc.), but for signals from the outside world almost anything is possible. Once ICs are installed on a PCB, most are fairly well protected against damage, but any pin that interfaces with external equipment (outside the main chassis) is at risk. Table 1 shows just how hostile the outside world can be!
| Condition | Typical Reading (Volts) | Highest Reading (Volts) |
|---|---|---|
| Person walking across carpet | 12,000 | 39,000 |
| Person walking across vinyl floor | 4,000 | 13,000 |
| Person working at bench | 500 | 3,000 |
| 16-lead DIPs in plastic box | 3,500 | 12,000 |
| 16-lead DIPs in plastic shipping tube | 500 | 3,000 |
The table shows measurements published by Fairchild for various conditions, with a relative humidity of 15% to 30%. The figures were originally determined by T.S. Speakman (see note below) and it's very likely that these figures have been used (at least in part) to determine the 'human body model' for ESD.
T.S. Speakman, "A Model for the Failure of Bipolar Silicon Integrated Circuits Subjected to ESD", 12th Annual Proc. of Reliability Physics, 1974.
The arrangement shown above is the standard representation for the so-called 'human body model' (HBM). Depending on the standard being followed, the network is 100pF with a 1.5k resistor (the 'classic' HBM), or 150pF with 330Ω (IEC 61000-4-2). There's also a 'machine model', used to test machine-to-machine vulnerability. It's shown in reference 5 (as is the HBM), but for the latter the time scale of the discharge is wrong - it shows milliseconds instead of microseconds. This error is repeated countless times on the interwebs.
At the 10μs mark, the charged capacitor (4kV for this example) is connected to the DUT (device under test), and the current rises (almost) instantly to the maximum (2.67A). Within 400ns the current has fallen below 250mA, and after 1μs it's down to 3mA. The peak current is almost solely dependent on the applied voltage. This is why anti-static work mats and wristbands are used in production and repair facilities, and although they are high-resistance (for user safety) they prevent static build-up by providing a constant discharge path. By their very nature, static charges are high-impedance, and are easily discharged even by a 1MΩ resistor.
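The decay described is a simple RC exponential. As a quick sanity check (a sketch only; the function name is mine), the 100pF/ 1.5k HBM values reproduce the figures quoted above:

```python
import math

# Human body model (HBM): 100 pF charged to 4 kV, discharged through 1.5 k
C = 100e-12        # capacitance, farads
R = 1.5e3          # series resistance, ohms
V = 4000.0         # initial voltage, volts
TAU = R * C        # time constant = 150 ns

def hbm_current(t):
    """Discharge current (amps) t seconds after contact is made."""
    return (V / R) * math.exp(-t / TAU)

print(f"peak:   {hbm_current(0):.2f} A")              # 2.67 A
print(f"400 ns: {hbm_current(400e-9) * 1e3:.0f} mA")  # ~185 mA (below 250 mA)
print(f"1 us:   {hbm_current(1e-6) * 1e3:.1f} mA")    # ~3.4 mA
```

Note that the peak current depends only on the applied voltage and the series resistance, exactly as stated above.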
It's not easy to find detailed info on CMOS or microcontroller internal protection circuitry. The above was found almost by accident, and shows the arrangement used in TI's AHC (advanced high speed) CMOS ICs. The BJTs appear to be parasitic, and exist between the CMOS circuitry and the substrate (the silicon layer upon which the CMOS transistors are formed). I've not been able to get much further info, although it is pointed out that the two parasitic BJTs form a thyristor that can cause latch-up if the input is greater than VCC+0.5V or less than -0.5V. In reality, this is probably pessimistic and latch-up is unusual. An ESD 'event' can cause latch-up problems because it's rarely possible to clamp the input range to less than 0.5V. Schottky diodes are one solution, but they have (relatively) high leakage that may compromise high-impedance circuits.
External protection circuits range from a duplication of the input protection already provided, using bigger diodes and additional resistance, to relatively complex circuits using a combination of resistors, diodes and zeners, sometimes with some capacitance as well. The arrangement used depends on the desired impedance levels, frequency response and/ or speed of operation. The protection needed for an oscilloscope's input circuits will be very different from that needed for a microphone preamp for example. In some cases, RF JFETs will be used as diodes in order to minimise capacitance, something often seen in oscilloscope front-end circuits.
BJTs (bipolar junction transistors) can also be used as diodes, and they will often have lower leakage and (possibly) lower capacitance than JFETs. Within CMOS ICs, the diodes will be MOSFETs, but there will also be parasitic diodes created during production. These are usually slower (and less well defined) than MOSFETs. A relative newcomer is the TVS (transient voltage suppressor) diode, available in a range of voltages, 'surge current' ratings and peak power dissipations. They can be unidirectional (like a diode) or bidirectional (like two zeners in reverse series). TVS diodes are far more predictable than MOVs (metal oxide varistors), but they are not precision components. Of the available options, TVS diodes are probably the best choice as they are very rugged, but their junction capacitance is such that they will be unsuitable for high-impedance, high-speed circuitry. For example, a 12V bidirectional TVS diode may have as much as 1nF of junction capacitance.
The capacitance of zener diodes is rarely quoted, making it hard to know whether they will be suitable at the impedance and frequency of interest. In general, expect a 400mW, 5.1V zener to have a capacitance of up to 800pF (I measured a few without bias and obtained 124pF on average). The reading was obtained with two in opposed series (cathodes joined) to prevent forward conduction; the total capacitance was 62pF, so each must have a capacitance of 124pF as they are in series. The capacitance falls as reverse voltage is applied, until the zener breaks down. So, while a zener diode (or a pair in series) may not be quite as rugged as a TVS, it will have lower capacitance. Compared to a JFET or BJT (diode connected), both the zener and TVS have vastly more capacitance, limiting their usefulness for high-frequency operation.
The diode connections for testing (via simulation) are shown above. The JFET has drain and source shorted, but that's not a requirement. The two BJT configurations (base-emitter and base-collector) have very low leakage, but their capacitance is higher than the JFET or 1N4148. The base-emitter configuration has a limited reverse voltage (around 5V, but it's inconsistent) and also has higher capacitance than the base-collector connection. If at all possible, the base-collector connection is the better choice, even though leakage current is a little higher (6.5pA vs. 5pA at -5V). Note that connecting the unused pin (collector or emitter) is optional.
With a feed resistance of 1k, the zener was 3dB down at 537kHz, and the BZX79 datasheet claims only that the maximum capacitance is 300pF at 1MHz. By comparison, the following table shows the -3dB frequency of several diode connections (simulated, not measured). The lowest capacitance 'diode' is a JFET, but it depends on the type - I simulated a 2N3819 (VHF/ UHF amplifier); other, more common types (e.g. J113) will be far worse. The stimulus was a 100mV peak sinewave to ensure that all 'diodes' remained non-conducting. The 1N4148 is a surprise, as the datasheet claims a maximum capacitance of 4pF with zero bias (2pF for the 1N4448, which is harder to get). Calculating for 4pF and 1k series resistance gives 39.8MHz, so the simulation is probably fairly close to reality.
| Device | -3dB Frequency | Capacitance |
|---|---|---|
| 2N3819 VHF/ UHF JFET | 53 MHz | 3 pF |
| 1N4148 Small-Signal Diode | 53 MHz | 3 pF |
| BC550 (B-C) BJT | 23.5 MHz | 6.8 pF |
| BAT46 Schottky | 19.2 MHz | 8.3 pF |
| BC550 (B-E) BJT | 11.9 MHz | 13.4 pF |
| J113 Switching JFET | 7.1 MHz | 22 pF |
| BZX79C5V1 Zener | 537 kHz | 296 pF |
Most switching FETs will be similar to the J113, having much higher capacitance than those designed for RF amplifiers. It's quite clear that the 1N4148 diode is a good choice for high speed, but it's fairly fragile and easily damaged by a severe overload. The BAT46 (or similar) Schottky diode looks promising, but its surge current is very limited (750mA for <10ms). If we use the 4kV human body model, the peak current is about 2.5A with a risetime of 5ns. The maximum possible is 2.67A (4kV / 1.5kΩ), assuming instantaneous contact. This is within the ratings for a 1N4148 (4A for < 1μs), but a Schottky diode may not survive.
Schottky (actually all) diodes have another issue that's rarely looked at - at high currents, the forward voltage climbs rapidly. At just 800mA peak, a BAT46 will have almost 1.2V across it, and at the same current, a 1N4148 will have a forward voltage of over 1.4V. The same limitations apply to all diodes. You could use higher current diodes of course, but they have larger junctions and higher capacitance. For example, a 1N4004 has a capacitance of around 53pF and a -3dB frequency (using a 1k feed resistor as for the other tests) of 3MHz. You might expect that high-speed diodes would be better, but that's not necessarily the case. A BYV29-400 (ultra-fast diode) has a capacitance of 240pF, but it's a big diode (9A). A more sensible BYT01-400 (1A ultra-fast) has a capacitance of 45pF and a -3dB frequency of 3.5MHz.
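All of the -3dB figures quoted follow from f = 1 / (2πRC). A quick sketch (the function name is mine) reproduces the numbers given for the larger diodes, using the same 1k feed resistance as the other tests:

```python
import math

def f3db(r_ohms, c_farads):
    """-3 dB frequency of a shunt capacitance fed via a series resistance."""
    return 1 / (2 * math.pi * r_ohms * c_farads)

R_FEED = 1e3  # 1 k feed resistor, as used for the tests in the text
for name, c in [("1N4148 (4 pF max)", 4e-12),
                ("1N4004 (~53 pF)", 53e-12),
                ("BYT01-400 (~45 pF)", 45e-12)]:
    print(f"{name}: {f3db(R_FEED, c) / 1e6:.1f} MHz")
# 1N4148 -> 39.8 MHz, 1N4004 -> 3.0 MHz, BYT01-400 -> 3.5 MHz
```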
You can measure the capacitance of a TVS, large diode or zener by connecting two in series (cathode to cathode) and using a capacitance meter (for a bidirectional TVS you don't need two). This prevents the meter from forward-biasing the diode junction, and the capacitance you measure for the two diodes is half that of a single diode. So if you measure (say) 28pF for two diodes in series, each has a capacitance of 56pF. This is useful if you need particularly robust protection, as you can test the diodes you have available. High capacitance means that frequency response will be limited, because it represents a load that the source has to be able to drive in use.
One example of an input protection circuit is shown above. This would be suitable for any CMOS (or TTL - transistor-transistor logic) IC using a single 5V supply. The input is protected against high voltages by the zener diode, reverse voltage by the Schottky diode, current limited by the two 220Ω resistors and speed-limited by the capacitor. It's deliberately speed-limited by C1, which can be reduced if you need an input signal of more than ~15kHz. The maximum peak current from a 4kV external discharge is under 500mA, and the input won't rise above 7V (generally just within ratings for 5V parts). The voltage will exceed 5V for less than 1μs.
Output circuits are generally considered to be fairly rugged, using collector/ emitter outputs (BJTs), or drain/ source outputs (MOS/ CMOS). However, if they are exposed to the outside world, they can still be damaged by ESD or just by having a low-impedance signal fed into the output terminal(s). Because output circuits are generally low to very low impedance, they are generally far less likely to be damaged than inputs. Regular readers will be aware that I always include a 100Ω resistor in series with the output of any opamp circuit, and this serves two purposes. It prevents oscillation with capacitive or resonant loads (such as coaxial cable), and it provides at least some protection against low-level ESD events. In an industrial environment (or where +48V phantom power may be present), the resistor is not enough. Diodes, zeners or a TVS should be used if there's any likelihood of damage in normal usage.
Many CMOS ICs have protective diodes on their outputs, but others don't. Damage via the output terminals is less common than input damage, but anything exposed to the outside world is at some risk. Diodes are common, but there is rarely an output resistor so there's nothing to limit the current from an ESD discharge. A pair of diodes is the most basic form of protection, but again, there has to be a zener or TVS diode to limit the supply voltage if a positive voltage is applied to the output circuitry.
With opamp circuitry (e.g. audio circuits), it's good practice to include a 100Ω resistor in series with the output. This limits instantaneous (ESD) current, but it's also required to ensure that the opamp doesn't oscillate when connected to a resonant transmission line (i.e. a shielded cable). Most opamps are intolerant of a capacitive load, and the capacitance of a length of coaxial cable is often enough to cause oscillation. Protection is another matter, and if the environment is hostile (test labs, industrial applications, etc.) then that has to be considered. 48V phantom power is perfectly capable of destroying either the input or output of an opamp circuit that isn't designed specifically to be compatible with P48V equipment. The circuit shown above is the very minimum, but may not be sufficient.
It's uncommon to see any form of protection for opamp circuits used in audio circuitry, because problems are rarely encountered in normal use. More care is needed for industrial applications, but even there few problems will be found. For circuits using a bipolar supply, the lower diode (D2) would be returned to the negative supply, and another zener used to ensure the supply voltage can't be forced above the maximum. The zener voltages are more likely to be (say) 13V for ±12V supplies or 16V for ±15V supplies. The zeners should never conduct unless the supply voltages are forced to exceed the design value.
The protection circuits will always be a compromise. In many cases, no protection will be used at all, but in others it may need to be very comprehensive. Many designers tend to work with a fairly narrow field, and they will know what's needed for the type of equipment they work with. If problems are found later, a retro-fit solution is always possible, but expensive. Most people agree that it's better to get it right the first time.
The idea of using just diodes to (hopefully) clamp the fault voltage to the supply rail is fatally flawed if there is no protective zener included. As shown, a 30V fault voltage is applied to the input, and this could come from anywhere. It could be AC or DC, transient or permanent, and due to incorrect wiring, a fault in the remote equipment or a stray strand of wire getting into something it shouldn't. With a 220Ω limiting resistor, the current will be just under 114mA, and this could easily exceed the normal current drain of the circuitry. We'll assume that the circuit normally draws 50mA.
When the fault voltage is applied, it will attempt to force 114mA into the input via R1, and then to the +5V supply via D1. There's nothing to prevent the 5V supply from rising, so it will be elevated to something higher, limited only by the load current of the circuit. If we assume that it still draws 50mA (unlikely but possible), the supply will rise to over +18V. Some CMOS ICs might tolerate that, but if the circuit is a PIC or MCU of some kind, expect bad things to happen.
Adding the protective zener in parallel with the supply (5.1V) means that the maximum supply voltage will be limited to about 5.4V (the zener has internal resistance and does not limit the voltage to 5.1V), but this should be acceptable for almost all circuits. The situation is made (much) worse if the circuit normally draws less than 50mA (even multiple CMOS ICs may draw less than 10mA). If our CMOS circuit only draws 5mA, the fault voltage will rise to over 28V, which means certain death for the ICs.
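The rail-pumping figures above are easy to verify. Assuming (as in the text) that the load current stays constant and with no zener fitted, the rail settles at the point where the fault current through R1 equals the load current - a minimal sketch (function name is mine):

```python
def elevated_rail(v_fault, r_lim, i_load, v_diode=0.7):
    """Supply rail voltage when a fault forces current through r_lim and the
    clamp diode, and only the circuit's own load current can absorb it.
    Equilibrium: (v_fault - v_diode - vcc) / r_lim = i_load."""
    return v_fault - v_diode - i_load * r_lim

print(elevated_rail(30, 220, 0.050))  # ~18.3 V with a 50 mA load
print(elevated_rail(30, 220, 0.005))  # ~28.2 V with a 5 mA load
```

The lighter the normal load, the higher the rail is pumped, which is why low-current CMOS circuits are the most vulnerable.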
The zener is not 'optional', because without it the protection circuit is worse than useless. It might make you feel better because you've included it, and you may never have a 'proper' fault that proves its operation one way or another, but if you do have a real fault, there's every chance that the 'protected' circuit will fail. Supply bypass caps can absorb brief faults; anything greater than around 100μF will absorb a brief fault (< 1.5ms) with only a small voltage rise (depending on the normal current drawn).
I've covered this elsewhere, but in most cases it receives minimal attention, even by people who should know better. A regulator is thought to provide a stable supply at the design voltage, but regulators have a big limitation - they can only source current to the load! If an external voltage is applied to their outputs, regulators are unable to sink (absorb) the over-voltage, and if it's high enough, even the regulator may fail. It's good practice to include a reverse-connected diode across any regulator, as that will limit the reverse voltage to 0.7V or so, ensuring that the regulator isn't damaged. These are also considered 'optional' by some, but IMO they are not!
Look at Project 05 or Project 05-Mini. Both include these essential diodes, and they are included in many other projects that include a regulator. If the diodes are omitted, even testing a circuit using an external supply (during test or repair for example) can result in failed regulator ICs.
One regulator that can supply and sink current is a shunt regulator, but these are rarely used because they draw the maximum allowable current at all times. This means that they are very inefficient, and if high current is needed the power dissipated can be far greater than is desirable. A resistor and zener is a simple example of a shunt regulator.
There is a 'special' trap waiting to catch you out if you have equipment powered by a SMPS. It's not well-known, but more than a few people have been caught out. Almost all SMPS have a voltage on the 'ground' that is created by internal Class-Y capacitors that are required to allow the supply to pass 'radiated emissions' tests for EMI (electromagnetic interference). The capacitance is usually small (rarely more than 2.2nF), but that's more than sufficient to cause failure of input and output circuits.
The phenomenon is covered in reasonable depth in the article SMPS Kill Equipment ... ?, and it's very real. It's not helped at all by the fact that the common RCA connector connects the input/ output first, followed by the ground/ shield. Any external stored charge is transferred to the circuitry first, which is decidedly sub-optimal, but normal with all RCA connectors.
Whether you like it or not, the neutral is always earthed (grounded), either at the local distribution transformer or at each premises where it enters the meter box. The DC output is floating until it's terminated to earthed equipment, and the voltage will be around 100V AC with 230V mains. It's high impedance because of the Y2 safety capacitors, so the current is low (about 100μA with 2 × 1nF Y2 caps). If external gear is connected at the wrong time (at the AC peak), the instantaneous current will exceed 200mA, and with RCA connectors that's straight into the input of the connected equipment! Fortunately for all of us, there is usually a high-impedance path to ground (often through us), and the peak will not be as severe. Can you count on this? No!
The only ways to prevent damage are to provide very good input/ output protection circuits, or to ensure that connections are never made or broken while power is applied. In many cases, this will require disconnection of the mains lead, because the SMPS is often on full-time, and its output is switched. Ignore this at your peril, as it's a very real problem.
The overall conclusions are fairly clear. For most audio circuits we don't need to take special precautions, but extreme care is needed for circuitry that is powered from a switchmode power supply (SMPS) (whether internal or external). Always disconnect the power before connecting input/ output leads to any circuit powered by a SMPS. Most of the recommendations shown are fairly basic, but if implemented properly will save equipment from damage. For 'benign' applications, we often don't need any protection at all, because the chances of a destructive fault are so low.
Elsewhere, even the circuits I described may not be sufficient because of the operating environment. Electronics in cars are particularly vulnerable, because the nominal 13.8V DC supply can jump to over 40V with what's known as a 'load-dump'. These occur when a high-current load is disconnected, or if the battery is disconnected (accidentally or deliberately) while it's being charged. Other systems can also create voltage spikes that are quite capable of causing damage in any electronic system that's not properly protected.
Most of us will (hopefully) never destroy the circuits we use by electrostatic discharge or other faults, but that can lead to complacency - just because something hasn't happened does not mean that it won't or can't. Most of the parts we use are surprisingly robust, and while I have killed parts, it was part of a deliberate test rather than an accident. If you know that equipment will be used by people who know nothing about electronics in potentially hostile environments, it's worthwhile to take as many precautions as you think will be needed.
Remember that if someone 'blows up' something that you built, it's never their fault! Many (most?) people will claim that they used the equipment as instructed, and didn't do anything wrong. The gear simply 'blew up' for no reason, and the fact that they installed the battery in reverse (or other 'accident') will not be revealed. It's something service people have had to deal with since the dawn of electronics. Even when it's quite obvious that a misguided and heavy-handed attempt at self-service caused additional damage, nothing is admitted in most cases.
Most service techs have heard that (or something similar) countless times. It won't change any time soon.
Elliott Sound Products | Essential Electronic Formulae
There are quite a few formulae (or 'formulas' if you prefer) that are the building blocks of all electronics. I only intend to cover the basics, so you won't find the formulae for complex filters or anything else out of the ordinary. This info is covered in more detail in the beginner articles, but here I've concentrated on the basic formulae and nothing else.
Think of this short article as being the 'go-to' place to find the formula you need, without much by way of illustration or extensive descriptive text.
In all cases below, resistance is in ohms, capacitance is in Farads and inductance is in Henrys. If you use megohms and microfarads the result will be the same, and the numbers are usually easier to work with. If you use a scientific calculator (forget basic pocket calculators, as they are useless for this), a microfarad is entered as 1E-6. A calculator with an engineering mode is better still, because it sets all exponents to multiples of three, so you don't get 'awkward' values like (for example) 1.414E-4 (141.4µ or 141.4E-6). Common engineering values are as follows ...
| Prefix | Multiplier |
|---|---|
| Pico | 1E-12 |
| Nano | 1E-9 |
| Micro | 1E-6 |
| Milli | 1E-3 |
| Units | 1 |
| Kilo | 1E3 |
| Mega | 1E6 |
| Giga | 1E9 |
| Tera | 1E12 |
The last two aren't common for most circuitry, but you may come across them in a few applications.
These two terms often confuse people not used to working with maths. The numerator is the number at the top of a fraction, and the denominator (at the bottom) describes how many equal parts make up the whole. For example, the fraction 1/4 means that you have one of four 'parts' - one quarter. The reciprocal of a number X is simply 1/X, so the reciprocal of 4 is the decimal value 0.25, or 250m (milli). Not all formulae describe fractions - especially those in electronics, where the goal is to find the decimal value. No-one wants to deal with 1/1,000,000 Farad capacitors - that's simply 1µF.
The fraction X/Y means 'X pieces of a whole object that is divided into Y equally sized parts'.
The most fundamental of all. R is resistance in ohms, V is voltage and I is current.
R = V / I
V = R × I
I = V / R
Hint: if R is in kΩ then the answer is in milliamps. 1V across 1k gives 1mA.
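The hint is easy to confirm with a couple of trivial helpers (a sketch only; the function names are mine):

```python
def current(v, r):
    """Ohm's law: I = V / R (volts, ohms -> amps)."""
    return v / r

def voltage(i, r):
    """V = I × R (amps, ohms -> volts)."""
    return i * r

print(current(1.0, 1000))  # 1 V across 1 k gives 0.001 A (1 mA)
```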
The total resistance with resistors in series is simply the sum of the resistors: 3 × 1k resistors in series is 3k. Parallel resistors are a bit trickier. However, if they're the same value it's easy - 3 × 1k resistors in parallel gives 1/3 of 1k, or 333.33Ω.
R = ( R1 × R2 ) / ( R1 + R2 ) ... or ...
R = 1 / (( 1 / R1 ) + ( 1 / R2 ) + ( 1 / Rn )) (Rn is the nth parallel resistor)
Most calculators provide a reciprocal function (1/X), and this makes the second equation much easier to use; it also works with any number of resistors. The first formula only works for two resistors at a time. Remember to include the outer set of brackets (parentheses) in the denominator - the bottom part of the equation.
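The reciprocal-sum form is easy to capture as a helper (a sketch; the function name is mine). It handles any number of resistors, and the same function also gives the value of capacitors in series:

```python
def parallel(*values):
    """Reciprocal of the sum of reciprocals: parallel resistors or
    inductors, and equally capacitors in series."""
    return 1 / sum(1 / v for v in values)

print(parallel(1000, 1000))        # 500.0
print(parallel(1000, 1000, 1000))  # ~333.33 (the 3 x 1k example above)
```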
You don't need it often, but determining capacitive reactance is fundamental to some circuits. Xc is reactance (impedance) in ohms, C is capacitance in Farads and f is frequency in Hz. Pi (π) is the standard constant of 3.141592654 (3.141 is close enough, and it's available from nearly all calculators).
Xc = 1 / ( 2π × C × f )
C = 1 / ( 2π × Xc × f )
f = 1 / ( 2π × Xc × C )
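As a quick numeric check (a sketch; the function name is mine), a 1µF capacitor has a reactance of about 1k at roughly 159Hz:

```python
import math

def xc(c_farads, f_hz):
    """Capacitive reactance in ohms."""
    return 1 / (2 * math.pi * c_farads * f_hz)

print(round(xc(1e-6, 159.155), 1))  # ~1000.0 ohms
print(round(xc(1e-6, 100), 1))      # ~1591.5 ohms at 100 Hz
```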
The total capacitance with caps in parallel is simply the sum of the capacitors: 3 × 10µF caps in parallel is 30µF. This time, series caps are a bit trickier.
C = ( C1 × C2 ) / ( C1 + C2 ) ... or ...
C = 1 / (( 1 / C1 ) + ( 1 / C2 ) + ( 1 / Cn )) (Cn is the nth series capacitor)
The same comments apply as shown for resistors.
Inductive reactance is needed more often than capacitive reactance, particularly when coils are used. XL is reactance (impedance) in ohms, L is inductance in Henrys and f is frequency in Hz.
XL = 2π × L × f
L = XL / ( 2π × f )
f = XL / ( 2π × L )
The total inductance with coils in series is simply the sum of the inductors: 3 × 1H inductors in series is 3H. Parallel inductors are determined in the same way as resistors.
L = ( L1 × L2 ) / ( L1 + L2 ) ... or ...
L = 1 / (( 1 / L1 ) + ( 1 / L2 ) + ( 1 / Ln )) (Ln is the nth parallel inductor)
Basic resistor/ capacitor (R/C) filters can be high or low pass. A high-pass filter is also called a differentiator, and a low-pass filter is an integrator. Only single pole (1st order or 6dB/ octave) networks are described here, and the formula is the same for high and low pass filters. Whether it is high or low pass depends on the way the two components are wired.
f = 1 / ( 2π × R × C )
R = 1 / ( 2π × f × C )
C = 1 / ( 2π × f × R )
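As a worked example (a sketch; function names are mine), a 10k resistor and 100nF capacitor give a -3dB point of about 159Hz, whether the network is wired as high or low pass:

```python
import math

def rc_cutoff(r_ohms, c_farads):
    """-3 dB frequency of a single-pole R/C filter (high or low pass)."""
    return 1 / (2 * math.pi * r_ohms * c_farads)

def c_for_cutoff(f_hz, r_ohms):
    """Capacitance needed for a given -3 dB frequency and resistance."""
    return 1 / (2 * math.pi * f_hz * r_ohms)

print(f"{rc_cutoff(10e3, 100e-9):.1f} Hz")  # 159.2 Hz
```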
'f' is the -3dB frequency, where the output voltage is 0.707 (1/√2) times the input voltage. With 1V input there is 0.707V across both the resistor and the capacitor, and the output phase is shifted by 45° with respect to the input.
R/C networks also have a time constant, which is usually only needed for timing circuits. Note that some filters may be described in terms of time constant rather than -3dB frequency (for example the RIAA vinyl disc replay EQ curve).
t = R × C
R = t / C
C = t / R
f = 1 / ( 2π × t )
Inductor/ capacitor (L/C) filters are far more complex, and I will only provide the formulae for resonance. Q (quality factor), bandwidth and other parameters are not covered. L/C filters can be in series or parallel, but if we ignore the inductor's series resistance the formula is the same for both types.
f = 1 / ( 2π × √( L × C ))
L = 1 / ( 4 × π² × f² × C )
C = 1 / ( 4 × π² × f² × L )
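For example (a sketch; function names are mine), 1mH resonating with 1µF gives just over 5kHz, and the rearranged formula recovers the capacitance from that frequency:

```python
import math

def resonance(l_henrys, c_farads):
    """Resonant frequency of an L/C pair."""
    return 1 / (2 * math.pi * math.sqrt(l_henrys * c_farads))

def c_for_resonance(f_hz, l_henrys):
    """Capacitance that resonates with a given inductance at f_hz."""
    return 1 / (4 * math.pi ** 2 * f_hz ** 2 * l_henrys)

f = resonance(1e-3, 1e-6)
print(f"{f:.0f} Hz")             # 5033 Hz
print(c_for_resonance(f, 1e-3))  # ~1e-06 (recovers the 1 uF)
```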
The impedance of a resonant filter depends on the topology. Theoretically 'ideal' series resonant filters have zero impedance at resonance, and parallel resonant circuits have an infinite impedance at resonance. All real-world filters will have series resistance which changes the behaviour, but only slightly in a well designed circuit with optimised components.
| Continuous dB SPL | Maximum Exposure Time |
|---|---|
| 85 | 8 hours |
| 88 | 4 hours |
| 91 | 2 hours |
| 94 | 1 hour |
| 97 | 30 minutes |
| 100 | 15 minutes |
| 103 | 7.5 minutes |
| 106 | < 4 minutes |
| 109 | < 2 minutes |
| 112 | ~ 1 minute |
| 115 | ~ 30 seconds |
Elliott Sound Products | Worldwide Ban Looms for External Transformers
It has now been some time since this was first published, and 99% of the information below is as valid today as when it was written. The only real change is that the ban is well and truly in place, so conventional iron-core transformer + rectifier external supplies are no more. All currently available DC supplies are switchmode, and without exception suffer from the ills I wrote about in 2007. Having said that, they have also provided a very cheap way to include a small DC supply in a product - especially where a somewhat noisy supply will not cause any problems.
The regulators were finally convinced that AC/AC external supplies must use an iron-core transformer, and the requirement for these to have extremely small no-load power consumption was removed. It did take some doing though, proving that the 'consultants' selected by the government were chosen because they would reinforce the 'official' position without dissent. Knowledge of the realities was never a requirement, so most of their input was either flawed or worthless.
As of the time of this update, no additional requirements for electrical safety have been introduced or even suggested, and having used quite a number of small external SMPS (switchmode power supplies) as part of other designs, I see that they are no better now than they were 3 years ago. The opportunity for the insulation barrier to be breached (by a variety of exciting means) is present in all I've seen. I do accept that the risk is low, but it is not negligible, as was the case with conventional transformers.
Everything else described below remains unchanged - the only thing that is different is that the ban is in place, and with the possible exception of some (very) old stock, the supplies available now are MEPS compliant. AC/AC supplies still use iron-cored transformers (as they must).
I freely admit that the sky hasn't fallen, and the majority of people haven't noticed the difference. I have had a SMPS fail due to RoHS and lead-free solder (which applies to almost all modern electronics). I was able to repair it the first time, but it failed again and did itself an injury, so it had to be replaced. This is a problem that will only increase - the combination of lead-free solder and the much greater complexity of an SMPS compared to a conventional supply means more failures and more dead supplies being discarded. It is likely that the additional energy used to make (and ship) a new supply will negate the savings as calculated by the Australian Greenhouse Office, and I suspect that the real savings will be a small fraction of those claimed.
Update, May 2014
At the time of the latest update to this page, the sky is still in place, but the problems are nowhere near solved. More dodgy switchmode power supplies are around than ever before. Meanwhile, Australians are paying an average of around 25¢/kWh (yes, you read that right!). There are a few minor changes below to reflect the cost of running various appliances, and some other small changes. The bulk of the article is unchanged, and the issues I addressed have not been fixed.
Let me start by pointing out that I am not opposed to energy savings - even small ones can be beneficial. I am opposed to anything that reduces safety standards, places users' equipment at risk (by capacitive discharge damage for example), or makes products virtually unusable. This can occur if the imposed technology introduces so much noise into the system that users can no longer use equipment that was perfectly alright when used with an old technology 'inefficient' power supply.
+ +I am also opposed to laws or regulations (that affect everyone) being created, where there is no opportunity for the general public to have their say. Regulations are made by bureaucrats, and while they presumably have our best interests at heart, they cannot (and do not) generally understand the likelihood of 'unintended consequences' - things that really annoy, cause inconvenience or damage equipment - because no-one even thought of a particular application where the replacement technology is completely inappropriate.
+ +In theory, a Regulatory Impact Statement (RIS) should cover the impact on new and existing equipment, cover all disadvantages as well as advantages, and not merely proclaim the energy and CO2 savings and the likely cost of the new product. The document referenced below has made no mention of the impact of a replacement switchmode supply on equipment designed for use with a conventional (transformer based) power supply.
On the Australian Government's Energy Rating library page, one used to be able to find a very scary document indeed. The document was a Regulatory Impact Statement (RIS - Minimum Energy Performance Standards and Alternative Strategies for External Power Supplies), and as originally published it had the potential to create huge problems in the market place, and should have been seriously reconsidered before any action was taken (the links have been removed as the document has either vanished or has been moved). It is notable that a document search failed to find any occurrence of the following words ...

... and indeed, SMPS failure modes (and their possible consequences), the potential for equipment damage or compatibility with existing equipment are not mentioned. Safety is mentioned, but there isn't a single reference to any possible safety issues. It is simply assumed that safety tests will ensure that every unit that passes the tests will be completely safe. I'm not so certain - there are too many things to go wrong.
The RIS covers external power supplies - those that plug directly into a power point (wall outlet) are also known as plug-packs, wall-warts or wall transformers. Others have a mains cable and an output cable, the most common example being the supplies used with notebook computers. Many existing supplies are already switchmode, and some of those may even pass the new requirements. Only two that I tested pass, most don't even come close, but their power dissipation still isn't unreasonable.
A mandatory energy rating requirement effectively bans all presently available transformer based external supplies because their magnetising current is higher than allowable. In order to pass, the no-load dissipation must be less than 0.5W for supplies rated at less than 10W, or 0.75W for supplies rated at between 10 and 250W. Most small transformers draw a magnetising current of around 20-30mA, and the range of power consumption I measured was between 1.1W and 1.8W - this was verified with a fairly wide cross-section of supplies at my disposal. The dissipated power is directly related to the winding resistance, and also includes iron loss (the power needed to reverse the flux in the core on each half cycle of the AC waveform). Very small transformers usually show minimal change in mains current between no-load and full-load, although the power does vary because of a slightly improved power factor at full load.
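The relationship between magnetising current, apparent power and real power can be sketched with a few lines of arithmetic. This is only an illustration: the 230V figure is the nominal supply voltage, and the power factors are inferred from the measurements quoted above rather than measured directly.

```python
# Sketch: no-load VA vs real power for a small iron-cored transformer.
# 230V is an assumed nominal voltage; the PF values are inferred, not measured.
MAINS_V = 230.0

def apparent_power(v_rms, i_rms):
    """Apparent power (VA) - simply RMS voltage times RMS current."""
    return v_rms * i_rms

# Magnetising current range and measured no-load power from the text
for i_mag, p_real in [(0.020, 1.1), (0.030, 1.8)]:
    va = apparent_power(MAINS_V, i_mag)
    pf = p_real / va  # inferred no-load power factor
    print(f"{i_mag * 1000:.0f}mA -> {va:.1f}VA, {p_real}W in, PF ~ {pf:.2f}")
```

A 20-30mA magnetising current therefore represents 4.6-6.9VA from the grid, even though the real power is well under 2W - which is exactly why watts and VA must not be confused when judging these supplies.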
Note that certain battery chargers are exempt, in particular those that house the batteries or battery pack whilst charging. One can assume that any external supply that has an output lead that allows connection to anything other than a battery is included in the RIS.
The tests described herein were performed using a variety of methods, the most important being measurement of the true power consumption of each supply.

Power Meter Used for Testing
The YEW (Yokogawa Electric Works) 2509 power meter was used to measure VA and actual power. These are never the same with inductive or non-linear loads, because of power factor. The apparent power (volt-amps or VA) is simply the RMS voltage and current multiplied together, but actual power (watts) must be computed so that phase angle (for sinewave loads) or pulse currents are properly accounted for. You can't measure 'real' power without a power meter.
The VA was double checked using the RMS capability of my digital oscilloscope, together with an in-line current monitor. The output of the current monitor was also used to capture the current waveforms shown in the test section below. With all such measurements, there will always be some error, but I have taken pains to calibrate everything as well as possible, so errors will generally be less than 5%. Some of the error is a result of the distorted waveform - something that is very difficult to circumvent, because it can be extremely difficult to make very accurate measurements on badly distorted waveforms.

Most of the measurements will be well within acceptable limits, although power readings below 1W are subject to the usual ±1 digit with any digital instrument. I have yet to construct a current amplifier to allow the wattmeter to have a x10 range, which will increase accuracy for very low power levels. This will be done as soon as I have time to do so.
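The distinction between watts and VA that the power meter handles can be shown numerically. The sketch below is my own illustration (not anything from the YEW meter itself): real power is the mean of the instantaneous v × i product, which is what any true power meter must compute, while apparent power is simply the product of the two RMS values.

```python
import math

def power_figures(v_samples, i_samples):
    """Compute real power (W), apparent power (VA) and power factor
    from simultaneously sampled voltage and current waveforms."""
    n = len(v_samples)
    v_rms = math.sqrt(sum(v * v for v in v_samples) / n)
    i_rms = math.sqrt(sum(i * i for i in i_samples) / n)
    p_real = sum(v * i for v, i in zip(v_samples, i_samples)) / n
    va = v_rms * i_rms
    return p_real, va, p_real / va

# Example: 230V RMS sine voltage, 100mA current lagging by 60 degrees.
# For a pure phase shift, PF = cos(60 deg) = 0.5 exactly.
N = 1000
v = [325 * math.sin(2 * math.pi * k / N) for k in range(N)]
i = [0.1 * math.sin(2 * math.pi * k / N - math.pi / 3) for k in range(N)]
p, va, pf = power_figures(v, i)
# p ~ 8.1W, va = 16.25VA, pf = 0.5
```

With a distorted (pulse) current the same code still works, which is the point: the meter computes the v × i product sample by sample, so pulse currents are handled just as correctly as simple phase shift.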
The processes in place actually seem to have very little to do with saving energy, but are political. The cost of operating a small power supply for a full year is extremely low - so low that it is only sensible to show it as a yearly value (it's less than 1 cent per day). As explained in greater detail below, this is utterly insignificant compared to other allowable losses and normal household activities.

Of far greater concern is that these regulations (as originally written) would have effectively banned external AC-AC power supplies. These are used extensively for alarm systems, ADSL modems and other products that use the AC input to generate dual-rail supplies internally, but they are also very useful for hobbyists who do not have the skills needed to wire a mains operated power supply. There is currently no alternative available with switchmode supplies - it could be done, but the cost and complexity are prohibitive.
With a great many typical loads (such as charging a cordless phone), the mains current doesn't change very much. Several (transformer based) external supplies that I tested draw around 20mA at idle, and 24mA when loaded (off-load power dissipation is about 1.4W). At least with small external supplies, one knows what to expect. In contrast, the 'phantom' power of many appliances that draw standby power is unknown unless you measure it. The tested units' results are tabulated later in this article.
One of the great advantages of a transformer based supply is its tolerance of overload. Even though it may only be rated for (say) 100mA, if you normally require 50mA but need 500mA for a very brief period, it will supply it cheerfully. It can do this without failure for years, as long as its average power remains within ratings in the long term (between 1 and 5 minutes, depending on the size of the transformer). A SMPS can't do that - its maximum current is the limit, even for a few milliseconds. If you need a peak current of 500mA you have to use a 500mA supply. Its efficiency at low (average) power will be very poor, probably as bad as or worse than a transformer.
It's impossible to say who is affected elsewhere in the world, but in Australia, the effects will be widespread. These new regulations do not require a vote in parliament, as (somewhat perversely) they fall under the Electrical Safety Acts of the various states (see links in the reference section). Because of the very nature of the regulations, they can be very difficult to interpret accurately, making it highly unlikely that anyone will actually read through the (literally) reams of information.

The definition of 'sell' for any product falling under the regulations now or in the future, includes ...

    (a) barter or exchange; and
    (b) let on hire; and
    (c) offer, expose or advertise for sale, barter, exchange or letting on hire.
In other words, anyone with existing hire stock may be expected to discard any non-conforming external power supplies and replace them with new supplies (which for the most part didn't even exist at the time). This includes tool hire companies who offer battery drills with external supplies to charge the battery packs, computer hire companies for notebook computers and the like. In some cases, it also includes second hand goods, so it will be illegal (with some very substantial fines mentioned - see the references below) to even sell a second hand item with a non-compliant external supply.

Already, the simple act of discarding 'non-conforming' equipment and replacing it will completely wipe out any savings - the energy used to make the replacement supplies (and the CO2 liberated) will exceed the savings from using supplies that are rarely used with no load anyway. Even this assumes that replacements will be suitable or usable!

In some cases, it may be easier and cheaper for hire companies to scrap the tool along with its supply - even more wasted resources and energy. Many battery tools use fairly specific connectors, and although they are battery chargers, the suggested rules indicate that they may not be classified as such. Rather than become embroiled in a stupid legal battle with bureaucracy, not many people will risk the fines for non-compliance.

This has the potential to turn into an absolute nightmare!

The following letter was sent to the Australian Greenhouse Office. This does not simply relate to Australia - some of the provisions are already mandatory in some states of the US, and the EU and China are also parties to the discussions.
There are many factors that the author [of the RIS] has either missed completely, or glossed over - much of which appears to have originated overseas. While it cannot be denied that many existing external power supplies, plug-packs, wall transformers and other small external supplies are not particularly efficient, it has to be understood that the actual power used by most is quite low when not in use.

Since the document appears to be primarily concerned with no-load performance, this is so minor in the greater scheme of energy savings that it is a fruitless exercise to waste time and money pursuing such an endeavour. Several things that the document has missed or glossed over are of great importance overall. These include ...
It is assumed that having a regulated output is beneficial, and saves equipment manufacturers from having to include a regulator in the equipment. This is only partially true. The output from most small SMPS is noisy, having significant high frequency noise superimposed on the DC output. Many small SMPS react very badly to additional filtering capacitance used at the output, because of regulation feedback loops that are marginally stable. Noise reduction is therefore non-trivial, and could conceivably increase the cost of the powered item.

Where the product requires a noise-free supply, manufacturers will still have to include either complex filtering or a linear regulator to remove high frequency noise, and/or provide a lower voltage (typically 5V) for digital electronics in the device.

Where a product draws a widely varying current, many small SMPS units are incapable of maintaining good regulation over the full range. It is common that with very light loading, the voltage is very unstable, causing a low frequency modulation of the DC voltage - this is not usually any multiple or sub-multiple of the mains frequency, but is determined by the time constant(s) within the stabilisation feedback loop.

None of these problems occur with existing linear supplies. Although the output voltage varies with load, the variations are predictable and easily accommodated by most circuitry.

As noted above, a 'bulk' filter can be used to reduce waveform distortion, but such an item will be expensive, will have to be installed by a licensed electrician, and will consume some power in its own right. This power is wasted, and could easily exceed the savings made by replacing linear supplies with SMPS. It is also difficult to ensure that such a filter has no adverse effects on the distribution system, regardless of the number or type of non-linear loads for which it needs to provide compensation.
As should be obvious, none of these issues have been properly tested, verified or factored into the draft proposal, whether within Australia, the USA, China, Europe or anywhere else. The ramifications of a ban imposed upon traditional linear power supplies simply because they do not meet an arbitrary minimum efficiency figure are widespread and somewhat unpredictable, because of the huge diversity of applications.

I urge the appropriate persons and agencies to review the proposed scheme as a matter of urgency. While there are claims of huge savings, these are almost all illusory. The vast majority of external power supplies are connected to equipment that is intended to be powered continuously. Where this is not the case (modem and printer power supplies being two examples), a far greater saving is available by simply using a switching system that is activated by the host computer. If the host machine is on, the devices are likely to be needed, but when it is turned off, the attached supplies can be completely disconnected.

It is very important that any regulation considers not just the standby power consumption, but the standby VA - especially for non-linear power supplies where there is no effective way to remove the harmonics generated. While a small switchmode supply may dissipate as little as 0.5W at idle, the same device still consumes current in order to function. The nature of the current can easily give a power factor of less than 0.4, so 0.5W becomes 2VA (e.g. 8.7mA at 230V). It is unrealistic to expect any SMPS to draw less than perhaps 10-20mA at idle, regardless of its full load current. This equates to between 2.4-4.8VA, but measured power can (and usually will) be significantly lower.

To treat the VA rating as unimportant or 'invisible' is to defeat the entire purpose of any act or standard intended to reduce the waste of electrical energy. To make matters worse, the non-linear load does not return the reactive portion of its current to the supply (as will an inductive or capacitive load), because the current waveform is non-reactive.

In summary, I suggest that the current recommendations be revised to reflect reality rather than illusion, as they are impractical and deny the reality of the effects of millions of small harmonic current generators attached to the electrical distribution grid.

Let us not forget the myriad of small appliances that use AC-AC external supplies, which simply cannot be replaced with switchmode supplies using available technology. Even if this were to change overnight, it is doubtful that a switchmode replacement could equal the performance of an iron cored transformer. Regardless of any objections to the contrary, these supplies are very common, and their contribution to energy loss is negligible compared to the standby power consumed by just one typical television receiver.
The above covers most of the important points, but I did refrain from stating the obvious - that those who are pushing this latest affront are deluded if they think that what they plan is capable of making any difference whatsoever. There are countless items in the average home (not including businesses - the biggest power wasters of all) that draw as much current on idle as all the external power supplies put together. I have no objection whatsoever to any reduction in the standby power consumed by appliances that are not being used - ideally, the standby current should be zero. If something isn't being used, then why does it need to consume any current at all? We certainly don't need a clock in every piece of equipment we own, and the continuous display of the time (usually wrong) is pointless and annoying. However, many of these appliances have a need to know the time (for timed operation for example), so the time does need to be maintained. It does not need to be displayed!
However, all things need to be maintained in context - something that is not being done in the arguments. It is easy to make it appear that something useful is being done by using statistics and emotive methods, and these are at the forefront of the vast majority of the discussions at present.

Speaking of keeping things in context, I had a problem with the next diagram. It doesn't really fit in anywhere, but it is important.

Figure 1 - Harmonic Current Caused By Capacitor
The link to the Integral Energy report about the danger of harmonic currents will mean something to a very small number of people. To make it clear to all, I recorded the current waveform through a 2µF capacitor, connected directly across the mains. Normally, the voltage waveform looks a bit distorted, but otherwise it has no spikes or peaks or other 'nasties' on it. Compare the voltage waveform shown below to the current waveform above. This is a perfect example of the issue discussed in the paper, and shows just how easily the amplitude of harmonics can be increased by capacitance. The waveform shown is current, but that current will cause a similar shaped voltage to appear across the length of any cable, and higher capacitance will cause more problems because more current flows. The harmonic structure is quite visible - although analysis is more difficult. The ripples at the peak of the waveform are the main problem. Somewhat surprisingly, the kinks at the zero crossing followed by vertical transitions are simply the result of the slight voltage waveform clipping, phase shifted by the capacitor. The peaks and 'squiggles' that you see at the crest of the wave are harmonic 'magnification' - exactly as described in the Integral Energy paper.
Current through the 2µF cap should have been ~144mA at 230V input, but was measured at 166mA. That's a 13% increase of current, all caused by additional harmonics introduced into the mains waveform by non-linear loads elsewhere on the supply grid. The exact cause of the extra harmonics is not known for certain, but the referenced paper does give some clues.
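The expected current is easy to verify: for a pure sine wave, capacitor current is V × 2πfC. The sketch below also estimates how much of the measured 166mA is due to harmonics alone, under the assumption (mine, for illustration) that the harmonic currents add in quadrature with the fundamental.

```python
import math

def cap_current(v_rms, f_hz, c_farads):
    """RMS current through an ideal capacitor fed from a pure sine supply."""
    return v_rms * 2 * math.pi * f_hz * c_farads

i_ideal = cap_current(230, 50, 2e-6)   # ~0.1445 A (144.5mA) for a clean 50Hz sine
i_measured = 0.166                     # measured value quoted above

# If the harmonic components are uncorrelated with the fundamental they add
# in quadrature, so the harmonic-only RMS current is roughly:
i_harmonic = math.sqrt(i_measured**2 - i_ideal**2)   # ~0.082 A
```

Because a capacitor's impedance falls with frequency, even modest harmonic voltages drive a disproportionately large harmonic current - which is why the measured figure exceeds the textbook value by so much.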
On the basis of this simple test, we can say with certainty that the problem is very real, and that the situation is far more complex than we may have imagined. Likewise, we can be certain that as we increase the number of non-linear loads, we will increase the severity of the problems faced by power companies. As is to be expected, if they have to do something to fix the problem, we will pay for it.

I have run tests on as many of the small linear supplies that I can get my hands on, and the results are quite predictable. None achieve the expected outcome for no-load power consumption, but once placed in context it is easy to show how little difference these moves make in real terms.

Before continuing, let's compare the schematics of a linear and a switchmode supply. Note that the SMPS circuit is simplified fairly dramatically, because they are rather complex devices despite external appearances.

Figure 2 - Linear DC External Power Supply Circuit
There's not much to it - just a transformer with a built-in thermal fuse, a bridge rectifier and a filter cap. Because there is so little technology involved, there is also very little that can go wrong. A sustained short circuit at the output will simply cause the thermal fuse to open, which will happen at around 130°C. Although this means the supply is no longer usable, it happens very rarely. Most of us have transformer based supplies that are many, many years old, and still work fine.
Figure 3 - Simplified Switchmode DC External Power Supply Circuit
As you can see, even in simplified form, the SMPS is much more complex. More parts means more things that can (and will) go wrong, and the lack of a central primary heat source makes full thermal protection more difficult. The isolation barrier is bridged by either one or a pair of Y1 class capacitors - these are designed so they can never fail short-circuit, but I have my doubts that every single Y1 cap (C4 & C5 in the above) ever produced by anyone, anywhere (including fakes!) will provide adequate protection for 20+ years. Further problems caused by these caps are discussed in more detail below. Occasionally, Y2 caps are used. These are a lower specification, but it is apparently acceptable to use two Y2 caps in series rather than a single Y1 cap.
Electrolyte leakage from a failed filter capacitor could easily bridge the isolation barrier, making the output of the supply potentially lethal. While the chance of this (or other possible failure mechanisms) may be small, it's much more likely with an SMPS than any linear supply. The latter have been used for a very long time, with no fatalities recorded that I could find. It is worth noting that the RIS does not apply to medical applications, simply because the leakage current is far too high because of the Y1 capacitors that are used in almost all cases.
Figures 4 & 5 - Linear Power Supplies
The above photos show an AC supply (on the left ... which was strangely marked as being 3V DC), and a normal DC plug-pack, in this case rated for 12V at 1A output. As you can see, there's almost nothing inside. The transformers are wound on a split bobbin, so primary and secondary are side-by-side. This is done for safety, and also reduces the capacitance between the AC mains and the output. With the internal thermal fuse, these transformers are considered to be safe when used according to the datasheet.
Figure 6A - Switchmode Power Supply
No comparison in terms of complexity, and you can't see the underside of the board which is covered with surface mount components. This particular supply is for a digital camera, and is rated for 7V at 2.1A. This supply will not pass the new regulations either - it draws far too much power at idle (1.1W). In common with nearly all SMPS, the DC output is rather noisy, with 7mV RMS noise, but having noise spikes reaching as much as 50mV. These require additional filtering to clean up the noise. A typical linear regulator will have less than 1mV output noise, and with no high frequency noise spikes at all. Regulation of the SMPS output voltage is only fair - nowhere near as good as you will obtain with a high quality linear regulator.
Figure 6B - Mobile Phone SMPS - Bottom of Board / Figure 6C - Mobile Phone SMPS - Top of Board
The mobile (cell) phone charger shown above exceeds all requirements. Although you can't see all the parts on the copper side of the board very well, there are quite a few - all surface mount. In terms of complexity, it's less complex than the camera supply, but also has to supply a lot less power.
Another small SMPS I tested draws about 2.5mA at idle - that's not a lot of current, and represents 0.6VA. No-load power is about 0.4W, but this is difficult to measure accurately when the meter only has a resolution of 0.1W. The supply rating is 12V at 400mA, or 4.8W. At full output power, consumption rises to 9.4W, representing 51% efficiency. While many will be somewhat better than this, many will be the same or worse. A roughly equivalent transformer supply draws about the same at full load - the difference in real terms is tiny. And yes, like many such supplies, the output of the SMPS floats at 120V (see below for more on this subject).
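The figures quoted can be reproduced directly. A minimal sketch, using the numbers from the paragraph above and assuming a nominal 230V supply:

```python
MAINS_V = 230.0  # assumed nominal supply voltage

def efficiency(v_out, i_out, p_in):
    """Output power divided by measured input power."""
    return (v_out * i_out) / p_in

# 12V 400mA SMPS from the text: 9.4W in at full load, 2.5mA idle current
eff = efficiency(12, 0.400, 9.4)   # ~0.51, i.e. the 51% quoted
idle_va = MAINS_V * 0.0025         # ~0.58VA at idle, the '0.6VA' quoted
```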
Figure 7 - Linear Supply Current Waveform
The above is the actual captured current waveform from a cordless phone AC supply. The waveform is rather distorted, but has a very low harmonic content and is generally considered benign. The current measured at 17.5mA with no load, but the waveform does not change dramatically when the transformer is loaded normally (for example when plugged into the charger base).
Figure 8 - Switchmode Supply Current Waveform
In contrast, the current waveform from a SMPS is very spiky (this is also a direct capture of the measured current), and has harmonics that extend to quite high frequencies. Although this was not measured, any waveform with sharp transitions must have considerable high frequency content. The waveform shown is with the supply loaded to about half power. Unlike the transformer unit, the waveform does change considerably as load is increased - the current spike gets larger. Mains current was measured at 43mA RMS, but as you can see the peak current is about 160mA.
The voltage waveform shown is measured directly off the mains using a divider circuit. You can see how the wave shape is modified with 'flat-topping' caused by the myriad of switching power supplies connected to the grid - all drawing current at the peak of the mains voltage. Distortion was measured at 4.5%.
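One quick way to gauge how 'spiky' a current waveform is, without a spectrum analyser, is the crest factor (peak divided by RMS). A short sketch using the measured figures above:

```python
import math

# Measured figures quoted above for the half-loaded SMPS
i_rms, i_peak = 0.043, 0.160
crest_smps = i_peak / i_rms    # ~3.7
crest_sine = math.sqrt(2)      # ~1.41 for an undistorted sine wave

# A crest factor well above 1.41 means the current is drawn in narrow
# pulses, which implies substantial harmonic content.
```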
As mentioned above, I measured a number of supplies - both switchmode and linear. The results are shown below.
| Type | Voltage (Rating) | AC Current (No Load) | Power In (No Load) | Test Current | AC Current (Full Load) | Power In (Full Load) | Efficiency | PF |
|---|---|---|---|---|---|---|---|---|
| SMPS | 5V DC | 3mA | 0.4W | 1A | 72mA | 9.0W | 55% | 0.52 |
| SMPS | 5.7V DC | 1.3mA | < 0.2W | 710mA | 46mA | 6.5W | 70% | 0.58 |
| SMPS | 7V DC | 14mA | 1.1W | 1.8A | 130mA | 16.9W | 75% | 0.54 |
| SMPS | 19V DC | 14mA | 2.2W | 2.35A | 130mA | 16.9W | 75% | 0.54 |
| Linear | 6V DC | 26mA | 1.9W | 300mA | 23mA | 2.7W | 66% | 0.49 |
| Linear | 9V DC | 19mA | 1.4W | 200mA | 22mA | 4.0W | 45% | 0.76 |
| Linear | 9V AC | 20mA | 1.4W | 300mA | 24mA | 4.3W | 62% | 0.75 |
| Linear | 12V DC | 28mA | 1.6W | 400mA | 48mA | 10.0W | 48% | 0.87 |
| Linear | 12V DC | 21mA | 1.4W | 400mA | 53mA | 10.6W | 45% | 0.83 |
Note: All ratings and measurements are for 230V 50Hz mains supply. Test current is as close as practicable to rated output current
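The efficiency and power factor columns can be recomputed from the raw measurements. The sketch below checks the first SMPS row; small discrepancies against the tabulated values are to be expected, since the meter resolution is limited and the actual mains voltage at the time of measurement may have differed from the nominal 230V assumed here.

```python
MAINS_V = 230.0  # assumed nominal mains voltage

def check_row(v_out, i_test, p_in, i_ac):
    """Recompute efficiency and power factor for one table row."""
    eff = (v_out * i_test) / p_in    # output power / input power
    pf = p_in / (MAINS_V * i_ac)     # real power / apparent power
    return eff, pf

# First SMPS row: 5V at 1A test current, 9.0W in, 72mA mains current
eff, pf = check_row(5.0, 1.0, 9.0, 0.072)   # ~0.56 and ~0.54
```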
A motley assortment, but the linear supplies are representative of those likely to be found in most homes. They cover a range of ages, from only a couple of months to quite a few years. As you can see, the power factor of most of the linear supplies is acceptable, but overall efficiency is not good. The miniature 5V SMPS also has rather poor efficiency - there are losses that simply exist regardless of how much we'd prefer they didn't.

Most of the linear supplies have a reasonable power factor at full load, although there is one anomaly, having a PF of only 0.49. Even this is not as potentially harmful to the power grid as an SMPS, because the waveform distortion contains predominantly low-order harmonics.

For most of these supplies, efficiency is a minor issue. They are rarely used to maximum capacity, and may only ever supply less than half their rated current in normal use. Using supplies at less than full load has a greater effect on the efficiency of switchmode supplies, and may become worse still because of their inability to handle transient loads above the maximum current rating. As a result, a larger than expected supply may need to be used to be able to supply any high transient current drawn by equipment.
Figure 10 - Notebook PC Supply Current Waveform, No Load
An excellent example is the supply for a notebook (laptop) PC. At idle, the current waveform is as shown above - it is very spiky, and shows that there are lots of harmonics generated. While the supply is idle, it's actually not too bad, but only because the current level is quite low. When called upon to do some work the situation is very different.
Figure 11 - Notebook PC Supply Current Waveform, 2.3A Load
The waveform now has a major (and very sharp) current spike, which occurs at the very peak of the voltage waveform. The harmonic content of this waveform is pretty nasty - it's so bad that the distortion reading looks impossible. My distortion meter insists that the THD is about 80%, and the simulator can be configured to include a distortion meter that gives roughly the same result (using the same technique as a real distortion meter).
It is important to understand the processes that all interact here. Many people have found that sound recorded by a notebook PC is unacceptable if the supply is connected, so run on batteries while recording. It's often not the PC supply causing the problem directly - the harmonics get into the external equipment and ruin the signal to noise ratio before it even gets to the PC.
Unfortunately, the logic used in the arguments presented for a ban of 'inefficient' power supplies (or lights) is not scientific but emotional. Energy saving data will not usually be given for a single supply, but the total number claimed (or dreamed up) in use will be used to inflate the outcome. If there are a million 1W supplies, and each can be made 50% efficient instead of (say) 25%, we 'save' 250,000W - assuming they are all in use. Now we can assume they are all powered 24/7, so we can save 2.19GWh a year. Because that sounds like a big number and the CO2 number will also be very impressive, people will "ohhh" and "ahhh" appropriately.
No-one seems to be able to agree about the amount of CO2 produced per kWh, but around 900g/kWh seems reasonable *. That makes 1,970 tonnes of CO2 for 2.19GWh. This is indeed an impressive number ... if viewed in isolation. If we break it down, each unit saves 0.25W, or 6Wh / day and a few grams of CO2. This is not at all impressive, so naturally you'll never see it described that way.
* There's not a lot of agreement about the amount of CO2 produced per kWh, but the figure used seems reasonable.
Note: The referenced RIS document expands on my little example by assuming that Australia has 34,000,000 external supplies (34 million - 1.6 supplies for every person in the country), and that every single one of them is operating continuously at well over 3W. This is obviously nonsense!
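The arithmetic behind the example is easy to reproduce. This is only a sketch of the reasoning above, taking the text's figures at face value (one million supplies, 0.25W saved per unit, 900g of CO2 per kWh, every supply powered 24/7):

```python
CO2_G_PER_KWH = 900          # assumed figure, as discussed in the text
HOURS_PER_YEAR = 24 * 365    # assume every supply is powered 24/7

units = 1_000_000
saving_per_unit_w = 0.25     # per-unit saving from the example above

total_kwh_year = units * saving_per_unit_w * HOURS_PER_YEAR / 1000  # 2.19e6 kWh = 2.19GWh
co2_tonnes = total_kwh_year * CO2_G_PER_KWH / 1e6                   # ~1,970 tonnes

# The same saving viewed per unit is far less impressive:
wh_per_unit_day = saving_per_unit_w * 24                        # 6Wh per day
co2_per_unit_day_g = wh_per_unit_day * CO2_G_PER_KWH / 1000     # ~5.4g of CO2 per day
```

The headline figure and the per-unit figure describe exactly the same saving - only the framing differs.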
** I know that human breathing is irrelevant and 'unscientific', but it's interesting and helps put everything into perspective.
Let's compare the 'saving' above to the energy used by one of the simplest household activities imaginable (and I could rework this 'RIS style' so that all 21 million Australians do the same thing - even those less than 5 years old). A million people open their fridge door a few times a day, thus turning on one or two 25W lights for a total of (say) 5 minutes each day. This also involves losing some of the cool air in the process, so the fridge compressor has to run to restore the temperature. We will completely ignore the energy used by doing this!
Having retrieved the milk, we'll assume that only once per day our 1,000,000 people will want to boil water to make tea or coffee. The kettle will be typically 1kW, and takes just over 3 minutes to boil 500ml of water. That's 51.6Wh / day per person, or almost 19GWh / year (as well as 17,100 tonnes of CO2) - just from 1 million people making a single cup of tea/coffee per day.

That's over 8 times as much energy as would be saved by swapping over to 1 million more efficient power supplies. As we can see, banning tea and coffee will save far more energy than most other measures suggested put together! This is especially true of people who have more than one cup a day - shame on them!
We must also consider the cost to the environment and the users when supplies fail and have to be replaced. There's the obvious waste of resources in the failed (and rarely recycled) supply, plus the cost of making, shipping, storing and retailing for the new supply. I know it's not fashionable to still be using last week's phone or tablet, so most people will never keep these supplies for long enough for them to stop working. Many (perhaps even the majority) will not be recycled, and will go to land-fill. Some will be stuck in a drawer for a while first, but essentially their fates are sealed.

The so-called '1W initiative' (where all appliance standby power is reduced to 1W or less) is a bit like patching a small hole in a roof, but failing to see that half of it is missing altogether. Not that I object to measures that will save power, but the standby power of plug-pack and similar supplies is a tiny fraction of the total power that's lost overall, and to demand the use of switchmode supplies and fail to demand PFC (power factor correction) is foolish in the extreme. That safety and longevity may be compromised as well is not at all satisfactory.

While many devices may be able to pass the basic requirements, some disable PFC when idle in order to do so (see Electronic Design - this is a suggested method to reduce power). Not just power, but VA (also called 'apparent power') needs to be reduced at the same time, or the whole process is marginal and results only in more harmonic noise on the supply grid. At present, the measured THD (total harmonic distortion) of the 50Hz mains measured at my workbench is 4.5%.
It's not that there's an inherent issue with reducing wasted power, but measures have to be taken in context with overall usage and how much can really be saved by reducing the standby consumption of devices that are already quite low anyway. There are many other things that can be done that will make far more difference, such as minimising the power used for street lighting and carparks. There's a railway commuter carpark opposite my house, and it uses what look like mercury vapour lights that are on all night, even after trains have stopped running. If the lights were converted to LED and used motion sensors, they could be dimmed when there's no-one using the carpark, and only those lights where movement was detected would switch back to full brightness. The potential saving would be far greater than my total consumption of electricity for the day.
It is worth noting that most of the measures being looked at target individual households, yet household energy usage is small compared to that of commerce and industry. A single small office block may use several hundred fluorescent lamps (not to mention PCs, boiling water urns, etc.), which are often left on all night. At 36W per lamp, and (say) 250 lamps left on all night for 'security', that amounts to 9,000W wasted for at least 8 hours a day. Over 26,000kWh per year is squandered and almost 24 tonnes of CO2 is liberated - and that's for one small office block. (See, I can invent a bunch of numbers too.)
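The office-block figures can be reproduced the same way. The CO2 factor of ~0.9 kg/kWh below is an assumption, chosen to be consistent with the other figures quoted in this article:

```python
# Office-block lighting waste: 250 fluorescent lamps at 36 W each,
# left on for 8 hours a night, every night of the year.
lamps, watts, hours = 250, 36, 8
total_w = lamps * watts                       # total lighting load
kwh_year = total_w * hours * 365 / 1000       # annual energy squandered
co2_tonnes = kwh_year * 0.9 / 1000            # ~0.9 kg CO2/kWh (assumed)

print(f"{total_w} W, {kwh_year:,.0f} kWh/year")   # 9000 W, 26,280 kWh
print(f"{co2_tonnes:.1f} tonnes CO2/year")        # ~23.7 tonnes - 'almost 24'
```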
To put this into full perspective, consider that the same regulatory bodies in Australia have recently reduced the allowable power dissipated by small (63 litre) hot water systems to 1.33kWh / day. That means an average loss of 55W, all day, every day, with no hot water being used at all. The losses are all due to heat loss through pipes and insulation. See MEPS Requirements for Electric Storage Water Heaters for full details. Compare that to the minuscule power dissipated by a few external power supplies. On the one hand they quibble about 2 or 3 Watts (a few Wh/day), and on the other consider a heat loss of 1.33kWh/day to be acceptable. The fact is that it probably is 'acceptable' if one just looks at the economics - it will be expensive to provide additional insulation and other measures to reduce heat loss further, but the potential for real savings is immeasurably greater. The power lost by one or two small transformer supplies operating continuously is less than that needed to re-heat the water in the heater after washing one's hands just once per day. With 1.33kWh/day, you could run over fifty plug-pack wall supplies continuously and still be well below that figure.
None of this has anything to do with reality - it is all politics. Politicians (and bureaucrats) need to be seen to be doing 'things' as expected by their constituents (or the political party in power at the time). By making lots of noise about something utterly insignificant and using statistical 'evidence' to prove how much difference it will make, the public is easily hoodwinked into believing that things are changing and/or that the government is serious. The overall effect of applying minimum performance criteria to something as trivial as a small external power supply is almost certainly between zero and negative in the long term.

Small devices such as mobile (cell) phone chargers will normally be switched on when the phone needs charging - why would it be left powered if it's not doing anything? It probably doesn't help much that in the US (among other places), power outlets normally do not have a switch. All standard wall power outlets sold in Australia are fitted with a switch, so it's easy to turn things off without even removing them from the outlet.
Some Australian power points (aka GPOs - general purpose outlets, or wall outlets) are fitted with a neon indicator. These are fairly uncommon now, but were popular for a while. Each neon draws around 50mW (0.05W) for no useful work. Perhaps the regulators should introduce the death penalty for those still using such abominations!
To appreciate just how silly this ruling really is, we need to look at some real examples. There is no point quoting hypothetical figures and multiplying by the assumed number of external power supplies that may exist. To see the reality we need to take measurements such as those shown above, and compare the power usage with normal household activities.

If an appliance draws 100W, that is simply 100Wh if it's on for 1 hour. The appliance will use 1kWh if switched on for 10 hours. An appliance that uses 1,000W (1kW) needs to run for just 1 hour to use 1kWh. While simple, this needs to be understood for the rest to make sense.
Most average sized transformer based external supplies draw around 20mA at idle (according to specifications and measurements shown, and assuming 230V 50Hz). This works out to 4.6VA, and the power factor at idle is typically around 0.3 - an average of 1.44W is dissipated with the supply just connected, but with no load. That's 34.56Wh / day, or 0.83 cent a day to run, assuming $0.25/kWh. CO2 generated will be under 16 grams per day. Even if this supply were disconnected permanently, the maximum saving is $3.03 a year. Reducing standby power to 0.5W will save $1.97 / year, and a tiny amount of CO2.
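For those who want to check the working, here is the idle-supply calculation laid out in Python. The power factor of 0.313 is chosen to give the 1.44W figure used above; the yearly cost works out a few cents higher than the article's $3.03, which comes down to rounding at an earlier step:

```python
# Standby cost of a typical transformer supply: 20 mA idle at 230 V,
# power factor ~0.3, tariff of $0.25/kWh as assumed in the text.
volts, amps, power_factor = 230, 0.020, 0.313
tariff = 0.25                    # dollars per kWh

va = volts * amps                # apparent power (VA)
watts = va * power_factor        # real power dissipated
wh_day = watts * 24
kwh_year = wh_day * 365 / 1000
cost_year = kwh_year * tariff

print(f"{va:.1f} VA, {watts:.2f} W real")  # 4.6 VA, 1.44 W
print(f"{wh_day:.1f} Wh/day, {kwh_year:.1f} kWh/year, ${cost_year:.2f}/year")
```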
If we compare the power of existing supplies with other normal activities at home, we can see just how futile this particular measure really is ...
Appliance / activity                     | Wh / day | kWh / year | Cost / year | CO2 / year
Conventional transformer supply, 1.44W   | 34.6     | 12.6       | $3.03       | 11.4kg
Boil 1 litre water once/day (Note 1)     | 103.2    | 37.7       | $9.41       | 34kg
75W lamp, 4 hours                        | 300      | 109        | $27.37      | 68.6kg
18W fluorescent lamp, 4 hours            | 72       | 26.3       | $6.58       | 23.6kg
Reheat 2 litres after washing hands      | 81       | 29.7       | $7.42       | 26.8kg
Reheat 20 litres in HWS (Note 2)         | 814      | 297        | $74.27      | 268kg
Electric stove, 2kW, 30 min.             | 1,000    | 365        | $91.25      | 328kg
Clothes dryer, 2.4kW, 150 min. (Note 3)  | 6,000    | 312        | $78.00      | 346kg
TV set (large), 4 hours                  | 600      | 218        | $54.73      | 197kg
Clock radio, 24 hours (5W)               | 120      | 43.8       | $10.95      | 39.4kg
Human dissipation & breathing [ 1 ]      | 2,400    | 876        | n/a (Note 4)| 365kg (Note 5)
Notes:
1 - It takes 4.1868 joules to raise the temperature of 1 gram of water by 1°C. A joule is one watt-second (1kWh = 3,600,000 joules)
2 - HWS - Hot Water System, heat from 20°C to 55°C (lower than normal temperatures are recommended for maximum savings)
3 - Based on a single 150 minute cycle per week, not including cool-down period
4 - See grocery bill
5 - Human carbon dioxide generation is about 410g / kWh. Adult humans dissipate around 100W 24/7 (not relevant, but an interesting comparison).
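As a sanity check, the water-heating rows of the table can be reproduced directly from the constant in Note 1, with both rows heating water from 20°C to 55°C per Note 2:

```python
# Check the water-heating rows from Note 1's figure of 4.1868 J/g/degC.
def heat_wh(litres, delta_t_c):
    """Energy in Wh to raise `litres` of water by `delta_t_c` degrees C."""
    return litres * 1000 * 4.1868 * delta_t_c / 3600   # 1 Wh = 3600 J

# 'Reheat 2 litres after washing hands' and 'Reheat 20 litres in HWS',
# both heated through 35 degC (20 degC to 55 degC):
print(f"{heat_wh(2, 35):.0f} Wh")    # 81 Wh  - matches the table
print(f"{heat_wh(20, 35):.0f} Wh")   # 814 Wh - matches the table
```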
As you can see, just one human staying alive uses more energy and liberates more CO2 than any of the other activities listed. If people exert themselves by exercising or working, this figure increases. Perhaps the governments of the world might consider banning all forms of exercise, or perhaps mandate that we all breathe half as much. Yes, I know this is silly, but it's no sillier than banning little power supplies whose overall contribution is so low as to be negligible.
Concentrating on very low power devices is easy for governments, and they can wave their silly statistics around and impress the populace with their forward thinking. What is being done about the really big power wasters? Exactly the same thing that is done about large corporate water wasters - nothing. Because they are large corporations, they have some political muscle. Governments almost anywhere can be swatted like flies by some of the huge multi-nationals, so they are left alone to produce the same silly and meaningless 'power saving' measures that so amuse the government regulators.

Everyone gets to feel as though something is being done, and can relax knowing that the government has our best interests at heart <choke>.
When was the last time you saw an official report similar to The Australian Government's RIS document, where the savings were compared to the total power consumed? Never? Me neither.
According to best estimates I could find (at the time of the last update in 2010), Australia generates (and uses) over 128,000GWh (128TWh) per annum. If we assume that an extremely generous 50% of all external power supplies that exist in Australia (34,000,000 according to 200702-ris-eps, page 61) are powered on 24/7 and dissipating 1.44W, that's a total of 17,000,000 x 1.44 = 24.5MW. Annual consumption is 214GWh. This sounds like an awfully large amount of power, and that's why the total power consumed is never mentioned - 214GWh is 0.17% of the total energy used - insignificant compared to other possible savings. Transmission losses alone will exceed this amount by at least 40 times. Note that the estimates used here are exceptionally generous to the legislators, but strangely, the RIS quotes 1,000GWh for consumption by existing supplies. To achieve that figure, every supply (all 34 million of them) in Australia would need to be connected 24/7 and be drawing 3.35W all day, every day. This is clearly complete rubbish.
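The national figures above, and the wattage implied by the RIS's 1,000GWh claim, can be checked in a few lines:

```python
# National standby estimate: assume half of the RIS's 34 million supplies
# are connected 24/7, each dissipating 1.44 W.
supplies_on = 34_000_000 // 2
watts_each = 1.44
total_mw = supplies_on * watts_each / 1e6
gwh_year = total_mw * 8760 / 1000             # 8760 hours in a year
share = gwh_year / 128_000 * 100              # of 128,000 GWh national use

# What each supply would have to draw to reach the RIS's 1,000 GWh:
implied_w = 1000e9 / (34_000_000 * 8760)

print(f"{total_mw:.1f} MW, {gwh_year:.0f} GWh/year, {share:.2f}% of total")
print(f"RIS figure implies {implied_w:.2f} W per supply, 24/7")   # ~3.35 W
```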
In reality, my figures are much higher than the real amount. It is very difficult to even estimate the final numbers, because no-one really knows what most people do with external supplies. According to the data in the RIS, each and every Australian household has 4.15 external supplies in continuous use. This is almost certainly nonsense - I'd be very surprised if the number were even half that. There will be a vast number of households that may have one or two supplies, and many won't have any at all. There will also be many dwellings where there are more - I have at least 10 in use (and probably 30 or more in a box - do they count?). The number appears to have been grossly inflated to make the report look 'good' and come up with some impressive numbers.

The significance of any claimed saving (either for individuals or the environment) is dramatically diminished - and that's using an unrealistically large number of supplies powered on permanently. I don't know about you, but to me, this is so futile as to defy belief. The cost to manufacturers, importers, and the public will outweigh any financial advantage due to power savings by a huge amount - all to achieve nothing.

The individual household saving will be about 12 cents a week ($6.18 a year), based on reducing the average standby power of 5 external supplies from 1.44W to 0.5W. If we assume $1500 per annum electricity usage, the saving is 0.4%. Does anyone think that they will improve their lifestyle significantly with such a saving? Is this going to have any effect whatsoever on greenhouse gas emissions? At 18kg per annum, we can safely say that it will go completely un-noticed, even if every household in Australia made the change tomorrow.

Concentrate on things that make a real difference. This is interventionist government at its very worst.

I also suggest that you look at the replacement cost of failed switchmode supplies, but not including any collateral damage to equipment as that is too difficult to quantify. If just one external SMPS per household fails in a two year period (yes, that's just an educated guess), the annual cost to the householder is at least 3 times the cost of operating a conventional linear supply for one year. The latter have an indefinite life, but 10 years is a fair estimate. When the SMPS fails, 99% of householders will just drop it in the bin, because there are few opportunities for recycling such small devices.
ESD (electrostatic discharge) used to be concerned only with the typical charges that build up on equipment (and people) by purely conventional electrostatic generation methods (walking across carpet, sliding on vinyl chairs, etc.). The hazards with SMPS have been around for a while, but any regulation that makes them the only choice will vastly increase the chances of equipment damage. While the voltage from a traditional ES discharge is usually very much higher than you'll get from an SMPS, the available charge (and current) is a great deal less, yet it is still a very real problem.
Something that many people have discovered is equipment failure where switchmode supplies are used. The most common failures are with equipment that has input circuits (typically audio/visual gear, but a great deal of other equipment is also at risk). They power up the equipment, then connect input leads (or change input leads while the unit is connected to the supply), and it doesn't work any more. This has never been a problem with linear supplies.

Almost all equipment powered by an SMPS is not earthed (grounded), including a lot of equipment that has an internal power supply. Almost invariably though, this equipment ends up being earthed by being connected to other equipment that is earthed. Never mind the fact that it is technically illegal (at least in Australia) to earth double insulated products - the fact remains that it happens all the time because the consumer is unaware that there is a problem.
So, if you have a new set of powered PC speakers, they will (under these new rules) use a switchmode power supply. Your desktop PC will be earthed. If you connect the speakers to the power supply before connecting the input leads, the PC speakers (including the input circuits and leads) will be floating at ~115V AC (assuming 230V mains). The SMPS DC output is connected back to the mains with a pair of (usually) 1nF caps, so floats at half the mains voltage. This applies to almost all SMPS, because without the caps most will not pass radiated EMI requirements.

Figure 12 shows the residual voltage developed across a 5.1k resistor between chassis and earth. This particular measurement was taken from a TV set-top box, and measures 3.8V peak to peak, or 0.86V RMS. 168uA isn't much current, but remember that without the 5.1k resistor the voltage is ≈115V RMS, and voltage peaks are about 160V (either polarity). The voltage is high enough to feel, and there is more than enough current available to damage an input circuit if it happens to be connected while the AC voltage is at (or near) its peak. Also, note the high frequency content (the thick fuzzy sections). This noise is injected into the signal common (earth), and can easily generate considerable noise in circuitry.

In particular, all mains noises - clicks, pops, whirring noises, etc. - are injected directly into the earth (ground) circuit of the connected equipment. For low level signals (guitar effects pedals and phono preamps for example), external transformers are common to get rid of such noise, but using an SMPS will bring them back - probably much worse than if the transformer were inside the same housing. An initial test with a guitar amp showed that the noise level introduced by a switching supply connected to an effects pedal (aka 'stomp box') made the system unusable - the background noise level was increased by at least 40dB, going from the traditional slight hiss to a nasty (and loud) combined hum and buzz. A linear supply made almost no audible difference - perhaps a couple of dB at the most.
The available current caused by the Y1 caps is small, but many people have reported getting a tingle or a bite from such equipment. With 230V mains, the current is only about 75 to 200uA - in theory, this should be below the threshold of feeling. Should you make a solid equipment connection right at the peak of the AC waveform, an instantaneous current spike is available, limited only by series resistance. The spike can easily be well above the current needed to destroy any opamp's input circuit or even a sound card output.
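The steady-state leakage current is set by the reactance of the Y1 cap at mains frequency. A minimal sketch, assuming 230V 50Hz mains and a single cap from live to the (momentarily grounded) output - the 2.2nF value is an assumption, being another common Y1 size:

```python
import math

# Leakage current through a Y-class cap from live to a grounded DC output.
def leakage_ua(volts_rms, freq_hz, cap_f):
    xc = 1 / (2 * math.pi * freq_hz * cap_f)   # capacitive reactance in ohms
    return volts_rms / xc * 1e6                # current in microamps

print(f"{leakage_ua(230, 50, 1e-9):.0f} uA")    # ~72 uA for a 1 nF cap
print(f"{leakage_ua(230, 50, 2.2e-9):.0f} uA")  # ~159 uA for a 2.2 nF cap
```

These figures sit in the 75-200uA range quoted above; the exact value depends on the cap(s) actually fitted.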
The instantaneous current depends only on the impedance of the wiring, and can exceed several amps if the impedances are all low enough. Note that the current spike also has the ability to damage the output circuit of an FM tuner, CD player, or other signal source. Needless to say, damage so-caused will not be covered by any guarantee. Even where a resistor is used on the output stage (typically 100 ohms), you can still get a 1.6A peak current if you connect at the peak of the AC voltage.

The current spike can easily remain above 1A for around 60ns, and since there is a peak voltage of over 160V available at the time, it has enough energy to cause real damage. Many people have been caught by this just using double insulated A/V equipment with an internal SMPS, and as transformer based units disappear the problem will get worse. Although there is a tiny amount of capacitance between the mains and secondary of a small (conventional) transformer, it is dramatically less than that from any SMPS using the caps. Measured voltage with a 10Megohm oscilloscope probe showed less than 10V RMS with any linear supply I tested, and was only a slightly distorted sinewave with no HF noise of any consequence. Compare this with 120V RMS from an SMPS tested the same way!

This is yet another annoyance - doubly annoying if it damages an expensive sound card or other signal source. There are already plenty of complaints on forum sites where exactly this problem has occurred, and they will become more common as transformer based supplies are phased out.
You can be excused for thinking that the peak voltage available is half the peak AC voltage, so for 230V mains, the peaks are 325V, and the maximum voltage at the supply output is ~160V ... sounds right? Actually, that's a 'maybe' - if you are lucky. There's a big 'but' in there though, and it takes a bit of lateral thinking to get there.
Imagine that as you make the connection, a momentary contact is made at the peak of the waveform. You get a tiny (usually invisible) arc, and that seems to be the end of it. Unfortunately, what has really happened is that the Y1 caps have now charged up. Should the next momentary contact be made at the opposite AC peak, you have the full 340V available. This is best shown with a simulation, because even a 10Megohm oscilloscope input causes the voltage to collapse too quickly ...
Figure 13 - Voltage at DC Output of SMPS
The voltage seen at the beginning is the normal output, as set up by the capacitors. At 45ms, the DC output of the supply is momentarily connected to earth via a resistance. You can see that the entire waveform then shifts downwards, because the capacitors are now charged with DC (which causes the offset). At 55ms, the output is again momentarily connected, but instead of the expected 170V, there is now 340V ... the caps are charged to 170V DC, and have an additional 170V from the AC supply at the waveform peak. There are three brief connections shown (45ms, 55ms and 65ms), after which you can see that the residual AC waveform now varies between close to zero volts and -340V. This will remain for as long as the caps stay charged - a few seconds at least. The DC component can range between zero and 170V, depending on the exact time of the last contact.
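The doubling mechanism can be sketched numerically. This is an idealised model, not the circuit simulation used for Figure 13: the floating output simply follows half the mains voltage, and a momentary ground contact charges the caps so that the output is offset by minus the voltage at the instant of contact (cap discharge between contacts is ignored):

```python
import math

# Idealised sketch of the Y-cap 'voltage doubling' described above.
V_PEAK_HALF = 170      # half-mains peak per the text (~240 V mains)
F = 50                 # mains frequency, Hz

def v_float(t):
    """Floating output voltage before any contact is made."""
    return V_PEAK_HALF * math.sin(2 * math.pi * F * t)

# First brush contact happens to land on a positive peak (t = 5 ms),
# charging the caps and offsetting the whole waveform downwards:
offset = -v_float(0.005)

# Next brush contact at the opposite peak (t = 15 ms) sees the AC
# excursion plus the stored DC offset:
v_at_contact = v_float(0.015) + offset
print(f"{v_at_contact:.0f} V")   # -340 V: double the 'expected' 170 V
```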
It is actually surprisingly easy to achieve this as you insert a connector into its socket. There are invariably short periods of connection and disconnection when any circuit is connected or disconnected. For this very reason, 'de-bounce' circuits are needed for digital inputs that are activated by mechanical switches. Whether you manage to make the connection without damaging something is purely a matter of luck, and if you rely on luck, it will run out some day.

In case you are wondering ... this isn't weird science or conjecture of any kind, just physics doing what it must.
A very basic (and admittedly rather crude) test was performed on a BC546 transistor. These are fairly rugged small signal devices, and one would expect them to be pretty immune to most external influences - after all, it is rare in the extreme for one to die unless severely overloaded. As a 500mW transistor with a maximum collector current of 100mA, it is vastly more rugged than any opamp input device. I tested the gain at 125, then subjected the base to a few touches of the earth (negative) side of the output connector of a small (unearthed) SMPS. The transistor's emitter was earthed, and transistors are used in much this fashion as 'real world' interfaces to digital circuits because they are so hard to destroy.
After the test, I checked the gain again - it had dropped to 30. The transistor had been ruined, simply by connection to an external switchmode supply. Other tests would reveal that noise performance would most likely also suffer, but there's no point if the transistor is rendered useless anyway. Had this transistor been used as an 'indestructible' interface to a piece of earthed equipment (Unit A), then connected to something else powered by an SMPS but not earthed (Unit B), that would be the end of Unit A - its input circuit has been destroyed simply by making an otherwise perfectly normal connection, as will be done by countless (and hapless) consumers who will be unaware of any likely problem. The packaging of any SMPS you will find certainly makes no mention of it as an issue.
It is a crude test, because no limiting resistors were used, but even a 1k 'protection' resistor would easily allow anything up to 340mA instantaneous base current - well in excess of the absolute maximum collector current specified in the data sheet (maximum base current is not specified, but is typically about half the peak collector current). Many input circuits have little or no protection components where discrete transistors are used, simply because there has never been any real need to do so in the past. Causing the emitter-base junction of a transistor to enter the zener breakdown region generally causes loss of gain and increased noise.

A second transistor was tested the same way - its gain fell from 150 to zero!

If a discrete bipolar transistor can be killed so easily, then we can take it as read that bipolar input opamps will also be killed, because the transistor element in the IC is much, much smaller than that of a discrete component. FET input opamps don't stand a chance - even with typical protection circuits in place (usually just a series resistance at the opamp's input). This protection has always been sufficient before, but may not be enough if linear supplies are no longer available.
While this is happening, US chip makers are claiming that the existing ESD (electrostatic discharge) limits and the level of protection they need to include are arcane, outdated and 'overkill'.

The Industry Council on ESD Target Levels is working on a white paper at the International Electrostatic Discharge Workshop, which convenes 14-17 May in Lake Tahoe, CA, in support of a proposal to reduce on-chip ESD stress target levels by more than half. The reduction is supposed to lower cycle times and costs for chip makers, who are struggling to meet the current ESD levels in new designs. According to the council, those levels are outdated and represent 'overkill', causing unnecessary debugging time, IC redesigns and product delays. The group maintains that its proposal will not compromise quality or performance. See EETimes article for more details. Of course, product makers are complaining that the IC manufacturers just want to make ESD protection someone else's problem. Given that the ESD from a switchmode supply using Y1 caps for EMI compliance can kill a BC546 discrete transistor, the tiny devices used in many ICs don't stand a chance.

So, two BC546 transistors were completely destroyed by 'zapping' their bases from the ground lead of a small SMPS. What else? I also tried a few opamps - 2 LM4558 ICs died in the interests of testing, as did a TL072. The test jig was wired up on a piece of Veroboard, and each half of the opamp was configured with a gain of 2 (using 10k resistors) and with a 10k resistor from the non-inverting inputs to ground. Outputs were isolated using 100 ohm resistors.
No input stages survived being touched a few times with the ground lead of the SMPS. The common failure mode for the LM4558 opamps was that the output would swing to the positive supply. The TL072 was the opposite, but this is not necessarily what will happen every time. Opamp outputs actually survived with the 100 ohm resistor in place, but the spikes were very visible on the oscilloscope. It is probable that some degradation would occur with each zap, so while the device may survive initially, it will have reduced performance and will fail sooner rather than later.

When the same test was run without the 100 ohm output resistor, the outputs of both opamp types could be killed easily enough. A few touches with the ground lead of the SMPS was all it took. Output stages are naturally more robust than input sections, but they still died.
Including series resistance in output stages is recommended procedure to prevent oscillation, but including resistance in input stages can cause an increase of noise (because of the higher effective input impedance). While many of the projects on the ESP site use series input resistors, they are generally fairly low values (between 1k and 2.2k). From the tests I did, this is enough to save even the FET input stage of the TL072 - at least for a rather limited number of test zaps. I did not test for increased noise levels.

The opamps tested are old technology, and are much simpler and more robust than those used in large scale integrated (LSI) circuits. I didn't test any CMOS devices. Simple CMOS logic ICs are fairly robust, but even they are known to be static sensitive - far more so than analogue opamps. Modern ADC and DAC chips will be far less tolerant because of the ever decreasing size of individual components in LSI designs.
I am playing 'devil's advocate' quite deliberately here. The failure modes described may be 1 in 1 million or less, but that's one too many. A linear external supply is generally considered to be extremely safe, in that there is no likely failure mode that can make the supply a potential death-trap. I have never heard of an injury, fatality or fire caused by the failure of a linear power supply, because they are so simple that full protection is easily accomplished with minimal difficulty or cost penalty.

Something that is an unknown at present is the end of life failure mechanism for small SMPS. I have seen many switchmode supplies (in equipment) that have failed in a rather spectacular manner, but there are few indicators at present as to what typical small SMPS will do when they fail. Because of the number of parts involved, it is impossible for anyone to predict which one will fail first, and it is probable that there will be multiple different failure mechanisms. Although the specific details have not been made public, there was a fatality in Australia in 2014 when a young woman was electrocuted by a mobile phone charger. Other fatalities worldwide have also been reported.

The worst-case is for an electrolytic capacitor to explode. This is not at all uncommon, and the results can vary from nothing else happening, to scattering burning paper and shredded aluminium foil throughout the equipment. Having seen both on a number of occasions, I know that either is possible. Should paper and foil be scattered throughout the confined insides of a small SMPS, there is a chance that some of the foil could bridge the isolation barrier. Have a look at the photos (above) of the insides, and you can see that there really isn't very much clearance, so even a small piece of foil is enough.

Should this occur, the output could easily become connected directly to the live mains or the high voltage rectified DC - that this could be a fatal failure is obvious. Anyone coming into contact with the 'safe' DC output is at serious risk of electrocution. Leaking electrolyte from an electrolytic cap can easily have the same effect (it is conductive, and needs to be for the cap to function).
A very common failure for SMPS is for the output filter capacitors to dry out through progressive loss of electrolyte over the years. When this happens, the DC output voltage will develop a high ripple voltage, and the average DC voltage often increases (anything up to 4:1 is likely). Not only will this commonly destroy the connected equipment, but any remaining capacitors in the output circuit are now at risk of explosive failure or rapid venting of electrolyte. No (cheap) SMPS I have encountered so far appears to have any protection whatsoever from an output over-voltage failure that could result in either severe internal damage to the supply itself or to the external device being powered.
Two notable SMPS failures I have experienced recently involved the internal supply for a PC and a DVD player. In both cases, the output voltage failed high - killing the PC motherboard and the entire DVD player's internals. The PC supply (and this was only the auxiliary 5V supply section - the PC wasn't even turned on at the time) generated a great deal of charred PCB material, as well as soot and smoke. It also managed to destroy a substantial protective diode on a disk drive. Is there anything in a small external SMPS to prevent the same thing happening? Not that I've seen so far. The extra circuitry needed will increase the cost and requires extra PCB space - two commodities that are at a premium for low cost devices.

Quite frankly, I don't consider the circuitry or isolation barrier provided in any external SMPS I've seen so far to be sufficient to prevent a breach, regardless of how the device chooses to end its life. There are just too many possibilities, because there are so many individual parts and so many different ways the supply can fail. The fact that supply failure can also cause the device it powers to fail adds yet another level of unpredictability. While a cordless phone (for example) may never be expected to fail in a spectacular manner with a linear supply, what happens when the supply voltage is doubled (or more) because of an external supply failure? I doubt that this has ever been tested, because at present there is no need to do so where a linear external supply is used.
I consider the two main issues to be possible electrocution and fire hazard. Apart from notebook PCs and a few other devices, the use of external SMPS has not been great up to the present. The supplies for notebook PCs are typically relatively expensive, but really cheap SMPS are fairly new, so (at the time of initial publication - 2007) there were comparatively few of them. Statistical data are pretty much non-existent ... search as I might, I couldn't find anything of any value. As noted above, the chance of either is low, but once there are millions of SMPS all made for the lowest possible cost in use, it's easy to envisage that a catastrophic failure is almost inevitable. Consider too that the PCBs used nearly always use the cheapest phenolic resin material available. I have yet to see a small SMPS using a fibreglass PCB, although few professional products use anything else.
+ +While there is quite a bit of information on the Net regarding common SMPS failure modes, none that I saw included small supplies in the 1 - 50W range as will become common when the ban is imposed on 'low efficiency' linear supplies. The majority (predictably) refer to computer supplies, or TV, DVD or power amplifier supplies. Almost all that you look at will state that there are many possible causes for failure, but some are more common than others.
+ +We already have a deplorable situation with most small SMPS perfectly capable of killing equipment because of the Y1 caps fitted for EMI suppression. While the authorities insist that this is 'perfectly safe', I remain unconvinced. There are many cases worldwide of fake components, and Y1 caps (more expensive than most) are an almost certain target for counterfeiters at some stage. Why? Because they can make a quick profit. I simply cannot foresee a situation where these caps will always perform perfectly forever, regardless of anything.
+ +With a conventional transformer, there is a large and highly visible isolation barrier between the primary and secondary. Even if this were breached, the wires used to wind the transformer are insulated as well, and each winding is covered with more insulation.
+ +In a SMPS, there is a small (and usually hidden) barrier in the transformer, which could have an inherent fault that can't be seen when the supply is built. The isolation barrier is bridged by (usually) an opto-coupler to provide feedback for regulation. It is also commonly bridged by one or two Y1 rated capacitors - again, no-one knows what's inside the package. The PCB isolation barrier is a bare section of (cheap) PCB substrate, with sections hidden under the transformer and opto-coupler ... perfect places for moisture (e.g. leaked electrolyte) to accumulate. Many small SMPS use slots in the PCB to increase the creepage distance beneath opto isolators and 'Y' caps, and sometimes beneath the transformer as well. This helps eliminate moisture build-up, but the barrier can still be breached by conductive materials jettisoned from blown electrolytic capacitors.
+ +I have great difficulty accepting that the SMPS 'equivalent' to a conventional transformer can possibly be electrically safe to the same standards. Having worked with electronics all my life, I know that there are just too many possibilities for a failure. The regulators obviously have a much higher opinion of the inherent safety of every single component ever made than do I (or any other person who has serviced a failed electronics device).
+ + +This was a work in progress, but as of well before May 2014 (the date this page was updated) all external supplies that are subject to the MEPS requirements are now switchmode. AC plug-packs are still available, but all 'traditional' DC supplies are now off the market in Australia. Some of the new switchmode supplies are pretty good, although they all suffer from high frequency noise. As is to be expected, there are some that quite clearly don't meet mandatory safety standards, and these are nearly all due to small-scale importers who sell on auction sites.
+ +Web searches will find instances of equipment having been damaged, and it can be difficult to determine if such damage is the result of Y-Class caps discharging into input circuits. It is a real and known issue to many technicians, but it is unrealistic to expect the average consumer to understand the possible risk, and doubly so if it's not spelled out in the instructions ... I've not seen it yet - some state that the equipment should be connected before the power supply, but no reason is given.
+ +There are some similarities between this article and the CFL vs. incandescent lamp debate, but the difference is that here we are talking about much smaller savings and the potential for much greater risk to equipment. The effective elimination of AC external supplies is of particular concern, although these have not been affected so far.
+ +There is some hope though ... alternative cores for conventional transformer based units exist, and it is certainly possible to make transformer supplies comply with all the requirements. The big question at the moment is whether anyone will do so. Only time will tell.
+ + +As noted in the amendment at the top of this page, some of the most arcane parts of the proposal were not included (so we still have iron cored AC-AC supplies), but I have seen an alarming increase in the number of untested and unsafe products available for sale - particularly on on-line auction sites. As noted in the PSU Wiring article, there are some serious breaches of the Australian Electrical Safety Acts (each state has its own, for reasons that are entirely obscure).
+ +An excerpt ...
+Some overseas manufacturers (use your imagination as to which country might be responsible) have even decided not to bother with the nuisance of Y caps, and I have seen standard 1kV ceramics used in this role. This can only be described as very scary - especially since anyone can become an importer these days, and sell on auction sites. Most are completely unaware of mandatory requirements which vary from one country to the next, so no safety tests are performed at all.
These power supplies (all external PSUs in fact) are prescribed articles in Australia, and are subject to mandatory electrical safety testing. Because people implicitly trust the power supply not to kill them (a not unreasonable expectation) it's important to ensure that it won't. The tests are designed to ensure to the best of anyone's ability that no failure can cause the output or any exposed metal to become live, and that the PSU cannot catch on fire, emit smoke, or melt the casing to expose live parts.
+ +I don't know about you, but I don't trust a foreign manufacturer who is desperately trying to sell for the lowest possible price. I know that thermal fuses will be missing (I haven't seen one in any of the cheap supplies), and that shortcuts will be taken. This includes using unapproved (or downright unsafe) parts, very basic circuitry with mediocre performance, and inadequate creepage and clearances between mains (hazardous) voltages and SELV (safety extra low voltage).
Electrical Safety Regulations for Australia

| State / Territory | Regulation |
|---|---|
| Australian Capital Territory | Electricity Safety Act 1971 - Sect 27 |
| New South Wales | Energy and Utilities Administration Act 1987 |
| Queensland | Electricity Regulation 2006 - Sect 162 |
| Victoria | Electricity Safety Act 1998 - Sect 68 and Sect 3 - see "supply" |
| Western Australia | Electricity Act 1945 - Sects 33E and 33F |

Please note that these are just a few of the regulations that may apply. It is certain that there are others, but the above should keep everyone entertained for minutes at a time.
Elliott Sound Products - Frequency & Amplitude Explained
Introduction
Sound is carried from the source to our ears or a microphone by means of minute vibrations, which are passed through the air. Sound has two primary components, frequency and intensity. The frequency refers to the pitch of the tone or other sound, and typical sounds have many different frequencies all happening at once. Frequencies are measured in Hertz (Hz), named after the physicist Heinrich Hertz. The old standard (now discontinued almost everywhere) used Cycles per Second (cps) as the standard measurement. Hz and cps are the same thing - both refer to the number of complete cycles of a waveform in one second.
+ +Sound intensity (or amplitude) is measured in decibels (dB). The prefix 'deci' means one tenth. The Bel was invented by engineers of the Bell Telephone Laboratory to quantify the reduction in audio level over a 1,600m (1 mile) length of standard telephone cable, and was originally called the transmission unit or TU. It was renamed in around 1923-4 in honour of the Bell Laboratory's founder Alexander Graham Bell. Because the Bel is too large for general use, the dB became the preferred unit. 1 Bel is 10dB.
+ + +The range of frequencies we humans can hear is generally taken as being from 20Hz to 20,000 Hz (20kHz), but the conditions are not usually specified. As we get older, the first to suffer are the high frequencies, and by around 50 years of age, most males will be limited to around 14-15kHz, with females usually suffering less loss. Frequencies below 25Hz are felt rather than heard, but the conditions under which we experience such low frequencies make a big difference to how they are perceived. At very low frequencies, there is little difference between the threshold of hearing and the threshold of pain, which can make low frequency noises especially troublesome.
+ +Our hearing is most sensitive at around 3.5kHz, as shown in Figure 1. Our hearing, eyes and sensitivity to touch or pain are all logarithmic functions. This enables us to experience a vast variation with each sense. As the intensity of the sense increases, we automatically compensate by reducing our sensitivity. In this way, we can hear the gentlest rustle of a leaf in a tiny breeze at a sound pressure level (SPL) of 0dB, but are not instantly deafened by a nearby jack-hammer at perhaps 1,000,000,000,000 (1 trillion, or 1 × 10¹²) times the sound power (120dB SPL).
+ +When two frequencies are close to each other, our hearing plays some interesting tricks on us. If one tone is 6dB louder than the other (but close in frequency), we may not hear the second tone. This is called acoustic masking, and is used by the MP3 format to remove a great deal of the 'redundant' audio information. This reduces the size of the file dramatically, and with some music the end result may be almost indistinguishable from the original. Material with rich harmonic structure is less successful, with cymbals and harpsichords suffering because there is simply too much information and none of it is actually redundant. It's also worth mentioning that all of the audible cues we use to hear a 'sound stage' are considered redundant by MP3 encoders, so much of the subtle stereo image disappears. Only material that's panned hard Left or Right will remain, and the sound stage is gone forever.
+ + +In (western) music, we generally use the equally tempered scale. While not absolutely musically accurate, it does allow musicians to make key changes (moving an entire piece of music up or down the musical scale) without having to re-tune their instruments. This is a vast topic, and requires a great deal more than you will find here if it is to be fully understood. Unless you are a musician, a full understanding is not required. An octave can be divided into equally spaced semitones ('notes') as described below.
+ +Musical notation is based on the use of 12 semitones in each octave. An octave is the perfect interval between the 1st and 8th tones of the diatonic scale. See Answers.com if you want more specific information about the diatonic scale.
+ +In western music, each octave is comprised of 12 semitones. An octave is double or half the original frequency, so (for example) one octave from middle A (440Hz) is 880Hz or 220Hz. Both 'new' notes are called A. The word octave is derived from 'Octo-' (Latin/Greek) meaning eight, because the western octave is divided into 8 'full' tones in the diatonic scale.
+ +Figure 2 shows the range - the keyboard is shown as a reference only, and is not meant to be that of a real piano. Of common musical instruments, open E on a (4 string) bass guitar or double bass has a frequency of 41.2Hz, while a grand piano's bottom A is 27.5Hz. Many instruments can get far lower - examples being pipe organs and electronic synthesisers.
+ +High frequencies are more complex. Any note is made up from the fundamental (usually taken as the lowest frequency component of the sound - the first harmonic) and a series of harmonics above this (usually at octave intervals). While many instruments produce harmonics that are exact multiples of the fundamental, others do not. A flute also contains wind noise, reed instruments often have very complex harmonic relationships, and percussion instruments can have harmonics that are not related, but extend to well beyond our hearing range (snare drums, cymbals, etc). With many plucked or struck stringed instruments the second harmonic is dominant (louder than the fundamental). This is especially noticeable with guitar, but is apparent with many other instruments too.
+ +The division of an octave into 12 equally spaced tones is done using the 12th root of 2 (approximately 1.0594631). If you multiply 440 by the full version of this number 12 times, you get 880 - exactly one octave (depending on your calculator). The same method may be used to divide an octave into any number of divisions - for example, 3 divisions are used for 1/3 octave band graphic equalisers. The third root of 2 is approximately 1.26, in case you were wondering.
A decade (one tenth or ten times the frequency) is approximately 3.3 octaves (log₂10 ≈ 3.322). Decades are sometimes used instead of octaves in engineering, although current practice most commonly uses octaves.
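The arithmetic above can be sketched in a few lines of Python (A4 = 440Hz is assumed as the tuning reference; the function name is mine):

```python
import math

# Equal-tempered scale arithmetic: each semitone is a factor of 2**(1/12),
# so twelve semitones up from A4 (440 Hz) is exactly one octave (880 Hz).
SEMITONE = 2 ** (1 / 12)            # ~1.0594631

def note_freq(semitones_from_a440: int) -> float:
    """Frequency of the note a given number of semitones from A4 (440 Hz)."""
    return 440.0 * SEMITONE ** semitones_from_a440

octave_up = note_freq(12)           # 880.0 Hz
octave_down = note_freq(-12)        # 220.0 Hz
third_octave = 2 ** (1 / 3)         # ~1.26, the ratio used by 1/3 octave equalisers
octaves_per_decade = math.log2(10)  # ~3.32 octaves in a decade
```

The same `SEMITONE ** n` pattern gives any note of the equally tempered scale, which is exactly why key changes work without re-tuning.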
+ +Frequency and amplitude are inextricably coupled in the real world, with both playing an equally important role. It is only in test and measurement where these two functions are separated, and that is so we can see how one affects the other to ensure that a reasonable standard is achieved.
+ + +The wavelength of any signal depends on the form of the signal (acoustic or electrical), the transmission velocity in the medium (air, concrete, an electrical wire) and the frequency. For audio, we are generally only concerned with the wavelength in air. While the wavelength of RF (radio frequency) signals in cables is usually very important, the wavelengths at audio frequencies in cables are very large indeed. A 20kHz signal has a theoretical wavelength of 15,000 metres (15 km) as an electrical signal, ignoring other effects such as velocity factor (look it up if you are interested). Because the wavelengths are so much greater than any normal cable length, there is no requirement for impedance matching when audio signals are carried by cables of any kind. Note that this doesn't apply to telephone systems, but this is a very different topic and is not relevant here.
+ +Sound in air at 20°C and at sea level has a velocity of 343m/s [2]. The speed of sound varies markedly with temperature (it is proportional to the square root of the absolute temperature), and the Hyperphysics calculator will work it out for you if you need to know exactly.
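As a rough guide, the temperature dependence can be sketched with the standard ideal-gas approximation (my sketch, not a substitute for the calculator mentioned above):

```python
import math

def speed_of_sound(temp_c: float) -> float:
    """Approximate speed of sound in dry air (m/s), from the ideal-gas
    relation c = 331.3 * sqrt(1 + T / 273.15), with T in degrees Celsius."""
    return 331.3 * math.sqrt(1 + temp_c / 273.15)

# speed_of_sound(20.0) gives ~343 m/s, the figure quoted above for 20 degrees C
```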
+ +The formula to convert frequency to wavelength (commonly written as λ - the Greek lower case lambda) is ...
    λ = c / f    where c is the velocity of sound and f is the frequency
It is also useful to remember that sound travels at about 343mm per millisecond (both metres and seconds divided by 1,000). Our hearing mechanism is carefully refined to ensure that sounds we hear are made as clear as possible, so we automatically reject repeat sounds (echoes) that arrive within about 30ms of the original. This allows us to hear clearly even in a reverberant room (or a cave a few millennia ago). 30ms corresponds to a distance of around 10.3 metres, meaning a cave room of about 5 metres square (the echo's round trip being about 10 metres). Such a room will sound somewhat odd, but speech is still clear. Larger rooms (with longer delays) can cause a significant loss of intelligibility if one is in the 'far field' (distant from the sound source).
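Both calculations are trivial to automate (a minimal sketch; function names are mine, and c = 343 m/s for air at 20°C is assumed):

```python
def wavelength(freq_hz: float, c: float = 343.0) -> float:
    """Wavelength in metres: lambda = c / f, for sound in air at 20 degrees C."""
    return c / freq_hz

def echo_path(delay_ms: float, c: float = 343.0) -> float:
    """Distance sound covers in a given delay - about 343 mm per millisecond."""
    return c * delay_ms / 1000.0

# wavelength(20.0)    -> ~17.2 m  (a 20 Hz wave)
# wavelength(20000.0) -> ~17 mm   (a 20 kHz wave)
# echo_path(30.0)     -> ~10.3 m  (the ~30 ms echo-rejection window)
```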
Being able to calculate wavelength is very important for anyone designing loudspeakers, as there are many characteristics of a speaker box design and room placement that rely heavily on knowledge of wavelength and time delay. These topics are covered in countless white papers, articles and books, and are not relevant to the material in this article.
+ + +Most beginners in electronics find dB very confusing. This is understandable, but it is easy to learn, and is every bit as important as Ohm's law when working with electronics or loudspeakers. The main thing to remember is that 1dB remains 1dB, regardless of the context. Likewise, 6dB remains 6dB. Let's look at the formulae first (no, they are not hard - calculators do almost all the work). For those who prefer not to use a calculator, there are on-line conversion tools (but it's far better if you do it yourself).
    dB = 20 × log( V1 / V2 )
    dB = 10 × log( P1 / P2 )
Where V1 and V2 are any two voltages, and P1 and P2 are any two powers (in Watts). The reverse formulae are ...
    V = 10^(dBu / 20) × 0.775
    V = 10^(dBV / 20) [ × 1 ]
    P2 = P1 × 10^(dB / 10)
But why are there different formulae? This is simple - power into a given impedance or resistance is determined by the square of the voltage. If 1 Volt into 1 Ohm gives 1 Watt, 2V into 1Ω gives not 2W, but 4W ( P = V² / R ). The multiplier of 10 or 20 takes this into account, so it doesn't matter whether you work with power or voltage - you get the same answer in dB. The notation '10^(x)' denotes 10 raised to the power of 'x' (e.g. 10^2 is 100).
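The formulae above translate directly into code (a minimal sketch; the function names are mine):

```python
import math

def db_from_voltage(v1: float, v2: float) -> float:
    """dB from a voltage ratio: 20 * log10(V1 / V2)."""
    return 20 * math.log10(v1 / v2)

def db_from_power(p1: float, p2: float) -> float:
    """dB from a power ratio: 10 * log10(P1 / P2)."""
    return 10 * math.log10(p1 / p2)

def volts_from_dbu(dbu: float) -> float:
    return 10 ** (dbu / 20) * 0.775     # dBu is referenced to 775 mV

def volts_from_dbv(dbv: float) -> float:
    return 10 ** (dbv / 20)             # dBV is referenced to 1 V

# Doubling the voltage into a fixed load quadruples the power, and both
# formulae agree: 20*log10(2) == 10*log10(4) == ~6.02 dB
```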
+ +Using dB provides a convenient way to indicate very large or small numbers, and in a way that directly relates to the way we hear. For example, it is standard practice to measure frequency response of amplifiers, speakers and many other things at the -3dB points. Speakers are commonly quoted as (for example) 40Hz - 20kHz ±3dB. 3dB means half or double the power, or a voltage ratio of 1.414:1.
+ +That last number is a good one to remember - the square root of 2 ( √2 ) is 1.414, and it is used in many electronics calculations.
+ +Figure 3 shows the range generally accepted as the minimum dynamic range in audio. As you can see it is vast, covering a span of 1 million to one. The total range that is of interest spans 120dB, being the dynamic range of typical good quality analogue and digital equipment. A microphone preamp may be quoted as having an equivalent input noise of -127dBm ... feel free to calculate the noise level in millivolts (it will actually be microvolts). Using dB to express such small numbers is far more intuitive than specifying the noise level as 0.346uV, which although impressively small, tells us nothing about its audibility.
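Taking up that invitation, the -127dBm figure can be checked in two lines (assuming dBm re 1mW into 600 ohms, as for the original definition):

```python
import math

# -127 dBm equivalent input noise, re 1 mW into a 600 ohm load
p_noise = 1e-3 * 10 ** (-127 / 10)   # ~2.0e-16 W
v_noise = math.sqrt(p_noise * 600)   # V = sqrt(P * R) -> ~0.346 microvolts
```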
+ +Here are three very useful dB facts that are worth remembering ...
    3dB = half or double the power
    10dB = half or twice as loud
    10dB = one tenth or ten times the power
Perceived loudness is what you hear as the change, and means that if you have a 100W amplifier and you want the sound to be 'twice as loud', you need to use a 1kW (1,000W) amplifier to do so. Note that doubling the power results in a 3dB increase, and although audible it is not dramatic. It was determined long ago that 1dB is the smallest change that the average listener can hear. While open to some dispute at regular intervals, it still holds if the test is done with a single tone under ideal conditions.
+ + +While it is sometimes believed that dB is either some absolute value or a 'dimensionless number', neither is correct. Many standards exist to refer to specific levels, both with sound and electrical devices. dBm in particular causes many problems for people, and it is often used incorrectly.
+ +Note: dBm has actually been hijacked by radio and other technologies, so the definition has changed somewhat. It was originally used to describe only 1mW in a 600 ohm load (775mV), but is now taken to mean 1mW into any impedance (typically 50 ohms for radio and cable TV/ internet), and even optical fibre links. As it stands now, it's better to use dBm only in relation to 1 milliwatt, and use the appropriate formula to convert to a voltage based on the impedance.
+ +There are de facto standards for 'line-level' audio, being +4dBu (1.228V RMS) for professional equipment, and -10dBV (316mV RMS) for consumer or 'pro-sumer' (professional consumer) devices. For digital systems, these are generally referenced to 0dBFS - full scale for DACs and ADCs. These are 'reference' levels, but they are not regulated, so they vary with different equipment. Most instrument amplifiers and electronic musical instruments provide whatever signal level the designer chooses, and they are usually not calibrated against any reference level.
+ +There is no such thing as a defined 'microphone level', because it varies over a wide range. The output of a microphone is usually specified for a particular SPL (e.g. -46dBV, referenced to 1Pa [94dB SPL]). In this case, we know that the output level is 5mV at 94dB SPL, so at 100dB SPL (6dB greater) the output will be 10mV. For example, a Shure SM58 mic has a quoted output of -54.5dBV open circuit (1.85mV at 94dB SPL, 1kHz claimed). Some mics are more sensitive than this (i.e. higher output), others less. The output voltage of many mics can reach 500mV RMS quite easily with high SPL (right next to a [loud] singer's mouth, in front of an amplifier or next to a drum skin).
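The scaling described above is easy to sketch (the helper name is mine; sensitivity is assumed to be quoted in dBV at the 94dB SPL / 1Pa reference):

```python
def mic_output_volts(sensitivity_dbv: float, spl_db: float) -> float:
    """Mic output voltage at a given SPL, from a sensitivity quoted in dBV
    at the 94 dB SPL (1 Pascal) reference level."""
    return 10 ** (sensitivity_dbv / 20) * 10 ** ((spl_db - 94.0) / 20)

# SM58 example from the text (-54.5 dBV sensitivity):
# mic_output_volts(-54.5, 94)  -> ~1.9 mV
# mic_output_volts(-54.5, 100) -> ~3.8 mV (6 dB more SPL roughly doubles the voltage)
```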
+ +While these 'reference' levels are commonly referred to, it's generally never stated whether this is the peak or average level. There's typically a 10dB difference between the two, but that varies with the type of material, e.g. speech or music ('dance', pop/ rock, orchestral, etc.). In some cases, the peak to average ratio is deliberately compressed, but getting below a 6dB peak/ average ratio is difficult, and the result is highly unsatisfactory for serious listening.
+ +If we assume a reasonable 10dB peak/ average ratio, that means if the average level is -10dBV (consumer) or 316mV, the peak will be 1V. For the +4dBu 'professional' level (1.23V), the peak level will be about 4V. All circuitry has to be able to accommodate the peak level without overload (clipping), so if a pro line-level input had a gain of (say) 5, the peak level would be 20V - well above the level that a typical opamp can achieve. The idea of 'headroom' is that there should always be some 'reserve' level, and 10dB is reasonable. For a 4V peak input, that means the maximum peak level could be up to 12.6V. This is usually easily achieved with opamps using ±15V supplies. Some designers will aim for a higher voltage, but that depends on the opamps. The common NE5532 has a maximum supply voltage of ±22V, and is often used with ±18V supplies to get the maximum headroom. The LM4562 has an absolute maximum supply voltage of ±18V, and the maximum recommended is ±17V. That means that you may not be able to use the latter to replace NE5532 opamps in some equipment.
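The headroom arithmetic above can be followed step by step (a sketch; the 10dB peak/average ratio and 10dB headroom figures are the assumptions stated in the text):

```python
avg_v = 0.775 * 10 ** (4 / 20)        # +4 dBu 'pro' average level -> ~1.23 V
peak_v = avg_v * 10 ** (10 / 20)      # +10 dB peak/average ratio  -> ~3.9 V
max_v = peak_v * 10 ** (10 / 20)      # +10 dB headroom on top     -> ~12.3 V
# (the ~12.6 V figure in the text rounds the peak to 4 V before adding headroom)
```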
+ + +When sound level readings are taken, it is common to apply what is known as A-Weighting (see Project 17 for a design and frequency response of an A-Weighting filter). The A-Weighting curve is designed to allow for the fact that our hearing is less sensitive at low and high frequencies, but fails to account for the actual SPL. When sound is above 100dB SPL, our hearing response is reasonably flat (see Figure 1), and the use of A-Weighting is inappropriate. Under these conditions, the C-Weighting curve should be used, which has an essentially flat response over the audio band.
+ +A-Weighting is also often used for measuring amplifier noise, and because this is normally only ever at very low volume, the use of the A-Weighting filter is generally appropriate. Personally I prefer not to use it, but most manufacturers do. In a truly sensible world, A-Weighting would never be used, because it's nearly always applied inappropriately. See the article Sound Level Measurements & Reality for more on this topic.
+ +If A-Weighting is used, any mains-frequency hum is heavily attenuated (by over 30dB), and despite the claim that A-Weighting compensates for our hearing response, we can nearly always hear mains hum if it's present. Some people will refer to 'buzz' (which is far more audible) as 'hum'. They are two completely different sounds, and should be described properly so others know what to expect.
+ + +A frequency response curve is an example of the use of both frequency and amplitude, with frequency being shown on the X (horizontal) axis, and amplitude on the Y (vertical) axis. Both axes are usually logarithmic. Response curves are often provided with preamplifiers, power amplifiers, audio signal transformers, loudspeakers and microphones. Purely electrical response graphs are generally flat between 20Hz and 20kHz, but microphones, speakers and even transformers can show significant deviations from the ideal.
+ +Figure 4 shows an example of a frequency response curve, in this case taken from my Clio analyser. The source material was an FM radio tuner, and the program was set up to show the highest peaks over a 15 minute period. Note that the chart includes any equalisation applied by the radio station (I used radio Triple J as the source - they do not play advertisements, thus eliminating pollution caused by the often radical EQ and compression that is used in ads to make them sound 'loud'). The 19kHz FM stereo pilot tone is just visible on the right side of the graph, and you can see that the FM bandwidth is limited to 15kHz. (The pilot tone is used to identify a stereo transmission, and is used by the stereo decoder to derive separate left and right channels from the 38kHz sub-carrier.)
+ +It is generally accepted that the overall energy distribution of music looks more-or-less like that shown in Figure 5. That there will be variations is obvious, and while interesting and potentially useful, you cannot rely on any simple graph to determine how much power you need. Loudspeaker efficiency and peak to average ratio of the signal must also be considered.
+ +Peak to average ratio is an important topic itself. Because music has dynamics (loud and soft passages), and because of the nature of a complex audio waveform, the RMS (root mean squared) voltage is useful only to get an idea of the average power delivered to a speaker. The RMS value of a sinewave is 0.707 of the peak voltage, as shown below.
+ +You may recall that I said earlier that one should remember the number √2 (1.414). The RMS value of a sinewave is determined by dividing the peak value by 1.414, or you may multiply by 0.707 (the reciprocal of 1.414 ... i.e. 1 / 1.414 ). In Figure 6, the peak value of the sinewave is 1V, and the RMS value is 707.1mV. Most meters display the RMS voltage, but only those called 'True RMS' will get the value right for a complex waveform such as that shown in Figure 7. Not that the waveform is especially complex - it is made up from 3 sinewaves, at 1kHz, 2kHz and 4kHz, all with a peak voltage of 1V.
+ +The real RMS voltage of the waveform in Figure 7 is 1.225V. If one uses the calculated RMS voltage (based on the peak voltage of 2.33V), the answer is 1.566V - an error of almost +22% (+2.13dB). Most meters are average reading, RMS calibrated, meaning that the signal is rectified and averaged, but the meter scale is calibrated to read RMS. Such a meter will give a reading of 1.014V, a -12% error (-1.65dB). It is very easy to introduce serious errors into any calculation that involves complex waveforms, and this is one of many reasons that a reasonably pure sinewave is specified for most test procedures. While 'True RMS' multimeters are more accurate, some do not handle high crest factors well. The crest factor is the ratio of the peak and RMS values of a waveform, and to work well with high crest factors, some serious maths is generally needed. Digital oscilloscopes with voltage readouts compute the value, and will usually get it right (but with limited 'absolute' accuracy).
+ +True RMS meters may also have limited frequency response, especially at low levels. Readings can also be very slow at low levels, because the IC used to 'compute' the true RMS value can't handle low levels as well as high levels. Most work best at close to their maximum input voltage (often around 200mV).
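The meter behaviour described above can be reproduced numerically (a sketch: the three components are given equal amplitude and zero phase here, so the exact peak and average-reading figures will differ slightly from those quoted, but the true RMS value is the same):

```python
import math

# One second of the 1 kHz + 2 kHz + 4 kHz test signal, sampled at 48 kHz
fs = 48000
x = [sum(math.sin(2 * math.pi * f * n / fs) for f in (1000, 2000, 4000))
     for n in range(fs)]

# True RMS: square, take the mean, then the square root
true_rms = math.sqrt(sum(v * v for v in x) / len(x))   # sqrt(1.5) ~ 1.225 V

# Average-reading, RMS-calibrated meter: rectify, average, then scale by
# the sinewave form factor pi / (2 * sqrt(2)) ~ 1.111
avg_reading = (sum(abs(v) for v in x) / len(x)) * math.pi / (2 * math.sqrt(2))
# avg_reading disagrees with true_rms because this waveform's form factor
# is not that of a sinewave - exactly the error discussed above
```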
+ + +Because crossover networks are an unavoidable requirement in quality loudspeaker systems, they also require some explanation. Crossovers are used to separate the audio band into a number of separate frequency bands. The frequencies are chosen to suit the loudspeaker drivers being used, and (to some extent) the requirements of the designer.
| Driver Type | Minimum Frequency | Maximum Frequency |
|---|---|---|
| Subwoofer | < 20Hz | 100Hz |
| Woofer | 40Hz | 300Hz - 3kHz |
| Mid Woofer | 100Hz | 3kHz |
| Midrange | 300Hz | 3kHz |
| Tweeter | 1.5kHz | > 20kHz |
| Super Tweeter | 10kHz | 30kHz |
The above table is not intended to be absolute. There are a great many factors that influence the way a driver can (or should) be used, and these are not relevant to this article. The crossover network is also subject to many variations. Apart from the choice of frequency, there is also the choice of slope (the rate of attenuation with frequency), and some networks are deliberately designed to be asymmetrical, having different slopes for the high-pass and low-pass sections.
Filters are divided into four different types - low-pass, high-pass, band-pass and band-stop (notch).

No filter simply stops all signals above or below the specified frequency. As the selected frequency is approached, the signal level starts to reduce, and the filter frequency is usually taken as that frequency where the signal level is 3dB below the passband. There are exceptions, and these will usually be explained in the description of the network.
In order to obtain different rolloff slopes, filter 'building blocks' can be connected in series to obtain a greater rate of attenuation. The commonly used filter orders are as shown below. The simplest filter is a first order, and uses one reactive component (a capacitor or an inductor). A second order filter uses two reactive elements, and so on.
| Filter Order | Rolloff Slope | Reactive Elements |
|---|---|---|
| First | 6dB / octave | 1 |
| Second | 12dB / octave | 2 |
| Third | 18dB / octave | 3 |
| Fourth | 24dB / octave | 4 |
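The relationship between filter order and slope can be checked numerically. A Butterworth (maximally flat) alignment is assumed here purely for illustration; any filter of the same order settles to the same ultimate slope:

```python
import math

def butterworth_lp_attenuation_db(f: float, fc: float, order: int) -> float:
    """Attenuation (dB) of an n-th order Butterworth low-pass filter.
    Well above fc, the slope settles to order * 6 dB per octave."""
    return 10 * math.log10(1 + (f / fc) ** (2 * order))

# Fourth order, 1 kHz cutoff: each octave well above fc adds ~24 dB
# butterworth_lp_attenuation_db(8000, 1000, 4)  -> ~72.2 dB
# butterworth_lp_attenuation_db(16000, 1000, 4) -> ~96.3 dB
```

At the cutoff frequency itself the function returns 3dB, matching the usual -3dB definition of filter frequency.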
Active filters require power - they are called 'active' because they use active components, such as opamps, transistors or sometimes valves. Passive filters use only passive components - capacitors, inductors and resistors. Passive filters always have losses (especially resistance in inductors), so not all the amp power gets to the speakers. At high power levels the losses can become very high, reducing the available power for the speakers and causing inductors to run at high temperatures.
+ +Active filters require a separate power amplifier for each loudspeaker driver, while passive networks use a single amp. There is a tradeoff - do we use large and expensive passive components and a single (relatively) large power amplifier, or an active crossover and a number of smaller power amps?
+ +It depends on what we are trying to achieve, the expected performance and the budget. It would be silly to use an active crossover and separate amps for a cheap PC speaker, and equally silly to use passive crossovers in a large sound reinforcement system running at perhaps 5,000W or more. All filters (whether active or passive) will provide a rolloff slope based on the filter order. With passive crossovers, it is usually necessary to compromise because high-order filters become too expensive and consume excessive power. There is much more detail in the article Biamping - Not Quite Magic, But Close.
+ +These filters are all set for 1.1kHz so they can be compared. This is not usually considered a useful frequency for loudspeakers, but is convenient for purposes of illustration. Here you can see the rate of rolloff for the 3 types shown. Higher order filters provide greater protection for the speaker (especially tweeters), but cause greater phase shifts than low order filters. While not usually audible, some designers will try to avoid phase shift as far as possible.
+ +All analogue filters cause phase shift - it is a characteristic of how they function in the analogue world. FIR (finite impulse response) digital filters can be configured so there is no phase shift, but despite claims to the contrary, we usually cannot hear a static phase shift. If the phase is constantly changing, we will often hear a frequency shift due to phase shift modulation (Doppler frequency shift is an example).
All of the examples in this section show a combination of frequency and amplitude. It must be stressed that a full and complete understanding of these topics is essential to your understanding of audio as a whole. Without that understanding, you are left wondering what certain terms really mean. With that understanding, you are also less likely to believe some of the outrageous drivel that is spouted by some manufacturers - they rely on a lack of understanding to baffle people with pseudo-science.
This short article is intended to introduce the basics of each of the topics shown. Far more information is available, either on the ESP site or elsewhere. Some of the explanations have been simplified for clarity, but care has been taken to ensure that the simplifications are not at the expense of accuracy.

Some of the images in this page came from Lenard Audio (with permission). They have been modified and adapted to the style normally found in the ESP site for general compatibility.
Elliott Sound Products - FETs & MOSFETs
JFETs (junction field effect transistors) have been with us for many years now, and there was a time when there were many different types available, often with some very desirable characteristics. JFETs became readily available about 10 years after BJTs (bipolar junction transistors), and were quickly adopted for applications that required high impedance inputs. BJTs require an input current to conduct, and that means that they also require current from the signal source to change their output current. While the current is usually very low, it does cause problems in some cases. MOSFETs (metal oxide semiconductor field effect transistors) came along a bit later, and revolutionised high speed switching.
FETs (both JFETs and MOSFETs) are voltage controlled, and require no (static) current from the signal source. However this only applies with DC, because gate capacitance has to be considered for AC. JFETs are unusual amongst semiconductor devices, in that they conduct more-or-less equally in both directions (i.e. both 'normally' and with drain and source interchanged). A BJT will (perhaps surprisingly) work with emitter and collector reversed too, but the gain of modern devices is very low in this mode - sometimes less than unity. A FET can be thought of as a voltage controlled resistor, and this is exploited in many different ways.

Unfortunately, the resistance is not linear. It varies with current, and although the effective resistance can be changed by varying the gate voltage, that relationship isn't linear either. Claims that FETs are more linear than BJTs must be treated with suspicion, because in most cases it's simply not true. Similar claims are made for valves (vacuum tubes), and they aren't true either.
MOSFETs conduct in both directions too, but they have an intrinsic body diode that will conduct when the reverse voltage exceeds around 600mV peak. This means that they cannot be operated as a linear amplifier if the drain and source are interchanged. Nor are they useful as a voltage controlled resistor, because the body diode will conduct if the peak voltage exceeds 600mV. Even if the voltage is kept well below 600mV (peak - about 325mV RMS sinewave), linearity is very poor. However, MOSFETs can make very good audio switches if configured properly.

Unfortunately, the range of available JFETs has shrunk alarmingly in the last few years. Most of the devices that were used for very low noise circuitry are no longer available, and those you can still get from major suppliers are far less useful than the 2SK170 and its ilk. While you can (allegedly) get the 2SK170 or similar from ebay (mainly from Chinese suppliers), the chances of them being the real thing are not good. It's far more likely that you'll get something far more pedestrian, but re-labelled. The LSK170 (made by Linear Systems) is an equivalent, and is currently available, although distribution is patchy.

Some typical FET applications are as follows ...

These applications will be covered in more detail below.
Most of the JFET circuits and simulations shown here are based on the BF256B - not because it's anything special, but because it's still readily available for a reasonable price. It's intended as an RF amplifier, but that doesn't preclude audio in any way. Like any active device, FETs work from DC up to a frequency determined by the specific characteristics of the device itself - whether by design or accident.
There are also a number of comparisons made between JFET/ MOSFET and BJT circuits. In many cases, this is not flattering to JFETs, as their performance is often well below that of a similar circuit based on common bipolar transistors. This is not intended to discourage the use of JFETs at all, nor to suggest that they are 'inferior'. They are different, and it's important to understand the differences between the two parts. However, if you don't need very high input impedance, BJTs will usually give better results than FETs.
One of the reasons that BJTs are so popular is that they have one parameter that is extremely predictable - the base-emitter voltage. This is normally taken to be 0.7V (sometimes 0.65V), and it's the same for small signal and power devices, and is still correct for PNP or NPN. This makes them very simple to bias, but more importantly makes it fairly easy to calculate the gain of a stage. Because BJTs have high transconductance and an exceptionally high collector impedance, it's possible to set the gain with only a pair of resistors. This isn't really possible with JFETs (it can be done, but the gain is not determined solely by the resistor values). It is almost possible with MOSFETs due to their much higher transconductance.

In the following article, only N-Channel JFETs and MOSFETs are discussed. P-Channel versions work in the same way, but naturally require the supply polarity to be changed. Fully complementary designs (including linear CMOS - complementary MOSFETs) are not shown, because they are a rather different application. Also, output coupling capacitors are not shown. These will be needed in most cases, but were not included for clarity. Input coupling caps are shown when they are required.

There is one other function that is potentially useful for MOSFETs. They make very good high power relays, and this is discussed in the MOSFET Relays article. This mode of operation is not covered here.

One of the things I won't do in this article is explain exactly how a JFET or MOSFET is made. The inner workings are explained on countless websites, and there's no point repeating what is readily available elsewhere. Nor will I go into any detail about how they work at the quantum level, as this is also available from manufacturers and physics sites. What I will do is point out that FETs are voltage operated, and apart from having to charge and discharge the gate capacitance, they do not draw appreciable current from the signal source.

The DC gate current of most FETs is typically measured in nanoamps, and usually can only be measured when the gate-source (or gate-drain) voltage is close to the maximum permissible. A typical figure is around 1nA at 25°C, but as with many other semiconductors it increases at elevated temperatures. The gate capacitance of the average small-signal JFET is around 5-30pF, depending on how the device has been fabricated and the intended usage. JFETs for RF applications will usually have lower capacitance than devices intended for low noise audio (for example). The capacitance is primarily due to the thickness (or otherwise) of the P-N junction that separates the gate from the channel.
Two of the most important parameters for small signal JFETs are the drain-source current (IDSS - current with gate shorted to source) and the 'cutoff' voltage, where the gate voltage closes the conduction channel so only a small defined current flows (typically 10nA to 100nA, depending on the device and the manufacturer). This is called the gate-source cutoff voltage (VGS (off)). You also need to know the maximum drain-source voltage (VDS), especially if you intend to run with supply voltages over 15V or so.
The gain/ transconductance of a FET (or MOSFET) is measured in Siemens, with most having rather low gain compared to a BJT. Transconductance is also referred to as 'Forward Transfer Admittance' (written as |Yfs| ) in some datasheets. Transconductance is the same gain terminology used with valves, which are also voltage controlled. It can be difficult to relate to the Siemens as a unit, and it is often easier to convert to mA/V. The early (during the valve era) way to specify transconductance was the 'mho' (ohm spelled backwards), and it's still seen in some FET datasheets. You may also see the mho as a symbol - ℧ - an upside-down Omega.

1 Siemens (1S) is equal to 1 Ampere per Volt, so 1mS is the same as 1,000µmhos, which is 1mmho or 1mA/ Volt. Transconductance of JFETs varies depending on the manufacturing process and the intended application. Typical values will range from around 1mS (1,000µmhos, 1mmho or 1mA/V) to 22mS (22,000µmhos, 22mmho or 22mA/V).
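Those unit equivalences can be captured in a couple of trivial helpers (the function names are my own), which may help when reading datasheets that mix the three conventions:

```python
def ms_to_umhos(g_ms: float) -> float:
    """1 mS = 1,000 µmhos (the mho is just the reciprocal ohm)."""
    return g_ms * 1000.0

def ms_to_ma_per_volt(g_ms: float) -> float:
    """1 mS is 1 mA/V by definition (siemens = amperes per volt)."""
    return g_ms

print(ms_to_umhos(22))        # 22 mS -> 22,000 µmhos
print(ms_to_ma_per_volt(22))  # 22 mS -> 22 mA/V
```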
Figure 1 - Transconductance Graph For 2SK170/ LSK170
For reference, the above graph shows the transconductance (actually |Yfs| ) for the 2SK170/ LSK170 (from Linear Systems) low noise JFET. As you can see, the transconductance changes with drain current, so to get low distortion the current should remain constant. This is easily achieved in practice, and the linearisation effect of using a constant current load is also seen with other FETs, BJTs and valves.
Almost all JFETs are what's known as 'depletion mode' devices. This means that they conduct with no gate voltage (typically gate shorted to source or at the same potential). This is the maximum current region, and usually must be avoided for linear operation. The FET is biased off by applying a negative voltage to the gate with respect to the source, and they can be biased in exactly the same way as a valve.
The vast majority of MOSFETs are 'enhancement mode', meaning that a positive voltage is required on the gate (with respect to the source, for an N-Channel device) to make the MOSFET conduct. There were quite a few depletion mode MOSFETs available many years ago, but they are less common today. They are recommended for constant current sources and MOSFET relays, although their power ratings are typically much lower and on-resistance much higher than enhancement mode MOSFETs. I will not cover depletion mode MOSFETs in any detail, as their usage is generally rather specialised. P-Channel MOSFETs are also predominantly enhancement mode, but depletion mode types are available (albeit fairly uncommon).
Like BJTs, both JFETs and MOSFETs change many of their characteristics with temperature. I'm not going to provide any detail on that, as it's all available in the datasheets. However, it is important to remember that when the temperature of any semiconductor changes, its normal operating conditions will be altered. A robust design ensures that no realistic temperature variation will cause the circuit to malfunction, so proper testing is essential. You don't need an environmental chamber, but you do need to test thoroughly.

Be particularly careful with MOSFETs, because RDS(on) increases as the device gets hotter. While this characteristic helps to force current sharing when MOSFETs are in parallel, it also increases the risk of thermal runaway. Make sure that MOSFETs that dissipate significant power always have a properly sized heatsink to ensure that the temperature can never reach a dangerous level.
One of the less endearing attributes of JFETs is their parameter spread. The IDSS (drain-source current, gate shorted to source) can vary widely, typically with a 5:1 ratio. This means that the same type of JFET could have an IDSS ranging from 1-5mA, with some having an even wider range. Even the somewhat revered 2SK170 has a quoted IDSS range from 2.6 to 20mA - a 7.7:1 ratio. This means that simple biasing techniques don't necessarily work, because all of the performance parameters have similar variations. VGS (off) is the voltage where the drain current is reduced to a specified value, and again using the 2SK170/ LSK170 as an example, this ranges from -0.2 to -1.5V for ID of 0.1µA (100nA).

What this means in a real circuit is that proper biasing can be difficult to achieve unless feedback is used. A JFET will probably work without you having to make circuit changes, but it won't be biased into its most linear point on the transfer curve. This may limit the dynamic range and/or distortion characteristics, so it's almost always necessary to provide at least some degree of adjustment to ensure the best linearity.
Transconductance also varies widely, although in most cases it doesn't change as much as the datasheets might imply. Because the gain of a single FET amplifier stage is far lower than that from a BJT, they are often operated with no local degeneration (as is usually provided by an un-bypassed source resistor). This means that the AC voltage gain varies with the transconductance. In theory, if a JFET has a transconductance of (say) 1mmho (1mS, or 1mA/V) a gate voltage change of 1V will cause the drain current to change by 1mA. Likewise, a change of 10mV will cause the drain current to change by 10µA. If the drain resistor is 10k, that's a voltage change of 100mV across the resistor - a voltage gain of 10.
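The worked example above (1mS into a 10k drain load) can be sketched directly. This is the idealised gain formula only, ignoring the device's own drain resistance and any source degeneration, and the function name is my own:

```python
def common_source_gain(gm_siemens: float, rd_ohms: float) -> float:
    """Idealised common-source voltage gain with the source resistor
    fully bypassed: Av = gm * Rd (device output resistance ignored)."""
    return gm_siemens * rd_ohms

# The text's worked example: 1 mmho (1 mS) into a 10k drain resistor.
print(common_source_gain(1e-3, 10e3))  # -> 10.0
```

As the article stresses, gm itself varies with drain current and from device to device, so this calculation only predicts the gain of a real stage approximately.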
If another FET of the same type but with a different transconductance is substituted, the gain will be different. These variations in all of the important parameters mean that it can be difficult to get a consistent gain from FET stages in production. This is not to say that it can't be done, nor is it necessarily a problem in reality, but it is important for stereo preamps where good channel balance is expected. These issues can be solved easily enough by using feedback (AC, DC or both).

The parameter spread is not just limited to the device itself. Manufacturers often don't use the same terminology, and some will specify forward transconductance, others show |Yfs| instead. Transconductance may be given in mS (milli-Siemens) or µmhos (sometimes mmhos - millimhos), but few use the more easily understood value in mA/V, which as noted above is the same as mS and mmhos.

In short, all FET parameters vary, some widely, and the designer has to be aware of this if a consistent design is expected at the end of the process. Regular readers will be aware that few ESP projects use discrete FETs. This is due to the likelihood of any given device disappearing from the market after publication, or because the most appropriate FET is simply no longer available. The inherent variability of the basic parameters is the final straw - I don't like to publish circuits that constructors will build, but that fail to work as intended without modification. Sometimes there are simply no alternatives, and the Project 16 - Audio Millivoltmeter is a case in point. There are several alternative devices suggested, and a note that the source resistor will likely need adjustment to set the optimum operating conditions.
This is all rather tiresome, and provided you don't need extended frequency response, a FET input opamp is a far better option if you need very high input impedance. It's also important to understand that despite claims to the contrary, FETs do not provide 'higher resolution' of audio signals, they don't 'sound better' and nor do they somehow (magically perhaps?) improve anything (bass, treble, midrange, 'air' or 'authority') compared to opamps or BJTs. When they are used as amplifiers, they amplify (and distort) just like any other active device, including valves (which are also bereft of 'magic'). Depending on the FET type and usage, there might be a difference (in either direction) in background noise level, but this depends on a great many factors and is not an intrinsic characteristic of FETs over other amplifying devices.
We seem to be operating in some kind of parallel universe sometimes, with some people claiming often huge benefits of one type of amplifying device over the others. Most of this is imagined or the result of personal prejudice, but even major manufacturers can (and do) postulate that their JFET opamp is somehow 'sonically superior' to others. If any amplifying device can amplify an audio signal by a given amount, it will be indistinguishable from any other with the same gain, frequency response, noise and distortion. To claim otherwise is akin to believing in fairies at the bottom of the garden. Some topologies are 'better' than others, but usually only in one or two major parameters. Other parameters may be worse.

I tested a pseudo-random batch of JFETs that had all been set up as constant current sources, with a design current of 4mA. They are 'pseudo-random' in that they all came from the same batch from the supplier, and were removed from the bag and installed with no attempt at grading them. The current was set using a trimpot for each FET. Based on the resistance needed to bias them, their gate-source voltage can be determined for 4mA drain current ...
Resistance | VGS    | Resistance | VGS
662 Ω      | 2.65 V | 896 Ω      | 3.58 V
644 Ω      | 2.58 V | 648 Ω      | 2.59 V
633 Ω      | 2.53 V | 644 Ω      | 2.58 V
665 Ω      | 2.66 V | 655 Ω      | 2.62 V
As you can see, most are passably close, but one (896 Ω / 3.58 V) is well out of range of the others. This is why a simple biasing scheme can fall apart - the one device that's well outside the specs of the others will also cause its performance to be well outside reasonable bounds, so without making an adjustment the proper operating point will be unpredictable. Even those that are pretty close will create issues if you are building a discrete opamp with JFET inputs (for example). If you don't match the FETs, you could have a DC offset of as much as 1.05 volts with the worst two JFETs shown in the table. If you use the best two, there's no difference - there are two with a VGS of 2.58V, but there is no guarantee that this exact match will apply at a different current (hint - it probably won't).
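The table values follow directly from Ohm's law, since the gate sits at ground and the whole of VGS appears across the bias resistor. A quick check (the function name is mine):

```python
def vgs_from_bias_resistor(rs_ohms: float, i_d_amps: float) -> float:
    """With the gate grounded (self bias), the source resistor drop
    equals the gate-source voltage magnitude: |VGS| = Id * Rs."""
    return i_d_amps * rs_ohms

# Reproducing two rows of the table (4 mA design current):
print(round(vgs_from_bias_resistor(662, 4e-3), 2))  # -> 2.65
print(round(vgs_from_bias_resistor(896, 4e-3), 2))  # -> 3.58
```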
A JFET voltage amplifier stage is easily made, but as noted above the parameter spread can mean that the circuit may need to be tweaked to get the optimum operating point. The gain of a simple JFET amplifier stage is much lower than you can get from an equivalent BJT stage with a similar parts count. Of course, the JFET has a much higher input impedance, and this is often the main reason for using FETs over BJTs. The operating point is important when the minimum possible distortion is required, and simple resistor loading limits the maximum output voltage swing if distortion is to remain within acceptable limits.

With FETs and BJTs in simple circuits, second harmonic distortion is dominant, with lesser amounts of 3rd, 4th, 5th etc. For a given output swing and gain, FETs will almost always have higher distortion. With a BJT stage, the emitter resistor can often be left un-bypassed and the relative values of collector and emitter resistance set the stage gain. Because FETs generally have a much lower gain than BJTs, the source resistor nearly always has to be bypassed or you won't be able to get enough gain from the amplifier stage.
Figure 2 - Basic JFET Common Source Voltage Amplifier Stage
The circuit shown (according to the simulator) has a voltage gain of 17.4 (24.8dB), an input impedance of 1MΩ (purely due to the value of R1) and an output impedance of about 8.6k, a little less than the value of the drain resistor. With a 100mV (peak) input, output is 1.74V peak, and distortion is simulated to be just over 1.4%. If R3 is left un-bypassed (remove C1), the gain falls to 2.5 (8dB), but there will be a noise increase because of the thermal noise of R3 (which is amplified by Q1). C1 is selected so its reactance is no more than 1/10th of the resistance of R3 at the lowest frequency of interest. 68µF is the closest readily available value, but 100µF is preferred, and gives a low frequency -3dB frequency of 3.8Hz.
Note that the source voltage is +1.51V, so the gate is negative with respect to the source via R1 (which holds the gate at ground potential). This is the biasing voltage required to set the drain to somewhere near half the supply voltage. The biasing principles for a depletion mode FET are identical to those used for valves. The bias voltage needed depends on the FET itself - not just the type number, but it may need to be adjusted for individual FETs.

You might have noticed that the -3dB frequency is higher than expected. 100µF and 1.5k has a -3dB frequency of 1Hz, so you would expect that would be the -3dB frequency of the amplifier stage. However, the impedance of the source comes into play, reducing the apparent impedance that has to be bypassed to around 420 ohms. This is also the input impedance if the circuit is operated as common/ grounded gate (a mode that isn't covered in this article, other than in Section 8 below).
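The quoted 420 ohm bypass impedance and the 3.8Hz corner can be reproduced approximately. Note that the transconductance used here (about 1.7mS) is my own back-figured assumption, chosen to match the quoted impedance; it is not a value given in the article:

```python
import math

def parallel(r1: float, r2: float) -> float:
    """Two resistances in parallel."""
    return r1 * r2 / (r1 + r2)

def f_3db(c_farads: float, r_ohms: float) -> float:
    """-3 dB corner frequency of a simple RC combination."""
    return 1.0 / (2.0 * math.pi * c_farads * r_ohms)

# The impedance C1 must bypass is R3 (1.5k) in parallel with the FET's
# own source impedance, roughly 1/gm. Assuming gm ~ 1.7 mS:
z_bypass = parallel(1500.0, 1.0 / 1.7e-3)
print(round(z_bypass))                    # ~420 ohms
print(round(f_3db(100e-6, z_bypass), 1))  # ~3.8 Hz, as quoted
```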
Based on the measured output impedance, the effective drain resistance (which is in parallel with R2) can be calculated at 61k. This value isn't useful, but I thought I'd mention it anyway. The DC (quiescent - no signal) operating voltages for the circuit are shown in the same colour used in the graph.

The DC operating point is not necessarily optimum for all parameters. By increasing the current slightly, distortion is reduced and gain is increased, but the drain voltage is reduced and the output will clip asymmetrically. For example, reducing R3 to 1k increases the gain to 19.4 (25.8dB) and reduces distortion to 0.43%. However the drain voltage is only 7.2V and the negative half cycles will clip well before the positive half cycles. This may not matter in a practical circuit of course, but it's a trade-off that you need to be aware of. Other JFETs may behave differently, and the only real way to know is to run tests.

While it is possible to calculate the gain of a simple JFET voltage amp stage, doing so is rather irksome. The process used to calculate the gain of a valve stage can be applied (see Biasing and Gain in the valves section), but JFET datasheets don't include the 'amplification factor' (usually written as µ or 'mu'), nor is the effective drain resistance (equivalent to plate resistance for a valve) specified. Determining the amplification factor is not as easy as it might be, because FET manufacturers don't provide the necessary data.

While it may look easy enough to use the transconductance figure to calculate the gain, it's not as straightforward as it may appear. If a FET has a transconductance of 1mS, this works out to be 1mA/V - meaning that the drain current will (theoretically) change by 1mA for each volt change at the input. While you might imagine that this can be used to calculate the gain, it usually doesn't work. Most of the time, the specific parameters that are needed to calculate the gain have to be measured, and if you set up to do that you can simply measure the gain instead. This is far easier than working out the parameters and calculating the gain, and the end result is also more accurate because you measured it with the device under normal operating conditions. Remember parameter spread - it is not your friend.
To expand on the issue of gain calculation, you must be aware that the transconductance is not a fixed value. It changes depending on the drain current, so it will be quite different at (say) 100µA and 10mA. As an example, you might measure 2mA/V (2mS) at 1mA current, but at 7mA it might be 5.7mA/V (5.7mS). You can't make meaningful calculations with a moving target like that.
Because of the source resistor, there is some tolerance for differing VGS values. The DC operating point (measured at the drain terminal) will change proportionately to the difference in VGS, and it might be enough to cause the circuit to fail to work properly. This is especially true if the operating point is marginal to start with. The circuit will probably still amplify, but may have (perhaps grossly) excessive distortion.
Like all simple single transistor (JFET, MOSFET or BJT) gain stages, power supply rejection is minimal, so a very clean DC supply is essential. Power supply hum or noise will be coupled into the output signal with very little attenuation. The power supply rejection will typically be less than 3dB, so if there's 100mV of noise on the supply, you can expect at least 70mV at the amplifier's output.
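The 100mV-in/ 70mV-out figure corresponds to roughly 3dB of rejection, which is easy to confirm:

```python
import math

def rejection_db(v_rail_noise: float, v_output_noise: float) -> float:
    """Supply rejection expressed in dB (bigger is better)."""
    return 20.0 * math.log10(v_rail_noise / v_output_noise)

# The text's figures: 100 mV of hum on the rail, ~70 mV at the output.
print(round(rejection_db(0.100, 0.070), 1))  # -> 3.1 dB
```

For comparison, a typical opamp offers well over 80dB of supply rejection at low frequencies, which is why a regulated or well filtered supply matters so much more for discrete single-device stages.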
Simple FET voltage amps are (or were) fairly common in undemanding applications, but in many cases something a little more refined is needed. A popular configuration is called a 'mu-follower', which uses a second JFET as a bootstrapped load for the amplifier. This improves linearity and increases the gain. There are many variations on the basic idea though, and there is no consensus as to which version is the 'true' mu-follower. While the gain is increased significantly by most of the common schemes, so too is output impedance. This means that the following circuitry must have a very high input impedance, or a voltage follower is needed to reduce Zout (output impedance) to something usable.
Figure 3 - Mu-Follower Voltage Amplifier
The DC operating conditions are not changed much with this arrangement, but the gain is considerably greater. It may not look like it based on the graphs, but the input is only 10mV peak, where the previous example used a 100mV peak input. Gain is now around 67 (36.7dB) and distortion is 0.78% when the output level is increased to be the same as the previous example (1.74V peak). So, while the gain has been increased significantly (3.8 times), the distortion is only reduced by a factor of 1.8 - worthwhile, but not dramatic. Output impedance is also increased, rising to about 30k. The following stage needs an input impedance of at least 10 times this (300k) or gain will be greatly reduced. Frequency response of the circuit as shown (at the -3dB points) is from 3.1Hz to 775kHz - more than enough for any audio circuit.
Output impedance can be reduced to something more sensible by adding a FET source follower, but a BJT emitter follower is likely to give better results. You may imagine that a MOSFET would be preferable, but it's not necessarily the case - while output impedance is lower than with a BJT, the distortion is higher. A reasonable sized power MOSFET is a little better than something like the 2N7000 (a low current N-Channel MOSFET), but a high gain, small signal BJT usually performs better.

A FET has an almost infinite current gain because the input impedance is so high. The only limitation is the fact that a resistor is needed so the gate is referenced to ground (or a suitable negative voltage with respect to the source). There's no reason that the resistor can't be as high as 1GΩ or more, although PCB leakage may become a problem with such extreme impedance levels. In most cases, an input capacitor is essential, because without it the FET's bias point can't be maintained (the gate would be at zero volts rather than around 8.4V or -1.5V referred to the source - FET dependent). If the source is a piezo-electric transducer a capacitor is not needed, because the piezo element is capacitive and doesn't pass DC. This can be useful for piezo accelerometers for example, but a FET input opamp will usually give better results.
Figure 4 - JFET Source Follower
The overall performance of the circuit can be improved by using a constant current sink in place of R3. All semiconductors are more linear in any topology if the current is maintained at a constant value. This applies equally to voltage or current amplifiers, and with BJTs, FETs (including MOSFETs) and valves. With low signal levels the improvement is probably not worth the additional parts and cost, but it obviously depends on what you wish to achieve. As shown, the distortion with a 2V (peak) input signal is 0.05%, but is reduced dramatically with a FET current sink as the load. Performance is improved further with a better current sink, but the law of diminishing returns makes it uneconomical to add to a simple circuit that doesn't even equal a lowly TL071 for audio frequency distortion performance.
Input impedance of the circuit is around 5MΩ because R1 is partially 'bootstrapped' by being joined to the junction of R2 and R3. Input impedance can be increased further (to around 20MΩ) by bypassing R2 with a 100µF capacitor. Gain is not unity - it's about 0.94 due to the low transconductance of the FET. Output impedance is around 400 ohms.
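The bootstrapping effect on R1 follows the usual Zin = R1 / (1 - k) relationship, where k is the fraction of the input signal appearing at the bottom of R1. The resistor value and feedback fractions below are assumptions chosen to match the quoted 5MΩ and 20MΩ figures; they are not values stated in the article:

```python
def bootstrapped_zin(r1_ohms: float, feedback_fraction: float) -> float:
    """A gate resistor returned to a node carrying a fraction k of the
    input signal is effectively multiplied: Zin = R1 / (1 - k)."""
    return r1_ohms / (1.0 - feedback_fraction)

# Assuming R1 = 1 Mohm (hypothetical), the quoted input impedances
# imply feedback fractions of roughly 0.80 and 0.95:
print(round(bootstrapped_zin(1e6, 0.80) / 1e6, 1))  # -> 5.0 (Mohm)
print(round(bootstrapped_zin(1e6, 0.95) / 1e6, 1))  # -> 20.0 (Mohm, R2 bypassed)
```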
Figure 4A - Improved JFET Source Follower
As noted above, the circuit can be improved by using a second JFET as a current source for the active FET. This improves linearity, and if R2 is bypassed as shown, the low-frequency input impedance is increased to around 70MΩ. By the time the frequency is increased to about 3kHz, there's no difference to input impedance whether R2 is bypassed or not. According to the simulator, distortion is only 0.0064%, which is a very good result. The gain is also improved, with the above circuit having a gain of 0.987. Output impedance remains unchanged at approximately 400 ohms, but only when C2 is included. Operating current is 1.3mA as simulated, but parameter spread means that R2 and R3 may have to be different values unless the JFETs are matched.
By their nature, JFETs are variable resistors. By changing the voltage on the gate, the resistance can be controlled from the minimum (RDS(on)) up to several hundred k-ohms at least - some go much higher. The minimum possible resistance may require a positive voltage on the gate (N-Channel), and isn't often used. Unfortunately, the resistance is non-linear, and if used for audio signal attenuation (for example), the non-linearity causes distortion. It's generally necessary to limit the peak voltage to no more than around 100mV (70mV RMS), but for very low distortion it needs to be lower.

It has been known for many years that distortion is reduced if 50% of the signal at the drain appears on the gate. This causes cancellation of the even order distortion components (2nd, 4th, etc. harmonics), leaving the lower level odd-order harmonics (3rd, 5th, etc.). This is shown below. Doing this can introduce some side-effects, including a delay caused by the coupling capacitor (C1) having to charge. This is known to create (sometimes unacceptable) delays in peak limiters in particular, often accompanied by very obtrusive audible artifacts. There are ways around this, but they complicate the circuit.
The drain-source resistance is not the only non-linearity. The resistance versus gate voltage relationship is also non-linear, so a change of (say) 10mV will have a large effect when it transits the gate-source threshold voltage, but (much) less effect beyond the threshold as the voltage is made less negative. Again, parameter spread means that the threshold is not predictable from datasheets, and in most cases a preset (trimpot) is needed so the threshold can be set accurately.
Figure 5 - JFET Variable Attenuator
The FET distortion is worst when there is a high voltage across the device, and can be made worse if the source impedance is low as this means higher current. This depends on the FET used - in the above circuit, distortion is actually worse if the value of R1 is increased. As shown, the maximum distortion is around 3.2%, when the peak output signal is at 80mV (56mV RMS). The control and output waveforms are shown below, but only the signal envelope can be seen because the time span is too great for a 1kHz signal to be visible. The control voltage varies from -2.5 to -1.5V across the span of the graph.
This general arrangement is used in countless audio peak limiters, but there is a hidden trap that is not immediately obvious. Imagine that the control voltage suddenly changes from -2V to -1V. The voltage at the gate of the FET will initially only change by 0.5V, because R2 and R3 form a voltage divider. With 10nF as shown, it takes over 80 milliseconds before C1 charges and allows the full 1V change (actually 990mV at 80ms) to reach the gate. This limits the basic circuit to relatively slow attack times when used in a limiter, so many FET based commercial products use a more complex arrangement to ensure that the time constant does not cause problems. One solution to this problem is to make C1 much larger than necessary, so its influence is no longer relevant because the control voltage is always divided by two (or at least for long enough for the control circuit to correct for the change). C1 also couples a small amount of the control voltage through to the output, with the level depending on the circuit impedances.
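The charging delay described can be estimated from the RC time constant. A minimal sketch - the ~2MΩ effective (Thevenin) resistance below is not given in the text, but is implied by the 80ms/10nF figures quoted, so treat it as an assumption:

```python
import math

def gate_step_response(t, dv, r_thevenin, c):
    """Gate voltage after a control-voltage step of dv volts.
    The R2/R3 divider passes half the step immediately; the
    remainder arrives as C1 charges through the divider."""
    tau = r_thevenin * c
    return dv * (1.0 - 0.5 * math.exp(-t / tau))

R_TH = 2.0e6   # assumed effective divider resistance (hypothetical)
C1 = 10e-9     # 10 nF as in the text

# Time for a 1 V step to reach 99% (990 mV) at the gate:
# 1 - 0.5*exp(-t/tau) = 0.99  =>  t = tau * ln(50)
tau = R_TH * C1
t99 = tau * math.log(50)
print(f"t(99%) = {t99 * 1e3:.0f} ms")
```

With these assumed values, t(99%) lands near the 80ms quoted above; a larger C1 stretches the delay proportionally, which is why fast limiters need a different arrangement.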
Figure 6 - JFET Variable Attenuator Waveforms
The red waveform is the signal envelope, and green is the control voltage. It's quite obvious that most of the control takes place over a rather limited control voltage range (between -2.35V and -2.1V). When the FET's gate is at 0V, the signal is reduced to 5.1mV peak, an attenuation of about 26dB. If the gate is made positive the FET will turn on a little harder, but the small amount of extra attenuation isn't worth the trouble (the attenuation is increased by a little under 9dB with +2.5V on the gate). The wider voltage swing is harder to accommodate with simple circuitry. It's also harder to ensure minimum control voltage feed-through (where part of the control voltage change appears on the signal line).
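The attenuation figure follows from the standard decibel formula. A quick sketch, assuming the ~100mV peak input level mentioned at the start of this section:

```python
import math

def db(v_in, v_out):
    """Voltage ratio expressed in decibels."""
    return 20.0 * math.log10(v_in / v_out)

# Gate at 0 V: ~100 mV peak in (assumed input level), 5.1 mV peak out
att_0v = db(100e-3, 5.1e-3)
print(f"attenuation = {att_0v:.1f} dB")
```

This gives roughly 26dB, consistent with the value quoted above.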
The distortion created by the FET is often very audible, so for low distortion compressors and peak limiters, the voltage across the FET must be kept to a minimum. This creates a conundrum though, because low signal voltages mean that more gain is needed after the gain control, increasing noise. There are many FET based limiters around (including one in the ESP projects page), and a few have achieved something akin to cult status amongst users. If the arrangement suits your needs, then there's no reason not to use it - compressor-limiters can be very personal choices.

It is not essential to use a negative control voltage. If the source is raised to around 3V above ground, the control voltage can then range from 0 to +1.5V and the result is identical to that shown above. However, there is now a DC voltage at the drain, so the input and output must be capacitively coupled.
Figure 7 - JFET Attenuator Waveform Without 1/2 Voltage At Gate
For reference, the graph above shows the asymmetry created if the gate doesn't receive the 1/2 signal voltage. Where there is asymmetry, there is clearly a considerable amount of even-order distortion. The control voltage is identical to the example shown in Figure 6 and isn't repeated. The distortion reaches over 15% when the control voltage is -2.3V (850ms into the graph), just as the positive half cycles of the waveform start to be attenuated.
The inset shows a small part of the waveform. The distortion is clearly visible, with the positive peak at 92mV and the negative peak at 41mV. That is a completely unacceptable amount of distortion. It's primarily second order, and despite claims that this type of distortion sounds 'nice', it doesn't. Not even a little bit!

While you may imagine that a MOSFET could be used if the signal voltage is kept low, this is not the case. A MOSFET will create massive distortion, regardless of whether a 1/2 signal voltage is applied to the gate or not. Consequently, this option is not discussed.
The main things that you need to be aware of with MOSFETs are that they are primarily designed for switching, and that they have a very high gain compared to JFETs. They also have a comparatively high input capacitance, and this limits their frequency response with high impedance signal sources. Even small signal types (such as the 2N7000) have a gate-source capacitance of 20-50pF, which is around 10 times that of a 'typical' JFET. Where a JFET voltage amp stage might be perfectly happy with a 1MΩ source impedance, a similarly configured 2N7000 MOSFET will roll off the high frequencies from as low as 400Hz (-3dB).
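The low rolloff frequency comes about because the gate-drain capacitance is multiplied by the stage gain (the Miller effect). A sketch of the arithmetic - the gain and capacitance values below are illustrative assumptions, not datasheet figures:

```python
import math

def miller_input_capacitance(c_gs, c_gd, voltage_gain):
    """Effective input capacitance of an inverting stage: the
    gate-drain capacitance appears multiplied by (1 + gain)."""
    return c_gs + c_gd * (1.0 + voltage_gain)

def f_3db(r_source, c_in):
    """-3 dB frequency of the source-resistance / input-capacitance pole."""
    return 1.0 / (2.0 * math.pi * r_source * c_in)

# Hypothetical 2N7000-like values, for illustration only
c_in = miller_input_capacitance(c_gs=50e-12, c_gd=25e-12, voltage_gain=15)
print(f"C_in = {c_in * 1e12:.0f} pF, f(-3dB) = {f_3db(1e6, c_in):.0f} Hz")
```

With a 1MΩ source, a few hundred picofarads of effective input capacitance puts the -3dB point at a few hundred hertz, in line with the figure above.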
The first MOSFET voltage amp shown below expects a source impedance of no more than 10k, and even then will have a -3dB frequency of less than 50kHz. As the source impedance increases, matters get worse. It can be helped by not bypassing the MOSFET's source resistor (R4 in Figure 8), which bootstraps the input capacitance and reduces its effect on frequency response. However, this will reduce gain and increase noise.

Because MOSFETs are designed for switching, they are not characterised for linearity or noise. The latter is important in low level circuits, and it's very difficult to find much real information on their noise performance. In general, I would expect them to be much noisier than JFETs and most BJTs, so using MOSFETs for amplifying very low signal levels is ill-advised, both for noise and input impedance.

Depletion mode MOSFETs are available, and these are 'on' with no gate voltage, in the same way as a JFET. A negative gate voltage is used to turn a depletion mode MOSFET off. However, these are relatively uncommon compared to enhancement mode devices, and consequently they are not covered in the descriptions that follow.
MOSFETs have much higher transconductance than JFETs, so more gain is available from a single stage. Because most common MOSFETs are enhancement mode, they need a positive voltage on the gate referred to the source to conduct. This means that a biasing scheme similar to that used for bipolar transistors is needed, and the inherently high impedance is not usually available because of the resistors needed for biasing and the input capacitance (which is a major limiting factor).

As with JFETs, MOSFETs have a fairly wide parameter spread, so biasing is again likely to be uncertain. As noted in the introduction, BJTs have the advantage that their base-emitter voltage is (comparatively) stable and predictable, but that does not apply to JFETs or MOSFETs. A MOSFET used as a voltage follower (source follower) is less of a problem, but voltage amp stages can be tricky to get right. For the most part, a JFET is a better choice for a voltage amp - especially if you need high input impedance.

In much the same way as a BJT is biased, a voltage divider is needed to set the gate voltage to that value necessary to maintain the MOSFET in its linear region. If it's saturated (turned fully on) or cut off (fully off) no gain is available. The gate potential is quite sensitive, because MOSFETs have a comparatively high transconductance, but an unpredictable gate-source voltage. This makes biasing without using feedback a somewhat tricky proposition. The use of a feedback biasing scheme ensures fairly stable operating conditions, but reduces the input impedance. A source resistor is also essential, as this provides another level of feedback. It can be bypassed so the feedback affects DC conditions but not AC gain.

Don't expect low distortion from a MOSFET voltage amplifier unless you add a current source in place of the drain resistor. Most MOSFETs are optimised for switching, and linearity is generally worse than JFETs, which in turn are usually worse than BJTs. This doesn't mean that you can't get good results, but it takes more effort. When you compare the results from any of the simple discrete devices to a decent opamp, there is simply no doubt that the opamp will outperform all of them other than for specific tasks (such as RF applications).
The same conventions as used for JFETs have been applied here. The gate-source voltage is +2.56V (note that it is positive, not negative as with JFETs). The input signal is 10mV peak, and the peak output is 2.33V - a voltage gain of 233 (47.3dB). However, the gain for the negative-going output signal is 240 - the difference is a clear indication of even order distortion, measured at 2.65%.
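The gain figures quoted can be checked directly, and the unequal positive/negative gains can be expressed as a simple asymmetry ratio:

```python
import math

def gain_db(gain):
    """Voltage gain expressed in decibels."""
    return 20.0 * math.log10(gain)

g_pos, g_neg = 233.0, 240.0          # gains quoted in the text
print(f"gain = {gain_db(g_pos):.1f} dB")

# Unequal positive and negative gains are the signature of even-order
# distortion: a perfectly symmetrical stage would give g_pos == g_neg.
asymmetry = (g_neg - g_pos) / ((g_neg + g_pos) / 2.0)
print(f"gain asymmetry = {asymmetry * 100:.1f} %")
```

The ~3% asymmetry is consistent with the stage being dominated by second-harmonic distortion.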
Figure 8 - MOSFET Voltage Amplifier
An unexpected result of using R1 joined to the drain is that it creates a negative feedback path. This reduces input impedance drastically. Rather than the 500k you might have expected, it's only 5k. The bias network can be connected as shown in Figure 8A, to remove the AC feedback. While the gain is impressive, distortion is over 2.6% - not so impressive. While the distortion is mainly (supposedly 'nice') 2nd harmonic, there's simply too much of it. If C2 is omitted, gain is reduced and input impedance becomes a more acceptable 300k.
There are a few things that can be done to make a MOSFET voltage amplifier somewhat more 'friendly'. However, it adds more parts than would be needed for an opamp doing the same job and still won't even come close in performance. No matter, as the next circuit shows what can be done to make the stage behave better. Because it has lower gain, the signal voltage has been increased to 1V peak (707mV RMS). There is no longer any AC feedback from the drain to the gate, so input impedance isn't compromised.
Figure 8A - Improved MOSFET Voltage Amplifier
The 'improved' version shown above has an input impedance of 1MΩ, but retains the DC negative feedback to stabilise the operating conditions. C2 removes the AC negative feedback. Because R4 is not bypassed, gain is reduced (and noise is increased), but it partially bootstraps the gate capacitance, and can provide passable frequency response with signal source impedances up to 100k (20kHz is less than 0.5dB down with a 100kΩ source). Gain is only 5.3 (14.5dB) and noise will be higher due to the noise contribution of R4, which is amplified by Q1 ... this always happens when a source or emitter resistor is not bypassed. Distortion is reduced greatly (to less than 0.1%, even at the much greater level), so it may be a worthwhile compromise.
To see how well (or otherwise) the simulation stacked up against reality, I built the Figure 8 circuit, but left R4 un-bypassed. DC conditions were quite close to the predicted values, and performance was acceptable. The circuit didn't add any audible noise to my workshop system, and distortion at 1V RMS output was below 0.1%. The -3dB frequency was over 400kHz when driven from a 50Ω signal generator - this is somewhat less than the simulation claims, but is still far more than necessary for audio.

It is possible to use a simple voltage divider to provide the bias, and that also avoids the feedback problem. However, it also means that the bias stability is poor because of the wide variation of gate-source voltage for different devices - even from the same batch. The datasheet says that the 2N7000 has a VGS threshold voltage for a 1mA drain current ranging from 0.8 to 3 volts. Most others are similar ... parameter spread strikes again.

In much the same way (well, almost) as with a JFET, MOSFETs can be used as a mu-follower. The gain is a very impressive 6,400 times (76dB), and output distortion with a 3V peak signal is 0.34%. However, there's a downside (but you knew that already). Input impedance falls dramatically, and for the circuit shown below it's only about 300 ohms. Meanwhile, output impedance is increased significantly to 200k - that's higher than many valve stages.
Figure 9 - MOSFET Mu-Follower Voltage Amplifier
Best you don't ask about frequency response. As shown, the response at the -3dB points is from 200Hz to 4.8kHz - telephone quality at best. Without feedback, the circuit has no practical value, and even with feedback its usefulness is doubtful. However, there may be a place for it in something, but I have no idea what that might be. Still, this article is about looking at the options.
A small signal MOSFET makes a rather good follower. Output impedance is low, and because of the relatively high transconductance the output voltage is not reduced as much as with a JFET. With 2V peak input, the output will be around 1.95V peak. The output impedance of the circuit shown below is only 40 ohms, and distortion with no load is less than 0.001%.
Figure 10 - MOSFET Source Follower
Unfortunately, the input impedance is much lower than for a JFET, because of the requirement for the two biasing resistors (R1 and R2). Input impedance is the parallel combination of the two, 600k as shown. This can be increased by using a bootstrap circuit (or a separate bias supply and feed resistor as used in Figure 8A), and an input impedance of over 50MΩ is possible, but high frequency response is poor with high impedance sources. If the gate is direct coupled to the preceding stage biasing is not required, and this makes it easy to provide low output impedance from JFET, MOSFET or valve (vacuum tube) voltage amplifier stages.
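The quoted figures can be approximated with the usual follower formulas. The transconductance and source-resistor values below are hypothetical, chosen to land near the 1.95V/40Ω results above, and the 1.2M bias resistors are an assumption (the text only gives their 600k parallel value):

```python
def parallel(*rs):
    """Parallel combination of resistances."""
    return 1.0 / sum(1.0 / r for r in rs)

# Two equal bias resistors give the 600k quoted (values assumed)
z_in = parallel(1.2e6, 1.2e6)

# Follower gain and output impedance from transconductance.
gm = 25e-3      # 25 mS transconductance (hypothetical)
r_s = 1.6e3     # source resistor (hypothetical)
a_v = (gm * r_s) / (1.0 + gm * r_s)   # slightly less than unity
z_out = parallel(r_s, 1.0 / gm)       # dominated by 1/gm
print(f"Zin = {z_in/1e3:.0f}k, Av = {a_v:.3f}, Zout = {z_out:.0f} ohms")
```

A 2V peak input times the ~0.975 gain gives the ~1.95V peak output mentioned above, and the output impedance is essentially 1/gm.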
Use as a direct coupled follower is probably one of the best options, and although direct coupled BJT followers are common, the MOSFET is a better option where minimal loading of the previous stage is needed. You do need to allow for the voltage difference between gate and source, but for circuits operating with reasonable supply voltages (12V or more) it's easy to compensate for the ~2.5V offset that exists.

High voltage MOSFETs can be used to replace triode cathode followers in valve (vacuum tube) amps. They will provide a much lower output impedance, and outperform the cathode follower in all respects. Unlike a cathode follower though, there will be very little added distortion. The large gate capacitance is pretty much negated by the local feedback, so there will be no loss of high frequencies. There may be a small risk of damaging the gate's insulation if a protective zener diode (typically 12V) is not used between gate and source, but in most cases this is very unlikely. The MOSFET has an additional advantage of not needing any heater current (unlike a valve), and indefinite life.

In valve power stages, there is a lot to be said for using MOSFET followers to drive the output valves. This lets you use lower value grid resistors, ensuring much more stable operating conditions for the power valves, without excessive loading or loss of gain from the phase splitter. Few 'purists' will like the idea of course, even if it does improve performance and reliability. The IRF840 is a 500V MOSFET that is well suited to use as a follower in valve designs. There are also TO92 versions, such as the ZVN0545A or SSN1N45BTA, but their dissipation is limited to around 700mW. The STQ3N45K3-AP is rated for 3W and has internal zener protection for the gate, and at less than $1 no valve can come close. When TO92 MOSFETs are used like this, limit the current to no more than 2mA to keep the dissipation low (2mA at 200V is 400mW - adjust as needed for the voltage across the MOSFET).
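The dissipation check is simple power arithmetic, and is worth sketching because the margin against the TO92 limit is not large:

```python
def dissipation_w(v_across, i_drain):
    """Power dissipated in the follower MOSFET (standing conditions)."""
    return v_across * i_drain

P_MAX_TO92 = 0.7           # ~700 mW limit quoted for TO92 types

# 200 V across the device at 2 mA, as in the text
p = dissipation_w(200.0, 2e-3)
print(f"P = {p * 1e3:.0f} mW, within limit: {p < P_MAX_TO92}")
```

At 2mA the device sits at 400mW; pushing the current to 3mA at the same voltage would leave only 100mW of headroom, so the 2mA recommendation is not arbitrary.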
You do need to be aware that the output of a MOSFET source follower will jump to the full supply voltage when used after a valve amplifier stage. This is because the valve does not conduct until the cathode warms up, and the sudden high voltage output may damage following stages unless protective measures are taken. Project 167 describes a suitable design, including protection circuitry.
MOSFETs can be used as signal switches, in a 'solid state relay' configuration. This works well, but there are a few things that have to be considered. The article MOSFET Solid State Relays covers most of the applications, but not signal level (100mV to ~10V RMS, low current) usage. There are two ways that a MOSFET 'solid state relay' can be used - either short the signal to ground or connect the signal through the relay. The latter does work, but the residual when the signal is off may be higher than expected, and distorted. With a 2V RMS source and a 2.2k load, expect an output of perhaps 2-4mV RMS, but very distorted (perhaps 10% THD or more as simulated - reality may be different). Consequently, the normally open (series) connection isn't recommended, with the preferred option being to short the signal to ground. However, this isn't without its limitations either.

One of the main issues (for both series and shunt operation) is that the gate supply should be floating. If it's not, you must use resistors to allow the gate to be biased, and that creates unwanted (and undesirable) interactions with the signal. Having a floating supply for each switch is a serious (and costly) nuisance to incorporate into the circuitry. There is a way around it though, which is to use a commercially available MOSFET relay that uses light activation. An example is the CPC1014 (made by IXYS) or similar. This not only performs the switching function, but offers full isolation between the LED and switch, rated at 1,500V DC. There are many similar devices from other makers, but they may be more expensive than electro-mechanical relays and usually don't work as well. No matter, as they are interesting and useful.
Figure 11 - MOSFET Relay (CPC1014 Example)
The series and shunt circuits are shown. In each case, a CPC1014 or similar is indicated. These are opto-isolated MOSFET relays, and are available from a variety of suppliers. They are available with voltage ratings from around 60V up to 250V or more, and all use a LED to turn on a pair of light-activated MOSFETs. By default, the gate-source region is completely isolated, so there's no need to mess around with a floating supply. It's certainly possible to make small signal (low current) MOSFET relays using discrete parts, but the end result will be far more costly than the IC.
The shunt connection will normally be the preferred option, because when used in series, the off signal may be rather badly distorted. It's a very low level (depending on the device itself), but the signal may still be audible and will not sound very nice at all. The shunt circuit's output level when on (signal shorted to ground) depends entirely on the on-resistance of the MOSFETs. The CPC1014 has an on-resistance of 2 ohms, so a 1V signal through a 2k2 resistor will be attenuated to less than 1mV (-60dBV). In reality the off level may be somewhat higher than 1mV, but it depends on the device characteristics. Note that the series circuit provides an output when the DC supply to the LED is on, and the shunt circuit provides an output when the LED is off.
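The residual level is just a voltage divider formed by the series resistor and the MOSFET's on-resistance. A sketch using the values from the text:

```python
import math

def shunt_attenuation(v_in, r_series, r_on):
    """Output of the shunt switch when 'on' (signal shorted to ground
    through the MOSFET's on-resistance). Returns (volts, dBV)."""
    v_out = v_in * r_on / (r_series + r_on)
    return v_out, 20.0 * math.log10(v_out)   # dBV assumes 1 V reference

# 1 V signal, 2k2 series resistor, CPC1014's 2-ohm on-resistance
v_out, dbv = shunt_attenuation(1.0, 2.2e3, 2.0)
print(f"residual = {v_out * 1e3:.2f} mV ({dbv:.1f} dBV)")
```

The residual works out to just under 1mV (around -60dBV), matching the figure quoted above; a higher on-resistance device raises it proportionally.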
You can operate these relays in a series/ shunt configuration, so when the series relay is off, the shunt relay is on. This shorts out any residual signal that may sneak through, and should be easily capable of providing more than 100dB of isolation. Obviously it's more expensive than a single switch, but it will work well. You'll probably find it hard to justify the cost compared to an electro-mechanical relay though, since a single DPDT relay can switch both stereo channels at once for less than $5.00 or so.
The LED current has to be kept below the rated maximum, and for most of these ICs, around 10mA is about right. The LED is infra-red, and these have a typical forward voltage between 1.2V and 1.4V. The details for the device you intend to use should be checked to make sure that all limits are observed. R1 is selected to provide enough LED current to ensure reliable operation, based on the datasheet suggestions and the available switched supply voltage.
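Selecting R1 is the usual LED series-resistor calculation. A sketch - the 5V switched supply is an assumption for illustration, while the forward voltage and current come from the text:

```python
def led_resistor(v_supply, v_forward, i_led):
    """Series resistor for the relay's LED: drop the excess supply
    voltage across R1 at the desired LED current."""
    return (v_supply - v_forward) / i_led

# 5 V switched supply (assumed), ~1.3 V infra-red LED, 10 mA
r1 = led_resistor(5.0, 1.3, 10e-3)
print(f"R1 = {r1:.0f} ohms (use the nearest standard value)")
```

With a 5V supply the calculation gives 370Ω; a standard 390Ω part still delivers around 9.5mA, comfortably within the usual ratings.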
As noted, the CPC1014 is only one of many similar devices. MOSFET relay ICs are made by many different companies, but the operating principles are identical. They are acceptably fast, with typical switching speeds being in the order of 3ms or less. Another option is the IR PVT422, which is a dual relay (two independent relays in a single 8-pin DIP package). Its on-resistance of 35 ohms is a limitation (but it's rated for ±400V). There is also the Omron G3VM-351, available in SMD and through-hole packages. However, its on-resistance is considerably higher than the CPC1014's because it's rated for a higher voltage. The Omron G3VM-21 types have an on-resistance of only 40mΩ - a search will help you choose the right device for your needs. Note that these devices are very different from standard LED/ photo-transistor opto-couplers, and the standard types cannot be used in this role.
These devices are certainly interesting, and are definitely something that you should know about. In electronics, the most obvious choice is not always the most ideal. Most of the time, a conventional electro-mechanical relay is a better choice, but knowing that alternatives exist is always helpful. Prices range from less than AU$3.00 to AU$15 or so, depending on brand, type and package style.

One application where these devices would be very well suited is 'combo' guitar amplifiers. Standard relays can cause problems due to the vibration, but MOSFET relays are not affected. It's unlikely that too many commercial amps will use MOSFET relays due to the cost, but DIY amps are less of a problem because their builders are far more interested in getting everything right than saving a few dollars/ pounds/ euros etc.
The cascode topology is rarely (if ever) needed for audio, but it's common with RF, and was originally used with valves. This hasn't stopped people using cascode operation in audio of course, and in some specialised cases it may be worthwhile. The increased high frequency performance is due to the greatly reduced voltage swing at the output 'port' of the lower amplifier stage, so internal capacitance has less influence. If a voltage doesn't change (or only a little), the capacitance doesn't need to charge and discharge, so HF response is no longer affected. Essentially, a common emitter (or source) amplifier is direct coupled to a common base (or gate) amp stage, resulting in very high isolation from output to input, and much higher frequency response than can be obtained from a common emitter/ source amplifier. The primary reason to use a cascode circuit is when very high input impedance is needed, along with extended high frequency response.

Common gate amplifiers were not covered above because they are fairly uncommon as a stand-alone circuit. The upper FET (Q2) in the circuit below is operated in common gate mode, because the gate is connected to ground for AC. You may notice the similarity of the cascode circuit to the mu-follower. It appears (to me) that the mu-follower may have been a development from the cascode, but whether that was by accident or design is not known. Cascode circuits (with valves) date back to the 1930s.
Figure 12 - JFET Cascode Voltage Amplifier
The (simulated) -3dB upper frequency of the circuit shown is 154MHz, with a gain of 3.8 (11.6dB). The upper response can be extended further by using a small inductor in series with R5, which increases the load impedance at high frequencies. There is little or no need for cascode circuits in audio, but there are still people who insist that there are audible benefits. While I find this highly unlikely, as long as no-one claims 'magic' properties it's just harmless fun.
Cascode amplifiers can be made using any available amplifying device, and there's no reason not to mix two different types, such as a valve and MOSFET, JFET and BJT, etc. The biasing methods may change, but the basic idea isn't altered. Unless you are working with RF, it's unlikely that you'll need a cascode amp. There are countless examples on the Net if you want more ideas. Audio doesn't need response to several MHz, and such extended response can also make interference from AM transmitters a great deal more difficult to suppress.
It's doubtful that this article has answered all questions, but hopefully it will set you off to find more information. There's a great deal of data and many circuits that you can play with, and you should now have an appreciation of some of the compromises that affect the designs you might come across. All circuits involve compromise, and there is no one amplifying device that is ideal for everything. We are spoiled for choice with opamps, BJTs, JFETs and MOSFETs, with the discrete devices available for either polarity, so spare a thought for the early designers who had a single choice - which valve to use for this or that application.

There is usually no good reason to use discrete parts instead of opamps in most audio circuits, despite claims that opamps somehow sound 'bad'. For some applications there may be no choice, especially when very high bandwidth is needed. I've (mainly) only shown the basic circuit arrangements here, but for RF work (in particular) the cascode circuit topology is very common because it provides a wide bandwidth and high input impedance with any given device (or combination thereof).

There are no recommendations, and note that the schematics are shown for reference - these are not construction projects, but are simply to demonstrate the different circuits available. All results and waveforms are based on simulations, but parameter spread means that real-life circuits will almost certainly need to be tweaked to get similar results. One point is very important though - the power supply has to be as clean (noise-free) as possible, because most of the circuits shown have very poor power supply rejection.
Failure to provide a clean supply will inject noise into the audio path, including hum, buzz and wide-band noise. A resistor/ capacitor filter following a 3-terminal regulator works very well, and I'd suggest somewhere between 10-100 ohms in series with the supply, with no less than 1,000µF (and preferably more) to ground. The 20V supply shown is simply a suggestion. Normally, anywhere between 15V and 30V will be alright, with higher voltages providing more leeway to account for parameter spread.
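The effectiveness of the suggested RC filter is easy to estimate as a first-order low-pass response. A sketch using the upper end of the suggested values, checked at 100Hz (the dominant ripple frequency for full-wave rectification on 50Hz mains):

```python
import math

def ripple_attenuation_db(f, r, c):
    """Attenuation of a simple series-R, shunt-C supply filter at
    frequency f (first-order low-pass response)."""
    x = 2.0 * math.pi * f * r * c
    return 20.0 * math.log10(1.0 / math.sqrt(1.0 + x * x))

# 100 ohms and 1,000 uF, from the suggested range in the text
att = ripple_attenuation_db(100.0, 100.0, 1000e-6)
print(f"100 Hz ripple attenuation = {att:.0f} dB")
```

Roughly 36dB of extra ripple attenuation on top of the regulator's own rejection, which is usually enough for circuits with poor PSRR; smaller R or C values reduce this proportionally.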
As already noted, the choice of JFETs is far more limited than used to be the case. Most of the very low noise types such as the 2SK170 have gone. The LSK170 is an equivalent that is (allegedly) available, but I couldn't find it listed by any major distributor. Many others are still current, but often only in SMD versions. This makes them far less useful for DIY because of the tiny package and the considerable skill required to mount such small parts. A fairly large proportion of the JFETs you can get easily (e.g. J105, J107, J109, etc.) are designed primarily for switching, so they are not optimised for linearity or noise. The BF256B (TO92) is at least a partial exception - it's an RF device but will still work fine at audio frequencies.
References

1. JFET and MOSFET Datasheets - BF256, 2N7000, 2SK170, etc.
2. CPC1014 and other MOSFET relay datasheets
3. Cascode - Wikipedia
4. Web search of 'mu-follower' schematics (many circuits found, not all are useful)
Elliott Sound Products | Voltage Followers And Buffers
A voltage follower, regardless of the technology used to build it, is a current amplifier. The available current from the source is usually small because the source circuit has a high impedance, so it cannot supply enough current to drive the following circuitry. Most of the time, we are concerned with voltage amplifiers, which (as their name suggests) increase the amplitude of the signal. These are used when the voltage from the source is too low to be useful. In reality, the vast majority of circuits combine both voltage and current amplification, although the latter is often not the primary goal. It comes 'free' with the circuit (especially opamps).

When a voltage amplifier is combined with a current amplifier, the end result can be considered to be a power amplifier, having increased both the output voltage and the ability to supply current. In small signal applications, nearly all opamp circuits are actually 'power amplifiers', but they are rarely referred to as such, because the output power is negligible compared to what we normally expect. For example, most opamps are able to supply a few milliwatts at most.

The voltage followers discussed here are only current amps, and do not increase the amplitude of the signal. Indeed, most actually reduce the voltage slightly, with outputs varying between around 0.9 and 0.99 of the input voltage. However, the current available to the load can be increased by a factor of between a few hundred and many thousands of times, depending on the topology of the circuit. In most cases, the 'amplified' current will still only be a few milliamps, although the composite transistors shown in Section 10 can provide many amps of output current from an input of just a few milliamps.

It can be difficult for beginners (in particular) to understand that output impedance and output current are completely different, and one does not imply the other. The purpose of this article is to show the various methods available to get significant current gain, which is essential when the source can supply significantly less current than is needed by the circuit being driven.

Examples of devices that need a current amplifier (essentially an impedance converter) are capacitor (aka 'condenser') microphone elements and piezo sensors (common for vibration measurements amongst others). There are many other good reasons to use a current amp/ voltage follower though, because some amplifying devices (notably valves, aka vacuum tubes) have high impedance outputs that aren't very fond of loads that are now common. Modern-day loads are typically less than 100k, but many are down to 10k and sometimes less.
These days when a voltage follower is needed, it will almost always be an opamp connected as a unity gain amplifier. It can be inverting or non-inverting, with each having its own set of advantages and limitations. The non-inverting connection suffers from (slightly) higher distortion because the common mode voltage is high (i.e. the voltage seen by both inputs at the same time), but with modern opamps this is rarely a problem. The distortion can be measured with (very) good equipment, but there are now opamps that have such low distortion that it's almost impossible to measure. It is very rare indeed for the distortion to be audible, and if it is, it usually means something else is wrong with the circuit.

The greatest benefit of the non-inverting connection is that input impedance is very high, and if you use a FET input opamp it can be very close to infinite. Output current is determined by the opamp you use, as is the DC offset, which may be problematical with extremely high input impedance. Noise is usually fairly low, but with high impedances it will be dominated by the noise voltage from the input resistor unless the source bypasses the noise (as happens with a capacitor (aka 'condenser') microphone for example).

Using an inverting opamp configuration solves the common mode distortion problem, because there is virtually none. The inverting connection has the disadvantage that its input impedance is limited by the resistor values used. They can't be too high or noise becomes a major problem for low level signals.

For the most part, this article looks at more primitive techniques used as voltage followers - primarily transistor emitter followers and JFETs, and the valve (tube) cathode follower will also be discussed as part of the historical view.

While there are some who insist that opamps are somehow 'bad' and that only discrete designs should be used, there is nothing to suggest that this is true of anything other than the most pedestrian of opamps. Even there, a lowly µA741 opamp will have better distortion figures than many discrete designs (although noise and speed are seriously compromised). There are some esoteric circuits that are arguably better than (some) opamps, but at the cost of many parts and significant PCB real estate.
Since I'm not about to build and measure every circuit discussed, the results will be as derived from the SIMetrix simulator. It can be somewhat optimistic in some respects, but because familiar transistors and basic opamps will be used for all simulations, the results will be comparable. I used a signal voltage of 1.414V RMS (2V peak) for the simulations, as this is a realistic operating level for many common circuits.

Opamp circuits will be described using a dual supply, typically ±15V. Discrete followers will generally also use a dual supply, although they can all be used with a single supply if preferred. Eliminating the DC offset is usually best done by adding an output coupling capacitor, and that's generally necessary even when a dual supply is used.

An important point to make is that an impedance converter circuit should ideally be able to source and sink current equally well. If it can't, the output may be asymmetrical with some loads. Sourcing current is taken to mean that the circuit is providing current to the load, while sinking current means that it's drawing current from the load. Any follower should also be able to provide the same peak voltage (positive and negative) to its rated load, and preferably down to the lowest load impedance likely to be encountered (real life is unpredictable).

Simple emitter followers can't usually provide fully symmetrical operation unless their operating current is unrealistically high. In some cases you can offset the output voltage so that there's less voltage across the transistor, and more across the resistor, and that can restore symmetry for a defined load impedance and reduce distortion. However, creating deliberate asymmetry isn't a cure-all and will only work if you know exactly what you're doing.

Be very aware that simple circuits such as emitter followers have relatively poor power supply rejection ratios (PSRR), so hum or noise on the supplies will affect the signal to some extent. Simple emitter followers as shown in Figure 2 will have a PSRR to the emitter circuit of around -27dB, and about -44dB to the collector circuit, with a 10k source impedance. These figures depend on the component values and (especially) the source impedance, so are only a guide.

Of the circuits discussed here, very few are suitable for buffering DC voltages. Because there are DC offsets that can seriously affect the performance of many of the circuits, they are only suitable for AC operation, meaning that there is a requirement for an output coupling capacitor to block the DC component. In many cases, an input coupling capacitor will also be used, especially if the source has a DC potential.
+ +Many single opamps have provision for an offset null potentiometer, so that input transistor DC offsets can be zeroed, allowing the circuit to operate accurately with DC voltages. This is rarely necessary in audio frequency circuits because the DC is removed by a capacitor, but it's essential for high accuracy circuits that include a DC component that must be preserved. Note that there are many advanced techniques to obtain very high accuracy for DC (such as chopper stabilised amplifiers), but these are not covered here because they are specialised (and usually expensive) parts and aren't necessary or desirable for normal audio frequencies.
Note Carefully: While all the circuits shown on this page have their outputs directly connected (with or without a capacitor), if the circuit is going to be used to interface to the 'real world' via a shielded cable, a resistor must be placed in series with the output. If that isn't done, oscillation is far more likely than not, and it may be at such a high frequency that it doesn't show up on a typical 20-50MHz oscilloscope. The resistor needs to be at least 50Ω, and I generally use 100Ω resistors in this role.
In some cases it may also be necessary to add a 'base stopper' resistor, directly in series with the transistor's base connection, with as little PCB track as possible between the two. The value can vary from as little as 100Ω up to perhaps 1k or more, but remember that a higher resistance will degrade noise performance. Sometimes you can figure out that a base resistor is needed if you discover that audible noise or distortion changes or goes away when you touch the transistor or components connected to it with your finger.
While it may seem unlikely that an emitter follower or unity gain opamp can oscillate, it most certainly will do so if a high Q tuned circuit (such as a length of coaxial cable) is connected directly to the output. Since any oscillation so caused will be RF (radio frequency), it can go unnoticed, but distortion performance will be degraded, and in some cases the oscillation may be audible as an audio frequency buzz. The resistor damps the tuned circuit, and makes sure that oscillation will not occur under normal circumstances.
Despite everything I've said above, there are still instances where a discrete design is a better option. If you need higher voltages than can be handled by affordable opamps, or if you need higher output current than can easily be supplied, then a discrete design may be the simplest and cheapest option. This also applies if you need especially wide bandwidth (over 1MHz or so) or other special requirements that aren't met by available ICs. You may never need to build a discrete circuit, but there's no doubt that opamps can't be used for everything.
One factor that has to be understood is the intrinsic emitter resistance (re - literally 'little r e') of a bipolar transistor. This varies with emitter current, and is generally taken to be ...
    re = 26 / Ie   (Ie in milliamps, re in ohms)
So if the emitter current is 1mA, re is equal to 26Ω. It falls to 2.6Ω at 10mA, and rises to 260Ω at 100µA. This non-linearity is responsible for much of the distortion in any circuit that uses bipolar transistors where the emitter current changes during operation (which will be the case in the great majority of circuits). Similar mechanisms exist in all amplifying devices, and despite claims to the contrary, no known amplifying device is truly linear - especially valves! With JFETs and MOSFETs, one distortion mechanism is the variation of gm (mutual conductance) with drain current, but there are also others that are rather complex and will not be covered here. The design process should always ensure that non-linearities are minimised by using appropriate circuit techniques, not by using 'esoteric' parts.
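The 26 / Ie relationship is easy to tabulate. A minimal sketch (the function name is mine, not from the article) that reproduces the figures quoted above:

```python
def intrinsic_re(ie_ma):
    """Approximate intrinsic emitter resistance (ohms) for an emitter current in mA."""
    if ie_ma <= 0:
        raise ValueError("emitter current must be positive")
    return 26.0 / ie_ma

# The three operating points used in the text: 100µA, 1mA and 10mA.
for ie in (0.1, 1.0, 10.0):
    print(f"Ie = {ie:5.1f} mA -> re = {intrinsic_re(ie):6.1f} ohms")
```

Running it confirms re = 260Ω at 100µA, 26Ω at 1mA and 2.6Ω at 10mA - a 100:1 spread over a quite ordinary current range, which is why the emitter current must be kept as constant as possible.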
It's also important to recognise that with few exceptions, the circuits shown here are in their basic form only - they are not optimised for any particular application.
An 'ideal' (i.e. theoretical) voltage follower has an infinite input impedance and an output impedance of zero ohms. Obviously the 'ideal' doesn't exist other than in simulators, but it's still a useful tool during simulation because opamps (in particular) come close enough to the ideal case that any difference is largely academic. The input impedance of a JFET input opamp is usually in the gigohms range, and the output impedance is a few ohms at most. The output voltage is limited by the power supply voltages, and the output current is set by the opamp itself. It's usually about ±25mA or so, but if loaded that heavily the available output voltage is reduced.
There are a few acronyms that you'll find in this article. Most should be familiar, but they are repeated here so you don't have to look them up.
BJT     Bipolar Junction Transistor (standard transistor, such as 2N2222 or BC549, etc.)
FET     Field Effect Transistor, equivalent to ...
JFET    Junction Field Effect Transistor
MOSFET  Metal Oxide Semiconductor Field Effect Transistor
CMOS    Complementary Metal Oxide Semiconductor (complementary MOSFETs)
Opamp   Operational Amplifier (aka op-amp or op amp)
The basic opamp circuits will be covered first, because they set the goal posts for the parameters that we aspire to. With few exceptions, discrete transistor designs don't even come close to the opamp based followers. The main parameters we are interested in are input impedance, output impedance, and gain. While it's accepted that followers in general don't have gain as such, if the internal gain is too low, then there will be a loss of signal. It's usually less than 1dB even with a valve cathode follower, but it's still a loss of level that will compromise the effectiveness of circuits such as active filters that rely on feedback to get the desired performance.
There is a full discussion about output impedance below, but a word of warning is needed here as well. While a typical opamp may offer an output impedance (with feedback) of less than 1Ω, there is also a limit to the short-circuit current, and the maximum output swing is dependent on the load impedance (and hence the peak output current).
This means that if you use a load impedance that's too low, you will not be able to get the maximum output voltage, and distortion is increased - often dramatically. Most common opamps are limited to a load impedance of 2k or more, but there are also quite a few that can handle 600Ω loads, and a few that can handle even lower impedances. If you need to drive a low impedance, you must check datasheets to verify that you can get both the output current and voltage you need, or the circuit may not be acceptable for your purposes.
Figure 1 - Opamp Voltage Followers
Figure 1 shows the standard opamp buffers, non-inverting and inverting. Of these, the non-inverting configuration is the most common, and although it does invoke common-mode distortion (because both inputs are driven to the same voltage), it is one of the most used circuits known. A great many ESP projects use non-inverting buffers, and they are particularly common with active filter circuits. The input impedance is set by R1 (100k - although it may be a great deal higher with some opamps), and that's in parallel with the opamp's input impedance.
The offset null connections are optional, and are only necessary if an absolute DC level must be maintained. Pin numbers and pot value vary, so the datasheet must be consulted to determine the proper connections and value for the opamp being used. In most cases the offset null isn't necessary, particularly when capacitor coupling is used.
Minimising DC offset is usually not particularly important for audio, especially when the supply voltages are greater than ±5V or so, because there's plenty of 'headroom' and even a few hundred millivolts of offset isn't an issue. The output capacitor removes the DC component and everyone is happy. However, if you do need a low offset, that's achieved by keeping the DC resistance from each input to earth/ ground equal. This is shown above in the inverting circuit, with a bypassed resistor to earth from the non-inverting input. Its value is equal to the resistance of R1 and R2 in parallel - assuming that the source resistance/ impedance is zero.
The resistor is bypassed by a capacitor so that the resistor's thermal noise is not added to the signal, thereby reducing the signal to noise ratio. This arrangement used to be very common, but most modern opamps are good enough to let you simply earth the unused input (no series resistance). It is rarely necessary to ensure input resistance balance, but if you are designing a high gain DC amplifier then it's advisable to keep the resistance at each input the same.
As noted in the introduction, the main benefit of the non-inverting configuration is its very high input impedance. Even if opamps with bipolar inputs are used, a high input impedance generally only affects the DC offset. There is a measurable input impedance of course, and if you need an impedance greater than around 1 megohm it's better to use a FET input opamp. Opamp manufacturers don't specify the input impedance directly, because it depends on how the device is used. They do specify the input bias current, and this can be used to work out the DC offset you'll get with a given input resistance to ground. You can also use the input bias figure to work out an approximate input impedance, but it's not always reliable for a variety of reasons.
It might be possible to measure the input impedance, but only with some difficulty. The easy way is to add a resistance in series with the generator, and adjust its value until the level has fallen to half (-6dB). Assuming that the generator has a negligible impedance (typically between 50 and 600Ω), the opamp's input impedance is then the same as the series resistance. However, since the opamp is used with 100% negative feedback, even a rather basic opamp such as the RC4558 will almost certainly have an input impedance of several megohms. The datasheet claims a typical input resistance of 2MΩ, but in my experience this is somewhat pessimistic. Input bias current is ~50nA (typical).
Should the input resistance be more than 1Meg or so, you will have real difficulties taking a measurement. The opamp's bias current may cause it to swing to a supply rail before you can take a useful measurement. You can use a lower value resistor and calculate input resistance based on a voltage drop. I'll explain this the long way, as that only needs Ohm's law and basic maths, so is more easily remembered ...
Measure the output with zero Ω in series with the input. Let's assume 1V RMS. Add a variable resistor (a 1M pot for example) in series with the input pin of the opamp, and adjust it until the output voltage falls to 900mV RMS. If the series resistance is (say) 500k, you know that 100mV is dropped across the resistor, and 900mV is available at the opamp's input pin. Make sure that your measurement does not include any DC component of voltage or current.
    Iin = 100mV / 500k = 200nA
    Rin = 900mV / 200nA = 4.5 MΩ
The series resistance isn't so high that the opamp saturates (swings to either supply rail), but is enough to allow a fairly accurate measurement of the opamp's input resistance under normal operating conditions. A similar technique is used to determine output impedance, which we shall examine later in this article.
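The Ohm's law steps above can be wrapped into a one-line helper. A sketch only (the function and variable names are mine), using the same 1V / 900mV / 500k figures as the worked example:

```python
def input_resistance(v_source, v_at_pin, r_series):
    """Estimate input resistance from the divider formed by a known series resistor.

    v_source : generator voltage (RMS), measured with no series resistance
    v_at_pin : voltage at the opamp input pin with r_series in circuit
    """
    i_in = (v_source - v_at_pin) / r_series   # current flowing into the input pin
    return v_at_pin / i_in

rin = input_resistance(1.0, 0.9, 500e3)       # 100mV dropped across 500k -> 200nA
print(f"Rin = {rin / 1e6:.1f} Mohm")
```

This prints 4.5 Mohm, matching the hand calculation. Any DC offset at the pin must be excluded from the readings, exactly as the text warns.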
Note that some opamps will swing negative if the input resistance (the opamp's bias resistor) is too high, and others swing positive. It depends on whether the input stage uses PNP or NPN transistors. PNP input transistors will cause the input voltage to be pulled towards the positive supply, and NPN transistors cause it to be pulled towards the negative supply. Since we are talking about followers, for the non-inverting case the output follows the input.
FET input opamps (JFET or CMOS) draw negligible input current. For example, a TL072 is specified for a typical input bias current of 65pA and an input resistance of 10¹²Ω (1TΩ). Any attempt at measuring such a high resistance is doomed, because you aren't measuring a resistor, it's an insulator. It's generally safe to assume that most FET input opamps have an input impedance that's much higher than you will ever need for most applications. PCB leakage may easily become a factor well before the opamp itself has any influence.
Of course there will be situations where your circuit may need exceptionally high input resistance, and special construction techniques are required if that's the case. Most of the time, general purpose circuits (especially audio) don't require impedances much greater than a couple of megohms, and ultra-high impedances won't be examined here. For those interested, there is a project showing a 1GΩ preamp (see High Impedance Input Stages / Project 161 for more info).
Inverting opamp buffer stages have a couple of major disadvantages. The input impedance is set by the input and feedback resistors. These must both be the same value for a unity gain inverting buffer. It's inadvisable to make them very high values (> 100k) because noise becomes a serious issue. In general, it's not a good idea to use the inverting buffer for high impedance low-level signals due to the circuit noise. The input impedance is simply the value of the input resistor, and it doesn't need to be measured.
There is also an advantage, in that the common mode input voltage is close to zero, ensuring minimum common-mode distortion. While this is rarely a problem for most decent opamps, where distortion remains at close to immeasurable levels, it's something to be aware of. This is one of many trade-offs that are required in all aspects of electronics design - for every disadvantage there is usually an advantage, but neither may be of any real consequence for most designs.
The inverting configuration also has a noise gain of 2, so the opamp contributes more noise than a non-inverting buffer, which has unity noise gain. As mentioned above, there is a benefit in that distortion is usually lower because there's no common mode voltage, and both opamp inputs sit at close to zero volts regardless of input signal (assuming a dual supply). However, the reduction of distortion is generally rather small with most opamps, and using that as an excuse for not using a non-inverting buffer would be unwise.
What is 'noise gain'? If you examine the configuration of an inverting buffer, you'll see that the feedback and input resistors are just what you'd expect to see in a non-inverting amp with a gain of 2. When the source impedance is low compared to the input resistor, noise is therefore amplified by 2, but the signal is only amplified by -1. The noise gain is simply a measure of how much the noise is amplified compared to the signal. This applies to all inverting opamp stages - the noise gain is equal to the signal gain (magnitude) plus 1.
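The noise gain rule for inverting stages is simple enough to express directly; a sketch with illustrative resistor values (not taken from any specific figure in the article):

```python
def inverting_noise_gain(r_feedback, r_input):
    """Noise gain of an inverting opamp stage: |signal gain| + 1 = 1 + Rf/Rin.
    Assumes the source impedance is small compared to r_input."""
    return 1.0 + r_feedback / r_input

# A unity-gain inverting buffer (Rf = Rin) amplifies its own input noise by 2,
# while a gain-of-10 inverting stage has a noise gain of 11.
print(inverting_noise_gain(100e3, 100e3))   # unity-gain buffer
print(inverting_noise_gain(100e3, 10e3))    # gain of -10
```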
It's common (or it used to be common) to include a resistor (shown in Figure 1 as 'Optional') in series with the +ve opamp input to ground (R3). The value depends on whether the input is AC or DC coupled. If there's a cap in series with the input, R3 will have the same value as the feedback resistance (R2). With no cap (DC coupled), R3 will be equal to half the value of R1 and R2 - 50k as shown. If R3 is replaced by a short to ground, the DC offset at the output of the inverting buffer will be around 13mV, vs. well under 1mV when the resistor is used. Of course, this depends on the opamp used.
So, while the extra resistor removes much of the input stage DC offset, the resistor must be bypassed with a capacitor to minimise noise. The bypass cap needs to be large enough to bypass noise down to the lowest frequency of interest. If you need response to 20Hz, the cap's reactance needs to be equal to the resistance at one tenth of that frequency - 2Hz. For example, a 50k resistor needs a bypass cap of 1.59µF (use at least 2µF as shown; 10µF is fine). It's unrealistic to expect the cap to bypass 1/f ('flicker') noise, so there may be a small uncertainty when measuring DC.
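The cap sizing rule above (reactance equal to the resistance at one tenth of the lowest frequency) is just C = 1 / (2πfR). A sketch, with names of my own choosing:

```python
import math

def bypass_cap(resistance, f_lowest, margin=10.0):
    """Capacitance whose reactance equals `resistance` at f_lowest / margin.

    margin=10 implements the 'one tenth of the lowest frequency' rule of thumb.
    """
    f_target = f_lowest / margin
    return 1.0 / (2.0 * math.pi * f_target * resistance)

c = bypass_cap(50e3, 20.0)                    # 50k resistor, response to 20Hz
print(f"C = {c * 1e6:.2f} uF")
```

This prints 1.59 uF, agreeing with the text; any standard value of 2µF or more does the job.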
With most newer opamp designs the resistor is not necessary, especially if the opamp has offset null terminals. For audio, it's rarely used because it simply adds more parts for no useful purpose. The output should always be capacitively coupled unless response to DC is a requirement.
The simplest and best known voltage follower is the emitter follower, also known as a common collector stage. The collector is at AC ground potential, because it's connected to the supply rail. These used to be very common in all kinds of audio circuits, but they perform very poorly in almost all respects compared to an opamp. Input impedance depends on the load that's connected to the output, so rather than maintain a high defined input impedance, it varies when the load is added, changed or removed. There's a 0.65V DC offset from input to output, and it needs a DC load from the emitter to ground or a supply rail (which rail depends on whether the transistor is NPN or PNP). The load is most commonly a resistor, but that causes the output drive capability to be asymmetrical. While it can source a reasonable current via the transistor, its current sinking capability depends on the resistor value.
All simple follower circuits have a small loss of level, typically providing an output of between 0.99 and 0.999 of the input level, depending on the gain of the transistor(s) used, the topology and the source and load impedances. Unlike opamps, the input and output impedances of emitter followers are interdependent, so changing one also changes the other. Opamps avoid this by using very high internal gain and lots of feedback, so while there is still some interdependence it's usually so small that you will be unable to measure the difference.
As shown above, the two circuits have a 1k emitter load, and the external load impedance should not be less than 10k if a large output voltage swing is required. If lower load impedances are expected, R2 needs to be reduced, but that reduces the input impedance and increases the quiescent current drawn. With ±15V supplies, this single transistor stage draws more current than 3 to 5 opamps (depending on type), but doesn't perform anywhere near as well. The performance of both circuits is roughly similar, and you can even use both together with the outputs joined with caps to create a complementary emitter follower as shown further below.
With the values shown, the input impedance of the transistor (ignoring the 100k bias resistors) is about 500k with no load, falling to ~450k when the 10k load is connected. Input impedance is roughly the value of the load impedance in parallel with the emitter resistor, multiplied by the transistor's gain (hFE of 500 in my simulation), and in parallel with the input bias resistor (R1). Because of the transistor's bias current, there is 1.8V dropped across R1 (18µA base current) and a little over -2.5V at the emitter due to the typical 700mV between base and emitter of a silicon transistor.
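The impedance estimate described above (hFE times the emitter resistor in parallel with the load, then in parallel with the bias resistor) is easy to check numerically. A sketch using the Figure 2 values quoted in the text (hFE = 500, RE = 1k, RL = 10k, R1 = 100k):

```python
def parallel(*resistors):
    """Equivalent resistance of resistors in parallel."""
    return 1.0 / sum(1.0 / r for r in resistors)

hfe, r_emitter, r_load, r_bias = 500, 1e3, 10e3, 100e3

z_unloaded = hfe * r_emitter                      # no external load: ~500k
z_loaded = hfe * parallel(r_emitter, r_load)      # with the 10k load: ~455k
z_seen_by_source = parallel(z_loaded, r_bias)     # R1 ends up dominating

print(f"unloaded: {z_unloaded/1e3:.0f}k, loaded: {z_loaded/1e3:.0f}k, "
      f"with R1: {z_seen_by_source/1e3:.0f}k")
```

The 500k and ~450k figures match the text; note how the 100k bias resistor drags the impedance actually seen by the source down to roughly 82k, which is why bootstrapping (described below) is attractive.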
Most of the circuits shown here use a dual supply, but when only one supply is available the emitter follower must be biased so the emitter is at roughly half the supply voltage. The most common arrangement is shown next. As you can see from the voltages shown, the emitter of Q1 sits at a little over 6V rather than the optimum 7.5V. R1 needs to be reduced to 69k to obtain the optimum bias point, but as long as the signal level never exceeds a couple of volts (RMS) the use of two equal resistors is quite alright.
It's important to understand that the use of two resistors as shown in (a) reduces the input impedance. It's now R1 in parallel with R2, in parallel with the transistor's input impedance. Equal value resistors were used here to demonstrate that the emitter voltage will be less than desired. Reduced input impedance can be avoided by either using higher value resistors or the second biasing scheme, using a bypassed voltage divider as the bias supply. The second version has the advantage that any power supply noise is not passed on to the base circuit. The voltage divider (R1 and R2) is deliberately unbalanced to obtain close to half supply at the emitter.
There are many variations on biasing schemes, including direct coupling the base of the emitter follower to the output of the preceding stage. There's also a technique known as bootstrapping, where the emitter signal is fed back to the centre tap of the voltage divider as shown above - C2 connects to the emitter rather than ground. This trick boosts impedance with positive feedback. By ensuring that the AC voltage at each end of R3 is almost the same, its apparent impedance for AC is increased by at least an order of magnitude, but DC conditions aren't affected.
The input impedance of the bootstrapped emitter follower is around 340k, so it should be apparent that R3 has little influence for the AC input. The DC resistance is still 100k of course, so the voltage drop caused by the transistor's base current isn't changed. Bootstrapping has been used in this way for a very long time, and it can even be applied to valve circuits.
Bootstrapping has a couple of downsides though, firstly that there's a 2dB gain boost at 1.5Hz with the values shown. In effect, a rather odd 8 to 9dB/ octave high pass filter is created by the combination of C1, C2 and the associated resistors, and it has a higher than expected Q ('quality factor') that creates a peak before it starts to roll off. The effect (and Q) of this filter depends on the source impedance, so it can be unpredictable in 'real world' applications. With a high source impedance, the amount of boost is reduced, and when the source impedance is 100k (for Figure 4) there's no boost at all. The low frequency response extends to just below 1Hz (-3dB) with a 100kΩ source. This point is rarely raised in most articles you might come across, but it can be a trap if you don't know about the potential for 'interesting' low frequency effects.
Secondly, because the bootstrap circuit uses positive feedback, it will cause transient instability if there is a DC change at the input. Another issue that arises is that the circuit may have a significant settling time, so after power is applied you may have to wait for several seconds (or more, depending on component values) before the DC conditions are stable. This is also due to the use of positive feedback, which causes a damped low frequency oscillation at a frequency determined by the resistor and capacitor values used.
It's essential to build the circuit and test it with your application before you decide that using a bootstrapped input circuit is the best option. Because of the positive feedback, the impedance depends on the signal frequency, and it is also affected by the Miller capacitance of the transistor as well as any stray capacitance, limiting input impedance at higher frequencies.
Although shown with a single supply, bootstrapping can be applied to any variant shown in this article (including JFETs and opamps). For the circuits using dual supplies, only one resistor to ground and one to the base is needed (two resistors instead of three), with the bootstrap cap connected to their junction. This lets you reduce the value of the input resistor to reduce DC offset if you wish to do so. The resistors don't have to be the same value, but to see exactly what happens with any given circuit requires that you build and test it, or at least run a simulation (which will usually be very close to reality).
The version shown above is simply too interesting to omit. It has many limitations, but despite that it's used as the input stage for some single supply ICs where the input is allowed to include ground (e.g. LM358, LM386 and a few others). While it might not seem possible, the circuit acts as a normal emitter follower with an AC input that's referred to ground. The input can swing to a maximum of about ±600mV, despite the fact that there is no separate negative collector voltage supply. The circuit relies on the base-emitter junction voltage to provide just enough voltage differential between the collector and base to allow the transistor to function normally.
The circuit is a special case, but can be very useful. It can be DC coupled as shown, or capacitor coupled as with the other circuits described here. When a coupling cap is used between the source and input (shown dotted in the drawing), the base voltage will rise depending on the transistor gain and emitter resistance. With 100k (and using a transistor with an hFE of around 420 for a BC559C), the base voltage will rise to about 2.7V, allowing a considerably higher input (and output) voltage of about ±2.6V (1.9V RMS). As with most other circuits shown here, you will need to experiment. In general, I wouldn't be happy with an input signal of more than around 700mV RMS even with capacitor coupling, because transistor parameter variations could easily cause problems otherwise.
By using a pair of emitter followers of opposite polarities as shown here, the total DC offset is minimised, being reduced to between 100mV and 150mV, rather than over 2V as with the single stages. Performance is similar to the Darlington and Sziklai pairs shown below. The additional resistor (R2, 10k) helps force Q1 to draw enough current to ensure reasonably high gain, and without it the circuit won't work at all because Q2 has no path for base current. Ignoring base offset voltage, the emitter of Q1 will be at -600mV, and this is offset by the base-emitter voltage of Q2 (700mV). They will never cancel perfectly because the transistors operate at different currents, and the DC offset at the base of Q1 isn't compensated, so some of it also appears at the output.
This circuit has the advantage that input bias current is minimised, and if the transistors were identical they would balance out perfectly. Many 'symmetrical' amplifiers use a similar input stage, but true symmetry is not achieved because NPN and PNP transistors will never be perfectly matched. Input impedance is about 200k for the circuit shown (not including R1 and with no load), and input bias current is a little over 3µA, which is significantly better than a single transistor as shown in Figure 2. It has the advantage that output drive is symmetrical, so it can source and sink current (almost) equally well. However (and despite appearances) the circuit cannot drive very low impedances to the full supply rail. With the values shown it can provide up to ±30mA into the load with around 0.5% distortion. The input impedance falls with the load impedance, and is reduced to about 150k with a 1k load.
There are many opamps that can do a great deal better, and they don't require two output capacitors - you can often get away with not using an output cap at all if the offset is low enough.
+ + +To increase the input impedance, a Darlington pair can be used. This provides higher overall gain and better linearity, but increases the DC offset at the output. A far better circuit uses an NPN and a PNP transistor in a complementary feedback (aka Sziklai) pair. This arrangement has an internal gain that's similar to the Darlington, so in both cases input impedance is increased, and both output impedance and distortion are reduced. With the complementary pair, the DC offset at the output is also reduced. The circuits are shown below.
+ +However, there's a small problem with these arrangements, in that the first transistor (Q1) is run at very low current, and this limits the effectiveness of the pair of transistors in both circuits. When operated at low current, the gain of a transistor falls, and this is sometimes shown in the datasheet, although it's often not shown for exceedingly low current (a few 10s of microamps). This problem is partly circumvented by adding a resistor, shown as R2 (10k), and it forces Q1 to operate at a slightly higher current than would otherwise be the case. Input impedance is typically increased to over 4 megohms, although it falls at higher audio frequencies. The Darlington has a higher input impedance, but it falls faster with increasing frequency, and is roughly the same as the Sziklai pair at 20kHz. The HF performance is affected by internal (mainly collector to base) capacitance.
+ +Of the two, my preference is for the Sziklai pair. It has a lower DC offset at the output (which improves symmetry slightly), but it has a certain 'elegance' that is lacking in the Darlington pair. The small amount of local feedback that is inherent with this topology also helps to reduce distortion (albeit only by a tiny amount). Because of the very high transistor gain, the voltage gain is very close to unity - expect at least 0.999 with the values shown.
+ + +The circuits shown so far have a resistor to provide the current for the emitter follower. This is cheap, but doesn't provide the best linearity because the transistor's current is constantly changing with the applied signal. For example, with a 1k resistor as shown, the transistor's current changes by 1mA for each volt change of the emitter voltage. Transistors give their best linearity when the current through them is constant, but this isn't the case if a resistor is used to set the emitter current.
+ +A current source will improve the stability of the operating current, but it's not a panacea. The load also demands current, and this causes the emitter current to vary (thus causing re to vary accordingly and introducing some distortion), but the change is usually - and ideally - much less than when a resistor is used. Unfortunately, adding a current source also increases overall complexity and increases the component count. This makes the use of discrete circuits less appealing.
+ +The current source can be used with a simple (one transistor) emitter follower, or with the Darlington and Sziklai pair versions. The current source circuit can be referenced to the positive or negative supply (the latter is shown), allowing it to be used with PNP or NPN emitter followers respectively. It also works with JFETs and MOSFETs, but with those it only improves linearity - input impedance is not affected. The amount of current through the source depends on the load impedance, and for small signal circuits needs to be no less than around 5mA - the circuit shown runs at 15mA to match the others described in this article. 15mA will let you drive up to 20V peak-to-peak into a 1k load using ±15V supplies. However, you will note that the transistor current may still fall to zero if too much output current is expected from the current source. If this is the case, the source current needs to be increased.
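The headroom check described above (15mA source, ±15V rails, 20V peak-to-peak into 1k) can be written out as simple arithmetic:

```python
# Headroom check for the current-source load, using the figures in the text.
v_peak = 10.0                        # 20V peak-to-peak output
r_load = 1000.0
i_load_peak = v_peak / r_load        # peak current the load demands
i_source = 0.015                     # 15mA current source

# The follower transistor's current swings between these extremes:
i_min = i_source - i_load_peak       # must stay above zero or the output clips
i_max = i_source + i_load_peak
print(i_min, i_max)                  # roughly 0.005 and 0.025 A - no clipping
```

If i_min reached zero the current source would be the only thing pulling the output, and the negative half-cycle would clip, which is why the source current must be increased for heavier loads.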
+ +While there is definitely a distortion reduction compared to a resistor load, it's likely in most cases that the extra complexity isn't warranted. This is especially true if the peak signal level doesn't exceed around 1/2 the supply voltage (±7.5V for Figure 6). Under these conditions, distortion may be reduced by as much as an order of magnitude, reducing from 0.05% to 0.005%, but is still a great deal higher than a less costly opamp.
+ +Many low power Class-A amplifiers use this technique, and the source current has to equal the maximum peak loudspeaker current, typically up to 2.5A or so for a 20W/ 8Ω amplifier. When a high output current is expected, it's usually not possible to include a large current safety margin. However, for small signal conditions it's easy to ensure that the emitter follower transistor current changes by no more than around ±20% or so. The smaller the current variation, the greater the linearity. This is provided the current is well within the device ratings - running a higher current than necessary can easily make performance worse.
+ + +One popular circuit is the so-called 'diamond buffer', which uses four transistors. The input impedance of the version shown is 500k, and DC offset at the output is around 145mV in my simulation. Input bias current is less than 2µA. It can source and sink current equally well, and the output impedance is about 10Ω. Because of the push-pull arrangement, it operates in Class-AB, and quiescent current is less than 4mA - this is significantly better than a simple emitter follower, especially when you consider its output drive capability (it can drive a 100Ω load to greater than ±10V peak).
+ +The circuit seems to be generally attributed to the (now obsolete) National Semiconductor LH0002 buffer, which has a circuit that's essentially identical to that shown below, but with some resistor value changes. The datasheet claims distortion of 0.1%, which is acceptable but certainly not in the league of even 'ordinary' opamps. It was designed to be used with an opamp, with the buffer included in the opamp's feedback loop.
+ +In terms of PCB real estate it's not good - 4 transistors, 4 or 5 resistors (depending on the signal source) and a large output capacitor to allow it to drive low impedance loads. However, there are very few opamps that can come close to it for output current, so it may be worth considering if you have a particularly low load impedance. Other than its high output current, it doesn't come close to an opamp in terms of input impedance, output impedance or distortion, so unless you really need to be able to drive 100Ω loads (for example) it's probably best avoided.
+ +Be aware that if the circuit shown is used inside the feedback loop of an opamp it may be unstable, due to high frequency phase shift within the circuit. This is less likely with an integrated circuit because it can be optimised and all signal paths are very short. For those who think that opamps somehow 'ruin' the sound (hint: they don't) the diamond buffer may be attractive, but these days it should be viewed as a curiosity.
+ + +The greatest advantage of using a JFET (or a MOSFET) is that they have extremely high input impedance, limited only by their input capacitance and small amount of leakage. In almost all other respects they are inferior to bipolar transistors, but if you have a need for an input impedance over 10 megohms then you probably need a FET. Where a bipolar emitter follower circuit has a 'gain' of around 0.99, a FET will be a lot worse. A simulation using a BF245C JFET and a 2N7000 MOSFET shows that the JFET gain is 0.903 and the MOSFET is 0.986 - significantly better. Note that the circuits shown below are not optimised, and I used the same value of source resistor as was used for the emitter in the BJT versions above. Ideally, JFETs will be operated at a lower current and they can't drive low impedances as well as bipolar transistors.
+ +In the circuits, the JFET is shown both with a single supply and a dual supply. When a single supply is used, an additional resistor (R2) is needed to bias the FET properly, and it will normally be bypassed (C2) for AC. This connection provides a bootstrap for the input resistor (R1) as a matter of course, and including C2 improves input impedance. If C2 is omitted, the input impedance is around 2.6MΩ, rising to 4.2MΩ when C2 is included. C2 also has a small effect on the gain and output impedance. When it's installed, the FET sees a slightly lower impedance at its source, so gain and output impedance are both reduced slightly.
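The bootstrap's impedance multiplication follows the usual rule that the resistor 'sees' only the difference between the signals at its two ends. A minimal sketch, with an assumed illustrative follower gain (not the simulated value from the circuit above):

```python
# Bootstrap arithmetic: an input resistor appears larger because nearly the
# same signal is present at both of its ends. follower_gain = 0.9 is an
# assumption for illustration only.
def bootstrapped_r(r, follower_gain):
    return r / (1.0 - follower_gain)

print(bootstrapped_r(1e6, 0.9) / 1e6)   # a 1M resistor appears as roughly 10M
```

The closer the gain is to unity, the larger the apparent resistance, which is why bypassing R2 with C2 (raising the effective gain at the bottom of R1) lifts the input impedance from 2.6MΩ to 4.2MΩ.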
+ +Distortion for the dual supply JFET measures 0.1% and the MOSFET gives 0.028%, again, a better result. However, the input capacitance of the MOSFET is much higher than the JFET, and the input impedance of both falls as frequency is increased. Measured at 1kHz, the JFET has an impedance of 245MΩ while the MOSFET is only 32MΩ. Both values are significantly higher at lower frequencies, and fall further as frequency rises. At 10kHz, the JFET is down to 24MΩ and the MOSFET measures 3.3MΩ.
+ +While these figures are fairly respectable, a TL072 (a very lowly JFET input opamp by modern standards) shows an input impedance of 1TΩ (in the simulator and as shown in the datasheet) from DC to well beyond normal audio frequencies, with no loss of impedance until over 200kHz, being down to 900GΩ at 1MHz. No, I don't believe that either, but it's measured with the same simulator as the two types of FET. While somewhat optimistic, it's probably not as far off the mark as you might imagine. Having built a preamp with 1GΩ input impedance using a TL072 I can attest to its performance across the audio band and to at least 50kHz (I didn't measure it beyond that).
+ +The output impedance of a JFET source follower is really nothing to write home about. For the single supply version, output impedance measured 197Ω, and for the dual supply version I measured 133Ω - both are less than awe-inspiring. The MOSFET again does a great deal better at only 25Ω, and this is comparable to a bipolar transistor driven from a medium impedance (around 10kΩ) source.
+ +The performance of JFET followers can be significantly improved by using a current source in place of the source resistor, and adding a BJT emitter follower. This boosts the gain to something closer to unity, and provides a much lower output impedance. These options aren't shown in their entirety, but the version in Figure 12 performs significantly better than its dual supply equivalent in Figure 11 above, even without the added bootstrap capacitor.
+ +The circuit shown above combines the best of both worlds - the high input impedance of a JFET plus the low output impedance of a BJT. The circuit can be considered a hybrid Sziklai Pair, but with the JFET as the 'controlling' device. The (simulated) output impedance is less than 2Ω, a seemingly impossible feat for such a simple circuit. The same JFET without the buffer not only has much higher output impedance (around 100Ω) but lower gain. A follower is expected to have a gain of unity, but the unbuffered JFET has a gain of 0.83, vs. 0.98 with the BJT. There can be no doubt that these improvements are worthwhile, but performance is still not as good as an opamp.
+ +Regular readers of the ESP site will probably be aware that I rarely specify JFETs for anything where a viable alternative exists. There are several reasons for this, with the main one being that their operating characteristics are extremely variable. Two JFETs, even from the same manufacturing batch, will rarely even be similar, and parametric selection is tiresome. It doesn't help that many of the FETs that used to be common are now very difficult to get, with some of the better devices (especially low noise types) being virtually unobtainable. This makes them rather unappealing for anything more than mundane tasks that can often be accomplished with a bipolar transistor for far less cost.
+ +For a great deal more information on using JFETs, I suggest you read Designing With JFETs. The article describes a simple way to characterise JFETs for maximum drain current (IDSS) and 'pinch-off' voltage (VGS(off)), the two parameters that are the most variable and also the most critical to get a working design.
+ +There is no denying that a JFET provides a very high input impedance, and for this alone they are sometimes the only sensible choice if a FET input opamp can't be used for some reason. If you happen to need good high frequency response, JFET source followers can benefit from bootstrapping, but unlike the example shown for a BJT emitter follower, the drain is bootstrapped to reduce the rolloff caused by the JFET's Miller capacitance between the drain and gate. If the same voltage exists on the gate and drain, it follows that capacitance between them is effectively cancelled until there's a significant phase shift (greater than 45°). This occurs at 197kHz in my simulation.
+ +With a 10MΩ source impedance, response without C2 is -3dB at only 22kHz. When C2 is added, this is extended to 215kHz - almost an order of magnitude improvement. The BJT follower is essential to ensure a low output impedance and to minimise loading on the JFET, and without it the bootstrapping won't work. The same technique also works with a MOSFET, and the improvement can be equally significant. This is a somewhat unusual application of the bootstrap principle, and while it may seem to be similar to the system of bootstrapping used in many power amplifiers, it's actually quite different, and serves a completely different purpose.
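Working backwards from the quoted -3dB points gives an idea of the effective input capacitance with and without the bootstrap (a sketch only, using the standard single-pole RC rolloff formula):

```python
import math

# Back-calculate effective input capacitance from the -3dB points quoted in
# the text: a 10Mohm source, 22kHz without C2 and 215kHz with it.
def c_from_rolloff(r_source, f3db):
    return 1.0 / (2.0 * math.pi * r_source * f3db)

c_plain = c_from_rolloff(10e6, 22e3)
c_boot = c_from_rolloff(10e6, 215e3)
print(c_plain * 1e12, c_boot * 1e12)   # roughly 0.72pF vs 0.074pF effective
```

Even fractions of a picofarad matter with a 10MΩ source, which is why the drain bootstrap is worthwhile in high-impedance, wide-bandwidth applications.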
+ +The input can also be bootstrapped in the same way as shown with emitter followers (and as shown above in basic form), and you can use the two bootstrapping techniques on the same circuit if you need to. A bootstrapped input resistor has the advantage that the resistance (R1) can be a comparatively low value (10MΩ perhaps) while still providing an extremely high impedance to the source. Naturally, the same caveats apply regarding settling time and the potential for unwanted low frequency boost.
+ +Note that any form of 'true' bootstrap circuit can only ever work for AC signals. Because a capacitor is used, bootstrapped DC circuits are not possible. Similar techniques can be used for DC, but they will be active (i.e. using transistors, opamps, etc.) and are significantly more complex.
+ + +One circuit that works surprisingly well is shown below, although it's now very dated and doesn't come close to the performance of an opamp. The basic topology is that of a discrete opamp, but simplified to the bare minimum. In the original Wireless World article there was also an alternate version shown, but it doesn't work well so has not been included in my analysis. I've seen the alternate version elsewhere as well, but it still doesn't work as well as the one shown here without significant additional complexity.
+ +The circuit operates using Q1 and Q2 as a standard long-tailed pair, but 100% negative feedback is applied to the inverting input directly from the collector of Q3. Output impedance is very low at 0.85Ω, and the output voltage swing is limited to +15V and -13.5V, so it can swing to the positive rail, but not the negative. With a ±10V input (7.07V RMS) distortion is 0.06% with no load, and it doesn't change with a 10k load. The load impedance does change the maximum negative swing though, because it is limited by the current that can be drawn through R4.
+ +While interesting (and it performs slightly better than the 'diamond buffer' shown above in some respects), there's really no point because even 'ordinary' opamps will outperform it. Although it's shown with a 100pF capacitor in parallel with R2, this may not be needed. According to the simulator, response extends to over 60MHz, although it's doubtful that would be achieved in practice. However, it will beat almost all opamps in terms of frequency response, so it does still have a potential place in the world.
+ +For the sake of completeness, a MOSFET version is shown above. The 2N7000 devices are not particularly quiet, so noise will be an issue at low signal levels. The offset voltage can be trimmed by changing the value of R2. As shown (and simulated), the offset is 10mV, and it can be reduced to almost zero by making R2 around 250Ω. In reality, the MOSFETs will not be matched, so the actual value for zero offset will change - perhaps considerably. The MOSFETs must be in close thermal contact, but even that doesn't guarantee a low and stable offset voltage. Without the coupling caps, response extends from DC to well over 1MHz, and the distortion (simulated) is less than 0.03%.
+ +JFETs can also be used, but the component values need to be changed to obtain the proper operating conditions. With all circuits using a long-tailed pair, the current through each device should be the same. For example, in the Figure 13A circuit, the tail current (through R3) is about 6.2mA, so each MOSFET (or JFET) should draw 3.1mA. Balance is achieved by varying the value of R2.
+ + +In the early days of electronics, the valve (aka vacuum tube) was the only option. Unlike transistors and FETs, valves come in one basic format - roughly the equivalent of an N-Channel JFET. There was/is no complementary version, so options were limited. Valves were then (and are now) expensive, and they need a fairly high current to bring the cathode or filament up to operating temperature. Because none of this power is usable by the circuit itself, it's wasted - another expense.
+ +By their nature, valves have a high output impedance, determined almost completely by the plate (anode) and load resistors. The internal plate resistance (rP) also plays a part, but it's not usually considered 'significant' for small signal applications. To get around this, the cathode follower circuit was common, and it was essential in many circuits because without it, the output impedance was too high to be useful.
+ +Cathode followers were poorly understood for quite some time, and many articles were written trying to explain their operation to the engineers of the day [ 4 ]. The referenced article took up four full pages in the magazine! Put simply, the voltage between the grid and cathode will try to remain constant, and if the grid voltage increases, the valve will draw more current from the supply, raising the voltage on the cathode by a similar (but slightly smaller) amount. The converse applies when the grid voltage is reduced.
+ +Valves have very limited gain compared to bipolar transistors, and it was generally accepted that the gain from a cathode follower was around 0.9 (compared to 0.98 or more with a transistor or MOSFET). It's also notable that trying to use tetrodes or pentodes makes no difference, because both plate and screen are at the B+ voltage and the valve will behave as a triode (the screen grid is tied to the plate, which is a triode connection). However, see below ...
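The 'around 0.9' figure follows from the standard cathode follower gain formula. The valve parameters here are assumed 12AU7-like values for illustration, not taken from the circuits shown:

```python
# Cathode follower gain: A = mu * Rk / (rp + (mu + 1) * Rk)
# mu = 17 and rp = 7.7k are assumed, roughly 12AU7-like; Rk = 10k is illustrative.
def cathode_follower_gain(mu, rp, rk):
    return (mu * rk) / (rp + (mu + 1.0) * rk)

gain = cathode_follower_gain(17.0, 7700.0, 10000.0)
print(round(gain, 2))   # ~0.91 - close to the 'around 0.9' quoted above
```

Because µ is small compared to a transistor's current gain, the gain can never get as close to unity as a BJT emitter follower manages.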
+ +A) shows a cathode follower with an input capacitor (C1a) and biasing resistors (R1a and R2a), which operates as a stand-alone circuit with an input from any suitable source. In many cases, cathode followers are simply directly connected to the anode circuit of the preceding stage as shown in B). That removes the need for C1a, R1a and R2a; it has no downsides, and three components are saved. One always has to be careful with valve circuits though, as it's easy to exceed the maximum allowable cathode to heater voltage because the heaters are nearly always ground referenced. If the voltage is exceeded the valve may be damaged, but even if it survives it may not function properly.
+ +The circuit shown as A' is a special case of using a pentode. The circuit is described in The Radiotron Designers' Handbook, page 324, and explains that if pentode operation is required, the voltage between the cathode and screen must be constant. Suggested methods were an inductor, or the arrangement shown, where the screen is effectively bootstrapped to the cathode by C2a'. This forces the voltage across R4a' to remain constant, ensuring pentode operation. The values shown as 'SOT' are 'select on test'. (My thanks to the reader who alerted me to this interesting design.)
+ +There are several differences between a valve cathode follower and its modern day JFET or MOSFET equivalent. With a valve, operating voltage, impedance and distortion are higher, and both gain and current are lower. Only a single supply is normally used. There was (almost) never a need to have a dual supply with valve circuits, although it could be done if you wanted to - and it was done in early valve based opamps. The design current for the cathode follower shown is about 2.6mA, so the valve might be 1/2 of a 12AT7 or 12AU7 for example. If you wanted to use a 12AX7, the current has to be reduced because they are designed for low current operation (typically no more than about 1.5mA).
+ +Note that R2a can be bypassed, and like the single supply JFET circuit shown above, the input resistor (R1a) is effectively bootstrapped. Because of the low gain of valves, the impedance increase is not as great as you might hope. Unlike JFETs, there is a definite limit to the upper value of the grid resistor, determined largely by the materials used and the geometry of the valve's internal structure. If the resistor value is too high, the valve will attempt to bias itself as the grid collects stray electrons. This is called 'grid leak' or 'contact' biasing, and generally uses a resistor from 2.2MΩ to 10MΩ or thereabouts. The tiny current flow (typically less than 1µA) causes a voltage to be developed across the grid resistor (negative at the grid) which biases the valve. In general, grid leak bias is rather unpredictable and is usually a bad idea, and it should (IMO) be avoided.
+ +If you want to know more about valve circuits in general, see the ESP Valves Index page.
+ + +Rather than using a cathode follower to buffer the output of a valve stage, a better option is to use a high voltage MOSFET. They are much cheaper than valves, don't need a heater supply, and they have higher performance. The output impedance will also be much lower and output current higher because you can run a MOSFET with a higher quiescent current than most valves can safely handle. A small heatsink will usually be needed if dissipation is more than 0.5W or so (or the MOSFET can be thermally coupled to a cool part of the chassis using a silicone thermal pad). Suitable devices include the IRF830 shown, IRF820, IRF840, STF3NK80Z, etc.
+ +It is extremely important that any MOSFET follower used after a valve gain stage has good protection for the following circuitry. When the B+ (high voltage) is connected, the output will rise to the full B+ voltage until the valve's cathode warms up, and this can damage whatever is connected to the output. A capacitor is not sufficient - you need to include a resistance and a zener diode clamping circuit to ensure that the output voltage can't exceed ±10V or so (assuming that the following circuit is transistor or opamp based).
+ +Cathode followers are rather ordinary in terms of drive capability, output impedance and linearity. When used at modest signal levels (up to 5V RMS or so), a MOSFET will far exceed the performance you can reasonably expect from a valve. You may expect the gate capacitance (CGS) to cause havoc, but it's effectively bootstrapped by the source itself. For the circuit shown, the -3dB frequency should be at least 100kHz (assuming that the valve has little or no high frequency rolloff).
+ +The value of the valve's cathode resistor and bypass capacitor (R2 and C1) haven't been shown because they depend on the valve used, and voltages shown are typical. With the MOSFET resistor values given, the AC output of the MOSFET follower is about 0.98 of the input, and is significantly better than a cathode follower in this respect. Without R4, there is virtually no signal loss at all, but there is also no current limiting. The current limit is around 25mA with a 330Ω resistor. This allows more than sufficient drive to following stages, but limits the damage that can be created with high signal levels.
+ +Output impedance is 330Ω, and is set almost entirely by the value of R4. If R6 is 22k as shown, it should be rated for at least 1W, and the MOSFET needs a small heatsink because it will dissipate a little over 700mW with a 250V supply. Increase the value of R6 if you don't need the drive capacity provided by the 22k source resistor. Output impedance is not affected if you change the value of R6, but the ability to provide a high level signal into low impedances is reduced. The circuit above can provide well over 5V RMS into a 2.2k load impedance.
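The quoted dissipation figure is easy to verify. Assuming the MOSFET's source sits near half the supply (an illustrative operating point, not taken from the schematic), the arithmetic gives just over 700mW:

```python
# Quiescent dissipation check for the MOSFET follower with a 250V supply and
# a 22k source resistor (R6). The 125V (roughly mid-rail) source voltage is
# an assumed operating point for illustration.
v_supply = 250.0
v_source = 125.0
i_q = v_source / 22e3                    # standing current through R6
p_fet = (v_supply - v_source) * i_q      # dissipated in the MOSFET
print(round(p_fet, 3))                   # ~0.71 W
```

R6 itself dissipates a similar amount, which is why a 1W (or better) resistor is specified.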
+ +Linearity can potentially be improved further by including a current source load for the MOSFET, but that shouldn't be necessary for most applications. The added complication is unlikely to provide any audible benefit, and if not done well may do more harm than good. See Project 167 for more information about protecting the following stages, and there's also a muting circuit that can be added.
+ +Most regular readers will know that I am not a fan of using 'vertical' MOSFETs (HEXFETs or other switching types) for linear circuits. This is an exception, because they are well suited for use as followers operating at high voltages. They are almost too perfect in this role, but at least you will know that any distortion comes predominantly from the preceding valve stage. Distortion as simulated is less than 0.01% with 7V RMS output and a 22k load at the output. A cathode follower will be hard pressed to even come close to that, regardless of the valve used.
+ +As a side-note, you can eliminate many of the protective parts if the MOSFET follower is used to drive a tone stack (in a guitar amp) or is used internally with other valves. For example, MOSFET followers are ideal to drive the grids of output valves, providing far greater bias stability. They also have no problems driving the grids positive (Class-AB2), which can have some decidedly adverse effects when a valve drive circuit is used. The gate zener diode and gate resistor are mandatory, but the limiting resistor (R4) and the output clamping zeners are not required. The protection is intended to stop the valve stage(s) from destroying transistorised equipment (including opamps).
+ + +Taking a measurement of output impedance is often very difficult. This is especially true when the output impedance is extremely low, because your measurement will include losses in test leads and relies on the accurate measurement of small voltage changes. First, measure the output voltage with no load, then apply a load of known resistance and measure the voltage again. With most circuits, the voltage used must be small or the circuit will distort, which of course ruins the measurement.
+ +For most of the circuits described here, you can use a voltage of 1V RMS, and the load resistance should be such that the output voltage falls by a measurable amount (around 100mV is usually alright). Note that this will not work with an opamp, because the output impedance is exceptionally low (usually well below 1Ω) but current capability is limited.
+ +It is very important that you understand that you can have a low output impedance, but not the ability to drive an external load to full voltage. These are two different parameters, and one does not imply the other. A circuit can be designed to have high output impedance but supply a high current, or to have a low output impedance but only supply a small current (such as an opamp). Likewise, a circuit can also be designed to have low output impedance and drive a high current, and an audio power amplifier is a perfect example of this.
+ +While the procedure shown below assumes a voltage drop of 100mV and an open circuit output voltage of 1V, you need to substitute the actual values you use. It will not always be possible to drive 1V into a load that is low enough to obtain a measurable voltage difference, depending on the circuit topology and its output current capability. You can also use a fixed resistor instead of a pot, and simply adjust the values in the formulae to suit.
+ +Measure the output voltage with no load on the output. Let's assume 1V RMS. Add a variable resistor (a 1k pot for example) as an output load, and adjust it until the output voltage falls to 900mV RMS. If the load resistance is (say) 400Ω, you know that 900mV is dropped across the load resistor, and therefore 100mV is 'lost' within the circuit itself due to its output impedance. Make sure that your measurement does not include any DC component of voltage or current, and that the output remains undistorted. If the output clips (distorts) when loaded, the measurement is invalid!
+ + Iout = 900mV / 400 Ω = 2.25mA
+ Rout = 100mV / 2.25mA = 44.4 Ω +
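The same calculation as a small helper, using the worked example's values (1V open-circuit, 900mV into 400Ω):

```python
# Output impedance from a no-load / loaded voltage pair, as described above.
def output_impedance(v_open, v_loaded, r_load):
    i_out = v_loaded / r_load             # current actually delivered to the load
    return (v_open - v_loaded) / i_out    # impedance that drops the 'missing' voltage

print(round(output_impedance(1.0, 0.9, 400.0), 1))   # ~44.4 ohm
```

The same function works for any measured pair of voltages, provided the output remains undistorted and any DC component is excluded.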
+ +This works well with most output impedances, provided they are not too low. In some cases (such as with opamps) it will be almost impossible to measure a reasonable voltage change, even with an unrealistically low value load resistance. The technique therefore can't be relied upon in all cases because it's not possible to measure the voltages accurately enough. With an opamp, output impedance/ resistance is reduced to near zero because of the large amount of feedback.
+ +In a feedback circuit, the output impedance is roughly equal to the internal output resistance divided by the feedback ratio ... provided there is sufficient feedback. For example, an amplifier may have an internal impedance of 1k and a gain of 1,000 (60dB), and if used with 100% feedback (a follower) its output impedance will be 1Ω. However, its ability to supply current to the load is limited by the internal resistance/ impedance, and not the value obtained after feedback is applied. The hypothetical amp just described cannot provide more than ±15mA when powered from ±15V supplies, even though its output impedance is only one ohm. Note that this is a simplification, and while it's fairly accurate for some topologies it is at best a crude approximation.
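The arithmetic for that hypothetical amplifier can be written out. As the text says, the divide-by-loop-gain rule is an approximation, but it shows why a one-ohm output impedance does not imply a high current capability:

```python
# Rule of thumb from the text: closed-loop output impedance is roughly the
# open-loop output resistance divided by the loop gain, but current capability
# is still set by the raw internal resistance.
r_internal = 1000.0            # 1k internal output resistance
loop_gain = 1000.0             # 60dB open-loop gain, 100% feedback (follower)
z_out = r_internal / loop_gain
i_max = 15.0 / r_internal      # +/-15V rails: the most the internal 1k allows
print(z_out, i_max * 1000.0)   # 1 ohm, and only ~15mA of output current
```

The two figures are independent parameters, exactly as the preceding section argues.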
+ +As already noted, the measurements must exclude any DC component and the signal level must be low enough to ensure that there is no clipping or other distortion. In many cases, the output impedance will be frequency dependent, and this will always be the case with opamps. While output impedance may be less than 1Ω at 1kHz, it will be considerably higher at 100kHz. Typical opamps will have an output impedance of up to 10Ω (sometimes more) at 100kHz, because there is less available feedback due to the opamp's internal frequency compensation.
+ +This is not usually a major issue though, and it's certainly not a valid reason not to use an opamp unless you are working with high frequency circuits. In such cases, it's usually necessary to use discrete circuits because opamps are mostly not intended for use with frequencies much above 50-100kHz. There are exceptions of course, but they may be rather costly and a discrete solution can sometimes work out better all round.
+ + +I don't propose to cover high current followers in any great detail, because they are already explained in various other articles and projects on the ESP website. High current versions are typically used in the output stages of power amplifiers, and can be simple complementary Darlington pairs, Sziklai pairs or in some cases a triple (three devices in cascade), and using various mixtures of NPN and PNP transistors. There are many combinations, and it is hard to provide the detailed analysis that each deserves in a short article.
+ +Instead, I will show some of the common variations, purely for interest's sake. If you want to know more, you will need to perform your own analysis because the choice of transistors determines how well each version will work in any given configuration. The selection of devices depends on the application, frequency range, voltage and current, and given the number of transistor types available, the number of combinations is truly vast.
+ +In the drawings below, resistors between individual transistor base-emitter junctions are not shown. For high-current triples, Q2 could have an emitter-base resistor of around 220Ω, and Q3 might use 22Ω, but these values need to be determined by the application and to suit the devices and intended purpose. Higher resistances can increase the turn-off time, and lower values draw more current. This is part of the design process, and each case will be different.
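As a rough illustration of the standing 'bleed' current those base-emitter resistors add (assuming a typical 0.65V Vbe, which is an approximation that varies with device and temperature):

```python
# Current bled through a base-emitter resistor by the junction voltage.
# 0.65V is an assumed typical Vbe; 220 and 22 ohm are the example values above.
def bleed_ma(vbe=0.65, r=220.0):
    return vbe / r * 1000.0      # milliamps

print(round(bleed_ma(r=220.0), 1), round(bleed_ma(r=22.0), 1))  # ~3.0mA and ~29.5mA
```

This shows the trade-off directly: a lower resistance speeds up turn-off but wastes proportionally more drive current.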
+ +There are six possible connections of devices to create an NPN triple, and the transistor types will be selected as needed to create the versions shown. Their PNP equivalents increase the number of possibilities by another 6 - 12 different configurations in all. I've not included any type numbers because they will vary with the application. Their voltage rating must be greater than the total voltage that may appear across them, and they need to be graded with the lowest current device as Q1, a medium current device for Q2 and a high current type for Q3. As noted above, resistors (not shown) are nearly always essential between the base-emitter junctions of Q2 and Q3, with values usually selected to minimise the transistor turn-off time.
+ +The version shown in (b) is used in the P68 subwoofer amplifier project, but it uses a high power transistor as Q2 and a much lower than normal base-emitter resistor for Q3. The thermal stability is very good indeed, partly because Q3 is turned off under quiescent conditions. This is a 'special' application of a triple connection.
Note that in every case shown, the input transistor determines the effective polarity of each combination. The remaining transistors are then arranged as Darlington or Sziklai pairs as shown in the drawing. Of the six variations, (a) and (b) will have the best thermal stability, because there is only a single base-emitter junction between the base and emitter of the triple. It follows that (f) will be the worst, because there are three base-emitter junctions in series. The remaining three have two junctions in series, and if possible it's better if the final transistor (Q3) doesn't have its base-emitter junction involved, because the temperature of the final stage is nearly always subject to the greatest variation, since its dissipation is the highest.
+ +Because NPN and PNP output transistors will always have small differences, it can be helpful if both sections of a push-pull emitter follower output stage (as used in most amplifiers) can be made to be as similar as possible. The above circuit is not common, and it's not one that I've used in any of the published designs. However, it does have very good linearity, and the only difference between an NPN and PNP stage is the driver transistor. Although the drivers will not be identical, there is likely to be less variation between them than with the output devices. Note that transistor types and resistor values are a suggestion only, and many different types can be used.
The only power amplifier that I know of that uses this arrangement is made by Bryston (albeit with some variations), but it's inevitable that it's also used by others. Over the years I've looked at hundreds of different output stage circuits, and this is probably the most impressive, but of course it does come with a cost penalty due to the requirement for four output devices in a push-pull output stage. While it performs very well, it's very doubtful that there is the slightest audible difference compared to a 'traditional' output stage.
It should be fairly obvious that for small signal audio frequency applications, it's almost impossible to beat an opamp with any discrete option. Some are fairly good if you work at it, but the PCB area needed is a great deal more than that for an integrated circuit. FETs in general are a good option if you need an exceptionally high input impedance, but again, a FET input opamp will generally have far better performance than a discrete circuit. However, the noise performance of FET input opamps is usually not as good as bipolar types, and a low noise JFET may be a better option where noise performance is critical. There are some benefits to using MOSFETs, but their noise performance is usually a limitation, so using them at low levels isn't a good idea.
Power amplifier output stages nearly all use emitter follower output circuits, most commonly Darlington or Sziklai pairs. In some cases you will see 'triples' used, having three transistors in a cascade arrangement. There are many different options, and section 10 gives a brief overview only. These circuits are designed for high voltage and current, and are not generally used for small signal applications.
Given the fact that even a simple Darlington pair may not perform as well as expected due to the very low current in the first transistor, it would be rather pointless to add yet another transistor which would operate at perhaps only a few microamps. Its gain (and its contribution to the circuit) will be well below expectations and it's essentially a waste of a transistor.
Be aware that most of the circuits were simulated using BC549C (NPN) and BC559C (PNP), and while any suitable transistors can be used, performance depends on the hFE of the transistors that you actually use. Lower gain devices will create greater DC offsets because their base current will be higher, and vice versa. Unlike opamps, the performance of the simple discrete circuits described depends heavily on the device(s), and you can easily run into problems if the gain is much lower than expected. If you build any of the circuits shown here, they will all work as described, but you may find that the DC voltages are different due to transistor(s) with higher or lower gain than used for the simulations.
Hopefully the reader has more information than before, and is aware of the limitations or potential benefits of simple circuits. There are few reasons in any audio frequency circuit to use an emitter follower these days because the performance will never come close to an opamp, and there is little or no cost saving. The DC offset is always going to be a problem, and making it 'go away' is far more trouble than it's worth. However, some circuits will benefit by not using an opamp, and the variations shown provide an insight into the likely advantages for specialised applications.
Elliott Sound Products - Frequency Vs. Gain
Opamp (aka op-amp or operational amplifier) specifications can be rather daunting, especially if you need gain at high frequencies. This isn't a requirement for audio, but there are many who believe that audio circuitry should be fast. It can be hard to argue with this, because any circuit that's much faster than the signal it's meant to amplify has less opportunity to 'damage' the signal. Very few people would be happy with an analogue preamp circuit that was incapable of providing its full output voltage at 20kHz, even though it will never be expected to do so in any real circuit.
Things are less clear-cut than they appear, particularly with most opamps. There are two parameters that determine the high frequency performance - unity gain bandwidth and slew-rate. If you look at one but ignore the other, things may go badly for your design. The vast majority of modern opamps are internally compensated, which means that they have a natural rolloff at 6dB/ octave from a predetermined (during the opamp's design) low frequency 'corner'. This is often at only 10Hz, so the full claimed open-loop gain (i.e. the gain before feedback is applied) is only applicable for DC or very low frequencies.
Some opamps are internally compensated for a gain of perhaps three or more, and these will oscillate if you attempt to use them for a unity gain buffer (for example). Compensation pins are then made available, so you can add the required external capacitance to ensure stability. The NE5534 is probably the best-known example, and a 22pF compensation cap is recommended for stability with unity gain amplifiers (inverting or non-inverting buffers for example).
Some early opamps had no internal compensation, partly because fabricating capacitors on a silicon die is difficult. The LM301 is one example, and it was recommended that a 30pF capacitor be used for compensation. The datasheet is far from complete, and the performance data are sparse compared to modern opamps. Slew rate isn't even mentioned in the specifications section, but from the graphs shown it's rather poor, even with a much reduced compensation capacitor. However, this is a very old design, and it's not a device I'd suggest. You'll still see it used every so often, but it's not advised.
One of the things that anyone working with opamps needs to know is how to follow the info in the datasheet. It would be nice if everyone used the same nomenclature and presented data in the same way, but this is not the case. As a result, you need to be able to interpret the data so you can make direct comparisons. This doesn't always work, because some datasheets leave out things that they consider 'un-necessary'. I doubt that anyone knows just how they decide what is 'necessary' and what's not.
There are two different types of operational amplifier - voltage feedback (VFB) and current feedback (CFB). Most of this article concentrates on 'traditional' voltage feedback types, but current feedback is also covered. They may share the same schematic symbol, but they're very different in the way they are used and how they perform. CFB amplifiers are optimised for very high speed, and cannot be considered to be 'general purpose' devices.
There are two ways that opamps are used when gain is required. The non-inverting configuration is also often used with RF set to zero, and RGain omitted. This is a non-inverting buffer, with unity gain (0dB). If RF and RG are equal, a non-inverting amplifier has a gain of two (6dB), or an inverting stage has unity gain (0dB), but with the signal inverted. It's not immediately obvious, but a unity gain inverting amplifier actually has a gain of two - the input is always assumed to be a low impedance, and it must be (very) small compared to RG to achieve unity gain. From the opamp's perspective, this is no different from the non-inverting configuration but with the non-inverting input grounded. It must 'see' a gain of two.
The gain of two (for a unity gain stage) is often known as the 'noise gain', because the circuit has unity gain for the signal (but inverted), but opamp input noise is amplified by two. Note that an inverting stage doesn't require a resistor to ground, as the reference is set by the non-inverting input being grounded. A non-inverting stage must have a ground reference, and that sets the input impedance. The input impedance of an inverting stage is the same as RGain at all frequencies where the opamp operates in the linear region. In some cases it's necessary to add a resistor from the non-inverting input to ground to minimise DC offset. If used, it should be bypassed with a capacitor to minimise noise.
An inverting unity gain stage is therefore twice as noisy as the non-inverting stage. At all gains, the inverting stage operates with 'signal gain + 1' (a gain of 3 means a noise gain of 4). The gain for each stage is easily worked out ...
Non-Inverting, Gain = RF / RG +1
Inverting, Gain = RF / RG
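As a quick sketch only, the two formulae can be expressed as trivial Python helpers (the function names are mine, not from any library):

```python
def noninverting_gain(rf, rg):
    """Closed-loop gain of a non-inverting stage: RF / RG + 1."""
    return rf / rg + 1

def inverting_gain(rf, rg):
    """Closed-loop gain magnitude of an inverting stage: RF / RG (signal inverted)."""
    return rf / rg

# With equal resistors, the non-inverting stage has a gain of two (6dB),
# while the inverting stage has unity gain (but inverted)
print(noninverting_gain(10e3, 10e3))  # 2.0
print(inverting_gain(10e3, 10e3))     # 1.0
```

For example, a 22k feedback resistor with a 2.2k gain resistor gives ×11 non-inverting, or ×10 inverting.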
These simple formulae apply for all opamps, including discrete and current feedback types (the latter are a 'special case' discussed below). Knowing the formulae and the reasons they work is essential to your understanding. I've lost count of the number of people who send me an email to ask how to change the gain of a circuit or opamp stage, but this is something that everyone should know. In reality, the relationship is a little more complex, but there is no need to know any of the more 'advanced' maths - the simple versions shown work well enough until the opamp starts to run out of 'excess' gain at high frequencies, and the feedback ratio (set by the two resistors) cannot be maintained any more. That's what this article covers, but the complete formulae are still not necessary.
The value of the bias resistor (RBias) influences the DC offset at the output of the opamp stage. If an opamp draws a 100nA input current, you'll see 100mV developed across a 1Meg resistor. If a capacitor (CG) isn't included in series with RG, any input DC offset voltage is amplified by the stage gain. For a complete guide to designing opamp circuits, see the Designing With Opamps article. In general, allowing opamp stages to have gain at DC is a bad idea for audio, but may be essential for some test equipment and industrial applications.
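The bias-current arithmetic above is a one-line application of Ohm's law; a minimal sketch (hypothetical helper name):

```python
def dc_offset_v(bias_current_a, r_bias_ohms):
    # DC voltage developed across the bias resistor by the opamp's input bias current
    return bias_current_a * r_bias_ohms

# 100nA flowing in a 1Meg bias resistor develops roughly 100mV, as noted above
offset = dc_offset_v(100e-9, 1e6)
print(f"{offset * 1000:.1f} mV")  # 100.0 mV
```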
The bandwidth of an opamp is almost always referred to as the 'unity gain bandwidth' or 'gain-bandwidth product' (GBP). This is the frequency where the gain has fallen to unity (1, or 0dB) without feedback. For mere mortals this is very difficult to measure, but it's easily simulated. A graph showing gain vs. frequency is usually provided in the datasheet, but sometimes it's only stated in the general specifications. When gain (small or large signal voltage gain) is specified, it's almost always for DC or some (very) low frequency.
At least in theory, the gain-bandwidth product tells you the gain you can achieve at a given bandwidth. For example, if an opamp has a gain-bandwidth product (or open loop unity gain frequency) of 1MHz, then if you want a gain of 10 (20dB), the maximum bandwidth (-3dB) will be 1MHz divided by the gain (10). This gives 100kHz. However, there are other factors that must also be considered, and if you only rely on the GBP without considering the peak amplitude and wave shape (we assume a sinewave) things can be very different from what you expected.
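The first-order estimate described above can be sketched in a couple of lines of Python (names are mine, and this ignores slew rate entirely, which is the point of the caveat):

```python
def closed_loop_bw(gbp_hz, closed_loop_gain):
    # First-order approximation: -3dB bandwidth = gain-bandwidth product / gain
    return gbp_hz / closed_loop_gain

# A 1MHz GBP opamp at a gain of 10 gives roughly 100kHz of bandwidth
print(closed_loop_bw(1e6, 10))  # 100000.0
```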
An example is shown below, and this was simulated using an RC/MC4558 opamp. These are very common in guitar pedals and they are a cheap option with better performance than most people expect. They are not in the same league as an LM4562 (for example), but the simulator claims that it has an open-loop gain of 110dB (316,000) and the datasheet says 106dB (200,000). This is shown below, and the simulation is in reasonable agreement with the datasheet. The unity gain bandwidth (also known as gain-bandwidth product) is 3MHz. When used in an audio circuit, there is 44dB of gain available at 20kHz, so if the gain is set for 20dB (×10) there's only 24dB of feedback. Where you might measure a distortion of 0.003% at 1kHz (7V RMS output), that climbs to 0.26% at 20kHz. All opamps are affected in the same way. Note that these results are simulated, not measured. However, measurements will show the same trend with any opamp you care to test.
The closed loop shows a gain of 10, or 20dB. The response remains flat until it approaches the open loop gain. Once there's less than 10dB of feedback (when the open loop gain falls below 30dB), the closed loop response falls. For it to be effective, you ideally need at least 20dB of feedback. With decreasing frequency, the FB ratio increases, and at 10Hz there's 86dB of feedback. You might wish that it were different, but physics isn't amenable to the whims of us mere mortals.
Critics of opamps will point to this as a major failing, but in reality it's usually not a problem. It's uncommon for any audio circuit to require a gain of more than 10 (20dB), and if it does it will often be split across two gain stages. However, you need to understand that this is real, so expecting high gain at high frequencies is usually unrealistic. If that's what you need, consider using two opamps in cascade. A total gain of 100 is easy using two gain stages, and at frequencies up to 100kHz with low cost opamps. There will also be circuits where the distortion contribution of the opamp is minimal compared to the distortion expected from the source.
Project 158 shows a low noise preamp with a gain of up to 1,000 (60dB) using NE5532 opamps. By using three stages, each with a gain of 10, you get plenty of bandwidth and very high gain. Ultimately, it was necessary to limit the high frequency response to reduce audible noise. With a gain of 10, an NE5532 can get to 350kHz with a 1V RMS output easily, and there's no visible distortion on a scope until you approach 450kHz.
Circuit design is invariably a series of trade-offs, and a solution for one application doesn't mean that it's suited to another. There will always be situations where good gain 'flatness' is needed, but distortion isn't a major issue, and test equipment is often a case where the requirements are very different from audio applications. Most test equipment that requires a lot of gain is not troubled by a bit of distortion, but gain linearity with frequency is very important. 1dB of variation in an audio circuit will often be quite acceptable, but if a measurement system under (or over) estimates the level by 1dB that may be a 'failure to meet specifications'.
The response shown in Fig. 1.2 is typical of many opamps. The rolloff is 6dB/ octave (20dB/ decade), and it's there because without it the opamp will oscillate. Although shown with a 10Hz -3dB frequency, this varies from one opamp type to the next. Some will start to roll off at a lower frequency, and some higher. The compensation capacitor is known as the 'dominant pole', and it ensures that the opamp will be stable in users' circuits. In the early days there were quite a few uncompensated opamps, such as the LM301, but even that has a required dominant pole capacitor, which is external. The minimum suggested value is 3pF. In some cases, the 'single-pole' compensation is replaced with a two-pole network. This rolls off faster, but the rolloff starts at a higher frequency. Response decreases by 12dB/ octave instead of 6dB/ octave. Not many opamps have this capability, and it's not covered here.
It's not quite so obvious, but for a given -3dB frequency, the bandwidth is also dependent on the open loop gain. If two different devices have a 10Hz -3dB frequency, the one with higher gain must extend the unity gain bandwidth by a proportionate amount. Let's say we have one opamp ('A') with an open loop gain of 10,000 (80dB) and another ('B') with a gain of 100,000 (100dB). Both start to roll off at 10Hz (-3dB). Opamp 'A' will have unity gain at 100kHz, but opamp 'B' will still have a gain of 20dB at 100kHz, and will fall to 0dB (unity) at 1MHz. Opamp 'C' doesn't start to roll off until 100Hz, thus extending its unity gain bandwidth to 10MHz, but without increasing the open loop gain.
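The relationship between open loop gain and corner frequency described above can be checked numerically. A sketch (hypothetical function name), assuming the ideal 6dB/octave single-pole rolloff:

```python
def unity_gain_freq(open_loop_gain, corner_hz):
    # With a 6dB/octave rolloff, gain reaches unity at A_ol x f_corner
    return open_loop_gain * corner_hz

# Opamp 'A': 80dB (10,000) open loop gain, 10Hz corner -> unity gain at 100kHz
print(unity_gain_freq(10_000, 10))   # 100000
# Opamp 'B': 100dB (100,000) open loop gain, 10Hz corner -> unity gain at 1MHz
print(unity_gain_freq(100_000, 10))  # 1000000
```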
This is quite clear from the graphs shown above. This relationship holds for all opamps for a given -3dB frequency in the open loop gain. If the -3dB frequency is raised from 10Hz to 100Hz, this has the same effect - the open loop gain is extended by another decade. If the red trace is extended to 100Hz, it would intersect the green trace at that frequency, so it will have a unity gain frequency of 1MHz. The blue trace shows the result if the -3dB frequency is extended to 100Hz with 100dB open loop gain. The use of very high open loop gain to improve bandwidth is seen in the data shown in Table 1.3 (the last three opamps in particular).
It's worth noting that the NE5532/4 are unusual, in that the open loop -3dB frequency is extended to around 1kHz, but open loop gain is lower than most others. By extending the -3dB frequency from 10Hz to 1kHz (two decades), the effective bandwidth is also extended by two decades. Conversely, some opamps start to roll off at less than 10Hz, so while they may seem to have more than enough gain, a lower rolloff frequency limits their closed loop maximum gain vs. frequency. These three parameters are interactive - open loop gain, open loop bandwidth and compensation -3dB frequency. They all need to be considered for a final design where extended high frequency response is required. For most audio applications you don't need to be too fussy, as any opamp with a gain-bandwidth product of 3MHz or more will work fine in most cases.
Note: Not all NE5532/4 opamps are created equal, as they are made by a number of manufacturers. The response referred to above may or may not apply to those you buy. However, the general specifications are usually fairly consistent, so changing brands won't usually cause any problems. This may also apply to other opamps that are available from more than one manufacturer.
The compensation capacitor is selected to ensure that the gain has fallen to unity before the phase shift through the opamp has accumulated 180°. If there is more than 180° phase shift, the signal polarity is inverted, and negative feedback becomes positive feedback, causing oscillation. If you see a specification for 'phase margin', that's the difference between 180° and the actual phase shift through the opamp. For example, a phase margin of 45° means that the opamp has a total phase shift of 135° at its unity gain frequency. You don't need to worry about this for any opamp that's compensated for unity gain. Sometimes it's not specified at all, and in other cases it may be included in the graphs for the device.
Where high speed is essential, there are some truly awesome opamps available if you're happy to pay the price for them. One that's very hard to beat is the AD797 (Analog Devices), which has full output bandwidth to 280kHz. The gain-bandwidth product is up to 450MHz, and you can have a -3dB frequency of 8MHz with 20dB of gain. This doesn't come cheaply though, as they cost around AU$25 - AU$40 each (depending on variant and supplier).
The thing to take away from this is that nearly all opamps require compensation, including discrete versions. You can build an opamp that doesn't require compensation (for example the opamp shown in Project 231), but it still needs to be compensated if you use it with a gain of less than 30dB (×30). Don't expect it to match most integrated opamps that you can buy, but distortion is lower than you'd expect from a simple circuit, and it has a high slew rate (about 6V/µs compensated). You can get response to 1MHz with 40dB of gain (×100), which is a good result. No compensation is needed if you operate it with 40dB of gain, but it's essential for a gain of 30dB or less. See the project article for full details. Current feedback opamps generally don't require compensation in the traditional sense (see Section 4 below).
The NE5534 is well known, and the datasheets should be in everyone's collection. However, the schematic is not easy to follow, so I've used the RC4558 as an example of a 'real' circuit diagram. This is fairly straightforward, and it gives you an idea of the complexity of even a simple, comparatively low-performance opamp.
The 'dominant pole' is the 10pF cap. This is sufficient to allow the circuit to remain stable with unity gain. Some datasheets don't mention the minimum gain without compensation, but it's usually about ×3 (or 10dB). The TI datasheet is well filled with graphs of the essential parameters. The common way to ensure stability is to use a 30pF compensation cap (33pF is the closest standard value). With this connected between pins 5 and 8 of an NE5534, the slew rate is reduced to 6V/µs (it's 13V/µs without compensation). Interestingly, the lowly TL07x JFET input opamps also have a slew rate of 13V/µs, and they are internally compensated for unity gain, but with a gain-bandwidth product of only 3MHz.
The values from the 'Typical' column of the NE5534 datasheet show that it has a large signal gain of 100V/ mV (100,000 or 100dB). The slew-rate is claimed to be 13V/ µs, but (and this is important) that figure only applies when there is no external compensation capacitor. If a 22pF compensation cap is included, the slew rate falls to 6V/ µs - a significant difference.
If your input signal is a sinewave and the output becomes triangular at high frequencies, it's slew rate limiting. The opamp isn't fast enough to keep up with the signal, and the opamp is operating open-loop (no feedback) during this period. The old claims of TIM/TID (transient intermodulation distortion/ transient induced distortion) were based on exactly this phenomenon, but failed to understand (or chose to ignore) the fact that no audio signal in a properly designed circuit will ever be fast enough to cause a problem. In all but a few cases, TIM was a furphy - it simply didn't happen. It was (and is) easy to create it, but not with a normal audio signal (e.g. music).
The slew rate needed in any application depends on the frequency, waveform and amplitude. A 20kHz sinewave signal at 2V RMS (2.82V peak, typical of the maximum output from a preamp) has a maximum rate of change (slew rate) of 355mV/µs, but if the amplitude is increased to 10V peak (7V RMS) that increases to 1.26V/µs. Increase the frequency to 30kHz (still at 10V peak) and the slew rate becomes 1.88V/µs. If we were designing a measurement system that has to extend to 100kHz, the slew rate increases to 6.28V/µs.
This applies irrespective of the opamp's unity gain bandwidth. An NE5534 without external compensation has a slew rate of 13V/µs, reduced to 6V/µs with a 22pF compensation capacitor. It's apparent that to be able to get 10V peak output at 100kHz, an NE5534 must be used without the compensation cap, or its output cannot change fast enough to keep up with the signal. Slew rate for a sinewave is determined by the formula ...
Slew Rate = 2π × f × VPeak V/s
We divide that by 10⁶ (1,000,000) to obtain V/µs
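The formula is easy to apply directly; a minimal sketch in Python (function name is mine):

```python
import math

def slew_rate_v_per_us(freq_hz, v_peak):
    # SR = 2 * pi * f * Vpeak (V/s), divided by 1e6 to convert to V/us
    return 2 * math.pi * freq_hz * v_peak / 1e6

# 10V peak at 100kHz needs about 6.28V/us, as quoted in the text
print(round(slew_rate_v_per_us(100e3, 10), 2))  # 6.28
```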
It's of no consequence that the open loop bandwidth is 10MHz with the 22pF cap in circuit. The maximum frequency and/or amplitude is limited by the slew rate if we expect more than a couple of hundred millivolts at frequencies up to 1MHz. Slew rate has other effects on a circuit too. If an opamp is driving a nonlinear load (such as an analogue meter's rectifier), the output may have to swing by 1V or more just to overcome the diodes' forward voltage. Ideally, this will be close to instantaneous, but no circuit, opamp or discrete, has an infinite slew rate, so operation at high frequencies is compromised.
Test instrument circuits are a 'special' case, and it can come as a real surprise when an opamp that looks like it should easily handle the highest frequency of interest falls apart during testing. The response may fall dramatically well before you thought it should, so your meter circuit that should handle 250kHz only makes it to 50kHz before the response is well down from where it should be. Even a simulated circuit using an 'ideal' opamp (almost infinite bandwidth and slew rate) may prove disappointing, and it can be hard to understand why.
The frequency response you can get from any opamp is limited by its unity gain bandwidth and the slew rate. At unity gain, the response will usually extend to the unity gain bandwidth, but only for an output voltage small enough that its rate of change remains below the slew rate. At low levels, you can usually expect to get up to the full bandwidth claimed for a non-inverting amplifier (buffer), but somewhat less for an inverting amplifier. This is because an inverting buffer has a 'noise gain' of two, and the opamp is behaving exactly as it would if it had a gain of two. The -3dB frequency will be a little less than half that for a non-inverting buffer.
You have to look at the open-loop gain plot to see why this is true. Unfortunately, the graph resolution is never good enough to see this clearly, but it's always the case. This may be unexpected if you're not fully acquainted with all the specifications and their implications. As an example, a simulation using a TL072 opamp shows the -3dB response extending to 4.86MHz for a non-inverting buffer, reduced to 2.25MHz for an inverting buffer. The same effect applies to all opamps!
The next issue is the maximum output voltage at the highest frequency of interest (let's say 1MHz). We know that the TL07x series have a slew rate of 13V/µs, so any voltage (at any frequency) that exceeds that means that the level will be severely limited. Using the formula shown above, it's easy enough to see that 2V (peak) is the absolute maximum for a 1MHz signal, but in reality it will be a bit less to retain linear operation. Remember that when any feedback circuit has entered slew rate limiting, it's no longer linear and it has zero feedback. A simulation shows that a TL072 has an absolute maximum of 1V RMS (1.414V peak) before slew rate limiting causes a loss of feedback. With 1MHz at 1V RMS, the outputs of both inverting and non-inverting amps are reduced - you can't get unity gain when you're so close to the limits.
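Rearranging the slew-rate formula gives the maximum peak voltage at a given frequency. A sketch (hypothetical function name) that reproduces the TL07x figure above:

```python
import math

def max_v_peak(slew_rate_v_per_us, freq_hz):
    # Rearranged slew-rate formula: Vpeak = SR / (2 * pi * f)
    return slew_rate_v_per_us * 1e6 / (2 * math.pi * freq_hz)

# A TL07x (13V/us) at 1MHz can only manage about 2V peak before slew limiting
print(round(max_v_peak(13, 1e6), 2))  # 2.07
```

In practice the usable level is a little lower than this, since the amplifier must stay clear of the limit to remain linear.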
It's not only frequency response that's affected when you push an opamp to its limits. With a circuit gain of ×10 (20dB) you need at least 20dB of excess gain, and preferably more, at the highest frequency of interest. Referring to Fig. 1.2, if you require a gain of 10, the output will be flat to within 1dB up to 60kHz. At that frequency, the open loop gain has fallen to 35dB, so there's only 15dB of feedback. There's 20dB of feedback available at about 33kHz. At the highest (audio) frequency needed (20kHz) you have a total of 46dB of gain, allowing 26dB of feedback. As the amount of feedback is reduced, distortion increases in (almost) direct proportion.
Opamp | Open Loop Gain | Slew Rate | Unity Gain B/W |
1458 | 200 V/mV (106 dB) | 0.5 V/µs | 1 MHz |
4558 | 200 V/mV (106 dB) | 1.6 V/µs | 2.8 MHz |
TL07x | 200 V/mV (106 dB) | 13 V/µs | 3 MHz |
LM833 | 316 V/mV (110 dB) | 7 V/µs | 15 MHz |
NE5532 | 100 V/mV (100 dB) | 9 V/µs | 10 MHz |
NE5534 (CC=0) | 100 V/mV (100 dB) | 13 V/µs | 10 MHz |
NE5534 (CC=30pF) | 100 V/mV (100 dB) | 6 V/µs | 10 MHz |
OPAx134 | 1 V/µV (120 dB) | 20 V/µs | 8 MHz |
OPA1642 | 5 V/µV (134 dB) | 20 V/µs | 11 MHz |
AD744 | 400 V/mV (122 dB) | 75 V/µs | 13 MHz |
LM4562 | 10 V/µV (140 dB) | 20 V/µs | 55 MHz |
AD797 | 20 V/µV (146 dB) | 20 V/µs | 110 MHz |
CC is the compensation cap for the NE5534. It's interesting to compare the parameters that ultimately limit the high frequency performance. As you can see from the table, the NE553x devices have less open loop gain than 'lesser' devices, but have a much wider bandwidth. The TL07x opamps have a very high slew rate, but can't get beyond 3MHz. The OPA134 (or dual OPA2134) has a very high slew rate, but it can't beat an NE5532 for maximum frequency. The LM4562 has the same slew rate as the OPA134, but it has a gain-bandwidth product of 55MHz vs. only 8MHz. The AD744 has a slew-rate of 75V/µs (faster than any of the others listed), but according to the datasheet only manages 13MHz unity gain bandwidth.
It should come as no surprise that this confuses people. It is confusing, and these examples show why you can't just look at one parameter when high speed and/ or wide bandwidth is required. The parameter that affects your circuit (for better or worse) depends on the application, the highest frequency of interest, and the expected signal amplitude at that frequency. If you simply select the fastest opamp you can get (based on the gain-bandwidth product), it may be unable to supply the full output level at the highest frequency because the slew rate is too low. Likewise, if you select on slew rate, the bandwidth may be inadequate (the TL07x is a good example - 13V/µs, but only 3MHz bandwidth).
Another specification that is supplied for some devices but not others is full-power bandwidth. This is the -3dB frequency at maximum output swing before clipping or slew-rate limiting. I didn't include it in the table because it's not always specified. Sometimes it's provided in the device parameters, sometimes it's shown as a graph, and sometimes it's not included at all. Where this is made available, it's almost always for a unity gain, non-inverting stage.
The simple CFB amplifier (aka CFA) shown below is configured for a gain of 3 (9.54dB), has flat response to 26MHz, and a slew rate of around 280V/µs. The two input transistors are not within the feedback loop so their distortion is dominant. Just like its integrated brethren, the high frequency response is controlled by the value of RF. This simple version can't compete with an integrated circuit, but it shows how the feedback is applied. Instead of going to the base of a transistor, it's applied to the emitter(s). This is a low-impedance point in the circuit that responds to current - hence the term 'current feedback'.
The CFA was patented in 1985, but was 'discovered' in around 1982 [ 6 ]. Many early transistor power amplifiers used the current feedback topology, well before anyone had named it as such. The Mullard 10-10 stereo amplifier is an example, published in the 1960s. A number of similar designs were popular around that time and into the 1970s, and almost all used the current feedback topology. Voltage feedback became common when most designers switched to using a long-tailed pair for the input stage.
CFAs are also known as 'transimpedance' amplifiers. To make everyone's life miserable, it's customary to state the gain in ohms. Essentially, it's a measurement of how many volts output you get for a given input current, and volts divided by amps is ohms. A particular CFA may have a 'gain' of 600kΩ, which means that for each milliamp of input you get 600V output. This is clearly impossible, but it's easily scaled. In this case, an input current of 1µA would cause a 600mV output voltage (600mV / 1µA = 600k). Fortunately, you don't need to get your head around this and it's unlikely that too many readers will be rushing out to buy current feedback opamps. You also need to be aware that the term 'transimpedance amplifier' may also refer to a voltage feedback opamp configured as a current to voltage converter. It's rather disappointing that the two seem to have been conflated, for reasons that escape me.
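The 'gain in ohms' arithmetic above is just Ohm's law rearranged; a minimal sketch (function name is mine):

```python
def cfa_output_volts(transimpedance_ohms, input_current_amps):
    # Transimpedance 'gain': output voltage per unit of input current (V = Z * I)
    return transimpedance_ohms * input_current_amps

# 600k transimpedance with 1uA of input current gives 600mV out, as in the text
print(round(cfa_output_volts(600e3, 1e-6), 3))  # 0.6
```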
DC offset is usually somewhat higher with CFB opamps than VFB types, and the simulated version of the Fig. 4.1 circuit has an offset of over 40mV with the +ve input grounded via a 1k resistor. This is despite simulator transistors being well matched. It's an issue with all such designs, whether discrete or integrated. Capacitive coupling eliminates the DC offset of course, but that may not be possible in some circuits. The simulated version has a distortion of 0.33% with 3V output and no output load other than the feedback network. The circuit can drive 10MHz into a 50Ω load at up to 4V peak (5.6V RMS). Due to its simplified topology, distortion performance is rather poor if it's heavily loaded. As shown (using BC549/559 transistors), the -3dB bandwidth is 38MHz (simulated).
There used to be only a few opamps using current feedback, rather than the more common voltage feedback. Over the years, the number has grown dramatically, and there are now countless examples. These can usually be recognised by the use of a very low value feedback resistor, and they are designed to operate in the MHz ranges. They will work just fine for audio, although some have a low input impedance. An example is shown in the article High Speed Amplifiers in Audio, published after Texas Instruments sent me some to play with. These have a -3dB bandwidth of up to 200MHz, and were designed to drive xDSL (digital subscriber line) - a twisted pair telephone line used for data. This technique has lost favour in most countries now (supplanted by cable/ fibre optic connections), but for quite a while it was the preferred method of providing high speed internet connections to customers. It's still used, and CFB opamps are likely to be with us for quite some time yet, because designers have found them to be useful for many other tasks.
The ability to transmit multiple carrier signals onto a single twisted pair was revolutionary at the time, but it required amplifiers with very wide bandwidth and extremely low distortion. Current feedback opamps are now very common, and they are ideal for handling very high frequencies. Unlike a voltage feedback opamp, CFB opamps do not use a dominant pole for compensation, so they have fairly flat response from DC to daylight (well, not quite daylight, but you get the idea).
CFB opamps are well suited to video line drivers, intermediate frequency amplifiers (in radio receivers) and anywhere that very good high frequency response is needed. There are no audio systems that need this much speed, but it probably won't hurt anything. You do need to be aware of DC offset, and in extreme cases you might find that using a CFB opamp in an audio system causes it to pick up radio frequency interference. This is unlikely to be what you want to achieve. Note that you cannot (and must not) add a capacitor in parallel with RF to limit the HF response, as that will cause oscillation. Instead, use a higher value feedback resistor, or a simple passive filter at the non-inverting input.
An example of a current feedback amplifier (CFA) is the Analog Devices ADA4310. The response curves are shown above for four gain settings. This particular device has various power settings, and the graphs shown are with it set for maximum power. As power is reduced, so is bandwidth. Don't expect to find these in any audio products. Doing so would be rather pointless, although their input impedance is within the normal range we expect. The ADA4310 datasheet claims 500k input impedance. If the feedback resistor is made lower than the recommended minimum a CFA will show greater peaking and may become unstable. Feedback resistors higher than the suggested range should also be avoided. The maximum supply voltage for the ADA4310 is ±6V, limiting dynamic range for audio applications.
Note: Selecting the feedback resistor using the same criteria you'd adopt for a voltage feedback opamp could easily see the bandwidth reduced by an order of magnitude (e.g. from 100MHz down to only 10MHz). The values suggested in the datasheet are there for a reason! Also, be aware that the open-loop gain is generally lower than most VFB opamps, so expecting very high closed loop gain is usually unrealistic.
Gain (dB) | RF (Ω) | RG (Ω) | -3dB Bandwidth
---|---|---|---
+2 (6dB) | 499 | 499 | 230 MHz
+5 (14dB) | 499 | 124 | 190 MHz
+5 (14dB) | 1k | 249 | 125 MHz
+10 (20dB) | 499 | 55.4 | 160 MHz
+20 (26dB) | 499 | 26.1 | 115 MHz
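The tabulated gains follow the familiar non-inverting relation G = 1 + RF/RG, exactly as for a voltage feedback opamp; with a CFA it's only the resistor *values* that are constrained. A quick sketch to verify the table entries:

```python
# Non-inverting gain: the same relation applies to VFB and CFB opamps,
# but a CFA's bandwidth depends primarily on RF, not on the gain itself.
def noninverting_gain(rf: float, rg: float) -> float:
    return 1.0 + rf / rg

for rf, rg in [(499, 499), (499, 124), (499, 55.4), (499, 26.1)]:
    print(f"RF={rf}, RG={rg}: gain = {noninverting_gain(rf, rg):.2f}")
```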
The symbol for a CFA is usually the same as used for voltage feedback devices, but the feedback resistances used are far lower. The maximum suggested value for the feedback resistor for the ADA4310 is 499Ω (510Ω would work too), and these devices are designed to drive 50Ω loads. It's common to see the gain rise before it starts to roll off, and the rise is greatest when a CFA is set for low gain. CFAs generally have low distortion and extraordinarily high slew rates. The CFA shown has a maximum slew rate of 820V/µs. Input noise is claimed to be only 2.85nV/√Hz.
Power dissipation in CFAs is generally higher than a 'normal' voltage feedback opamp, and some require a heatsink. The supply current for the ADA4310 is 15.2mA (full power mode), so dissipation is 182mW - not much, but it's a tiny SMD IC. The first integrated CFA I played with was a TI THS6012, a fairly substantial device that also required a heatsink that was very difficult to accommodate. One interesting claim is that CFB opamps have bandwidth and distortion characteristics that are 'relatively unaffected' by the gain. Most application circuits shown in datasheets indicate a maximum gain of up to ×10 (20dB), but ×4 (12dB) is more common.
The very wide bandwidth of CFAs can mean that cables are no longer 'unimportant'. Because of their very high maximum frequency, a short length of coaxial cable can become a resonant circuit. This can happen with 'ordinary' opamps as well, but at the frequencies where a cable can cause problems they have little or no gain. An 857mm length of coax is 1/4 wavelength at 70MHz, well within the bandwidth of most CFAs. To prevent reflections and potential instability, coax should be terminated with its characteristic impedance. This is a nuisance (to put it mildly). Adding a 51Ω resistor in series with the output will generally work well enough.
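The quarter-wave length can be checked with a short sketch. The velocity factor of 0.8 is an assumption (typical of foam-dielectric coax), chosen because it reproduces the 857mm example; solid polyethylene cable (VF ≈ 0.66) would resonate at a shorter physical length:

```python
C = 299_792_458.0  # speed of light, m/s

def quarter_wave_length(freq_hz: float, velocity_factor: float = 0.8) -> float:
    """Physical length (metres) of an electrical quarter wave in coax."""
    return velocity_factor * C / (4.0 * freq_hz)

print(quarter_wave_length(70e6))  # ~0.857 m at 70 MHz, matching the text
```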
Another CFB opamp worth looking at is the OPA2677, with a small-signal bandwidth of 220MHz and a slew rate of 1,450V/µs. The suggested feedback resistor (RF) is 511Ω, or 250Ω for a maximum bandwidth of 150MHz. The maximum supply voltage is +12V (or ±6V). If you think either of these devices suits your needs, you need to read the datasheet carefully and observe all precautions. Supply decoupling is particularly important, and MLCC types are the only ones that will ensure good performance. Multiple values in parallel are generally used to cover the frequency range. Normally, I never suggest this for audio, but when you're working with RF, everything changes. The supply current for the OPA2677 is 18mA for both channels, but it can supply up to ±380mA to the load.
Several ESP projects are CFAs. The Project 37 (DoZ) preamp is one example, and the Project 217 'practice' amplifier is another. The P37 preamp has no compensation, and response extends to 10.5MHz (-3dB), with a unity gain bandwidth of 25MHz, and a slew rate of 36V/µs. All of this from just four transistors! While these figures were taken from a simulation, measurement has shown a small signal -3dB frequency of 8MHz, which is extraordinary for such a simple circuit. It's quite capable of providing 3V peak output at well over 1MHz, something that is difficult with most opamps.
It's important to understand that there are two different versions of current feedback. The first is the type discussed here, and the second is used to increase the output impedance of amplifier circuits, as described in Project 27 (guitar amplifier) and Care & Feeding Of Spring Reverb Tanks. Both use current feedback, but it's used to sense the current in the load, and is not a characterisation of the amplifier topology. That both use the same terminology is unfortunate, but a quick look at the circuit of one or the other will allow you to figure out what you're looking at. Current feedback used to increase output impedance is almost invariably a mixed feedback system, using both voltage and (load) current feedback paths, with the current sensed across a low value resistor.
There are several references to the loss of feedback in this article, and it's helpful to understand how this happens. Feedback works by sending a scaled version of the output back to the inverting input of an opamp (or power amp). Provided the circuit is operating in linear mode (not distorting for any reason), the voltage at the two inputs (+ve and -ve) will be equal. This assumes an 'ideal' device, and in reality there is always a small difference, but for basic analysis it can be ignored.
Should the device become non-linear for any reason (e.g. clipping or slew rate limiting), it's no longer possible for the input voltages to be equal because the output is not linearly related to the input. Simple deduction tells us that if the device's input voltages are not the same, it can only be operating open-loop - the feedback is no longer in control of the circuit's behaviour. The output is simply controlled by the relative polarity of the two inputs. In this (non-linear) mode of operation, the output simply assumes the polarity of the most positive input.
If the non-inverting input is more positive than the inverting input, the output will be positive, and vice versa. Normal (linear) operation can only resume when the feedback is restored. As noted, this can happen if the input changes too quickly and the output can't keep up (slew rate limiting), or if the output is driven to one or the other supply rail (clipping). This phenomenon was the basis for the arguments that raged (for a while) about TIM (transient intermodulation [distortion]) aka TID (transient induced distortion). It's very real, and it can happen, except that there were presumptions made that failed to account for the nature of music. Musical instruments (and the recording processes) don't have anything that changes fast enough to cause problems with a properly designed circuit.
It's very easy to create TIM/TID on the test bench and in a simulator, but you can use the formula for maximum slew rate to work out how fast an amplifier needs to be to handle normal audio. It's almost impossible to cause TIM/TID with music alone. To give you an idea, an amplifier with ±100V supplies will never be driven to more than ±50V with an audio signal of 20kHz, but we'll ignore that and work out the slew rate for 100V peak at 20kHz. That works out to be 12.6V/µs, which is total overkill, but easily achieved.
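The slew rate formula referred to is SR = 2πfV(peak). A minimal sketch using the 100V / 20kHz example:

```python
import math

def required_slew_rate(freq_hz: float, v_peak: float) -> float:
    """Minimum slew rate (V/s) for an undistorted sinewave of v_peak at freq_hz."""
    return 2.0 * math.pi * freq_hz * v_peak

# 100V peak at 20kHz, as in the text
sr = required_slew_rate(20e3, 100.0)
print(sr / 1e6)  # ~12.6 V/us
```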
For more sensible power ratings (not everyone needs a 600W/ 8Ω amplifier), the demands are similarly reduced. A more typical power amp will use ±50V supplies (150W/ 8Ω) and will never have to provide full power at 20kHz - the worst case is around 35W otherwise everyone would blow up their tweeters. The slew rate needed for that is only 2.5V/µs!
Fig. 5.1 shows what happens when the output (red trace) can't keep up with the input. The input signal was a 1V peak sinewave at 20kHz. The output is unable to change quickly enough to permit the passage of a sinewave, so a triangular wave is produced instead. I used a 741 for the simulation, as it is one of the few that will limit at audio frequencies. Its slew rate is only 0.54V/µs. The required slew rate is 1.26V/µs, as the frequency is 20kHz with an expected peak voltage of 10V. This condition will always exist at some frequency (and/ or level), but with most 'decent' opamps you won't see it until the input frequency is over 50-100kHz. For example, an NE5532 with a 10V peak output will show the onset of slew rate limiting at around 130kHz. At 100kHz there's no limiting, and distortion is under 0.1%. This is of no consequence of course, as it's well above the audio band. CFB opamps are different, and they are generally fast enough that slew rate limiting won't occur.
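Turning the same formula around gives the frequency at which limiting begins for a given output level. The 9V/µs figure used here is the NE5532's typical datasheet slew rate, and the result is consistent with the onset at 'around 130kHz' noted above:

```python
import math

def slew_limit_freq(slew_rate_v_per_us: float, v_peak: float) -> float:
    """Frequency (Hz) above which slew rate limiting begins for a sinewave."""
    return (slew_rate_v_per_us * 1e6) / (2.0 * math.pi * v_peak)

# 741 (~0.5 V/us) at 10V peak: limits well inside the audio band
print(slew_limit_freq(0.5, 10.0))   # ~8 kHz
# NE5532 (typically ~9 V/us) at 10V peak: onset well above the audio band
print(slew_limit_freq(9.0, 10.0))   # ~143 kHz
```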
It's a good idea to ensure that the slew rate is at least double that which is needed for clean reproduction, and that's usually fairly easily achieved. None of this helps if the amp is driven to clipping, and while that's ideally avoided, it will happen occasionally. For power amplifiers, ensuring a clean recovery from clipping can be more important than slew rate. Clean recovery from clipping is particularly important for guitar amps, as they are usually driven hard.
Fig. 5.2 shows SR limiting in more detail. The input signal was a 1V peak sinewave at 20kHz, with a small 1MHz sinewave superimposed. The amplifier is supposed to provide an output of ±10V peak (7V RMS) as shown by the green trace, and it should include at least some of the 1MHz signal. As before, the opamp's output can't keep up. The output tries to get to the required voltage, but it can't reach the ±10V peaks quickly enough. As you can see, all traces of the added 1MHz signal have gone. The opamp is operating open-loop - the feedback is irrelevant.
Fairly obviously, any 'nuances' in the input signal are lost, as is the original waveform. This is intended to be exaggerated so you can see the effects easily. If slew rate limiting occurs in a 'real' circuit it's far more subtle, and it may even go unnoticed. You'll never see it happen on a scope with a music signal because it happens too quickly and the signal is dynamic. It is very easy to induce though, and if you observe the output of a slow opamp stage that's expected to provide (say) 10V peak at 20kHz, you will see the output waveform almost exactly as shown (the frequency may be much higher before the problem is visible). As shown, it will show as a drop of 5.26dB if you only monitor the output with an AC millivoltmeter. If the input level is reduced to 250mV peak (1.77V RMS output) the problem goes away, and the output of even a slow opamp is flat to well over 20kHz, with distortion back to 'normal' (ignoring the 1MHz signal - that's only there to show what happens during slew rate limiting). Fairly obviously, choosing a more sensible opamp also solves the problem - no-one would use a 741 in an audio circuit other than for testing, and even that's unlikely for the most part.
Note that Fig. 5.2 is a test designed to show not just slew rate limiting, but what happens to other frequencies that may also be present. No-one will have an audio signal with 1MHz (or other high frequency) superimposed, but there will be harmonics of other signals present. The example is extreme, and it will never happen with music. If you run a bench test with the setup described, you will see the same thing. Examples such as this can be used to 'prove a point', but they do not represent what happens with music. For what it's worth, an LM4562 will show an output that's very close to the 'expected' (green) waveform. If a 1MHz signal were somehow present along with the audio, it should be filtered out, as it serves no useful purpose and will probably increase distortion.
With opamps used in low-level circuits (preamps, active crossovers, line drivers, etc.), the demands are generally fairly modest. The 'old faithful' NE5532 (or 5534) opamp has been used in countless high quality mixing consoles used to create the music you listen to. It stands to reason that they are also perfectly suited to home equipment. From the basic (frequency related) specifications provided in Table 3.1, it's obvious that they will never cause problems in most audio circuitry. The LM4562 (and its close relatives) used to be very expensive alternatives, but these opamps are now barely more expensive than the NE5532, so it would almost be silly not to use them. Unfortunately (and in common with so many other parts), through-hole (DIL) packages may be hard to find.
This article will not answer the all-important question of 'which opamp is best'. There is no 'best' opamp for all applications, the range of devices is truly vast, and while some are acclaimed as sounding 'better' or 'worse' than others, mostly this is nonsense. It certainly applies if there's audible noise, or if you try to use a completely inappropriate opamp (e.g. µA741 or 1458) for audio, but with reasonable signal levels (up to 1V RMS) and across the audio band, even these work. They are far from optimum though, and I would never suggest that you remove an LM4562 and use a 4558 instead. The NE5532/4 are still excellent opamps, and their only real issue is a rather high DC offset voltage due to their comparatively high input bias current. This is easily solved by using AC coupling - you can't hear DC, so there's no reason to amplify it.
Note that every IC opamp ever made has full gain at DC (that's where the open-loop gain is measured), and all have almost identical low frequency performance (noise excepted). The differences are in the higher frequencies, where the open-loop gain is falling at 6dB/ octave and feedback becomes less effective. If you need to amplify high frequencies, then you must examine gain-bandwidth product (unity gain bandwidth), slew rate and 'full power' bandwidth if that's provided in the datasheet.
The selection of an opamp for instrumentation is usually far more difficult than for audio. Test equipment needs flat frequency response, often to 250kHz or more, but there may be no need for particularly low distortion. In some cases, DC accuracy may be an absolute requirement, while in others it doesn't matter at all. Many test instrument circuits make greater demands on opamps than any audio circuit, as there are many criteria that must be satisfied. This is why manufacturers have such detailed datasheets, so you can wade through all the parameters to choose the device best suited to your needs.
You often need to be very careful with wide bandwidth opamps, as a minor error in PCB track layout can cause the device to oscillate. Some are more resistant to oscillation than others, and regular readers will have noticed that I never specify the LM833 for any projects. Many people have found these opamps to be marginally stable unless everything is done perfectly. In extreme cases, just adding a socket can cause (often serious) problems, and it's essential that opamps always have a bypass capacitor as close as possible to each supply pin. The bypass can be between the two supplies, but bypassing each supply to ground is also essential. More problems are caused by poor (or non-existent) bypassing than almost any other design error.
Sometimes an oscillation problem is 'invisible'. Nothing shows up on a scope, but distortion may be higher than expected. The problem may 'go away' (or appear) if you touch the IC body, or connect/ disconnect a test lead. This generally indicates that the opamp is oscillating internally, with little or no visible clue. Should you experience this, it's almost invariably due to poor bypassing. There might be other causes, but bypassing is so important that it should be the first thing you verify.
This article covers but one aspect of opamp design - speed/ frequency response. Depending on the application, you may also need to optimise for noise (see Noise In Audio Amplifiers) or distortion (Distortion - What It Is And How It's Measured). For high quality audio, both of these are essential, but bandwidth is rarely an issue if an 'audio qualified' opamp is selected. Be aware that even major manufacturers may make (IMO silly) claims for 'audio quality' with nothing to back it up.
Most references are the datasheets for the various devices mentioned throughout the article. There aren't many 'independent' references, because the topic (frequency vs. gain) is not well covered elsewhere, and much that you find is not useful. CFAs are very well documented, but many of the explanations are rather convoluted. There are some references though ...
Elliott Sound Products - Circuit Protection
The general idea of a fuse seems fairly straightforward, but in reality it's a science unto itself. The maximum and minimum current ratings depend on many factors, including the size and shape of the fuse body itself, the material from which it's made, plus many other factors that aren't immediately apparent. There is a bewildering array of different types of fuse, but for the vast majority of electronics work the M205 (5 × 20mm) miniature cartridge fuse is the most common. The 3AG (6.3 × 32mm) fuse used to be the most popular many years ago, but the smaller M205 has taken over for most applications. Readily available currents are shown next.
M205 - 5 × 20mm: 32mA - 16A
3AG - 6.3 × 32mm: 40mA - 32A
These ranges cover most of the common uses we have in electronics, but for high voltages and high currents, larger cartridge fuses up to 400A are readily available. Special techniques are necessary for high voltages, and especially where the supply is DC. If the fuse is not designed for it, as the fuse wire melts it's perfectly capable of drawing an arc (it happens even at low voltages). Should the voltage be high enough to sustain an arc longer than the fuse, then you can expect (and in turn will receive) dire consequences.
There are also PCB fuses (generally with a 5mm pin spacing, but M205 fuses are readily available with wire leads), and these simplify the assembly process. It's generally expected that if the fuse fails, the product is discarded, because for most people it's not economically viable to have the unit repaired (the fuse will only fail after the device itself has failed!). In some cases, fusible resistors are used. These dissipate power, but if it exceeds the threshold (for longer than some predetermined period) the idea is that a fusible resistor will become open circuit. It doesn't always happen the way it's supposed to, but they are cheap 'insurance' against catastrophic failure that may cause a fire or cause isolation barriers to be breached.
The power dissipated by a fuse at its rated current varies considerably. For example, a 32mA fuse has a typical cold resistance of over 250 ohms (!!), and it will dissipate over 250mW at rated current. A 15A fuse from the same series (Littelfuse Axial Lead & Cartridge Fuses - 5×20 mm, Fast-Acting, 217 Series) has a resistance of 4mΩ and will dissipate 900mW. See Table 2 for more on this topic.
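These figures follow directly from P = I² × R. A quick check of both examples:

```python
# Fuse dissipation at rated current: P = I^2 x R
def fuse_dissipation(i_amps: float, r_ohms: float) -> float:
    return i_amps ** 2 * r_ohms

print(fuse_dissipation(0.032, 250.0))  # ~0.256 W - over 250mW for the 32mA fuse
print(fuse_dissipation(15.0, 0.004))   # ~0.9 W for the 15A fuse
```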
For a 32mA fuse, there can be up to 10V across it at rated current. This is a very significant loss of voltage, and it's apparent that the actual current will normally be a great deal less than the fuse rating. While some fuse datasheets specify the fusible wire material, many don't, so it can be difficult to determine the actual temperature based on the resistance increase.
Copper (bare or tin-plated) is common, but copper has a high melting point. Some of the materials you are likely to find are listed below. Aluminium is not very common, because it must be welded, which is more difficult than soldering (the other metals listed can be soldered). Note that when silver is soldered using a tin-based solder, the resulting alloy has a lower melting temperature than either of the metals used individually.
Material | Resistivity, Ω·m | Melting Temp, °C | Tempco (per °C)
---|---|---|---
Aluminium | 2.65 × 10⁻⁸ | 660 | 3.8 × 10⁻³
Copper | 1.724 × 10⁻⁸ | 1084 | 4.00 × 10⁻³
Silver | 1.59 × 10⁻⁸ | 961 | 6.1 × 10⁻³
Tin | 11.0 × 10⁻⁸ | 232 | 4.2 × 10⁻³
Zinc | 5.92 × 10⁻⁸ | 419.5 | 3.7 × 10⁻³
There are also various alloys that may be used. Tin/ lead used to be common (i.e. 'solder') but it's now discontinued (RoHS strikes again). Tin/ zinc is used in some cases, but the specific alloy is not usually provided. This makes it hard to determine the temperature coefficient, as it is highly dependent on the amount of each material. The above list is not exhaustive, but it does cover the majority of fuses encountered in electronics.
Silver is common for very low current fuses because its high conductivity minimises the amount of material needed, and allows the fuse to be fast acting. For high current fuses (aka 'HRC' or high rupturing capacity), the amount of material has to be minimised (requiring high conductivity). This reduces the amount of metal vapour created when a fuse fails catastrophically, as is the case with a short circuit across a low impedance supply.
Because a greater mass of metal takes longer to heat, slow-blow fuses (also referred to as 'T' (time), time-lag or delay fuses) commonly use a larger diameter and higher resistivity alloy. Another technique is to use a thick wire with a spring that causes rapid separation when the fuse wire melts, but there are other methods used as well. Any fuse wire that has an effective 'heatsink' (however this is arranged) will naturally take longer to open with an over-current condition. A fault has to be present for a longer time to allow the wire and its 'heatsink' to rise to the material's melting temperature.
For information on metals and their characteristics, I recommend The Engineering Toolbox, as it's an excellent source of sometimes hard to find information.
Rated Current (A) | Interrupt Current (Max) | Resistance (0A) | Resistance (At Rated A) | Voltage Drop At Rated Current | Dissipation At 150% Current (W)
---|---|---|---|---|---
0.315 | 35A@250Vac | 880 mΩ | 4.13 Ω | 1.300 V | 1.6
0.4 | | 277 mΩ | 3.00 Ω | 1.200 V | 1.6
0.5 | | 206.5 mΩ | 2.00 Ω | 1.000 V | 1.6
0.63 | | 190 mΩ | 1.03 Ω | 650 mV | 1.6
0.8 | | 120.3 mΩ | 300 mΩ | 240 mV | 1.6
1.0 | | 96.4 mΩ | 200 mΩ | 200 mV | 1.6
1.25 | | 70.1 mΩ | 160 mΩ | 200 mV | 1.6
1.6 | | 52.8 mΩ | 119 mΩ | 190 mV | 1.6
2.0 | | 41.6 mΩ | 89.5 mΩ | 170 mV | 1.6
2.5 | | 33.4 mΩ | 68.0 mΩ | 170 mV | 1.6
3.15 | | 22.4 mΩ | 47.6 mΩ | 150 mV | 2.5
4.0 | 40A@250Vac | 16.5 mΩ | 32.5 mΩ | 130 mV | 2.5
5.0 | 50A@250Vac | 13.7 mΩ | 26.0 mΩ | 130 mV | 2.5
6.3 | 63A@250Vac | 9.5 mΩ | 20.6 mΩ | 130 mV | 2.5
The table shown above is adapted from a Littelfuse datasheet (Axial Lead & Cartridge Fuses 5×20 mm > Fast-Acting > 217 Series) for fast-blow glass fuses. I've shown the values that are most likely to be used in typical electronic projects, but the complete table has a lot more information and covers fuses from 32mA to 15A. I added the column that shows resistance at maximum current (copper wire is assumed), and it works out that the fuse wire temperature is around 300°C at full rated current. (Details for the calculation of temperature are shown at the end of this page.)
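The ~300°C figure can be reproduced from the resistance rise, assuming a copper element (tempco from the materials table above) and a 25°C ambient. This is a sketch, not a precise calculation, since the actual element alloy is rarely specified:

```python
# R_hot = R_cold * (1 + alpha * dT)  =>  dT = (R_hot / R_cold - 1) / alpha
def wire_temperature(r_cold: float, r_hot: float,
                     alpha: float = 4.0e-3, t_ambient: float = 25.0) -> float:
    """Estimated wire temperature (degC) from the resistance increase."""
    return t_ambient + (r_hot / r_cold - 1.0) / alpha

# 1A fuse from the table above: 96.4 mohm cold, 200 mohm at rated current
print(wire_temperature(0.0964, 0.200))  # ~294 degC - i.e. 'around 300 degC'
```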
+ +High voltage and 'HRC' (high rupturing capacity) fuses use a ceramic body (instead of glass) and contain a granular filler, usually high purity quartz sand with a specific grain size and packing density. The grain size is designed to provide space for the vapours and gases produced by the arc to expand. It also provides a large surface area for effective cooling of the arc. Some of the filler melts under the influence of high arc temperatures, absorbing a huge amount of energy and extinguishing the arc quickly. This isn't something that's normally needed in typical electronic projects, but HRC fuses are common in industry and power distribution.
In an ordinary electrical circuit, voltage control is the most practical approach for supplying power, and a given load will have some sort of impedance. Current flows through the load according to Ohm's Law, V = I × R, where V is the potential (Volts), I is the current (Amps), and R is the load impedance (Ohms). Solve Ohm's Law for current, and the result is I = V / R. All is well if the load is functioning correctly, but if a fault (failure condition) occurs, what next?
In theory, if the resistance approaches zero, the current approaches infinity regardless of the voltage. In practice, no real-world electrical source or failure mode can support unlimited current. Even so, the current through the fault may be sufficiently high to cause equipment damage, fires or even explosions. Since these are unacceptable results, the electrical source must be quickly interrupted.
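A minimal sketch of the point: the fault current is set by whatever impedance remains in the circuit (source impedance plus the fault itself). The values below are hypothetical, purely for illustration:

```python
# Prospective fault current, limited by source and fault impedance
def fault_current(v_source: float, r_source: float, r_fault: float) -> float:
    return v_source / (r_source + r_fault)

# Hypothetical: 230V supply, 0.5 ohm total source impedance, near-zero fault
# resistance - far from infinite, but easily enough to be destructive
print(fault_current(230.0, 0.5, 0.01))  # ~451 A
```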
    Other than the Foreword (above) and the 'Additional Comments' and 'Measuring Transformer Internal Temperature' sections (at the end of this page), comments and additions by the editor (Rod Elliott) are shown in indented italics, like this paragraph. The only changes to Aaron's original text are very minor, and spelling has been changed to Australian English from the original US English. Images have been resized and/or redrawn to reflect ESP image standards.
In many audiophool websites and forum pages you will see references to exorbitantly priced 'quantum' fuses. There will be user claims (real or created by the seller) about how the 'veil' was lifted from the music, how the sound became '3-dimensional' (it already was), or how bass was supposedly 'improved'. These claims are lies; there's not a grain of truth in the reviews or the manufacturer's literature. As you'll discover reading this article, a fuse has a very specific job to do, and by necessity has some resistance. Without that, the fuse could never blow, and it would be a room-temperature superconductor.
The amount of absolute bullshit that you'll find on this topic is astonishing, and you will never see a shred of evidence gained from laboratory testing. The 'reviews' are invariably non-blind, where the listener is usually the person who installed the overpriced piece of crap, so it's inevitable that having spent AU$200 or more (for one fuse!) he will hear a difference (women are generally too smart to believe this nonsense). Further claims that the fuse is directional (Really? For AC?) are even sillier. An AC circuit has a total polarity reversal 50/ 60 times per second, so the fuse can't possibly be directional.
It's fair and reasonable to claim that there is no such thing as a quantum fuse, or that all fuses are 'quantum', since there are quantum changes in all conductors as they pass current. I would dearly love to publish names here to shame the charlatans who sell this rubbish, but most come from the US where litigation can be instigated at the drop of a hat - or a fuse. I'm in no position to try to defend a lawsuit, but I suspect that many readers will read between the lines and know to whom I'm referring. If this is new to you, relax - you haven't missed a thing!
There are also countless 'audio review' sites whose authors sing the praises of quantum fuses, overpriced (and possibly non-compliant) mains leads, 'special' interconnect or speaker cables along with countless other 'products' that are utterly bogus. This article does not cover 'quantum' fuses or anything else that is purely subjective. Audio (and electronics in general) is made using science and measurement, not opinion or dogma.
The fuse is found in everything from small electronic devices to high-voltage power systems. The two main components are a conductive element designed to fail by thermal melting, and a dielectric region capable of breaking any resulting arc. A fuse is a one-shot device and commonly appears in a cartridge that allows for easy replacement after an interruption.
The fusing element must not hinder the normal circuit path, but it must also overheat and melt in response to excessive current (overcurrent). In simplified terms, the melting function depends upon the power expression of P = V × I, where P is the power in Watts. By algebraic substitution with Ohm's Law, the expression becomes P = I² × R, showing that a linear change in current produces a square-law change in power dissipation, which is useful in a fuse. But how quickly should the fuse respond? Fast? Slow? How fast or slow? The capabilities can be summarised with a TCC (Time/Current Curve). A logarithmic scale is typical, with current shown on the horizontal (x) axis and time on the vertical (y) axis.
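The square-law relationship is trivial to demonstrate:

```python
# P = I^2 x R: a linear increase in current gives a square-law increase in
# heating - which is what makes a thermally-melting fuse element practical.
def relative_heating(i_ratio: float) -> float:
    """Dissipation relative to rated, for a current of i_ratio x rated."""
    return i_ratio ** 2

print(relative_heating(1.0))   # 1.0   - rated current, rated dissipation
print(relative_heating(2.0))   # 4.0   - double the current, 4x the heat
print(relative_heating(10.0))  # 100.0 - a 10x fault heats the element 100x harder
```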
The TCC in Figure 1 is illustrative of a very fast 1A fuse. First, we observe that no fuse of this type can fail from overcurrent until at least 1.25A is flowing, and none is guaranteed to fail until 1.5A is achieved, or 150% of nominal current. A factor of 1.5 is rather optimistic, and many real-world fuses are more in the range of two, representing the safety factor that must be allowed when a fuse is specified.
Second, note that the TCC is a region representing unavoidable variations, since a thermal melting mode cannot be precise with real-world materials. Here, one fuse might pick up at 1.25A and fail, i.e. clear the fault, in about ten seconds, and another fuse may not clear until 1.5A flows for ninety seconds. The gap between the nominal current rating and the minimum trip is also unavoidable, as a minimum trip too close to the nominal rating would subject the fuse element to large, thermally-induced mechanical stresses during normal operation.
The curve characteristic is maintained in a family of fuses. A 10A fuse of the same family would yield a similar curve, but shifted right, with an initial pickup from around 12.5-15A.
For any fuse, three data points are particularly interesting to a hobby user:
A nuisance (fatigue) failure occurs when the fuse has been aged by numerous operation cycles, or the fuse is operating slightly above its nominal current rating, or the fuse is subject to abuse such as inrush spikes or even mechanical vibration. Metal fatigue from aging is normal, but the latter cases can indicate a system design problem or an insufficient fuse rating. A fuse that is being run slightly above its current rating, or stressed by an inrush, can often be observed to move inside the cartridge. It may survive a number of cycles but it will eventually fail.
A 'thermal fuse' permanently interrupts a circuit in response to an external heat source exceeding a threshold temperature. Heating-element appliances (such as coffee makers), where a system failure might produce a fire at normal operating current, often include one or two. Thermal fuses typically have a live body, so the requirement for electrical insulation with a high temperature tolerance means these devices can be quite dangerous if mishandled.
Self-resetting devices should never be used for primary electrical protection in a hobby project. They are more correctly applied to low-cost control circuits.
The 'self-resetting thermal switch' includes a circular metal plate, which expands in response to an external temperature until it snaps from convex to concave. The new position mechanically trips a switch. These devices are commonly applied to simple temperature control circuits in heating-element appliances and can also be used for hobby tasks such as power supply cut-out, fan start-up, or tripping a control circuit.

A 'self-resetting fuse' is a thermal circuit breaker that temporarily opens in response to an overcurrent condition through a bimetallic strip. A bimetallic strip bonds two metals with dissimilar thermal expansion rates. When heated, the strip curves due to dissimilar stresses. In a switching application, the strip is permanently attached to a wire at one end and only makes touch contact at the other, causing the circuit to open when the strip bends.
While 'Polyswitches' (PTC thermistors, as used for loudspeaker protection) may seem ideal as a self-resetting thermal fuse, they are completely unsuited for mains voltages. The maximum voltage is sometimes specified, but in most catalogues it will be missing. Most appear to be rated for around 60V peak at most, and if subjected to 120V or (much worse) 230V, they will probably fail spectacularly if used as a mains protection device. There might be exceptions, but none has been seen thus far.

Since the failure of such a device will almost certainly liberate smoke and perhaps some fire, there is no point using something that will produce exactly the failure mode one is attempting to prevent.
A circuit breaker is a resettable, electromechanical device that interrupts an electrical supply in response to a failure mode. A basic circuit breaker responds only to overcurrent, but specialised designs can detect very precise failure conditions via electronic detectors. The latter category includes Ground-Fault (GFI) and Arc-Fault (AFI) interrupters. Both are interesting enough, but have little or no application to hobby projects.
Circuit breakers used for protecting circuits at a service distribution panel normally include an electromagnet in the circuit path. The magnet coil does not interfere with the circuit under normal operating conditions, but it will quickly pull the breaker mechanism open in response to an overcurrent condition. Residential and commercial power protection is the most common application, but smaller magnetic breakers can also be purchased for hobby use. Magnetic types include a lever or rocker arm that can be manually tripped, like a switch, and some also have a third 'trip' position between 'on' and 'off' to indicate a previous automatic operation.
Thermal circuit breakers use a heater and a bimetallic strip in series with the circuit, and the strip releases a spring-loaded lever upon deformation. Heating is proportional to P = I² × R, and a thermal breaker will mimic the operation curve of a time delay fuse. Thermal devices are commonly featured in low-cost power extension strips and as a snap-in replacement for a standard fuse holder in hobby applications. Most thermal devices cannot be manually tripped, and will have a short time delay before the tripped device can be reset. Some types may have a relatively high 'on' resistance and are unsuitable for low-voltage applications.
A selection of circuit breakers and fuses was obtained for testing. The test kit included a True-RMS multitester as an ammeter, an 8Ω dummy load, and a 500VA variable autotransformer. An isolated power supply was inserted between the mains AC and the autotransformer for safety reasons. The output voltage was adjusted to produce a desired current through the dummy load up to a practical limit of about 7A. A breaker or fuse was then placed in series with the circuit and the trip time was observed. For fuse tests, a visual trip monitor circuit was included by connecting an LED circuit in parallel with the fuse circuit. When the fuse failed, full circuit potential would be applied across the LED circuit.
Only one circuit breaker of each type was tested, so multiple trips were run and averaged. Thermal devices were cooled between test cycles to obtain a full reset. Current ratings were selected on the basis of availability and compatibility with the 7A current limit of the test kit. Only 3A fast-acting and time delay fuses are presented here, although multiple types and ratings were tested as a consistency check. Four 3A units were tested at each type and current level to produce an average data set. Fuses that survived a test were assumed to be damaged, and not reused.

The device shown in Figure 2 is a magnetic-type circuit breaker, rated at 4A nominal current and a maximum of 6A. In other words, the characteristics of this particular unit may vary, but no device of this type will fail to trip if 6A or more flow through it.
In repeated tests, the device was found to pass a maximum continuous current of about 4.25A before tripping. Above that current, the trip was effectively instantaneous.
A disassembled 2.5A device is shown in Figure 3, but the 0.5A device is essentially identical. When the thermal element overheats a bimetallic strip, the strip curves back and a forked spring releases the trip plate. Pressure from the coil spring opens the circuit. The 500mA device had a rather high cold resistance of 3.4Ω, rising to over 8Ω when the element-side terminal was briefly heated with a soldering iron. The device will shave at least 2V off the protected supply at the rated 500mA current, and is therefore unsuited to many low-voltage applications.
At 0.75A, or 150% of nominal, the breaker required about one minute to release. At 1A, or 200% of nominal, the breaker required 14 seconds to release. The unit required 1-2 seconds to release at 4A, and was still delayed by nearly one second at 5.5A, more than ten times the nominal current rating. While this behaviour is not typical for all thermal circuit breakers, the observed results reflect the danger of relying on devices with unknown specifications if precise results are required.
The device shown in Figure 4 is a physical replacement for a 3AG style, panel-mount fuse holder. Unlike the previous thermal device, this unit specifies a maximum normal operation resistance of 0.069Ω, which was experimentally confirmed with a high-precision Agilent benchtop meter. It will not meaningfully attenuate the supply voltage anywhere within its nominal current range.
At 3.75A, or 125% of nominal, the device produced clearing times ranging from as low as 82 seconds to as high as 122, with an average of 101 seconds. Increasing the current slightly to 4A (133%) produced more reliable tripping, as the average time dropped to 51 seconds with reduced spread. At 4.5A (150%) and 5.25A (175%), the respective average clearing times reduced to 17 seconds and nine seconds, and at 6.0A (200%) a consistent five second clearing was obtained.
Figure 5 - A Selection of Various Circuit Protection Devices
Pictured above are (numbered sequentially and from left to right) cartridge fuses ... (1) M205 (20 x 5mm), (2) 3AG, (3) 5AG, and (4) a specialty PCB mount unit. There are also plastic (automotive) blade fuses: (5) Maxi, (6) ATO and (7) Mini. We can also see (8) a self-resetting thermal switch, (9) live-body thermal fuse, and (10) a blinking decorative (e.g. Christmas) lamp that uses a self-resetting bimetallic switch - not a circuit protection device, but using an almost identical mechanism.
Multiple fuse types and ratings were checked for consistency, but two 3AG style models with a 3A rating were subjected to extended testing: CQ-ADL, rated for time delay applications, and CQ-AFE, designed for fast acting operation. Units were tested at continuous currents of 3.75, 4.00, 4.50, 5.25, and 6.00A. Four new, unused units of each type were tested at each current level (40 units total) and the results were averaged.
At 3.75A, no failure could be achieved in either fuse type, even when the test was extended out to 20 minutes. The fuse element deformed and the current dropped slightly, suggesting significant heating in the element, but the current finally stabilised and a steady-state condition was achieved. A nuisance failure would eventually occur after repeated cycles, but the circuit would not be protected under these conditions.
At 4.00A, type ADL required anywhere from 7-13 minutes to fail, and type AFE failed in a bit less than five and a half minutes. A real-world circuit would eventually be disconnected but no portion of the attached device would be reliably protected from damage.
At 4.50A, clearing times dropped dramatically for both fuses, to 01:43 (ADL) and 00:30 (AFE). A power transformer with reasonable thermal capacitance would be protected by either fuse, although a sensitive small-signal circuit fed from the transformer would be damaged. At 5.25A, clearing times reduced further to 50 seconds (ADL) and 6.3 seconds (AFE), and 6.00A produced 16.75 seconds (ADL) and 0.5 seconds (AFE). The circuit is now protected to the greatest extent practical.
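For reference, the averaged results can be collected into a small lookup table. This is a Python sketch; the 4.00A entries are approximate midpoints of the ranges quoted above, not additional measurements:

```python
# Measured average clearing times (seconds) for the two 3A fuse types.
# None means no failure was achieved within the 20 minute test limit.
CLEARING_TIME_S = {
    # test current (A): (CQ-ADL time delay, CQ-AFE fast acting)
    3.75: (None, None),
    4.00: (600.0, 330.0),   # ADL 7-13 min (midpoint), AFE ~5.5 min (approx)
    4.50: (103.0, 30.0),    # 01:43 and 00:30
    5.25: (50.0, 6.3),
    6.00: (16.75, 0.5),
}

def clears_within(current_a: float, limit_s: float, fast: bool = True) -> bool:
    """True if the tested fuse cleared within limit_s at this test current."""
    t = CLEARING_TIME_S[current_a][1 if fast else 0]
    return t is not None and t <= limit_s

# Only at 200% of rating does the fast-acting fuse clear in under a second:
assert clears_within(6.00, 1.0, fast=True)
assert not clears_within(3.75, 1200.0, fast=True)
```

The table makes the trend obvious: below roughly 150% of rating, neither fuse offers timely protection.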
The results can be estimated graphically as a TCC, although note that the scales in Figure 6 and Figure 7 are not logarithmic:

Slow-blow (or time delay) fuses are generally less predictable than fast blow types, and there are also different manufacturing techniques. It is impractical to try to test all types, because many suppliers consider the different types to be interchangeable, so obtaining a reliable supply of a specific type is unlikely. The general trends shown will still apply though. Also, expect to see minor differences between 3AG types as tested, and M205 (20mm x 5mm) types.

The test results demonstrate that a device can only be reliably protected when the available fault current is at least twice the nominal fuse rating.

How should a breaker or fuse be specified? The traditional approach is to use an available device rated at the nominal full load current, and then hope for the best. Some guesswork is unavoidable, but there are at least two ways to improve the odds. First, a safety factor of 2.0 is required, so the attached device must be able to sink a fault current of at least twice the nominal device rating in order to guarantee a clean trip. Anything that limits the maximum current into a short circuit must be considered carefully. For example, if the supply is 24V, and the circuit has a 10Ω source resistor in series with the power supply, a perfect short circuit on the low side of the resistor is limited to:
I = V / R = 24 / 10 = 2.4A
Accounting for the 2.0 safety factor, the fuse cannot be rated higher than the nearest size to 1.2A (possibly 1.25A, but typically 1A or 1.5A). Over-spec the fuse, and it will be nothing more than another section of wire in the feeder circuit.
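The sizing rule can be captured in a few lines of Python. The 2.0 safety factor and the 24V / 10Ω figures come from the worked example above; the function name is mine:

```python
# Sketch: largest fuse rating that a current-limited fault can reliably
# clear, applying the 2.0 safety factor established by the fuse tests.
SAFETY_FACTOR = 2.0

def max_fuse_rating(supply_v: float, series_r_ohm: float) -> float:
    """Largest fuse rating reliably cleared by a dead short downstream."""
    fault_current = supply_v / series_r_ohm  # perfect short past the resistor
    return fault_current / SAFETY_FACTOR

rating = max_fuse_rating(24.0, 10.0)
print(f"Fault current limited to 2.4A; fuse must be rated <= {rating:.2f}A")
```

In practice the nearest standard size at or below the result would be chosen (1A or 1.25A here, as noted above).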
Second, many hobby projects involve a transformer, and no transformer can supply unlimited fault current. If the transformer short circuit impedance can be determined, it can be used to calculate the maximum primary current that will ever flow during a secondary short. Transformer design is beyond the article scope, but we do need to know something about the short circuit test.
First, the transformer to be tested should be rated at least 20-25VA or so. Very small transformers tend to have high nominal impedances under normal operating conditions and will overheat and burn under a fault condition without tripping a conventional fuse. Where practical, the transformer should be fitted with a thermal fuse (as is done in 'wall wart' plug packs), but in some cases the designer may have to accept a transformer burnout as the normal failure mode. In that case, the fuse should be specified to protect the rest of the system without nuisance failures from supply inrush, and the transformer core should be completely isolated from the chassis, or adequately grounded where possible. Core grounding is a normal practice for all EI 'frame' transformers, but is disregarded in double-insulated equipment and cannot readily be achieved with toroids. The average hobbyist will find it difficult (and possibly illegal under local electrical safety laws) to construct DIY double-insulated equipment and is advised not to try unless using a self-protected plug pack, or equivalent low-voltage power supply with an insulated, protected output.
Second, a variable autotransformer and two digital multimeters are needed. See Figure 7. The transformer's nominal voltage ratings and the nominal power rating must be known in advance. The full load current, IFL, can be calculated for either set of windings by dividing the power rating by the nominal winding voltage. For example, if the transformer has a single 120V nominal primary and a 300VA power rating, the full load current through the primary is:
IFL = VA / V = 300 / 120 = 2.5A ... or ...
IFL = 300 / 230 = 1.3A
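As a quick check, the same calculation in Python, using the 300VA figures from the text:

```python
# Sketch: nominal full load current from the VA rating and winding voltage.
def full_load_current(va_rating: float, winding_v: float) -> float:
    """IFL = VA / V for the chosen winding."""
    return va_rating / winding_v

print(f"120V primary: {full_load_current(300, 120):.2f}A")  # 2.50A
print(f"230V primary: {full_load_current(300, 230):.2f}A")  # ~1.30A
```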
The short-circuit test can be run on either set of windings. For our purposes here, the primary will be connected to the variable autotransformer, with one multimeter connected as a series ammeter, and the other multimeter measuring the applied voltage. The secondaries are shorted. Small control voltage windings will contribute little to the test results, and could overheat during the test routine from normal error variations. These may be left floating, but all power windings must be involved for the test results to be accurate.
Although not used for the tests shown below, the safety isolation transformer should be considered essential. This is a normal transformer, with a secondary current capability of at least double the expected maximum test current. The primary should be rated for the mains voltage where you live, typically either 120V or 230V.

The autotransformer should be set at 0V, and power is then switched on. The voltmeter should confirm less than 1V across the transformer and the ammeter should not be measuring appreciable current. If this is not the case, STOP THE TEST NOW and check all equipment settings and connections! Otherwise, the autotransformer output voltage is slowly raised until the ammeter reports that the transformer primary's full-load current is flowing. Although it is not being measured, the corresponding full-load current will also be flowing through the shorted secondary windings. The short-circuit voltage can now be read from the voltmeter.
Note that it is not strictly required that the test current be equal to the nominal full load current. I ran some tests, and found that even a ±50% error in the test current changes very little. The final result is still quite accurate, provided both voltage and current are recorded accurately. It's also worth pointing out that should the transformer normally run hot in the (proposed) application, the test should be run at or near the normal operating temperature. Because copper has a positive temperature coefficient of resistance, the winding resistance will increase if the transformer is hot. For some smaller transformers especially, this may be sufficient to make a marginal fuse rating completely useless.
To determine the short circuit impedance, simply divide the short circuit voltage by the full-load current. For our 300VA transformer, if the applied voltage in the short-circuit test was 10V on the 120V winding when 2.5A were flowing, then the short circuit impedance would be:
RSC = VSC / IFL = 10 / 2.5 = 4Ω
The result is sometimes designated by Z rather than R since the impedance contains both resistive and reactive components. For simple current calculations, it can be understood and used like R in Ohm's Law.
A sample of test results for several real transformers follows in Table 3.
Manufacturer | Model | Pri-1 (V) | Pri-2 (V) | Sec-1 (V) | Sec-2 (V) | VA | IFL (A) | VSC (V) | RSC (Ω) |
Antek, Inc. | AN-0212 | 115 | 115 | 12 | 12 | 20 | 0.1739 | 9.84 | 56.6 |
Antek, Inc. | AN-0512 | 115 | 115 | 12 | 12 | 50 | 0.4348 | 10.70 | 24.6 |
Antek, Inc. | AN-1225 | 115 | 115 | 25 | 25 | 100 | 0.8696 | 8.77 | 10.1 |
Amveco Mag. | AA-28263 | 120 | - | 57 | CT | 288 | 2.4000 | 7.29 | 3.03 |
Amveco Mag. | AA-28263 | 120 | - | 57 | CT | 288 | 2.4000 | 7.33 | 3.05 |
ILP Mfg. | 49783R1-1014 | 120 | - | 24 | 24 | 160 | 1.3333 | 9.71 | 7.28 |
Toroid of MD | 4230 | 115 | 115 | 18 | 18 | 100 | 0.8696 | 5.85 | 6.72 |
Australian Tests - Full load current shown is test current - may differ slightly from rated full load current
Altronics (Toroid) | M5518 | 240 | - | 18 | 18 | 300 | 1.23 | 12.30 | 10.0 |
Altronics (Toroid) | M5525 | 240 | - | 25 | 25 | 300 | 1.215 | 13.49 | 11.1 |
Harbuch (Toroid) | 12417 | 240 | - | 48 | 48 | 500 | 2.15 | 9.71 | 4.5 |
CSE (E-I) | 96501105 | 110 | 110 | 28 | 28 | 200 | 0.909 | 19.80 | 21.8 |
Custom (E-I) | N/A | 240 | - | 28 | 28 | 300 | 1.24 | 16.61 | 13.4 |
Unknown (E-I) * | N/A | 120 | 120 | 12 | 12 | 4 | 17.0m | 22.1 | 1300 |
* 4VA transformer included for comparative purposes only.
We can now calculate the maximum current that can flow through the transformer primary when a short circuit occurs on the secondary by simply dividing the nominal primary voltage by the short circuit impedance. Consider the 50VA transformer from the table above. The nominal primary voltage is 115V and the short circuit impedance is 24.6Ω:
ISC = V / RSC = 115 / 24.6 = 4.7A
Even if both secondary leads are screwed down to a heavy bus bar while the transformer is energised, not more than about 4.7A will ever flow in the primary even while the transformer begins smoking! The transformer will be operating at about 540VA under these conditions. Comparing the experimental fuse data obtained earlier, we deduce that any transformer fuse rated higher than about 3A fast-acting or 2A slow-blow will not reliably protect this transformer from a fault condition occurring on the secondary side.
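The whole sizing workflow can be sketched in Python. The figures below are taken from the 50VA Antek entry in Table 3, and the 2.0 safety factor is the one derived from the fuse tests earlier; the function names are mine:

```python
# Sketch: from short-circuit test readings to a maximum reliable fuse rating.
SAFETY_FACTOR = 2.0  # fault current must be >= 2x the fuse rating

def short_circuit_impedance(v_sc: float, i_fl: float) -> float:
    """RSC = VSC / IFL from the short-circuit test."""
    return v_sc / i_fl

def max_primary_fault_current(v_nominal: float, r_sc: float) -> float:
    """Worst-case primary current with the secondary dead-shorted."""
    return v_nominal / r_sc

def max_reliable_fuse_rating(v_nominal: float, r_sc: float) -> float:
    """Largest fuse the limited fault current can reliably clear."""
    return max_primary_fault_current(v_nominal, r_sc) / SAFETY_FACTOR

# 50VA Antek AN-0512: VSC = 10.70V at IFL = 0.4348A, 115V primary
r_sc = short_circuit_impedance(10.70, 0.4348)
i_sc = max_primary_fault_current(115, r_sc)
print(f"RSC = {r_sc:.1f} ohms, ISC = {i_sc:.1f}A, "
      f"fuse <= {max_reliable_fuse_rating(115, r_sc):.1f}A")
```

The result (fuse no larger than about 2.3A) agrees with the conclusion above that a 3A fast-acting or 2A slow-blow fuse is the practical upper limit for this transformer.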
For a 230V transformer of similar ratings, we would expect that the full load and short-circuit currents will be roughly half those measured on the 115V transformer. As a result, the required fuse rating will also be about half that required to protect the unit described above.
Examining the comparison between the Amveco 288VA transformers and the Altronics 300VA (these are as close as we can get with the available samples), the full load and short circuit currents are as follows ...

IFL = 2.40A, ISC = V / RSC = 115 / 3.03 = 37.9A (Amveco)
IFL = 1.25A, ISC = V / RSC = 240 / 10.0 = 24.0A (Altronics)

This simple comparison shows that there is a very wide margin with these more powerful transformers, and the required fuse is about half for 240V (compared with 115V) as expected. Note that from these data, it is obvious that low power transformers are by far the hardest to protect. In many cases, an embedded one-time thermal fuse is the only safe way to protect transformers below about 10VA.
Look at the 20VA transformer in the table. Short circuit current is just over 2A, and full load current is ~170mA. This ideally requires a 200mA fuse, although a more readily available 500mA fuse will work. While the transformer is protected against a major fault, there is little protection against a sustained minor overload. The Australian 500VA transformer has a huge primary current variation, and it can cheerfully blow almost any fuse you use in the case of a serious fault (S.C. current is over 50A at 240V!).

As the transformer size goes down, so does the ratio of full load to fault current - for example, some ~2VA transformers may show a significant current increase with a shorted secondary, but the current is still so small that using a conventional fuse may not be possible. Protection against small but sustained overloads is zero, because of the tiny increase in primary current. See the conclusion for more on this topic, as there is more involved than may initially be apparent.
Fuses and circuit breakers should not be selected haphazardly. Each unique type has different functional characteristics, and that protective device may be the only thing standing between the user and a fire or serious electrical shock! If a safety factor is applied to a device's trip rating, and it is properly understood that a transformer can sink only a limited amount of current in a worst-case fault, then it should be possible to design a project for a plausible fuse or circuit breaker, rather than merely hoping that the most convenient thing off the shelf will suffice.
Fusing and equivalent protection devices have often been an uncertain field for many hobbyists. Although it is not possible to completely cover the topic in a short article, the reader has hopefully been provided with enough information to improve his or her understanding of the topic, and will be equipped to build safer, more reliable projects.
Beware of small transformers (typically anything less than perhaps 10-15VA; lower ratings are progressively worse). Because these normally run with a partially saturated core, the calculated full load current cannot be used - it must be measured at full rated voltage. Likewise, the short circuit current must also be measured with the full mains voltage applied. While this will seriously overload the transformer, if tests are kept brief (as long as it takes to get an accurate measurement), the transformer will not be damaged. Make sure you allow time for it to cool to normal quiescent temperature between tests.
Everything becomes more complex with small transformers, because of core saturation. They are manufactured like that because the regulation would be woeful otherwise, due to the high winding resistance (about 550 ohms for the 4VA unit tested). So, while the calculated full load current of the 4VA transformer is 16.6mA, with 230V applied, the transformer draws 51mA with no load, 56mA at full load, and 190mA with the secondary shorted - when you consider that you need a ratio of at least 2:1 for reliable fusing within a sensible time frame, this transformer cannot be protected reliably with available fuses, and any fuse will be somewhere between dubious and worthless.
This is made a lot worse because the copper wire has a positive temperature coefficient, and the resistance increases as the transformer gets hot. Eventually, a point of thermal equilibrium would seem likely, but it will be at a temperature above the allowable maximum for the insulation. During testing, the current could be seen falling as the fault was maintained (I did a heat test for 1 minute). By the time I switched off the power, primary current was already down to ~150mA and still falling. At that stage, the transformer could only be considered warm - it was not hot (at least not on the outside - I couldn't measure the winding temperature directly, but it can be calculated as being around 90°C based on the resistance increase).
After perhaps 5 minutes or so, one can expect that the current would fall to less than 100mA as the copper heats up more and its resistance increases. By this time, the transformer would be dangerously hot. The chances of any readily available fuse being able to protect this transformer are virtually nil - even for a dead short on the secondary. Protection against a minor overload is impossible, because the increase in primary current is so small between no load and full load. Smaller transformers show even smaller variations, hence the common application of a one-time thermal fuse buried in the windings. In many applications, the thermal fuse is the only way the transformer can be protected from a catastrophic (and potentially lethal) failure.
Large (500VA or more) toroidal transformers pose an additional challenge. While the full load current may only be a few amps, the inrush current (the current that flows when power is first applied) is often limited only by the DC resistance of the primary winding. This can make the fuse selection very difficult, since it must withstand a current of 50A or more for one mains cycle, yet protect the equipment against a sustained overload - not necessarily a short circuit on the secondary. Slow-blow (time delay) fuses are one solution; the other is to use a soft-start circuit that limits the peak current to something that a fuse can handle without fatigue. See Project 39 for an example.
Ultimately, the user must realise that all forms of in-line protection are a compromise. Both fuses and circuit breakers can protect against catastrophic failure if properly selected, but it is extremely difficult to provide adequate protection against a sustained overload, particularly with audio power amplifiers. It is very common that the power transformer will be (sometimes severely) overloaded when both channels of an amplifier are driven to full power. This is not normally a problem, because in normal use the maximum power is only needed for transients, so the overload is brief. However, the fuse must not blow during normal use, but if the amp is driven to clipping and kept there for some time, the transformer will overheat. Whether it is damaged or not depends on just how hot it gets and the design of the transformer itself. Generally, this is information that arrives too late - especially with DIY equipment. Many people have said that the ESP designs are very conservative, and this is entirely deliberate. Suggesting a transformer that is larger than really needed means that a sustained overload is unlikely to cause the transformer to fail, and makes it a lot easier to apply proper fusing.
As Aaron points out, selection of protective devices is not as simple as it may seem. Any fuse or circuit breaker should be selected based on either the transformer manufacturer's recommendations, or after some basic tests to determine the limits. In general, transformers between 75VA and 300VA are reasonably easy to protect, against both catastrophic failure and sustained overload, although even these can be affected by significant power-on inrush current (~150VA and above). A slow blow fuse of the appropriate rating usually offers the best protection at the lowest cost. The issue of 115V vs. 230V does add an extra layer of complication though, so make sure that you understand the facts before you decide on a particular fuse value.
You can't measure the temperature directly because the primary winding is inaccessible. Because the primary is almost always wound first, it's buried to the point where you usually can't get to it. However, we can easily use the tempco of copper and a bit of maths to work it out, just by measuring the cold and hot resistance. The result will always be approximate, because we don't know how the copper wire has been processed (so its tempco can be somewhat variable).

There are some discrepancies as to the actual coefficient of resistance for copper - figures found on the Net range from 3.9 × 10⁻³ to 4.3 × 10⁻³. I have adopted a middle ground, settling on 4 × 10⁻³. Feel free to use the value with which you are most comfortable. Note also that the coefficient of resistance does change depending on whether the copper is hard drawn or annealed.
If we accept that copper has a thermal coefficient of resistance of 4 × 10⁻³ per °C, then a transformer with a DC resistance of (say) 50 ohms at 25°C will, at 150°C, see this increase to ...
RT2 = RT1 × ( 1 + α × ( T2 - T1 ))
where T1 is the initial (ambient ¹) temperature, T2 is the final temperature, and α is the thermal coefficient of resistance. Substituting our values in the above equation we get ...
R150 = R25 × ( 1 + 4 × 10⁻³ × ( 150 - 25 )) = 75 Ω
¹ Note that 'ambient temperature' is the temperature immediately adjacent to the device. It is not the temperature in the room, outside, or in Outer Mongolia!
Conversely, if we know the change in resistance, then it's an easy matter to calculate the final temperature (T2), provided we have a reference resistance taken at a known temperature before the test (T1, 50 ohms).
ΔT = ΔR / ( RT1 × α )

Where ΔT is the temperature rise and ΔR is the change in resistance. For the previous example the change in resistance is 25 ohms (75Ω - 50Ω), so we get ...

ΔT = 25 / ( 50 × 4 × 10⁻³ ) = 125°C
T2 = ΔT + T1 = 125 + 25 = 150°C
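Both directions of the calculation are easily scripted. This Python sketch uses the α = 4 × 10⁻³ figure adopted above; the function names are mine:

```python
# Sketch: winding temperature estimation from resistance change,
# using alpha = 4e-3 per degree C for copper (an adopted middle value).
ALPHA_CU = 4e-3  # thermal coefficient of resistance, per degC

def hot_resistance(r_ref: float, t_ref: float, t_hot: float) -> float:
    """RT2 = RT1 * (1 + alpha * (T2 - T1))."""
    return r_ref * (1 + ALPHA_CU * (t_hot - t_ref))

def temperature_from_resistance(r_ref: float, t_ref: float, r_hot: float) -> float:
    """T2 = T1 + deltaR / (RT1 * alpha), inverting the formula above."""
    return t_ref + (r_hot - r_ref) / (r_ref * ALPHA_CU)

# The worked example: 50 ohms at 25degC rises to 75 ohms at 150degC.
print(hot_resistance(50.0, 25.0, 150.0))            # ~75 ohms
print(temperature_from_resistance(50.0, 25.0, 75.0))  # ~150 degC
```

Remember the reference resistance must be measured at a known (ambient) temperature before the heat run, or the inferred temperature is meaningless.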
The only way to determine just how long it will take for a transformer winding to reach any given temperature is by measurement. Although it is (theoretically) possible to calculate it, this would require far more information than you will be able to obtain from the maker, and far more maths than I am prepared to research and pass on.
I expect that few people will ever bother to take measurements and run calculations as shown here. That's a pity, because there is so much to learn; while it may not be immediately necessary, it is useful, and increases overall understanding and general knowledge.
It's not uncommon that fuses are installed in the speaker outputs of some amplifiers (I never include them in this position), and they are sometimes also used in speaker boxes as part of the crossover network - usually to protect the tweeter. In short, this isn't a good idea, because the constantly varying temperature (and therefore fuse resistance) can actually cause distortion! Because it's almost always external to the feedback loop of a power amplifier, the distortion is uncompensated, and while it's usually inaudible, it is there nonetheless. Beware of 'Audiophool' fuses at insane prices that claim to 'fix' the problem, because they don't and can't. Fuses in an amplifier's supply lines (or in the transformer's primary circuit) do not cause this problem, regardless of claims by charlatans. Amplifiers (and preamplifiers) are mostly unaffected by small changes in the supply voltage, as evidenced by the fact that power supply ripple (which is present in almost all power amp supplies) doesn't get through to the output.
There is an 'industry' of fraudulent 'products' that never ceases to amaze (and annoy) technical writers and those who actually understand the physics behind electronic products. Amongst these are the 'audiophile' (or audiophool) fuses, which not only claim to prevent the issues above, but in some cases are even 'directional', even though no diode is used. The prices charged can be astonishing, as are the claims made by the sellers and their 'satisfied customers'. This is fraud, and the fuses (like most of the other nonsense these mongrels sell) will (and can) never do anything like the sellers claim. A fuse relies on the material heating as current increases, so it must (by definition) have resistance. While there are various materials with a well defined (and low) temperature coefficient, most have extraordinarily high melting temperatures. These alloys are used in high power resistors, but they are not suitable for fuses. Anyone silly enough to pay US$150 each for 'quantum' (quantum my arse!) fuses is not thinking clearly, or is simply hoodwinked by people who should be evicted from this planet! And yes, you can pay even more - anyone for fuses with bees wax and 'special noise reducing powder' for just US$225 apiece?
In the lands of snake-oil, there are many claims made, zero real science (unless you consider voodoo to be 'science') and no reason to buy anything other than 'ordinary' fuses from reputable manufacturers. Using fuses in speaker lines is generally unwise, but it would be equally unwise to bypass them if fitted, because that may place your speakers at risk. A better proposition is to use a DC protection circuit (such as that described in Project 33), but this isn't always possible in a commercial product. There's no reason that it can't be built as an 'outboard' unit I suppose, but that would make it a rather 'interesting' add-on.

Of course, if you happen to believe the nonsense from the snake-oil vendors you'll completely ignore everything that's been written in this article. Because the 'special' fuses you buy are (allegedly) from small workshops with mermaids as the workforce, you can be fairly sure that they won't have any test data, nor will they have any certification from UL, CSA, IEC or any other standards organisation. This could leave you in an untenable position with an insurer if your equipment catches on fire and burns down the house, but this is presumably a minor annoyance that's overridden by the vast improvement that you imagine you hear.

In general, avoid on-line 'reviews' and opinions. Few are based on any science whatsoever, and most are wishful thinking. Audio fraud is one of those things that seems impossible to stop, either because the authorities aren't interested in 'small players' or no-one has complained loudly enough. It's a sad fact that many victims of fraud never report it because they think it may make them look stupid.
High voltage fuses are different from 'conventional' fuses, in that they must be able to quench the arc that develops under fault conditions. This isn't something that most hobbyists will need, but when you're dealing with several thousand volts, the vaporised metal that forms on the inside of the fuse housing can become a conductive path. High voltage fuses almost invariably use a ceramic tube, which is filled with ceramic powder. The fuse itself must be long enough to ensure that creepage (distance along the body of the fuse) and clearance (distance through air between conductive parts) distances are maintained to suit the voltage. HRC (high rupture capacity) fuses are common for high voltage protection, and the cartridge is filled with fine silica sand, or other medium suitable for quenching the arc.
If you happen to be playing with valve (vacuum tube) amplifiers, you may find that a 'normal' 250V glass fuse is inadequate, and you'll need a fuse that is specifically rated for high voltage operation. This becomes more important if the fuse is in the DC supply, because DC can maintain an arc far more effectively than AC, which by definition passes through zero twice each full cycle.
There are no specific references, but various datasheets from reputable fuse manufacturers were used to verify some of the data. In particular, Littelfuse and RS Components datasheets were the source of some of the tabulated results shown in the Foreword to this article. Provided you avoid the charlatans there's a lot of good information to be found, but the test results that Aaron showed are his own work and don't rely on published data.
Elliott Sound Products - Guitar Amplifiers
Vast numbers of guitar amps are sold every year, and of those being used, quite a few will fail for one reason or another. It is commonly believed that guitar amp makers know what they are doing, and produce the best product they can. Anyone who has repaired a number of guitar and bass amps quickly learns that it is a myth that they are well designed. Some are, but a great many are not.
The errors made by the manufacturers are many, and that includes some of the best known amps available. I won't name any names, but it's not hard to figure out the popular brands I might be referring to. While these makers offer both valve (vacuum tube) and transistor amps, I will concentrate on transistor amps here. The design mistakes in valve amps are many and varied; some are relatively minor (but will reduce valve life) while others are close to unforgivable.

Because repairing valve amps is something that should be left to professionals who know the circuits and their specific quirks (and how to fix some of the more serious design errors), they will not be discussed here. Very few guitarists who have paid big money for a valve guitar amp will accept major changes ... sometimes even if the changes will improve the sound and increase valve life!

While we would hope that transistor (and valve, for that matter) amp design would be mature and that mistakes would be few and far between, sadly this is not the case. Common mistakes that you will find are over-stressed output stages, heatsinks that are too small, and barely adequate power supplies.

With many of the 'low end' guitar amps, the electronics are the cheapest part of the amplifier, usually followed closely by the speaker. A replacement speaker can set you back a significant amount, and unless you have access to a woodworking workshop, making the case isn't trivial. In these cases, replacing the amplifier makes a great deal of sense. For a fairly small outlay you can have an amplifier that will work reliably for many years. It will never have the big-name brand, but that should be secondary to the amp's sound. It may even make sense to keep only the cabinet and (perhaps) the power transformer, and replace everything else.
Contrary to what you might expect, the design of a good-sounding transistor guitar or bass amp isn't hard, but there are a few things that must be considered. For hi-fi, we are interested in a nice flat response from well below 20Hz to over 20kHz, but guitar and bass don't need any such thing. The lowest note on a guitar is 80Hz (close enough - bottom E is actually 82.4Hz if tuned to concert pitch), and extended bass doesn't sound good. Good guitar speakers will have little response above 5-7kHz or so, so there is no reason to have extended high frequency response.

Bass normally extends to 41.2Hz, but 5 and 6 string bass guitars get down to just over 30Hz. Guitar, bass and other plucked-string instruments typically have a predominant second harmonic. This means that the majority of the bass energy for the open bottom string is either ~60Hz or ~80Hz, and for guitar is ~160Hz. No-one ever designs instrument amps that deliberately remove the fundamental frequency though - this is left to the player, tone controls and external pedals. Needless to say, the speaker also plays a significant role - most 'combo' style guitar amps have an open back, and this reduces the bass response dramatically.
While the amp doesn't need wide frequency response, it will tend to get it automatically. This is the simple reality of transistor amps - you get wide bandwidth for free. Even the vast majority of valve guitar and bass amps have a much wider frequency response than the player will ever use. On the other hand, few hi-fi manufacturers worry too much about the performance of their amps when driven into clipping, but this is the way many guitar amps are operated for much of the time. It is essential to use an output stage design that clips gracefully, and doesn't make any nasty noises in the process.

Guitar amps have a very hard life on the road, and it is guaranteed that they will be used under circumstances where no-one would normally expect electronic equipment to function. High ambient temperatures, weird and wonderful combinations of speaker boxes, speaker leads that get pulled out while the amp is playing at full volume - these are all common. Few amplifiers will withstand this kind of abuse, even though they are supposedly designed as guitar amps. The most common errors in both valve and transistor guitar amps are the result of penny-pinching, and (despite claims that you may hear) do not improve the 'tone' of the amp. The converse is also true - fixing the faults will not make the amp sound worse (often it will be a lot better), but reliability can often be improved dramatically.

As an example of a very conservative design, have a look at Project 27. This is an extremely popular design, and thousands have been built. Failure is almost unheard of because the amp is deliberately over-designed. The output transistors recommended are rated at 125W each, and there are two in parallel. The worst case dissipation is around half the total allowable transistor dissipation when used with the suggested supply.

While I could also show many examples of highly marginal designs (from the popular brands alluded to above), I will not do so. Suffice to say that IC power amp chips or a single pair of 125W Darlington output devices operated from 40V supplies will generally fail when pushed hard - especially if the heatsink is either marginal or far too small. Other designs are guaranteed to have poor clipping behaviour or may not be able to drive some speaker loads without making horrible noises. In general, most of the designs will sound alright when driven hard though - it is power supplies and/or thermal management that let them down.
When a commercial amp fails, it may be thought that it was a random event. In some cases you might be right. However, after the amp has been repaired several times with the same problem (typically a failed power amp), you could be excused for thinking that something must be wrong. If this is the case, there is little point blaming the repairer or repairing the power amplifier again. Sooner or later you know it will fail again, and if that happens to be right in the middle of a gig then you have every right to be unhappy.

Most IC power amplifiers have comprehensive internal protection, and it may be thought that these are ideal for guitar amps. Although basic over-current protection is helpful, severe protection schemes make a guitar amp pretty much useless. An amp that switches itself off or goes into thermal shutdown in the middle of a song is not useful - the old saying that "the show must go on" applies to the electronics as well as the performers.

So, if you have an amp that has failed more than once, or cuts out in the middle of a gig, what do you do? Repair isn't helpful, because technically the amp (and/or the IC) is doing exactly what was intended. The manufacturer obviously didn't understand the expectations of musicians and failed to ensure that the amp would continue to function under highly adverse conditions. You would think that established guitar amp makers would have learned all the lessons they need to make a reliable product, but alas, this doesn't seem to be the case. It must be said that some are very reliable indeed - not all have issues.

The most sensible option is to replace the amplifier module entirely. Many of my customers have done just that, and as noted above, Project 27A (the power amplifier) is well suited and has been used for just this purpose. Project 101 is another amp that has been used to replace existing power modules in commercial guitar and bass amps. For bass, Project 68 has been used - it is not recommended for guitar, because it is much too powerful. If you really do want ear-shattering volume on stage, then it's far better to use 2 or 3 smaller amps (around 100W) than one large one. At the very least you have redundant amps and will not be left playing air-guitar if one fails.

All of the ESP amps that have been used as replacement modules have provision for current feedback. This gives the amps inherent current limiting, and also gives a better sound for guitar and bass. The current feedback increases the output impedance of the amplifier, making it sound more like a valve amp, but much more reliable.

Regardless of which module is used, it is essential to provide a substantial heatsink and excellent thermal management. Particularly for stage equipment, it is important to keep all semiconductors at the lowest possible temperature to ensure reliability. Some commercial amps have barely enough heatsink to survive even at normal household ambient temperature. The chances of long-term survival on stage under hot lights are very slim for such designs.

In the majority of amp failures, the power transformer will survive, and it's important that the original supply voltages aren't exceeded. Reusing the transformer ensures this, but if it does need to be replaced, get one with the same output voltages. This keeps the amp's power the same and protects the loudspeaker from excess power. If the speaker has also failed then you can choose a replacement that can handle more power, but be certain that you actually need it! Guitar speakers are usually very sensitive (around 100dB/W/m is common), and they make a lot of noise with only a few watts.
In the previous section, I mentioned 'current drive' and high output impedance. These two terms describe the same thing, and are the result of using current feedback. This deserves more attention, because it is very common in transistor guitar amps - a little less so for bass amps. The speaker return current flows through a resistor, as shown below.
Figure 1 - Basic Current Feedback Scheme
As shown, the speaker current is determined by the applied signal voltage. An input of 1V will cause 5A to flow in the load, regardless of the load impedance (up to a point!). In contrast, a conventional power amp is designed so that the speaker voltage is determined by the input voltage. This is shown in Figure 2. If 1V is applied to the input, the speaker voltage will be 20V - this is a voltage gain of 20. This gain (and output voltage) will also cause the load current to be 5A, but only if the load is resistive. Loudspeakers have an impedance that changes between inductive, capacitive and resistive, depending on the applied frequency.

In case you were wondering, R2 (1k resistor) across the speaker is there so that the amp won't be completely open loop with no speaker connected. With no speaker, the gain is nominally 5,000 but this will never be reached in practice, and a more realistic gain will be no more than 100 or so. This is a highly simplified diagram. As shown, the amp has gain down to DC, and that is a very bad idea for a guitar amp. The feedback network is always more complex, to separate the AC and DC gain and to limit the gain with no speaker to something 'sensible'. In practice, all guitar amps that feature high output impedance will use mixed feedback (both voltage and current feedback).
Figure 2 - Conventional Voltage Feedback Scheme
If the load is 4Ω, the two circuits above will give the same power. Assuming 1V RMS input, the power in the load will be ...
P = I² × R = 5² × 4 = 100W
P = V² / R = 20² / 4 = 100W
Both amps give exactly the same power into the load, but the current drive amp needs a tiny bit of extra voltage to compensate for the voltage lost across the 0.2Ω current sensing resistor. The power dissipated in R3 (current drive) will be 5W when the load receives 100W. This is always the case - a certain amount of power is lost so we can monitor the current. Because the speaker's impedance varies widely with frequency, the small power loss will never be noticed.
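As a sanity check, the arithmetic above can be run directly. This is a minimal sketch (mine, not from the original article), using the values given for Figures 1 and 2: 5A through the load for current drive, 20V across it for voltage drive, and the 0.2Ω sense resistor.

```python
def power_current_drive(i_load, r_load):
    """P = I^2 * R for a current-output (transconductance) amplifier."""
    return i_load ** 2 * r_load

def power_voltage_drive(v_load, r_load):
    """P = V^2 / R for a conventional voltage-output amplifier."""
    return v_load ** 2 / r_load

R_LOAD = 4.0   # load resistance in ohms, as in the example

print(power_current_drive(5.0, R_LOAD))   # 100.0 W
print(power_voltage_drive(20.0, R_LOAD))  # 100.0 W

# Power lost in the 0.2 ohm current sensing resistor at 5 A (I^2 * R):
print(power_current_drive(5.0, 0.2))      # ~5 W
```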
To understand the real difference between voltage and current drive, we need to look at the power developed as the load impedance varies. For this experiment, we will use an input signal of 0.1V, which will provide 2V across the load, and/or 0.5A through it. The mixed feedback case combines both voltage and current feedback, to give an output impedance of 8Ω. This is somewhat harder to define exactly without a fair amount of calculation, but we end up with an open circuit output voltage of 6V RMS.
Figure 3 - Mixed Feedback Scheme
Mixed feedback is shown above. There is voltage feedback provided by R2 and R3, but when there's no speaker attached the gain is influenced by R4. Gain with no speaker is about 240, and with a 4Ω load this falls to 38, because the voltage developed across R5 is part of the overall feedback network and increases the amount of feedback applied. At intermediate impedances the gain changes accordingly, so at 8Ω, the gain is 65. If the amp were designed for pure current feedback, the gain would double into an 8Ω load compared to that at 4Ω. By changing the value of R4, it becomes possible to modify the output impedance to anything you like. With all other values as shown, R4 needs to be around 260Ω for an 8Ω output impedance. This is an approximation - precision is not necessary, so 220Ω or 270Ω would work just as well.
| Load Impedance | Power (Zout = ∞) | Power (Zout = 0) | Power (Zout = 8Ω) |
|---|---|---|---|
| 4 Ω | 1 W | 1 W | 1.0 W |
| 8 Ω | 2 W | 500 mW | 1.13 W |
| 16 Ω | 4 W | 250 mW | 1.0 W |
| 32 Ω | 8 W | 125 mW | 720 mW |
As you can see, when the impedance increases, the power from a traditional voltage amplifier will decrease. A current amp behaves exactly the opposite, and increases the voltage as the impedance rises so the current remains the same. Most guitar amps are configured to have an output impedance of somewhere between 4 and 100Ω. This might seem like a very large variance, but in reality it's not as audible as you might think. The mixed feedback system gives almost constant power to the load (within about 2dB up to 32Ω) for an amp with 8Ω output impedance. You can simply use a resistor in series with the load to increase the output impedance, but that wastes a lot of power. Using feedback is a much better method.
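The table can be reproduced by treating each amplifier as a Thevenin source (an open-circuit voltage behind an output impedance). This sketch is mine, with values taken from the text: an ideal 0.5A current source, an ideal 2V voltage source, and 6V RMS behind 8Ω for the mixed feedback case.

```python
def load_power(v_oc, z_out, z_load):
    """Power delivered to z_load by a Thevenin source v_oc behind z_out."""
    v_load = v_oc * z_load / (z_load + z_out)
    return v_load ** 2 / z_load

for z in (4, 8, 16, 32):
    p_current = 0.5 ** 2 * z             # ideal current drive: P = I^2 * Z
    p_voltage = load_power(2.0, 0.0, z)  # ideal voltage drive, Zout = 0
    p_mixed = load_power(6.0, 8.0, z)    # mixed feedback, Zout = 8 ohms
    print(f"{z:2} ohms: {p_current:.3f} W  {p_voltage:.3f} W  {p_mixed:.3f} W")
```

Running this reproduces each row of the table (1.125W into 8Ω and 720mW into 32Ω for the mixed case, for example).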
Despite claims to the contrary that you might hear, there is nothing to suggest that using a voltage amp with equalisation sounds any different from using a current amp and relying on the speaker impedance varying with frequency. Using a current amp has advantages though - the input voltage determines the speaker current, and the current does not change as impedance is reduced. This can save the amp from failure - at least in the short term.

Using the same input voltage as shown above, a voltage amp attempting to give 2V output will try to deliver 20A into a short circuit (assume a typical resistance of around 0.1Ω). A current amplifier configured as shown will deliver 0.5A into any impedance - including a shorted speaker lead. This adds a layer of protection that can make the difference between instantaneous amp failure or not. Current drive does not provide long-term protection, and if driven into a shorted lead for more than a few seconds the amp will probably fail.

Voltage drive means that the amp will try to produce the required output voltage into a short circuit, and without some form of protection the amp will usually fail almost instantly. Full 'load-line' or safe operating area protection is offered by some commercial designs, but it is imperative that it doesn't operate under any normal condition. This is an extremely difficult requirement for a guitar amp, because they are often driven extremely hard for much of the time.

The Project 27A power amp uses mixed mode feedback and operates largely in current drive (the decision is up to the constructor), and also has current limit protection circuits. P101 (MOSFET power amp) has provision for current feedback, as does P68.
As long as the power amp is driven into distortion, output transistor dissipation is actually very low. If the designer relies on this to select transistor power and heatsink size, disaster will surely follow. Guitarists commonly use pedals or master volume controls to get the required amount of 'grunge' but at reduced volume. It is entirely likely that the amplifier will be operated at the absolute worst-case dissipation for long periods. Inadequate heatsinks or poor thermal design will result in a failed amplifier.

This topic is somewhat counter-intuitive, and as such deserves (demands?) some additional explanation. We need to look at power transistor dissipation under a variety of conditions, and I apologise in advance for the technical nature of the discussion. Unfortunately, any attempt at simplification would likely result in falsification, and the processes involved are not well understood - even by some 'professional' designers. The amplifier used for these simulations is shown below. Note that although an opamp symbol is used, it is an 'ideal' opamp, so it has infinite voltage and current capability.

I mentioned earlier that guitar amp design is not hard, but unfortunately it is hard if the designer does not understand the consequences of installing something as simple as a master volume control, most commonly with a voltage limiting (clipping) circuit. It is entirely possible to double the average power stage dissipation, simply by the setting of the master volume. I know this might sound unlikely, but the following measurements show what happens.

For this exercise the load will be resistive, so it is constant across the frequency range. This is done for the sake of simple explanation, as it gets complex very quickly if a real speaker load is used. The example amplifier has 35V supply rails - just right for a 100W / 4Ω guitar amp. In both cases, the signal is clipped at exactly the input voltage required for full output. In the first case, the clipped signal is applied directly to the amp's input, and in the second it is attenuated to exactly half voltage (one quarter power - nominally 25W) with the master volume control.
Figure 4 - Test Amplifier For Dissipation Measurements
If the amp is driven to full output stage clipping with the band limited noise signal I used, output voltage (RMS) is 22.8V across the load. Average load power is 130W. Transistor dissipation is around 23.5W (average) for each device (one NPN and one PNP). When the master volume (VR1) is reduced to give half the output signal (12.8V), load power is reduced to 41W, but transistor dissipation rises to almost 29.5W per transistor. The guitarist is not to know that this is worse for the amp than driving it into clipping at full volume, and it is the responsibility of the designer and manufacturer to ensure that the transistors and heatsinks are up to the task.
Note that the amp is shown as a voltage amplifier, not a current amplifier. When the load is fixed and resistive, the performance of voltage and current amps is identical. However, current drive (or just increased output impedance) makes example calculations difficult with speaker loads because their impedance varies with frequency.
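The load powers quoted above follow directly from the RMS voltages and the 4Ω resistive test load; they can be checked with a couple of lines:

```python
def load_power(v_rms, r_load=4.0):
    """Average power into a resistive load from an RMS voltage."""
    return v_rms ** 2 / r_load

print(round(load_power(22.8)))  # 130 W at full clipping
print(round(load_power(12.8)))  # 41 W with the master volume backed off
```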
Figure 5 - Test Signal After Clipping Circuit (Point "A")
The input signal is such that the average level is shown above, and has moderately heavy clipping. The level was carefully adjusted so the output transistors were subjected to a power dissipation that I know is realistic from many years of working on and with guitar amps. This figure cannot be accurately specified though, because noise (used for the test) is random in nature ... even in the simulator. This is not changed much when a guitar is played - the signal voltage varies constantly, even just playing the same chord over and over.
Although 29W doesn't sound like a great deal, there are two transistors, so the total into the heatsink is almost 60W. The transistor metal tab area is small, and the total thermal resistance from junction to heatsink will usually be over 2°C/W. The transistor's junction temperature will rise by at least 60°C above that of the heatsink! Naturally, if the heatsink is allowed to get hot, the transistors will quickly reach their thermal limit and failure is only a matter of time. If a heatsink has to get rid of 60W of heat without its temperature increasing dramatically, it has to be very large indeed. A heatsink with a thermal resistance of 0.5°C/W is physically rather large, but with 60W of heat being pumped in, the heatsink will operate at around 55°C ... but only if the ambient temperature on stage remains at 25°C! Not likely.
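The temperature figures above follow from a simple thermal chain: ambient, plus power times thermal resistance at each stage. This sketch (mine) uses the values quoted in the text - two devices at about 30W each, 2°C/W from junction to heatsink per device, and a 0.5°C/W heatsink:

```python
def junction_temp(p_device, rth_j_hs, p_total, rth_hs_amb, t_ambient):
    """Die temperature: ambient + heatsink rise (total power) + device rise."""
    t_heatsink = t_ambient + p_total * rth_hs_amb
    return t_heatsink + p_device * rth_j_hs

# Two devices at 30 W each (60 W total), 25 C ambient:
print(25 + 60 * 0.5)                        # heatsink at 55.0 C
print(junction_temp(30, 2.0, 60, 0.5, 25))  # 115.0 C die temperature
```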
To understand exactly what is happening, we will use the same ±35V supplies that were used in the example above. If the amp is clipping, the only significant power dissipation occurs during transitions between positive and negative limits. The voltage across the transistors when turned fully on might be around 1V - this depends on the output stage topology. Current is close enough to the full supply voltage divided by load impedance, so we can determine power dissipation at maximum positive and negative excursions ...
I = V / R = 35 / 4 = 8.75 A
PTOT = V × I = 1 × 8.75 = 8.75 W
P = PTOT / 2 = 4.375 W (each output transistor)
When the master volume level is reduced so that the output voltage swing is exactly half the supply voltage, there will be 17.5V across the load, so current is 4.375A ...
I = V / R = 17.5 / 4 = 4.375 A
PTOT = V × I = 17.5 × 4.375 = 76.56 W
P = PTOT / 2 = 38.28 W (each output transistor)
Note that these are maximum worst case theoretical average values, based on a perfect squarewave. The actual average power will usually be more like that shown above with the simulated signal waveform, but it can get very close to the theoretical maximum if heavy overdrive is used.
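The squarewave arithmetic above can be generalised with a short function. This sketch (mine) uses the same approximations as the text: load current of roughly Vout/R, about 1V across a saturated output device, and the total dissipation shared equally between the two halves of the output stage.

```python
def squarewave_dissipation(v_supply, v_out, r_load, v_sat=1.0):
    """Total average output stage dissipation for a squarewave of peak
    amplitude v_out, from supply rails of +/- v_supply."""
    i_load = v_out / r_load
    v_across = max(v_supply - v_out, v_sat)  # volts across the conducting device
    return v_across * i_load                 # shared by the two output devices

p_full = squarewave_dissipation(35.0, 35.0, 4.0)  # driven to the rails
p_half = squarewave_dissipation(35.0, 17.5, 4.0)  # half swing (master volume)
print(round(p_full, 2), round(p_half, 2), round(p_half / 2, 2))
```

Half swing is far worse than full swing, which is exactly the point being made.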
Doubling the number of transistors is not just for extra safety, it is essential. There is no other way to keep the die temperature within allowable limits with the transistors typically used for guitar amps. For a great deal more on this topic, see the articles Semiconductor Safe Operating Area and Heatsinks.

Now consider a chip amp, operating from ±35V supplies and driving a 4Ω load. The IC dissipation could quite easily exceed 50W for extended periods of time. This may seem to be (almost) within ratings, but the combination of thermal resistance from junction to case, case to heatsink and heatsink to ambient air will nearly always conspire to cause overheating. Remember too that an amplifier driving a reactive load may have an instantaneous dissipation up to twice that developed when driving a resistive load. With any IC power amp, the hot bits are concentrated in a rather small area, which makes removing the heat harder than with larger discrete devices.
As a result, the IC will either shut down or fail. For example, the TDA7293 is a high power IC amplifier, and the maximum rated dissipation is 50W at a case temperature of 70°C, according to the datasheet. If operating a TDA7293 (or similar) as a guitar amp, there is simply no economical way to keep the case temperature at or below 70°C unless the supply voltage is reduced, resulting in a less impressive output power specification. The actual allowable continuous power dissipation is closer to 25W than the claimed 50W. If pushed to the maximum, failure is inevitable. Thermal shut-down is not helpful if it happens in the middle of a song in front of an audience. Total failure is (of course) even worse - especially if there is no spare amp.
Another popular IC power amp is the LM3886, although as far as I know only one major manufacturer uses it in a guitar amp. The data sheet claims that it can dissipate 125W, but a footnote states that this is at a case temperature of 25°C. Quite obviously, it is impossible to maintain the case at 25°C regardless of the size of the heatsink. The thermal resistance between semiconductor die, case and heatsink will be sufficient to reduce the real continuous power dissipation to perhaps 25W at best.

Smaller amplifiers are not immune either. Several amps (known and unknown brands) use the TDA2050, LM1875 or similar for typical output powers of 20-30W. At least one (but probably a great many) unknown Chinese made amp has a claimed output of 50W, but it produces only 20W. In reality, this is a good thing because the IC is hard pushed to produce even the lower power continuously. When the master volume is reduced we get the same problem as above, but now we have a device in a TO-220 package. The realistic absolute maximum continuous dissipation allowable in this package is around 10-15W. Above that, the case to heatsink thermal resistance will cause it to overheat. If allowed to dissipate 20W, the silicon die will be running at least 30°C above the case temperature! The case will be at least 30°C above the heatsink, which will be perhaps another 30°C above ambient. So, at 30°C ambient, the die runs at 120°C. The same process applies for all amps of all sizes, and calculation isn't hard.

We will first estimate that the worst case ambient temperature could be as high as 35°C (not at all unreasonable on stage), and select a nice chunky heatsink with 0.5°C/W thermal resistance. If we are to dissipate up to 75W of heat, the heatsink temperature rise is 37.5°C for a 0.5°C/W heatsink. Add the ambient temperature, and we already have a heatsink temperature of 72.5°C. When the junction to case thermal resistance is considered, it turns out that the heatsink is too small. It is unlikely that the case to heatsink thermal resistance will be less than 1°C/W for any IC amplifier or even any single transistor - unless seriously over-specified. Add the same again for junction to case, giving a total of 2°C/W.
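Putting those numbers into a short calculation shows the problem directly. The thermal resistances are the assumptions stated above (0.5°C/W heatsink, 1°C/W case-to-heatsink, 1°C/W junction-to-case), with the full 75W dissipated in a single device:

```python
def junction_temperature(p_diss, t_amb, rth_hs=0.5, rth_case_hs=1.0, rth_j_case=1.0):
    """Junction temperature for a single device dissipating p_diss watts."""
    t_heatsink = t_amb + p_diss * rth_hs                  # 35 + 37.5 = 72.5 C
    return t_heatsink + p_diss * (rth_case_hs + rth_j_case)

print(junction_temperature(75.0, 35.0))  # 222.5 C - hopeless for one device
```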
We haven't even really started and the design is starting to look like it may not be possible! If a single IC is used, it is impossible with the conditions described.

It may seem that you can't change the junction to heatsink thermal resistance by much, but you can, by using devices in parallel. This is simple with transistors, but not so easy with most IC amplifiers (for a variety of reasons). Assuming the transistors shown in Figure 4 ... if you use two in parallel for the upper and lower output devices, the power is halved so the effective thermal resistance is halved. Power is halved because it's now shared by two transistors instead of just one. For convenience, assume a total thermal resistance (junction to heatsink) of 2°C/W and 50W average dissipation for the amplifier ...
One device (IC) - 50W dissipation, 2°C/W, 100°C rise
Two devices (transistors, push-pull) - 25W dissipation (each), 2°C/W, 50°C rise
Four devices (transistors, parallel push-pull) - 12.5W dissipation (each), 2°C/W, 25°C rise
For any given sized heatsink, the temperature rise of the transistor die is reduced with the paralleled transistors. It's apparent that the heatsink's thermal resistance has to be fairly low if the transistors are to be kept at a reasonable temperature. Remember that paralleling the transistors does not reduce the total power that must be dissipated, it can only reduce the thermal resistance between the die and the heatsink for each transistor. Two transistors dissipating 25W each is no different from one transistor dissipating 50W as far as the heatsink is concerned. However, it's apparent that using extra devices makes a big difference. If the effective thermal resistance of the output transistors is reduced, the heatsink can be smaller than otherwise. Now you know why the Project 27 power amplifier uses four output transistors.
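The sharing arithmetic can be sketched as follows: the heatsink still receives the total power, but each die's rise above the heatsink shrinks with the number of devices (2°C/W per device assumed, as above):

```python
def die_rise_above_heatsink(p_total, n_devices, rth_per_device=2.0):
    """Temperature rise (C) of each die above the heatsink surface."""
    return (p_total / n_devices) * rth_per_device

for n in (1, 2, 4):
    print(n, die_rise_above_heatsink(50.0, n))  # 100.0, 50.0, 25.0 C rise
```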
Since we really do need the lowest possible thermal resistance between the transistor or IC case and the heatsink, the mounting materials are critical. Thin mica, Kapton or aluminium oxide insulating washers (all with thermal grease) are the only options that will give a low enough case-heatsink thermal resistance - silicone pads should never be used where high dissipation is expected.

It's also worth pointing out that people tend to think that 'ambient temperature' means the temperature they feel. Not so! For electronic equipment, its ambient temperature is the air temperature in the immediate vicinity of the gear itself. In some cases, it will be influenced by the hottest part(s), an important consideration around valve amplifiers. For a heatsink, the ambient temperature is that of the air which surrounds the heatsink itself, and may be considerably higher than the surrounding air if ventilation is inadequate.

It should now be obvious that ...

In general, the maximum case temperature of any transistor or IC power amp should not exceed 60°C. It is certainly possible to run devices hotter than this, but doing so reduces our safety margin and increases the likelihood of failure. Remember that the semiconductor die will be much hotter than the case - this information can be obtained from the device datasheet. It is essential to work out if the design is viable under realistic worst case conditions.

In the above, it has been assumed that the amp will be playing continuously and at worst-case conditions. This could happen, but some pragmatism is needed because we would otherwise create an insoluble problem. Guitarists usually don't just thrash the living daylights out of the guitar for hours on end without ever stopping. Because there will be gaps, breaks between songs, quieter bits (well, maybe) and other things that reduce the average power dissipation, we can safely assume a slightly less irksome final design.

Using a fan dramatically increases the thermal efficiency of a heatsink. However, the fan, any filter, and the heatsink also need to be cleaned regularly. This is rarely done where fans are fitted, so failure or over temperature cutout are likely. A fan is not a panacea though - if the heatsink is too small, then it's too small, and the fan will only postpone the inevitable. In some chassis, the ability for fresh (hopefully cooler) air to enter the chassis and hot air to exit easily is sub-optimal. All that happens is the air inside the chassis gets hotter and hotter, as does the heatsink. The following photo shows just how badly things can go wrong when a completely inadequate power stage (in all respects) fails.
Figure 6 - Power Amplifier Board From Popular Guitar Amp [1]
I won't say what brand of amp this is from, but some will recognise it instantly. It has every problem I've referred to in this article - the use of a power amplifier IC and heatsink that is clearly far too small. It is obvious from the photo that it has failed in spectacular fashion (see the badly burnt section of PCB). These modules apparently use a fan, and the fan was supposedly 'upgraded' from a 'very thin and inefficient' unit to something a bit better. Clearly this was rather pointless, as the heatsink is simply too small, fan or no fan. There is absolutely nothing about this arrangement that I would consider even approaches a professional level - despite the big brand name.
An Internet search reveals that this particular amp is renowned for failures, but additional searches demonstrate that it is by no means alone. Unfortunately, for as long as guitar amps have been made, there have been reliability issues. There is no evidence that any major manufacturer has done much to fix the problems or even acknowledge their existence in many cases, although some custom or 'boutique' amps might be better if they are designed properly.

I have suggested that the P27A power amp is a good solution for guitar and low power bass. It uses two transistors per side, so maximum dissipation for each device under worst case conditions is 18.75W. I always recommend that Kapton film is used as an insulator, along with thermal grease applied carefully. This can result in a thermal resistance of ~0.5°C/W for each transistor, limiting the case temperature rise to less than 10°C per device.
If we allow for a maximum case temperature of 60°C, the heatsink will operate at 50°C under worst case conditions. Allowing the same 35°C ambient as before, the temperature differential between heatsink and ambient air is 15°C. With a total dissipation of 75W (four devices at 18.75W each), the heatsink needs a thermal resistance of 0.2°C/W to allow continuous worst case operation at the maximum likely ambient temperature.
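The required heatsink rating is just the allowable temperature differential divided by the total dissipation. A quick sketch using the figures above (50°C heatsink, 35°C ambient, four devices at 18.75W each):

```python
def heatsink_rth(t_heatsink_max, t_ambient, total_watts):
    """Required heatsink thermal resistance (°C/W) for continuous
    operation at the stated dissipation and ambient temperature."""
    return (t_heatsink_max - t_ambient) / total_watts

# 15°C differential across 75W total dissipation
print(heatsink_rth(50, 35, 4 * 18.75), "°C/W")
```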
While this would be really nice, it is quite impractical and far too expensive. In reality, and considering that there will always be periods of lower power and even no power at all, a heatsink of around 0.5°C/W is generally quite sufficient. This is most certainly not a small heatsink though, and is much larger than you might expect to find in most commercial 100W guitar amps. The addition of a fan is very worthwhile. Yes, fans are noisy, but I can guarantee that you won't hear the fan above a 100W guitar amp being pushed hard. Any fan should be thermostatically controlled, so it only comes on if it's needed. Including a fan doesn't mean that the heatsink can be reduced to almost nothing though - that defeats the whole purpose.

A heatsink of 0.5°C/W is large, but it's very easy to incorporate into virtually any guitar amp. The size is fixed by the speakers, although convention also plays a part. There is absolutely no reason at all to skimp on the heatsink to the extent that's become common, but we know that it's done to reduce cost. If it also reduces reliability to the extent that the amp becomes virtually useless, then the cost reduction is of no consequence - people will recommend to others that they don't touch that brand/model and much bad karma is released.

It's not at all uncommon for a final design to be over-rationalised to the point where it becomes an abomination from a technical perspective. There are several commercial guitar amps that for all intents and purposes have no heatsink at all. Riveting a power amp IC to a steel chassis does not constitute a heatsink, nor does a small bit of aluminium angle attached to the output device(s).

This is not a new problem - many years ago I was the repair agent for a brand of (decidedly flakey) guitar amps in Australia. The first batch kept failing, and I told the manufacturer that the heatsink was far too small and that riveting it to a steel chassis did nothing useful. They denied it - "the amp and heatsink were designed by a professional engineer", I was told. I pointed out that he was a pretty useless 'engineer' if he couldn't get a heatsink right, and they denied that too.

There were also other unrelated problems with the amp, which rapidly gained a poor reputation and died quietly in the market after only a couple of years.

Despite the claims about their 'engineer' having got the heatsink 'right' the first time, the next batch of amps had the same (and still woefully inadequate) heatsink, but now it was separated from the steel chassis with spacers to get some airflow. I told them again that the heatsink was still far too small, and of course they denied it (again). Meanwhile, amps were failing at regular intervals (I think you can guess why). I was eventually deemed 'persona non grata' by the maker because I had the temerity to tell an owner exactly why his amp kept blowing up, and this suited me fine. I had no great desire to keep arguing with idiots who couldn't understand that the design was fatally flawed. It could have been fixed, but the pig-headed attitude of the people running the company wasn't going to let that happen. A short length of 3mm thick aluminium angle does not constitute a satisfactory heatsink for a 100W amp, regardless of what any so-called engineer says.
Many guitar amps these days are classified as 'modelling' amps (sometimes referred to as 'profiling' or 'virtual'), which allow the user to select the characteristics of many famous (or infamous) amps that have been in use since the 1960s. Pretty much without exception, these use digital techniques, mainly relying on a DSP (digital signal processor) to adjust the many different responses that are made available. Effects such as reverb, tremolo, and (less commonly) vibrato are common, and some also allow the user to select different speaker 'tone' and breakup effects. The DSP is generally configured and controlled by a microprocessor or microcontroller, which in some cases might also perform some limited DSP functions itself. The microcontroller must be programmed of course, and only the amp manufacturer will have the source code.
While undoubtedly very capable, there's actually a hidden 'gotcha' lurking within. I have customers who have experienced this, and usually it means the amp is scrapped! The problem? It's in the digital subsystems themselves, and is created by the IC makers. A great many ICs have a rather short manufacturing life, which in some cases might only amount to a single production run. Once all the ICs that were made have been sold, that is it. No-one thinks twice about 40 year old amplifiers, but 2 year old ICs can easily be utterly unobtainable.
Once it's no longer possible to get a replacement circuit board, the amp usually can't be fixed - almost all digital systems use SMD (surface mount devices) pretty much exclusively, and the individual parts are generally not offered for sale. If the DSP or microcontroller fails, the amp is a large paperweight. There is almost never any way to get it working again unless the failure is a common part such as a voltage regulator or a simple logic IC. Even if a pot (or rotary encoder) fails, it might not be possible to get a replacement that will fit if it's a bit out of the ordinary. A broken pot (not at all uncommon) may signal the end of the amplifier, especially if the PCB is damaged. I've sold P27B preamp boards to customers who would otherwise have to throw away the entire amp.
There are many other specialised parts that can render a 'digital' amplifier uneconomical or impossible to fix. Of these, the LCD (liquid crystal display) often used to tell the user about the current settings may be unique, so a failure can render the amp unusable. There will always be comparatively 'simple' faults that can be fixed by your local amp repairer (provided s/he's skilled enough), but many 'traditional' repairers will shy well away from boards covered in SMD parts. Electrolytic capacitors aren't reliable, and doubly so for many SMD versions. While these are (in theory) an 'easy fix', it doesn't always work out that way.

Even (new) valve amps aren't immune to the influx of surface mount parts and modelling preamps. Most are conventional, but some makers seem to think that the more bells and whistles they include, the better for everyone. In general, this is a particularly bad idea unless effective precautions are taken to minimise heat transfer between the valves and digital circuitry. Valve amps have traditionally been (mostly) fairly easy to work on, but stacked digital boards and many multi-pin inter-connectors can wreak havoc on reliability and serviceability.

The issues mentioned here are ones that you generally do not find on review sites or in comments from users. It's expected that there will be few failures during the first year or so (i.e. the warranty period), and if the amp fails under warranty it will often be replaced with a new one. Once the warranty has expired, you may be left with a relatively costly junk box. If the parts (usually complete circuit boards for anything digital) can't be obtained, then the amp is scrap. It may be possible to keep the cabinet, speaker, chassis and power supply and rebuild it, but of course the modelling functions are no more.

Bear in mind that just one ten-cent part failure may be enough to kill the entire amp. With a densely packed board covered in SMD parts, finding that one faulty part can be close to impossible, even if you have the schematics. These are often impossible to obtain unless one is a registered repairer for the manufacturer. In general, SMD boards are not made with any intention that they will ever be repaired. If someone can fix one it's a bonus, but that's not the normal process.
I have made this comment in several articles, but when it comes to power amps, there is good reason to say it again ... There is no such thing as a heatsink that's too big. While an overly large heatsink may well pass the point of diminishing returns and give no extra benefit due to its excess size, it does no harm to the semiconductors. The trick is to use a heatsink that provides enough thermal dissipation to ensure the reliability of the output stage. Saving money during manufacture only to have multiple after sale warranty claims is not good business practice.

People all over the world are either having guitar amps repaired or repairing them themselves on a regular basis. One of the great advantages of 'solid state' transistor amps is that they should be vastly more reliable than valve amps, not less reliable. When manufacturers skimp on heatsinks or think they can get away with IC power amps, the customer suffers. If the maker is a no-name brand from somewhere in Asia then there is no great expectation, but if the maker is well known and has been building guitar amps for over 40 years, you have every right to expect that they'd finally get it right.

You might also think that a German brand (albeit made in China) should be on top of things, but no, you'd be wrong there too. It doesn't matter if the brand name is based in the UK, US, Europe or elsewhere, inadequate heatsinks and/or poor design choices deliver unreliable amplifiers to a largely unsuspecting buying public. Many of these amps will only be used in a small home studio or for practice, and may last almost forever. This is not the case if they are expected to work on stage, night after night, under generally adverse conditions. Depending on the player's style, some amps will give acceptable service, while others give nothing but trouble.

Of all the brands, there is one US maker who seems to generally get most things more or less right. There have been some spectacular blunders with early valve amps and some of the re-issues, and the continued use of completely unshielded pickups and wiring inside many of their guitars is a constant source of irritation. However, they do seem to enjoy comparatively better overall reliability than many of the others, but there will still be exceptions. Many of their transistor amps are borderline IMO, but don't often fail - mainly because modern power transistors are extremely rugged and regularly outperform their datasheet maximum ratings and published safe operating area.

Unfortunately, the lessons of old rarely seem to make it through to the present, and guitar amp makers (whether valve [vacuum tube] or transistor) continue to produce amplifiers that have inbuilt flaws that should not exist. There is a consistent flow of guitar amps being repaired to keep technicians all over the world busy. I have no desire to see them put out of work, but the senseless repetition of repairing faults that should not have existed in the first place is not productive.

For some reason, people seem to think that if a certain sized heatsink and fan is fine for one of today's high speed microprocessors (as used in your PC), then that's all that's needed for a power amp too. Not so! A processor may dissipate a considerable amount of power, but it never has to cope with comparatively high voltages or reactive loudspeaker loads. The dissipated power is fairly steady, and always at very low voltages (most micros these days run on only 3.3V and some use even less). There is no comparison between the two, and to imagine that there is any similarity whatsoever is to invite trouble. The above photo is ample evidence of this line of thought and the end result.
Ultimately, if you want a really reliable guitar amp, you'll have to build it yourself. Naturally I suggest the P27 preamp and power amp, but other combinations can also give good results provided the final design is over-engineered to at least the same degree as the P27A power amp. I know from many of my readers that P27A power amps have been retro-fitted into all sorts of guitar amps after their owners got sick and tired of constant failures. I know from personal experience just how hard the amp is to kill - it's possible, but you really have to work at it.

Modelling amps are often very tempting, but beware of the potential pitfalls. You may get one that lasts close to forever, but equally, you may get one that fails catastrophically a year after the warranty runs out, and parts are no longer available.
No references were used while compiling this article; the information is from my own accumulated knowledge and other articles already on the ESP website. However, this has been augmented by information from friends who service guitar (and other) amps, and the article was prompted by readers who contacted me about replacement power amplifier modules for commercial guitar amps that kept failing. There is one exception ...
+ +![]() | + + + + + + |
Elliott Sound Products - Guitar Loudspeakers
The choice of loudspeaker has far more influence over the overall tone of a guitar amp than any other factor. The 'wrong' speaker can make a perfectly good amplifier sound awful to one player, but perfect for another. Much depends on how the amp is used - clean (or relatively 'clean'), distorted or heavily distorted ('crunch') playing styles may require very different loudspeakers, although most guitarists will find (or try to find) something that suits their style(s) so that the same amp/ speaker combination can be used for all their material. Some professionals use different amps for different songs (or parts thereof).
As with many things related to audio, there are many myths around guitar speakers. This is partly because the choice is so personal, but there are many misconceptions and unfounded claims as to what makes a good, bad or indifferent guitar sound. Consider that many very accomplished guitar players can actually use any amp that comes their way if need be, and it's their playing style that sets them apart, not the equipment itself. Yes, they will have their preferred setup, but they don't fall to pieces if it's not available.
Many of the claims you'll come across are dubious, and some are downright false. This seems to be an area of great debate all over the Net, with very little agreement and little or no science. Ultimately, the laws of physics determine what any loudspeaker sounds like, even if the exact mechanism is unclear. Some of the most revered speakers around are largely rough approximations of their original models, and with many of them now made in China and re-badged, you absolutely do not automatically "get what you pay for".

There's a great deal said about magnet material, and while some of it sounds plausible, there's a lot more to it than just the type of magnet. While claims abound, there's very little evidence that most have any basis in fact. This doesn't mean that you won't hear a difference, but it's likely that the difference is due to other factors, and is not due to the magnetic material used. A loudspeaker's magnet and voicecoil form the 'motor', which generates the force needed to move the cone. A high magnetic field strength and a voicecoil with many turns creates a strong motor, increasing efficiency. If one speaker is just 1dB louder than another, it will almost always sound 'better', all other things being equal.

Ceramic (strontium ferrite) magnets are by far the most common, despite their relatively poor magnetic properties. The compensation is to make the magnet much larger, and speakers with ceramic magnets are normally just as efficient as those using Alnico or neodymium, but are almost always significantly heavier. The magnetic structure is usually quite different for the different magnet types, but that doesn't mean that there is necessarily any real difference in the magnetic field across the voicecoil gap.

Using a particular magnet type and/ or brand name doesn't necessarily mean anything tangible. Sometimes it's simply a case of 'Famous Person' uses this type of speaker, and people imagine that by using the same driver they will sound just like 'Famous Person'. This only holds true if everything else (including their skill level) is the exact equal of said 'Famous Person', something that is rarely the case. In general, I suggest that you try different speakers until you find one you like. This is actually harder than it may seem, because there could be outside influences that taint your perceptions, such as peer pressure, a sales person's 'persuasion', or the simple knowledge that this is the same speaker that 'Famous Person' uses.
You also need to be aware of the speaker efficiency, measured in dB/W/m. Most guitar speakers are between 90 and 100dB/W/m, so taking the lower limit, with 25W input the SPL will be 104dB. The higher efficiency speaker will give 114dB SPL with 25W. Using the more efficient speaker is the same as switching from a 25W amp to a 250W amplifier! With two speakers, the effective efficiency is increased again (by 3dB, since the amp will deliver 50W), but the response becomes uneven. There is some additional increase due to the larger radiating area, but this is unpredictable. Even so, getting 117dB SPL at 1 metre is seriously loud, and can be tolerated without hearing damage for less than 30 seconds in any 24 hour period!
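The SPL figures quoted are easily verified - SPL at 1 metre is the 1W/1m sensitivity plus 10 × log₁₀ of the input power, with roughly another 3dB when a second speaker is added and the amp delivers twice the power. A small sketch:

```python
import math

def spl(efficiency_db, watts):
    """SPL at 1 metre for a given 1W/1m efficiency and input power."""
    return efficiency_db + 10 * math.log10(watts)

print(round(spl(90, 25)))       # 90dB/W/m speaker with 25W input
print(round(spl(100, 25)))      # 100dB/W/m speaker with 25W input
print(round(spl(100, 25) + 3))  # two speakers, amp now delivering 50W
```

This ignores the (unpredictable) mutual-coupling gain from the larger radiating area mentioned above, which only adds to the total.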
The earliest speakers (as we know them) used electromagnets, because there were no magnetic materials that provided sufficient field strength without a propensity to demagnetise themselves. The 'loudspeaker' as we know it was developed in 1925 by Chester Rice and Edward Kellogg. The magnet was a vexing problem, but electromagnets can deliver an extremely powerful magnetic field given sufficient turns and current. Prior to the mid 1930s (or thereabouts), a great deal of work had gone into development of suitable electromagnets that could also act as the filter choke for the valve ('tube') amplifiers that were used. The primary usage was for radio (or 'wireless' as it was called at the time), and some very clever designs used a 'hum bucking' coil to prevent the ripple from the DC supply from appearing in the audio. There may have been some very early guitar amps that used electro-dynamic speakers, but I don't have any details.

The first really good magnet material was Alnico (or AlNiCo - aluminium, nickel and cobalt, with the remainder being iron). While it still more than holds its own against newer materials, it's also expensive. Alnico is favoured by many guitarists because it's believed to have that 'vintage' tone. However, the cone material, surround, spider and the mechanical construction of the pole pieces will usually have a great deal more influence than the material used for the magnet. There is little evidence that the magnet alone makes any audible difference. An Alnico magnet speaker from ~1970 is seen in the next photo. Over the years there have been a number of trade names for Alnico, including Alni, Alcomax, Hycomax, Columax, and Ticonal.
Figure 1 - Alnico Magnet
The Alnico magnet slug is the slightly crinkly-looking section at top centre. The conical piece below that encloses the centre pole and the end of the voicecoil so it's not open to outside contamination. It's not apparent how the Alnico slug is attached to the rear or centre pole-pieces. Some (older) Australian and NZ readers will recognise this assembly instantly - it's a Plessey Rola 12U50, a 300mm (12") 50W speaker that was very popular here in the 1960s and 70s. People are still using them, and it should be fairly obvious that I have the one pictured (and another the same). Mine were actually 12UX50 twin-cone, but I removed the 'whizzer' cones because they were damaged and are rather dreadful at best, so keeping them wasn't an option. There was another version called the 12UEG ('EG' for 'electric guitar') in the mid 1960s, but I never came across one. They were rated at 30W.
Ceramic magnets are more common and much cheaper than Alnico, and (at least in theory) there should be no difference in 'tone' provided all other factors are the same. This means the cone material, voicecoil construction (and material) and even the basket (the speaker's chassis) must be close to identical. The same applies to neodymium magnet speakers. These are the most recent, and 'neo' magnets are far smaller, lighter and more powerful than any previous material.
Figure 2 - Ceramic Magnet
In reality, it can be pretty much guaranteed that there will be very little equivalence in cone and voicecoil construction, and the basket will be different for the simple reason that the different magnetic materials have different needs in terms of mounting. The basket alone may change the sound, although probably not by a great deal. It's commonly claimed that Alnico magnets are more easily (temporarily) demagnetised by the flux from the voicecoil, supposedly giving a 'softer' compression characteristic than harder magnetic materials. 'Hardness' refers to the ability of a material to retain magnetism, technically known as remanence.
The theory is that with Alnico magnets, as the voice coil exerts a magnetic field in response to the input signal, this magnetic field tries to demagnetise the magnet. As its effect lowers the available magnetic field of the Alnico magnet, the speaker becomes less efficient, the voice coil moves less, etc. There is no doubt whatsoever that the voicecoil's magnetic field affects the field strength across the magnetic gap, but evidence (i.e. measurement data) as to the extent to which it affects the magnet itself is very difficult to find (which is to say that I found absolutely zero evidence, only claims and anecdotes).
The physics of it is (supposedly) that the small magnetic domains near the surface of the magnet poles begin to change state or 'direction'. The result is said to be smooth compression, similar to the operating curve 'compression' that occurs in a valve amplifier. When the voicecoil's magnetic force is removed, an Alnico magnet will return to its normal value - at least that's the theory [ 1 ]. While this is a very popular opinion, there's no evidence (which is the important part of any claim).
Alnico 5 is a popular speaker magnet alloy made up of 8% aluminium, 14% nickel, 24% cobalt and 3% copper, with the remainder being iron. The cobalt is the ingredient that makes Alnico expensive. Most of the world's supply comes from the 'copper belt' in the Democratic Republic of the Congo, Central African Republic and Zambia. These countries control the market, and cobalt is primarily used in the manufacture of industrial (and/ or military) magnets, and wear-resistant, high-strength alloys. Guitar speakers are well down the line in terms of 'need'. Cobalt currently sells for about US$60/ kg [ 2 ].
The development of Alnico began in Japan in 1931. Tokushichi Mishima discovered that an alloy of aluminium, nickel and cobalt had high ferromagnetism. The first Alnico alloy had a coercivity of 400 Oe (oersted, the old CGS unit for coercive force). The SI equivalent is about 32 kiloamperes per metre (kA/m). It had double the magnetic field strength of the best magnet steels in existence at the time [ 4 ]. Modern alloys have significantly higher coercive force, with Alnico 5 being around 51kA/m.
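The CGS-to-SI conversion for coercive force is fixed: 1 Oe = 1000/4π A/m, or about 79.58 A/m. A quick sketch checking the figures (the 640 Oe value used for Alnico 5 below is a typical datasheet figure, assumed here for illustration):

```python
import math

def oersted_to_kam(oe):
    """Convert oersted (CGS) to kiloamperes per metre (SI)."""
    return oe * 1000 / (4 * math.pi) / 1000

print(round(oersted_to_kam(400), 1), "kA/m")  # Mishima's first alloy (~32 kA/m)
print(round(oersted_to_kam(640), 1), "kA/m")  # assumed Alnico 5 figure (~51 kA/m)
```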
Alnico is a very hard material. It's difficult to machine, so most of the time it's cast or sintered into the desired shape and carefully heat-treated to get the desired magnetic properties. It was the first really powerful magnet material, and even today is only bettered by samarium-cobalt (expensive and uncommon) or Neodymium Iron Boron (NdFeB aka 'Neo') - the most powerful permanent magnet material known so far. However, neodymium magnets will disintegrate if exposed to the air, so they are always heavily plated to protect the magnetic alloy.

However, this article is not about magnetic materials in detail. While the materials are different, the magnetism itself has basically the same physical properties regardless of the material. This includes electro-magnets that were common in very early loudspeakers because useful permanent magnet materials weren't available or were too expensive. The magnetic field in the gap has no 'knowledge' of the magnet material, and the same field strength can be obtained by many different materials and geometries. There is some degree of flux modulation with all magnet and polepiece materials, and high powered speakers will always have (much) greater flux modulation when pushed to their limits.

There are many on-line videos that purport to demonstrate the difference between ceramic and Alnico magnets. What is not disclosed is whether the magnetic field strength, voicecoil, cone, surround, spider and dustcap are identical or not. If not, the comparison is between the different speaker configurations rather than the magnet. This doesn't mean there's not a difference of course, but without full disclosure the demos let you hear the difference between complete loudspeaker drivers, rather than the magnet materials. Advertising material rarely (if ever) describes the entire motor and cone structure. It is a mistake to assume that these are exactly the same for different magnet materials.

It's worth noting that there is a limit to the magnetic induction (measured in tesla) across the gap of a loudspeaker. The limit is mainly due to the steel used for the pole-pieces, and generally ranges from 0.8 to 1 tesla, with 1.8 tesla being about the limit for speakers using mild steel polepieces. Use of exotic alloys can boost that up to around 2.4 tesla, but at considerable added cost. Most speaker designs saturate the polepieces to allow for manufacturing inconsistencies and to ensure that the 'static' magnetic field is difficult to modulate.
Figure 3 - Loudspeaker Motor Assembly (Ceramic Magnet Shown)
A 'typical' motor assembly is shown above. The important parts are labelled so you can see what goes where. If an Alnico magnet were used, it would be located on the back polepiece, directly under the centre pole, and the magnetic circuit would be a different shape to accommodate the magnet. As seen in the photo above (Figure 1), the magnetic circuit for an Alnico magnet may be closed with a 'U-section', while other designs use a pressed steel cup. The exact mechanism doesn't matter, provided there's enough steel to support the required flux density. If it's too thin, it will saturate at a lower flux density, reducing the flux across the gap. Ceramic magnets are (almost) invariably assembled as shown in the drawing, though finer details differ.
Reducing the field strength reduces efficiency, and it also allows the speaker to 'do its own thing' - it is not as well controlled by the amplifier. This is particularly true around resonance, where a lower field strength increases the total resonant Q of the driver (called Qts in Thiele-Small parameters). There are two factors at play - the flux density ('B') and the length of wire in the gap ('L'), giving the 'BL' product you'll see referred to in many brochures. A high BL product gives high efficiency and good amplifier control of the speaker. The 'L' factor only applies to the voicecoil wire that is within the magnetic field of the gap.
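The way BL affects efficiency can be made concrete with the standard Thiele-Small reference-efficiency formula, η₀ = ρ₀(BL)²Sd² / (2πc·Re·Mms²). The driver parameters below are illustrative guesses for a generic 300mm guitar speaker, not measurements of any particular driver:

```python
import math

RHO = 1.18  # density of air, kg/m³
C = 345.0   # speed of sound, m/s

def reference_efficiency(bl, sd, re, mms):
    """Thiele-Small reference efficiency (dimensionless fraction)."""
    return (RHO * bl**2 * sd**2) / (2 * math.pi * C * re * mms**2)

def spl_1w_1m(eta0):
    """Approximate 1W/1m sensitivity (dB) from reference efficiency."""
    return 112 + 10 * math.log10(eta0)

# Illustrative values: BL = 15 T·m, Sd = 0.053 m² (12" cone),
# Re = 6.5 ohms, Mms = 35 grams
eta = reference_efficiency(15, 0.053, 6.5, 0.035)
print(f"{eta * 100:.1f}% efficient, ~{spl_1w_1m(eta):.0f}dB/W/m")
```

Because BL is squared, even a modest reduction in gap flux (or turns in the gap) has a large effect on efficiency, which is why a weaker motor is immediately audible as a quieter speaker.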
However, if the BL factor is too high, the speaker will be overdamped, and may be thought to have poor bass response. Many guitar amps provide 'compensation' by way of having a higher than normal output impedance (ZOUT). Typical valve guitar amps have a ZOUT of between 4 and 16 ohms, and it's common to use current feedback with transistor amps to achieve the same end. If you look at the ESP Project 27 guitar amp design, you'll see that it uses current feedback. ZOUT is typically about 20 ohms, but it can be set for any value the builder prefers. I did my very first transistor guitar amp using this technique in around 1968, and every instrument amp I've designed since then has done the same.
+ + +The cone material is of great importance to the sound of a guitar speaker - probably more than any other factor. However, the voicecoil, surround, spider and (believe it or not) the dustcap can also have a profound effect on 'tone'. Most guitar speakers are relatively low power, up to 100W or so is common, although a few are higher. They also have comparatively high efficiency, with up to 100dB/W/m being common. This means a light cone, and most have a modest excursion. The resonant frequency is very important, because that defines the 'bottom end' the player (and the audience of course) hears.
During the 1940s through to the ’60s, guitar speakers were rarely rated higher than 15 to 20 watts, but there were a few exceptions in the later years. Most early guitar amps rarely put out more than 30 watts or so, but the 40 watt Fender Twin (using 2 × 6L6GC valves in the output stage) changed that, and later amps from many makers were typically 80-100W. Low powered speakers were fine when used singly in small venues or recording studios, and in multi-driver boxes (such as the 4 × 12 cabinets ('cabs') that became common during the 1960s). When pushed hard, the speakers started to 'break up', adding speaker distortion to the amp's own distortion when played loud.
A number of speaker makers have used metal dustcaps (usually aluminium), commonly glued to the cone rather than the end of the voicecoil. While some guitarists like the extra high frequency 'bite', most do not. I was once flown from Sydney to Melbourne to find out why an amp sounded revolting in a recording studio. The problem was solved by removing the aluminium dome dustcap and replacing it with a piece of felt. The problem was that the dustcap radiated strongly above around 4kHz, with a very distinctive 'hard' and 'metallic' sound (no-one told me about the aluminium dome before I got on the plane). Reproduction of frequencies over 7kHz is generally considered harsh, and most guitar speakers are designed to roll off above 5kHz. The resonant frequency of most guitar speakers is typically between 70-110Hz.
Figure 4 - Corrugated Paper Surround
Surrounds are normally corrugated paper as seen above, which is often the same paper that the cone is made from. Some speakers use a corrugated cloth surround. A non-hardening material commonly known as 'dope' is used to make this region flexible and ensure it's airtight. Many people are used to seeing roll rubber or foam surrounds on hi-fi speakers, but these are unheard of for guitar speakers. The surround (and the overall suspension including the spider) is generally much stiffer than you might expect. One of the reasons is to ensure a reasonably high resonant frequency, and the other is to protect the speaker as a whole from excessive excursion. Guitar speakers are generally not expected to move the cone more than a couple of millimetres, and much of the movement is involved in creating cone 'break-up' - chaotic movement where the cone does not act as a simple piston.
Cone breakup effects would be very difficult to design or model, and I suspect that most cones are designed empirically (i.e. by trial and error) or use tried and known materials and processes to get consistent results. This is very important, as no-one wants to buy two or more supposedly identical speakers that sound completely different. Fortunately, this doesn't appear to be an issue. Ultimately, the only thing that really matters is whether players like the sound or not - people don't buy speakers that sound rubbish (well, mostly they don't, and not on purpose).
Figure 5 - Spider, Tinsel Leads And Terminals
The above photo shows the spider, as well as the tinsel leads and the terminals. The spider has a significant effect on the sound, because it's part of the suspension and is partly responsible for the resonant frequency. The surround is the other major influence. The combination of suspension stiffness and cone mass (including the voicecoil, former, dustcap, air load etc.) set the resonant frequency. The use of a light cone and stiff suspension means a high resonance (in this case it's 84Hz, but that would fall a little after the speaker has been used for a while). To reduce the resonant frequency, the suspension can be made 'looser' (more flexible), or the cone (plus voicecoil etc.) made heavier. The latter reduces sensitivity.
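The mass/compliance relationship described above follows the usual second-order resonance formula, fs = 1/(2π·√(Mms·Cms)). A minimal sketch, using hypothetical values (20g moving mass, 0.18mm/N compliance) chosen to roughly match the 84Hz example:

```python
import math

def fs_hz(mms_kg, cms_m_per_n):
    """Free-air resonance: fs = 1 / (2*pi*sqrt(Mms * Cms))."""
    return 1.0 / (2 * math.pi * math.sqrt(mms_kg * cms_m_per_n))

# Hypothetical stiff-suspension driver: Mms = 20 g, Cms = 0.18 mm/N
print(f"fs = {fs_hz(0.020, 0.00018):.1f} Hz")                 # ~84 Hz
# Doubling the moving mass lowers fs by sqrt(2), but costs sensitivity
print(f"fs (doubled mass) = {fs_hz(0.040, 0.00018):.1f} Hz")  # ~59 Hz
```

The same square-root law applies to compliance, which is why 'loosening up' the suspension only lowers the resonance gradually.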
Another influence on the sound is the length of the voicecoil relative to the magnetic gap. For speakers requiring low distortion and reasonable excursion (Xmax), either the voicecoil is longer than the gap (called an overhung design) or the gap is longer than the voicecoil (underhung). These are common for hi-fi speakers, but less so for very high efficiency drivers. Either way, the efficiency is reduced because either part of the voicecoil or part of the gap is 'unused'. For maximum sensitivity (at least at low input power), the voicecoil and gap should be the same size. Of course, when the cone travels even a small distance, some of the voicecoil will be outside the gap and the instantaneous efficiency falls. This causes distortion, which will (nearly) always be a mixture of predominantly third harmonics, with some second harmonic due to suspension nonlinearity.
Because of the fall in overall (instantaneous) efficiency, there will be some degree of 'compression' as well as distortion, both of which many guitarists like because they help to increase sustain (causing notes to last longer) and add harmonics for a 'richer' sound. There are so many different factors that it's impossible to try to characterise them all, because each acts in combination with the other variables. Some differences will be very audible, while others may go almost unnoticed. In some cases you may even find that the things you most expect to make an audible difference may in fact make barely any difference at all. I'm not even going to try to quantify what affects the sound in any direction, because I don't have a wide variety of speakers to play with, nor do I have the facilities to try to test every combination.
Almost all guitar speakers share some common properties though. Lightweight cones, nearly always paper, with a corrugated surround (as opposed to roll or foam surrounds). The suspension is generally quite stiff, and the speakers have a fairly high resonant frequency (typically 70-80Hz). Most are efficient, at 95-100dB/W/m, but don't expect the same efficiency at (say) 50W that you get at 1W. I ran a basic test on this, and used 1W, 10W, 20W and 30W at 120Hz into a guitar speaker box in my workshop. The test speakers are a pair of low power guitar drivers (they've been in the box for so long I don't recall what they are), and as near as I can recall they are rated at about 25W each.
At 1W, I measured 83.5dB, rising to 93.5dB at 10W (as expected). Above 10W, there was little change in the SPL, and it only managed 96.8dB at 20W and 96.9dB at 30W. As the power increased above 10W, distortion was audible, and at 30W it was very noticeable third harmonic (as expected). I deliberately used a fairly low frequency, because as the frequency increases there's less cone travel. The test was to see how much efficiency was lost as the voicecoil started to move further out of the gap. You'll find that this effect is rarely mentioned (I've not seen any mention of it when discussing guitar speakers).
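As a rough check on figures like these, the 'ideal' SPL at any power is the 1W figure plus 10·log10(power), and any shortfall from that line is power compression. A small sketch using the measurements quoted above (the 20W point lands fractionally above the ideal line, which is within normal measurement tolerance):

```python
import math

def expected_spl(spl_at_1w, power_w):
    """Ideal SPL with no power compression: +10 dB per tenfold power."""
    return spl_at_1w + 10 * math.log10(power_w)

# Measured figures from the workshop test described above
measured = {1: 83.5, 10: 93.5, 20: 96.8, 30: 96.9}
for watts, spl in measured.items():
    ideal = expected_spl(83.5, watts)
    print(f"{watts:2d} W: measured {spl:.1f} dB, ideal {ideal:.1f} dB, "
          f"shortfall {ideal - spl:+.1f} dB")
```

At 30W the shortfall is about 1.4dB, meaning well over a quarter of the additional amplifier power produced no extra SPL.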
However, if you want to see some of the best info that I've come across, a very detailed analysis by Klippel [ 3 ] examines voicecoil displacement, flux modulation, suspension non-linearities and just about every other problem facing traditional moving coil loudspeakers. It's not about guitar speakers, but the concepts and issues are common to all types, from hi-fi to concert sound.
An area where you can expect reasonably good 'equivalence' is for the top end. It's common to get a peak at around 2-3kHz, with response falling rapidly above 5kHz. This is quite deliberate, and anyone who's tried using a wide-range speaker for overdriven guitar will tell you that it sounds pretty bloody awful. The overall response of the speaker is one of the most influential factors in its sound. A small difference in efficiency or frequency response can make a huge difference to the sound. These will far exceed any difference due to the magnet material (real or imagined).
On the Eminence [ 4 ] website it says (and I quote verbatim) ...
"What differences will I hear between ceramic, alnico, and neodymium magnets?"
"Each material, of course, has different magnetic properties and cost. Neodymium seems to be the wave of the future, especially with reduced weight and overall costs coming down. It produces the most magnetic flux per ounce, making it ideal for use in multiple speaker cabinets to maintain performance while reducing handling and shipping weight. Alnico is a composite of aluminum, nickel, and cobalt. It is the most rare and most expensive. Alnico is commonly thought to produce the most 'Vintage' tone and has a reputation for sounding compressed. Ceramic is the cheapest and most common material. If you are comparing speakers that have the same magnetic flux, but generated from different magnet compositions, you probably won’t notice a difference in tonality. Differences in tonality that are often attributed to the magnet material probably have more to do with the positioning of the magnet and resultant differences in magnetic flux within the motor structure. Therein lies the mojo!"
This is in agreement with the comments I've made above. It is important to be careful with references, because a great many are not based on engineering, but are from the 'marketing' department. The principle of marketing is to tell you what you want to hear, whereas engineering tells it as it is, regardless of whether it's what you want to hear or not. Most 'reviews' leave out nearly everything you need to know - I saw one video where completely different speakers (they were even different brands!) were used to 'demonstrate the difference' between ceramic and Alnico. All it did was demonstrate the difference between two very different speakers - the magnet is immaterial if there are any differences in the other factors. Any conclusions drawn from the demonstration are based on a completely false premise and are irrelevant.
There are two metals used for voicecoils, copper and aluminium. Copper is by far the most common; it has good electrical conductivity and is easy to join using solder. However, it's much heavier than aluminium, which is a disadvantage. Aluminium is difficult to terminate, so much of the aluminium wire used for voicecoils is copper plated so it can be soldered. While aluminium wire was very popular for a while, it seems to have fallen from favour to some degree. Aluminium appears to be uncommon for guitar speakers. All voicecoil wire is insulated with a high-grade, high-temperature enamel coating to prevent the individual turns from touching each other (causing a short circuit), or from moving (which will ruin the loudspeaker).
In some cases, the wire is rectangular or square instead of round. This allows more wire per unit volume, and this increases the winding efficiency because there are no little gaps between the turns as you get with round wire. Edge-wound rectangular wire is at the extreme end of speaker voicecoils, and isn't common for most guitar speakers. A wire measuring 1.4mm × 0.7mm has a cross-sectional area of close to 1mm². A round wire with the same area has a diameter of about 1.12mm, and each turn occupies a space of about 1.25mm² when wound. That leaves roughly 0.27mm² of 'wasted' space per turn, making the voicecoil larger for the same number of turns (assuming the same overall diameter). It's not quite so bad for the second layer if the turns are wound properly.
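The winding-efficiency comparison can be verified with a few lines of arithmetic. This sketch assumes simple square packing, where each round turn occupies a bounding square whose side equals the wire diameter:

```python
import math

rect_area = 1.4 * 0.7                    # rectangular wire cross-section, mm^2
d = 2 * math.sqrt(rect_area / math.pi)   # round wire of equal copper area
cell = d * d                             # bounding square one round turn occupies
print(f"equivalent round-wire diameter: {d:.2f} mm")          # ~1.12 mm
print(f"space occupied per turn: {cell:.2f} mm^2")
print(f"wasted space per turn:   {cell - rect_area:.2f} mm^2")
```

Real windings do a little better than this (the second layer can nest in the grooves of the first), but the rectangular wire still wins clearly.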
Aluminium has about 50% of the weight of copper for the same length and resistance. However, the wire must be thicker because aluminium has only ~60% of the conductivity of copper. The net result is that aluminium has a slight overall advantage for weight, but the reliability of terminations still remains a problem. If it's not copper-clad, the only reliable connection is welding, which itself is not trivial with aluminium.
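The ~50% figure falls straight out of handbook resistivity and density values. For equal length and resistance, the conductor's cross-section must scale with resistivity, so mass scales with the product of resistivity and density:

```python
# (resistivity in ohm·m, density in kg/m^3) -- standard handbook values
copper    = (1.68e-8, 8960)
aluminium = (2.65e-8, 2700)

def mass_ratio(metal, reference):
    """Mass of 'metal' relative to 'reference' for equal length and
    equal resistance (area proportional to resistivity)."""
    return (metal[0] * metal[1]) / (reference[0] * reference[1])

print(f"aluminium coil mass = {mass_ratio(aluminium, copper):.0%} "
      f"of an equivalent copper coil")   # ~48%
```

The thicker aluminium wire also makes the coil physically larger, which may force a wider (and therefore weaker) magnetic gap.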
Figure 6 - Loudspeaker Voicecoil Options
The drawing above (somewhat simplified) shows three of the options. The 'conventional' arrangement is the most common for guitar speakers, but the number of layers (and the length of the coil itself) will vary depending on the design choices made. The former requires a strong bond to the wire, cone and spider, and also provides the termination points where the voicecoil winding is joined to the flexible braid (aka 'tinsel') that's used to bring the wires to the terminal block mounted on the basket. The entire assembly is likely to be epoxy impregnated to ensure that the windings can't separate from the former or each other.
The voicecoil former (aka bobbin) has to be strong, light, and capable of withstanding the worst-case maximum voicecoil temperature without failure. Early speakers used paper (actually more like thin cardboard) which is still a very popular choice, but materials also include Kapton (polyimide), Nomex, Kevlar, aluminium, phenolic resin, fibreglass and even titanium. Metallic formers are useful to help disperse heat, but they cannot be a closed circular form because that would create a shorted turn. There is always a very small gap between the ends of the tubular former to prevent a short circuit. The wire is bonded to the former using a variety of different adhesives, many of which appear to be proprietary, so details aren't available. Many will be high-temperature epoxy or polyurethane resins, and many improvements to these have been made over the years. Few will last very long if subjected to temperatures exceeding 200°C, and nor will the enamel insulation on the wire.
The ideal former is very light, strong, and free of resonances. Many proprietary configurations have been developed, and few speakers have formers that are 'inappropriate' in any way. The choice ultimately comes down to cost vs. expected power handling, but each different material has the potential to affect the sound. Whether this is 'good' or 'bad' depends on the listener, and this is never more true than with guitar speakers.
Even though aluminium is light compared to most other metals, it's much heavier than paper or the various plastics or composites mentioned above. It's also important to ensure that it's well damped, as it may have unwanted resonance(s) because of the nature of most metals. By way of example, it's no accident that bells are made from metal - I've not seen a plastic bell, and doubt that it would work well.
There are a few ways that guitar speakers are classified. There's the distinction between 'British' and 'American', with both covering 'modern' and 'vintage'. In reality, these are somewhat arbitrary, and there's no particular reason that (for example) a 'vintage British' and a 'modern American' speaker couldn't have near identical sound. There are many others too of course. In Australia there were several locally made speakers that people quickly discovered were ideal for guitar, some using Alnico magnets, some ceramic. The Alnico magnet shown in Figure 1 is from a speaker that was very popular for some time in Australia during the 1960s. This was a 300mm, 50W speaker that seemed close to indestructible. They were also available in a 'twin-cone' version that was popular for column PA speakers (these were the original 'line array').
Much the same happened all over the world, but most smaller countries probably don't have any viable speaker manufacturers left. There are still a couple in Australia (or there were at last count), and world-wide there must be hundreds of small 'boutique' speaker manufacturers. Whether they get their parts (baskets, cones, etc., or even finished speakers) from China is unknown. There can be no doubt that China now has more speaker manufacturers than anywhere else, but most will be 'OEM' (original equipment manufacturers) and will have their products re-branded to whatever the end customer requires.
Ultimately, it doesn't matter what the speaker is called (name, style, model), it's whether you can get the sound you want from it. Consider that many professional guitarists use whatever equipment is provided by the promoter in the countries where they tour. They will have their specific requests of course, but in some cases it's simply not possible to provide the exact equipment listed in the rider (the band's wants, demands and/or needs). In the vast majority of cases, this ultimately causes no problems (BB King once had to use one of my (transistor) amplifiers because the music shop that supplied the gear didn't have a spare Fender Twin - true story). Apparently he rather liked it (but I didn't get to sell him one).
Speakers are mounted in a variety of different configurations, and with different box styles. Most 'combo' amps are open backed, because they require ventilation and the amplifier is in the top section of the enclosure. Airflow is essential with valve amps, but is no less important for 'solid state' transistor amps. The heatsink must have airflow, and having it sticking out the back is not acceptable. Leaving the back open solves this, at least to some extent. The majority of 4 × 300mm (12") type enclosures (commonly known as a quad box) are sealed, with no openings other than those for the speakers.
The sound from open backed and sealed boxes is (sometimes radically) different. There is no 'better' configuration for all players and/or venues, and the choice is very personal. Open back cabs tend to create more on-stage 'spill', which can make the sound engineer's job that much harder. However, some engineers use a microphone in (or directed towards) the rear of the box to produce the FOH (front of house) mix, preferring the usually 'mellower' speaker rear radiation. Others use a mic front and back so they can mix the two for the desired sound.
There is another class called an 'isolation' cabinet. The speaker is completely enclosed to minimise the SPL (sound pressure level), and these are more likely to be used in a studio than on stage. There is a microphone inside the cabinet, and some have their own speaker while others are designed to accept a normal speaker box. Some are lined with acoustic foam while others have minimal lining. Absorbent foam helps to minimise internal reflections that can create a 'hollow' sound and it also reduces sound leakage to the outside world. Some provide input/output connectors for the speaker and mic (respectively), while others may use a narrow slit for cables. Attenuation (reduction of SPL) depends on construction.
One thing that is almost universally eschewed is a vented/ported enclosure. While some bass players like the extra efficiency at the bottom end, most guitarists dislike the sometimes 'woolly' bass that vented boxes produce when driven from amplifiers with a comparatively high output impedance. This applies to almost all valve guitar amps, and a great many transistor guitar amps as well. This is not to say that a vented box should not be used. Like everything to do with guitar speakers, it's a personal choice.
The 300mm (12") speaker has been the guitarists' favourite for a very long time. There are players who use (or prefer) 250mm (10") drivers, which may be as a single driver (usually a combo amp), or 2 × 250mm or a quad box. There are a few smaller amps (typically 'practice' amps) that use either one or two 200mm (8") drivers. These can be used in the studio or even on stage for quieter groups, and some can be surprisingly loud despite their size.
Most speakers these days are wired so that a positive voltage applied to the positive (+) terminal will cause the cone to move out. This creates a compression (an increase in air pressure) in front of the speaker. However, it has not always been like this. For some time, JBL (for reasons that no-one can explain) wired speakers the opposite way, so positive to the positive terminal caused the cone to move in. Today there seems to be general agreement that a positive voltage should cause a compression (cone moving out), and I've not seen a driver for many years that was wired differently. The polarity can be tested with a 1.5V cell. The cone movement isn't great, but it's usually easy to see (or feel) which direction the cone moves with each polarity.
It's important that if two or more drivers are used with an amplifier, all should be in phase. That means they should all move outwards and inwards at the same time with the same polarity. From the amp's perspective it doesn't really matter if they are (all) in or out of phase, since there may or may not be an overall inversion of the signal from the guitar to the speaker socket. There isn't really any convention on this, although most designs do retain 'absolute polarity'. However, there can be large phase shifts at some frequencies that vary depending on tone settings, and it's unlikely that the polarity is audible. It is well known that some asymmetrical waveforms can sound 'different' if their phase is inverted, but only in an A-B test.
Of somewhat greater concern is that a large array of speakers (think the classic 'double stack' - 2 × quad boxes) is very large compared to wavelength at frequencies above 1kHz or so. This causes high frequency beaming and lobing, where the upper frequencies have a very irregular coverage pattern. There isn't anything much that can be done to reduce this (other than a totally different speaker configuration), and it can create problems - especially for other band members who may have to put up with excessive treble if the guitarist listens off-axis. Even a single 300mm (12") speaker in a conventional small combo box will show this effect. Some amps have the facility to tilt backwards so the speaker can be aimed at the player. Others have a sloping baffle for the same reason, and stands are also available to do the same thing. These do help a little, but they don't solve the problem.
Impedance is important. Valve amps are designed to operate into a particular nominal impedance, and if you use a speaker (or combination of speakers) with a different total impedance, the amp will not perform properly. It's even possible to damage the amp - an excessively low impedance (e.g. less than half the nominal) can cause output valves to overheat their plates, and this (sometimes dramatically) reduces valve life and reduces output power. A higher than normal impedance can cause 'flash-over' at the valve base due to excessive voltages being created within the amp itself, and also reduces output power.
Transistor amps don't care if the impedance is higher than normal (including an open circuit), but they get very annoyed if the impedance is too low, and will often fail to show their displeasure. Higher than expected impedance reduces output power, and that can sometimes be used to advantage. A quad box that can be set to 8 ohms or 32 ohms (for example) can reduce the power dramatically, making the system less overpowering on stage, and much easier to manage in the studio (or bedroom).
All speakers in a box should be of the same make and type, with the same impedance. Mixing impedances means that one driver may get the lion's share of amp power and fail, usually at the least opportune time. The imbalance also means that you don't really know if the impedance is alright for the amp unless you know how to calculate the combination properly. Even knowing that does not help power distribution, so the risk of driver damage or failure still exists.
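Calculating the combination is straightforward for matched drivers, and the same arithmetic shows why mixed impedances share power unequally. A minimal sketch, treating drivers as simple resistances and ignoring reactive effects:

```python
def series(*z):
    """Total impedance of drivers wired in series."""
    return sum(z)

def parallel(*z):
    """Total impedance of drivers wired in parallel."""
    return 1.0 / sum(1.0 / zi for zi in z)

def power_share(*z):
    """Fraction of amplifier power each parallel driver receives.
    Parallel drivers see the same voltage, so power goes as 1/Z."""
    g = [1.0 / zi for zi in z]
    return [gi / sum(g) for gi in g]

# A quad box of 16-ohm drivers: series-parallel gives 16 ohms, all-parallel gives 4
print(parallel(series(16, 16), series(16, 16)))   # 16.0
print(parallel(16, 16, 16, 16))                   # 4.0

# Mixing 8-ohm and 16-ohm drivers in parallel
print(power_share(8, 16))
```

With an 8 ohm and a 16 ohm driver in parallel, the 8 ohm unit dissipates two thirds of the amplifier's power - exactly the 'lion's share' problem described above.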
If there are other differences (such as the cone, surround and/or spider), the speaker with the weaker suspension may be pushed well outside its limits. This won't occur with an open backed box, but closed back systems can develop significant pressure inside the enclosure. This is particularly true if the system is used for bass, which means longer cone excursions and more pressure (both positive and negative). Where different types or sizes of speakers are used within the same cabinet, there will be a divider so that each set has its own sub-enclosure to prevent unwanted interactions.
Most speakers are fairly 'tight' or stiff when new, and may seem bass-shy. After being driven for a time they change character slightly. The surround and spider corrugations begin to loosen up, and the net result is usually a better bottom end and the cone breakup characteristics change. The changes are usually for the better in terms of enhancing the tone for guitar work. Severe overloads will also change the sound, but usually in very much the wrong direction.
Heat buildup in the voice coil has always been a very real problem, particularly as speakers were expected to handle more power. When the voicecoil gets hot, its resistance rises and so does its impedance. This reduces sensitivity, and if the heat is too great it will cause the adhesive and enamel to soften and may allow the voice coil to come apart. Even slight deformation can cause 'poling', where the voicecoil wires rub against the polepiece. This signifies an ex-speaker, and it either has to be replaced or re-coned. The gap between the winding and the poles is only around 0.25 to 0.3mm (roughly 0.010 to 0.012 inch), so it takes very little deformation to cause serious problems. I suggest that you read Power Vs. Efficiency, which covers the issue of voicecoil temperature in detail. Another article that you should read is Speaker Failure Analysis which describes what does and does not cause speakers to die.
Heat isn't a major problem with lower powered speakers - unless they are pushed beyond their ratings of course. As speaker power goes up, the problem becomes progressively worse. Even with a speaker having a nominal efficiency of 100dB/W/m, 94% of all the power delivered to the voicecoil is dissipated as heat, with only 6% producing sound. A speaker that's being punished with 100W of input power has to get rid of 94W of heat. It may not sound like much, but feel how hot even a 60W incandescent lamp gets if you want some context.
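The 6%/94% split quoted above comes from the standard conversion between sensitivity and efficiency, which assumes a reference of roughly 112dB SPL at 1m for a 100% efficient driver radiating 1W into half-space:

```python
REF_DB = 112.0   # approx. SPL at 1 m for a 100% efficient driver, 1 W, half-space

def efficiency(sensitivity_db):
    """Convert sensitivity (dB/W/m) to acoustic efficiency (0..1)."""
    return 10 ** ((sensitivity_db - REF_DB) / 10)

def heat_watts(input_w, sensitivity_db):
    """Electrical input power that ends up as heat in the motor."""
    return input_w * (1 - efficiency(sensitivity_db))

print(f"efficiency at 100 dB/W/m: {efficiency(100):.1%}")    # ~6.3%
print(f"heat at 100 W input: {heat_watts(100, 100):.0f} W")  # ~94 W
```

Note how quickly it gets worse: a 95dB/W/m driver converts only about 2% of its input to sound, so nearly all of a big amp's output becomes heat.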
Many attempts have been made over the years to get the heat out of the voice coils, including the use of aluminium dustcaps, regular dustcaps with vent holes, or a vented centre polepiece. These techniques all rely on the cone's movement to create some airflow to pull heat from the voicecoil. Aluminium formers help to disperse the heat more effectively than paper or plastic.
You need to be wary with vintage speakers because it may be possible for the spider to shift. This will allow the voice coil to shift too, causing poling. Most vintage baskets were painted (and the paints used were not as good as those available now), so the glue holding the spider may have been applied to paint rather than bare metal. With time and vibration the glue can lift the paint, allowing the spider to move and take the voicecoil with it.
The adhesives in true vintage speakers were very poor compared to the epoxies and other adhesives available today. Cyanoacrylate ('CA' or so-called 'super-glue') is particularly strong, but it's also rather brittle unless formulated with other materials to provide resilience. There are many different formulations of many different adhesives used in speaker manufacture.
Severe mechanical stress (such as a speaker cab falling off the stage) can cause serious damage, which often cannot be fixed. I've seen (and heard many tales about) speakers where the entire magnet and polepieces have become detached from the basket after a fall. The deformation of the metalwork is such that it is rarely possible to repair the driver - it has to be replaced. If you can't get the exact same driver and there is more than one, they all should be replaced because they have been severely stressed, and will probably sound different from the replacement driver anyway.
The requirements for bass (guitar) speakers are usually very different from those for guitar. For starters, open 'E' on a 4-string bass is 41Hz (41.204Hz), and open 'B' on 5-string and 6-string basses is 31Hz (30.868Hz). Most bass players want a clean sound, so it's not at all uncommon for the amp to be rated for much more power than the speakers. Up to double the power is normally alright, but there are some significant exceptions. The primary exception is if the player uses 'fuzz' bass - either by overdriving the amplifier or with a pedal. Some bass amps have provision for built-in overdrive, and as an example it's an option for the ESP bass amp project (see Project 152).
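The open-string frequencies quoted above follow directly from equal temperament (A4 = 440Hz, with each semitone a factor of 2^(1/12)). A quick sketch using MIDI note numbers:

```python
def note_freq(midi_note, a4=440.0):
    """Equal-tempered frequency; MIDI note 69 = A4 = 440 Hz."""
    return a4 * 2 ** ((midi_note - 69) / 12)

print(f"open E, 4-string bass (E1, MIDI 28):   {note_freq(28):.3f} Hz")
print(f"open B, 5/6-string bass (B0, MIDI 23): {note_freq(23):.3f} Hz")
```

These come out at 41.203Hz and 30.868Hz, so a bass cabinet has to work almost an octave and a half below a guitar speaker's typical resonance.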
Bass cabinets are often vented ('ported' if you prefer) to get high efficiency at low frequencies. In the early days of amplified instruments, the speakers were usually just guitar speakers, which themselves were 'general purpose' speakers until speaker manufacturers started to specialise. These days, many makers have a range of speaker drivers designed specifically for bass guitar and/or amplified double-bass. These have a longer 'throw' voicecoil to reduce distortion, and generally have a much lower resonant frequency. This impacts on efficiency, so a 'decent' bass rig should normally have a lot more power than a guitar amp.
While there's some discussion/argument as to the 'best' speaker, enclosure, amplifier, etc., it's not quite as polarising as for guitar speakers. One area where there are differences of opinion concerns speaker size. 380mm (15") bass speakers were once the mainstay of bass players, although guitar-style quad boxes with 4 × 300mm (12") drivers were also common. These days, many players seem to prefer 4 × 250mm (10") drivers, sometimes using two cabinets.
As with guitar, the choice of speaker (or speaker system) is personal. It's not at all uncommon for bassists to use a 'tweeter' - usually a compression driver and horn to get that top-end 'bite'. This is especially useful with slap-bass styles, where the amount of 'bite' expected can be considerable. Using smaller drivers usually means a tweeter isn't needed, but may also mean that there's not enough bottom end. A good combination can be to use a 'stereo' bass, with the neck pickup driving one or two 380mm drivers, and the bridge pickup driving a 250mm quad box - with separate amplifiers of course. This used to be fairly popular, but very powerful amps and speakers rated for silly amounts of power seem to have diminished the need. Very high speaker power ratings are rarely what they seem though, and the trade-off is often efficiency, along with considerable 'power compression' as the voicecoil heats up and increases the impedance (thus reducing the power).
It's hard to come to any specific conclusions, other than to state that the selection can be very personal. In reality, most guitarists will be happy enough with most guitar speakers because the amp's tone controls will compensate for response deviations (at least to a degree). There will be exceptions, but these may be due to anything from contractual obligations to simple prejudice or familiarity. Some people just don't like change. If they didn't know that the speakers had been changed they may not even notice (provided the replacements have very similar frequency response).
There are quite obviously many factors that determine the sound, but of those, the magnet is well down the list. At the very top of the list is the material used for the cone, dustcap and voicecoil. Spiders and surrounds also affect the sound, but without a large-scale blind test it's very hard to quantify the audibility of the various components. Reading forum posts and believing what random (and often anonymous) people say is certainly not a useful way to decide on the ideal speaker.
Unless you have listened to a guitar speaker with an aluminium dustcap and found that it produces the sound you are after, I suggest that they be avoided. If you intend to use the speakers in a studio, then you also need to verify that the sound is right when a microphone is used. Mics 'hear' things very differently from the way we humans do, and you may get a nasty surprise if you aren't aware of the potential problems. The same applies to 'unusual' cone materials. Most guitar speakers use paper cones, but some 'universal' drivers may use polypropylene cones that may have a very different sound when overdriven.
+ +Make sure that you have sufficient speaker power to handle your amplifier. A 100W guitar amp ideally needs speakers rated for at least 200W. You have a little more flexibility if you use a valve amp, and you'll usually get away with around 150W speaker power for a nominal 100W amp. 4 × 50W speakers is close to ideal for any 100W amp, but other combinations will work too. There are some playing styles that don't stress the speakers much ('clean' guitar for example), and you can generally get away with less speaker power. However, if the amp is ever pushed hard, then it's worth the peace of mind to know the speakers can handle the full output.
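The rule of thumb above can be captured in a tiny helper; the function name and the 2×/1.5× factors are simply the guideline figures quoted here, not any standard (a sketch only):

```python
def recommended_speaker_power(amp_watts, valve=False):
    """Minimum total speaker power rating suggested by the rule of thumb
    above: roughly 2x amp power for a solid state amp, and about 1.5x
    for a valve amp."""
    factor = 1.5 if valve else 2.0
    return amp_watts * factor
```

For a 100W solid state amp this suggests at least 200W of speaker power (e.g. 4 × 50W drivers), and around 150W for a nominal 100W valve amp.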
+ +The decision to use Alnico, ceramic (ferrite) or neodymium magnets has nothing to do with the tone per se. Tonal differences are primarily influenced by the cone and suspension materials as described above. Of course, it may be that an Alnico speaker happens to have the exact sound you are after, and lacking an equivalent using ceramic or 'neo', that's probably going to be the one you buy. It is important not to conflate the magnet material and other parameters - they are separate, despite many of the claims you will hear. That there might be differences is certainly possible, but the consensus among designers is that the magnet doesn't affect the sound.
+ +Finally - Beware of all marketing information and 'colour glossies' - they are designed specifically to sell you 'stuff', and convince you that non-existent 'differences' are real.
+ + +Please be aware that in common with a lot of material on the Net, any or all of these references may disappear at any moment. I try very hard to ensure that references are current, but this can become very tedious. In some cases, I have used other reference material that may not be listed, but that's mainly for verification of claims made in the references provided. Some claims are simply unable to be verified at all, and as such I tend not to mention things that defy verification.
+ +In all cases, the references are for further examination by the reader. There is no connection between ESP and any of the organisations that are referenced. ESP is completely independent, and does not benefit in any way from citing any company or individual. Opinions in referenced material are those of the company concerned, and are not necessarily endorsed by ESP.
+ +![]() | + + + + + |
Elliott Sound Products - Guitar Pickup Voltages
+This article hopefully helps to answer a question that's often asked, but rarely answered well. The majority of this page is images, all taken directly from my scope, and reproduced at half size. To allow them to be read easily, each is linked to the full-sized image. Even using half-size images makes the page quite large, but I figured that was better than having images so big that they'd take forever to load. The linked full-size images open in the same page, so click the browser's 'Back' button to return.
+ +I tried to be as consistent as possible, but it's not easy. Every time you strike (or pluck) a string it will be a wee bit different. While it might sound very similar, the oscilloscope is totally unforgiving, and will show every tiny difference in the harmonic structure and the overall wave-shape. It's not really feasible to take many waveforms from each string and try to generate an average, as one ends up with a vast number of files that must be relevant to each test. This gets very messy, very quickly.
+ +I have two guitars, one that dates back to around 1966 (yes, really) that's seen a number of modifications over its life. The most recent (still a long time ago) was fitting DiMarzio humbucking pickups. The other is a somewhat newer (only 20-odd years old) Samick 'TV Twenty' (basically a Fender Stratocaster copy with a different head), which has two standard (single coil) pickups (neck and middle), with the bridge pickup being a humbucker. All pickups are 'Duncan Designed', which no doubt means they are not 'true' Duncan pickups. Somewhat surprisingly, I've read a few good reviews of this model.
+ +Each test was with an open E1 (low E string - actually E2 on the piano scale, 82.4Hz), open E2 (high E string) and an open E-Major chord, using the neck pickup, middle pickup (Strat copy only) and bridge pickup. The bass only has one pickup, and I used an open E, open G and a two string 'chord'. Each sample started at the 250ms trigger point, and lasted for 3.75 seconds. This proved to be long enough to get a reasonable idea of the overall trend in each case.
+ +The oscilloscope shows the RMS level averaged across the full four second sweep, and while it's not particularly accurate, it is a useful indicator of what you can expect. Note that these were all taken with volume and 'tone' controls set for maximum, and with a 10MΩ load via the scope's ×10 probe. Most pickups will not start to show any significant loss of level until the preamp's input impedance is less than 68k or so, and even then it can be hard to discern.
+ +Tabulated results aren't especially useful, for the simple reason that there will be huge variations due to playing style, and what's being played. However, I did summarise the results. All numbers are millivolts (RMS) taken from the scope captures shown below. I didn't include the bass, only the two guitars. Note that I use light gauge strings, and you will get more level with thicker ones. I don't have a set for comparison, but I'd expect that you could get at least 6dB (×2) more level when played hard. The pickup resistance is also shown in the table, not because it's especially useful on its own, but you can make comparisons. It includes the parallel resistance of the volume control, as I didn't feel like dismantling my guitars for a more accurate measurement.
+ +I've only included the guitars in the table, and not the two basses I also measured. Feel free to compile your own table from the info in the bass sections.
+ +Pickup Output Voltage - Averaged RMS (Peak) + | |||
Modified Maton | Neck (2.0kΩ) | Middle (N/A) | Bridge (2.0kΩ) + |
E1 | 40 mV (150 mV) | 32 mV (200mV) + | |
E2 | 12 mV (120mV) | 20 mV (300mV) + | |
Chord | 36 mV (200mV) | 36 mV (300mV) + | |
+ | |||
Average | 29 mV (156 mV) | 29 mV (267 mV) + | |
+ | |||
Samick 'TV Twenty' | Neck (11.5kΩ) | Middle (11.3kΩ) | Bridge (15.3kΩ) + |
E1 | 44 mV (250mV) | 76 mV (300 mV) | 120 mV (800 mV) + |
E2 | 12 mV (50 mV) | 12 mV (150 mV) | 16 mV (200 mV) +
Chord | 76 mV (450 mV) | 72 mV (400 mV) | 128 mV (850 mV) + |
+ | |||
Average | 44 mV (250 mV) | 53 mV (283 mV) | 88 mV (617 mV) + |
From the table, it's apparent that the individual voltages can vary widely, but the averages are useful for anyone looking at how much gain a guitar preamp or effects unit will need. With a maximum average output of 128mV RMS (with the peak at just under 1V), a preamp with too much initial gain will distort readily, and this distortion is not affected by the preamp's volume control. On the other hand, an average level of 29mV RMS means that you need more gain than you might have thought. In general, the maximum gain for the first stage should be no more than 20 (26dB) for a 'solid state' preamp, but an overall gain of more than 200 (around 50dB is 'typical') is needed to drive a power amplifier to full power (assuming 2V input sensitivity). Of course, this varies with frequency and tone control settings.
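The gain arithmetic can be sanity-checked with a short sketch (the function name is illustrative). With the 2V amp sensitivity assumed above, a 29mV average pickup level works out to a minimum gain of about 69 (roughly 37dB); the higher overall figure quoted allows for frequency and tone control variations:

```python
import math

def required_gain(pickup_mv_rms, amp_sensitivity_v=2.0):
    """Minimum voltage gain (and its dB equivalent) needed to drive a
    power amp of the given input sensitivity to full output from an
    average pickup level in millivolts RMS."""
    gain = amp_sensitivity_v / (pickup_mv_rms / 1000.0)
    return gain, 20 * math.log10(gain)
```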
+ +Valve preamps can usually handle more gain without clipping, but that's far from guaranteed. It depends on the way the input valve is biased, and high-level transients can push the input valve into grid current well before the maximum output level is achieved. The cathode voltage needs to be greater than the highest likely transient to prevent grid current. If the input valve has a cathode voltage of 800mV, the maximum level before grid current is also (about) 800mV, which may not be enough if the guitar has 'hot' pickups or is played hard.
+ +Of course, if a guitar pickup is 'hot', you can always use the volume control on the guitar to keep preamp distortion at bay, and get some 'bite' if the volume is turned up to eleven (or even just ten). It's also easy to see why most guitar amps have a significant amount of treble boost - it's necessary because the output of the higher strings is almost always lower than expected. As the strings get thinner they have less interaction with the pickup's magnetic field, producing less output (and usually far less sustain as shown in the scope captures).
2 Maton (DiMarzio Humbucking Pickups) Measurements
+ +Neck Pickup (Mouse over to zoom.)
+ + +![]() 150mV Peak, 40mV RMS |
+ ![]() 120mV Peak, 12mV RMS |
+ ![]() 200mV Peak, 36mV RMS |
+
Bridge Pickup (Mouse over for full size)
+![]() 200mV Peak, 32mV RMS |
+ ![]() 300mV Peak, 20mV RMS |
+ ![]() 300mV Peak, 36mV RMS |
+
3 Samick 'TV Twenty' Measurements
+ +Neck Pickup (Mouse over to zoom.)
+![]() Low E - Neck Pickup: 250mV Peak, 44mV RMS |
+ ![]() High E - Neck Pickup: 50mV Peak, 12mV RMS |
+ ![]() Open E Chord - Neck Pickup: 450mV Peak, 76mV RMS |
+
Middle Pickup (Mouse over to zoom.)
+![]() 300mV Peak, 76mV RMS |
+ ![]() 150mV Peak, 12mV RMS |
+ ![]() 400mV Peak, 72mV RMS |
+
Bridge Pickup (Mouse over to zoom.)
+The bridge pickup is a humbucker, and the scale has been increased from 100mV/ division to 200mV/ division. This was needed so the waveform wasn't clipped by the scope. The peak output level is up to 800mV, a significant increase in terms of the scope, but it's only 3dB more than the highest level recorded from the neck pickup.
+ +![]() 800mV Peak, 120mV RMS |
+ ![]() 200mV Peak, 16mV RMS |
+ ![]() 850mV Peak, 128mV RMS |
+
+
4 Bass Guitar Overview
+This table was compiled in the same way as those for the guitars. The data are simply tabulated from the individual scope trace images, and averages for both peak and RMS were determined. The levels overall are much lower than from either guitar. I played an open E, open G and a two string 'chord' for each measurement.
+ +Pickup Output Voltage - Averaged RMS (Peak) + | |||
'Home Made' | Neck (N/A) | Middle (8.5kΩ) | Bridge (N/A) + |
E | 20 mV (90 mV) | + | |
C | 36 mV (125 mV) | + | |
Chord | 28 mV (150 mV) | + | |
+ | |||
Average | 28 mV (122 mV) | + |
+ | |||
'Rowell' Precision Copy | Neck (N/A) | Middle (7.9kΩ) | Bridge (7.9kΩ) + |
E | 32 mV (160 mV) | 12 mV (80 mV) + | |
C2 | 44 mV (150 mV) | 18 mV (130 mV) + | |
Chord | 60 mV (300 mV) | 10 mV (50 mV) + | |
+ | |||
Average | 45 mV (203 mV) | 13 mV (87 mV) +
The two basses may not be fully representative of original commercial offerings, but then again they might be. The 'home made' bass has always had an issue with open-E being somewhat 'subdued' compared to other notes - it may be the pickup position, but unfortunately that's not easily changed. I would expect the output to be lower in general, because the velocity of the strings is less than a 'normal' guitar. The distance between the strings and pickup is also greater, due to the heavy strings and relatively large amplitude. If the pickup is too close, the strings can easily rattle on the pickup. I also used a pick to get a (hopefully) more consistent level. I tested (but didn't capture) a 'slap' style on several strings, and the output was a great deal higher.
+ + +5 - 'Home-Made' Bass (Single Pickup) +
The neck, fretboard and tuning heads are commercial, but the original body was replaced with a piece of solid timber (suitably shaped of course) many years ago. This was a (futile) attempt to improve the E-string performance, which was always a bit 'meh'. Obviously, it can be used with appropriate EQ to restore the missing 42Hz. The pickup is a 'Fender Lace Sensor', and it's equipped with a 'Badass' bridge.
+ +Middle Pickup (Mouse over to zoom.)
+![]() 90mV Peak, 20mV RMS |
+ ![]() 125mV Peak, 36mV RMS |
+ ![]() 150mV Peak, 28mV RMS |
+
+
6 - 'Rowell' Fender Precision Bass Copy (Dual Pickups)
+The provenance of this bass is unknown, but I think it's a Chinese 'semi-copy' of a Fender Precision bass. It has no neck pickup, but has one 'middle' and one bridge pickup. The levels seem fairly consistent with the other bass, with the exception of the levels obtained by finger 'picking' rather than a pick. The pickups both measure 7.9kΩ (including the parallel volume control resistance).
+ +Middle Pickup (Mouse over to zoom.)
+![]() 160mV Peak, 32mV RMS |
+ ![]() 150mV Peak, 44mV RMS |
+ ![]() 300mV Peak, 60mV RMS |
+
Bridge Pickup (Mouse over to zoom.)
+ +![]() 80mV Peak, 12mV RMS |
+ ![]() 130mV Peak, 18mV RMS |
+ ![]() 50mV Peak, 10mV RMS |
+
Middle Pickup, Finger Picked (Mouse over to zoom.)
+![]() 320mV Peak, 76mV RMS |
+ ![]() 280mV Peak, 72mV RMS |
+ ![]() 280mV Peak, 52mV RMS |
+
I didn't include the bridge pickup for this last test, as it's fairly anemic compared to the 'middle' pickup. The ratios will be more-or-less the same though, so expect to get roughly double the output shown if using finger-picking. 'Slap' (or 'popping') will naturally be higher again, but this wasn't tested (the output can be a lot higher, and most bass amps will be pushed into distortion if you use this style of playing). This usually doesn't matter, as slap bass tends to be distorted anyway as the strings vibrate on the fretboard.
+ + +The information here has been compiled with care, but your guitar or bass will be different. It's obviously impossible to provide data on every possibility, but the figures I obtained are likely to be representative of many standard commercial products. If you have active pickups (needing a battery to operate), then the levels will generally be higher, and you will have to take your own measurements to get an accurate result.
+ +Humbucking pickups usually have more output than single-coil types, and some may offer the ability to use the coils in series or parallel. Series coils will provide more output, but are more susceptible to loading if the amp's input impedance is too low. This is rarely an issue. Many things affect 'tone' (including the tone control), with long, high-capacitance leads rolling off the higher harmonics. Active pickups are usually immune from any interaction by the lead.
+ +These tests were done with a short lead (about 2 metres) with a fairly low total capacitance of 700pF. The load impedance was that of my oscilloscope probe (10MΩ), but the levels won't change significantly with realistic (lower resistance) loading. The worst case should still be within 1dB or so, assuming the guitar preamp has an input impedance of 68kΩ or more (this is not uncommon with some input circuits). I tested the Samick's bridge pickup with a 27kΩ load to see the difference, and it reduced the level by about 6dB. However, 27kΩ is far lower than the input impedance any guitar preamp will present.
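A 6dB drop means the voltage halved, which (as a rough sketch, ignoring the pickup's frequency-dependent inductive impedance) implies an effective source impedance near the 27kΩ load value. The divider loss is easy to estimate:

```python
import math

def loading_loss_db(source_ohms, load_ohms):
    """Level loss (dB) from the resistive divider formed by the pickup's
    effective source impedance and the load (preamp input) impedance.
    This ignores inductance, so it's only a rough approximation."""
    return 20 * math.log10(load_ohms / (source_ohms + load_ohms))
```

With equal source and load impedances the loss is 6.02dB, while a 10MΩ scope probe loads a 27kΩ source by well under 0.1dB.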
Elliott Sound Products - Gyrator Based Active Filters
Gyrators (aka 'simulated inductors') are an immensely useful electronic building block, but their operation appears to be deeply mysterious. This shouldn't be the case at all, but since they have been used in several ESP projects and I only touched on them in the Active Filters article, I thought it would be worthwhile to discuss them in a bit more detail.
+ +There's another class of circuit that's commonly referred to as an 'active inductor', but it's really just a modified gyrator that generally doesn't work as well. While you might not think there's much point, in reality every circuit arrangement can come in handy, and it's a matter of selecting the circuit that does exactly what you need. There are also many articles that describe high frequency active inductors implemented in CMOS, and typically using voltage controlled current sources - these are not included here.
+ +The gyrator was first proposed in 1948 by Bernard Tellegen as a hypothetical fifth linear element after the resistor, capacitor, inductor and ideal transformer. A symbol was also created that you may see used in some articles (but not this one). In real terms, capacitors have far fewer issues than inductors: a capacitor has a great deal of capacitance compared with its stray resistance and inductance. On the other hand, a 'real' inductor has copious amounts of resistance, and may also have significant (distributed) capacitance. Wound inductors are also subject to variations in the core material and stray capacitance, which makes them far less 'ideal' than even quite pedestrian capacitors.
+ +If you are not already familiar with the concept of filters or especially opamps, it might be useful to read the article Designing With Opamps - Part 2, as this gives a bit more background information but less detail than shown here.
+ +Filters are used at the frequencies where they are needed, so all the gyrators and filters described here need to be recalculated. In general, increasing capacitance or resistance reduces the operating frequency and vice versa. Gyrators have rather different requirements, and the component selection criteria will be described where needed.
+ +Capacitors used in filter circuits should be polyester, Mylar, polypropylene, polystyrene or similar. NP0 (aka C0G) ceramics should be used for low values. Choose the capacitor dielectric depending on the expected use for the filter. Never use multilayer ceramic caps for filters, because they will introduce distortion and are usually both voltage and temperature dependent. Likewise, if at all possible avoid electrolytic capacitors - including bipolar and especially tantalum types.
+Note: Most of the gyrator and filter circuits shown expect to be fed from a low impedance source, which in all cases must be earth (ground) referenced. Opamp power connections are not shown, nor are supply bypass capacitors or pin numbers. All circuits are functional as shown.
An ESP project that uses gyrators is Project 28, and that uses them configured to be variable. This provides functions that are difficult (and may be comparatively expensive) to implement using other filter types. Equalisers are one of the best examples of where gyrators can be used as a cost-effective alternative to other filter types.
+ +In all but a few cases, maths is kept to the minimum possible. Over many, many years of electronics, I have found that using complex maths equations is rarely needed, and this is doubly true since simulators have become readily available at reasonable prices (or even free, but with limited functionality). All of the circuits shown will simulate well, and measured performance will be different only in that real-life components have real-life imperfections. This is especially true of opamps, which have finite input and output impedance, as well as frequency dependent gain.
+ +All the gyrators shown here are intended for operation from DC to 100kHz, and at the top end of the frequency range very fast opamps are needed. In most cases they will only ever be used over the range of 10Hz to 30kHz, well before opamp limitations cause problems. The demonstration circuits are not suitable for RF applications, where conventional inductors are small enough (and have sufficiently low losses) that trying to synthesise them would be silly. There are applications for RF gyrators (mainly for filters), but these will not be covered.
+ + +In simple terms, a gyrator is an active impedance converter. By using a capacitor as the reactive component, the gyrator converts (or transforms) the impedance from being capacitive to inductive. Gyrators are also sometimes referred to as 'simulated inductors', but that's a bit harsh because in many cases the gyrator will be much better than the 'real thing'! Instead of using a coil of wire wound around a magnetic core, an active device - most commonly an opamp - is used as the impedance converter. This way, we can use a capacitor as the controlling element, but transform its impedance so that the circuit as a whole behaves like an inductor. An inductor will pass DC unhindered, but present an increasing impedance to AC proportional to frequency, and this gives us something to test against.
+ +For example, an ideal 1H (1 Henry) inductor has an impedance of zero at DC, 62.83Ω at 10Hz, 628.3Ω at 100Hz, 6,283Ω at 1kHz, and so on. In reality, our 1H inductor will have significant winding resistance and because it's a coil of wire with a magnetic core, it will pick up radiated magnetic fields. In addition, there is inter-winding capacitance, and that means that it will have a 'self-resonant' frequency that may even be within the audio band. The self resonant frequency is usually outside the audio band, but not always.
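Those reactance figures follow directly from XL = 2πfL; a one-line sketch confirms them:

```python
import math

def inductive_reactance(l_henry, f_hz):
    """Reactance of an ideal inductor: XL = 2 * pi * f * L (ohms)."""
    return 2 * math.pi * f_hz * l_henry
```

For 1H this gives 62.83Ω at 10Hz, 628.3Ω at 100Hz and 6,283Ω at 1kHz, as quoted above.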
+ +By way of comparison, a 1H inductor realised with an opamp and a few passive components will have almost no self capacitance, and can be designed to have an extremely low equivalent winding resistance compared to the wound component. There is also nothing to pick up stray magnetic fields, so placement on a circuit board is not critical.
+ +As noted in the introduction (although it should be evident already for anyone who has read the ESP design notes), it must be reiterated that gyrator based 'inductors' are almost always used only for audio frequencies, and they are generally unsuited for RF (radio frequency) work. In this context, 'audio frequency' actually means anything below low RF frequencies, and gyrators will work happily from DC up to perhaps 50kHz or so. Higher frequencies are possible, but need very fast opamps that still have lots of gain at the frequency of interest.
+ +Because discrete gyrators are most commonly based on opamps, simulated inductors are not suitable for use in power supplies, or anywhere else where an inductor is used for energy storage (switchmode power supplies for example). Fully floating (not earth referenced) gyrators are possible, but are far more complex than the traditional types and will not be covered in this article.
+ +Gyrators actually do have the same energy storage capabilities as 'real' inductors, but their ability to generate a flyback pulse (when current through an inductor is suddenly interrupted) is limited to the supply voltage for the opamp used. The most common use for gyrators is as filter elements, but for the most popular types even this role is limited because one end of the gyrator inductor is referenced to the system common (typically earth/ ground). In many cases, traditional active filter circuits are a better choice than gyrators when you just need a standard high or low pass filter.
+ +An extremely useful characteristic is that the inductance of a gyrator can be varied over a fairly wide range, and this makes some circuits possible that would otherwise be far more complex using more traditional circuits. As with anything in electronics (or any other form of engineering for that matter), there are compromises, and whether these cause a problem or not depends on the application. Just like real inductors, gyrators are not perfect, but they can be made with far fewer imperfections than a coil of wire. This is very handy.
+ +You may see gyrators referred to as an 'FDNR' network. This means 'frequency dependent negative resistance', and is true of a particular form of gyrator that uses two opamps and one or two capacitors. These are potentially interesting, but are quite complex and are useful only in specialised applications. They are outside the scope of this article, and won't be covered here (well, actually they will, but only briefly).
+ +One thing that is interesting but completely pointless - you can replace the capacitor in a gyrator with an inductor, and the circuit will behave just like ... a capacitor. This works, provided that you have access to ideal inductors but are short of low performance capacitors. If this is the case, then technology is your friend, and you can create very ordinary capacitors using a really good inductor and an opamp. Or you could just use a capacitor and live with the fact that its performance will be much better than one you can build with an impedance converter (aka gyrator). If you don't believe this is possible, I encourage you to try it, either with a simulator or in a circuit you can build on a breadboard.
+ +All of the circuits described here work, and can be built, with the possible exception of the circuit shown in Figure 8.2. Even that will work if you build an opamp that can run from ±100V supplies. The others are completely conventional, and it's very educational to build one so you can fully appreciate the versatility of gyrators in general. I have deliberately avoided the more complex versions that you might see elsewhere, since they offer no real benefits for normal audio frequency applications. I have also avoided all purely theoretical gyrators (those that cannot be built using real parts).
+ +Audio frequency does not mean audio in the hi-fi sense. It simply means that the circuits are designed to work within the normal (extended) audio frequency range. This includes extremely low frequencies (less than 20Hz) and frequencies up to around 100kHz (with fast opamps). Gyrators will operate right down to fractions of 1Hz, although the component values will often be rather large. In this context, 'audio' includes the telephone system, test and measurement, vibration analysis and anything else that falls within the range of DC to 50kHz or so.
+ + +In general, it is preferable wherever possible to operate all opamps in a circuit using a dual power supply. Typically, the supply rails will be ±15V, although this may be as low as ±5V in some cases. While a single supply can be used, it is necessary to bias all opamps to a voltage that's typically half the supply voltage. Dual supplies are assumed for all the circuits shown here. Opamp pinouts are not shown, because experimenters may use either single or dual opamps. Most 'ordinary' opamps will work just fine in the circuits shown here, and I suggest μA741, 1458, 4558 or TL071/ 72 or similar if you wish to build the circuits to test them. All circuits shown will work as described if they are built without errors.
+ +Note that all circuits omit the power supply pins for clarity, but it is essential that they are connected to suitable supply voltages for the opamps to work. Refer to the data sheet for the opamp(s) you wish to use to obtain pinouts and performance data. Remember to include supply bypass caps or the opamp(s) may oscillate.
+ + +Selecting the right values is more a matter of educated guesswork than an exact science. The choice is determined by a number of factors, including the opamp's ability to drive the impedances presented to it, noise, and sensible values for capacitors and resistors. While a 100Hz filter that uses 100pF capacitors is possible, the 15.9M resistors needed are so high that noise will be a real problem. Likewise, it would be silly to design a 20kHz filter that used 1µF capacitors, since the resistance needed is less than 10Ω.
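Both examples come straight from the single-pole relationship R = 1 / (2πfC); as a quick sketch:

```python
import math

def filter_resistance(f_hz, c_farad):
    """Resistance required for a -3dB frequency of f_hz with a given
    capacitance: R = 1 / (2 * pi * f * C)."""
    return 1.0 / (2 * math.pi * f_hz * c_farad)
```

100Hz with 100pF needs about 15.9MΩ (noisy), while 20kHz with 1µF needs under 10Ω (hard to drive), matching the figures above.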
+ +E12 | 1.0 | 1.2 | 1.5 | 1.8 | 2.2 | 2.7 | 3.3 | 3.9 | 4.7 | 5.6 | 6.8 | 8.2 | + | |||||||||||
E24 | 1.0 | 1.1 | 1.2 | 1.3 | 1.5 | 1.6 | 1.8 | 2.0 | 2.2 | 2.4 | 2.7 + | 3.0 | 3.3 | 3.6 | 3.9 | 4.3 | 4.7 | 5.1 | 5.6 | 6.2 | 6.8 | 7.5 | 8.2 | 9.1 + |
Capacitors are the most limiting, since they are only readily available in the E12 series. While resistors can be obtained in the E96 series (96 values per decade), for audio work this is rarely necessary and simply adds needless expense. The E24 series is generally sufficient, and these values are usually easy to get.
+ +High resistance values cause greater circuit noise, and if low value resistances are used, the opamps in the circuit may be prematurely overloaded trying to drive the low impedance. All resistors should be 1% metal film for lowest noise and greatest stability. Capacitance should be kept above 1nF if possible, and larger (within reason) is better. Very small capacitors are unduly influenced by stray capacitance of the PCB tracks and even lead lengths, so should be avoided unless there is no choice. None of this matters much if you just want to play with the circuits, so use the parts you have available.
+ +Capacitors should be polyester, polypropylene or Mylar. Never use multilayer ceramic caps except for supply bypassing! Where low values are needed, use NP0 (aka C0G) ceramic if possible.
+ +Unless there is absolutely no choice, avoid electrolytic (including bipolar [non-polarised] types) completely. They are not suitable for filters, and may cause audible distortion in some cases. Tantalum caps should be avoided altogether! There will likely be some applications where an electrolytic capacitor is the only sensible choice, but you must understand the limitations, particularly tolerance and distortion.
+ +Type | Q (1kHz) | Tempco (ppm/°C) | Temp (°C) + |
Mica | 600 | 1 to +70 | -55 to +125 + |
Polystyrene | 2,000 | -150 ±50 | -55 to +85 + |
NP0 Ceramic | 1,500 | ±30 | -55 to +125 + |
Polypropylene | 3,000 | -115 | -55 to +125 + |
Polycarbonate | 500 | +50 | -40 to +125 + |
Polyester (aka Mylar) | 100 | +160 | -40 to +100 + |
If very high Q is needed, you'll need to use fairly exotic capacitors, with polypropylene being the best. Polyester is fine for non-critical applications, especially where a small capacitance drift with temperature won't cause problems. With most audio circuits there's no need for anything special, but for precision test and measurement applications, one often needs to select capacitors with great care.
+ +For the vast majority of circuits you will build, it doesn't matter which type of cap you use. It is very rare that extremely high Q is ever needed (almost never for audio), and over the normal room temperature range the variation of capacitance is quite small and won't cause problems. Many people look down their noses at polyester and consider it to be inferior, but no double-blind test has shown that the difference between polyester and (say) polypropylene is audible. In any simple audio frequency circuit polyester is the most readily available and cheapest of the film caps, and is generally all that's needed. If you happen to be building a test instrument that needs high Q and must remain very stable over time, then use polypropylene or polystyrene (the latter can be hard to get these days).
+ + +Figure 3.1 shows an inductor and a gyrator, with the points of equivalence indicated. A gyrator will exhibit all of the equivalent 'stray' resistance and capacitance shown. Not having to deal with core losses and magnetic susceptibility are the most compelling reasons to use a gyrator where possible. Given that it is also cheap and can be made adjustable makes it all the more appealing.
+ +The gyrator is configured for an inductance of 1H, and R1 is the exact equivalent of the winding resistance of the inductor (Rw). The inductor's (exceptionally low) core loss is simulated by the 100k resistor and 100nF cap in parallel with the inductor. These also exist in the gyrator as C1 and R2. Provided the input voltage is maintained at no more than the maximum output swing of the opamp and well below the core saturation limits for the inductor, the two circuits perform identically. In each circuit, inductance is 1 Henry, Rs is the source resistance, and as shown the measured -3dB frequency for both high-pass filters is 169 Hz ...
+ f = Rs / ( 2π × L )
+ f = 1k / ( 6.283 × 1 ) = 159 Hz
The reason the formula and measured results are different is that the formula assumes the inductor is 'ideal' having no parasitic resistance or capacitance. Ideal inductors don't exist, so there will always be a small error when making calculations where inductors are involved. Gyrators are no different in this respect.
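The ideal-inductor calculation above can be reproduced directly; a minimal sketch:

```python
import math

def lr_highpass_cutoff(rs_ohms, l_henry):
    """-3dB frequency of a series-resistance / shunt-inductor high pass
    filter, assuming an ideal inductor: f = Rs / (2 * pi * L)."""
    return rs_ohms / (2 * math.pi * l_henry)
```

With Rs = 1k and L = 1H this gives 159Hz; the measured 169Hz differs because the real (and simulated) inductors include the parasitic elements described above.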
+ +In the above, there are actually two traces, red for the inductor and green for the gyrator. They are so perfectly overlaid that you can only see the green trace, because the red trace is directly behind it and can't be seen. This response graph was included to show that the gyrator really is directly equivalent to an inductor. If there were a difference, it would be visible. The next step is to examine the phase shift, measured with an input signal of 100Hz. This is a sure-fire way to prove that the gyrator really is an inductor.
When we look at the voltage and current of the signal into the inductor/ gyrator we get the traces shown above. In an inductor, the current lags the voltage, and in Figure 3.3 you can see that this does indeed hold true for a gyrator - the same one as shown in Figure 3.1. If the circuit were resistive, voltage and current would be in phase. For a capacitor, the current leads the voltage - it may seem impossible for the current to come before the voltage, but it does (one of the many exciting things to learn about AC circuits in general).
In reality, it's often quite difficult to get an inductor to work as well as a gyrator. A small amount of DC won't affect the gyrator at all, but can have a dramatic effect on the inductor, especially at low frequencies where the core is likely to saturate. For example, it's very easy to have a gyrator with an inductance of 100H or more, but a wound component of the same value will be large, expensive, very susceptible to external magnetic fields and easy to saturate at low frequencies or with even small amounts of DC. Gyrator inductors can have extremely high inductance, yet will not saturate at any frequency as long as the current and voltage always remain within the limits of the opamp used.

That's not to say that gyrators are perfect by any means. They are built with real-world parts, and while resistors are usually very good (having very low stray inductance and/or capacitance), opamps have real limitations. So do capacitors, and depending on the intended use of the gyrator the caps you use can have a profound effect. The table in the previous section shows the basic characteristics of different dielectrics. For most applications you can use standard polyester caps, but (for example) a 1,000H inductor for a measurement system may demand a cap with lower losses and higher Q.
As noted earlier, a gyrator can easily be made adjustable, something that requires ingenuity and a friendly relationship with a machine shop (for the complex precision linkages) if you want to make a standard inductor variable. With a gyrator, all you need is a pot (potentiometer) and you can easily vary the inductance over a 10:1 range or more. This makes specialised tunable filters quite easy, and provides many useful options. The inductance presented by the gyrator shown in Figure 3.1 is calculated by ...
L = ( R2 - R1 ) × R1 × C1 (This is almost always abbreviated to the following, because the effect of R1 is usually very small.)
L ≈ R2 × R1 × C1
L ≈ 100k × 100 × 100n = 1 Henry
The usual way to make the gyrator variable is to vary R2. If it's increased to 200k the inductance will be 2H, and if reduced to 50k it will be 500mH. To obtain a different frequency range, C1 is changed. So with the values shown but making C1 200nF the inductance is again increased to 2H and with 50nF it's 500mH. This relationship works over a wide range, but there will always be upper and lower limits for R2 and C1 - neither should be made so large or so small that the values become unwieldy, or stray capacitance and resistance affect the results.
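The inductance formula and the scaling described above can be checked in a few lines. A sketch in Python, using the component values from Figure 3.1 (R2 = 100k, R1 = 100Ω, C1 = 100nF):

```python
# Gyrator inductance using the abbreviated formula L ≈ R2 × R1 × C1.
# Values from Figure 3.1: R2 = 100k, R1 = 100 ohms, C1 = 100 nF.
def gyrator_inductance(r2_ohms, r1_ohms, c1_farads):
    return r2_ohms * r1_ohms * c1_farads

L_nominal = gyrator_inductance(100e3, 100.0, 100e-9)  # 1 H
L_r2_up   = gyrator_inductance(200e3, 100.0, 100e-9)  # 2 H (R2 doubled)
L_c1_down = gyrator_inductance(100e3, 100.0, 50e-9)   # 0.5 H (C1 halved)
```

Doubling R2 or C1 doubles the inductance, and halving either halves it, exactly as the text describes.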
In general, C1 should be in the range of 10nF up to 1 or 2µF (otherwise the cap will be physically too large), and R2 will be in the range from 10k up to 1Meg or so. R1 should normally not be less than 100Ω as shown. Higher values will increase the inductance, but at the expense of additional series resistance that may have adverse effects on the filter circuit. In some cases the added resistance may be an advantage, so select the value as needed. Also, R2 is effectively in parallel with the inductor, and low values reduce the available Q, and this has consequences in filter circuits (especially when R2 is made variable as shown later in this article).

One of the most common applications for LC filters (whether made using 'real' or simulated inductors) is the resonant filter. This can be configured to be either a notch (aka band stop) or bandpass filter. Both are shown below. As before, the inductance is 1H. A notch filter is configured as a series resonant circuit, and the resonant circuit has a very low impedance at resonance. A parallel resonant circuit has a high impedance at resonance.

The signal is fed to each resonant circuit via the resistor 'Rs' (series resistor), and the resonant circuits are completed by either a series (Cs) or parallel (Cp) capacitor. Determining the resonant frequency is done just as one would calculate the resonant frequency of a traditional LC filter ...
f = 1 / ( 2π × √( L × C ))
f = 1 / ( 2π × √( 1 × 100n )) = 503 Hz
A series resonant circuit is effectively a short circuit at resonance, but that is limited by the winding resistance (R1 in the gyrator). Since some winding resistance is unavoidable in both real and simulated inductors, for these examples the total impedance cannot be less than 100Ω. A parallel resonant circuit is close to being open circuit at resonance, and again this is limited in real and simulated inductors by the effective parallel resistance (core loss) or R2.
At resonance, the impedance of the inductance and capacitance are equal. For the above example, resonance is 503Hz for both circuits. It's easy to determine the reactance of the capacitor and inductor using the traditional formulae ...
XL = 2π × L × f
XL = 2π × 1 × 503 = 3.16 kΩ
XC = 1 / ( 2π × C × f )
XC = 1 / ( 2π × 100n × 503 ) = 3.16 kΩ
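The resonance and reactance formulae above can be verified together. A sketch in Python, with L = 1H and C = 100nF as in the example:

```python
import math

L = 1.0      # henries
C = 100e-9   # farads

f0 = 1 / (2 * math.pi * math.sqrt(L * C))  # resonant frequency, ~503 Hz
XL = 2 * math.pi * L * f0                  # inductive reactance at resonance
XC = 1 / (2 * math.pi * C * f0)            # capacitive reactance at resonance
# At resonance XL = XC, and both equal sqrt(L / C) ≈ 3.16 kOhm
```

Note that at resonance both reactances reduce to √(L/C), which is a quick way to find the circuit's characteristic impedance without first computing the frequency.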
In the case of series resonance, the two impedances are equal and the signal has opposite phase through each, so they cancel leaving only the stray impedances (winding resistance and capacitor ESR - equivalent series resistance). The result in the example shown is that the impedance across the series LC network is a little over 100Ω. For parallel resonance, the two impedances also have opposite phase, but now they cancel in such a way as to appear to be an open circuit. Again, this is limited by the inductor's parallel resistance (representing core loss) and by any leakage through the capacitor.
Because of the inevitable losses, the series circuit can't achieve an infinite notch depth, and the parallel circuit cannot achieve 0dB insertion loss at resonance. The notch depth is limited to -34dB (not infinite), and the maximum for the bandpass filter is -1.58dB (not zero). We can calculate that for the series circuit, the total resistance of the series tuned circuit is about 204Ω, and for the parallel tuned circuit it's around 50k. You want proof of that?
VD = ( Rs + Rt ) / Rt (Where Rs is the feed resistor and Rt is the tuned circuit's effective series resistance)
VD = ( 10k + 204 ) / 204 = 50
dB = 20 × log( 50 ) = 34dB
I leave it to the reader to do the same calculation for the parallel tuned circuit (and yes, it gives the result I measured). While you don't need to remember all of this, you do need to be aware of the limitations. That way, you won't be quite so puzzled when you see some of the effects of using gyrators (or even inductors) in real circuits. This really starts to show up when you use tunable filters based on gyrators, and you'll see the Q and peak amplitude change as the gyrator's inductance is varied. This happens because the Q of the circuit is altered by the effective parallel resistance, represented by R2.
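Both limits (the -34dB notch depth and the -1.58dB bandpass insertion loss) follow directly from the voltage divider. A sketch in Python, using the 204Ω and 50k effective resistances quoted in the text with the 10k feed resistor:

```python
import math

Rs = 10e3  # feed resistor (ohms)

# Series tuned circuit: effective series resistance at resonance (~204 ohms, from text)
Rt_series = 204.0
notch_depth_db = 20 * math.log10((Rs + Rt_series) / Rt_series)  # ~34 dB

# Parallel tuned circuit: effective parallel resistance at resonance (~50k, from text)
Rt_parallel = 50e3
insertion_loss_db = 20 * math.log10((Rs + Rt_parallel) / Rt_parallel)  # ~1.58 dB
```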
Determining the Q is (supposedly) simple, but the formulae provided in most texts are often wildly inaccurate. Q is measured by taking the centre frequency (503Hz in the example above). Then the -3dB frequencies above and below resonance are determined, providing the bandwidth. With the Figure 4.1 circuit, the bandpass bandwidth is 192Hz, and Q is simply ...
Q = fo / BW
Q = 503 / 192 = 2.62
When used as a series resonant circuit, the Q is completely different! The same formula is used, but the notch filter has a much higher Q. The bandwidth of the notch is 31.8Hz, so Q is 15.8. The determination of Q is always an approximation, and while a calculated value is usually accurate enough, if you need to know the exact value it can only be done by measurement (either on the workbench or using a simulator).
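The two Q figures quoted above come straight from the bandwidth formula. A sketch in Python, using the measured bandwidths from the text:

```python
# Q from centre frequency and measured -3dB bandwidth (figures from the text).
f0 = 503.0  # Hz, resonant frequency

Q_bandpass = f0 / 192.0  # parallel (bandpass) circuit, BW = 192 Hz -> ~2.62
Q_notch = f0 / 31.8      # series (notch) circuit, BW = 31.8 Hz -> ~15.8
```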
An 'active inductor' is basically a gyrator by another name. However, there are some differences in both the circuit itself and the way it works. Most common active inductors use two equal value resistors, typically 1k. The schematic below shows the difference between a gyrator and an active inductor, both set to provide the same inductance - 1 Henry. As with a 'standard' gyrator, inductance is calculated as R1 × R2 × C1. The 'equal resistance active inductor' generally has worse performance than a gyrator and usually needs a much larger capacitor, but may be useful in some cases - especially where small values of inductance are needed.

The circuit shown for the active inductor is fairly typical, and R1, R2 are usually equal. One issue is that the capacitor is a much higher value, and the opamp load is increased. The opamp may be expected to provide as much as 10 times the output current for an active inductor compared to a gyrator. As resistor values are reduced (to obtain a lower 'winding' resistance), the cap must be larger and opamp current increases until a point is reached where the opamp cannot provide enough current. The circuit will then distort. Opamp output current is highest at high frequencies - well above the cutoff frequency.
For example, with the two circuits shown above, the input was 1V peak (707mV RMS) at a frequency of 1kHz. The gyrator opamp output current is 405µA peak (288µA RMS), and the opamp in the active inductor circuit has to provide 2.46mA peak (1.74mA RMS). At 10kHz the current in the gyrator's opamp output is 48µA RMS, and that from the active inductor increases to 3.4mA RMS. At high frequencies the opamp is least capable of providing significant output current, so the active inductor is at a great disadvantage. As the frequency is increased further, gyrator output current falls, but output current for the active inductor remains high. Low frequency output current favours the active inductor, but there's no significant advantage. The frequency response is shown below.
You can see that the active inductor can only reduce the level to -20dB at low frequencies, and this is due to the extra resistance (R1 = R2 = 1k). You will also see that the cutoff frequency is higher than in the previous example shown in Figure 3.1, and that's because the series resistors are 10k, not 1k as before. With a 1k series resistor, the active inductor can only reduce the level by 6dB. This is not always a disadvantage though, as you might only need 6dB, depending on your specific requirements.
In general, the active inductor offers no real benefits, but needs a larger cap for the same inductance and places higher demands on the opamp at frequencies above cutoff.
To make a gyrator variable, all that's needed is to use a pot instead of one of the resistors. Making R2 variable would be a bad idea, as that will change the effective series resistance and change operating characteristics. In the drawing below, you can see that R1 has been replaced by a pot with a resistor in series. A fairly sensible arrangement might be to use a 22k resistor and a 100k pot. For the circuit shown using 100k for both the resistor and pot, that means the inductance can be varied from 1H to 2H - a ratio of 2:1. The range can be increased, but that comes with some caveats.

The above schematic shows a parallel tuned circuit. The inductive leg of the tuned circuit is a variable gyrator, and inductance is changed by using VR1. The pot can be a higher resistance to get wider range, but the circuit's overall Q will change quite dramatically as a result. This is why most variable gyrators are limited to a fairly low ratio if a reasonably consistent Q is needed.
You can see in the response diagram that the Figure 6.1 circuit changes the resonant frequency from 356Hz (pot at maximum resistance) to 503Hz (pot at zero). By repositioning the parallel cap to form a series tuned circuit (as shown in Figure 4.1), you have a notch filter that can be tuned over the same range. You can also change the range by changing the value of C1 or Cp (Cs for a series tuned network). As you can see, the Q and insertion loss change as the inductance is changed, and this needs to be examined to see why.
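The two tuning extremes quoted above follow directly from the resonance formula. A sketch in Python, with Cp = 100nF and the 1H/2H inductance limits from the text:

```python
import math

Cp = 100e-9  # parallel capacitor (farads)

def resonant_freq(L, C):
    # Standard LC resonance: f0 = 1 / (2 * pi * sqrt(L * C))
    return 1 / (2 * math.pi * math.sqrt(L * C))

f_low = resonant_freq(2.0, Cp)   # pot at maximum resistance: L = 2 H
f_high = resonant_freq(1.0, Cp)  # pot at zero: L = 1 H
print(round(f_low), round(f_high))  # 356 503
```

A 2:1 inductance range gives only a √2:1 (about 1.41:1) frequency range, which is why wide tuning ranges demand large inductance (or capacitance) changes.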
In reality, neither insertion loss nor Q is easy to predict, as they are the result of complex interactions. The impedance of the capacitive and inductive parts of the circuit is 3.16k when the gyrator is configured for 1H inductance, and the Q is 2.6 for the LC network. At 2H, the impedance of both branches (capacitive and inductive) is 4.47k, so the Q is reduced to 2.05 - it's not as great because the impedances in each branch are closer to the value of the feed resistor (Rs).
For the highest Q, the impedance of the inductive and capacitive branches has to be as low as possible - lower impedance means higher Q. This means that the inductance must be reduced and the capacitance increased. There are limits though, and it's unrealistic to expect very high Q from gyrator based filters. Trying to increase the Q beyond reasonable limits will cause much greater insertion loss. The approximate Q for a parallel tuned circuit can be determined by ...
Q ≈ Rs / ( XL + R2 ) (Where Rs is the series feed resistance, XL is the inductive reactance, and R2 is the gyrator's series 'winding' resistance - 100Ω in Figure 6.1)
Q ≈ 10k / ( 3.16k + 100 ) ≈ 3.07
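As a quick numerical check of the approximation above (values from the text: Rs = 10k, XL = 3.16k, and a 100Ω series resistance; note the measured Q of 2.62 is lower, as the approximation ignores the parallel losses):

```python
Rs = 10e3         # series feed resistance (ohms)
XL = 3.16e3       # inductive reactance at resonance (1 H at ~503 Hz)
R_series = 100.0  # gyrator's effective series ('winding') resistance

Q_approx = Rs / (XL + R_series)  # ~3.07
```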
Another factor that causes the Q to change is the gyrator's parallel resistance (R1 and VR1 in series). Adding the parallel resistance reduces the Q just as it will with a 'real' inductor, because it adds damping to the circuit. With physical inductors you still have the same effect, due mainly to core losses (assuming ferrite or laminated cores). These don't exist in air-cored inductors of course, but due to the low values of inductance available, they are limited to radio frequencies. As noted already, the Q is very different for two filters using the same values, but configured as series resonant or parallel resonant circuits. With series resonance, the impedance is at a minimum, and Q is determined by the series feed resistance.

Fortunately, it's very rare that high Q filters are ever needed, and if you do need a very high Q bandpass or band stop (notch) filter then there are much better alternatives, such as the multiple feedback bandpass or twin-tee notch filters, or one of the 2-opamp variations described in Section 13 of this article. If you need to be able to tune the filter as well, then you can use a state-variable type, probably the most versatile filter topology ever created. The cost (of course) is complexity.

Sometimes, you might need a number of filters, and the cost of opamps may be prohibitive, both in monetary terms as well as PCB real estate. In these cases, you might consider using an emitter follower rather than an opamp. There are caveats of course, but whether they cause a problem depends on the application. In some cases, you may find that you need more parts with an emitter follower gyrator, but they will make the PCB layout much easier.
Gyrators can also be made using JFETs or even valves (vacuum tubes), but the performance of both is worse than that of a bipolar transistor, and in general they should be avoided. Even a transistor isn't wonderful, but it is usable if you don't need optimum performance.
It must be understood that there is a fairly large difference in performance if you use anything other than an opamp. If you happen to be designing a piece of test equipment then the loss of performance will probably be unacceptable, but for an audio equaliser it may be quite alright. By way of comparison, look at the circuits below.
The transistor version only needs one extra resistor, but it also needs quiet power supplies because there is almost no power supply noise rejection. The output will also be pulled to a voltage somewhat less than zero - in this case around -2.3V, but it depends on the gain of the transistor and the value of R1. The DC offset may or may not be a problem, depending on the application. Because the transistor's drive capability is not as good as the opamp's, R2 has been increased to 560Ω, and C1 reduced to 18nF. This still provides roughly 1H, but it will be a little less with the transistor because of its gain (only about 0.998 instead of unity). That small loss of gain makes a measurable difference!
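The revised values can be checked against the inductance formula. A sketch in Python; R2 = 560Ω and C1 = 18nF are from the text, while R1 = 100k is an assumption carried over from the earlier examples:

```python
# Transistor gyrator values from the text: R2 = 560 ohms, C1 = 18 nF.
# R1 = 100k is an assumption, carried over from the Figure 3.1 example.
R1 = 100e3  # ohms (assumed)
R2 = 560.0  # ohms
C1 = 18e-9  # farads

L = R1 * R2 * C1  # ~1.008 H, i.e. "roughly 1H" as the text says
```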
There's a little trick that you can use too, but it only works with a parallel tuned circuit as shown here. The output can be taken from points 'A' or 'B' instead of the normal output terminal, and the performance is improved by doing so. The effect is greater with the opamp (as you'd expect), but it also improves the transistor version sufficiently to make it a much more useful circuit.
As you can see, the response when the output is taken from Point 'B' is better at the low frequency end of the spectrum. There's a 5dB improvement at 100Hz, and it's about 8dB better at 20Hz. That's a worthwhile increase in rejection of 'out of band' signals, simply by changing the location of the output terminal. The improvement is even more dramatic with the opamp version. The high frequency end of the spectrum isn't affected, because that part of the filter is provided by the capacitor, which has no significant limitations due to series resistance.
This is where things get 'interesting'. The standard circuit topologies, including the simple emitter follower, but converted to cathode/ source follower, don't work. The 'work-around' leads to an end result that's quite a clever bandpass filter, but it's not a gyrator. For starters, the circuits are configured to have gain, and this isn't the case with a 'true' gyrator. There also doesn't appear to be a sensible way to determine the frequency. Both are shown tuned to 1kHz, but this was done by simulation, not calculation. The valve circuit is based on a simulation of the first 'graphic' equaliser - the Blonder-Tongue 'Audio Baton' (ca 1956) [ 11 ], and the JFET version is simply a 'transformation' of the valve design, with impedances changed to suit the low voltage supplies. A JFET wired the same way as a bipolar transistor works, but not very well (in fact, the performance is best described as woeful). It can be improved, but only with additional complexity.

With both circuits, almost everything affects the tuning frequency. The source/ cathode and drain/ anode resistors both have an effect, and the output coupling cap (C4) is necessary to roll off the low frequency component. If this is too large, the output will fall by 10dB (JFET) or 20dB (valve) and flattens out - the response does not continue to fall. With the values shown, both have a peak frequency of (close enough to) 1kHz. The JFET version has a gain of 9dB, and the valve version has a gain of 17dB. The Q of each is a little different - 1.0 (JFET) and 1.45 (valve).

I don't propose to cover these in any further detail, as IMO they are marginal at best, difficult to tune properly, and far more complex than an opamp. Both the circuits are made more complex by the requirement for biasing, and it's largely that which means that they have a completely different topology from the 'true' gyrators shown throughout the rest of this article. While I'm sure that there is a formula that can be used to determine the component values for any given frequency, I don't intend to try to determine what it might be.

While not especially useful, the valve/ JFET version can also be adapted to use a BJT. It's easy to make it perform better than the valve (which should come as no surprise), but it's still a bandpass filter and not a gyrator. Compared to a multiple-feedback bandpass filter (opamp based) it doesn't come close. The MFB filter also provides the option to select the gain, Q and frequency independently, something that can't be done easily with the circuits shown in Figure 7.2.1.

It's worth knowing that the vast majority of 'valve gyrator' circuits you may see on the Net are nothing of the kind. Most use a current source as the anode load. While this improves linearity, it is not a gyrator by any stretch of the definition.
You may well ask why this isn't at the top of the article, but it's here for a good reason. Until you appreciate all the interesting things you can do with gyrators and see them in action, there's not much incentive to understand how they work. So, after describing what they can do, hopefully the reader will be interested enough to want to understand the finer points of how the gyrator manages to mimic an inductor. In most cases, the opamp is connected as a unity gain non-inverting buffer. Being able to 'create' an inductor is extremely useful, because it means that we can make resonant circuits very easily.

One explanation for how a gyrator functions is to look at the way the cap is connected. C1 and R1 in all the examples are wired as a differentiator. The circuit's input is then 'bootstrapped' via R2, which transfers the functionality of the differentiator to the circuit's output. In most of the examples shown in this article, inductance is set to 1H, with C1 being 100nF, R1 is 100k and R2 is 100Ω. We know that an inductor fed from an AC voltage source will appear to be a high pass filter, and we need to look at some drawings to understand this better.

Filter 'A' is a simple high pass filter using a capacitor - a differentiator. In many cases this is all we might need, but we may need a true reactive component that's the functional opposite of a capacitor. So, we start with the arrangement shown in 'B' and add a buffer and a resistor (R2) as shown in the previous examples. We want to end up with 'C' - an inductor. By monitoring the voltage across R1, buffering it and sending it back to the input via R2, we have created the functional equivalent of an inductor. The voltage across C1 in 'B' is shown, and it is only ever a small fraction of the input voltage because of the buffer and R2.

The opamp circuit reproduces the voltage across the resistor (R1) rather than that across the capacitor (C1). Therefore the circuit effectively inverts the impedance characteristics of the capacitor. The inverse of a capacitor is an inductor, and that's what is presented to any external circuit. None of this can be considered intuitive though, and it's not always an easy concept to grasp.
All the diagrams and waveforms in the world don't actually help you understand how a gyrator works. It will also require a small 'leap of faith' for the reader, as understanding exactly what happens is not intuitive. You also need to understand exactly what an inductor does when it is presented with a signal. An inductor presents a low impedance to low frequencies, and the impedance increases with increasing frequencies. An ideal inductor is effectively open circuit at the instant it is presented with an external DC voltage, and it takes time for the current to reach the maximum value.

Perhaps surprisingly, one of the easiest and best ways to describe how a gyrator works is to analyse what it does when presented with a DC voltage with a fast rise time. Ideally, it will behave like an inductor, and limit the risetime of the current. To understand all of this properly, you will need good analysis skills, and a simulator that includes ideal components will be very helpful. Otherwise, you can follow the text here and just accept the descriptions I provide. I recommend that you try it yourself though - experience is the best teacher.

An inductor opposes current flow, so at the first instant when the DC is applied, no current flows. Depending on the source impedance and the inductance (which determine the time it takes), the current gradually builds until it is interrupted or reaches its maximum possible value. Maximum current is determined by the applied voltage and the total circuit resistance. A gyrator should behave the same way, and with a good opamp (one approaching 'ideal') a gyrator will come very close. If we can get a gyrator to behave like an inductor then we have successfully demonstrated that they are equivalent.

Simulation of an inductor and a gyrator built with the simulator's 'ideal opamp' shows that the performance is identical in all respects. Of course, real (as opposed to ideal) opamps won't perform as well, and in particular they are incapable of producing the 'flyback' pulse that one gets from an inductor. This isn't a limitation of the gyrator itself, only the opamp, which has finite supply voltages that cannot be exceeded. If you were to build a fast unity gain buffer that could operate from ±100V supplies, then everything described here will really happen! Yes, it is possible to design such a circuit, but I'm not about to do so.

Assume a suitable inductor, fed from a 12V supply with a 1k limiting resistor (Rs). In the inductor, the rise of current is suppressed by the magnetic coupling. Lenz's law states that "An induced electromotive force (EMF) always gives rise to a current whose magnetic field opposes the original change in magnetic flux." In other words, the application of a voltage causes a current to flow that creates a magnetic field, which in turn generates a back EMF in the inductor that opposes the externally applied voltage. The instantaneous voltage at the inductor terminals is the full external voltage, and as current increases through the inductor the voltage will fall. After a period of time that depends on the inductance, the current through the inductor will be 12mA (12V / 1k) and the voltage across it will be zero.
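The current build-up just described follows the standard RL exponential. A sketch in Python, assuming an ideal inductor; the 12V supply and 1k resistor are from the text, while L = 10H is borrowed from the gyrator example discussed later:

```python
import math

V = 12.0   # applied DC (volts)
Rs = 1e3   # limiting resistor (ohms)
L = 10.0   # inductance (assumed 10 H, the value used in the later gyrator example)

tau = L / Rs  # time constant: 10 ms for these values

def inductor_current(t):
    """Current in an ideal series RL circuit, t seconds after the switch closes."""
    return (V / Rs) * (1 - math.exp(-t / tau))

i_max = V / Rs                 # 12 mA maximum (final) current
i_tau = inductor_current(tau)  # ~63.2% of i_max after one time constant
```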
The above assumes an ideal inductor, and we know they don't exist. As a result, the voltage can only fall to a final figure that depends on the external resistance (Rs - 1k) and the coil's resistance - let's say 100Ω (R1). The minimum voltage across the coil is therefore just under 1.1V ...
VD = ( Rs + R1 ) / R1 (Where VD is the voltage division ratio, Rs is the external resistance and R1 is the coil's resistance)
VD = ( 1k + 100 ) / 100 = 11
Coil voltage = supply voltage / VD = 12 / 11 = 1.0909V
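The voltage divider above is trivial to verify. A sketch in Python, using the 12V supply, 1k external resistance and 100Ω coil resistance from the text:

```python
V_supply = 12.0  # volts
Rs = 1e3         # external resistance (ohms)
R1 = 100.0       # coil resistance (ohms)

VD = (Rs + R1) / R1     # voltage division ratio = 11
V_coil = V_supply / VD  # ~1.09 V minimum across the coil
```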
Now, let's see what the gyrator does, and an ideal opamp will be assumed for the time being, but using the same resistances as discussed above. When the input signal is applied, C1 is discharged. We know from my 'first rule of opamps' (see Designing With Opamps) that an opamp will try to make both inputs the same voltage. Since C1 is discharged, it acts like a short circuit (for an instant in time), so the opamp's output is the same as its non-inverting input, and the opamp and surrounding parts appear to be an open circuit. See the drawing below so that you can follow the logic. Initially we will ignore Rd (a damping resistor) - more about that soon. The gyrator inductor has a value of 10H. The 'ideal' opamp has no limitations on its supply or output voltage, and has infinite open loop gain and bandwidth. Unfortunately, it's a component restricted to (some) simulators, and it doesn't exist in real life.
On the graph, the switch is closed at 10 milliseconds and the voltage across R1 will be close to the full 12V - it's actually 11.88V because Rs and R1 create a voltage divider via the capacitor (C1). As time passes the capacitor charges via R1. As it does so, the voltage at the non-inverting input starts to fall, and so too does the opamp's output to ensure that both inputs are the same voltage (my first rule of opamps). Meanwhile, the current rises as shown in the green trace. After a period of time set by Rs (source resistance), R1 and C1, the voltage at the non-inverting input will be back to zero (or close to it), and so will the output. Current flow is therefore from the external DC source, through Rs, through R2, and finally flows into the opamp's output (opamps can supply (source) and sink current). R2 is equivalent to the coil's winding resistance, so is in series with Rs and limits the maximum current to 10.09mA.

Things get really interesting when the switch is opened, and it is this behaviour that absolutely proves that an ideal gyrator is exactly equivalent to an inductor! The switch is opened at 20ms, and the current has reached 7.23mA (we are still ignoring the current in Rd). C1 is partially charged, and when the switch opens there is no longer any input current. The opamp's inputs are momentarily at different voltages. Since the opamp will always try to make both inputs the same voltage, the only way that can happen is for the output to swing negative by over 70V. Rd (the damping resistor) is now in effect, as things tend to get silly without it.

At the time the switch is opened, C1 is charged to 723mV, which means that there is also 723mV across R2. The current in R2 is therefore 7.23mA. The opamp must now try to send 7.23mA back through R2, then Rs and Rd (these last two being effectively in parallel with R1), a total of 9.09k. A quick application of Ohm's law tells us that the gyrator's output voltage must now be -71.64V (7.23mA through 9.09k). If Rd is not present, the voltage will attempt to reach 723V because there is nowhere for the opamp's output current to go except to discharge C1 via R1 (7.23mA and 100k = 723V), and that would be silly.
Unfortunately, ideal opamps don't exist, so the peak negative voltage will be clamped to the opamp's negative supply rail (typically -15V). It will still try to do the same as the ideal opamp, but it's a real component with real limitations. It should now be obvious that an ideal gyrator can mimic every aspect of an inductor. As long as the input is within the capabilities of a real-world opamp, the behaviour when DC is applied to the input is still the same as a real inductor - only the flyback pulse can't be duplicated if it tries to exceed the supply voltage.
So as you can see, if the ideal opamp gyrator mirrors an equivalent inductor perfectly with a DC input, it follows that it will also mimic an inductor with AC inputs, as demonstrated in all the example circuits above. Once we establish and understand the limitations of a real opamp, it's obvious that the gyrator will behave like an inductor in all respects. As is also obvious, this only happens if the applied signal does not push the opamp outside its limits (slew rate, output current and supply voltages).

The transistor version has limitations from the outset, but it will still attempt to mimic an inductor to the best of its abilities. With audio frequency signals, it doesn't do a bad job for such a simple circuit.
Project 28 is a quasi-parametric equaliser, and shows a fairly adventurous use of tunable gyrators. It uses all the tricks that have been described here, plus a few more. It uses 3 gyrators to cover the range from 35-150Hz, 120-550Hz and 500-2,200Hz, offering either shelving or peaking for the lowest frequency range.
Above is the bass section of the equaliser. When the switch is closed (shorting out C2), the equaliser acts in 'shelving' mode, the same as a normal tone control, but with variable frequency. When the switch is open, C2 and the gyrator operate as a series tuned circuit, providing a 'peaking' response with a peak or notch that can be tuned. The frequency range (peaking mode) as shown is from 30Hz to 140Hz. The graph below is very busy - lots of different traces - but it gives you an idea of what can be achieved. You can also see that the last 25% of the pot's travel covers a wide frequency range, and it is difficult to get a nice linear response from the frequency pot.
Each trace on the graph is with the Boost-Cut control at 0% and 100% (maximum cut and boost), and the frequencies are shown with settings of 0%, 25%, 50%, 75% and 100% on the frequency pot. With the Cut-Boost pot centred, the response is flat. This is an extremely flexible circuit, but it wins no prizes for consistent Q, which varies with all variable L-C tuned circuits (state-variable filters can maintain constant Q). However, it is very usable - I have a full version of Project 28 in my workshop as part of my audio test and monitoring system.
Any or all of the frequency determining parts can be changed to give different results. VR1 is normally duplicated for as many EQ channels as desired, and a great many graphic equalisers (even down to 1/3 octave) were built using gyrators as part of the tuning circuit. Before gyrators, ferrite pot-core inductors were often used - a vastly more expensive option, especially since there could be up to 31 of them for a mono 1/3 octave graphic.

The ability to tune gyrators over a fairly wide range makes them far more suitable than coils for tone controls and other forms of equalisation. Although the opamps used will always contribute some noise, they will usually be quiet enough for the vast majority of applications. Compared to the cost, complexity and PCB real estate required by alternatives such as state variable filters, gyrator based filters can be almost as flexible, but lack the ability to change frequency without affecting Q. In reality, this is not usually a problem for filters used as 'tone controls' to modify the response to suit the listener. Correcting for room and/or loudspeaker anomalies only applies within limits - speaker correction is easy enough, but 'room EQ' is largely a myth. Room effects are caused by time delays, and you cannot correct time with amplitude!
A gyrator in its simplest form used to be a common line termination for so-called POTS (plain old telephone system/service) analogue phones. Any phone is required to pass the DC from the phone line, but present a specified impedance back to the exchange (aka 'central office'). During line testing and other activities, it is necessary to use a circuit that will draw the required current from the phone line, but not interfere with the termination impedance. In most modern phones, the gyrator forms part of the IC that controls the telephone, and is not a separate entity.

Early telephones used an inductor (or more correctly a 'hybrid coil', which is a tapped inductor). As phones became electronic the inductor was a needless expense, because inductors have to be quite large and expensive to handle the DC through them. A standard phone line from the exchange can deliver in the order of 50mA to a shorted line, or around 20-25mA in normal use at the end of the cable from the exchange to a house. The telephone itself should have about 5-10V DC across it when in use.

At one stage of my life I designed specialised telecommunications equipment, and particularly equipment that used standard phone lines. In order to be able to test (and take detailed measurements), I built a number of phone line termination units which mimicked a telephone, but did not have any microphones or ear-pieces. These (and the terminations in some telephone systems) used the simplest form of gyrator possible.
Essentially, the circuit is just a high gain transistor with its input bypassed with a capacitor. It can only react to DC and appears to be an open circuit to AC (in this case the speech signal). Does this qualify as a gyrator? Perhaps not, but it is a simulated inductor, and behaves like a real inductor. It passes DC in a controlled manner, yet is virtually open circuit to AC above 100Hz or so. You might well ask "why not just use a resistor?" You can, but it will present the wrong impedance for AC back to the exchange, and that can cause echoes and other interference on the speech circuit. A resistor also cannot compensate for different line resistance. If we assume that the full 50mA is available from the exchange then a 200Ω resistor will work, but that is completely wrong for the speech circuit and will be far too low if there is significant line resistance.

By using a specially designed gyrator, the line will appear to have the right resistance for the DC provided through the phone line, but will not affect the AC impedance, so that can be handled properly by the hybrid (if you really want to know, see 2-4 Wire Converters / Hybrids). The circuit below will draw about 38mA from a 1,000Ω feed system (zero length phone line), and about 20mA if there's an additional 1,000Ω of line resistance. The terms 'Tip' and 'Ring' come from standard phone plugs, which were designed for and used in early manual telephone exchanges.

The impedance of the circuit is over 40k at all frequencies above 100Hz, and is over 50k for all frequencies above the phone system lower limit of 300Hz. The zener diode ensures that the maximum current is drawn immediately; if it's not there the circuit will fail to seize the phone line fast enough for the exchange equipment to recognise it. During normal use (about 20ms after the circuit connects to the phone line), the zener does not conduct. The effective DC resistance to the phone line is about 280Ω with a 48V supply via a 1kΩ DC feed network.

In all respects, the behaviour of the circuit is the same as a high value inductor. It has a low resistance to DC, yet has a high impedance for AC - exactly the conditions we expect from an inductive circuit. It is a very simple gyrator, but it satisfies the criteria to qualify. Attempting to use it as part of a tuned circuit is not recommended because it won't work - not because it doesn't appear inductive, but because the simulated inductance depends on the gain of the transistor pair and is therefore unpredictable. This doesn't affect its operation as a telephone line DC terminator.

Note that all phone systems have a designated impedance that applies to audio signals, and that is not shown in the above drawing. The impedance is used at both ends of the phone line so both ends are properly matched. Further discussion of this is outside the scope of this article. However, it's still worth noting a few points about the circuit shown. The diode bridge at the input is required because telephone equipment should not be polarity sensitive, and must be able to function if +ve and -ve are reversed.

R1 and R2 deliver a very small current to the base of Q1, which in turn drives Q2, which draws current from the line. AC (signal) at the base is removed above a few hertz by C1, so the transistors affect only the DC, not the signal. Finally, C2 and C3 are used to connect to external test equipment with floating (not earthed) balanced inputs. Telephone systems are always balanced, because they use long, unshielded twisted pairs that would otherwise pick up a lot of noise.
There are two other types of gyrator, classified as a 'GIC' (generalised impedance converter) and an FDNR ('frequency dependent negative resistance', or 'functionally dependent negative resistance'). These are usually quite specialised and are unlikely to be found in many circuits that hobbyists will come across. Some are so specialised that they are intended for inclusion in CMOS integrated circuits and aren't normally found as discrete circuits.

Potentially of some interest is the negative impedance converter, which can be built using one or more opamps. I don't know where anyone would use one, but the concept is quite fascinating. It is usually easy enough to simulate some of these circuits using 'ideal' opamps, but realisation with real opamps is often problematical to the extent that the circuits simply won't work properly (if at all).

Some of these circuits can be classified as theoretical, in that they might be coaxed into functioning in theory or even perhaps in a simulator, but should you build one it will refuse to work. Others (such as the one shown below) do work, and this particular circuit works extremely well - so well that it simulates an ideal inductor, with zero winding resistance. Of course, in deference to the 'no free lunch' principles that define physics, this comes at a cost.

Firstly, it is far more complex than those we've looked at before, and its dynamic range can be severely limited if the source impedance is too low. As shown, the maximum gain is at low frequencies, and both opamps have to operate with an internal gain of 26dB (x 20) when driven from a 1k source impedance. Increasing the source impedance to 10k means that U1 and U2 operate with a gain of 2, but the dynamic range is still more limited than you can get with a simple gyrator as described earlier. The interesting part of the circuit is based around U2, which is a negative impedance converter (NIC).

The NIC effectively removes the simulated 'winding resistance' that affected the simpler gyrators, but at the cost of high internal gain and limited input level before distortion. In normal use, this is probably not a serious limitation, but you do need to be aware of it if you wish to experiment. If any opamp clips, circuit behaviour cannot be measured with any accuracy because the opamp is outside its linear range.

The inductance of the circuit is determined by the resistors coloured yellow. The others (R4, 5, 6 & 7) only need to be all of the same value (such as 10k, or perhaps 22k), and changing them does not affect the inductance. The only reason I used a different value for these was to make it obvious that they are not part of the inductance calculation. You can experiment with differing values for R4 - R7, but the results will probably not be useful. With the values shown the inductance is 1 Henry, and is determined by ...
R = R1 = R2 = R3
L = R² × C1
As you can see from the circuit and inductance formula, very high inductance can be achieved with small capacitors and relatively low value resistors. To get a 10H inductor, you can use a 100nF cap and 10k resistors. If resistor values (R1, R2 & R3) are increased, so is the gain required by the opamps for a given source impedance (Rs). You can reduce the resistance and use a bigger capacitor, but then the opamps (in particular U2) may have difficulty providing enough current. There are many interactions in this circuit, and it's not sensible to try to explain them all in a short article.
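The worked example above is easy to confirm numerically. Here's a minimal sketch (the function name is mine, not from the article):

```python
def simulated_inductance(r: float, c1: float) -> float:
    """Simulated inductance in henries with equal resistors R = R1 = R2 = R3."""
    return r ** 2 * c1  # L = R^2 * C1

# The 10 H example from the text: 10k resistors and a 100 nF capacitor.
print(simulated_inductance(10e3, 100e-9))  # ~10 H
```

The square law is why modest values give such large inductances: doubling R quadruples L.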
I haven't shown the response graph simply because it's very similar to those shown earlier, except that the equivalent inductor has zero 'winding resistance'. So, where those described earlier had the equivalent of 100Ω or 560Ω, this one has none at all. This changes the response at the extreme low end, because the Figure 11.1 circuit can provide zero resistance at DC. While this might seem to be very useful, in most cases there is no real benefit.

If the capacitor in Figure 11.1 is replaced by a voltage source (AC or DC), the output will provide a constant current into any load that's within the limits of the opamps. While it is an excellent current source, it's also far more complex than necessary and has very limited application. However, the fact that it can be used this way is one of the reasons that this class of circuit is referred to as a 'generalised impedance converter' (GIC). It simply converts impedances from one type to another, and/or inverts the reactance type, making a capacitor behave like an inductor and vice versa. In the case of a voltage source in place of C1, it converts a voltage to a current, or can convert a current into a voltage (a single resistor works much better for the latter task though). All of this falls into the category of 'fascinating but not very useful'.

I do not intend to cover some of the more esoteric gyrator designs because it is extremely unlikely that you will ever come across them. Some are quite interesting, but simple explanations are not possible because of the circuit complexity and interactions. Even the one shown in Figure 11.1 is far more complex than you are ever likely to need, although it does perform well. It's better in some ways and worse in others compared to the simple circuits described earlier. However, there are no applications that I can think of that actually require an 'ideal' inductor.
I know I said that I wasn't going to talk about frequency dependent negative resistance (FDNR) circuits, but a simple example is certainly worth including so you recognise it if ever you come across one. The circuit shown below was adapted from a paper that seems to be very common on the Net, and the exact same circuit appears in several different papers by different people [ 10 ]. It's normally shown as a three stage filter, but I have reduced it to a single stage for simplicity. The frequency can be changed simply by varying R4 and leaving all other values the same.

Although the circuit doesn't look overly complicated, I have no intention of even trying to explain the maths behind how it is designed. This is a complex circuit, with very convoluted interactions between the two opamps that are impossible to analyse in simple terms. As you can see from the near equivalent circuit using inductors, a single FDNR stage manages to emulate two floating inductors and a capacitor. The two circuits shown are only approximately equivalent, because I used standard values rather than the decidedly non-standard values that would normally be required - especially for the LC filter. The approximation is mine - if you follow through the maths behind the FDNR you will discover that exact copies of L/C filters can be created.

The response of the two filters is shown above. As noted, they are not identical because I used standard value parts, but when implemented according to the rather daunting maths and using high precision (very non-standard) parts, the two will be exactly equivalent. The filter response for both shows Chebyshev behaviour, with some ripple in the pass band just before rolloff. Changing R4 from 560Ω to 470Ω makes the -3dB frequency the same for both, but I left it as is so you can see the separate traces.

The FDNR opamps will operate with some gain (especially U1), and as always this may cause the opamp to overload and ruin the filter characteristics. Input signal levels must be kept low enough to ensure that there is never any distortion, as this indicates opamp overload. Both circuits provide a rolloff of about 20dB/octave when measured between 20kHz and 40kHz (the -3dB frequency is approximately 13kHz). While the circuit is impressive and a potential source of wonder, the same results can be obtained from much more conventional active filter circuits that are easier to design, build and troubleshoot.

There are a couple of other 'GIC' (generalised impedance converter) circuits shown in Section 13. The FDNR is a 'specialised' case of a GIC, and both can use similar circuit arrangements.
The following circuits are variable capacitors, and these are not technically gyrators, although they do follow the same basic form. They are capacitance multipliers, and can be used in place of a fixed capacitor to obtain variable frequency tuning. As shown in Figure 12.1, capacitance is variable between 11nF and 111nF, although it can have a much greater range if desired. For audio (such as equalisers and the like), the range shown will be more than enough.

Note: A word of warning is required for the single opamp circuit. Under some conditions (which, believe it or not, can be intermittent), the circuit may decide to 'latch-up' to nearly the full voltage of one polarity or the other. Although I've used these circuits a number of times with no issues, the latest attempt (part of a stereo equaliser circuit) suffered from this problem. Mostly, they behaved themselves, but every so often one or both channels would latch to a supply rail. Turning the power off and back on again almost always fixed the problem, but I've never been able to see it happen on the workbench. It seems that the circuit knows when test equipment is nearby, and refuses to misbehave. The reason so far remains a mystery, but note that the circuit does have positive feedback that's DC coupled!
C2 is a recent addition, and it removes the DC feedback component and should prevent latch-up. The polarity is unimportant, as there's very little voltage across C2. Operation is otherwise unchanged. Please note that I've not been able to absolutely verify that this will work, because I've never been able to get the circuit to latch-up on the workbench.
The only difference between this circuit and a gyrator is that the positions of the cap and a resistor have been reversed. Go back and have a look at one of the other circuits so you can see this for yourself. Capacitance is determined by C1, R1 + VR1 and R2, and the formula is ...
C = C1 × (( R1 + VR1 ) / R2 + 1 )
C = 1n × (( 1k + 10k ) / 100 + 1 ) = 111nF (VR1 at maximum resistance), or ...
C = 1n × ( 1k / 100 + 1 ) = 11nF (VR1 at minimum resistance)
I won't bother showing the response, because it's just a first order low pass filter that rolls off at 6dB/octave after cutoff. The only difference between this and any other capacitor is that this one can be used to create a variable cap far greater in value than anything you might be able to buy. Like the gyrators on which the circuit is based, it has some series resistance, and that limits the ultimate attenuation. For example, with the values shown, the ultimate attenuation at high frequencies is about 40dB, due to the 100Ω resistor.
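If you want to check the arithmetic for both extremes of VR1, a short sketch does it (the function name and the `r_top` parameter are mine):

```python
def multiplied_capacitance(c1: float, r_top: float, r2: float) -> float:
    """C = C1 * ((R1 + VR1) / R2 + 1), where r_top = R1 + VR1 in ohms."""
    return c1 * (r_top / r2 + 1)

# R1 = 1k, R2 = 100 ohms, C1 = 1 nF, VR1 = 10k pot:
print(multiplied_capacitance(1e-9, 1e3 + 10e3, 100))  # ~111 nF (VR1 at maximum)
print(multiplied_capacitance(1e-9, 1e3, 100))         # ~11 nF (VR1 at minimum)
```

Note that the multiplication ratio is set purely by the resistor ratio, which is why a small, high quality film cap can stand in for a large variable one.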
In case you were wondering, no, you can't use it as part of a DC filter in a power supply. The current is limited by the opamp, and the voltage is limited to the opamp supply rails. If you need to reduce ripple from a power supply, see Project 15. This capacitance multiplier is intended for equalisation circuits or other places where you might be able to use a somewhat less than perfect, high value variable capacitor (that isn't the size of a small car).

There's another version of a capacitance multiplier that uses two opamps, and although it has some advantages over the one shown here, it has some serious limitations as well. Much like the two opamp gyrator, it can require high opamp gain and may unexpectedly overload (distort) as a result. The circuit diagram is shown below.

The gain of U2 is variable from zero (VR1 wiper at position 'a') up to a maximum of 10 (wiper at position 'b'). With a maximum gain of 10, U2 may distort prematurely if the voltage at U1's non-inverting input is greater than around 500mV at any frequency. Capacitance is determined by ...
C = C1 × ( G + 1 ) where G is the gain set by VR1 ...
C = 1nF × ( 0 + 1 ) = 1nF with VR1 at position 'a', and ...
C = 1nF × ( 10 + 1 ) = 11nF with VR1 at position 'b'
Unlike the two opamp gyrator (shown next), the two opamp capacitance multiplier has no series resistance so is theoretically 'lossless'. However, the internal circuit gain can be such that opamp distortion causes signal distortion, but it may not be immediately apparent and the cause may seem somewhat mysterious. Given that issue and the fact that it needs two opamps instead of one, the Figure 12.1 circuit is preferred, even though it requires an extra resistor (a small price to pay).
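The gain-based formula is equally easy to verify (a sketch; the helper name is mine):

```python
def two_opamp_capacitance(c1: float, gain: float) -> float:
    """Two opamp capacitance multiplier: C = C1 * (G + 1), G set by VR1."""
    return c1 * (gain + 1)

print(two_opamp_capacitance(1e-9, 0))   # 1 nF, VR1 wiper at 'a'
print(two_opamp_capacitance(1e-9, 10))  # 11 nF, VR1 wiper at 'b'
```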
Either of these circuits could be used in the same circuit arrangement as shown in Figure 9.1, providing a variable frequency shelving treble tone control. They can also be used as part of a parallel tuned circuit, with the inductive element provided by a gyrator. Because the circuit is earth (ground) referenced, it cannot be used as part of a series tuned circuit, and in that respect it has the same limitations as a standard gyrator.
Another topology needs to be looked at, because it isolates the parallel resistor and improves the Q of the gyrator. By adding a buffer opamp, the 'damping' resistor to ground is isolated from the input, and it no longer affects a parallel or series tuned circuit. However, this does not change the reality by as much as you'd imagine, due to the very nature of tuned circuits. Nor does it eliminate the series resistance - as with real inductors, the 'winding resistance' cannot be eliminated with this circuit. Unlike the conventional gyrator, this circuit cannot be used to obtain a series resonant (notch) filter - it only works in parallel resonance mode.

The overall Q is determined by the feed resistance, and if the inductor (gyrator) and capacitor aren't both changed, the circuit's Q changes. The two-opamp gyrator isolates the parallel (damping) resistance, and this can be useful if a high Q circuit is needed. The alternative output ('Out2') may be useful if there's a need to drive low impedance circuits. It doesn't affect insertion loss (see below), but it does provide better rejection of very low frequencies because R2 can't cause the response to level out as frequency falls (that's not usually a problem, but the low output impedance is useful).

The relationship between the capacitive and inductive reactance and the feed impedance all work together to determine the circuit's Q and insertion loss (which reduces the peak amplitude). If the capacitance is changed by itself, Q is affected, and the same happens when the inductance is changed. So, as it turns out, eliminating the effect of the damping resistor isn't as useful as it seems at first. It's still worth including though, because it's definitely beneficial if a higher than 'normal' Q is necessary. Inductance (2.2H for the values shown) is calculated in the same way as for a 'normal' gyrator ...
L = C1 × R1 × R2
XL = 2π × fo × L
Q ≈ Rs / XL
Q is approximated by Rs (the source resistor) divided by the reactance of Cp or L1 (the gyrator). At resonance, the reactances are equal but opposite. For example, if the capacitive and inductive reactance are both 3.18k (the case for the circuit shown), the Q of the tuned circuit is 3.145 (Rs / XC). However, real life isn't perfect, so the Q is always somewhat less than the calculated value. The peak amplitude is -0.83dB. The frequency is 228.7Hz, set by Cp, C1, R1 and R2. Increasing the value of Rs increases Q, but also increases the insertion loss. For example, if Rs is changed to 22k, the actual Q increases to 5.73, and the insertion loss increases to 1.73dB.
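The three formulas chain together neatly. A quick sketch (the helper names are mine, the 22n/10k/10k value set is just one combination that yields 2.2H, and Rs = 10k is my inference from the quoted Q and reactance):

```python
import math

def gyrator_l(c1: float, r1: float, r2: float) -> float:
    """Gyrator inductance: L = C1 * R1 * R2."""
    return c1 * r1 * r2

def reactance_l(f: float, l: float) -> float:
    """Inductive reactance: XL = 2*pi*f*L."""
    return 2 * math.pi * f * l

l = gyrator_l(22e-9, 10e3, 10e3)  # 2.2 H (one value set giving the text's L)
xl = reactance_l(228.7, l)        # ~3.16k (the text rounds this to 3.18k)
q = 10e3 / xl                     # Q ~= Rs / XL, assumed Rs = 10k
print(round(q, 2))                # ~3.16, close to the quoted 3.145
```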
As with any other LC resonant circuit, the impedance is inductive below resonance (so impedance rises with increasing frequency), resistive at resonance, and capacitive above resonance (impedance decreases with increasing frequency). The Q of the circuit is of no consequence at resonance, only above and below. A high Q circuit has an initial rolloff that's faster than a lower Q circuit, but both eventually end up at 6dB/octave (for a single tuned circuit). The ultimate attenuation of low frequencies is determined by the coil resistance (equivalent to R2).

This isn't an especially common variant, but it may turn out to be handy for some tasks. The elimination of the parallel resistor provides a useful increase of system Q. Note that the Figure 13.1 circuit can be used for both series and parallel tuning (notch or bandpass respectively).
There are two other 2-opamp gyrators, with the first one (Figure 13.2) being something of a mystery. I became aware of it from a circuit sent to me by a friend, and it has some significant advantages over the more common version shown above. In particular, the output impedance is low (nominally zero ohms), and it can be made to have a very high Q. There is no internal gain which could cause premature overload (clipping), but the tuning formula is dodgy. I could find no 'official' formula (and no information other than in the referenced document [ 12 ]), but I was able to work out a formula with a 'fudge factor' (aka a 'constant') that seems to be accurate ... provided R3 and R4 are 10k. If they are a different value, you need to make a correction.

With the values shown for R3 and R4 (10k), the (approximate) inductance is determined by ...
L ≈ Rt × Ct × ( ½R4 )
L ≈ 10k × 22n × 5k ≈ 1.1H
The value of 5k (½R4) is the 'constant' that I determined empirically. As noted above, it only works when R3 and R4 are 10k, so you'll have to re-calculate it if you decide to change these values. It turns out that the constant is half the value of R3 and R4, and they must be the same value. While it may appear that R3 and R4 are effectively in parallel, this isn't the case. However, using a constant that's half the value of R3 and R4 works to determine the inductance. There's no requirement for R1 and R2 to be the same value as R3 and R4, but there's also no reason to make them different. The formula shown has been tested against a large number of filters in the referenced document (the published schematic uses 40 individual filters!).
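The empirical formula is easy to check numerically (a sketch; the function name is mine, and the formula is only claimed to hold with R3 = R4 = 10k as stated above):

```python
def fig132_inductance(rt: float, ct: float, r4: float) -> float:
    """Empirical Figure 13.2 inductance: L ~= Rt * Ct * (R4 / 2).
    Only verified for R3 = R4 = 10k, per the text."""
    return rt * ct * (r4 / 2)

print(fig132_inductance(10e3, 22e-9, 10e3))  # ~1.1 H
```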
The ratio of R3:R4 should be maintained at 1:1. If R3 is reduced in value (relative to R4), the Q is increased, but if you go just a little too far the circuit will oscillate. R3 does not change the frequency, but R4 does. I suggest that you use 10k for each if you wish to use the circuit for anything.

Note that the Figure 13.2 circuit is a bandpass type when Cp is added, and it cannot be rearranged to form a band-stop (notch) filter. Without Cp it performs like an inductor, having zero output at DC and an impedance that rises at 6dB/octave with frequency (as expected). With Cp included as shown, the resonant frequency is 1,023Hz ...
f = 1 / ( 2π × √ ( L × C ))
f = 1 / ( 2π × √ ( 1.1 × 22n )) = 1.023 kHz
Or using a single formula ...
f = 1 / ( 2π × √ ( Rt × Ct × 5k × Cp ))
f = 1 / ( 2π × √ ( 10k × 22n × 5k × 22n )) = 1.023 kHz
As a bandpass filter, it uses more parts than an MFB (multiple feedback) filter, but it's far more versatile. The frequency can be changed by altering Rt with a pot, and Q is independently adjustable by varying Rs. While it uses twice as many resistors and opamps, that's more than compensated for by the high (and easily adjustable) Q available, and the ease of wide-range tuning.

With any bandpass filter, determining the Q is generally a requirement as well. It's not hard, but providing a single formula isn't likely to be helpful. The first task is to determine either the capacitive reactance (XC) or inductive reactance (XL). At resonance, they are equal, so I'll use XC ...
XC = 1 / ( 2π × f × C )
XC = 1 / ( 2π × 1,023 × 22n ) = 7.071 kΩ or ...
XL = 2π × f × L
XL = 2π × 1,023 × 1.1 = 7.070 kΩ
The small difference between the two impedance calculations is simply the result of not using all decimal places, and is not an error. The Q is simply (and approximately) the series resistance (Rs) divided by XC or XL, which works out to be about 14.14. The calculation will almost always be a little different from the measured value, but bear in mind that actually taking a measurement with any degree of accuracy is very difficult with high-Q filters. I measured the Q (using the simulator) to be 14.4, so the error is small, and for almost all applications it's insignificant. A project version of this circuit is shown in Project 218.
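The whole chain - inductance, resonance, reactance and Q - can be checked in a few lines. A sketch (Rs = 100k is my inference from the quoted Q of about 14.14; it is not stated explicitly above):

```python
import math

l = 10e3 * 22e-9 * 5e3                     # ~1.1 H, empirical Figure 13.2 formula
cp = 22e-9                                 # parallel tuning capacitor
f = 1 / (2 * math.pi * math.sqrt(l * cp))  # resonant frequency
xl = 2 * math.pi * f * l                   # inductive reactance at resonance
q = 100e3 / xl                             # Q ~= Rs / XL, with assumed Rs = 100k
print(round(f), round(q, 2))               # 1023 14.14
```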
Using JFET input opamps (e.g. TL072), you can get an astonishingly high Q. If Rs is made 1MΩ, the Q at ~1kHz is over 150 - that's a -3dB bandwidth of only 6.7Hz. It's unlikely that you'll ever need that much, but it's there if you want it (depending on component thermal stability). The only capacitors that might be stable enough are polystyrene, but holding a -3dB bandwidth of less than ±3.5Hz at 1kHz is still a big ask.

This particular gyrator verges on being 'magic'. It can be tuned over a two octave range with less than 2dB variation in gain, but the Q changes. This is common with all gyrators, because there's an inevitable compromise between XL and XC that is difficult to balance out. The fact that it can have a Q far greater than 'conventional' gyrators makes it ideal for a sharp filter to all but eliminate distortion from an audio oscillator. However, you will be limited to spot frequencies, because the Q can be so high that only a few Hz difference between the oscillator and filter can reduce the output level dramatically. It can also be used to isolate individual harmonics, something I have tested, and it does a fine job.

A recent search shows that the original circuits seem to have vanished (the designer died in 2011), and while the website continues [ 12 ], the schematics are buried and difficult to find. The original source of the gyrator itself remains a mystery. The first stage is a NIC (negative impedance converter) followed by an integrator, but it's the feedback that creates the 'inductor'.
Another variant is the 'classic' GIC (aka FDNR) filter, shown next. The advantage over the Figure 13.2 circuit is that it uses fewer resistors, but it doesn't have a low output impedance. It's capable of very high Q (over 100 is easy to achieve), and the Q is set by Rs - the series input resistance. Unfortunately, getting a high-Q bandpass filter (achieved using Cp) requires that Rs is a high value, meaning that the output impedance is also very high, and it needs to be buffered with another (preferably JFET input) opamp to be useful.

I've shown the same circuit twice, with the upper drawing showing the standard way the circuit is drawn, and the lower using a more 'conventional' layout that's easier to follow. No 'fudge factor' is needed here, and the inductance of the gyrator is determined by the formula ...
L = Rt² × Ct
With 10k for Rt and 10nF for Ct, the inductance is 1 Henry. Like the Figure 13.2 circuit, this only holds true if all other resistors are maintained at 10k. Rs (the input series resistor) determines the Q. For example, if Cp and Ct are both 10nF, and Rs and Rt are both 10k, the resonant frequency is 1.59kHz, and the Q is 1. That means the bandwidth (-3dB) is the same as the resonant frequency - in this case both are 1.59kHz. If Rs is increased to 100k, the Q is raised to 10, but (hopefully obviously) the load impedance needs to be many times that (> 1MΩ) or the Q and output levels are reduced.
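These numbers all check out with the equal-resistor GIC inductance working out to 1H here (a sketch; the variable names are mine):

```python
import math

rt, ct, cp = 10e3, 10e-9, 10e-9            # Rt, Ct and Cp from the example
l = rt ** 2 * ct                           # 1 H with these values
f = 1 / (2 * math.pi * math.sqrt(l * cp))  # ~1.59 kHz resonance
xc = 1 / (2 * math.pi * f * cp)            # reactance at resonance (~10k)
print(round(f, 1))                         # 1591.5
print(round(10e3 / xc, 2))                 # 1.0  (Q with Rs = 10k)
print(round(100e3 / xc, 1))                # 10.0 (Q with Rs = 100k)
```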
Like most FDNR filters, the Figure 13.3 circuit has internal gain. This can cause problems regardless of the source resistance (Rs), as the internal gain can be up to ×10 (20dB). Overload is likely if the input signal level is over 1V peak (700mV RMS) or so. This is most likely when the circuit is used as a tuned circuit (including Cp). In general, Rs should not be less than 5k if 10k resistors are used elsewhere.
When used as bandpass filters as shown in the three circuits above, the resonant frequency is determined by the parallel capacitance (Cp) and the effective inductance. The formula is ...
f = 1 / ( 2π × √ ( C × L ))
Of these filters, the Figure 13.2 version has distinct advantages. There is no possibility of internal gain causing premature overload, and it has a low output impedance. This removes any requirement for a buffer to drive external circuitry (including summing amplifiers). The lack of a sensible tuning formula is no great cause for alarm, provided you use 10k resistors. Otherwise, you can work out a different 'fudge factor' to suit a value of your choosing.
+ + +One of the most common uses for gyrators (in audio) is the graphic equaliser. By using a series resonant circuit, the impedance is minimum at resonance, and this is used to modify the gain of an opamp configured in the same way as a Baxandall tone control, but with anything from 5 to 31 sections. Early graphic EQ circuits used 'real' inductors, which meant they were very expensive, and subject to radiated magnetic fields from nearby transformers. In the interests of brevity, I've only shown a 1 octave band equaliser, using the 'universal' frequencies that are used by almost every manufacturer.
+ +Graphic EQs vary, and can be 2 octave (5 EQ sections), 1 octave (10 sections), ½ octave (20 sections) or 1/3 octave (31 sections). However, if you build your own, you only need to include the filters you need, depending on the frequencies you wish to control. They don't have to be contiguous, nor do they need to align with the 'normal' frequencies used. The circuit must be driven by a low impedance source, such as an opamp unity gain buffer (U1). The values indicated by '*' are repeated for each filter.
Even a 1 octave graphic EQ is not for the faint-of-heart, especially for a one-off. Of course they can be purchased cheaply enough, but finding spare parts for an older unit can be challenging. If you buy one new, it will almost certainly use SMD parts, making repairs very difficult should it break down (and getting replacement pots will likely be next to impossible). The table below shows the values that are used with the Figure 14.1 equaliser. R1, R2 and C1 determine the inductance of the gyrators. I have made adjustments to the original design to get closer to the required frequencies (some had quite significant errors, up to 10%).
| fo Nominal | C1 * | C2 * | R1 * | R2 * | Inductance | fo Calculated |
|-----------|------|------|------|------|------------|---------------|
| 32 | 120 nF | 4.7 µF | 75 kΩ | 560 Ω | 5.04 H | 32.7 Hz |
| 63 | 56 nF | 3.3 µF | 68 kΩ | 510 Ω | 1.94 H | 62.9 Hz |
| 125 | 33 nF | 1.5 µF | 62 kΩ | 510 Ω | 1.14 H | 121.7 Hz |
| 250 | 15 nF | 820 nF | 68 kΩ | 470 Ω | 479 mH | 252.4 Hz |
| 500 | 8.2 nF | 390 nF | 68 kΩ | 470 Ω | 262 mH | 498 Hz |
| 1k | 3.9 nF | 220 nF | 62 kΩ | 470 Ω | 113 mH | 1.0 kHz |
| 2k | 2.2 nF | 100 nF | 68 kΩ | 470 Ω | 70 mH | 1.9 kHz |
| 4k | 1 nF | 56 nF | 62 kΩ | 470 Ω | 29 mH | 3.9 kHz |
| 8k | 510 pF | 22 nF | 68 kΩ | 510 Ω | 18 mH | 8.0 kHz |
| 16k | 330 pF | 12 nF | 51 kΩ | 510 Ω | 8.6 mH | 15.7 kHz |
Because each frequency is double the one before, it figures that inductance and capacitance will halve for each successive band. There are errors in the values which will shift the frequencies slightly from the design point, but this is not an issue. The circuit (like all EQ stages) is intended for response manipulation, and is not intended as a precision filter. Some of the values are non-standard, which is always a problem when you have so many filters. Capacitors aren't available as standard with neatly doubled/halved values, so parallel combinations will be needed in a few places to get the right value. I leave this to the reader, especially since this is not a construction project, but is intended to demonstrate designs and ideas.
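The inductance column can be reproduced with a short script. The text notes that R1, R2 and C1 determine the inductance; taking L ≈ R1 × R2 × C1 (the usual approximation for this style of gyrator, and an assumption on my part, not stated explicitly in the article) matches the table, and resonance with C2 then follows the formula given earlier.

```python
from math import pi, sqrt

def gyrator_band(R1, R2, C1, C2):
    """Effective inductance and resonant frequency of one EQ band.
    L = R1 * R2 * C1 is the usual simple-gyrator approximation."""
    L = R1 * R2 * C1
    fo = 1.0 / (2.0 * pi * sqrt(L * C2))
    return L, fo

# 32Hz band from the table: R1 = 75k, R2 = 560, C1 = 120nF, C2 = 4.7uF
L, fo = gyrator_band(75e3, 560, 120e-9, 4.7e-6)
print(f"L = {L:.2f} H, fo = {fo:.1f} Hz")   # 5.04 H, 32.7 Hz
```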
With all values as given in the schematic and table, maximum boost or cut would be quoted as ±10dB, although it measures about ±11dB. To get ±12dB, R2 and R3 (both 2.7k in the equaliser section) can be increased to 5.6k, but this will result in greater noise. The nominal Q is 1.57 for the filters shown, although it does vary a little in reality. Q isn't shown in the table, but a reasonable approximation is to use the formula ...
Q ≈ 2π × fo × L / R2     which can also be written as ...
Q ≈ XL / R2

e.g. Q ≈ 2π × 1kHz × 113mH / 470Ω ≈ 1.51
The formula shown differs from the one given in most reference material, but is closer to reality. A simulation shows that the actual Q is somewhat lower, and I measured it in the simulator as 1.48 to 1.55 for a couple of different frequencies. Being a rather tedious process, I didn't test all filters. Given the nature of any graphic equaliser, small variations in Q don't amount to much.
You always need to consider the noise gain in circuits such as this. With 10 × 10k pots, the noise gain of U12 is 16dB, even though the audio gain is unity when all pots are centred.
The gyrator is a cost-effective and convenient way to build a tuned circuit or to replace inductors in audio frequency circuits. It's not at all difficult to get very high Q filters, and it's very easy to obtain a Q of 4 as required for a 1/3 octave equaliser. Expecting a Q of more than 10 usually requires a dual-opamp version, but for audio applications it's not necessary and is almost always undesirable anyway. Very high Q filters are simply never needed for audio, but can be useful in other audio frequency applications such as analogue test and measurement systems.
NIC/GIC and FDNR gyrators offer advantages and disadvantages, but are unlikely to be encountered in any practical circuit that you might find. They are very capable, but are generally too complex for any DIY project, and will also be extremely difficult to debug should something go wrong. They have been included because they are interesting, but I don't expect to include an FDNR in one of my projects any time soon (i.e. never). GIC gyrators are much easier, and obtaining a Q of more than 20 is fairly straightforward.
A Q of only ten means that the bandwidth is 1/10th the centre frequency. At 1kHz, that means the signal is 3dB down at (roughly) 951Hz and 1,051Hz, a bandwidth of only 100Hz. To get a higher Q and steeper slopes at the frequency extremes, you can use two filters in series. In some cases you may only need a (close to) ideal inductor, but remember that most 'simple' gyrators are ground referenced, so they cannot be used in series with the signal, only in parallel. An FDNR gyrator can emulate series inductors (see Figure 11.1.1), but they are the most complex form of gyrator and are difficult to design.
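The -3dB points quoted above can be verified: the two frequencies are geometrically symmetric about fo and differ by fo/Q. A minimal sketch (the function name is mine):

```python
from math import sqrt

def minus3db_points(fo, Q):
    """-3dB frequencies of a bandpass filter with centre fo and quality
    factor Q.  f2 - f1 = fo / Q, and f1 * f2 = fo squared."""
    half_bw = fo / (2.0 * Q)
    centre = fo * sqrt(1.0 + 1.0 / (4.0 * Q * Q))
    return centre - half_bw, centre + half_bw

f1, f2 = minus3db_points(1000.0, 10.0)
print(round(f1), round(f2))   # 951 1051 - a bandwidth of 100Hz
```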
As noted in the introduction, I have avoided using complex formulae and other 'high level' maths, because in my experience it's rarely necessary and almost never gives a better understanding than proper examples, waveform traces and down-to-earth explanations. For those who want to play with the maths involved, there are plenty of sites on the Net that use this approach. None that I saw will provide the level of understanding that I've shown here, and for the most part are more likely to cause confusion. A purely theoretical examination of any circuit (and assuming ideal components) is usually not very useful, but that is the approach taken by many of those who offer explanations.

Some of the information might appear to be very comprehensive (for example [ 8 ]), but may be factually wrong in some areas. It is largely pointless unless you are involved in pure mathematics and are willing to accept a gyrator as a theoretical lossless component - which it is not! It can be a real 'component', and it will have losses and limitations. Additional circuitry that removes the losses only works within the boundaries of the opamps used, and may cause more problems than it solves.

There are many university papers that discuss the theory of gyrators, and some include circuits for practical demonstrations. One of these was the basis of Figure 18, but the potential pitfalls were not examined thoroughly enough for it to have been particularly useful as a demonstration. The idea was for students to discover the pitfalls for themselves, but in my view that expectation would often lead to exasperation and confusion because some of the limitations can be too subtle for the inexperienced to notice.

I hope that this article has been useful, and has provided a good insight into gyrator operation. It's always difficult to get the right balance between simplicity and complexity to arrive at something that provides good understanding without being overwhelming. Since all circuits shown in this article will work, I encourage those who want to know more to build and experiment.

Several references were used while compiling this article, combined with my own accumulated knowledge and the many simulations done during its production. Material herein, and some of my accumulated knowledge, is due to the following publications ...
Elliott Sound Products | Heatsinks And Amplifiers
The question posed above - "How much heatsink do I need for an amplifier?" is right up there with "How long is a piece of string?". There's no simple answer, and no simple way to work out the answer. The answer itself (to both questions) is "it depends". In fact, the answer depends on quite a few factors, and some may be imagined to be fairly complex. Although they can be simplified, there are quite a few things you need to consider.
Trying to determine how big a heatsink should be for any given amplifier seems to be something that most DIY people try to avoid. This is probably with good reason, because it's not especially easy to work out. We also need to look at various amplifier classes (e.g. Class-A, Class-B, Class-D, etc.), and each is unique in terms of the heatsink needed. It's pretty much a given that Class-A needs the most, and it's also the easiest to calculate. Class-B (or Class-AB) is somewhat trickier, and Class-D can be quite difficult when all characteristics are considered.

In this short article, you will only get some basic guidelines. There is a great deal more that you will need before you can make a complete and accurate calculation, and often physical testing can be the only real way to know for certain. If you haven't done so already, I recommend that you read the article Heatsinks - selection, transistor mounting and thermal transfer principles. This is a very comprehensive article, and should be considered essential reading.

There are some assumptions used here, the first being that the air temperature available to the heatsink is at 25°C, and that the maximum allowable average transistor die temperature should not exceed 85°C. Cooler is better, but that can get expensive. I've also assumed that music will be the source, and that it has some dynamic range, so even if the amp is driven to just below clipping the long term average output power will typically be no more than 10% of the full power available from the amp. That assumes a peak to average ratio of 10dB. You'll find that this is not an area that's well covered on the Net, and there's surprisingly little information available that tells you just how much heatsink you need for a given amplifier. The peak to average ratio is also known as crest factor. (I will mention in passing that the crest factor of a sinewave is 3dB (a ratio of 1.414:1), but it's generally irrelevant and is not a useful parameter.)

By far the biggest single problem is trying to determine how much power an amplifier will dissipate, based on the power delivered to the load. Ultimately, it depends on a great many factors, such as the amplifier's maximum power, how loud you will be listening, the type of programme material and the loudspeaker's impedance. There are no simple answers, but I will try to provide solutions that will be quite acceptable for most home listening. For professional audio (including large scale PA systems) hopefully the amp designers have already provided heatsinks that will handle the power, and almost all use at least one fan, often two or more. Forced air cooling requires testing to determine the effective thermal resistance.

It's very important to make this point ... There is no such thing as a heatsink that's too big. Using a heatsink that's bigger than necessary means that it's physically larger and more expensive, but an oversized heatsink will never cause an amplifier to fail.
Note: It's imperative that you are aware that this article discusses average output device dissipation only. Safe operating area of the output devices is not included, and is a completely separate part of the design process. For more information on this topic, see Transistor Safe Operating Area and Phase Angle Vs. Transistor Dissipation. Peak dissipation and average dissipation are separate design processes, and one does not predict the other.
This article is not meant to provide a single 'definitive' figure for the size of a heatsink. The guidelines here may over or under estimate the actual power that needs to be dissipated by the heatsink, and there is simply no way that a single figure can ever be used with any amplifier. The programme material, actual (vs. rated) speaker impedance, loudspeaker efficiency, use of compression or limiting and just how loud the sound needs to be are variables that cannot be predicted. Designing for absolute worst case will result in a heatsink that's larger and more expensive than necessary, and its capabilities may never be utilised. Designing for a (perhaps utopian) 'ideal' case will result in a heatsink that's too small. Like everything else in electronics, the heatsink will be a compromise.
An anecdote is appropriate here. A chap approached someone I know with the claim that the heatsink on an amp he had built was too large. He came to this conclusion because the transistors were very hot but the heatsink was almost cold. Therefore, by his reasoning, the heatsink was obviously too big because it didn't get hot enough. Reality was different of course. The problem wasn't that the heatsink was too big (there really is no such thing), but that the transistor mounting was abysmal and the thermal resistance between transistors and heatsink was much too high. This is a critical part of the assembly, and the lowest possible thermal resistance between case and heatsink is essential for maximum power handling.

The first thing that must be considered is the thermal resistance (often written as θ) of the entire thermal path. This means the effective resistance between the transistor (or IC) die and the ambient air. The ambient air temperature is not the temperature of the air in the room, but the temperature of the air at the heatsink's surface. If the heatsink is in a hot environment, then that temperature is what has to be considered. No heatsink should be operated where it can't get free airflow, because that will increase the temperature of the heatsink, and ultimately the transistor (or IC) case and the internal die. Most of the examples that follow will assume that the amp's heatsink has access to free air at no more than 25°C.

Thermal resistance (θ) includes the quoted figure from the manufacturer between the die and case, the insulating medium you use between the device's case and the heatsink, and the heatsink itself. See the heatsink article for some very detailed information about the various thermal transfer materials. There are several different ways you can insulate the transistor or IC case from the heatsink, and the most common are shown in the following table.
| Material | Thermal Conductivity | Electrical Insulation | Thermal Resistance | Other Properties |
|----------|---------------------|-----------------------|--------------------|------------------|
| mica | Good | Excellent | ~ 0.75 - 1.0 °C/W | Fragile |
| Kapton | Good | Excellent | ~ 0.9 - 1.5 °C/W | Robust (but very thin) |
| aluminium oxide | Excellent | Very Good | ~ 0.4 °C/W | Fragile - easily damaged |
| Sil-Pads | Fair | Excellent | ~ 1.0 - 1.5 °C/W | Convenient |
The above is simplified, and is based on the TO-220 case style. Larger cases will have a reduced thermal resistance, directly proportional to the surface area. For example, if you use a TOP-3 (plastic version of TO-3), TO-247 or TO-264 case the area is more than double, so thermal resistance may be around half that shown in the table. However, this also depends on the transistor specifications, how well you can prepare the heatsink surface and insulating medium, and how the transistors are held down. Note that silicone pads in general are a very poor choice if you expect to dissipate more than a few watts. Manufacturers' claims and reality are usually quite different from each other!
There are countless variables, but for the sake of convenience we'll assume for the moment that the total thermal resistance between the die and heatsink is 3°C/W. That means that for every watt dissipated by the transistor or IC power amp (long term average) the die will increase its temperature by 3°C. This assumes that the heatsink remains at 25°C, but of course that cannot and does not happen in reality.

The heatsink has to be made big enough to ensure that the die temperature remains as low as possible. This is essential to ensure that the transistors' safe operating area (SOA) will not be exceeded, even when the amplifier is driven at the worst case power level for an extended period. The SOA is temperature dependent, so hot transistors can dissipate less power than cool ones. Maintaining a fairly low die temperature also allows for instantaneous peak dissipation that's much higher than the average. The heatsink's thermal mass will ensure that the heatsink itself remains at a fairly stable temperature, but the die temperature will fluctuate widely during operation.
Figure 1 - Thermal Path - Junction To Ambient (Schematic)
Figure 1 shows the thermal path that we need to look at. The heat source is the transistor or IC die, and the thermal resistances shown are the three that need to be taken into account. The capacitors show the thermal mass of each component in the chain. The junction's thermal mass is tiny and can be ignored, as can the thermal mass of the case. The heatsink's thermal mass will usually be significant, and it's very important as it allows short bursts of very high power to be absorbed quickly, so only the average power needs to be considered.
Figure 2 - Thermal Path - Junction To Ambient (Physical)
This drawing shows the thermal path in more familiar form. It shows the interfaces in their physical form rather than a schematic. The end result is the same - there is thermal resistance at each interface because none of the materials is a perfect thermal conductor, and no interface between materials can be perfect. The heat spreader is the metal part of the semiconductor's case with flat-pack devices, and it's usually nickel plated copper or similar. TO-3 style devices use a steel case, with an internal copper 'coin' or heat spreader between the die and case itself. It's notable that most counterfeit transistors have the die attached directly to the steel case, resulting in much higher thermal resistance (steel is a very poor thermal conductor).
You can't directly change the junction to case thermal resistance, but you can improve matters by using parallel transistors, or using transistors with a higher power rating than is strictly necessary for normal operation. This isn't required for low to medium power (up to 100W) amplifiers, but becomes critical as power increases. Running any transistor to (or beyond) the limits of its rated instantaneous dissipation is a recipe for disaster, and this includes its rated safe operating area (to avoid second breakdown failure).

Most heatsinks are a fairly heavy mass of aluminium, and the thermal mass is usually quite high. While the die will experience short duration sudden temperature increases and decreases, the heatsink will rise to a stable temperature depending on the average power being dissipated. A heatsink can remain almost cold for a period of time, and it heats up fairly slowly if designed conservatively.
Note: One thing you need to be aware of is the nature of a heatsink's thermal resistance. When it's specified by the manufacturer or supplier, the operating temperature is rarely provided. This is most unfortunate, because you really do need to know at what surface temperature the claimed thermal resistance is valid. As the temperature differential increases, the thermal resistance falls. A heatsink operating at 100°C in an ambient temperature of 25°C will show a thermal resistance that's a great deal better than it will be at (say) 50°C. Because few suppliers ever tell you the operating temperature, you already have an unknown quantity that will affect all subsequent calculations.
The location and orientation of the heatsink also affects the thermal resistance. Unless you are using a fan, convection is the primary cause of air movement. The fins must be vertical so air can pass between the fins with the minimum possible interference. Anything with a heatsink should never be housed in a cabinet or any other enclosure that prevents free air movement into the room. Remember that the ambient temperature is the temperature of the air in the immediate vicinity of the heatsink, and this can be quite different from the room temperature.
Enclosing the heatsink in the cabinet is a really bad idea, unless there are large ventilation slots above and below the heatsink(s). This also means that the cabinet needs substantial feet to keep it off the surface upon which it's standing. You also can't place anything on top that will impede ventilation. Fans can be used, but you still need ventilation slots. Hot air must be able to escape the enclosure, and fresh cool air needs to be able to get in.

Placing all the power transistors right next to each other might look nice and be the most appropriate electrically, but it does nothing good for getting rid of heat. Power transistors (or other heat sources) should be spread across the heatsink area as much as possible, but remaining a sensible distance from edges and ends. The heat from each device has to be conducted through the aluminium, and because it's not a perfect thermal conductor the metal will be hotter directly behind the heat source.

The top of the heatsink will nearly always be slightly hotter than the bottom, because the air received (by convection) has already passed by the fins at the bottom, and is therefore hotter at the top where it exits. The thermal gradient is usually quite small, and can be discounted if the devices are all (more or less) mounted along the centre line.

If you haven't done so already, please see the article about Heatsinks. This knowledge is invaluable before you start, and it would not be sensible to repeat it all here.
The easiest amplifiers to calculate for are Class-A designs, because the dissipation is close to constant regardless of load. If the power supply is 30V and the current 1.5A, then the dissipation is simply 1.5A × 30V = 45W. Multiple devices will make it easier to keep the die temperature low, but it doesn't really matter if that power is dissipated in one or ten transistors, the total power is still the same. There are other considerations such as the thermal resistance between the transistor dies and the heatsink itself, but we'll look at that a little later. Before you start you'll need to decide on an appropriate maximum transistor die temperature. I suggest around 85°C if possible.

If we have to dissipate 45W and we don't want the heatsink temperature to exceed (say) 40°C in a 25°C ambient, then the heatsink's thermal resistance needs to be ...
Tr = 40 - 25 = 15°C
Rt = 15 / 45 = 0.33°C/W

(where Tr is the temperature rise and Rt is the thermal resistance)
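The same arithmetic can be expressed as a trivial helper (a sketch; the function name is mine, not from the article):

```python
def heatsink_theta(t_heatsink_max, t_ambient, power):
    """Required heatsink thermal resistance (deg C/W): the allowable
    temperature rise divided by the power to be dissipated."""
    return (t_heatsink_max - t_ambient) / power

# Class-A example from the text: 45W dissipated, 40C heatsink, 25C ambient
print(round(heatsink_theta(40.0, 25.0, 45.0), 2))   # 0.33 deg C/W
```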
Now we can factor in the thermal resistance between the transistor die(s) and the heatsink. With a pair of transistors they'll operate at half power - 22.5W for each. If the thermal resistance between die and heatsink is 2°C/W (difficult but achievable for high power devices mounted with care), each die will be 45°C hotter than the heatsink which we decided should run at no more than 40°C. The die temperature is simply the heatsink temperature plus the temperature rise across the case and mounting. The thermal gradient will be ...
Ambient = 25°C
Heatsink = 40°C
Junction = 85°C
With this amount of dissipation, it will be difficult to maintain the junction temperature at a maximum of 85°C unless the heatsink temperature is kept to 40°C or less. If the thermal resistance between the junction and heatsink is greater than 2°C/W you may end up with an impossible situation. For example, if the thermal resistance between junction and heatsink increases to 3°C/W, the heatsink would have to run at no more than 17.5°C - clearly impossible if the ambient is 25°C. The only alternatives are to allow the junctions to run hotter than the (hoped for) target figure or reduce the effective thermal resistance between the die and heatsink.
If more transistors are used, the heatsink temperature will remain the same, but each transistor die will run cooler. Each device still has the same thermal resistance from die to heatsink, but the power dissipated is reduced. These relationships are actually quite simple once you get your head around them. For example, if the amp dissipation is shared between four transistors instead of two, each will dissipate 11.25W instead of 22.5W, and the die temperature rise is reduced to 22.5°C, or just under 34°C if the thermal resistance is a more realistic 3°C/W between die and heatsink. This allows a smaller heatsink to be used, or means lower die temperature for the same heatsink. Lower operating temperature should always be a design goal, but is not always possible.
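The sharing described above is easy to sketch numerically (the function is mine, not from the article):

```python
def die_temp(t_heatsink, total_power, n_devices, theta_jh):
    """Junction temperature when total_power is shared equally by
    n_devices, each with die-to-heatsink resistance theta_jh (deg C/W)."""
    return t_heatsink + (total_power / n_devices) * theta_jh

print(die_temp(40.0, 45.0, 2, 2.0))   # 85.0 C with two devices at 2 C/W
print(die_temp(40.0, 45.0, 4, 2.0))   # 62.5 C - four devices run cooler
```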
When the amp does not dissipate a constant power, the calculations become harder. Since this describes the majority of amps in use (most commonly Class-AB), there are many decisions to be made. Quiescent current is usually fairly low, no more than 100mA in most designs and usually a lot less, so quiescent dissipation is easy to calculate. If the supply voltage is 70V (±35V) then the dissipation is 7W at the maximum quiescent current of 100mA, and will usually be lower than that.
The next graph shows peak output voltage and dissipation of one half of a Class-AB power amplifier. This shows the peak power in the positive output transistor with a 4 ohm resistive load, and the negative transistor has the same dissipation for negative-going half cycles. At exactly half the +35V supply voltage (17.5V), the transistor dissipation is at its maximum, 76W. The average dissipation in each output transistor is 22.7W at the onset of clipping. The situation changes with a reactive (loudspeaker) load and the peak power increases (as much as double), but the average remains much the same. This is discussed in more detail in the article Phase Angle Vs. Transistor Dissipation.
Figure 3 - Instantaneous Dissipation Vs. Output Voltage
If the amp will be driven fairly hard, a reasonable approximation for dissipation might be 50% of the output power. If the amp runs from ±35V and is driving a 4 ohm load, output power will be close to 100W, therefore total dissipation will be just under 50W. This is a worst case figure that will not be reached in practice, and it's common in commercial designs to allow less than half that, because music is dynamic and full power is never continuous. Knowing that a long term average dissipation of 25W is reasonable (if overly generous), the heatsink can be determined easily, using the same method as described above. If we can allow a maximum heatsink temperature of 60°C, we get ...
Rt = 35 / 25 = 1.4°C/W
That figure is for a single amplifier, and power dissipated is naturally double for a stereo amp, so the heatsink's thermal resistance should be 0.7°C/W for a stereo pair of amps. In most cases this may still be considered overkill, but if you design for the smallest possible heatsink, then it's a very good idea to include an over-temperature cutout, or a thermo-fan that will turn on if the heatsink temperature rises above a preset limit. An example calculation is shown below and is from an ST Microelectronics application note [ 1 ].
Pd = V² / (( 2π )² × RL )

(where V is the total supply voltage, Pd is total dissipation and RL is the load resistance in ohms)
For example, for the same amp described above (±35V supplies, 4 ohm load) ...
Pd = 70² / (( 2π )² × 4 ) = 31 Watts
The dissipation calculated as above is for the complete output stage, so the average dissipation of each output transistor is half that calculated. The figures derived using this formula are in reasonable agreement with the table shown further below, but are a little more optimistic (i.e. the dissipation is somewhat lower than my table shows).
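The application note's formula is a one-liner to check (a sketch; the function name is mine):

```python
from math import pi

def class_ab_dissipation(v_supply_total, r_load):
    """Output stage dissipation estimate: Pd = V^2 / ((2*pi)^2 * RL),
    with V the total (rail-to-rail) supply voltage."""
    return v_supply_total ** 2 / ((2.0 * pi) ** 2 * r_load)

# +/-35V supplies (70V total) into a 4 ohm load
print(round(class_ab_dissipation(70.0, 4.0)))   # 31 watts
```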
If the amp is rated at (say) 20W/channel, then you need to allow for a dissipation of up to 5W continuous. If the amplifier is a small 'chip amp' such as the LM1875, this has a TO-220 case. Therefore the heatsink has to be bigger than you think, because the IC's thermal resistance from die to heatsink is a lot higher than a pair of discrete transistors. In this instance, I suggest that the heatsink needs to be designed based on a dissipation of 10W, not 5W, so it should be about 3.5°C/W for each IC. A cooler heatsink allows for a higher temperature rise between the heatsink and transistor or IC die, while maintaining the die temperature at (or below) 85°C.

I took some measurements of a music source (FM radio) to determine the worst case peak to RMS (or average power) ratio one can expect. The type of music is largely irrelevant due to original material compression plus that added by most FM radio stations. What I was doing was obtaining a figure that can safely be used to determine the dissipation that one can expect from any given amplifier used with the worst possible input signal. Programme material with greater dynamic range requires less heatsinking.
Figure 4 - Worst Case Signal Waveform
The waveform is shown above. The peak amplitude is ±800mV and the RMS voltage is 342mV. The peak to RMS ratio is therefore about 2.35:1 or 7.5dB. This does change though, because 'infinite' compression is neither possible nor desirable, and over a period of a few minutes I saw the RMS voltage go as low as 250mV - the peaks were unchanged. The RMS voltage can also be higher than the 342mV measured, but not by a great deal. On average and taken over a reasonable period with different tunes playing, the peak to average ratio was 2.5:1 - a ratio of 8dB. For a hypothetical 100W amplifier, the average power is about 16W when the amp is driven to just below clipping.
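The measured figures above are easy to reproduce (a sketch, not from the article; the article rounds the dB figure slightly):

```python
from math import log10

def crest_factor_db(v_peak, v_rms):
    """Peak-to-RMS (crest factor) ratio expressed in dB."""
    return 20.0 * log10(v_peak / v_rms)

# The captured FM waveform: +/-800mV peaks, 342mV RMS
print(round(crest_factor_db(0.800, 0.342), 1))   # roughly 7.4dB

# Average power of a 100W amp with a 2.5:1 peak-to-average voltage ratio
print(100.0 / 2.5 ** 2)   # 16.0W
```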
When an amplifier is driven with a sinewave to just below clipping, a suitably pessimistic assumption is that transistor dissipation (two devices in push-pull) is roughly 50% (both devices) of the power delivered to the load. The worst case is to run the amplifier at half output voltage (one quarter of full power), when the total transistor dissipation will be close to double the power delivered to the load. A 100W amp operating at 25W continuous will dissipate about 50W as heat. If the RMS output voltage is higher or lower than half of the maximum, output stage dissipation is reduced.

We don't listen to sinewaves, so the peak to average ratio determined above should be used. This provides a reasonable determination of the likely average power needed. This can be difficult because there are too many unknown factors. You can't design a heatsink based on how you think an amplifier will be used, because others will use it differently. If we use the signal I captured as a possible 'typical' signal, when an amplifier is at the onset of clipping the RMS voltage will be around 0.4 of the maximum possible, close enough to the half voltage point where dissipation is at its maximum.

By that reckoning, a 100W amplifier needs a heatsink capable of dissipating up to 40W when driven by the waveform shown above and when driven to the maximum undistorted power. Fortunately, reality is different from worst case and the long-term average will usually be somewhat lower than calculated by using absolute worst case measurements. It's generally safe to assume the peak to average ratio to be 10dB, so the average output power will be 1/10th of the peak output. The average output power from a 100W amp will be around 10W, and total transistor dissipation will be 30W.

These general principles apply regardless of whether you have a discrete or chip (IC) power amp. If a 50W IC amplifier is running at just under clipping with normal programme material, the average output power will be about 5W and the IC's average dissipation will be around 15W. Now we have enough information to devise some rules to allow the appropriate heatsink thermal performance to be worked out.
Class-D amplifiers are a special case, and there are no simple methods you can use to calculate the dissipation. The switching MOSFETs have two (or perhaps three) different ways they can generate heat. The first (and simplest) of these is the power dissipated as a result of load current and the MOSFET's 'on' resistance (RDS-ON). If 5A flows and RDS-ON is 0.1 ohm, then the MOSFETs will dissipate about 250mW each (average), which is easily handled.

Because no MOSFET can switch instantly, there is a very brief period where the MOSFET has a significant voltage across it, as well as current through it. The instantaneous peak dissipation can be very high, but it lasts for less than a microsecond or so and the average is low. Just how low depends on the design of the circuit, and the ability of the drive circuit to source and sink the current demanded by the MOSFET's gate capacitance.

The third problem should not happen, and is commonly known as 'shoot-through'. This is a situation where the two MOSFETs conduct at the same time, which can raise the average dissipation to destructive levels. Although this is never intended, it may occur if the MOSFETs get too hot. At this point, failure is perhaps only milliseconds away. For this reason, it's essential that Class-D amps have an adequate heatsink.

Unfortunately, there is no easy way to work out the dissipation of a Class-D amplifier. If the designer or manufacturer provides the information you need then it's simple enough, but if not you will only know by testing. During any test, it's very important to ensure that the MOSFETs run as cool as possible to prevent thermal runaway due to RDS-ON. It's made very clear to most MOSFET users that RDS-ON increases with temperature and thus forces current sharing with parallel devices, but the downside is that as RDS-ON increases, so does the dissipation when the MOSFET is on. Increased dissipation leads to higher temperature, increasing RDS-ON and increasing dissipation. I think you can figure out what comes next.
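The RDS-ON feedback loop just described can be illustrated numerically. The values below (a ~0.7%/°C RDS-ON tempco, 5A RMS load current, 5°C/W total thermal resistance) are illustrative assumptions only, not figures from a specific device.

```python
# Sketch of the RDS(on) thermal feedback loop: higher temperature raises
# RDS(on), which raises conduction loss, which raises temperature again.
# Assumed illustrative values: ~0.7%/°C tempco, 5A RMS, Rth = 5°C/W total.

def settle_temperature(i_rms=5.0, rds_25=0.1, tempco=0.007, rth=5.0,
                       ambient=25.0, steps=200):
    temp = ambient
    for _ in range(steps):
        rds = rds_25 * (1 + tempco * (temp - ambient))  # RDS(on) at this temp
        power = i_rms ** 2 * rds                        # conduction loss (W)
        new_temp = ambient + power * rth                # steady-state temp
        if abs(new_temp - temp) < 0.01:                 # converged -> stable
            return new_temp
        temp = new_temp
    return temp  # failed to converge - thermal runaway territory

print(f"Settled at {settle_temperature():.1f}°C")
```

With a good heatsink the loop converges to a modest temperature; raise the thermal resistance enough and the same loop diverges, which is the runaway scenario the text warns about.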
It's a very good idea to keep MOSFETs as cool as possible, and fortunately with a well designed Class-D stage that's not especially difficult. You also need to be aware that some low cost Class-D modules (in particular those from Asia) have barely adequate heatsinks, and may self destruct if operated at maximum power for long periods.
Firstly, it must be understood that if a heatsink operates at 50°C with 25°C ambient air temperature, the heatsink's allowable temperature rise is 25°C. With no power being dissipated, the heatsink will already be at the room's ambient temperature, so in all calculations the ambient temperature has to be subtracted from the maximum allowable heatsink temperature, to obtain a figure for temperature rise.

Next, we need to work out the maximum acceptable transistor or die temperature. I recommend that 85°C is a sensible maximum, as most semiconductors still have a reasonable allowable dissipation at that temperature, and it's not so high that reliability is likely to suffer too much. IC power amps are different from discrete designs, because all the parts (power and driver transistors, etc.) are located in the same package, and the total thermal resistance will be higher as a result. On average, you might expect that the thermal resistance between the die and heatsink will be about 2°C/W (although that's actually optimistic), so if the maximum power is 50W and average dissipation is 15W then the temperature gradient will be 30°C.
Die temperature = heatsink temperature + thermal gradient ... or ...
Heatsink temperature = die temperature - thermal gradient
HS temp. = 85° - 30° = 55°C
We have 55°C to work with at 15W, and we are allowing for an ambient air temperature (at the heatsink surface) of 25°C. That means that 15W has to be dissipated with a maximum temperature rise of 30°C, so the heatsink needed is 2°C/W for each amplifier. That is a fairly large heatsink, but it will be needed if the amp is run at between half and full power over an extended period. At continuous long term full power (programme), the heatsink will run at about 55°C.

However, for normal domestic applications, you'll (probably) get away with a heatsink around half the calculated size (i.e. a higher thermal resistance). So a 3°C/W heatsink may be quite acceptable, even for a pair of IC power amps, but if the amps are driven to high power for an extended period they will probably shut down thanks to internal thermal protection. If the ICs don't have thermal shutdown they will likely fail if driven hard for any length of time.

The same general principles apply for amps with discrete output stages, but there is a benefit that we don't get with an IC amplifier. The total output stage dissipation is distributed because the output transistors are discrete. If the stage has two output devices, then each handles half the total dissipation. With four devices, each only handles one quarter. This makes it a lot easier to get the heat out of the semiconductor dies and into the heatsink.

A 50W discrete amplifier will still dissipate about 15W with programme material at full volume (without clipping). Each output transistor will handle half that - 7.5W. Assume the same thermal resistance from die to heatsink as before - 2°C/W. At 7.5W (each device), the thermal gradient is 15°C, the heatsink can now operate at up to 70°C and needs a thermal resistance of 3°C/W for each amplifier.

Note that with multiple transistors you can estimate the thermal resistance by dividing (say) 2°C/W by the number of output devices. For the above example, the total average dissipation was 15W, and with 2 transistors we can estimate the thermal resistance between junction and case at 1°C/W. The thermal gradient is unchanged at 15°C.

Much as I'd love to be able to provide a simple formula that would allow you to determine the heatsink size needed for any given output power, it's not possible to do so. However, based on the above calculations it's not particularly hard to work it out. Some assumptions are essential, and the heatsink needed for an IC power amp is actually larger than that for a discrete design of the same output power. You might have expected the reverse, but the IC has to dissipate the total power through a single junction to case - case to heatsink interface, so the thermal resistance is higher.

A very rough way to determine the thermal resistance of a heatsink is to use the following formula. It's not particularly accurate and doesn't consider the heatsink's thickness, temperature, thermal conductivity or surface treatment, but it will give you an idea ...
Thermal Resistance = 50 / √A     where A is the total surface area in cm²
See the Heatsinks article for a great deal more, and to look at more accurate ways to estimate the thermal resistance. Using the above, a piece of aluminium 50mm x 50mm will have a thermal resistance of ~7°C/W if both sides are exposed to ambient air (total area of both sides is 50cm²).
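The rough formula above is easily checked. This sketch simply implements Rth ≈ 50 / √A for a plate with both sides exposed, as in the 50mm × 50mm example.

```python
import math

# Rough heatsink thermal resistance from total exposed surface area,
# using the approximation quoted above: Rth ~ 50 / sqrt(A), A in cm².
def heatsink_rth(area_cm2):
    return 50.0 / math.sqrt(area_cm2)

# 50mm x 50mm aluminium plate, both sides exposed: 2 * 25cm² = 50cm² total
print(f"{heatsink_rth(50):.1f}°C/W")
```

The result is about 7°C/W, as stated in the text. Remember this ignores thickness, orientation, surface finish and temperature, so treat it as a starting point only.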
First, determine the maximum output power, based on the supply voltage and load impedance. Some common values are tabulated below. The figures shown are 'ideal' and do not include losses in the amplifier or power supply. Actual power levels will be between 10% and 20% lower than those shown, but the -10dB power needs to be calculated based on the values given because the supply voltage will not collapse by very much with a relatively low average power output.

I could have included a heatsink size for each of the amp ratings below, but it would have to be based on too many assumptions and would therefore be worthless. Instead, you have to make some calculations using the process described, and based on the number of output devices used. We've already seen that using multiple output devices reduces the size of the heatsink required, so that has to be a factor in the final calculations. You may also find that you can run the semiconductor die at more or less than the 85°C suggested. Everything makes a difference!
Supply Voltage | 8 Ohm Power | -10dB Diss. | 4 Ohm Power | -10dB Diss.
±15V   | 14 W   | 4.2 W  | 28 W    | 8.4 W
±20V   | 25 W   | 7.5 W  | 50 W    | 15 W
±25V   | 39 W   | 11.4 W | 78 W    | 23 W
±30V   | 56 W   | 17 W   | 112 W   | 34 W
±35V   | 76 W   | 23 W   | 153 W   | 46 W
±42V   | 110 W  | 33 W   | 220 W   | 66 W
±56V   | 196 W  | 59 W   | 392 W   | 118 W
±60V   | 225 W  | 68 W   | 450 W   | 136 W
±70V   | 306 W  | 92 W   | 612 W   | 184 W
±100V  | 625 W  | 188 W  | 1.25 kW | 375 W
Table 2 - Maximum Output Power & -10dB Dissipation vs. Supply Voltage
The dissipation shown at -10dB is the total output stage dissipation, and includes the positive and negative transistors in each case shown. The output stage dissipation is based on a sinewave at 10% output power, roughly equivalent to -10dB average output voltage or power but deliberately slightly pessimistic. Output powers at or below 25W will often indicate an IC amplifier, and above that will usually use discrete output transistors. Supply voltages above ±35V will almost always involve a discrete output stage, and above ±42V there will usually be at least 4 output devices.
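The table's 'ideal' figures can be regenerated from first principles: peak output power is V²/2R for a sinewave swinging to the supply rails, and the -10dB dissipation column is three times the -10dB output power. Most entries match to within rounding (one or two entries in the original table differ slightly, presumably due to rounding at a different step).

```python
# Regenerate the 'ideal' figures of Table 2: sinewave power into a resistive
# load with rail-to-rail swing (no losses), plus the -10dB dissipation
# estimate (3x the average output power at -10dB).
def table_row(supply_v, load_ohms):
    power = supply_v ** 2 / (2 * load_ohms)   # Vpeak² / 2R, ideal sine power
    diss = 3 * power / 10                     # dissipation at -10dB output
    return power, diss

for v in (15, 20, 25, 30, 35, 42, 56, 60, 70, 100):
    p8, d8 = table_row(v, 8)
    p4, d4 = table_row(v, 4)
    print(f"±{v}V:  {p8:5.0f}W / {d8:5.1f}W (8 ohm)   "
          f"{p4:6.0f}W / {d4:5.1f}W (4 ohm)")
```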
Note that the table assumes that the supply does not collapse under load, but that will almost never be the case in reality. If the unloaded supply voltage is ±35V, it's reasonable to expect that this will fall to about ±30V under load, especially with 4 ohm loads. As a result, the total dissipation (long term average) will usually be somewhat less than indicated. The table is therefore a little pessimistic, and your heatsink may end up a little bigger than is strictly necessary. This is much better than making it too small!
If the amplifier uses a single IC, assume at least 3°C/W from junction to heatsink, and the entire power dissipation shown in Table 2 will be dissipated in the IC package. I included the supply voltage because that determines the maximum output power and total dissipation.

For amplifiers with discrete output devices, assume 3°C/W for each device, and the dissipation shown is shared between the devices. If dissipation is 34W (±56V, 4 ohm load) it will probably be shared between 4 devices, so each will only need to dissipate an average of 8.5W which means a temperature rise of 25.5°C for each device. If we allow a die temperature of 85°C, the heatsink temperature can be as high as 60°C, and needs to have a thermal resistance of 1°C/W. A bigger heatsink means the devices will run cooler and is preferred, but the one calculated will most likely be fine for home listening.

To work out the heatsink's thermal resistance, we use exactly the same method as described earlier. First, measure, calculate or obtain from Table 2, the amp's power rating and supply voltage. Determine the average dissipation based on 1/10th (-10dB) full power. Dissipation is approximately the -10dB power level multiplied by 3. For example, a 70W into 4 ohms amplifier delivers 7W at -10dB. Average dissipation will be 7 x 3 which is 21W.
Power dissipated = ( max power / 10 ) * 3
Pd = ( 70 / 10 ) * 3 = 21W
Now you must consider the total thermal resistance between the junction and heatsink. For a single device (an IC amplifier), all the power is dissipated in a single package, and the thermal resistance will be ~3°C/W. For two transistors, each dissipates half the total, and the total equivalent thermal resistance is 1.5°C/W - assuming 3°C/W thermal resistance for each device. The temperature gradient is 31.5°C.
Die temperature = heatsink temperature + thermal gradient ... or ...
Heatsink temperature = die temperature - thermal gradient
HS temp. = 85° - 31.5° = 53.5°C
Now that you know the heatsink temperature, subtract the ambient (25°C) and work out the heatsink's thermal resistance.
Hs rise = HS temp. - 25°
Hs rise = 53.5° - 25° = 28.5°C
Rth (Hs) = Hs rise / Power dissipated
Rth (Hs) = 28.5° / 21W = 1.36°C/W
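The whole procedure, from rated power to required heatsink rating, can be expressed as one small function. It follows the worked example in the text exactly (70W into 4 ohms, two output devices at 3°C/W each, 85°C maximum die temperature, 25°C ambient); the parameter names are mine.

```python
# End-to-end heatsink sizing, following the steps in the text:
# average dissipation at -10dB, junction-to-heatsink gradient, allowable
# heatsink temperature, then the required heatsink thermal resistance.
def heatsink_thermal_resistance(max_power_w, n_devices,
                                rth_jc_per_device=3.0,
                                die_max_c=85.0, ambient_c=25.0):
    pd = (max_power_w / 10) * 3                 # average dissipation at -10dB
    rth_total = rth_jc_per_device / n_devices   # devices share the heat
    gradient = pd * rth_total                   # junction-to-heatsink rise
    hs_temp = die_max_c - gradient              # allowable heatsink temp
    hs_rise = hs_temp - ambient_c               # rise above ambient
    return hs_rise / pd                         # required heatsink °C/W

print(f"{heatsink_thermal_resistance(70, 2):.2f}°C/W")
```

For the 70W / two-device example this returns 1.36°C/W, matching the figure derived above.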
This is the heatsink's thermal resistance needed to satisfy the criteria set. The figure is an approximation, but errs on the side of caution. A slightly smaller heatsink will more than likely suffice for all normal listening, but should still provide a reasonable performance if the amp is pushed hard. It also helps that the thermal resistance of any heatsink improves (becomes lower) as the heatsink temperature rise above ambient increases.
One thing that you don't know is how the heatsink manufacturer arrived at the published thermal resistance in the first place. Was it done with the heatsink at 25°C above ambient? 50°C above ambient? More? We don't know, because this is rarely provided, so by assuming the worst (or designing with caution) there's a fighting chance that your amplifier will survive normal use, as well as the occasional party where it will probably be abused fairly heavily.

The only way to know for sure what the thermal resistance is for the designed maximum temperature is to test it.

We also need to consider the load impedance. A loudspeaker is not resistive, and the impedance varies with frequency. The nominal impedance is simply a reference to the average impedance across the frequency range, although the way it's calculated (or guessed) is often obscure. When an amp is driving a full range speaker system, the impedance at some frequencies will be lower than the claimed value, and at other frequencies it will be higher.
Figure 5 - Typical Loudspeaker Impedance Curve
The above shows a reasonably typical bass-reflex enclosure's impedance response. The double peak at the low frequency end is always present with vented boxes, and the impedance peak just below 2kHz is the result of the crossover network. This speaker has a nominal impedance of 8 ohms, but as you can see the impedance is only 8 ohms at 400Hz. Minima are seen at 85Hz and 3.5kHz at about 6 ohms or so. So even though the impedance would be classified as 8 ohms, over much of the frequency range it's quite a bit higher. The vent tuned frequency is at 28Hz (maximum output and minimum impedance at the extreme low end).
This does impose a reactive load on the amplifier, but because the impedance is much greater than the nominal over much of the range, the amplifier's average dissipation is less than you would have thought. The effects are complex, but an amplifier driving a speaker load almost always has an easier time than if it's driving a dummy load of the same nominal impedance.

When designing the heatsink, you need to take into account the impedance variations of all speakers that are likely to be used with the amp. This is one of the reasons that dummy load tests at the minimum nominal impedance are important, as this will nearly always be a harder test than a loudspeaker. However, there are some speakers that are classified as 'difficult', often because of impedance dips that fall well below the nominal value. Whether these will cause an amplifier serious problems or not depends on how low the impedance falls and whether these dips are narrow or broad.

Narrow impedance dips can cause output devices to exceed their safe operating area at one or two frequencies, but generally don't increase the average dissipation by much. Broad dips in the midrange area in particular can increase the average dissipation significantly, especially if the amp is driven hard.

Ultimately though, it's not possible to design a heatsink to the absolute minimum and expect it to be able to handle every speaker system made. If that's what you need to do, then both the amplifier itself and the heatsink need to have adequate reserves to be able to handle the worst possible case. This increases the cost and size of the heatsink and the output devices. In reality, there aren't many speakers that cause great stress to most amplifiers, and some of those that do are probably best avoided anyway. If a speaker designer can't get the impedance right then it's possible that the response will be all over the place as well.
Although using a fan is a nuisance for a hi-fi amp, consider using a fan that only operates when (or if) the heatsink gets above a predetermined temperature. Project 42 is one method you can use. 99% of the time the fan will remain silent and the amp will be operating well within the output device limits. If pushed hard or used with material that has little dynamic range the fan will turn on only for as long as it's needed. Once the amp cools, the fan turns off again.

This allows you to use a heatsink that's somewhat smaller than necessary for continuous maximum output power, and that doesn't intrude unless it's necessary to protect the amplifier. You can also just use a bimetallic thermal switch attached to the heatsink, preferably as close to the output transistors as possible. Choose one with a temperature rating that's suitable for your needs - around 50-60°C will be fine for most amplifiers.

Even a small amount of forced air can dramatically improve a heatsink's thermal resistance, but it's something that must be tested thoroughly before you set up the amp and forget about it. Ensure that air is blown against the heatsink for maximum turbulence, and airflow must be directed so that it hits the heatsink close to where the transistors are mounted. Make sure that your thermal sensor/switch is not in or near the airflow, or it may be cooled down faster than the heatsink and the fan will turn off before the heatsink is back to a sensible temperature.

A surprisingly large number of people get this wrong, including some manufacturers of high power amplifiers! Locating the thermal sensor close to the airflow means that it will be cooler than the majority of the heatsink, so transistors can run much hotter than intended. Do not be tempted to make the fan suck air away from the heatsink! This is another common error, and the fan's efficacy is seriously reduced by doing so.

Remember that if you use a fan, its airflow must not be impeded. For example, blowing air into a sealed box won't achieve anything useful, and there must be an exit point that will allow the maximum airflow possible. The exit should be at least as big as the fan, and if you include filters to prevent the electronics from being coated with fluff and dust, they have to allow good airflow and they must be kept clean!

Whenever you use a fan, consider including a thermal switch that will turn the amp off before it self-destructs. If the fan normally turns on at (say) 60°C, you might use a thermal switch that will shut the amp down if the temperature ever gets to 80°C, and that can only happen if the fan fails or its airflow is impeded.
A heatsink is not a device that can magically absorb heat from active components. It requires a lot of surface area so it can transfer the heat to the surrounding air, which itself should be as cool as possible. This means that there must be air circulation to the room. Air circulation within the enclosure is (almost) completely useless if the hot air can't be replaced by cool outside air. Many people make the mistake of adding vents on the bottom of a cabinet, but fail to include vents at the top, so air can't circulate through the enclosure.

As noted at the beginning of this article, the answer to the question posed in the heading remains "it depends". The above provides some useful guidelines and hopefully will provide at least a reasonable starting point, but there are so many considerations that it is literally impossible to provide a single figure for heatsink size for any given amplifier. It's always better to err on the side of caution, and use a heatsink that may be a bit bigger than you really need. There really is no such thing as a heatsink that's too big.
Also, consider that a 100W amp running at 10W average power (just below clipping on transients) with speakers of typical sensitivity (say 87dB/1W/1m) will be generating an average sound pressure level (SPL) of 97dB at 1 metre distance. There are two amps in stereo and you will almost certainly be closer to 2m away, but the combined average in the room will still be at least 97dB SPL. That's pretty loud in the greater scheme of things, and hearing protection guidelines indicate for that SPL the maximum exposure in any 24 hour period is only 30 minutes.
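The SPL arithmetic above is just the standard sensitivity formula: SPL at 1 metre is the 1W sensitivity plus 10·log₁₀ of the power in watts.

```python
import math

# SPL from amplifier power and speaker sensitivity:
# SPL(1m) = sensitivity (dB/1W/1m) + 10 * log10(power in watts)
def spl_at_1m(sensitivity_db, power_w):
    return sensitivity_db + 10 * math.log10(power_w)

print(f"{spl_at_1m(87, 10):.0f}dB SPL")   # 87dB/W speaker at 10W
```

An 87dB/1W/1m speaker at 10W gives 97dB SPL, as stated, and at 1W gives 87dB, matching the typical-listening figure a little further on.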
Most of the time, power amps used for home systems will operate with an average power of around 1W or less. With fairly typical loudspeakers, 1W per channel will provide an SPL in the room of about 87dB. This might not seem like much, but it's a lot louder than normal speech. This means that in theory, most people could use 10W amplifiers and be perfectly happy, but it's too limiting for anyone who listens to music with real dynamics. Most movie soundtracks also have a wide dynamic range and a reasonable amount of headroom is essential. 10dB is about right, which usually means around 100W per channel. At the average 1W or so listening level amp dissipation will be negligible, and often barely more than quiescent.

If the amp is pushed hard (well into clipping), that's theoretically better for the amplifier because the dissipation in transistors that are turned on hard is very low (see Figure 4). For example, if a 100W amp is driven to the onset of clipping, the dissipated power is about 30W. If the same amp is overdriven by 10%, the output power increases to 115W and total dissipation falls to 26W. More clipping means even lower dissipation (but of course it sounds gross and places your speakers at risk). Note too that this only applies for a sinewave, and if music is playing you are likely to greatly increase the total dissipation when the amp is pushed into clipping on loud sections and transients. This is because the average power increases.

If the overall gain is increased to the point where an amplifier is clipping by 3dB (meaning that the input signal is 3dB too high), the average power is increased by roughly the same amount, so instead of the average power being at -10dB, it will be at -7dB instead. This can increase the output stage dissipation quite dramatically.
So, as noted at the beginning, there are no simple answers. It's usually best to design around the estimates shown in Table 2. These are conservative and will generally give a fairly close approximation to the size of heatsink you should use based on average dissipation. You still need to work through the examples to arrive at a final heatsink rating in °C/W.

If you really must (for whatever reason) use a smaller than optimal heatsink, then include a thermo-fan so that if/when the amp is pushed hard it doesn't self destruct.

There are only two references because there is so little info on the Net, and the primary source of information was other ESP material as listed within this article, or obtained from measurements.
1   Dissipated Power And Heat Sink Dimensioning In Audio Amplifier ICs - STMicroelectronics AN1965
2   The Effect Of Forced Air Cooling On Heat Sink Thermal Ratings - Crydom Inc.
Elliott Sound Products - Using HEXFETs in High Fidelity Audio
When we build linear power amplifiers, we always need to choose some device for the output stage. This could be any power device including valves, BJTs, IGBTs, and MOSFETs. Each has its own strengths and weaknesses which forces us to choose between them. If, perchance, we wanted to build a very simple and accurate amplifier, we can safely ignore valves, since they all need heating circuitry and are not simple for a true hi-fi amplifier. That it is possible to build a valve amp to a high specification is not in doubt, but they tend to be complex and expensive.
BJTs are often used, but they do not respond well to even momentary overloads. This is because they suffer from second-breakdown - an instantaneous and catastrophic failure mode. IGBTs (Insulated Gate Bipolar Transistors) are seldom used, and will be very similar to a BJT, only with an insulated gate. They still need thermal compensation and a suitable gate drive design, and can suffer from a 'latch-up' condition in some cases. Lastly there is the MOSFET, which does not suffer any second-breakdown effects (although this is not strictly true - see below for more info). MOSFETs (Metal Oxide Semiconductor Field Effect Transistors) come in two primary types - vertical and lateral.

These devices are extremely rugged, yet they do have a large nonlinear gate capacitance to deal with. If driven incorrectly, they show high distortion levels, especially vertical types - most commonly these days, HEXFETs. This is why I wrote this article - to show how to use HEXFETs properly in audio applications.

The update below has some important information that I recommend you read thoroughly and make sure you understand before settling on the use of HEXFETs in your next amp project. While there appear to be many advantages to their use over BJTs, HEXFETs may often suffer from exactly the same problems - thermal runaway and a failure mode that is suspiciously similar to second breakdown. On top of this, there is a much larger voltage loss ... 2-4V is needed to bias the HEXFETs to the on condition, vs. 0.65V (nominal) for BJTs. This voltage is usually taken from the main supplies, so for a given supply voltage, expect a little less output power.
Many people say (including IR) that HEXFETs are not suited for linear audio circuits and should be avoided. Well, that is the easy route to take for designing an output stage. Any device can be used for audio and give great performance if a proper design is found. It is just easier to use more linear devices. Lateral MOSFETs are usually specified for audio, but there are relatively few different devices on the market. When found, they tend to be quite expensive. On the other hand, HEXFETs are very common, reasonably priced, and only need a good design to do well.

This article is intended for Class-AB designs. HEXFETs will run Class-A with barely any problems besides driving the gate. If you are designing a Class-A amplifier, the first trick (see below) should be used (the second is not needed since the bias is already quite high).

Alright, now for some explanations. Comparatively, HEXFETs usually have a lower gate capacitance than other vertical MOSFETs, yet a higher gate capacitance than their lateral counterparts. There is not only one capacitance to deal with, but two (one from the gate to source and the other from the gate to drain). This is the main problem: to find a way to drive the gate capacitance of the HEXFETs. Through a lot of time and molten breadboards, I found the best two things to design for are the following:
1 - Fix: Drive the gates with as much current as possible. This may include adding a Class-AB driver stage.
    Why: HEXFETs have a nonlinear transfer curve up to about an amp or two, depending on the device(s) used. In a Class-AB amplifier, this characteristic is the cause of the majority of the THD. When driven with enough current, the device will follow the 'new' linear curve, since it is balancing out the nonlinear gate capacitance. The lower the impedance of the driver stage the better.

2 - Fix: HEXFETs like to run hot. This does not mean use an inadequate heatsink, but the bias between devices should be a bit more than many are used to. 250mA of idle current is not a bad bias figure for these devices.
    Why: To balance out the nonlinear curve, we can simply cut it off where it seems too bad by using bias. This will increase dissipation, though.
For the design of the amplifier, I will assume a single LTP input stage. Better performance can be seen by using multiple LTPs, but this will not be a simple design (in fact it will be quite complex with high frequency stability issues needing attention).
When choosing the complementary output components, one can obviously choose the IRFPxxx and its IRFP9xxx complement. If we look at these complementary device data sheets, we will see very different figures for current capability, on resistance, and, most importantly, gain (or forward transconductance). But if we use a matching tool, we will find that the gain of actual devices varies considerably from the data sheet. That is why we need to buy a few extra and match them together. Since the gain varies a bit from batch to batch, it is quite easy to find an IRFPxxx and IRFP9xxx that are very similar, at least with gain factors.

Also take note that HEXFETs will require a Vbe multiplier for thermal compensation, since the negative temperature coefficient does not come into play until the device has about 10 amps through it (at least for the IRFP240). The exact values around the Vbe multiplier (also known as a bias servo) are critical to ensure that the thermal performance is matched as closely as possible.

In every practical design I have tried I had to use a Class-AB driver stage. A Class-A driver will work fine if you really want an electric heater, as you will see in the next calculation. Now, in order to size up the proper driver for the FETs, we need to do a little maths. I promise it is not hard. An example would work nicely here ... if we wanted to design a Class-AB driver stage with five IRFP240 and five IRFP9240 devices, how much current will we need at minimum for full functionality up to 50kHz? For a better understanding, a simplified output stage circuit is shown below.
Figure 1 - MOSFETs and Driver Circuit
We will do calculations using the gate charge method, which IR recommends (AN-944). Looking at the data-sheets, we find the IRFP240 has a total gate charge (Qg) of 70nC and the IRFP9240 has a Qg of 44nC. Don't add these yet! We will find each device's needs individually. The general formula to determine gate current is ...
I = 100 * Qg * f     where I is the current needed, Qg is the total gate charge in Coulombs, and f is the frequency of operation
The multiplication factor of 100 gives the headroom needed for accurately reproducing a square wave (or high frequency sinewave), since the gate driver needs a lot of current to quickly switch the MOSFET from OFF to ON. Although the requirement for this is minimal (the CD format is incapable of anything even approaching a square wave above a couple of kHz), it has become an expectation that power amps should be able to reproduce perfect square waves at 10kHz as a minimum.
When we plug our figures in for the IRFP240, we get I = 100 * (70E-9 * 50,000) = 350mA per device
For the IRFP9240 we get I = 100 * (44E-9 * 50,000) = 220mA per device

Multiplying each figure by five (because there are five devices of each polarity) gives us 1.75 amps for the upper driver and 1.1 amps for the lower driver. So a Class-A driver would need bias set to 3.5 amps to get the job done with a reasonable safety margin.
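The gate charge calculation generalises easily. This sketch uses the I = 100·Qg·f rule from AN-944 as applied above, with the 70nC and 44nC total gate charge figures quoted from the datasheets.

```python
# Gate drive current from total gate charge (the I = 100 * Qg * f rule),
# applied to the 5 + 5 device example: IRFP240 Qg = 70nC, IRFP9240 Qg = 44nC.
def drive_current(qg_coulombs, freq_hz, headroom=100):
    return headroom * qg_coulombs * freq_hz   # amps needed per device

upper = 5 * drive_current(70e-9, 50e3)   # five IRFP240 (N-channel side)
lower = 5 * drive_current(44e-9, 50e3)   # five IRFP9240 (P-channel side)
print(f"upper driver: {upper:.2f}A, lower driver: {lower:.2f}A")
```

This gives 1.75A for the upper bank and 1.1A for the lower, which is why a Class-AB driver is far more practical than a Class-A stage biased at several amps.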
The value for R7 will depend on the linearity of the driver transistors. I had to guess and check with my ammeter to get a good value. This can range anywhere from 100 ohms up to perhaps 5k. Make sure you check the idle current before calling the design done! These drivers (Q7 and Q8) may need a heatsink. Also note the capacitor in parallel with R7. This should be of a high value (e.g. 100µF or more), and 470µF works fine for my 10 MOSFET stage shown here. It helps with discharging the MOSFET gates by providing a path for the gate current.

These current figures seem quite high, but keep in mind this current will only last a very short time compared to the signal, and virtually no current is needed to keep the devices in either the OFF or ON state. The current to reproduce a sinewave will be a bit lower, since it is a smooth curve, but this much headroom will drastically lower distortion. This is why we cannot practically use a Class-A driver, unless, of course, we use one pair of output devices.

For some comparison, below is a HEXFET setup driven by a Class-A driver at 13mA bias:
Figure 2 - Spectrum of HEXFET with 13mA Class-A driver
The large notch is at the second harmonic, and the small bump to the right is the fourth harmonic. Barely any third harmonic is seen. This shows 0.25% distortion at the second harmonic at ¾ power and 250mA bias. Not very good for true hi-fi, unless we are making a valve-like amplifier. Even this will not show the same effects as a true valve amp - the nature of the distortion components will almost certainly be different.
Adjusting the bias to 1 amp removes nearly all distortion, yet now we are approaching a heater ... I mean Class-A.

After fixing the problem by adding a Class-AB driver, distortion was greatly decreased ...
Figure 3 - Spectrum of HEXFET with Class-AB driver
As the picture shows, the second harmonic was reduced considerably, while the fourth harmonic is below the noise floor. This shows 0.04% distortion solely on the second harmonic at ¾ power and still with 250mA bias. This greatly improved the amplifier. At one watt, the distortion is not measurable at all, unlike with the class A driver. Reducing the gate resistors to 4.7 Ohms to get more current through does nothing noticeable, so the use of 10 Ohm resistors is fine. There is no evidence of 'notch' distortion or any other nasty odd harmonic, only a 'nice' second harmonic added in. Also note that this amp was built on a breadboard. A compact and nicely wired PCB should decrease distortion even further.
Below is the final simplified schematic of the entire amplifier ...
Figure 4 - Simplified Schematic of Complete HEXFET Amplifier
It looks very simple, and includes the Class-AB driving stage to improve gate driving. It's very simple compared to amps with multiple LTP stages. The minimum stability network (Zobel) shown is always needed, and a series inductor (with parallel resistor) may also be required. The values of these components will be found by experiment.
For some further reductions in distortion, the following work quite well:
HEXFETs are decent devices once the gates are driven correctly. They are much more rugged than BJTs, as my pile of burned parts shows, and sound very good when a class AB driver is added. I hope this short article will aid others in using these 'switching' and 'not linear enough for audio' devices to get distortion figures below those of many good amplifiers with 'very linear' devices. And remember one thing - any output device can be precise if a proper design is found. Finding the correct design parameters becomes more complex with non-linear devices.
The following are all PDF files, and are direct links to the International Rectifier web site ...
IRFP240 data sheet
IRFP9240 data sheet
AN-936 application note - The Do's and Don'ts of Using MOS-Gated Transistors
AN-937 application note - Gate Drive Characteristics and Requirements for HEXFET® power MOSFETs
AN-941 application note - Paralleling HEXFET® power MOSFETs
AN-944 application note - Use Gate Charge to Design the Gate Drive Circuit for Power MOSFETs and IGBTs
AN-948 application note - Linear Power Amplifier Using Complementary HEXFET Power MOSFETs
... And many pages from ESP
The above article is a contribution from Mitch Hodges, and ESP has not verified all aspects of the design process described. While the circuit can be (and has been) simulated quite readily with good results, this is no guarantee that everything will work as expected. I added diodes and zeners to protect the MOSFET gates from excessive voltage. It may be possible to select the zeners to achieve basic current limiting, giving the amp some protection from overload conditions. Because of the high gain of HEXFETs, this simple protection scheme will not be particularly effective. Also, remember that a series inductor may also be required.

It will be noted that there are no component values supplied - this is quite deliberate, and is not an omission. The article is intended to describe the design process and how to work around the inherent non-linearity of vertical MOSFETs, and is not intended to be a construction project. Requests for component values will not be fulfilled.
It must be understood that at the suggested current (250mA per MOSFET pair), the total quiescent current will be in the order of 1.25A - at a typical supply voltage of perhaps ±50V, this represents a total quiescent dissipation of 125W! This is a formidable amount of heat to dispose of, and will require very large heatsinks (and/or forced-air cooling). It is probable that the constructor will be forced to compromise, using a significantly lower quiescent current than suggested just to maintain a sensible heatsink size and temperature. Reducing your expectations of the maximum frequency that needs to be passed at full power will reduce the loading on the Class-AB drivers, but does nothing for the MOSFETs' low current linearity. Compromise will be almost essential (IMO).
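The dissipation figure quoted is easily checked. As an illustrative sketch (Python; the five output pairs are implied by the 1.25A total, and ±50V supplies are assumed as in the text):

```python
def quiescent_dissipation(pairs, bias_per_pair_a, rail_v):
    """Idle dissipation of a push-pull output stage: each pair carries its
    bias current across the full supply span (positive plus negative rail)."""
    total_iq = pairs * bias_per_pair_a
    return total_iq * 2 * rail_v

print(quiescent_dissipation(5, 0.250, 50))  # 125.0 (watts) - hence the very large heatsinks
```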
Finally, I'd like to thank Mitch for his contribution, since it describes the issues and how to solve them in an easy to follow manner, keeping complexity to the absolute minimum in the final design example.

Rod Elliott
I have had occasion to build a HEXFET based power amp as a test for a client. While it worked well enough, giving the expected power output and with fairly low distortion, as noted above the required bias current is quite high to reduce crossover distortion to an acceptable figure. While the circuit I built was actually quasi-complementary (using only N-Channel MOSFETs), the basic principles apply regardless.
As it transpires, the design I was looking at was unsuitable for the intended purpose, because the quiescent current needed to remove crossover distortion was too high to be practical. In many cases, the lowest heat output possible is highly desirable, and HEXFETs are simply unsuited to any application where very low Iq is desirable or necessary. Lateral MOSFETs would be fine, but are too expensive for my customer's application (in case anyone was wondering).
Bias stability is definitely an issue, as discussed above. It is commonly (and erroneously) stated that MOSFETs are 'safe' because they have a positive temperature coefficient, so as a device gets hotter, its drain-source on-resistance (RDS(on)) increases. This much is true, but this alleged 'benefit' is actually completely useless in a linear circuit. (It can also cause major problems in switching circuits, but that's another topic altogether and will not be covered here.)
What is not commonly noted is that all MOSFET devices have a fairly high negative temperature coefficient for the gate-source threshold voltage (Vth). At the gate-source voltages needed to obtain typical bias currents, even a small temperature increase causes a large drain-source current increase, so the use of a carefully designed bias servo (Q5, R5 and R6 in the Figure 4 schematic) is absolutely essential. This point is made above, but is sufficiently important that repetition will not go astray.
To illustrate this, Figure 5 shows the graph from the IRF540 data sheet (colour coded here to make identification of each curve easier), and although it does not continue down to the levels we are interested in, the trend is clearly visible. At a VGS of 4.5V, we see ID of around 12A at Tj = 25°C, rising to a little over 20A at Tj = 175°C. While the graph might seem to indicate that the effect will be greatly exaggerated at lower gate voltages and drain-source currents, the initial tests that I did indicate that the effect is roughly similar. The use of source resistors to help force current sharing is essential, and these should be as high as practicable. While 0.1 ohm is common for BJT amplifiers, I would recommend values of not less than 0.47 ohm for HEXFETs. Higher values provide more stable quiescent current with temperature variations. For example, with 1 ohm source resistors, the current can increase by a maximum of 1mA/mV, or 1A/V. Should Vth fall by 100mV, Iq can only increase by 100mA. This eases the design of the bias servo.
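The source-resistor argument above fits in one line: in the worst case (very high transconductance), any fall in Vth appears across the source resistor, so the current rise is bounded by ΔVth / RS. An illustrative sketch (Python):

```python
def max_iq_rise(delta_vth_v, r_source_ohm):
    """Worst-case quiescent current rise when the threshold voltage falls
    by delta_vth_v, with the extra gate drive dropped across the source
    resistor (infinite-transconductance limit)."""
    return delta_vth_v / r_source_ohm

print(max_iq_rise(0.100, 1.0))   # 0.1 A with 1 ohm source resistors, as stated above
print(max_iq_rise(0.100, 0.47))  # ~0.21 A with the suggested 0.47 ohm minimum
```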
Figure 5 - Temperature Coefficient, VGS (IRF540)
The test I ran was very simple. I applied a suitable voltage to the drain, then carefully adjusted the gate voltage until a suitable measurement current was drawn. The current started at a relatively low value (around 1A in my test), and as the device heated up this was seen to rise. It stabilised fairly quickly because the heatsink prevented further (possibly damaging) temperature rise, but with two MOSFETs in parallel, the current between them was different, and (more importantly) it remained different, even as they became hotter. The claims for better (and 'automatic') current sharing apply only to devices operated as switches, or where the two curves shown in Figure 5 cross over each other. Only lateral MOSFETs provide a crossover point on their transfer characteristics that is low enough for linear operation.
Something that is missing from nearly all MOSFET data sheets I have looked at is gate threshold voltage vs. temperature, although it is shown in the data I have for the IRF840. This shows that the threshold voltage falls as Tj increases - a negative temperature coefficient. The positive coefficient of RDS(on) is insignificant at the current levels needed for setting quiescent current accurately. Note that the two curves do cross over, but the temperature curves cross at a drain current of over 40A with a gate-source voltage of 5.5V - this is not useful. Lateral MOSFETs (as used in P101) have exactly the same issue, but the curve changes from negative to positive at a much lower current (around 100mA), and this is visible on the transfer characteristic graph (but you will need to look for it carefully - it is not specified in a useful manner IMO).
In an application note [ 1 ], OnSemi describe this transition as the 'inflection' point, and it is determined by VGS, although it appears to be related more to the drain current than gate voltage. However, the two are directly related, so the point is moot. Lateral MOSFETs are usually quite safe here, because the inflection point is at such a low voltage and current, but vertical MOSFETs (HEXFETs and similar) are prone to thermal runaway. There is also the possibility of a failure mode very similar to second breakdown when HEXFETs are used in linear circuits, where VGS is usually (well) below the inflection point. This must remain a very good reason to stay clear of these devices for audio, unless you are fully aware of the potential risks, and how to avoid them. Note that the article above does not address this potential failure mode (nor do many others), and your only choice is to find MOSFETs where the thermal 'changeover' occurs at the lowest possible drain current. Lateral devices are almost unbeatable in this respect.
A careful examination shows that the 'inflection' point is actually the region where the negative temperature coefficient of VGS exactly compensates for the positive temperature coefficient of RDS(on). That's the reason it's at such a high current for HEXFETs (because of the usually low RDS(on) value), and it is known that the inflection point is inversely proportional to RDS(on). Note that this only applies if the device is used in linear mode. When switching, Vth and its temperature coefficient is not relevant because the gate voltage is invariably selected to provide 'hard' switching to minimise losses.
In contrast, a typical lateral MOSFET (such as the 2SK1058) has an RDS(on) of around 1.5 to 1.7 ohms at ~10A, compared to an IRF540N with 0.044 ohms at 16A. When choosing HEXFETs for use in linear circuits, I suggest that you use devices with the highest value of RDS(on) that you can find. This is entirely counter-intuitive, and is almost certainly the exact opposite of what you would expect. RDS(on) is actually meaningless in a linear application until the amp starts to clip, and even then only reduces the maximum output power slightly. As a figure of merit, it only has meaning for switching applications.
Figure 6 - Normalised Vth Vs. Junction Temperature
Advanced Power Technology's application note [ 2 ] provides the graph shown in Figure 6. Normalising simply means that everything is referred back to a reference of unity, so simply multiply the claimed Vth by the figure shown along the left side, for the temperature at which your device will operate. If your MOSFET will undergo a temperature change from 25°C to 100°C, then Vth will fall to 0.83 of the ambient temperature value at 100°C. This application note also mentions the possibility of a failure mode similar to second breakdown when operating switching MOSFETs as linear amplifiers. The app note refers to this failure mode as 'hot-spotting' or 'current tunnelling', but it's very similar to traditional second breakdown. Indeed, two of the articles listed below refer to the fact that using a HEXFET much bigger than needed (to provide a safety margin) has exactly the opposite effect. Rather than increasing the safety margin, the larger device is more likely to fail if it is working well below the 'inflection' current in a linear circuit.
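Reading a normalised graph is just a multiplication. A sketch (Python; the 4.0V threshold is a made-up example value, not taken from any data sheet):

```python
def vth_at_temperature(vth_25c_v, normalised_factor):
    """Scale a 25 degC threshold voltage by the normalised factor read
    from a graph such as Figure 6 at the operating junction temperature."""
    return vth_25c_v * normalised_factor

# Hypothetical device with Vth = 4.0 V at 25 degC, run at 100 degC (factor 0.83):
print(vth_at_temperature(4.0, 0.83))  # 3.32 V - a 680 mV drop in threshold voltage
```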
Do not be misled by claims that MOSFETs are immune from thermal runaway, although lateral MOSFETs are much better in this respect than vertical MOSFETs (HEXFETs and similar high-gain switching devices). Based on the above, it is quite apparent that vertical MOSFETs can easily get into thermal runaway if the bias servo is not set up correctly. Using just a pot (as shown in P101) is absolutely forbidden with vertical FETs - a matched bias servo thermally coupled to the MOSFET heatsink is essential to prevent both thermal runaway and crossover distortion.
Further searching revealed a document from Solid State Optronics [ 3 ], where the temperature coefficient for VGS is said to be -1.5mV/°C (the above chart shows it as 1mV/°C for VGS of 4.5V at 25°C). It is claimed to be 'insignificant', and for switching applications this is true. It is definitely not insignificant for a linear circuit (as shown in Figure 4), and especially so because of the relatively high transconductance of vertical MOSFETs. A few (tens of) millivolts of gate voltage is the difference between acceptable quiescent current and overheating.
What of the second breakdown effect that the manufacturers deny even happens other than in the (very) fine print? HEXFETs, and indeed most other MOSFET devices, are made using a multiplicity of individual small MOSFETs called cells. If the device as a whole exhibits a negative temperature coefficient for VGS, so must each internal cell. If one cell has a slightly lower VGS than the others (perhaps due to microscopic variations in the silicon) it will take more of the total current. This will cause it to get hotter, so the threshold voltage will fall further and it will then draw more current, causing it to get still hotter. This process continues until the cell fails due to over temperature, at which point the MOSFET suffers catastrophic failure.
While this scenario is not common in switching applications, if the MOSFET is used linearly it is very real, and has caused problems in the past. It will continue to cause problems if designers are unaware that this failure mode even exists - after all, most comments seen describe MOSFETs as almost indestructible. While this is true to an extent, it is obvious that it is not a general rule upon which one should rely in all circumstances. HEXFETs operated in linear mode need to be derated from the claimed maximum dissipation, and my suggestion is that a maximum of half the rated power dissipation is reasonable. Likewise, the peak current should also be much lower than the rated maximum.
Now you know why International Rectifier and other vertical MOSFET manufacturers don't recommend HEXFETs or their equivalent for linear applications - they are simply not designed for the purpose. Yes, they most certainly will work, but you must be aware of the limitations. I suggest that high voltage, relatively low current devices are preferable to the reverse, as they will have an inherently higher RDS(on), and therefore a lower inflection point. Alternatively, you can look for 'vertical' MOSFETs that are specifically designed for linear operation. They exist, but are probably considered 'exotic' by most suppliers. An example is the IXYS IXTK22N100L (with 'extended' FBSOA), but at over AU$70 each (one-off price) when I looked, most people will think that's a bit rich.
Toshiba used to make vertical MOSFETs that were designated as being suitable for audio amplifiers. The 2SK1530 and 2SJ201 were basically switching devices, but with a comparatively high (but unspecified) RDS(on) of between 0.3 and 0.6 ohms. I've tested an amplifier that used them, and they perform well enough, but it's doubtful that there was really any advantage over using bipolar output transistors (other than cost). These devices are now obsolete, but they were classified for VGS(off), with '0' classed devices having a threshold voltage of 0.8-1.6V, and 'Y' class with 1.4-2.8V. This was unique amongst MOSFETs, and I don't know of any other MOSFET type that has provided a degree of 'pre-matching' when you buy them.
If you intend to use vertical MOSFETs for any linear application, you need to be aware that the published SOA curves do not apply to linear operation. Feel free to read that again to make sure that you understand the ramifications.
There's little or nothing in the datasheets to warn you, and many data sheets even show the SOA for DC. A few careful calculations will show that there is no way that the MOSFET can be operated at full rated power while keeping the die temperature below the absolute maximum (typically 175°C). The only way that you can be assured of safety is to keep the peak dissipation well below the claimed maximum, thus minimising the die temperature. This is a minefield, and I suggest that you tread very carefully. See the IR application note for more detailed information [ 4 ].
Interestingly, some of the earliest MOSFET fabrication processes are less likely to fail if used in linear mode, because they have a comparatively high RDS(on). However, most of the early types are obsolete, and their nominal replacements are 'better' in that RDS(on) is lower than the previous version(s). While this is good for switching (reduced losses), it also ensures that they are less suited to linear operation.
It's worth looking at RDS(on) for lateral MOSFETs as a point of comparison. The figure isn't quoted, but it can be calculated from the drain-source saturation voltage figure provided. VDS(sat) will normally be in the order of 12V (gate shorted to drain) at a current of 7-8 amps, so RDS(on) is in the order of 1.5-1.7 ohms! Compare that with the figures you see for HEXFETs - the IRFP240 has an RDS(on) of 0.18 ohm, but it's also important to understand that completely different test methods are used, so a direct comparison isn't as easy as it seems.
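That back-of-envelope calculation is simply V/I. A sketch (Python, using the mid-range figures from the paragraph above):

```python
def rds_on_from_saturation(vds_sat_v, drain_current_a):
    """Estimate on-resistance from a data sheet's drain-source
    saturation voltage at the stated test current (Ohm's law)."""
    return vds_sat_v / drain_current_a

print(rds_on_from_saturation(12.0, 7.5))  # 1.6 (ohms) - mid-way in the 1.5-1.7 range quoted
```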
Another specification you can look at is 'forward transfer admittance' ( |Yfs| ), aka forward transconductance, in siemens. Lateral MOSFETs have a transconductance of 1.7-2.0S, while even most high-RDS(on) vertical MOSFETs/HEXFETs have a transconductance of at least 5S, and the latest models are much higher. Early MOSFETs were much easier to use in linear circuits than new ones, because they had lower transconductance and higher RDS(on).
One thing that is a dead giveaway as to whether a MOSFET is lateral or vertical is to compare the pinouts. Lateral MOSFETs have the source pin in the centre, while vertical types invariably have the centre pin as the drain. Many fake lateral MOSFETs are re-badged vertical devices (typically HEXFETs), so source and drain are in the wrong places and the amp will provide close to a dead short across the power supply via the intrinsic body diodes.
Figure 7 - Vertical Vs. Lateral MOSFET Pinouts
As far as I'm aware, there are no exceptions to the above. There are no lateral MOSFETs in the TO-220 case, and all true lateral high-power MOSFETs use the TO-247 case or a small variation thereof (TO-3 lateral MOSFETs may also be available, depending on supplier). It's worth noting that the TO-220 package is useless (regardless of what's inside) for getting rid of any more than about 20W of heat, unless extraordinary care is taken with mounting.
As should now be quite obvious, it's very hard to recommend using HEXFETs in an audio amplifier. Mitch has made a compelling case, but much has changed since the article was written, and there is also more information available. This won't stop people from trying, because HEXFETs are cheap compared to lateral MOSFETs and high power bipolar transistors, and seem to offer many advantages. As is now clear (I hope), most of the 'benefits' are an illusion, and can easily lead to tears if the warnings here aren't heeded.
1 - On Semiconductor - AND8199
2 - Advanced Power Technology - AN0002 - no longer found at the source, but now available from the ESP site
3 - Solid State Optronics - Application Note 50
4 - International Rectifier - AN1155
Elliott Sound Products - High Impedance Input Stages / Project 161
High impedance inputs are commonly needed for capacitor (aka 'condenser') microphone capsules, piezo vibration sensors and also for biological monitoring systems such as ECG and the like. In many cases, the impedance only needs to be perhaps 10MΩ or so, but if you have a sensor that has a capacitance of (say) 250pF or so and you need to monitor to 1Hz, then the impedance has to be very high indeed.
The circuit described here is primarily intended for use with piezo sensors, as typically used for vibration, noise, and other physical phenomena that involve movement. These sensors are often used in geophones, seismometers, hydrophones and accelerometers, combining relatively high output levels and a wide response range. However, they are unable to provide a static output because they are capacitive, so only AC signals can be passed. There's no reason that the techniques described can't be used anywhere that a high impedance input is necessary. The circuit is not intended for DC applications, as that requires careful DC offset adjustment, something that has not been provided for.
It's expected that the most common use for a very high impedance preamp such as this will be along with a PIC, Arduino or similar digitiser, used to detect vibration or low frequency noise. The circuits described were developed for testing ground-borne low frequency vibration caused by nearby industrial activity. There are many applications for vibration monitors, and they are a fairly popular topic on forum sites. Another use is to build a piezo accelerometer that can be used to detect panel vibrations in a speaker box (I have one that's been in use for many years, but it's a different design).
Although the circuit shown here will work with a capacitor ('condenser') microphone, you'd need to add the high voltage power supply and a high value resistor to polarise the capsule. Note that the circuits here are not intended to be used with pre-polarised (electret) mic capsules, because they already include an internal FET impedance converter. A typical piezo-electric (PE) sensor that would be used with this circuit is shown below.
Figure 1 - MiniSense 100 Piezo Accelerometer/ g-Sensor
The sensor shown above has a capacitance of only 244pF, so to get response down to 1Hz you need to use a 1GΩ input bias resistor. You can buy 1GΩ resistors (1,000Meg) for less than $2.00 (at the time of writing), but the amplifier itself must have a very high input impedance and extremely low input current, or it will load the sensor and you won't get any useful output at low frequencies. If an input stage has an input current of just 0.1µA, that creates (or attempts to create) a voltage of 100V across a 1GΩ resistor, so you must use a device with less than 1nA (1V across 1GΩ) input current. Many FET input opamps are specified for input bias currents in the pA (picoamp) range - 65pA is typical for a TL072, for example, and that will cause an input offset of 65mV.
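The loading arithmetic above is just Ohm's law. A short illustrative sketch (Python, using the figures from the text):

```python
def offset_voltage(bias_current_a, bias_resistor_ohm):
    """DC offset that an amplifier's input (bias) current develops
    across the input bias resistor - plain Ohm's law."""
    return bias_current_a * bias_resistor_ohm

R_BIAS = 1e9  # 1 G ohm bias resistor, as used in the text

print(offset_voltage(100e-9, R_BIAS))  # ~100 V - hopeless with a 0.1 uA input current
print(offset_voltage(65e-12, R_BIAS))  # ~0.065 V (65 mV), the TL072 figure quoted
```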
Bipolar transistors are not really useful because it's extremely difficult to keep the input current low enough, although it is possible to use them with careful design. The design effort isn't worthwhile though, because there are much easier ways to get sub 1nA input currents. A FET (or small-signal MOSFET) needs almost no input current at all and the design is a great deal easier. However, it may not be that simple, because all active devices have some input capacitance, and that needs to be considered as well.
There is another common circuit that's used with capacitive sensors, and that's called a charge amplifier. When used with low capacitance sensors they typically need an extremely high resistance, and the resistance is increased further if gain is needed. I don't intend to cover charge amps here because they are a rather specialised case, and don't have many particularly desirable features compared to more conventional techniques (especially if you need gain from the input amplifier). However, a charge amp is not affected by the cable or other stray capacitance because its input is a very low impedance. They are certainly interesting, but are harder to implement than the circuits described. There's plenty of information available for anyone who wants to build a charge amplifier, and a web search will provide many examples.
This article discusses the various options that can be used, and the circuit shown in Figure 9 is the only one that should be constructed if you need a high-impedance test amplifier. Feel free to experiment though, as that's the great joy of DIY electronics. There's always more that you can learn, and the ideas covered here are interesting to play with.
The final circuit (Figure 11) is different - it's a 'charge amplifier', which has the distinct advantage of having a very low input impedance. While not useful as a general-purpose test amplifier, if you do need to condition the output of a piezo transducer, a charge amp offers a significant advantage. Because of the low input impedance, it's immune from cable capacitance and, more importantly (and usefully) it picks up almost no hum!
Noise is a potential problem - high impedances cause thermal noise in resistors that can be well above the level you are trying to amplify. See Noise In Audio Amplifiers for more info about noise, how it's measured and calculated. Voltage noise is worked out by ...
VR = √( 4k × T × R × f )

Where ...
VR = resistor's noise voltage
k = Boltzmann's constant (1.38E-23)
T = Absolute temperature (Kelvin)
R = Resistance in ohms
f = Noise bandwidth in Hertz
In high impedance circuits, noise current becomes the dominant problem.
IR = √( 4k × T × f / R )

Where IR = resistor's noise current
Noise voltage and current can be worked out so you know just how much noise will be created by the input resistor alone. A 1GΩ resistor has a noise voltage of 57µV at 27°C (roughly 4µV/√Hz), and the corresponding noise current is 57fA (you can also use Ohm's law to determine current from voltage or vice versa). Other noise sources include 1/f ('flicker') noise and 'shot' noise, both of which are developed in active devices. This article is not going to attempt to cover extremely low noise applications, because they invariably require exotic parts that may be difficult to obtain. Instead, a 'utility' amplifier will be the end result, one that will satisfy most common requirements.
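The two formulas above are easy to evaluate. A sketch (Python) for the 1GΩ case, expressed as spectral densities (per √Hz, i.e. with f = 1Hz):

```python
import math

K_BOLTZMANN = 1.38e-23  # J/K, as given in the formula above

def resistor_noise_voltage(r_ohm, bandwidth_hz, temp_k=300.0):
    """Thermal (Johnson) noise voltage: VR = sqrt(4 k T R f)."""
    return math.sqrt(4 * K_BOLTZMANN * temp_k * r_ohm * bandwidth_hz)

def resistor_noise_current(r_ohm, bandwidth_hz, temp_k=300.0):
    """Thermal noise current: IR = sqrt(4 k T f / R)."""
    return math.sqrt(4 * K_BOLTZMANN * temp_k * bandwidth_hz / r_ohm)

R = 1e9  # 1 G ohm input bias resistor
print(resistor_noise_voltage(R, 1.0))  # ~4.07e-06: about 4 uV/sqrt(Hz), as quoted above
print(resistor_noise_current(R, 1.0))  # ~4.07e-15: about 4 fA/sqrt(Hz)
```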
High impedance solutions that may initially seem perfectly reasonable can have some very unexpected consequences, so it's essential to understand the various methods that can be used, and how they react in a real circuit. In general, JFETs (junction field effect transistors) are suitable for high impedance applications, but you have to choose the device carefully. Small signal MOSFETs can also be used (e.g. 2N7000), but they are usually much noisier than JFETs and may also have a higher input capacitance. That requires 'interesting' additional circuitry to get the full frequency range.
When FETs or MOSFETs are used, they will generally be connected as simple source followers. This helps to reduce the effects of input capacitance, but with no gain there is a noise penalty because the first stage of any amplifier generally determines its noise figure. Using JFETs in a cascode connection provides gain without any effective increase in the input capacitance, but that is highly dependent on the FET characteristics. Common FETs have an extremely wide parameter spread with standard production devices, making the design a challenge. Unless there's a good reason not to do so, consider using a FET input opamp, as this makes everything far more predictable.
However, the noise level from common FET input opamps is usually fairly high, and this has to be considered. Depending on the level you need to amplify, you may find that noise is intrusive, but piezo sensors are capacitive, and their capacitance helps to roll off the noise above the frequency determined by the input resistor and the sensor's capacitance (which includes the capacitance of the cable between the sensor and amplifier).
Hum (50 or 60Hz) is a real problem, and all high impedance circuits are very sensitive to electrostatic hum fields. Without shielding, you'll almost certainly find that the hum picked up is far greater than the signal. The capacitance of the sensor (and cable) helps again, because it will roll off hum just as well as it will resistor and opamp noise. The entire circuit needs to be in an earthed metal case, and ideally so does the sensor itself.
Hum loops will not be created provided there is no secondary electrical connection between the sensor housing and the preamp. If the sensor is to be buried, consider using an outer plastic housing so that the sensor's shield doesn't make direct contact with damp soil or other conductive materials. You can use a plastic case and line it with aluminium or copper foil if you prefer.
The term "bootstrap" is applied to several completely different circuit topologies, and they are not equivalent. I've shown a bootstrapped current source in many of the audio amplifiers shown on the ESP site, and a different form of bootstrapping is used in many 'high-side' MOSFET driver ICs. As used here, the term applies to input stages, where the bootstrap circuit is used to increase the apparent value of a bias resistor.
The concept of bootstrapped input stages is both well known and commonly applied, and the Designing With Opamps series shows a basic bootstrapped input stage intended to obtain a very high input impedance (albeit in its most simplistic form). While the use of a bootstrap circuit seems very appealing at first glance, there are several problems that aren't immediately apparent, and are rarely discussed.
A bootstrapped input circuit uses positive feedback to make the input bias resistor 'disappear'. The general scheme is shown below, and I've assumed an input capacitance of 250pF, as may be typical of a low capacitance piezo sensor like that shown in Figure 1. If we want the low frequency limit to be no greater than 1Hz, the impedance seen by the sensor can be no less than ...

Z = 1 / ( 2π × 250p × 1 )
Z = 637 Megohms
This is such a high resistance that it requires special techniques to achieve it. It's not just the bias resistor that has to be considered, but the input impedance of the amplifying stage has to be such that it doesn't reduce the impedance. The general 'rule of thumb' is that the amplifier stage should have an input impedance of not less than 10 times the bias resistor - 6GΩ is required.
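The impedance figure above is the standard Z = 1 / (2πfC) relationship. As an illustrative sketch (Python):

```python
import math

def min_source_impedance(c_farads, f_low_hz):
    """Minimum load impedance that lets a capacitive source reach
    down to f_low_hz, from Z = 1 / (2 pi f C)."""
    return 1 / (2 * math.pi * f_low_hz * c_farads)

print(min_source_impedance(250e-12, 1.0))   # ~6.37e8: the ~637 M ohm figure quoted
print(min_source_impedance(250e-12, 10.0))  # ~6.37e7: ten times less for a 10 Hz limit
```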
A higher impedance won't hurt at all, and you might consider using a TL072 or LF353 opamp (Zin of 1TΩ for both). In reality, response is usually expected to 1Hz, but many piezo sensors have more than 250pF of capacitance (some being a great deal higher). The following examples will be based on a 250pF sensor for convenience. In the circuit below, the bootstrapping (via C1) boosts the input impedance to about 240MΩ, which isn't good enough to get 1Hz from a 250pF sensor. If you need the lowest possible noise, the OPA627 is one suggestion (although it's very expensive) - there are others, with wide price variations.
Figure 2 - High Impedance Preamp With Bootstrapped JFET Input Stage
Using a bootstrap circuit certainly achieves the high impedance required, but what a simple impedance measurement doesn't reveal is what happens to the frequency response. At a low frequency (determined by the capacitance of the source and the bootstrap capacitor) there can be a large response peak. The source capacitance includes cable capacitance and all stray capacitance, including the input capacitance of the FET or opamp used.
This has two side-effects, neither of which is desirable ...
The hidden issue that is rarely mentioned whenever bootstrapped input stages are discussed is the high-Q filter. The source capacitance and bootstrap capacitor form a filter that will have a high Q unless it's understood and accounted for. This becomes a problem if different sensors (or cables) are used, because as the source capacitance changes, so do the characteristics of the filter formed by the bootstrap connection. The filter is not easy to see in the stage shown above, but you will know it's there when you see the response rolling off at 12dB/octave, not 6dB/octave as you may have expected.
+ +Note that the 12dB/ octave rolloff doesn't apply to the Figure 2 circuit because the 10µF cap (C1) is much larger than needed, and over the visible response curve it's only 6dB/ octave. It changes to 12dB/ octave at around 50mHz which is almost impossible to measure, but is easily simulated. Depending on the JFET you use, C1 and C2 may need to be reversed.
+ +
Figure 3 - Bootstrapped Input Stage Response
The high Q filter issue generally doesn't arise if you use a FET source follower by itself as shown in Figure 2, because its gain is considerably less than unity. That helps to prevent a high-Q filter from being formed, but also limits the impedance that can be achieved. For example, if the gain is 0.95 that means that only 95% of the input voltage is reflected by the bootstrap circuit, so the effective impedance of the input resistor is lower than expected but it has no 'bad' habits.
As an example, we'll use a 10MΩ resistor and a JFET as shown in Figure 2. Let the instantaneous input voltage be 1V, and the output voltage 950mV (a gain of 0.95, which is actually a loss). The voltage across the input resistor is 50mV (1V - 950mV), so its impedance appears to be 200MΩ ...

  Vin = 1V
  Vout = 0.95V
  VR1 = Vin - Vout = 50mV
  R1 (apparent) = 10M × ( 1V / 50mV ) = 200M
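The arithmetic above generalises to any follower gain. As a quick sketch (Python, with function names of my own invention, not from the article):

```python
def apparent_resistance(r_bias, follower_gain):
    """Effective value of a bootstrapped bias resistor.

    Only the un-bootstrapped fraction (1 - gain) of the input voltage
    appears across the resistor, so it looks 1 / (1 - gain) times larger.
    """
    return r_bias / (1.0 - follower_gain)

# The JFET example from the text: 10M with a gain of 0.95 looks like 200M.
r_jfet = apparent_resistance(10e6, 0.95)

# The 2N7000 example: a gain of 0.993 pushes the same 10M past 1.4G,
# which is why the response peak becomes a real hazard.
r_mosfet = apparent_resistance(10e6, 0.993)
```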
However, it's important to understand that the exact same circuit using a 2N7000 small-signal MOSFET has a gain of 0.993 and it will create a large peak in the response unless C1 is made much larger (at least 100µF). The gain of a MOSFET used as a source follower is temperature dependent, so caution is advised to ensure that no peak is generated if the gain should increase. If a higher capacitance sensor is used in place of the design value, the peak comes back again, but at a lower frequency. An opamp is even worse, as described below ...
When an opamp is used as a voltage follower, the gain is much closer to unity. This means that there's almost no voltage at all across R1, so its impedance is raised by a factor of perhaps 1,000 or more and it virtually disappears. The positive feedback is so close to 100% that it's essential to include a resistor to prevent extremely high 'gain' (a response peak) at some low frequency. This extra resistor (R3) is shown in Figure 4, and without it there's a pronounced peak at 0.68Hz. As simulated using a TL072 opamp and a 250pF source, with R3 set to zero ohms there is a peak of almost 37dB. The circuit will be unstable, and you may never know why if you are unaware of this peculiar problem.
Figure 4 - Bootstrapped Input Using An Opamp
An opamp based bootstrap circuit is shown above, including the resistor (R3) added to reduce the amount of positive feedback. Without the resistance, instability is probable and you may find that you've built a low frequency oscillator instead of an amplifier. If it does just amplify, it will take some time to settle after power is applied - a test version that I built took over 60 seconds before it was stable, and that included R3. Note that there is no reason that R1 and R2 must be equal. That's merely an expectation, but as shown above (and below) it makes little difference provided C1 is sized appropriately.
Figure 5 - Low Frequency Peak Caused By Bootstrapping
The size and frequency of the peak both change with different source capacitance and source resistance, and the graph shows the response with a 250pF sensor (red) and a 1nF sensor (green). Most capacitive sensors have a low ESR (equivalent series resistance), so there's nothing to mitigate the peak. With R3 at its design value (1.8k), input impedance is over 650MΩ and there's (almost) no peak at all ... until the source capacitance changes. As noted earlier, this may simply be because you use a different cable. If the 'new' capacitance is 500pF, there's a 2.6dB peak at 0.86Hz, and a 1,000pF (1nF) source causes a peak of 5dB at 0.54Hz.
This is one of the biggest problems with the bootstrap circuit - there is interaction between the preamp and its source, and it will behave differently depending on the source resistance and capacitance. You can improve matters a little by increasing the value of the bootstrap capacitor, but that really only moves the problem to a lower frequency. You may not be able to measure it, but it's certainly there. For many applications this is unacceptable. Provided the sensor, cable and bootstrapped preamp are designed as one there will be no issues, but that can be very limiting.
There's another problem that's hiding as well - noise. The bootstrapped resistance is 10MΩ as shown, and the broad band voltage noise from the resistor alone will be about 58µV, calculated from the formula shown above. The resistor and source capacitance form a low pass filter, so we can work out the -3dB frequency for 250pF and 10MΩ - 63Hz. That means that if you want to measure anything below 63Hz, you'll get nearly all the noise, with only high frequency noise filtered out. Typical piezo geophones are useful between 1Hz and around 40Hz, so you'll get the majority of the resistor noise along with the signal.
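The two figures quoted (roughly 58µV of noise and a 63Hz corner) follow directly from the standard formulas. A small sketch, assuming a 20kHz noise bandwidth and 25°C (the function names are mine, not from the article):

```python
import math

K_BOLTZMANN = 1.380649e-23  # Boltzmann's constant, J/K

def thermal_noise_v(r_ohms, bandwidth_hz, temp_k=298.0):
    """RMS Johnson (thermal) noise of a resistor: sqrt(4 k T R B)."""
    return math.sqrt(4.0 * K_BOLTZMANN * temp_k * r_ohms * bandwidth_hz)

def rc_corner_hz(r_ohms, c_farads):
    """-3dB frequency of a single-pole RC filter: 1 / (2 pi R C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

noise_10m = thermal_noise_v(10e6, 20e3)   # ~58 uV for 10M over 20kHz
corner = rc_corner_hz(10e6, 250e-12)      # ~63 Hz for 10M and 250pF
noise_1g = thermal_noise_v(1e9, 20e3)     # ~575 uV for a 1G resistor
```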
The above is not to say that you shouldn't use a bootstrap circuit, because they are inherently useful. If you only desire a modest impedance boost (no more than 10 times), you can use a 10MΩ resistor to obtain an input impedance of 100MΩ easily. While it's still possible to get an unwanted response peak, it's far less likely, and the amplitude will be within acceptable limits for a wide range of sensor capacitance. To get a ten-fold (near enough) increase of the value of R1 in Figure 4, simply make R3 one tenth the value of R2. The maximum bootstrap voltage is 0.909 (referred to the input voltage), so R1 appears to be 11 times its real value. Then again, 100MΩ resistors are readily available and inexpensive, resulting in a simpler circuit.
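The ×11 figure comes from the divider feeding the bootstrap capacitor. A sketch of that relationship (my own function, not from the article):

```python
def bootstrap_multiplier(r2, r3):
    """Impedance multiplication when the bootstrap signal is taken from
    the R2/R3 divider: gain = R2 / (R2 + R3), multiplier = 1 / (1 - gain)."""
    bootstrap_gain = r2 / (r2 + r3)
    return 1.0 / (1.0 - bootstrap_gain)

# R3 at one tenth of R2 gives a bootstrap voltage of 0.909 and a
# multiplier of 11, so a 10M bias resistor appears as 110M.
mult = bootstrap_multiplier(10e3, 1e3)
```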
You can get 1GΩ resistors fairly easily, and they shouldn't break the bank. You don't need very high voltage types (they can be seriously expensive), and the ones I used for testing cost less than AU$2.00 each. This is a very simplistic approach, but this really is an application where simplicity gives the most consistent results. Obviously the opamp (or FET) needs to be carefully selected for the task, but this should not be a problem.

There are many benefits to using a high resistance, and simplicity is but one. The circuit is inherently well behaved, and does nothing that can come as a surprise. When power is applied, it settles almost instantly and is ready for use, and the low frequency rolloff is absolutely predictable, being based only on the sensor's capacitance and the input bias resistor value used. If a calculation is done for noise, broad band noise (up to 20kHz) looks pretty bad - 575µV is rather a lot of noise signal. However, all is not what it may seem.

The use of a high resistance has a hidden benefit. Even with the hypothetical 250pF sensor that's been assumed here, the noise will be rolled off above 0.64Hz, since the input resistor and sensor capacitance form a low pass filter. It's not a wonderful filter by any means, but the majority of the noise from R1 will not cause any issues.
Figure 6 - High Impedance Preamp using 1GΩ Resistor
The circuit is completely conventional, but note the polarity of the electrolytic capacitors. I used a TL072 for testing, and the output will be negative (by around 60-100mV) with a 1GΩ input resistor. Since I included a gain of 10 (close enough), that would become -0.6 to -1V if the feedback path used DC coupling. A 1,000µF cap allows response to 6.6mHz (0.0066Hz), well below the frequency set by the sensor and R1 (0.64Hz for a 250pF sensor). The gain can be increased by reducing the value of R2, and the 1,000µF cap used for C1 allows for a gain of up to 100 (R2 = 2.4k), with a -3dB frequency of 0.066Hz.
It's worth noting that all capacitor ('condenser') microphones use high value resistors - I've not seen one schematic that shows a bootstrapped input stage. The resistance you use will depend on the capacitance of the sensor and the minimum frequency needed. The sensor shown in Figure 1 has a capacitance of around 250pF (actually 244pF), and if you need to get down to 1Hz then you need a 1GΩ resistor. Other sensors can have a great deal more capacitance - a simple piezo disc can have a capacitance of over 10nF, so the resistance needed is much less. Somewhere around 20MΩ is fine, and that gets to 0.8Hz (-3dB). A pair of 10MΩ resistors in series will be fine for a 10nF sensor, and no special construction techniques are necessary because of the comparatively low resistance. Not that 20MΩ is low, but compared to 1GΩ it is.
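Choosing the bias resistor for a given sensor is the same RC calculation in reverse. A sketch (the helper function is an assumption of mine, not from the article):

```python
import math

def bias_resistor_for_cutoff(c_sensor_f, f_low_hz):
    """Bias resistor that puts the -3dB low-frequency corner at f_low_hz
    with the given sensor capacitance: R = 1 / (2 pi f C)."""
    return 1.0 / (2.0 * math.pi * f_low_hz * c_sensor_f)

# 250pF down to 1Hz needs ~640M (hence the 1G choice in the text),
# while a 10nF piezo disc needs only ~20M for 0.8Hz.
r_250p = bias_resistor_for_cutoff(250e-12, 1.0)
r_10n = bias_resistor_for_cutoff(10e-9, 0.8)
```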
With any very high impedance circuit, even small traces of contaminants will cause leakage that reduces the input impedance. One technique that's often used is a 'guard track' that surrounds the input components and is held at the same potential - bootstrapping again. This is fine for production, but for one-offs or small quantities where a PCB isn't warranted, it's almost impossible to achieve. No special precautions are needed with bias resistors of 10-20MΩ, but they are essential with the 1GΩ input stage.
The alternate method is to 'sky-hook' the input components - literally joining them in mid-air. If the FET gate pin or opamp's non-inverting input pin is bent up so it doesn't pass through the prototype board, the input resistor, protection diodes and input lead are simply soldered to the 'floating' pin with no contact to anything else. After soldering, the solder joint and FET (or opamp) must be thoroughly cleaned so there is no trace of flux or skin oils on the insulating surfaces.

If you can get PTFE (Teflon) stand-off insulators you can use one of them to support the connections, but if you sky-hook carefully there should be no need for additional support. Needless to say, if you intend to attach a heavy coaxial cable to the device pin then you will need some extra reinforcement, or the cable will eventually break off the FET or opamp pin when it's moved around.
Figure 7 - Photo Of Sky-Hooked Input Stage
You can see how the input parts are mounted in the photo. Everything that runs at the maximum input impedance of 1GΩ is separated from the Veroboard type PCB. The input current limiting resistor (R1) is mounted in mid-air from the connector. If there is a greater distance than allowed by the resistor leads, the resistor and added wire should be protected with heatshrink tubing and kept away from other parts of the circuit and the case. When you have an impedance of 1GΩ, every precaution has to be taken against leakage.
High capacitance sensors (> 10nF) are much easier to deal with, because the resistance is so much lower. Normal prototype techniques will usually be quite ok when the bias resistor is no more than 20MΩ, but it's still necessary to ensure that the board is thoroughly cleaned after soldering. Many types of flux become conductive if they absorb any moisture, and that may alter the performance of the circuit.

Not shown in the photo above is an electrostatic shield that separates the high impedance input from the output. Even though the output is some distance from the input, at high gains the preamp will oscillate because the input impedance is so high that it can pick up some of the output from nearly 20mm away. Expecting perfect behaviour is simply not possible, because the circuit is so sensitive that it's hard to use it with an open circuit input (which isn't useful anyway). I built mine to have gain switchable from x1, x10 and x100, and at the highest gain it's now stable after I added the shield.

This is another can of worms. I tested some 1N4148 diodes to measure their leakage and determine their effective resistance. With 15V across a reverse-biased diode, I measured a current of 1nA. Normally we wouldn't be at all concerned with such a low current, but if the equivalent resistance of the diode is worked out based on the current measured, you get 15GΩ, so the traditional pair of protection diodes will have a combined impedance of 7.5GΩ. This reduces the input impedance and introduces temperature dependence, because the diode resistance is in parallel with the input impedance.
I tested this, and was able to bias the input of a TL072 opamp using only a pair of 1N4148 diodes. The bias level was unstable because the diodes' leakage depends on temperature, and no two diodes will ever be equal. It's quite obvious that using diodes in the conventional way to protect the gate of a FET is not going to work very well.
You may well ask how I was able to measure such a low current without the use of very specialised test equipment. That's easy - I used a 5 digit bench voltmeter with an input impedance of 10MΩ in series with the diode and a 15V DC supply. The meter measured 0.01V, so the current is equal to 0.01V / 10M, or 1nA. The diode passed 1nA with 15V across it, so its leakage resistance is therefore 15V / 1nA = 15GΩ. (Note that this is highly temperature dependent, and the rated leakage current of a 1N4148 is 25nA at 25°C with a reverse voltage of 20V.)
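The measurement reduces to two divisions, sketched below (values from the text; the function is my own framing):

```python
def leakage_resistance(v_supply, v_meter, r_meter):
    """Effective leakage resistance of a reverse-biased diode, measured
    with a voltmeter of known input resistance in series with it."""
    i_leak = v_meter / r_meter   # the same current flows through diode and meter
    return v_supply / i_leak

r_diode = leakage_resistance(15.0, 0.01, 10e6)   # 15G for one 1N4148
r_pair = r_diode / 2.0                            # two diodes in parallel: 7.5G
```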
Fortunately, this is an area where using a bootstrap circuit will not cause a problem, so instead of bootstrapping the input resistors, we can bootstrap the protection diodes. Under normal operating conditions, the diodes will have very close to zero volts across them, so they can no longer cause a problem. The protection scheme is shown below.
Figure 8 - High Impedance Preamp With Protection Diodes And Output Buffer
R0 has been added to limit the worst case input current. D1 and D2 are bootstrapped from the feedback network, but that's perfectly alright because the voltages at pins 2 and 3 are almost identical. D3 and D4 will not cause distortion, because the signal voltage across them can never be high enough for them to conduct, but input spikes over ±5.7V will be clipped. The values of R2 and R3 give the circuit a gain of 10 (20dB), and the gain can be raised or lowered by varying R2 (don't reduce it below 2.2k or C1 will cause premature low frequency rolloff) ...
  Gain = R3 / R2 + 1
  Gain = 220k / 24k + 1 = 10.16 (20.14dB)
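For other gain settings, the same non-inverting formula applies. A quick sketch using the article's resistor designators:

```python
import math

def noninverting_gain(r3, r2):
    """Gain of the non-inverting stage (article's designators: R3 is the
    feedback resistor, R2 goes to the AC ground return)."""
    return r3 / r2 + 1.0

gain = noninverting_gain(220e3, 24e3)   # ~10.17
gain_db = 20.0 * math.log10(gain)       # ~20.1 dB
```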
The circuit shown has been tested, and performs exactly as expected. The DC offset caused by the TL072's minute input current is about -100mV, and the final DC blocking capacitor is necessary. Needless to say, the extremely high input impedance means that hum and/or other noise is picked up very easily, and the small sensor capacitance does little to reduce the 50/60Hz noise. Accordingly, the entire circuit must be in a fully shielded enclosure.
The complete schematic is shown below. It includes a buffer stage followed by an optional low pass filter (R6 and C3) that's been included to remove frequencies outside the range of interest. As shown it has a -3dB frequency of 21kHz (3.3nF) or 72Hz (1µF), but this can be changed to suit your needs. If the output is delivered directly to a PIC microcontroller or an analogue to digital converter (ADC), you may need to bias the output to +2.5V (assuming a 5V supply) so the ADC's input is centred. You need to determine whether the DC offset is needed from the datasheet for your digitiser.

The additional bias resistor is shown marked with '*', and it connects to the ADC's supply voltage (typically 5V or 3.3V). The DC blocking capacitor (C2) is required whether you apply a DC offset or not, because there will be some offset from the first opamp (U1A) when a 1GΩ input resistor is used. The -3dB frequency with the 100µF cap shown is 0.016Hz without the DC offset, or 0.032Hz if it's included. No additional filtering is needed because C2 will filter out any noise from the 5V supply. The low pass filter (R6, C3) gives an upper -3dB frequency of 21kHz (3.3nF) or 72Hz (1µF), and C3 can be omitted if no HF rolloff is needed. R6 can then be reduced to 100 ohms.
Figure 9 - Complete High Impedance Preamp
A switch is included to allow the gain to be switched from x1 to x10. When the switch is open, the opamp has full feedback and operates as a buffer. When closed, R2 is in circuit and increases the gain to x10. This is optional. Feel free to add another switch and resistor that can also be switched into circuit, increasing the gain to x100 (resistor values must be changed - see below). If both switches are closed the gain will be x102 (close enough).
The power supply depends on your application for the circuit. For intermittent use, a pair of 9V batteries will be perfect, but if the circuit needs to be powered continuously you will need to use either a battery pack with a 'smart' charger or a mains supply. The latter will need to be a linear supply in most cases, because switchmode plug-pack (wall wart) supplies are generally too noisy. Supply bypass capacitors are essential, and although I've shown 10µF (C4 and C5), you can use higher values if preferred. Adding parallel 100nF ceramic caps is not necessary with the TL072 or LF353, but they do no harm and can be included if you wish.

C4 is not optional if you have a switched gain of ×1 and ×10 (or ×100). It will reduce noise (and signal) above 20kHz with 330pF at a gain of 10. It can be increased if you include the output filter - you can use up to 100nF if the circuit is only to be used for low frequency measurements. For example, if you use the 1µF output cap (C3) and 39nF for C6, the upper -3dB frequency is 67Hz, with a 12dB/ octave rolloff. There will be some (unwanted) high frequency boost caused by the capacitance of the two zener diodes when the circuit is operated with a gain of unity or x10, and that is partly mitigated by including C4.

If you wish to use a single supply (12-18V for example), add a 1k resistor from each incoming supply to the common earth/ ground terminal, and increase C4 and C5 to at least 100µF. Because overall current drain is quite low, this simple voltage divider arrangement will work perfectly.

While the schematic shows the circuit with a switchable gain of ×1 (0dB) or ×10 (20dB), my prototype has gain that's switchable to ×1, ×11 and ×101. R3 is 100k, and I used 10k and 1k resistors for the R2 feedback resistor, with series switches to C1. With C2 at 1,000µF as shown, the response extends to 159mHz with 1k for R2, so low frequencies are not compromised. Remember to include C4!

While the circuit will happily provide a signal to a PC sound card, none has the low frequency response needed for measuring low frequency noise or vibration. It's expected that anyone building the circuit will already have decided on the method of recording or logging the output, and it's only the analogue part that causes any problems. This is now common, as many people have mastered the art of programming microcontrollers and capturing the data to a memory stick or PC hard drive. The analogue side of electronics is often considered deeply mysterious, and countless forum posts show that this is a real problem.

R1 will usually be selected based on the sensor you are using. With high capacitance sensors you will probably only need somewhere between 20MΩ and 100MΩ, with 1GΩ as shown only necessary for sensors with a capacitance below 1nF (1,000pF). A 10nF sensor with a 1GΩ load has a theoretical lower limit of 0.16Hz, and it will be very sensitive to thermal effects. Pyroelectric properties come free with most piezo ceramic materials, so a stable operating temperature is essential.

One way to make the preamp as 'universal' as possible is to build it with an input impedance of 1GΩ, and if you have other sensors you simply add the appropriate loading resistor directly in parallel with the sensor itself. This provides the maximum flexibility. For example, if you have a 15nF sensor that is intended to get down to 1Hz, simply wire a 10MΩ resistor in the same (shielded) box as the sensor, with the resistor in parallel with the sensor itself.

Experimentation is the key to getting the results you need from the sensors you want to use.
If you need a maximum gain of 100, you need to increase the feedback resistors by a factor of 5, so R3 will be 100k and R2 (x10) will be 11k, and R2 (×100) will be 1k. That arrangement is more sensitive to stray capacitance, and C4 has to be reduced to around 68pF. You can keep R3 as 22k, but then C1 will cause low frequency rolloff when R2 is only 240 ohms (gain of 100). There are many options of course - unity gain with perfect flatness is guaranteed if you use a switch to short R3, but beware of stray capacitance from the switch and its wiring to the input. It will prove almost impossible to minimise coupling between the two, which will cause instability (oscillation) with a gain of 100.
Figure 10 - Complete High Impedance Preamp With x1, x10 & x100 Gain
The version above shows the changes for a preamp with switchable gain up to ×100. Although shown with two SPST toggle switches, you can use a 'centre-off' toggle if you can find one with two latching 'on' positions, one either side of centre. These are commonly known as on-off-on. You can't use a switch where one or both positions off-centre are momentary. Capacitive coupling between the input and output must be minimised, and an electrostatic shield will be needed around the (sky hooked) input section. I have to leave this to the constructor, because it depends on the physical layout used.
Frequency response is ok - it's good for around 13kHz with a gain of 100, increasing to 23kHz with a gain of 10, but there is a high frequency rise on the x1 setting because C4 can't compete with the zener diodes' capacitance plus any stray capacitance from the Veroboard. This is unlikely to be a problem in real life, because most accelerometers and other piezo transducers don't have useful output at high frequencies. Consequently, you might decide to make C4 larger than 68pF, restricting the high frequency response. I'll leave this to those who build the circuit.
Figure 10A - Alternate High Impedance Preamp With x1, x10 & x100 Gain
The drawing above shows another way to increase the gain that also maximises the audio bandwidth. Each stage can be operated at a gain of unity or x10, and with both set for x10, the gain is 100 (actually 101.83 if we take it to the letter). You don't have to use 1.1k resistors (R2, R6), and with 1k the gain will be x11 or x121. This will rarely be a problem, as this isn't intended to be a precision test set. Resistor tolerance (1% metal film types are recommended) will have a small effect as well. Perfect x10 and x100 is possible, using 27k and 3k resistors for each feedback network.
From a document titled 'Bob Pease Lab Notes' (1989-1990) [ 8 ] there's a high impedance probe that's fairly specialised, but may come in handy. Unfortunately, the JFETs used are no longer available (2N5486 or 2N5485), and a simulation using J113 FETs gave roughly equivalent results. While the original info claims input impedance of 10¹¹Ω (100GΩ), this appears only to hold true at low frequencies, below 1Hz. I'm unable to test the claim, but simulation shows that Zin falls with increasing frequency. The simulator shows impedance to be above 100GΩ at frequencies below 2Hz. It was claimed that bandwidth extended to 90MHz, and while probably true, the simulated input impedance showed 10MΩ at 1MHz. The full bandwidth can't be realised unless the source impedance is low (6kΩ or less).
Figure 11 - Ultra-High Impedance Circuit (Bob Pease)
The circuit is shown exactly as Bob Pease published it, with the exception of R8 which prevents possible oscillation if the emitter-follower has a capacitive load. There is zero input protection, and the input signal must never exceed the ±15V supplies. While 3 × 10MΩ resistors are shown in series as an option, there's no reason that you can't use a 100MΩ or even 1GΩ resistor instead. Input capacitance is claimed to be 0.29pF, and in theory that would result in a -3dB frequency of 548Hz with a 1GΩ source impedance.
From the original document, it's hard to know exactly what Bob's intentions for the circuit were. He stated that it was optimised for input impedance and not frequency response, and unfortunately I don't know if the simulator is telling naughty fibs about the response with a high source impedance. It's more than likely telling the truth, because I've run enough simulations to know when the results are completely unexpected (and likely wrong). The results I obtained seem ... plausible.
Charge amplifiers are less common than high impedance circuits, and aren't suitable for a general-purpose high impedance bench amplifier (for example). While this article describes high impedance inputs, a charge amp has a very low input impedance. The capacitance of the transducer and feedback cap (Cf) determine the gain. If Cf is smaller than the piezo capacitance, the circuit has gain (by the ratio of the two capacitances). Making Cf larger than the piezo capacitance makes the gain less than unity, but it can accommodate much higher input levels.
If you need to 'condition' the output of any capacitive sensor (including piezo types), a charge amp offers the unique advantage that hum pickup is almost eliminated. It's also insensitive to cable capacitance, so a high capacitance cable won't attenuate the signal - however, it will increase the opamp's noise output. In the drawing below, the charge amp itself is based on U1A, and has unity gain if the piezo has a capacitance of 250pF. If the piezo has more capacitance, Cf can be increased in value (ideally the same as the piezo), and Rf can be reduced. For example, with a 10nF piezo and 10nF for Cf, the gain remains at unity and Rf can be reduced to 2.5MΩ for the same -3dB frequency (6.4Hz).
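The charge-amp scaling rule can be sketched the same way (helper names are mine, not from the article):

```python
import math

def charge_amp_gain(c_sensor, c_feedback):
    """Ideal charge-amplifier gain is the capacitance ratio Cs / Cf."""
    return c_sensor / c_feedback

def rf_for_corner(c_feedback, f_low_hz):
    """Rf sets the low-frequency -3dB point with Cf: Rf = 1 / (2 pi f Cf)."""
    return 1.0 / (2.0 * math.pi * f_low_hz * c_feedback)

# The example from the text: a 10nF piezo with 10nF for Cf stays at unity
# gain, and ~2.5M for Rf keeps the same 6.4Hz corner.
gain = charge_amp_gain(10e-9, 10e-9)
rf = rf_for_corner(10e-9, 6.4)
```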
Figure 12 - Charge Amplifier, 6dB Gain
If the charge amp is configured for unity gain (which is the ideal) but more gain is needed, it's added with the second stage. With the values shown for R3 and R4, gain is 6dB, but it can be changed as required. Purely as an example, the circuit is shown using a single supply (which can be anything from 5V to 30V with the OPA2134 shown), or a split supply can be used.
This circuit has been included simply because it's very common with accelerometers and many other scientific/ industrial sensor systems. Because it doesn't have high input impedance it is a little out-of-place, but not including it would limit your options. The low input impedance makes it almost immune from hum, something that will always be a problem with 'true' high-Z preamps (and I know this from personal experience).
Most of the salient points about construction have already been covered. As already noted, the non-inverting input pin of the TL072 opamp (pin 3 as shown) must be lifted so it's not inserted through the Veroboard, and resistors R1 and R2 connect directly to the IC pin, along with D1 and D2.

The remainder of the circuit is not at all critical, but you will need a very well shielded enclosure to minimise 50/60Hz hum. The output buffer is optional, as is the low pass filter shown. If you are using a piezo sensor as a geophone (for example), most of the interesting signals will be below 20Hz, with some being a great deal lower (0.1Hz is easily achieved with a reasonably high capacitance sensor).

I suggest a BNC connector for the input, because I've tested a sample of a few I have to hand, and their insulation resistance is too high to measure. You also have to be careful with the cable used to the sensor, as it also needs extremely high insulation resistance or low frequency performance will be impaired. RG174/U is one suggestion, as it's small (less than 3mm diameter), and has acceptably low capacitance at around 100pF/ metre. It also seems to have fairly low triboelectric noise. Keep the cable as short as possible.

The cable's capacitance has two side-effects. The first is that it reduces the output level from the sensor, because it forms a capacitive voltage divider. If the sensor has a capacitance of (say) 100pF and the cable has the same, the level will be reduced by 6dB.

The second effect actually works in our favour. Because the cable and sensor are in parallel, the effective capacitance is the sum of the sensor's and cable's capacitance, so using the same values as before, the total capacitance is now 200pF, and the -3dB frequency is moved lower by one octave. Where the low frequency -3dB frequency would normally be 1.6Hz, the cable capacitance moves that down to 0.8Hz (assuming a 1GΩ resistor).
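Both cable effects follow from treating the sensor and cable capacitances as a divider and as a parallel pair. A sketch using the 100pF figures from the text (function names are mine):

```python
import math

def divider_loss_db(c_sensor, c_cable):
    """Level loss from the capacitive divider formed by sensor and cable."""
    ratio = c_sensor / (c_sensor + c_cable)
    return 20.0 * math.log10(ratio)

def low_corner_hz(r_bias, c_sensor, c_cable):
    """Low-frequency -3dB point; the two capacitances add in parallel."""
    return 1.0 / (2.0 * math.pi * r_bias * (c_sensor + c_cable))

loss = divider_loss_db(100e-12, 100e-12)      # -6 dB with equal capacitances
f_low = low_corner_hz(1e9, 100e-12, 100e-12)  # ~0.8 Hz with a 1G resistor
```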
However, be aware that as the cable is moved it will generate a voltage (triboelectric noise) that cannot be distinguished from that from the sensor, so everything needs to be kept very still while a measurement is being made. The amount of signal generated by the cable depends on the dielectric used, and some cables will be a lot more sensitive than others. I haven't tested a range of cables and can't make a specific recommendation based on self-noise, but a coax cable using a foam polyethylene dielectric with copper wire would be a safe choice. As noted above, RG174/U seems to have fairly low triboelectric noise in the limited tests I've done.
When the circuit is built, I suggest that you measure the DC offset from the input opamp before soldering in C3 (electrolytic capacitor). You will need to use a capacitor from the input to ground, or a measurement will be impossible due to 50/60Hz hum pickup. Depending on the input resistance you may see either a positive or negative DC voltage at the output of U1A, but it will typically be no more than 100mV. I measured -100mV with a 1GΩ resistor, but +20mV with 20MΩ input resistance. While electros aren't bothered by a small reverse DC voltage (< 1V), it's not hard to measure the voltage on pin 1 and orient the capacitor so its polarity is correct. You may need to do the same for C2 if you won't be using the 2.5V DC offset feature, because it can be reversed as well. If the 2.5V DC offset is used, C2 must be oriented as shown above.
No special precautions are required with the charge amplifier, unless the feedback capacitance is very small and the feedback resistor is a high value (> 10MΩ). If that's the case, you need to take the same precautions described above, namely using 'sky hook' techniques to minimise leakage across the resistor. You will also need to use a capacitor with particularly good insulation, or that will compromise performance.
As stated earlier, this project is not (and is not intended to be) the be-all and end-all of high impedance preamplifiers. It's designed to be cheap, easy to build, and for general experimentation. The fact that it works very well is a bonus. I built mine to have a maximum gain of 100, and once I fitted an internal screen to protect the input, I was finally able to run it with no source connected.
To give you an idea of how sensitive a 1GΩ input stage can be, I found that I could measure 50Hz hum that was picked up by the inner terminal of the BNC connector. The only way to eliminate the hum completely was to press a piece of metal across the front of the connector to shield it from the outside world. A tiny 1mm diameter pin socket recessed inside the earthed BNC connector was enough to pick up several millivolts of hum - I would never have believed it if I hadn't seen it for myself.
+ +Mine also has no high frequency rolloff built in, because its purpose at this stage is not exactly undefined, but I wanted it to be flexible. As a result, a noise test with the input open-circuit shows it to be ... abysmal! This is to be expected of course, because the TL072 isn't the quietest around, but most of the noise is due to the 1GΩ resistor. The resistor alone contributes 575µV, so with a gain of 10 that becomes 5.75mV, and a gain of 100 yields 57mV. Yes, 57mV of noise, and this has been (more or less) confirmed by measurement. The measured noise with a gain of 100 was actually 45mV on average, and listening to it proved it to be wide band white noise.
However, as soon as a piezo sensor is connected, the noise level falls dramatically, depending on the capacitance of the sensor. This happens because the sensor (and cable) capacitance filters all but the lowest noise frequencies at 6dB/octave, with the -3dB frequency determined by the total capacitance.
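This first-order filtering is just the RC corner of the source capacitance working against the 1GΩ bias resistor. A minimal sketch, using illustrative sensor capacitances (not figures from the article):

```python
import math

def corner_freq_hz(r_ohms, c_farads):
    """-3 dB point of the low-pass formed by the input resistor and the
    total sensor-plus-cable capacitance: f = 1 / (2 * pi * R * C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# Illustrative capacitances only - even 1 nF pushes the noise corner
# below 0.2 Hz, which is why the audible noise falls so dramatically.
for c in (100e-12, 1e-9, 10e-9):
    print(f"{c * 1e9:g} nF -> f(-3dB) = {corner_freq_hz(1e9, c):.3f} Hz")
```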
All in all, if you have a need to measure signals at very high impedance levels, this is another very useful tool for your arsenal. The cost is moderate, and it's probable that the case and connectors will cost far more than the circuit itself.
Some of the material presented here has been prepared based on information gathered from the Net, but the majority is based on experimental data, simulations and bench testing. The references below will be of assistance to those who want more information.
Copyright Notice. This material, including but not limited to all text and diagrams, is the intellectual property of Rod Elliott, and is © 2015. Reproduction or re-publication by any means whatsoever, whether electronic, mechanical or electro-mechanical, is strictly prohibited under International Copyright laws. The author (Rod Elliott) grants the reader the right to use this information for personal use only, and further allows that one (1) copy may be made for reference. Commercial use in whole or in part is prohibited without express written authorisation from Rod Elliott.
Elliott Sound Products - Hybrid Relays using MOSFETs, TRIACs and SCRs
Electromagnetic relays (EMRs) remain one of the most popular switching devices ever created. They have low losses, and are used in countless applications for consumer, automotive and industrial systems. When used within ratings, relays have a very long life (typically up to a million operations), and are very reliable. However, they are supplanted in many systems by SSRs (solid-state relays) using TRIACs or back-to-back SCRs. Being 'solid-state' devices, they have an almost infinite life, provided they are kept well within ratings at all times.
However, SSRs are not without their problems, one of which is power dissipation. Typically a TRIAC or SCR has a constant voltage drop when conducting, and it's such that these devices dissipate around 1W for each amp of current. For a load that draws one or two amps, this is of little concern, as 1-2W is easy enough to dissipate. The situation changes rapidly if the current is 10A or more, and a heatsink becomes essential. Indeed, a heatsink is often still needed at lower currents, or the device(s) may exceed their maximum rated operating temperature (typically a junction temperature of 125°C).
Power dissipation becomes a limiting factor for currents of 20A or more, and most high-current SSRs end up in fairly bulky packages that need even bulkier (and expensive) heatsinks. This is not desirable for a variety of reasons, and particularly because consumers and systems engineers are nearly always looking at ways to minimise wasted power. This has become more critical as there are frequently (IMO often unrealistic) requirements to fit the highest possible power into the smallest space.
Switching DC proves particularly difficult with high voltages and/or high current. Predictably, the combination of both creates some significant problems. In terms of DC, anything over 30V is a problem, and with even relatively low currents (e.g. 5A or so), there will be a significant arc as the contacts open, breaking the circuit. However, the traditional electromechanical relay has very low losses when the contacts are closed. Contact resistance will generally be only a few milliohms, so power dissipation is negligible. For example, 30A contacts with 3mΩ contact resistance will dissipate only 2.7W, where a TRIAC would be dissipating 30W at the same current. An electromagnetic relay also has an actuating coil, but these normally dissipate somewhere between 500mW and 1W for most standard types. I've tested a 10A relay at 10A and obtained a contact resistance of 6mΩ (increasing to 6.6mΩ at 20A). At rated current, that's a dissipation of only 600mW.
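The contrast between the two technologies is simple I²R versus I×V arithmetic. A minimal sketch of the figures quoted above:

```python
def contact_dissipation_w(i_amps, r_contact_ohms):
    """Power dissipated in closed relay contacts: P = I^2 * R."""
    return i_amps ** 2 * r_contact_ohms

def triac_dissipation_w(i_amps, v_drop=1.0):
    """TRIAC/SCR conduction loss, using the ~1 V drop (~1 W/A) rule of thumb."""
    return i_amps * v_drop

print(f"{contact_dissipation_w(30, 3e-3):.2f} W")  # 30 A, 3 mOhm contacts: ~2.7 W
print(f"{triac_dissipation_w(30):.1f} W")          # TRIAC at 30 A: ~30 W
print(f"{contact_dissipation_w(10, 6e-3):.2f} W")  # tested 10 A relay, 6 mOhm: ~0.6 W
```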
While it's easy to design a hybrid system when the DC shares a common ground (or other rail that's common both to the relay and the electronics), greater difficulties are assured when complete isolation is required (such as switching mains voltage). Nothing is insurmountable of course, and I encourage the reader to look at the article MOSFET Relays to see some examples. As described, MOSFETs are good contenders for low-loss switching of AC or DC, but they still have internal resistance (few are less than 20mΩ) so will dissipate power. Assuming 20mΩ and two MOSFETs (a total of 40mΩ), dissipation is 36W with the same current as before (30A, AC or DC). EMRs will usually have less than 6mΩ contact resistance (5.4W contact dissipation for the same current).
Photo Of Dismantled Sample Relay
The relay style used for the examples presented is shown above. This is a very common relay, and it's the same one recommended for Project 39 (mains soft-start circuit). The essential ratings are shown on the cover, namely 10A at 30V DC, 10A at up to 250V AC (resistive, cosΦ = 1), or 3A at 240V with a power factor of 0.4 (cosΦ of 0.4 - inductive or capacitive). Although it's hard to see, the contact clearance is about 0.4mm. Based on the 'quick and dirty' estimate of 3kV/mm, the contacts can withstand at least 1,200V without 'flash-over' (breakdown of the air). However, it would be a very foolhardy design if it were stressed to that voltage. The safe limit is as marked on the relay - 250V AC (353V peak). The relay shown is a '1 Form C', meaning a single contact set with changeover contacts. This is the same relay that measured 6mΩ contact resistance. Higher current relays can usually be expected to have less. We only use the normally open contacts in the designs shown, so a '1 Form A' relay can be used.
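The 1,200V figure is just the 'quick and dirty' 3kV-per-millimetre estimate applied to the 0.4mm contact gap. A sketch of that rule of thumb (the real flash-over voltage depends on humidity, contact shape and surface condition, so treat it as a rough guide only):

```python
def air_gap_withstand_v(gap_mm, kv_per_mm=3.0):
    """Rough flash-over estimate for an air gap, using the 3 kV/mm
    rule of thumb quoted in the text."""
    return gap_mm * kv_per_mm * 1000.0

print(f"{air_gap_withstand_v(0.4):.0f} V")  # 0.4 mm contact gap: ~1200 V
```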
Photo Of Relay Destroyed By Arcing
The photo above was submitted by a reader, and shows what happens when a small relay is expected to break a high-current arc. The contacts and their supports are totally destroyed, with only the 'stumps' of the contact arms remaining. A similar photo of an industrial (much more rugged) relay is shown in the Relays - Part 2 article. Once an arc is maintained by the applied voltage and current, there is almost nothing you can do to prevent subsequent failure. The only option is to prevent the arc from forming in the first place.
For hybrid relays, any semiconductor switch can be used to bypass the EMR (electromechanical relay), including BJTs (bipolar junction transistors), MOSFETs, IGBTs (insulated gate bipolar transistors), SCRs (silicon controlled rectifiers, aka thyristors) or TRIACs (bidirectional AC switch, originally a trade mark, but now generic). Each has advantages and disadvantages, but the only solution for switching audio is to use MOSFETs, as all other devices introduce gross distortion. For most power control systems this is irrelevant, and there are only a few places where high-current audio requires switching (DC protection circuits don't need to be linear, as they only operate under fault conditions).
Most commercially available hybrid relays use TRIAC or SCR 'solid state' switching in tandem with the EMR. This is fine for switching mains or other mains frequency voltage to a load, and they are a reliable and mature technology. However, if you need to switch high-current audio signals, they are of no use. In addition, they cannot be used with DC, nor if there is a DC component in the switched supply voltage. For the things that most audio people will want, the only method that will work is to use MOSFETs.
No BJT switching circuits are shown here, as they are uncommon in hybrid relays. Unlike MOSFETs or any of the other switching systems, a BJT requires considerable base current to turn on fully, and this is difficult to provide with any common optoisolator. It can be done of course, but I don't know of any commercial circuit that uses them.
Note that while some commercial hybrid relays may turn off the 'solid-state' part of the circuit when the relay contacts are closed, there is no requirement to do so. The circuitry becomes far more complex, and while it does save a small amount of power (around 10mA or so) this is not worth the added complexity. The circuits shown here keep the MOSFET, TRIAC or SCR turned on for the duration of the switch-on cycle, and only the MOSFET will dissipate any power at all (about 10mW, assuming a contact resistance of 6mΩ). It's fair to say that this is negligible.
In the circuits shown, there is no attempt to reduce the EMR's drop-out time using zener diodes or other techniques. All circuits shown use a diode for back-EMF suppression, and while this causes the contacts to remain closed for longer after de-activation, the solid state switching circuit is delayed for long enough to ensure this isn't a problem. Relay drop-out can be made faster as described in Relays - Part 1, but this may require additional circuitry to handle the higher back-EMF without compromising the electronic switching or delay circuits.
While the circuits below show a comparator as the timer, this does make the circuit more complex. I showed comparators because their operation is easy to understand (the output takes the polarity of the most positive input), and they are common in precision time-delay circuits. However, the time delay can be implemented with alternatives, as shown in Section 6. Feel free to use any timer with any switching circuit, as they are interchangeable. The time delay for all circuits is about 40ms (after the relay supply is removed).
One thing that you'll see over and over again elsewhere is hybrid relays using zero-crossing (aka zero-voltage) detection. The loads that can be switched this way are very limited, being incandescent lamps (now becoming extinct) and switchmode power supplies. The latter includes the SMPS used in most LED lamps, but as most are relatively low power, zero-cross switching is of limited value. Many loads are inductive, including mains-frequency transformers and one of the most common of all - motors.
Although these are always referred to as being inductive, this is only true at power-on and/or with no load. When loaded (even to as little as 10%) they are only partially inductive. The critical part is at power-on, and zero-cross switching is the worst possible option, as it guarantees maximum possible inrush current. Almost without exception, 'random' switching is used with motors and transformers, often controlled by a manually operated switch or a process controller (in industrial installations). Household motors used in fridges, pool pumps and the like are switched by either a thermostat and/or a timer. These are also random - they apply power when needed, and do not consider the mains voltage phase angle.
The misguided application of zero-cross switching for everything is just that - misguided. In many cases it's assumed that zero-voltage switching must be better, because it lets people experienced with microcontrollers show off their skill, but these same people rarely have enough knowledge of purely analogue processes to understand when (and why) zero-voltage switching is or is not appropriate. I've literally lost count of the number of allegedly 'general purpose' switching systems (using SSRs or hybrid relays) that have specified zero-crossing detection for the design. It is true that there may be a small reduction of noise when switching (say) a 2kW heater at the zero-crossing, but these things are usually switched on and off over a period of several minutes (sometimes a lot longer).
A TRIAC or SCR based SSR will make electrical noise when it's conducting (see Solid State Relays, in particular section 5, where the voltage waveform of a TRIAC is shown). When used in a hybrid, this disappears except at the instant of switch on/off and it's generally unobtrusive. Making the circuit zero-voltage switching means that it's limited to a few applications, with motors and transformers excluded.
Unfortunately, once an idea (good or bad, but with a definite bias towards 'bad') gets some attention, it becomes repeated ad nauseam until a sizeable number of people think that it's the 'right' way to do something. Silly (or stupid) ideas are treated as gospel, and are accepted without question. This is something I've had to confront many times, and the use of zero-voltage switching is just one of many. So, unless you know absolutely that your load will benefit from zero-voltage switching, don't even attempt it.
Consider that millions of pieces of equipment are switched on and off using EMRs (which are random switching), and this isn't likely to go away any time soon (if ever - at least until AC mains distribution disappears in favour of DC). Likewise, a random switching SSR or hybrid relay is almost always the better choice except for some specific loads that require greater sophistication. For this reason, all circuits shown in this article are random switching, as are those in the SSR article.
A switching system has three major components - the power source, switching system and the load. All must be matched to the others, not just AC/DC, voltage and current, but the nature of the load. If you get your matching wrong, bad things can happen. For example, if you were to use a zero-voltage switching circuit with a 1kVA toroidal transformer, you guarantee maximum possible inrush current, every time the transformer is turned on. Everything is stressed to the maximum - the switch itself, the transformer and even the house wiring. It's quite likely that you'll trip the mains circuit breaker at regular intervals, all because you didn't realise that you used the wrong switching type. As a side issue, such a transformer doesn't need a 'special' switch, it needs an inrush current limiter. This will also reduce the load on the switch itself and ensure stress-free operation for the life of the equipment.
The following is adapted from a relay datasheet [ 1 ], and shows the derating curves for both AC and DC operation. For the relay to meet its life expectancy, the current and voltage must not exceed the limits shown by the red curve (DC) and the green curve (AC). There are two ratings, one for DC and the other for AC. Should the ratings be exceeded, the relay contacts will be subjected to arcing that will reduce their life and/or destroy them. A serious overload (e.g. 14A at 56V for a power amplifier DC protection circuit) will destroy the relay - probably the first time it's used!
Figure 1 - Relay Switching Capacity
The graph shown above is quite possibly the most important graph you'll ever see when it comes to relays switching DC. The relay itself doesn't matter very much, because the only thing that normally changes is the maximum current. The data can be extrapolated for higher current relays, but unless the datasheet specifically provides a similar graph showing higher DC current switching capacities, assume that 30V DC is the maximum permitted voltage for rated current. The current derating required at higher voltages is very clear. At 40V DC, the allowable current is reduced to less than 2A, with an absolute maximum voltage of 100V DC at 500mA or less. Ignore this at your peril.
This same graph is also shown in the Relays - Part 2 article. DC loads (even within ratings) reduce the life of any relay, and high voltage and current cause arcing that reduces the life of the contacts. The idea of a hybrid relay is to offload the switching to an external device, which for most of this article will be one or more MOSFETs.
It's probable that very few readers will ever have downloaded a relay datasheet, and many suppliers don't make them available. Hobbyist suppliers usually don't, and even if you do get the datasheet, some are less informative than others. The above graph is (almost) unique, in that it's one of only two such graphs I found amongst all of the relay datasheets I have downloaded. Of the sixty different PDF files, the remainder failed to include anything similar. They all state that rated DC current is only permitted up to 30V, but the others did not include the detail seen above.
An electrical arc is a very potent destroyer of anything nearby, including the conductors that initiated the arc. Electrical arc welding (in all forms) is a clear demonstration of how much material can be moved from one electrode to the other, and it also demonstrates the heat produced (along with light - including short-wave ultraviolet, which causes skin burns). The more current that's available, the easier it is for an arc to be self-sustaining, even at surprisingly low voltages. A 'typical' stick-welder may be supplying 50A at a voltage of only 15-20V, and melting the welding rod and work piece quite happily. The same thing happens inside a relay when the contacts open, and it's up to the circuit designer to ensure that a sustained (and therefore destructive) arc is not produced.
You might be tempted to exceed the relay's ratings once there can be no arc (thanks to the added electronics), but that would be unwise. I checked a couple of high-current relays (including a 40A automotive relay) to determine contact resistance, and it's not always as low as you may imagine. The automotive relay measured 269mV at 50A, a resistance of 5.38mΩ, and a heavy-duty octal relay measured 338mV at 50A (6.76mΩ). The power dissipated in the contacts and internal conductors can be surprisingly high - the octal relay dissipated almost 17W (although it was operated at double its rated current). The automotive relay would dissipate a bit over 8W with 40A (measured across the normally closed contacts).
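Contact resistance and dissipation follow from Ohm's law applied to the measured voltage drop (note that 269mV at 50A works out to 5.38mΩ):

```python
def contact_resistance_ohms(v_drop, i_amps):
    """Contact resistance from a measured voltage drop: R = V / I."""
    return v_drop / i_amps

def contact_power_w(v_drop, i_amps):
    """Dissipation in the contacts and internal conductors: P = V * I."""
    return v_drop * i_amps

# Figures from the measurements quoted above
print(f"{contact_resistance_ohms(0.269, 50) * 1e3:.2f} mOhm")  # automotive relay: ~5.38 mOhm
print(f"{contact_power_w(0.338, 50):.1f} W")                   # octal relay at 50 A: ~16.9 W
```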
One limitation that you'll come across is that many relays have a lower current rating for their NC (normally closed) contacts than for the NO (normally open) contacts. This is largely due to the fact that more contact pressure is available when NO contacts are closed by the coil. All relays use a spring to restore the armature after operation, and that spring must be weaker than the available magnetic force or the relay won't activate at all. As the armature gap closes, more electromagnetic force becomes available, allowing higher contact pressure for the NO contacts. For the applications described here, this isn't a problem. The normally open contacts are used to connect a load, assisted by whatever electronics are added.
The first option is the simplest, and will work when the relay circuit and the load supply share the same common (nominally 'ground') connection. With appropriate choice of the relay and MOSFET, you can switch almost any voltage or current you need with this arrangement. However, like anything that's been simplified to the lowest possible complexity, it's inflexible, and isn't suited for most applications because the 'ground' end of the load is floating when the relay is inactive. This is fine for motors or other simple loads, but is not acceptable where the positive supply needs to be switched, as will be the case with most electronic circuits.
Figure 2 - Simplified DC Hybrid Relay
The circuit relies on two things (both of which are normally true). Firstly, the relay is assumed to have a small delay before the contacts close, and secondly, it's assumed to have a similar delay when the voltage to the coil is interrupted. The dropout (release) time for most relays is in the order of only 5ms, but that's without the diode. Because it's nearly always included, the release time will be similar to the pull-in (operate) time, around 10-15ms. This varies from one relay to the next, but these figures are enough to work with.
While the circuit shown has limited usefulness, it is easy to analyse. When +12V is applied to the relay coil, C1 is charged virtually instantly via D2. This forces the non-inverting comparator input (U1, Pin 2) high, so the output goes high, turning on Q1 (MOSFET). This also turns on almost instantly, applying power to the load. After around 10-15ms, the relay has overcome its internal inductance and inertia and it activates, shorting the MOSFET and reducing its power dissipation to almost zero. The relay therefore carries the load current, with the very low losses we associate with relays.
When relay power is removed, C1 remains charged, and starts to discharge via R1 and the relay coil. After around 10-15ms the relay releases, but the load current is provided by Q1, so the relay only breaks a very small current at a correspondingly low voltage. There is no arc when the contacts open, regardless of supply voltage. With the values shown, the MOSFET will turn off after about 70ms, and because the relay contacts are already open there is no arc. The MOSFET is selected to suit the load's supply voltage and current, and the only limitation is the maximum DC voltage that the relay can withstand.
Average MOSFET dissipation is low (depending on the MOSFET of course), and it's only intermittent. Even if the peak dissipation is 10W or so, a heatsink isn't needed because of the low duty-cycle. If the relay is expected to operate no more than once every 5 seconds, even a fairly 'ordinary' MOSFET should keep average dissipation below 1W. Switching 50V at 12.5A or more is easy, using a 20A relay and (for example) an IRFP240 MOSFET (200V, 20A, RDS-On of 0.18Ω - pretty 'ordinary' by modern standards). While the MOSFET will dissipate a bit over 28W when the relay is opening or closing, these periods are short.
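The duty-cycle argument is easy to verify. A sketch, pessimistically assuming the MOSFET dissipates its 28W peak for roughly 85ms per operate/release cycle (about 15ms of relay pull-in plus the ~70ms turn-off delay - assumed figures, not from a datasheet):

```python
def average_dissipation_w(peak_w, conduction_s, cycle_s):
    """Average power when the MOSFET only conducts briefly around each
    relay transition, one operation per cycle."""
    return peak_w * conduction_s / cycle_s

# 28 W peak, ~85 ms conduction, one operation every 5 seconds
print(f"{average_dissipation_w(28.0, 0.085, 5.0):.2f} W")  # well under 1 W average
```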
As shown, U1 is a comparator, not an opamp. While they share the same symbol, they are quite different, in that comparators operate 'open-loop' with no negative feedback. They are therefore much faster than any opamp, but almost all use an external 'pull-up' resistor at the output (R5). If you use an opamp, choose one that's fairly fast, or the MOSFET dissipation is increased.
While it may seem as if there's quite a lot involved, the whole circuit uses only a few cheap parts. The comparator needs to be fast to minimise MOSFET peak dissipation, but even a TL072 (an opamp, and much slower than a 'true' comparator) is more than fast enough for the task. If the system is controlled by a microcontroller or PIC, the comparator and associated circuitry can be omitted because the micro can control the relay and MOSFET to get the timing right. It's rather pointless showing this arrangement as it will depend on the micro being used, and everything is controlled by software.
This is the general principle behind MOSFET hybrid relays, but don't expect to be able to go out and buy one - the original idea was patented in 1997 (Patent # US5699218), but using a TRIAC instead of MOSFETs. This is a perfectly valid way to build a hybrid relay, but MOSFETs provide advantages, and are more suited for DC and linear AC applications. You can buy MOSFET relays, but most are fairly expensive and it's usually cheaper to build your own. For example, a 48V, 20A MOSFET relay may cost anywhere from AU$120 to AU$500 - each!
Most MOSFET relays you can buy are isolated, making them (more-or-less) equivalent to electromechanical relays. However, as noted above, they are expensive, and may not be ideal for use in a hybrid setup. Many have a slow turn-on time (around 1ms is typical), so dissipation can be very high for the turn-on period. With a 50V supply and a 4Ω load (the same as used in the previous example), dissipation will peak at 156W as the MOSFET turns on. While the available devices are designed for that, it limits the duty-cycle. This isn't normally a problem, because almost no-one uses relays for high repetition rate switching.
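The 156W figure is the standard worst case for a resistive load during a slow (linear) turn-on: dissipation peaks when half the supply voltage is across the device.

```python
def peak_linear_dissipation_w(v_supply, r_load):
    """Worst-case device dissipation while turning on into a resistive load.
    The maximum occurs at Vds = V/2, so P = (V/2)^2 / R."""
    return (v_supply / 2.0) ** 2 / r_load

print(f"{peak_linear_dissipation_w(50.0, 4.0):.2f} W")  # 50 V, 4 Ohm: 156.25 W
```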
As described in the MOSFET Relays article, an IC is now available that renders all that came before essentially obsolete. The Si8752 is a capacitively-coupled MOSFET driver that can supply far higher peak current than common photo-voltaic optocouplers (which are used in most MOSFET relays). The datasheet can be seen here, and I suggest that the Si8752 (diode emulation) be used as it's simpler. Unfortunately, these ICs are only available in a SOIC-8 (SMD) package, and at the time of this update are hard to get.
The basic control circuitry is identical to that shown in Figure 2, but the MOSFET relay drive circuit uses the Si8752 to provide gate voltage to the output MOSFETs. The circuit shown below can be used with AC or DC, and for DC the two MOSFETs can be paralleled, doubling the current rating, but making the circuit polarity-sensitive (as you'd expect).
Figure 3.1 - AC/DC Hybrid Relay Using Si8752
The control and controlled sections are isolated, limited only by the characteristics of the isolator. These are rated for 2.5kV, but I would be wary of using one to isolate mains voltages, because the minimum creepage and clearance distances are so small. With a body width of 3.8mm (typical), this may not be considered acceptable, but the IC does have approvals from all the major regulatory agencies (UL, CSA, VDE, and CQC certifications) according to the datasheet.
Operation is identical to that described for the Figure 2 circuit, with the only difference being that the diode emulator is driven with 12mA via R5. This is a compromise between MOSFET activation time and 'diode' dissipation. According to the datasheet, 'on' time for the MOSFETs is 41µs (typical) and 125µs (maximum) with 10mA, and 'off' time is typically 15µs. This may not be as fast as you'll get with direct connection of a drive circuit to the gates, but it's a great deal faster than most other isolated MOSFET drivers.
For higher speed, the value of R5 can be reduced, with a maximum permitted current of 30mA. With a 12V supply, that would mean reducing R5 to 330 ohms, but you must ensure that the comparator can sink that much current when the output is low. The LM311 comparator (for example) can sink up to 50mA, so that's unlikely to be a problem.
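Sizing R5 is simple series-resistor arithmetic. A sketch, assuming roughly 2V forward drop across the isolator's input 'diode' (an assumed figure - check the Si8752 datasheet for the actual value):

```python
def input_resistor_ohms(v_supply, i_target, v_forward=2.0):
    """Series resistor for the isolator's diode-emulation input:
    R = (Vsupply - Vf) / I. The 2 V drop is an assumption."""
    return (v_supply - v_forward) / i_target

print(f"{input_resistor_ohms(12.0, 0.030):.0f} ohms")  # ~333 -> nearest standard value 330
```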
While there are several photovoltaic (aka PVI - photovoltaic isolator) MOSFET drive ICs available, the Si8751/2 have such a performance improvement that it's difficult to recommend any other system. ICs such as the VO1263AB and VO1263AAC (dual PVIs) or the TLP590B or APV1122 (single PVIs) certainly work, but they all suffer from having a very poor output current capability (around 15µA). This means that the MOSFET(s) turn on rather slowly, and in extreme cases may not be fast enough to start conducting before the EMR contacts close. These devices are useful, but IMO the Si8751/2 are so superior that I wouldn't use anything else.
Figure 3.2 - AC/DC Hybrid Relay Using VOM1271
Photovoltaic optocouplers are mostly rather feeble, with a very low output current that can't charge the gate capacitance quickly. For a hybrid relay, that's not a major problem if you only need DC arc suppression. Loudspeaker protection systems are a case in point. The relay closes a few seconds after the power amp is turned on, and at that moment there's not likely to be a significant voltage present. If there's DC present, the relay doesn't close at all. A photovoltaic optocoupler will turn on the MOSFETs within perhaps 100ms or so, but the relay contacts are closed and there's little or no current through the MOSFETs. Should the relay have to open due to a DC fault, the relay contacts open while the MOSFETs are still turned on. The DC fault current is interrupted by the MOSFETs, not the contacts, so there's no arc.
The VOM1271 (Vishay) is shown in Fig. 3.2, but there are other options (e.g. TLP591B [Toshiba], APV1122 [Panasonic] or PVI1050N [Infineon]). None of these devices come close to the Si8752, but they will work well in a hybrid relay. These are not inexpensive ICs, but compared to a commercial hybrid relay the circuit can be built for a fraction of the price. Note that only the turn-off part of the timing waveform shown below applies if you use a photovoltaic optocoupler. The relay will almost certainly close faster than the MOSFETs can turn on with a gate supply current of (typically) less than 20µA. Most of these ICs are designed to have a fast turn-off (note the 'Turn Off' block inside the optocoupler), so the MOSFETs are protected against excessive power dissipation. Not all photovoltaic optos use the turn-off circuit though, so choose carefully.
Note that for particularly high power applications, you may choose to use an IGBT (insulated gate bipolar transistor) instead of a MOSFET. Not all IGBTs include a reverse diode, so if your application is AC, you need to choose one that does, or add an external diode. The diode must be capable of handling the full load current. IGBT hybrid relays are not covered in any further detail here, but note that unlike the MOSFET hybrid relays shown, IGBT versions are not suitable for switching audio, as they will introduce considerable distortion. As a hybrid, this will not be audible except when turning off under load. This is unlikely to cause problems.
Figure 3.3 - Possible Commercial Implementation For A Hybrid Relay
Commercial hybrid relays would typically use a PIC or ASIC (application specific IC) to perform timing functions, as this requires only a single IC and a bit of code to create the required delay. This could be expanded to include functions such as load detection (to verify the semiconductor(s) haven't shorted out), or other functions that the manufacturer deems worthwhile. None of this changes the basic operation, which as shown above is fairly straightforward. Even the smallest PIC will have more than enough processing power, but it may not be able to supply much output current to the optoisolator.
To show how the hybrid system works, the following timing diagram lets you see the process in detail. The relay 'on' time was deliberately kept to the minimum so both 'on' and 'off' sequences were on the same graph. The DC load power supply was 50V, with a 4Ω load. The simulation isn't perfect, as a real relay will show some contact bounce, especially when the contacts close, and I didn't add that as it would make the graph too busy.
Figure 4 - MOSFET And Relay Contacts Timing
Power is applied to the relay circuit exactly at the 1 second mark. The MOSFET starts conducting within a few microseconds, and this isn't visible at the time scale used. The MOSFET carries the load current until the relay contacts close (about 10ms). The MOSFET current is then reduced to almost zero - perhaps 100mA or so, depending on the relay and the MOSFET. When the relay 'on' signal is removed 50 milliseconds later, the MOSFET continues to carry the current until after the relay contacts have opened. This prevents any arc across the relay contacts, because the voltage across them is so low. The exact voltage depends on the MOSFET's RDS-On (about 0.18Ω for an IRFP240), so the relay contact potential will be only 2.25V for the examples shown here (4.5V with two MOSFETs in series). Either voltage is far too low to allow an arc to be created, which is the whole purpose of this scheme.
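The 'no arc' condition is just load current times the MOSFET's on-resistance:

```python
def contact_voltage_v(i_load, rds_on, n_devices=1):
    """Voltage across the opening relay contacts while the MOSFET(s) still
    conduct: V = I * Rds(on) * (number of devices in series)."""
    return i_load * rds_on * n_devices

print(f"{contact_voltage_v(12.5, 0.18):.2f} V")     # single IRFP240: ~2.25 V
print(f"{contact_voltage_v(12.5, 0.18, 2):.2f} V")  # two in series (AC form): ~4.5 V
```

Either result is far below the voltage needed to strike an arc across the opening contacts, which is the whole point of the scheme.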
While this arrangement will always extend the total release time for the system as a whole, it's uncommon that there's a precise timing requirement for relay circuits. This is because designers know (or should know) that relays take time to activate and release, and while the specifications generally show very fast release times, this is invariably without the back-EMF suppression diode. The 'relays' articles show that the release time is usually extended to be roughly equal to the pull-in time when the diode is used, and it's very rare to omit it in any switched circuit. Deactivation can be made faster if necessary, as described in the 'Relays' [ 4, 5 ] articles.
Without the diode, the back-EMF from the relay coil can easily exceed 400V, and that will destroy most switching transistors. The design criterion that needs to be applied for a hybrid relay is based solely on the relay's worst-case release time, and the MOSFET must conduct for this time, plus a safety margin of (ideally) not less than 10 milliseconds. If it's known that all examples of the electromechanical relay release within 15ms (as an example only), then the MOSFET drive circuit should be arranged to ensure that the MOSFET conducts for at least 25ms after the relay drive signal is removed. This is easily achieved, even with simple circuitry.
In these examples, the timing is set by R1 and C1. To reduce the delay before the SSR section of the circuit is deactivated, simply reduce the value of either R1 or C1. With the other component values as shown, the delay time is approximately ...
t = R1 × C1 × 0.7
100k and 1µF therefore give a delay of 70ms, as seen in the timing diagram. I've shown C1 as an electrolytic capacitor, but a film cap is preferred for long-term reliability. R1 can be increased in value, but more than 220k is not advisable (and the positive feedback resistor [R4] would need to be increased to around 2.2MΩ). There is a great deal of scope for experimenting, and you can make changes as needed to suit your particular requirements. For example, if C1 is 220nF and R1 is 150k, the delay is about 23ms. This should be more than enough time for an EMR to release, but it must be verified!
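The delay formula above is easily checked numerically; a quick sketch using the article's own values:

```python
def switch_off_delay(r_ohms, c_farads):
    """SSR turn-off delay for the comparator timer: t = R1 * C1 * 0.7."""
    return 0.7 * r_ohms * c_farads

# Values from the text
print(round(switch_off_delay(100e3, 1e-6) * 1e3, 1))    # 100k, 1uF   -> 70.0 ms
print(round(switch_off_delay(150e3, 220e-9) * 1e3, 1))  # 150k, 220nF -> 23.1 ms
```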
Because there is an inevitable delay before a hybrid relay can release, they are not suitable where very fast circuit deactivation is a requirement. An example is a DC protection circuit for loudspeakers, as the delay may be sufficient to cause speaker damage before the DC is interrupted. As with everything in electronics, the end use must match the capabilities of the device(s) used.
I have included a TRIAC (bi-directional triode thyristor) and SCR (silicon controlled rectifier) hybrid relay for AC applications. Regardless of the type of hybrid relay, inductive loads may create problems if no form of protection against back-EMF is provided. This is sometimes easier with a MOSFET solution, because avalanche-rated MOSFETs are now readily available to handle an over-voltage condition. The same condition with a TRIAC generally causes spontaneous conduction - the TRIAC turns on due to the voltage 'spike', and will remain on until the current falls to zero, but this cycle may repeat. TRIACs have a rather odd terminal nomenclature, being 'Main Terminal 1' (MT1) and 'Main Terminal 2' (MT2). The gate (G) is adjacent to MT1. These are shown in Figure 5. TRIAC and SCR hybrid relays cannot be used with DC, as the TRIAC/ SCR cannot turn off.
Figure 5.1 - TRIAC Hybrid Relay
Because a TRIAC (or an SCR) will continue to conduct until the current falls to zero, by its very nature the supply is always interrupted as the voltage and current (for a resistive load) fall to zero. This minimises back-EMF with reactive loads, but if the voltage and current are out-of-phase (inductive load), the TRIAC drive circuit needs additional components to ensure reliable turn-off. This is detailed in the MOC302x datasheet, and isn't shown in the circuit above. Consequently, Figure 5 is usable with resistive loads only. Unlike a MOSFET hybrid relay, the TRIAC circuit can be used only with AC. If the power supply is DC, it will turn on, but will never turn off until the supply is interrupted by other means. Note that the TRIAC shown is for convenience, and is one of many that can be used. The BT139F-600 is rated for 600V (peak) and 16A RMS. R7 and C2 create a snubber that may be necessary with some loads. This is not covered here.
It's worth pointing out that if the AC load is inductive (a transformer or motor), you should never use a zero-crossing TRIAC driver (they are available). The worst case inrush current for inductive loads occurs when the supply is turned on at the zero crossing, so the driver must be a 'random' type, which turns on as soon as the required current is available, regardless of the AC voltage. Zero crossing drivers are better for resistive loads, as EMI (electromagnetic interference) is reduced.
Unlike a conventional TRIAC, the so-called 'snubberless' TRIAC deliberately inhibits triggering in the (often troublesome) 4th quadrant. This topic is outside the scope of this article, but there's some detailed information available in Project 159. STMicroelectronics has a TRIAC they call an ACST, which is a dedicated AC Switch with high immunity against ΔI/Δt commutation. Similar devices are known as 'Alternistors' or High-Commutation (Hi-Com) TRIACs, depending on manufacturer.
The best way to trigger a TRIAC is almost always in quadrant 1 (MT2 and gate positive) and quadrant 3 (MT2 and gate negative). This is provided by default by the optocoupler. I leave it to the reader to explore the options.
Figure 5.2 - SCR Hybrid Relay
The SCR version is very similar to that used for the TRIAC, except that extra resistors (R7, R8) are required because the trigger current is lower. In addition, a conduction path is necessary for reversed polarities. The BT152-600R SCR is rated for 600V at 16A RMS (22A RMS with two for full-wave), and again is only a suggestion. Otherwise, performance is similar to that using a TRIAC. SCRs are available in higher current ratings than TRIACs, so this scheme is likely to be more common for high-current applications. SCRs are also less susceptible to the change of current vs. time (ΔI/Δt), which can cause spontaneous conduction with many TRIACs.
After deactivation, a TRIAC or SCR circuit will continue to conduct until the current half-cycle is complete, because they rely on zero current to turn off. This may extend turn-off time by a further 10ms (50Hz) or 8.33ms (60Hz). This applies to all TRIAC and SCR relays, hybrid or stand-alone.
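The worst-case total turn-off time is therefore the EMR release time plus up to one mains half-cycle; a small sketch (the 15ms release figure is the example used earlier in the text):

```python
def worst_case_turn_off(emr_release_s, mains_hz):
    """TRIAC/SCR hybrid relay: the semiconductor holds on until the next
    current zero, so allow up to one extra half-period after release."""
    half_cycle = 1.0 / (2 * mains_hz)
    return emr_release_s + half_cycle

print(round(worst_case_turn_off(0.015, 50) * 1e3, 2))  # 50 Hz -> 25.0 ms
print(round(worst_case_turn_off(0.015, 60) * 1e3, 2))  # 60 Hz -> 23.33 ms
```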
The circuit can be simplified somewhat by using a 555 timer. There's a useful reduction of parts needed, and this may be appealing. With the values shown for timing (R1 and C1), the delay is about 43ms, so the EMR should have enough time to release (as the 'Relay' input is open-circuited or raised to 12V) before the electronic part is disconnected. Normally, a 555 timer expects the trigger pulse to be shorter than the delay, but we can use it to our advantage. The internal discharge transistor is replaced by Q1.
Figure 6.1 - 555 Timer Delay Circuit
As long as the input is held at +12V, the EMR is on and the timer can't start. The output will be high for as long as the EMR is powered. The timing starts only after Q1 turns off so C1 can charge via R2. The timer duration must exceed the EMR's dropout time. The optocoupler can be anything suited for the application, including the Si8752, MOC3022 or even a 4N28 or similar for a DC relay. The choice depends on the application, so it's left to the reader to decide.
This is probably the simplest (and cheapest) option, but it requires the user to understand the operation of 555 timers. Like the previous circuits, this one uses +12V to operate. The need for Q1 is a nuisance, but the 555 timer has to be used in an unconventional way to make a hybrid relay, and the transistor can't be avoided.
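The ~43ms delay quoted above is consistent with the standard 555 monostable relationship t = 1.1 × R × C. The component values below are hypothetical, chosen only to reproduce that figure:

```python
def ne555_monostable(r_ohms, c_farads):
    """Standard 555 monostable period (C charges to 2/3 Vcc): t = 1.1 * R * C."""
    return 1.1 * r_ohms * c_farads

# Hypothetical 39k / 1uF, giving close to the ~43 ms mentioned in the text
print(round(ne555_monostable(39e3, 1e-6) * 1e3, 1))  # -> 42.9 ms
```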
The next version shown here is also potentially useful, and uses a CMOS hex-inverter to perform the logic and timing. With one IC, two resistors and one diode, its parts count is lower than any of the others, although a 14-pin DIP IC isn't the smallest footprint around. It could also be done with an SMD IC, but it would be a great deal harder to assemble.
Figure 6.2 - 4584 CMOS Hex Schmitt Trigger Delay Circuit
When the Relay input goes high, C1 is charged via D1 (1N4148 or similar), so the output of U1.2 goes low within a few microseconds. This causes the paralleled outputs of the remaining Schmitt triggers to go high, turning on the optocoupler. When the Relay input goes low, the EMR will release in the more-or-less typical time of 20-30ms, and C1 discharges through R1. Once the Schmitt trigger threshold is reached (around 45ms), the optocoupler is turned off and the 'solid state' relay section is disabled.
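The discharge time to the Schmitt threshold follows the usual RC decay law, t = RC·ln(V0/Vt). The component values and the ~4V lower threshold below are assumptions (typical for a 4584 at 12V), chosen to land near the ~45ms quoted above:

```python
import math

def schmitt_delay(r_ohms, c_farads, v_start, v_threshold):
    """Time for C1 to discharge through R1 from v_start down to the
    Schmitt trigger's negative-going threshold: t = R*C*ln(V0/Vt)."""
    return r_ohms * c_farads * math.log(v_start / v_threshold)

# Hypothetical: R1 = 390k, C1 = 100n, 12 V supply, ~4 V lower threshold
print(round(schmitt_delay(390e3, 100e-9, 12.0, 4.0) * 1e3, 1))  # -> 42.8 ms
```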
The circuit can also be made with an opamp instead of a comparator (a small parts saving), or there are some dedicated timer ICs that could be adapted for the purpose. Ultimately, you can use anything you like for the timer, provided it meets the primary criteria. It must activate the solid state relay instantly, and keep it engaged for at least 10ms after the EMR releases. Anything that you use must be tested thoroughly to ensure that it's 100% reliable. This is particularly important if your application involves switching DC at elevated voltage or current.
Figure 6.3 - Fully Discrete Delay Circuit
Some people like the discrete approach, so the circuit above is suitable. The circuit component values are only a guide, but with those shown it provides a 40ms delay. It doesn't have the accuracy of the comparator circuits shown in the reference designs, but it does use fewer parts and is easy to implement on Veroboard or similar. The transistor and low-power MOSFET are not critical, and can be anything you have to hand. The timing will vary with the gate-source voltage (VG-S) of the MOSFET, and the delay can be adjusted by varying R1. R3 raises the detection limit to a little over 2V to ensure better repeatability. Switch-off time is less than 1ms.
There are depressingly few timer ICs around to choose from - the 555 and its cousins turn up in almost every timer circuit you'll come across, and there aren't many other options. Despite the apparent complexity, a comparator-based timer is one of the best - they are fast and very predictable.
While there's no reason not to use a PIC or similar microcontroller for the timing functions, for the most part it's a bit like using a sledgehammer to kill a mosquito. The timing function is very simple, and a 555 timer is the most economical choice. The amount of messing around with level-shifters and a 5V regulator makes the idea of a microcontroller rather pointless, as you will end up with more parts and an IC that has to be programmed. A simple analogue timer can be 'programmed' with a trimpot if you think that's necessary. The code is trivial, but if you need to make an adjustment (to the turn-off delay for example) then the device has to be reprogrammed. I don't think it's worth the extra complexity, and it won't work any better.
With so many applications now using high-voltage DC (think electric cars for starters) it's useful to look at a DC only solution. One that caught my eye some time ago was a patent document from 1987 [ 7 ]. Although it's somewhat sub-optimal in many respects, the idea is interesting. The biggest problem with it is MOSFET dissipation after the mechanical contacts open, but careful capacitor selection will keep the conduction period short enough to prevent the MOSFET from overheating.
The MOSFET has high dissipation because it's operated in 'quasi-linear' mode. The capacitor creates a negative feedback path from the drain to the gate, so the MOSFET never gets a high enough voltage to create a 'hard' switch-on. As the voltage at the drain falls, so does the gate voltage. That means that the MOSFET can only ever turn on partway, so its dissipation is high. The 'worst case' dissipation is at half the supply voltage (and therefore half the load current). For example, an 80V DC supply with a 10Ω load means the peak MOSFET dissipation is 160W. That may only last for perhaps 0.5ms, but it's not the way that MOSFETs are normally used.
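The 160W figure is straightforward to verify; a minimal sketch of the half-supply worst case:

```python
def peak_linear_dissipation(v_supply, r_load):
    """Worst-case dissipation of a MOSFET in quasi-linear mode: half the
    supply is across the device, the other half drives the load."""
    v_ds = v_supply / 2
    i_d = (v_supply - v_ds) / r_load
    return v_ds * i_d

print(peak_linear_dissipation(80.0, 10.0))  # 80 V, 10 ohm -> 160.0 W
```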
Figure 7.1 - 'Passive' MOSFET Arc Quench
A sample circuit is shown above, being the version I tested by simulation. A similar arrangement was also bench tested. The additional set of contacts is necessary if the DC source is liable to be turned on via a switch, and it prevents the MOSFET from turning on if a voltage 'step' occurs. When the relay opens, the MOSFET gate voltage will only ever get to somewhere from 4.5V to 6V (depending on the MOSFET itself), so the MOSFET is in linear mode. Selecting a MOSFET with very low RDS-on is a bad idea for a MOSFET operated (even momentarily) this way, so you need to be fairly careful with your choice.
The circuit is shown with a ground connection for the activation and switching sections. However, they are completely independent and can be used with any voltage between the two sections that's within the isolation voltage rating for the relay. Like any other relay, the contacts can be at any (sensible) voltage, as can the control circuit. C1 must be rated for the DC voltage used, and it would be sensible to use a Y-Class cap as they have a very high voltage rating and are 'fail safe'. If C1 were to become shorted, the MOSFET would be 'on' permanently.
Figure 7.2 - 'Passive' Circuit Waveforms
The waveforms are shown above. The red trace is the relay control voltage, but it does not show the inevitable delay when the voltage is removed. This is dependent on the relay itself, and with a more-or-less 'typical' relay and a modified back-EMF circuit (R1 and D1) it will release the contacts in about 4ms. At the instant the contacts open, the drain voltage increases rapidly, and the rising voltage is passed by C1 to the gate of Q1, turning it on. The gate voltage is shown in the brown trace. As you can see, it never reaches the voltage needed for full conduction.
It's important for any hybrid relay to have a delay that's sufficient to allow the EMR's contacts to separate widely enough to be below the arc initiation voltage. A 'safe' assumption for air at sea level is around 1kV/mm, so if you have a voltage of (say) 500V, the absolute minimum contact separation is 0.5mm. Most compact relays have a separation of around 0.4mm, and this will prevent an arc if the voltage is static, but not if the contacts open with DC applied. A common (10A/250V AC, 30V DC) relay has a maximum DC voltage of 30V at rated current. If this is exceeded, a sustained arc can be created that will melt the contacts and often the entire contact assembly as shown in the photo at the beginning of this article.
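The minimum-gap arithmetic above is worth making explicit; a small sketch using the article's 'safe' 1kV/mm figure:

```python
def min_contact_gap_mm(voltage, kv_per_mm=1.0):
    """Minimum static contact separation for air at sea level, using the
    conservative ~1 kV/mm figure from the text."""
    return voltage / (kv_per_mm * 1000.0)

print(min_contact_gap_mm(500))  # 500 V -> 0.5 mm minimum
print(min_contact_gap_mm(400))  # a typical 0.4 mm relay gap is marginal above ~400 V
```

Remember this only applies to a static voltage; contacts opening under DC load can sustain an arc at far lower voltages, as the relay ratings show.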
The original patent document also shows an AC version, but I've not included it because (IMO) it's unsuitable for general usage. It's certainly possible, but one of the other techniques described above is a far better option because they are fully controlled and provide full MOSFET conduction. The delay is also adjustable, and the MOSFET is turned off very quickly, something the arrangement shown in Fig 7.1 cannot achieve. It's inevitable that the MOSFET will always have very high dissipation as it turns off, and the time this lasts is critical. Dissipation is greater than 50W for 1.5ms, which isn't a disaster, but it's longer than is desirable.
For example, 1kW for 1µs is fine, but 1kW for 1ms is not a good idea at all. Have a look at the SOA (safe operating area) curves for any MOSFET, and you'll see that they are capable of extraordinary dissipation if the time is short enough. The SOA curve for the IRFP460 shows 500V (VDS) at 50A for 10µs - that's an instantaneous (single pulse) dissipation of 25kW! If the time is extended to 1ms, the maximum single pulse dissipation is reduced to 2.25kW. That's still a lot of power, so it's not a serious limitation.
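The SOA points quoted for the IRFP460 can be turned into pulse energies, which makes the time dependence obvious; a minimal sketch:

```python
def single_pulse_energy(v_ds, i_d, t_pulse):
    """Instantaneous power and total energy for a single rectangular SOA pulse."""
    power = v_ds * i_d
    return power, power * t_pulse

# IRFP460 SOA points from the text
p1, e1 = single_pulse_energy(500, 50, 10e-6)  # 25 kW for 10 us -> 0.25 J
p2, e2 = single_pulse_energy(500, 4.5, 1e-3)  # 2.25 kW for 1 ms -> 2.25 J
print(p1, e1)
print(p2, e2)
```

Note how the allowable *power* falls by an order of magnitude as the pulse stretches from 10µs to 1ms, even though the allowable *energy* rises.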
I decided that it was worth testing the Fig 7.1 circuit, because it's easy to do and I have a test relay set up to run tests for arcing and other 'interesting' things. My first test used a much shorter time-constant for C1 and R2, and the MOSFET couldn't conduct for long enough for the contacts to be far enough apart to prevent an arc. Needless to say, a nice fat arc was produced. I increased the value of R2 to that shown, and it worked very well. However, it's not entirely without issues. If the relay is turned 'off' then back 'on' very quickly, C1 doesn't have time to discharge, and an arc resulted. The secondary set of contacts will prevent that, and they also ensure that nothing can turn on the MOSFET unless the main contacts are opening.
All-in-all, I would describe this circuit as interesting, but it's limited to DC, and the requirement for the second set of contacts has been proven. I used a test voltage of 90V (78V when loaded to 4.85A) and it does stop the arc. However (and unlike the other circuits shown here), contact bounce is not suppressed. The MOSFET turns on and off during the bounce period because it relies on the sudden increase of voltage when the contacts open, and there isn't enough time for C1 to discharge between 'bounces'. IMO, while this circuit works, it is sub-optimal in too many respects to be truly useful. Just because something has been patented doesn't mean it's a good idea.
If you happen to be a relay manufacturer (in this case, Omron [ 8 ]), then you can build a 'special' relay, with extra contacts arranged to operate slightly differently from the main (current-carrying) contacts. This lets you simplify the design quite dramatically. The RL1a contact must close first, activating the TRIAC. A fraction of a second later, the main contacts (RL1b) close, and take up the load current. The TRIAC turns off, because it has almost no voltage across it. When the coil voltage is removed, the main contacts should open first, and the TRIAC takes over until the voltage falls to zero. The TRIAC then turns off.
Figure 8.1 - Commercial AC Hybrid (Omron G9H Series)
There's a snubber (R3 and C1) to ensure that the TRIAC does turn off with difficult loads, along with a varistor to limit inductive spikes. I'm sure that a fair bit of engineering has gone into this design, but as a user, you pay for it. I checked the price, and they are around AU$175 each (priced in 2023). You can build your own for a small fraction of that.
Unfortunately, we can't build our own specialised contact set, so the circuit becomes more complex. However, the parts needed are mostly fairly cheap, and the circuitry isn't complex. Mostly, you'd select the simplest possible circuitry unless you have a specific requirement for very predictable timing - at least within the normal range of commonly available relays.
This article is intended as a primer on hybrid relays, and there are many factors that need to be considered for a 'real-world' application. As with many ESP articles, it's provided to give information that can be used for your own designs. Timing, relay, MOSFET and/ or TRIAC selection depend on the application, and the information here provides guidelines, rather than complete 'ready-to-go' designs. While the circuits shown will all work, it's up to the end-user to determine the power components, based on the final requirements of a system.
The disadvantage of a hybrid relay is that it uses semiconductors. While these make the idea possible, they are also a point of failure. Most semiconductors will fail short-circuit, so the relay will never turn off, and this can place operators and/or anyone else at risk. A hybrid relay should never be used in a safety-critical application, and extensive testing is always necessary to ensure that all parts will not be subjected to voltage or current beyond the ratings of the devices used.
The circuits (and results) described are simulated, but have not been built and tested. This is not a limitation in any way, as the fundamental principles are easily established, and a simple 'thought experiment' is all that's needed to verify that operation is exactly as described. It's possible that at some stage in the near (or not-so-near) future I will test (some of) the circuits. The MOSFET relay based on the Si8752 has already been built and tested, and is described in the MOSFET Relays article and as Project 198. Indeed, it was the result of running tests on the prototype board I made that prompted this article.
One specification that is almost never provided is the contact clearance within any electromechanical relay. In most cases, it's a great deal smaller than you might expect. One of the few relay datasheets I have that even mentions this parameter is for an automotive relay (nominal voltage 14V, switching up to 40A), and the contact clearance is stated to be 0.4mm. That isn't very much, and I dismantled a relay that I'd been using to test the static contact welding current and measured 0.4mm clearance (shown in the photo at the top of this page). This is an almost identical relay to that shown in the Figure 1 graph, rated for 30V DC and 250V AC, at 10A. In case you're interested, the NC (normally closed) contacts welded themselves together with 50A AC, and the relay wouldn't operate until I applied 24V to the 12V coil. The lesson from this is clear - don't exceed the rated current!
The three articles on relays [ 4, 5, 6 ] are worth reading if you haven't done so already. A vast amount of research went into the compilation of those, and they provide more information than you'll find almost anywhere else. While these common components appear to be simple, like most 'simple' parts they are far more complex than you imagine. None of it is hard to understand, but there are things that you won't find in most 'blogs' - including those from manufacturers. Knowing the limitations is very important to ensure reliability.
It is possible to buy hybrid relays, but expect them to be seriously expensive. If you need one, the DIY approach will most likely save you a considerable amount of time, effort and (probably) money as well. You can use an EMR and MOSFETs (or SCRs, TRIACs, etc.) to suit your application. In one on-line article I saw, the process has been taken to extremes using an Arduino for control. This is basically a silly idea, as it makes the end result far more complex than it will ever need to be for a practical circuit. Even turning off an optocoupler's LED while the EMR is engaged (to minimise lumen depreciation) can be accomplished with simple timers if you wanted to go that far.
Reference 6 is particularly useful, as it describes 'solid state' relays (SSRs) in detail, including the advantages and disadvantages of each type. Most aren't suitable for audio, other than a MOSFET relay.
Elliott Sound Products - IC Power Amplifiers
Power amp ICs such as the LM3886 and TDA7293 are popular, and for good reasons. The circuits are easy to assemble, with a minimum of external parts needed to complete an amplifier. Unlike discrete amps (such as P3A), the IC power amps are much simpler. However, there are some notable restrictions on the use of these IC amps, due to their comparatively low maximum dissipation limits. For the basic design (which has a PCB available), see Project 19. I used this to build and test the circuits shown in Figures 3 and 6.
By necessity, the output current is limited because it's simply impossible to get the heat from the power transistor junctions to the heatsink efficiently. While an LM3886 can deliver a claimed 40W into 8 ohms from ±28V supplies, power into 4 ohms is limited to 68W (typical), and using ±35V with a 4 ohm load provides the same output, because the amplifier's internal protection circuitry won't allow more current. The internal current limit is ±11.5A (typical, claimed) but it will usually be lower because the SOA protection will reduce it when the voltage (and/or temperature) is higher than 'normal'.
Peak output current is claimed to be 11.5A, but that's for a maximum duration of 10ms with 20V supplies. Operation at full power with 35V supplies pretty much guarantees that the IC's internal thermal protection will operate, shutting down the amplifier until it cools. The (absolute) maximum IC power dissipation is 125W, and that is a lot of heat to move from the IC die to the heatsink via a relatively small thermal tab. The 'full pack' (fully insulated) package has a greatly reduced thermal rating, because the insulation layer is fairly thick and is a poor heat conductor.
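As a rough check on the thermal figures above, the worst-case dissipation of an idealised Class-B stage can be estimated with the textbook approximation P = 2·Vs²/(π²·RL). This ignores quiescent current and the supply conditions used in the datasheet, so it's only a guide:

```python
import math

def class_b_worst_case_dissipation(v_rail, r_load):
    """Idealised Class-B amplifier: worst-case total device dissipation is
    2 * Vs^2 / (pi^2 * RL), occurring well below full output power."""
    return 2 * v_rail**2 / (math.pi**2 * r_load)

print(round(class_b_worst_case_dissipation(28, 8), 1))  # ±28 V, 8 ohm -> ~19.9 W
print(round(class_b_worst_case_dissipation(35, 4), 1))  # ±35 V, 4 ohm -> ~62.1 W
```

With ±35V and 4Ω the sustained dissipation approaches half the IC's absolute maximum, which is why thermal shutdown is almost inevitable without a very good heatsink.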
Another issue that users face is the IC's SPiKe™ protection system. The acronym stands for 'Self Peak instantaneous Temperature' (the 'Ke' is for Kelvin). This protects the IC, but the artifacts are decidedly unpleasant if the protection is triggered while you are listening to music at a level that's above the trigger point. A waveform drawing (taken from the datasheet) is shown below, and it sounds just as nasty as it looks.
Figure 1 - SPiKe Protection Waveform
The conditions under which the waveform was captured are not disclosed in the datasheet, but I know from experience that what you see is typical of the LM3886 driving a reactive load (such as a loudspeaker). It requires surprisingly little overdrive into a typical 4 ohm speaker, and the only way to prevent the protection circuits from operating with programme material is to reduce the supply voltage. In most cases, ±25V is a sensible maximum for 4 ohm loads, and that (usually) avoids tripping the protection unless the load is especially nasty.
Unfortunately, this reduces output power. While that's often not a problem for home hi-fi used at reasonable levels, the wrath of the SPiKe will come and bite you if you listen at high levels for an extended period. Fan cooling the heatsink (or using a heatsink that's much larger than usually suggested) will reduce the problem, but it won't go away completely.
The TDA7294 has a rated package dissipation of 50W at 70°C. While this seems much lower than the LM3886, the LM3886's figure doesn't allow for temperature, and assumes a 25°C case temperature. It's a challenge for most hobbyists to work out what they think they can get away with. The allowable power dissipation is reduced as temperature increases, and the maximum die temperature is 150°C, at which temperature the allowable dissipation is zero! The circuit described can also be used with the TDA7294, and all comments apply equally (especially in terms of distortion at higher frequencies).
The TDA7293 has protection, but it's not as drastic as the LM3886's, and even if the IC is driven into clipping it doesn't do anything more unpleasant than simply clip the waveform. The challenge with either of these amps arises if you want to use one to drive a subwoofer. Since you typically need as much power as you can get (within reason of course), neither IC power amp is really suitable.
Note that most of the circuits shown include a 0.7µH inductor in series with the output. This is recommended for the LM3886, but it is entirely optional when boost transistors are added. Its purpose is to ensure that the amp remains stable with capacitive loads, but the load is isolated from the amp IC by means of the 2.7Ω resistor used to turn on the external transistors. It wasn't included in my test amplifier, and no oscillation was seen. If used, the inductor is made by winding 10 turns of 0.4mm-0.5mm wire onto the body of a 10Ω 1W resistor.
Before embarking on any of the ideas shown here, I recommend that you read the Heatsinks article, as that will help you to decide on how much heatsink you need, and the best ways to mount the IC and power transistors. It's not at all uncommon for hobbyists (and even manufacturers) to underestimate the amount of heatsinking needed for a high power application, and a failure can be expensive - especially if it destroys your speaker(s).
You'll also see that most circuits include a pair of diodes from the output to each supply rail. These are optional, because the external transistors will prevent the IC from going into protection mode, and this is where the diodes are needed (to dissipate back-EMF from the load). Since the protection is disabled, the diodes are largely a 'cosmetic' addition. I didn't use them on the test amp I built, and never saw the IC's protection kick in - even when the amp was delivering over 110W into 4 ohms!
There is a way to get (a lot) more power from IC amps (actually, several ways). By means of two (or more) external transistors the IC has an easy job, as it only needs to provide the transistors' base current (plus a bit of its own power - it would be silly not to get at least some of the power needed from the IC). This arrangement is far more stable (and considerably simpler) than the versions you'll find elsewhere. These typically power the external transistors from the supply rails (often from an opamp), but the overall concept has some serious flaws and is best avoided. The LM3886 is shown, but the additional transistor arrangement is identical for other power amp ICs. The alternate method is shown in Figure 3, but it's the least 'friendly' of the various techniques.
Figure 2 - Added Power Transistors
With a pair of output transistors added as shown above, they now handle the majority of the output current. The IC as shown will supply around 1A peak, and the transistors supply 6A (peak) or more, depending on the supply voltage and load impedance. With ±35V supplies and a 4 ohm load, it's possible to get over 100W, with the transistors dissipating an average of 25W (70W peak). The LM3886 will dissipate only around 18W (average) or less than 40W peak. You can even add another pair of transistors (R8 must be increased) to enable the circuit to drive a 2 ohm load.
As shown, we can assume a 'worst case' current gain of around 16 for the transistors (the datasheet claims 10 at 15A, so the estimate is fairly close). That means that when the transistors are passing 6A, the IC only needs to provide less than 400mA to the bases, and a total current of about 1.2A peak. The transistors take most of the stress off the IC, so it should run fairly cool, even when the circuit is delivering over 100W continuous. Naturally, the transistors must be in excellent thermal contact with the heatsink, as their dissipation can be rather high.
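The current split described above can be sketched numerically; the hFE of 16 and the 6A peak are the article's figures, while the function names (and the ~0.7V Vbe assumption for the handover point) are mine:

```python
def base_current(i_collector, hfe):
    """Base current the IC must supply to the external transistors."""
    return i_collector / hfe

def r8_threshold_current(r8_ohms=2.7, vbe=0.7):
    """IC output current at which the external transistors start to conduct:
    about one Vbe developed across R8."""
    return vbe / r8_ohms

print(base_current(6.0, 16))             # 6 A peak at hFE 16 -> 0.375 A of base drive
print(round(r8_threshold_current(), 2))  # -> ~0.26 A through R8 before handover
```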
This looks like an 'all-win' approach, but as always there are caveats. The main issue you face is distortion. The LM3886 is claimed to have distortion of around 0.03%, but adding the transistors will cause this to increase, with the increase directly proportional to frequency. Below 500Hz or so, the increase is 'acceptable', and may not be noticed. However, at higher frequencies the distortion rises, and you can expect it to reach at least 0.5% at 10kHz. Distortion increases as the level is reduced! It can easily reach over 1% depending on the level, and the only way it can be avoided is with added complexity that provides bias for the output transistors (or as shown in Figure 6, but that's still less than ideal).
This is not 'hi-fi', but the distortion will not be noticed if the amp is used for a woofer (reproducing nothing higher than ~500Hz) or subwoofer, because it is reduced at lower frequencies where the LM3886 has more gain. It's the open loop gain of the IC that ensures that there's enough feedback to overcome the Class-B operation of the added transistors. The circuit is the equivalent of running a normal Class-AB output stage without bias, but the IC provides the power until the voltage across R8 exceeds ±0.7 volts. After that, the external transistors provide the majority of the output current. Another ESP project that uses a similar principle is the Project 68 subwoofer amplifier.
One thing this also does is effectively disable the protection circuits inside the LM3886. If the output is shorted, either Q1 or Q2 will almost certainly fail, because the IC no longer 'knows' if the current is excessive. There are techniques that you can use that might provide full protection, but it's one of those things that needs to be thoroughly tested if you plan to implement it. Consider that protection circuits are intended to protect the amp against abuse, and many amps don't include protection yet survive for decades without any issues if they are used sensibly. Note that the bypass caps have been simplified for clarity, but they need to be as shown in Figure 2. Additional diodes are shown for these boosted circuits, but they may not be necessary, because ideally the 'SPiKe' protection circuitry will never be invoked.
Figure 3 - Added Power Transistors (Alternative)
This version is not particularly common, but I've seen it used and there are a couple of circuits on the Net that show it. There is a potential issue with this arrangement, and that concerns proper bypassing of the power amp IC. You cannot use bypass caps at the IC supply pins, because they will cause cross-conduction in the power transistors, leading to rapid overheating and failure. This is more likely at high frequencies, because the bypass caps slow down the rate of change of the base current into Q1 and Q2.
C9 is optional. There is a small risk that it may cause some cross-conduction if the value is too high, so I suggest that the value shown is a realistic maximum. If the circuit oscillates without C9, it's obviously necessary. This is not an arrangement that I would normally suggest, as it doesn't have any particular advantage over the Figure 2 circuit. Once the current threshold is reached, Q1 or Q2 will turn on just as quickly, and the feedback is unable to provide full correction. The external power transistors will conduct when the LM3886 draws more than 3A from either supply rail, and it's unlikely that U1 will ever have to deliver more than ±3.5A (peak) from either supply rail.
Sometimes, just adding a pair of transistors may not be considered enough, especially for (very) low impedance loads. While such loads aren't usually a particularly good idea (cable losses become excessive for a start), there may be times when you need to drive a low impedance load. The following circuit will drive 2 ohms easily. It may even be possible to drive a 1 ohm load, but I wouldn't advise it because cable resistance will cause too much power loss. It's also difficult to build a power supply that can handle ±25A peak current!
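As a rough sanity check on the currents involved, assume a peak output swing of about 25V (an assumed round figure, close to the supply rails; the actual swing depends on the circuit and load):

```python
# Rough peak-current and power estimates for low impedance loads,
# assuming a ~25 V peak output swing (an assumption for illustration).
v_peak = 25.0
for load in (4.0, 2.0, 1.0):
    i_peak = v_peak / load               # peak output current
    p_avg = v_peak ** 2 / (2 * load)     # average sine-wave power
    print(f"{load:3.0f} ohm: {i_peak:4.0f} A peak, ~{p_avg:.0f} W average")
```

The 1 ohm case gives the ±25A peak mentioned above, which shows why the power supply (not the amplifier) quickly becomes the limiting factor.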
Figure 4 - Added Power Transistors In Parallel
This would normally be completely out of the question, but the extra transistors make it easy to do. Due to the relatively low supply voltage, dissipation remains within tolerable limits, but when run at full power there's still a lot of heat to get rid of. Total average dissipation will be about 125W, with roughly 25W for each transistor and the same for the IC. Total average power into the load should be at least 200W.
Under normal circumstances, it's not really advisable to use most power amp ICs in bridge (aka BTL), because that means the load impedance cannot be less than 8 ohms, and the ICs will struggle with the low impedance. A reduced supply voltage helps, but that reduces power. By adding transistors as shown, the IC can easily drive an 8 ohm subwoofer to around 200W when used in BTL mode. Perhaps more interestingly, if the output transistors are duplicated as shown in Figure 4, you will be able to drive a 4 ohm sub to around 400W, provided your power supply can handle the massive current. Each amplifier has an equivalent load impedance of only 2 ohms.

Mind you, you will have to provide heatsink space for two power amp ICs and eight power transistors. If the suggested transistors are used, it's still a fairly inexpensive way to get 'lots of watts' from a fairly simple circuit. At around AU$3-4 each, they are inexpensive devices compared to those required for a discrete amp. Naturally, higher power transistors can be used in place of the TIP35C/36C suggested, but they may cost more.

You could use MJL3281A (NPN) and MJL1302A (PNP) or similar for roughly the same price as a pair of the TIP transistors, which is a cheaper option because you don't need the 100mΩ emitter resistors. It's very unlikely that you'll ever reach the limits of these higher power devices, as they are rated at 250W each (vs. 125W for the TIP transistors). However, you have less thermal conductivity between the die and heatsink with the higher power single transistors, and that makes the thermal interface more critical.
Plenty of application notes, DIY circuits and even commercial products have tried using a pair of LM3886 amps in parallel. Pretty much without exception, this is a disaster waiting to happen. I have seen (and bench tested) one commercial attempt, and it was so poorly executed that it was completely unusable. There are several attempts at DIY versions, and some of these also contain serious flaws that are likely to cause ICs to shut down due to overheating ... or blow up.

The issue is that even a very small DC or AC offset causes a heavy current flow between the IC output pins. Most circuits recommend 0.1 ohm, but if there is a 1V difference between the outputs of the two amplifiers, that means a current flow of 5A. A more-or-less representative (but simplified) parallel circuit is shown below. While it may appear to be alright, you must consider the resistor tolerances and IC offset voltages. Note that the drawing is simplified, with the mute taken directly to the -ve supply, and bypass caps are not shown. By using a single capacitor for the feedback coupling (C2), the two amplifiers have exactly the same low frequency rolloff, preventing the likelihood of very low frequencies causing large offsets at the outputs of the power amp ICs. This is missed in most published circuits, but it's an important consideration.
Figure 5 - Parallel LM3886 ICs For More Current (Simplified)
Most circuits use 1% tolerance resistors, and these are usually perfectly alright to ensure that circuits function as expected. However, in the circuit shown you have to check for the worst case error, where resistor tolerances accumulate such as to create the maximum error (as per Murphy's Law). Just for the sake of this example, assume that resistors are exact, except for R2 (1% high, 22,220Ω) and R5 (1% low, 21,780Ω). That means the first IC has a gain of 23.22 and the second has a gain of 22.78. With an instantaneous input of 1V, U1 therefore has an output of 23.22V and U2 has an output of 22.78V, a difference of 440mV.
440mV doesn't sound like very much, but with only 200mΩ between the two outputs, a current of 2.2A will flow between the outputs of U1 and U2 ... with zero load on the output! Imagine just how bad this can become if someone is foolish enough to use 5% resistors and the smallest (and separate) capacitors possible for the feedback coupling to ground (i.e. separate small caps for each feedback network). I can tell you from personal experience that an Asian manufacturer did exactly that, and the results were completely predictable. This arrangement works only if resistors and capacitors are closely matched (0.1% tolerance!), or if you use the (IMO massively over-complicated) method shown in the LM3886 application note.
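The worst-case arithmetic above is easy to reproduce. The sketch below assumes the nominal values implied by the example (22k feedback resistor, 1k to ground, two 0.1Ω sharing resistors in series between the outputs); treat these as illustrative assumptions rather than the exact schematic values.

```python
# Worst-case circulating current between paralleled power amp ICs.
# Gain-setting values are assumed: 22k feedback, 1k to ground,
# non-inverting gain = 1 + 22k/1k = 23 nominal.
def gain(rf, rg=1000.0):
    return 1 + rf / rg

def circulating_current(tol, r_share=0.1, v_in=1.0):
    g_hi = gain(22_000 * (1 + tol))   # one amp's feedback resistor high
    g_lo = gain(22_000 * (1 - tol))   # the other's low (worst case)
    dv = (g_hi - g_lo) * v_in         # instantaneous output difference
    return dv / (2 * r_share)         # current through both 0.1 ohm resistors

print(f"1%   resistors: {circulating_current(0.01):.1f} A circulating")   # 2.2 A
print(f"0.1% resistors: {circulating_current(0.001):.2f} A circulating")  # 0.22 A
```

This reproduces the 2.2A figure for 1% parts, and the ten-fold improvement with 0.1% parts discussed in the next paragraph.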
If 0.1% tolerance resistors are used, you can expect the worst case circulating current between the ICs to be around 220mA at the same peak voltage, which represents a significant reduction. This will reduce instantaneous no-load dissipation from perhaps 28W (in each IC) to less than 3W (output voltage dependent). Note that DC offset hasn't been considered, but this has to be taken into account. It's fairly low for the majority of power amp ICs, but if the ICs are used with full DC coupling it could be as much as 100mV. This approach is obviously unwise with paralleled power amps. You also have to consider the risk if one IC goes into thermal shutdown and the other does not. This was also seen with the unit I've described, and the results were not a pretty sight (at least one LM3886 failed during testing). Worst of all, it's unpredictable, because the output stages were never intended to have to sink significant current from outside the IC itself, especially if the IC is supposed to be shut down!

The best possible advice I can give on parallel operation is "don't!" Yes, there's a Texas Instruments application note (AN-1192) that shows you how to do it, but the requirement for 0.1% resistors makes it more costly than it should be, and even the app note includes an error in the value of the feedback capacitors. They should be at least 100µF, and preferably 220µF to ensure that their wide tolerance doesn't cause serious problems with any infrasonic input signal. Such signals may be thought uncommon, but a warped vinyl disc can easily cause very high levels. If you were to use the parallel-bridged version (shown in the same app note), you then add 4 opamps and even more extra close-tolerance resistors, to end up with a circuit that will cost more than a discrete design. Figure 17 in the same app note is seriously flawed, because the 22µF caps are way too small, and there may be significant circulating current at very low frequencies. Electrolytic capacitors are as far as you can get from being a 'precision' component.
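To see why the feedback capacitor value matters, consider the high-pass corner it forms with the gain-setting resistor. The 1k resistor below is an assumed value for illustration (not taken from the app note); the point is how far the corner moves with capacitor size, before electrolytic tolerance is even considered.

```python
# Low-frequency corner of the feedback network: the coupling cap and the
# gain-setting resistor form a first-order high-pass filter.
# R = 1 kohm is an ASSUMED value for illustration.
import math

def corner_hz(c_farads, r_ohms=1000.0):
    return 1 / (2 * math.pi * r_ohms * c_farads)

for c in (22e-6, 100e-6, 220e-6):
    print(f"{c * 1e6:4.0f} uF -> corner at {corner_hz(c):5.2f} Hz")
```

With two paralleled amps, each corner also shifts with its own capacitor's (wide) tolerance, so any infrasonic content near the corner produces different outputs from the two amps, and hence circulating current.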
BTL (bridge tied load) is a commonly described application, but with most IC power amps it's not a good idea. The TDA7293 can be used in bridge, but only with an 8 ohm load, and only if the supply voltage doesn't exceed ±35V. Adding external power transistors makes it possible to use LM3886 power amps in bridge, but the overall circuit ends up being fairly costly, and probably isn't an economical option. It's not even a 'simple' circuit, because the PCB layout ends up being quite complex, and the two (or more) power transistors and the IC itself all need to be on the heatsink. Using multiple heatsinks just makes the mounting process harder and more expensive.

Where it's appropriate, it's advisable to use an external balanced line driver circuit to derive the two signals. One signal is not inverted, while the other is inverted. The two signals are in anti-phase, so the effective voltage across the speaker is doubled, which provides four times the power - in theory. Almost invariably, the combination of low impedance load and high current demand from the power supply means that a pair of 50W amplifiers may only deliver 150W, and not the 200W you expected.

In addition, each amplifier 'sees' half the impedance, so with an 8 ohm speaker, the load on each amplifier is equivalent to 4 ohms. With all IC power amps, this increases their internal dissipation, and with sustained high power operation the ICs' internal thermal protection may cause one or both to shut down. This can cause a real problem if one shuts down before the other (which is almost a certainty), and the IC that's still operational tries to force current into the output stage of the other. This may cause the IC that has shut down to fail, as they are not designed to sink current. There is no information anywhere to suggest that the common ICs are 'safe' in shutdown mode, and it's normally not a consideration because the IC is shut down.
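The bridging arithmetic can be laid out numerically. The 25V peak swing used below is an assumed round figure for a single amplifier, purely to show the relationships:

```python
# BTL arithmetic: the differential swing across the load doubles, power
# in theory quadruples, and each amplifier sees half the load impedance.
# v_peak = 25 V is an ASSUMED single-amp swing for illustration.
v_peak = 25.0
load = 8.0                               # speaker impedance, ohms

p_single = v_peak ** 2 / (2 * load)      # one amp, sine wave
p_bridge_theory = 4 * p_single           # doubled voltage -> 4x power
z_per_amp = load / 2                     # equivalent load seen by each amp

print(f"single amp : ~{p_single:.0f} W into {load:.0f} ohms")
print(f"bridged    : ~{p_bridge_theory:.0f} W (theory); each amp sees {z_per_amp:.0f} ohms")
```

The 'in theory' qualifier matters: as noted above, supply sag and the halved load impedance mean the real figure falls short of the 4× prediction.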
Along with parallel IC operation, my suggestion is: don't attempt bridging with IC power amps unless you test every possibility very carefully before you connect it to your speakers. This is doubly true if you add booster transistors.
Many years ago, Peter Walker (of QUAD, UK fame, 1916-2003) astonished everyone with the 'current dumping' amplifier, the QUAD 405, released in the mid 1970s. It used a low power Class-A amplifier, and added 'dumping' transistors to provide the current when the small amp ran out of power. There were many people (including well qualified engineers) who doubted that it could possibly work, and arguments raged in magazines for many years after it was released. It's doubtful if the arguments have ever actually stopped, and there are a lot of conflicting opinions on the Net to this day. Admittedly, much of the current criticism relates to the noise level (high by modern standards), 'limited' low frequency response (-3dB at ~15Hz) and rather aggressive current limiting, but that's another story.

An iconic article on the subject was written for Wireless World magazine in 1978. Titled 'Current dumping — does it really work?', it was written by J. Vanderkooy and S. P. Lipshitz (University of Waterloo, Ontario). There was much theoretical analysis, but to take measurements they had to modify an audio generator to get below 0.002% THD. The current dumping principle was effectively validated, but the arguments didn't stop. Notwithstanding any of the above, a similar principle can be applied to a boosted IC power amplifier, as shown below.
Figure 6 - Current Dumping Booster
Don't expect this to equal the QUAD 405 or any of the later models that used the same technique, but distortion at 10kHz is reduced from around 0.5% to 0.04% (based on a simulation - not a measurement). An order of magnitude distortion reduction is definitely worthwhile. The 1.5µH coil will need around 17 turns of enamelled wire on a 5mm former, wound with a coil height of no more than 12mm. The wire should be at least 1mm diameter to limit power losses due to its resistance. With 1mm wire, the resistance should be just over 0.02Ω. However, see the measurement results described below - in reality there is not much (if any) difference! All the more reason to be wary of simulation results.
It won't make a great deal of difference if the inductance is a little more or less than 1.5µH, because the limiting factor is the IC power amp's open loop bandwidth. Unlike the QUAD 405 (etc.), the open loop gain of an LM3886 at 100kHz is less than 40dB, and there's not enough feedback for it to effectively minimise the crossover distortion of the unbiased output transistors at higher frequencies. I also tried using a 10µH inductor, but that increased the distortion quite dramatically.

While adding the one extra part (the inductor) will take up some space, the reduction of distortion at high frequencies may still be considered worthwhile, and might make the difference between a very ordinary amplifier and one that will satisfy a great many constructors. If you look at the available literature on the topic of current dumping, it's claimed that a bridge using two reactances and two resistors is required, but this isn't necessarily the case. The fundamental part of the process is to 'slow down' the current delivered by the 'current dumping' transistors to the extent that the rate of change can be accommodated by the IC's feedback network. By doing so, there are (at least in theory) no rapid transitions that the feedback can't control, and distortion is reduced accordingly.

The distortion in the current waveform from the emitters of Q1 and Q2 is quite high (around 2.5% at 10kHz), but the current through R5 is 'adjusted' via the feedback network to compensate. It's inevitable that the total distortion is dependent on output level (the figures quoted above are at close to full power). As the level is reduced the distortion will increase, but it's not as drastic as you'll measure with the simpler arrangements (without L1). Ideally, the power amp IC should have a much wider bandwidth than is available from any existing device, but that's not an option, so performance is limited. However, the circuit shown in Figure 6 will outperform all of the others, especially at (slightly) higher frequencies.

I suggest that you don't expect the ultimate fidelity from the circuit shown, but it may be better than the more basic circuits shown above. The only other way to achieve low distortion is to bias the output stage, but this adds a great deal of complexity, and doing so makes the final circuit almost as complex as a discrete design, but without the advantages thereof.
The TDA7293 offers an intriguing option, where another TDA7293 IC is used as a 'booster', utilising only the power stage in the second IC. This is described in the datasheet, but the end result is not inexpensive and shouldn't be necessary for the majority of applications. Also described is a Class-G (multiple supply rail) design, with external transistors in a fairly complex arrangement that I doubt any hobbyists have built (and likely no manufacturers either). Since these designs are shown in the datasheet, I don't intend to duplicate them here.

Almost any power amp IC can be used with booster transistors, so for a smaller amp you could use an LM1875 for example, allowing it to deliver more power. The usefulness of this is debatable, since you'd typically only use that device when you only need low power (up to 25W or so), and obtaining more power is limited by the device itself and its supply voltages. There will be an advantage if you wish to drive a 4 ohm load, because the internal current limiting normally only allows the same maximum power into 4 ohms as you get with 8 ohms. With a supply voltage of ±25V (recommended maximum), it should be possible to get close to 40W into a 4 ohm load if booster transistors are added. In terms of cost and difficulty, you'd be better off using an LM3886 (at ±25V or so) instead, as the total cost will be about the same and construction complexity is reduced.

The final alternative is a fully discrete design. The PCB is larger and there are more parts, but the output transistors are usually the only components that need to be mounted on the heatsink. Examples of discrete designs from ESP include P3A, P101 and (for high power subwoofers) P68. These are all well used designs, and generally create very few issues with construction. The numbers built by customers range from many hundreds to several thousand, and these amps are 'mature' designs. There are no surprises, and they all perform exactly as intended.
I built an amp using the techniques shown here, and managed to get over 112W into 4 ohms without any trouble (my variable power supply was the limiting factor, and I had to use a tone-burst to get the measurement). However, the overall distortion is not wonderful, particularly at low levels. From an output voltage (at 1kHz) of around 1.5V RMS up to 4V or so, distortion sat resolutely at a bit over 0.05%, which is just alright. At lower levels (where the output transistors don't conduct at all), distortion dropped back to around 0.05%, and it fell below 0.03% at higher levels approaching clipping. There is no doubt that this method works (and is better than the simple approach), but it's not something I'd suggest for a hi-fi system. If used for a subwoofer, you'll most likely never hear the distortion, as it reduces with decreasing frequency. I didn't run tests at less than 400Hz, but performance was noticeably better just by reducing the frequency by a bit over one octave (from 1kHz to 400Hz).

Somewhat surprisingly, the distortion measured at 400Hz both with and without the inductor shown in Figure 6 was almost identical. A larger inductance was tried (around 12µH), but that made the distortion worse, not better. The maximum distortion measured was 0.04% at 2.4V (RMS) output, falling to below 0.02% at levels below 1.5V. When driving 4 ohms, distortion was roughly twice that measured at 8 ohms, a not entirely unexpected result.
Figure 7 - Output And Distortion Waveforms At 3.4V Peak (2.4V RMS) Output
At any output voltage above around 6V RMS, the distortion fell again, being below 0.03% up to the point of clipping. Unfortunately, this means that the worst case distortion occurs at the levels people are most likely to be listening at, but as already noted, I do not recommend this technique for a full range amplifier.
Figure 8 - Output And Distortion Waveforms At 15V Peak (10.6V RMS) Output
The distortion waveform seen has some sharp spikes on the 15V waveform, which are created by the external transistors turning on. While they appear to be at the zero crossing point, they are actually a bit above, and correspond to the turn-on voltage of around 0.7V (peak). Despite the spiky waveform, the distortion measured only 0.02%, and this is a clear indicator of why it's so important to monitor the distortion waveform. Simply relying on the numbers can be very misleading when there are sharp discontinuities in the waveform.
So, the technique works pretty much as expected. I wouldn't bother trying to implement the 'current dumping' version (although it does no harm), and usage should be limited to loudspeaker drivers that have poor high frequency response. When testing, you may not notice the distortion - 0.04% is not particularly wonderful, but it's not exactly woeful either. Beware of very low impedances though, because the distortion rises almost in direct proportion to the impedance reduction. For example, at 400Hz and a 4 ohm load, expect distortion to increase to around 0.08%. I didn't try a 2 ohm load, but I'd expect the distortion to (roughly) double again.

One thing is certain - the SPiKe protection is effectively disabled, and it's possible to get a great deal more power than the IC amp was ever designed to deliver. However, the dissipation in the output transistors can get very high (70W peak, 25W average with a 4 ohm load and ±35V supplies), but also consider that you can get up to 110W output from an IC that's rated for a maximum of 68W (which it normally cannot achieve in real life). Meanwhile, the theoretical increase is just under 3dB, so you have to ask if it's worth the trouble.
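The output stage dissipation can be cross-checked with the standard Class-B sine-wave formulas. This is a deliberately simplified estimate (quiescent current, drive losses and supply sag are all ignored, and it lumps the IC and booster transistors together), so don't expect it to match the measured per-device figures exactly:

```python
# Simplified Class-B output stage dissipation for ±35 V rails into 4 ohms.
# Total stage dissipation = power drawn from the rails minus power
# delivered to the load, for a sine wave of peak amplitude v_peak.
import math

VCC, R = 35.0, 4.0

def stage_dissipation(v_peak):
    p_supply = 2 * VCC * v_peak / (math.pi * R)  # average power from both rails
    p_out = v_peak ** 2 / (2 * R)                # average power into the load
    return p_supply - p_out

v_worst = 2 * VCC / math.pi                      # level of maximum dissipation
print(f"worst case at {v_worst:.1f} V peak: "
      f"{stage_dissipation(v_worst):.0f} W total stage dissipation")
print(f"near clipping (33 V peak): {stage_dissipation(33.0):.0f} W")
```

Note the classic Class-B result: dissipation peaks well below full output (at 2·Vcc/π volts peak), which is part of why the 'worst' operating levels are the ones people actually listen at.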
In this case, I leave (most of) the conclusions to the reader. Adding booster transistors does allow an IC power amplifier to deliver more power into lower impedance loads than is otherwise possible, but it comes with caveats. The greatest of these is distortion. It won't be audible if the amp is used for a woofer (in a 3-way system) or subwoofer, but is likely to sound rather harsh if you try to use this technique with a full range amplifier. You also have to decide if it's even worthwhile doing - the IC can't be operated at a higher voltage than it's rated for, so power into most typical loads won't be improved by much.

Because there's no PCB available designed for boosted operation, there's a degree of messing around needed to get the circuit wired up, but it's not difficult to do. Make sure that the power transistors are mounted using thin mica insulators or Kapton tape, and use thermal grease to minimise thermal resistance. Do not use silicone pads - they do not have the thermal conductivity necessary to keep the transistor temperature to the minimum.

I've shown TIP35C (NPN) and TIP36C (PNP) transistors in each of the designs, because they are rugged and very reasonably priced. They don't qualify as 'premium' parts, and some may question the wisdom of using comparatively slow devices (fT is 3MHz). In reality, their speed is perfectly acceptable in this role, because they don't need to be fast. At less than AU$3.00 each, they are among the cheapest high-power transistors available. The 'C' versions are rated for 100V (far more than will ever be used), but the lower voltage 'A' and 'B' versions don't seem to be available any more. 2N3055 and MJ2955 or other TO-3 transistors can also be used, but they are harder to mount and more expensive than the TIP transistors.

Once the added complexity of mounting the power amp IC and the extra output transistors onto a heatsink is considered, you need to decide if there's any net gain. Most of the time, a discrete power amp will give better performance anyway, so the wisdom of boosting an IC's output power should be subjected to scrutiny before you start building. Using 'current dumping' is certainly worth trying, and it does give you more insight into things that are possible (whether or not the outcome is 'better'). The cost of the IC power amp (whether LM3886 or TDA7293) has to be considered, and when you add the other parts the cost difference may not be worthwhile.

Warning: Buying power amp ICs from on-line 'auction' sellers (i.e. not major suppliers) comes with some risk, as many are not the 'real deal'. Some could be factory rejects, and others may be counterfeits. There is no doubt that some are (claimed to be) genuine, but the sellers are hardly likely to say otherwise.

You need a substantial heatsink (preferably with a fan) if the amp is to be used for any kind of test system. I mention this because I've had a couple of enquiries recently about low frequency current sources, capable of up to 10A RMS into low impedance loads. An arrangement like this is close to ideal for such an application, because it's comparatively straightforward to implement. For sustained high currents (whether AC or DC), using parallel transistors is highly recommended, because it's too difficult to get a low thermal impedance between the transistor and heatsink with a single device. Even using three transistors in parallel isn't as silly as it may sound at first! The power supply becomes critical too, because the extremely high current involved places serious constraints on the power transformer, bridge rectifier and filter caps.
LM3886 Datasheet
TDA7293 Datasheet
Current Dumping Technology (QUAD - 'Our Story')
Current Dumping Power Amplifier - by P. J. Walker (Wireless World, December 1975)
Elliott Sound Products | CFL Intestines (with PFC)
This photo shows the internals of a power factor corrected CFL. While the PFC circuit is fairly crude (it's just an inductor), it reduces the big current spike to something that looks a bit more like a sinewave. Rather than use a fusible resistor, this circuit is fused using a tiny glass PCB-mount fuse. This is a much safer option, but still cannot protect the lamp from everything that could happen to it (by way of component failures).
Figure 1 - CFL Internal Components
The fuse is the small glass tube right at the very front of the PCB. I thought at first it may have been a thermistor, but the resistance is almost zero, indicating a fuse. The blue and yellow inductor is the PFC choke. For reasonable performance, it needs to be around 500mH to 1 Henry at 50Hz. The rest of the circuit is fairly traditional; the larger inductor (the big red one) is used to limit tube current, and there is a tiny transformer to provide transistor base drive at the back. Part of the latter can just be seen to the right of the electrolytic capacitor (white coloured toroid, with enamelled wire). The electrolytic capacitor is 10µF 400V, and is a 105°C type. The blue capacitors you can see are all rated at 400V - whether AC or DC is not stated. The PCB material is the cheapest you can get - it's a phenolic resin, which usually has paper reinforcement. The transistors are marked DK55, but no data could be located for them.

The lamp in question has seen somewhere between 200 and 500 hours of service, and is already noticeably dimmer than it should be. The area around the tube heaters is blackened (not visible in the photo), as is typical of a fluorescent lamp that is nearing end of life. The dark spot you can see in the top right of the photo is the transverse tube that joins the lamp sections, not a cathode black spot.

The lamp itself is a "Reliance" brand, and is rated at 20W. I don't recall when I bought it, but I haven't seen this brand on sale lately, so it seems to be one of the many marketing fatalities that have befallen CFLs in the last few years.
Figure 2 is the insides of an old CFL - typical of when they were first released (and yes, I did get a couple way back then - now I know why I kept this one after it became rather dim many years ago). At 475g, it is massively heavier than its incandescent equivalent. It was a long time ago, but as I recall, this lamp didn't last anywhere near as long as was claimed.
Figure 2 - Old Style CFL
The only technology involved here is how to cram a conventional fluorescent light into a small enough housing to warrant the term 'compact'. The circuit is identical to that of a conventional (straight tube) fluorescent lamp. You can see the ballast choke (inductor) and the starter unit in the photo. The ballast is a very neat fit between the bends of the glass tube, and is rigidly secured with a fairly heavy gauge steel plate that hooks onto the edge of the glass outer envelope.
Elliott Sound Products | Wasted Heat
A topic commonly raised by proponents of a ban on incandescent lights is that the generated heat is wasted. In many areas (even in Australia), the heat is not wasted at all - it simply adds to the output of other heat sources (radiators, reverse-cycle air conditioners, convection heaters, etc.).
Quite obviously, this doesn't apply when the outside temperature is 40°C (or even considerably lower), but even in temperate regions like Sydney, the little bit of extra warmth is perhaps usable for about 5 months of the year, or around 7 months in places like the UK. Small though it may be, having a 100W lamp switched on for a few hours will make some difference, even if only to make up for heat lost through window glass, ceilings, etc. In colder climates, the heat will hardly ever be 'wasted' - it is a usable form of additional heating for the home. Not much, but a number of people have brought this up on forum sites and elsewhere. It is not a 'silly' point as some have suggested. The heat does not simply go straight to the ceiling because hot air rises - most of the heat is radiated, and accompanies the light in exactly the same directions.
Any lamp that is outdoors wastes all of its heat output, so outside lights that are on for extended periods should be as efficient as possible. For a light that might be on for a few minutes every so often, the saving is obviously so small that it's of no consequence. For lights that are on for longer periods, you should ask yourself if they really need to be on at all. In many cases the answer will be no, so they should simply be switched off (too easy).
Although rather trivial in the greater context, this is a point that has caused some fairly bitter disputes among experts (self appointed or otherwise). A document (apparently) exists that was produced by the 'Building Research Establishment' (BRE). In this, there was some information about the heat from incandescent lamps not being wasted at all. Unfortunately, I don't have the document or access to it, but there is another document [8] produced by the 'Lighting Industry Federation' (LIF), that attempts to refute the document from BRE. Without seeing both, it is obviously impossible to determine who is (or might be) right, but it is interesting that so much effort was spent to refute the argument. Much of the effort seemed to focus on the fact that most heat is radiated, so doesn't heat the air. Countless people in countless locations use electric radiators (bar heaters, or whatever other names may exist for them). We all know that they do manage to make us feel warm - this despite the fact that most of their heat is also radiated. No-one has claimed that incandescent lamps will replace heating, but their heat is not necessarily wasted when it's cold.
As to whether the "wasted" heat is more expensive than other forms of heating depends on what is used, where it is used, and many other factors. This small point could easily accommodate a full research programme, however, it is probably fairly trivial in the greater scheme of things. Some people use low power incandescent lamps to maintain a constant temperature for bird hatching - the heat is most certainly not wasted there. The same process is sometimes used to keep welding electrodes warm (and therefore dry) to improve weld quality, and no doubt many other examples can be found. CFLs have enough wasted heat themselves to perform the same duties, but temperature regulation is a lot harder to achieve. Just thought I'd mention that.
Most of the arguments (both for and against) the wasted heat issue are based on very limited existing data - limited because this is a new argument, and is without precedent. Mathematical extrapolation may be used to 'prove' that it is cheaper/more expensive to use supplemental heating, yet no real tests or trials seem to have been done to verify that the facts substantiate the claims. In a court of law, almost every argument either way would be thrown out as hearsay or conjecture, but no such limitations apply to people with a vested interest in the competing camps.
Elliott Sound Products | Power Factor (Continued)
An anecdote on the power factor issue was sent to me ... Apparently a company in the UK installed a large number of CFLs in a building where the lighting was primarily on one phase. It burnt out the neutral link in the fuse box and caused a small fire! The high peak current of all non-power factor corrected CFLs can cause problems where they are used in large numbers. For example, 25 x 75W (incandescent) lamps will draw 7.8A - just within the 8A rating for lighting circuits in Australia. The power factor is 1 because of the resistive load. If replaced by 25 x 13W CFLs, although the RMS current is lower, the peak current is over 10A (based on the 410mA peak current as shown in Figure 11). No problem at all so far, but ...
What if the installer decides that many more lamps can be connected to the circuit because of the lower power? Based on the claimed RMS current for a typical 13W CFL (~95mA is typical), it would seem that you can run 80 CFLs on the same lighting circuit (80 x 95mA = 7.6A). Unfortunately, the peak current is 80 x 410mA = 32.8A. The wiring won't overheat, but in-line connections (junction boxes), switches and other terminations may fail because they are expected to handle the high peak current continuously - well above their design ratings (especially if a connection is very slightly loose). Remember too that the switch-on surge (inrush current) will be many times higher again - if we assume only 4A (fairly low in reality), the first cycle inrush current could be as high as 320A if all lamps are turned on at once!
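The arithmetic above can be sketched in a few lines. The 95mA RMS and 410mA peak figures are the article's own measurements, and the 4A per-lamp inrush is the deliberately low assumption stated in the text:

```python
# Loading arithmetic for 80 x 13 W CFLs on one lighting circuit.
N_LAMPS = 80
I_RMS = 0.095     # A, claimed RMS current per 13 W CFL
I_PEAK = 0.410    # A, measured peak current per CFL (Figure 11)
I_INRUSH = 4.0    # A, assumed first-cycle inrush per lamp (deliberately low)

total_rms = N_LAMPS * I_RMS        # what the circuit "sees" on paper
total_peak = N_LAMPS * I_PEAK      # what terminations must handle every half-cycle
total_inrush = N_LAMPS * I_INRUSH  # worst case, if all lamps switch on together

print(f"RMS:    {total_rms:.1f} A")    # 7.6 A - looks fine for an 8 A circuit
print(f"Peak:   {total_peak:.1f} A")   # 32.8 A - repeated every half-cycle
print(f"Inrush: {total_inrush:.0f} A") # 320 A - first cycle only
```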
Things can be worse if the lighting is spread across a 3-phase system. With resistive loads, the current in the neutral wire will be zero if all 3 phases have equal loading, or up to a maximum of the current in one phase if the load is spread over one or two phases (or is not balanced). With non-linear loads, the neutral current can be as much as double the phase current. This is a real problem with non-linear loads, because many wiring codes allow the neutral conductor to be smaller than each of the phase conductors!
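The neutral-current claim can be checked numerically. This is an illustration only - each phase is given a hypothetical 'peaky' current (a fundamental plus a strong 3rd harmonic), not a measured CFL waveform. The fundamentals cancel in the neutral, but the 3rd harmonics of all three phases are in phase and add directly:

```python
# Neutral current of a 3-phase system with identical non-linear loads.
# Waveform is an assumed fundamental + 3rd harmonic, for illustration.
import math

F = 50.0    # mains frequency, Hz
N = 2000    # samples over one full cycle

def phase_current(t, shift_deg, i1=1.0, i3=0.8):
    s = math.radians(shift_deg)
    w = 2 * math.pi * F * t
    return i1 * math.sin(w - s) + i3 * math.sin(3 * (w - s))

def rms(samples):
    return math.sqrt(sum(x * x for x in samples) / len(samples))

times = [k / (N * F) for k in range(N)]  # one 20 ms cycle
neutral = [sum(phase_current(t, d) for d in (0, 120, 240)) for t in times]

print(f"Phase RMS:   {rms([phase_current(t, 0) for t in times]):.3f}")
print(f"Neutral RMS: {rms(neutral):.3f}")  # ~1.87x the phase current here
```

With a stronger 3rd harmonic component the neutral RMS approaches twice the phase RMS, which is the worst case the text describes.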
Sealed Luminaire Test
Because I didn't have a stray light fitting I could use for the test, I fabricated a test jig that would at least show the problem first hand. I ran two versions of the test simultaneously, using two temperature sensors. The temperature was measured at 10 minute intervals. The main test had the CFL set up as shown below, with a bead thermocouple taped to the lamp socket. This was installed in a housing, as shown further down. The second set of test results was obtained with a probe thermocouple used to measure the air temperature inside the test fitting, with the very tip of the probe just touching the metal top cover. The probe was inserted into the hole where the bead thermocouple lead exits the housing.
Figure 4 - CFL in Socket, With Thermocouple Attached
The housing is the lens from an outdoor fitting, but the base section is still attached to the house, so I had to find another base. Using metal gives an optimistic final figure because it can conduct some of the heat to the outside air, but most fully plastic fittings (or a fitting attached to the ceiling) will give higher final temperatures than I achieved.
Likewise, the fitting is much larger than most, and the CFL somewhat smaller (lower power). A higher power CFL in a smaller enclosure will get a great deal hotter. I conducted the test in my workshop, where the ambient temperature was measured at 23°C at the beginning of the test, and the test fixture was just above floor level.
Figure 5 - Complete Test Fixture
The approximate dimensions are shown. The housing shown contains about 3 litres of air, and the lamp socket just sits in the hole at the top (it is not airtight). Before the test, I ensured that the CFL was at ambient (room) temperature. Remember that this is a highly optimistic test - not too many CFLs are operated in such a large sealed enclosure with a metal top, and a rather tiny 10W lamp as the test subject.
Time (minutes) | Bead (°C) | Probe (°C)
0 | 23 | 23
10 | 48 | 34
20 | 55 | 39
30 | 58 | 40
40 | 58 | 42
According to countless Q&A sites, it would be considered perfectly alright to install a 23W CFL in this enclosure, yet the test shows quite clearly that even a 10W unit will reach or exceed the typical maximum ambient temperature of 50°C in just over 10 minutes (based on the bead thermocouple). Even the highly optimistic figures here show that an electrical power dissipation of only 9W (assuming a generous 10% overall efficiency) is enough to cause a significant reduction in the life of the electronics. Imagine a 23W unit - now dissipating over 20W as heat - in the same enclosure. It will get a great deal hotter, and even the optimistic probe thermocouple will indicate that the maximum ambient temperature is easily exceeded.
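As a rough cross-check (my own assumption, not part of the original test), the bead readings are consistent with a simple first-order thermal model T(t) = Tamb + ΔT × (1 − e^(−t/τ)), with the time constant fitted to the 10-minute reading:

```python
# First-order thermal model fitted to the bead thermocouple data.
# This is an illustrative assumption, not the article's analysis.
import math

T_AMB, T_FINAL = 23.0, 58.0   # °C, from the table (start and settled values)
dT = T_FINAL - T_AMB          # 35 °C total rise
tau = 10 / math.log(dT / (T_FINAL - 48))  # fit to the 10-minute reading: ~8 min

for t in (0, 10, 20, 30, 40):
    T = T_AMB + dT * (1 - math.exp(-t / tau))
    print(f"{t:2d} min: {T:5.1f} °C")  # compare with the table: 23, 48, 55, 58, 58
```

The fitted time constant of roughly 8 minutes also explains why the socket reaches the 50°C limit so quickly - most of the rise happens in the first quarter hour.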
Further tests show that the internal temperature will typically be 20-25°C higher than the external (ambient) temperature, so for a recommended maximum ambient of 50°C the internals will be at around 70-75°C. This just qualifies as a safe operating temperature, and the electronic components will probably survive for the claimed life - remember that only 50% of lamps need to survive for the full rated life - the remainder will have died already.
The original fitting that the lens was from was rated for a 100W incandescent lamp. The heat won't cause the incandescent any problems, although as you can see, the lens has discoloured quite badly from when it was installed (it's supposed to be clear). In case you were wondering, the lens was removed because it had discoloured enough to reduce the light output noticeably - the original incandescent lamp is still installed, but without the cover (it's been there for over 10 years!).
This test is not especially rigorous, and it was only ever designed to give me an idea of how much power can be dissipated in a small enclosure without exceeding the maximum permissible ambient temperature. It is important that the reader understands that in the context of all electronics circuitry, the ambient temperature is that measured in close proximity to the electronics - it does not mean the ambient temperature in the room. If electronic circuitry heats up its own immediate environment, then that is the ambient temperature that the individual components experience.
The temperature inside the plastic housing of the CFL's electronics will be 20-25°C higher than measured by either probe or bead. A higher power CFL in a smaller (or even the same size) housing that is completely airtight (as required for outdoor use) will get far hotter (and faster) than shown in the table. Any claim that less than 50% of existing light fittings are suitable for use with CFLs is completely justified on the basis of this test. Based on looking at available fittings as of early 2013, I expect the claims are very optimistic, and I'd be surprised if even 30% of fittings are suitable. For outdoor fittings, make that 1% - virtually none!
Dimmer Phase Angle Test
These results were obtained from a circuit simulator, which allowed me to capture all the data I needed, without having to use test equipment attached to the mains. The results are not quite the same as with a real lamp, because the filament actually changes its resistance with temperature. The table below shows the theoretical power, current and power factor, ignoring the changing resistance.
Phase Angle | Volts RMS | Current RMS | Power | Power Factor
18° | 19.28 V | 33.47 mA | 645.3 mW | 0.08
36° | 52.93 V | 91.89 mA | 4.86 W | 0.22
54° | 92.53 V | 160.6 mA | 14.86 W | 0.39
72° | 132.9 V | 230.7 mA | 30.65 W | 0.55
90° | 169.7 V | 294.6 mA | 50.00 W | 0.71
108° | 199.9 V | 347.0 mA | 69.35 W | 0.83
126° | 221.4 V | 384.5 mA | 85.14 W | 0.92
144° | 234.1 V | 406.4 mA | 95.14 W | 0.98
162° | 239.2 V | 415.3 mA | 99.35 W | 0.99
180° | 240.0 V | 416.7 mA | 100.00 W | 1.00
For the simulation, I used a 100W load, based on a supply voltage of 240V. This gives a resistance of 576 ohms, which is 100W at 240V. The phase angle is a measure of how many degrees of each half-cycle the dimmer allows through, and is shown in 10 steps. The power factor is as shown in the table above, and at most usable settings, it's no worse than a typical CFL. Since those pushing for a ban of incandescent lamps have never looked at power factor anyway, to them it is presumably irrelevant.
To explain the table, a cycle of mains power is traditionally divided into 360°, so a half-cycle is 180°. I used 10 steps of 18° for the table, but real dimmers can use any phase angle as set by the control - they are not limited to discrete steps.
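The table can be reproduced from the standard expression for the RMS value of a phase-controlled sinewave. This sketch assumes an ideal dimmer and a fixed 576 ohm resistive load (i.e. ignoring the filament's changing resistance, as the simulation does):

```python
# RMS voltage, current, power and power factor of a phase-controlled
# sinewave into a fixed resistive load (100 W lamp at 240 V).
import math

V_SUPPLY = 240.0   # supply RMS voltage
R_LOAD = 576.0     # ohms, 100 W at 240 V

for theta_deg in range(18, 181, 18):       # conduction angle per half-cycle
    th = math.radians(theta_deg)
    # RMS of a sine that conducts for the last 'th' radians of each half-cycle
    vrms = V_SUPPLY * math.sqrt(th / math.pi - math.sin(2 * th) / (2 * math.pi))
    irms = vrms / R_LOAD
    power = irms ** 2 * R_LOAD
    pf = power / (V_SUPPLY * irms)         # true power / apparent power
    print(f"{theta_deg:3d}°  {vrms:6.1f} V  {irms*1000:6.1f} mA  {power:6.2f} W  PF {pf:.2f}")
```

Running this reproduces the table row for row, which is also a handy way to confirm any individual entry.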
'Normal' Failures
From even a cursory look at the components used in most CFLs, it is obvious that the cheapest possible parts have been used, and many of these parts are simply not suited to the voltage, current and temperature to which they will be subjected. The use of 400V DC capacitors across the 230-240V mains is of particular concern, since it is known to a great many technicians and engineers that these capacitors will (not might) fail in this position. This is not a problem with 120V mains, as the capacitor can usually withstand the lower voltage without failure. Since they only have to last for a few thousand hours, the manufacturers obviously think that's enough.
No-one seems to care if the lamps fail with a flourish, but such failures will damage consumer confidence very quickly. Some manufacturers claim that their 400V DC capacitors are rated to 220V AC. Since the nominal mains voltage in Europe and Australia is 230V, even the makers' rather adventurous rating is exceeded anyway. Also, no-one seems to have noticed that using these caps at high frequencies imposes a derating curve from as little as 2kHz. A 33nF 400V Vishay or Philips MKT polyester cap is rated at only 32V AC at 30kHz. As the temperature increases, the voltage rating is reduced even further. These caps are not safe, and should not be used if their voltage rating is exceeded (which it is, in almost all cases). A data sheet for these caps is available from any number of sources. Check for MKT370 data sheet(s).
Although not actually stated in the specifications, the 220V AC rating is not for continuous use. If it were, why do the same companies make other - and more expensive - capacitors that are designed for connection across the mains? Simple: the 400V DC caps may be used in all sorts of equipment where AC voltages will be present for periods of time, but will not be continuous. In most cases, the AC voltage across the capacitor will be minimal if it's used for audio coupling in a valve amplifier, for example. These circuit applications will also be relatively high impedance (limiting the maximum current flow), and designed so that a capacitor failure will not cause clouds of smoke. The device will stop working with a blown fuse perhaps, but normally nothing else will happen. This is in contrast to the use of the cheapest possible parts where there is little or no limit to the maximum current, other than the house fuse or circuit breaker.
Some of the photos shown here are courtesy of Doug Hembruff's Impact website. The examples are of US or Canadian origin, but the failure modes are universal. There are additional photos on Doug's site, and similar pictures are scattered across the Internet.
According to various industry groups, these failures are considered normal. As noted in the main article page, the CFL is the only product ever offered to the public that includes acrid smoke and severe burning of the outer casing (caused by component failure) as a supposedly normal end-of-life experience for the purchaser. Any other consumer product that failed in this manner would be subjected to an instant suspension of further sales, and a total recall of affected models.
The manufacturers and distributors would also be subjected to fairly intense scrutiny, since the product is obviously faulty. Why is this not the case with CFLs? I cannot understand how a product can fail in this manner, and not only does no-one seem to care, but they don't even think there's something seriously wrong.
In the US, even Underwriters Laboratories (UL) claims that smoking and overheating is a common occurrence for this type of lamp at end of life. It beggars belief that anyone, anywhere, would call this normal.
Severe Burning Around Tube Base
The above lamp (Commercial Electric - North America region) overheated and burnt the plastic housing, filling the user's bedroom with acrid smoke. The lamp did not shut down, and continued to smoke until power was removed. This lamp was directly over the user's bed - very fortunate that he was there to switch it off before anything worse happened. This failure mode seems to be fairly common, and even a quick check will reveal just how hot the filament ends of the tube become. In normal use, the filaments dissipate at least 3W each and are enclosed in the glass tube - they get very hot indeed.
Would any lamp that failed in this way drip burning plastic? Have you ever seen a guarantee on the pack that the lamp will not (and cannot) catch on fire, or drip burning plastic, glue, or anything else?
I know I've never seen any such guarantee. Note too that the neck of the tube got hot enough to crack the glass near the melted area. There is no way that this (or the following) failure can be considered normal - as long as this continues, CFLs are potentially very dangerous products. To allow the general public access to them is crazy - they should be restricted to professionally trained lighting experts, not sold at supermarkets.
Hole Burned Through Base
The above photo is of another Commercial Electric CFL from Home Depot in the US. In this case the hapless user had no luck for some time when trying to contact the supplier. In more or less the user's own words ... "Commercial Electric was not too helpful, in fact I could tell [the woman on the phone] was reading from a script when I described my trouble. She said it was due to the ballast becoming loose during shipping and normal use. To me that is a defect. I was not that concerned about the warranty but more for safety."
"Normal use" does not cause a hole to be burned right through the casing. The position of the hole is about where I'd expect a fusible resistor to be located, so it is possible that this lamp (and others that have the same problem) drew excessive current - perhaps because a dimmer was in the circuit. Unfortunately, there is no additional information or a photo of the insides, and no way to know for certain.
Since smoke and burnt plastic are apparently "normal", perhaps our legislators will modify existing standards for other products - it could become very exciting if all consumer goods were allowed to fill rooms with smoke or burn holes in the case as a normal way of telling us they no longer work.
In the US, several CFLs actually were subjected to recalls because of overheating and melting/burning plastic. One can only assume that the affected lamps were really bad, because what is shown above obviously wasn't enough.
The next three photos show what can happen when a CFL is installed into an unventilated luminaire. The individual housings have no ventilation holes at the back, so there can be no airflow through the fitting. This ensures that the temperature will increase until the fitting achieves thermal equilibrium, but this won't happen until the internal temperature is in the order of 100°C.
Unsuitable Luminaire
The result of installing 23W CFL lamps was quite predictable, although the actual nature of the failure was somewhat unusual. The CFL literally exploded, and vigorously expelled the body of the lamp from the housing, leaving only the Edison screw base.
Result Of CFL Explosion
The ejected lamp is seen above. Apparently, the mains wiring insulation had degraded badly due to the heat, and one of the mains wires was in contact with the bridge rectifier diodes. Eventually, the insulation failed and caused a direct short-circuit between active and neutral. The exploding wire developed enough pressure inside the electronics housing to literally blow it apart. The CFL guts were ejected and ended up on the floor, along with multiple glass fragments (and a small quantity of mercury).
Close-up Result Of CFL Explosion
Vaporised copper, missing diode lead, a totally vanished mains lead and general mayhem are clearly visible. One wonders if this falls into the category of a 'normal' failure mode. One thing it does highlight in no uncertain terms is that CFLs and sealed/unventilated light fittings create a recipe for disaster.
This kind of failure is directly attributable to the lack of public awareness and education, poor instructions and usage information on the package, and numerous sites that state that compact fluorescent and incandescent lamps are directly interchangeable without any precautionary information whatsoever.
(Photos supplied by Phil Allison - the lamp shown exploded in his neighbour's kitchen. Pix and text added 17 December 2012)
Elliott Sound Products - Instrumentation Amplifiers Vs. Opamps
The term 'instrumentation amplifier' (aka INA or 'in-amp') is not always applied correctly, sometimes referring to the application rather than the architecture of the device. It used to be that any amplifier regarded as 'precision' (e.g. providing input offset correction) was considered an instrumentation amplifier, as it was designed for use in test and measurement systems. Instrumentation amplifiers are related to opamps, as they are based on the same basic (internal) building blocks.
However, an INA is a rather specialised device, and is generally designed for a specific function. They are not basic 'building blocks' that can be interchanged at will. INAs are not opamps, because they are designed for a rather different set of challenges. You can build an INA using opamps, or using a separate (including discrete component) front-end. Project 66 is a perfect example - it's a true INA, but in this case, specifically optimised for use with low level microphone inputs.
If you need particularly low and/or predictable DC offset performance, then it's better to use an off-the-shelf INA rather than try to make one using opamps or a discrete front-end. Because everything is in one package, thermal performance (in particular) is usually better than you'll ever get with a 'home made' solution. However, there's no reason not to use opamps for a roll-your-own INA, especially if the DC performance isn't critical. For audio applications, it's often easier (and significantly cheaper) to use opamps rather than a dedicated INA.
Instrumentation amplifiers are particularly useful when a very high CMRR ('common mode rejection ratio', sometimes shortened to 'common mode rejection' or 'CMR') is necessary. A common mode signal is one that appears on both input signal wires at the same voltage, and is most commonly noise picked up by long cable runs. There are other situations where CMRR is important too, especially in instrumentation systems, and this is where the name 'instrumentation amplifier' comes from.
An instrumentation amplifier is a purpose-designed device, and unlike opamps there is no user accessible feedback terminal. The gain can be controlled by a single resistor, and the reference can be earth/ground (as is normally the case), or some other voltage as required for your application. The specifications for INAs are usually quite different from those for opamps, because of the way they work.
There are some specs that are the same as (or similar to) those you'd expect to find with opamps, but others are quite specific to the INA. Supply voltages are commonly up to ±18V, and some can operate with only ±2.25V supplies [ 1 ], others up to ±25V [ 2 ]. Unlike opamps (which mostly have 'industry standard' pinouts for any given number of opamps in a package, typically 1-4), you cannot expect to find the same with INAs. Some will be the same as other similar devices, but many are not (even from the same supplier).
The general form of an INA is shown below. No values are given, because they vary from one device to the next. The feedback resistors are internal, and only one resistor is needed to set the gain. Some include an internal resistor to preset the maximum recommended gain - typically 100 (40dB) or 1,000 (60dB). Some INAs have offset null connections to allow the DC offset to be minimised, but others do not.
Figure 1 - General Form Of An Instrumentation Amplifier
Many INAs are specified for low or very low noise, but, like opamps, there are others that are more pedestrian. One area where most excel is common mode rejection, and this is the thing that sets an INA apart from a seemingly similar opamp circuit. This is not to say that equivalent performance can't be obtained from opamps, and as noted above this is often easier and cheaper. However, even the simplest INA made from opamps requires a dual device plus one other opamp (along with feedback resistors etc.), and the PCB real estate needed is far greater than a dedicated INA. Depending on the specifications you need for the application, prices range from under AU$5.00 to AU$50.00 each or more, so you need to select very carefully.
There are two main configurations used for commercial INAs. One is as shown in Figure 1. INAs all have balanced inputs, but simply having a balanced input does not make a circuit into an INA. The balanced input stage is used internally with many INAs, so it has to be examined first.
A standard balanced input stage is shown below. While this is the basis of most (but not all) INAs, it is not an instrumentation amplifier in its own right. There are several well known and understood limitations of this circuit, with a major problem being its input impedance. R1 and R3 set the impedance, but R2 and R4 must be scaled accordingly to obtain the desired gain. For example, if you needed an input impedance of 100k and a gain of 10, R1, R3 would have to be 50k, and R2, R4 would then need to be 500k. This creates a large noise penalty. As shown, the gain is unity, and that applies whether the input is balanced or not. However, the gain for the positive input is unity only if the unused negative input is grounded.
Another problem is that the input impedances are not the same for each input. This isn't always an issue, but it's real and needs to be understood in the context of your requirements. Firstly, we'll assume a perfectly balanced ground referenced input, so the voltage applied to each input pin is exactly half the total (±500mV). The impedance of the positive input is clearly defined as being 20k, because it's made up by R1 and R2, which are effectively in series (ignore the input impedance of the opamp itself).
The negative input is another matter, because there is feedback around the opamp and applied to the opamp's -ve input pin. With the balanced input, the impedance seen at the inverting input by the source is 6.67k. This somewhat unlikely sounding figure is based on the voltage across R3. At the input end, it may have (say) 0.5V, but at the other (opamp inverting input) there's -250mV. The current through R3 is therefore not what you'd expect with 0.5V and 10k (50µA), but is 75µA, giving an apparent resistance of 6.67k.
Figure 2 - Balanced Input Stage
If the source is fully floating (not ground referenced) such as a microphone capsule or other floating source, the impedance imbalance is of no consequence. The current into each input is the same, with (say) ±50µA flowing into each for the 1V source shown (50µA because the +In terminal has a 20k input impedance). The voltages measured at each input are radically different though, with the full 1V peak signal appearing at the +In terminal, and (close to) zero at the -In terminal (a few hundred microvolts is typical, opamp dependent). If you find this hard to grasp I can't blame you, as it initially seems to defy the laws of physics. I recommend that you build the circuit so you can verify that what I've claimed is, in fact, quite true.
The impedance at the +Ve input is 20k (as expected), but on the -Ve input it's almost zero (but only with a fully floating source). You'd expect it to be 10k (due to R3), but that isn't the case. Note that this anomalous situation can only occur when the source is fully balanced, having no ground reference. Balanced (floating source) input impedance is 20k, which is what you would hope for, but may not expect based on the voltages measured. Once the input source is ground referenced (e.g. a centre-tapped transformer or active balanced output circuit), the input impedances become 20k (+Ve input) and 6.67k (-Ve input, and still not as expected, but the reason is described above). These issues are fairly well known, but not always remembered when it's necessary to do so.
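The 6.67k figure can be verified with a few lines of arithmetic, assuming an ideal opamp, R1 = R2 = R3 = R4 = 10k, and the ±0.5V ground-referenced balanced drive described above:

```python
# Node analysis of the balanced input stage (ideal opamp assumed).
R = 10_000.0       # all four resistors equal, unity gain
v_neg_in = +0.5    # source voltage at the -In terminal
v_pos_in = -0.5    # the other half of the balanced signal

# The +In divider (R1, R2) puts half of v_pos_in on the opamp's +pin,
# and feedback forces the -pin to the same voltage (virtual short).
v_pin = v_pos_in * R / (R + R)     # -0.25 V at the inverting input pin

i_r3 = (v_neg_in - v_pin) / R      # 75 uA, not the "expected" 50 uA
z_neg = v_neg_in / i_r3            # apparent impedance seen at -In

print(f"{i_r3 * 1e6:.0f} uA, {z_neg:.0f} ohms")  # 75 uA, 6667 ohms
```

The extra 25µA flows because the far end of R3 is driven to −250mV rather than sitting at zero, which is exactly the mechanism the text describes.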
For common-mode (noise) signals, the impedances are balanced, despite everything seemingly indicating otherwise. This is the only thing that we are generally worried about when differential input amplifier circuits are used. The low output impedance of the balanced line driver swamps any variations that are seen at the inputs of the balanced input. However, CMRR reduces with increasing frequency, because the opamp has less open-loop gain at high frequencies (due to the internal compensation capacitor).
The impedance imbalance means that this circuit cannot be considered to be a 'true' INA. One of the requirements of an INA is that input impedances should be equal. While the circuit shown is useful, and it works well, never imagine that it can be used in place of the real thing. By all means use it for balanced microphone or line inputs, but not where any kind of precision is necessary. This is especially true for any application where the input impedances must be (close to) identical, or where good CMRR is needed at high frequencies. Be aware that even INAs will show degraded CMRR at high frequencies, because they also require internal compensation and they don't have 'infinite' bandwidth.
The next version is the same as the balanced input circuit described in Project 87. It's used in several commercial INAs, but there are a few limitations you need to be aware of. The main limit is minimum gain - unity gain is not possible. There is also a limit to the common mode voltage that can be accommodated. This requires explanation, but fortunately it's not as hard to understand as the Figure 2 stage.
Figure 3 - 2-Opamp INA Circuit
This circuit is a 'true' INA in most respects, and although it is used in some commercial ICs it is a compromise. It has the advantage of using only two opamps (rather than three), but in terms of IC fabrication that's hardly a problem. RG can be included (or omitted), and if it's there it increases the gain. Using 10k for RG increases the gain to 4, and 1k increases it to 22. Unfortunately, if it's not included, the gain isn't unity - it's two. The gain cannot be reduced to unity without attenuating the inputs, which will impose a potentially serious noise penalty.
Av = 2 + ( 2 × R3 ) / RG    where Av is voltage gain, and the R3 resistors are all equal
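The gain figures quoted above (2 with RG omitted, 4 with RG = 10k, 22 with RG = 1k) follow directly from the formula:

```python
# Gain of the 2-opamp INA, with R3 = 10k (all four resistors equal).
R3 = 10_000.0

def av(rg):
    # Av = 2 + (2 * R3) / RG; with RG omitted (open circuit) the gain is 2
    return 2 + (2 * R3) / rg if rg else 2.0

print(av(None))    # 2.0  (RG omitted)
print(av(10_000))  # 4.0
print(av(1_000))   # 22.0
```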
Of more concern is a situation where there is a significant common mode signal. The first opamp has a gain of two, and that applies whether the signal is differential or common mode. If there is a 1V common mode signal (i.e. the same voltage applied to both inputs at once), the output of U1 will have a voltage of 2V. This isn't changed by R7 (if used), but it does mean that the maximum peak common mode voltage is somewhat less than half the supply voltage. This is not a problem for the most part, because high common mode voltages are uncommon in the 'real world' (especially for audio), but it's something you need to be aware of.
You also need to beware of high frequency noise. The two opamps act in series for common mode signals, so the small propagation delay reduces the available CMRR at high frequencies. No opamp (or any other circuit) is instantaneous, so the useful range may be severely limited if very fast opamps are not used. For example, with TL072 opamps (as an example only), CMRR might be around 63dB at 50Hz, but it's reduced to only 37dB at 1kHz, and a rather woeful 17dB at 10kHz. This isn't always a problem though.
Otherwise, the circuit is genuinely useful, and it works well - provided you don't need unity gain or extended response for common mode signals. The input impedance is high (set primarily by the input resistors R1 and R2), and common mode rejection is as good as the resistor tolerance used for the 10k resistors. I've shown 10k resistors for all values of R3, but they can be any suitable value that doesn't overload the opamps. If reduced to (say) 2.2k, resistor thermal noise is reduced. Naturally, higher values can be used, but they will increase the noise level.
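The thermal noise comparison can be quantified with the Johnson noise formula en = √(4·k·T·R·B). The 20kHz bandwidth below is an assumed audio bandwidth, not a figure from the text:

```python
# Thermal (Johnson) noise of a 10k vs 2.2k resistor over an assumed
# 20 kHz audio bandwidth at room temperature.
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # K, approximately room temperature
BW = 20_000.0        # Hz, assumed audio bandwidth

for r in (10_000.0, 2_200.0):
    en = math.sqrt(4 * K_B * T * r * BW)
    print(f"{r/1000:.1f}k: {en * 1e6:.2f} uV RMS")  # 10k -> 1.82 uV, 2.2k -> 0.85 uV
```

Noise scales with the square root of resistance, so the 10k-to-2.2k change buys roughly a factor of two improvement from the resistors alone.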
This circuit works by subtracting the common mode signal from U1 with U2. If the signal is differential, the signal from U1 is added in U2, so a 1V input gives a 2V output. R7 increases the gain, but doesn't affect the CMRR. The gain equation isn't as straightforward as you might hope, because the circuit relies on several feedback paths.
The concept shown in Figure 1 is a 'real' INA in all respects. There are several benefits to this arrangement that are not available in the 2-opamp version. The input buffers can be operated at unity gain, giving the overall circuit unity gain as well. Gain is adjusted with a single resistor, and the gain formula is straightforward. However, you do need to know the values of R3 and R4, which are normally provided in the datasheet. They are nearly always all equal and commonly laser trimmed for high precision. Note that R6 is not connected to earth/ground by default, but is designated 'Ref', because it's the reference pin. It is usually (but by no means always) connected to the earth or system common (zero volt) bus in the equipment. Note that the 'Ref' pin must be connected to a (very) low impedance or CMRR will be degraded. The impedance must be low for all frequencies of interest, including the common mode noise component.
Figure 4 - 3-Opamp INA Circuit
Like the 2-opamp version, input impedance is set almost entirely by the external resistors. The CMRR of the circuit depends on the performance of U3 and the accuracy of R3-R8, assuming that U1 and U2 are (close to) identical, which is usually the case. Because both inputs are subject to the same delay, use of slow opamps does not impair the performance. You can build this circuit using opamps, but it will take up a great deal more space than an INA chip. Unless the resistors are 0.1% or better, you won't get the performance of a dedicated IC.
Simulated using TL072 opamps, the Figure 4 circuit provides better than 85dB of CMRR at all frequencies up to 10kHz. A better opamp for U3 will extend this, as its performance at higher frequencies is the limiting factor.
The gain is set by RG, but you must know the value of R3 and R4 - these are normally provided in the datasheet. Assuming 10k as shown, the gain is determined by ...
Av = ( R3 × 2 ) / RG + 1    where Av is voltage gain, R3 and R4 are equal, and R5 - R8 are equal

Av = 20k / 10k + 1 = 3
Different formulae may be provided in datasheets, but they will still give the same answer. In some cases in IC versions, R3 and R4 are equal, and R5-R8 are also equal, but a different value from R3 and R4. This doesn't change the gain equation, which relies only on the feedback resistors used on the input opamps.
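As a quick sanity check of the formula, here are gains for a few RG values, assuming R3 = R4 = 10k as in the worked example. The 202 ohm entry is my own illustration of how a gain of roughly 100 is obtained:

```python
# Gain of the 3-opamp INA for a few RG values (R3 = R4 = 10k assumed).
R3 = 10_000.0

def av(rg=None):
    # With RG open (omitted), the input buffers run at unity gain.
    return 1.0 if rg is None else (2 * R3) / rg + 1

for rg in (None, 10_000, 2_000, 202):
    print(rg, av(rg))   # None -> 1.0, 10k -> 3.0, 2k -> 11.0, 202 -> ~100
```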
The gain of the two input opamps is unity for common mode signals, regardless of the value of RG. It might not look that way at first, but remember that both opamps see the same signal (amplitude and polarity) for common mode inputs. RG therefore has no effect, as there is no voltage across it. When you examine specification sheets, you'll see that CMRR increases as the gain of the device is increased, because it's a ratio of the wanted (differential) signal to the unwanted (common mode) signal. If the wanted signal has more gain and the unwanted signal always has unity gain, the ratio between the two must increase.
Like many IC circuits, there are tricks and techniques that can be applied to improve performance. These can be critical to getting the results the application demands. It's only possible to cover a few of the more common (and/or useful) techniques, and datasheets and application notes for the selected device(s) are always a good place to start looking. You can often find just the solution you need in the datasheet for a related (but perhaps otherwise unsuitable) device, and fortunately most of the tricks will work with any device that uses a similar internal circuit.
Where common mode noise is a problem, sometimes it's worthwhile to use another opamp to drive the cable shield. Figure 5 shows an active shield driver that is configured to improve the CMRR by bootstrapping the capacitance of the input cable's shield, and thereby minimising any capacitance mismatch between the two inputs. A common mode mismatch will show up at the junction of the two gain resistors, and this is used to drive the input cable's shield.
Figure 5 - Common-Mode Shield Driver Example
When techniques like this are used, it's important to test the circuit thoroughly, matching the 'real world' operating conditions as closely as possible. The above circuit also shows filtering resistors (Rf1 and Rf2) and capacitors (Cf1, Cf2 and Cf3), and Cf1, Cf2 need to be matched to maximise the common mode rejection. These parts should be carefully matched to within 1% or better if possible. Exact values are not important; only the difference between them will cause a reduction of the CMRR. Cf3 doesn't need to be exact, as it's across the two inputs.
There will also be occasions where high voltages at the inputs are likely (or possible), so protection has to be added to ensure that the system survives. Ideally, the system will be protected against any foreseeable 'event', but this is not always possible. In audio systems destructive events aren't common, but in an industrial setting all of that changes very quickly. It's not usually economically possible to protect against everything (a direct lightning strike for example), but a reasonable level of protection is always needed for anything that operates in a commercial or industrial environment.
Some INAs have protective diodes built into the chip, but if present they are usually limited to around 10mA or so. In most cases, diodes are connected to the supply pins, but this can easily give a false sense of security. If an external fault that delivers (say) +25V to the input(s) is diverted to a supply pin, it's quite possible that the IC's absolute maximum supply voltage may be exceeded. 99% of common regulators can only source current, so if something forces the supply rail to a higher than normal voltage, the regulator can't prevent it.
Figure 6 - Zener Diode Input Protection
A safer (but more expensive) option is to protect the inputs with back-to-back zener diodes. Using 10V 1W zeners means that the inputs can't be forced beyond ±10.6V, and the zeners can conduct up to 90mA continuously (depending on PCB heatsinking), and around 500mA for transient events. The zener circuits have to be protected against excess current, and the filter resistors (Rf1 and Rf2) shown above can also provide current limiting.
The 1k resistors shown would allow input voltages of up to ±100V for short periods, but the resistors have to be able to take the power (a little over 8W) for as long as is likely to be necessary in the application. This is completely dependent on the system itself, and the likelihood (or otherwise) of severe over-voltage. It's important that equipment is designed to suit the conditions. Trying to accommodate any possible fault condition is usually excessively costly, so the designer must be aware of probable (as opposed to possible) faults, and design for that.
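The 'little over 8W' figure follows directly from Ohm's law: with 100V applied, the zeners clamp at ±10.6V and the series 1k resistor drops the remainder. A quick check using the values from the text:

```python
def clamp_dissipation(v_fault: float, v_clamp: float, r_series: float):
    """Current through, and power in, the series protection resistor
    when a fault voltage is clamped by back-to-back zeners."""
    i = (v_fault - v_clamp) / r_series   # current through resistor and zeners
    p_resistor = i ** 2 * r_series       # power in the series resistor
    p_zener = i * v_clamp                # power in the zener string
    return i, p_resistor, p_zener

i, p_r, p_z = clamp_dissipation(100.0, 10.6, 1e3)
print(f"{i * 1000:.1f}mA, resistor {p_r:.2f}W, zeners {p_z:.2f}W")
# -> 89.4mA, resistor 7.99W, zeners 0.95W
```

Note that the zeners themselves dissipate under 1W here, so it's the series resistor that takes almost all of the punishment during a sustained fault.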
In extreme cases, it might be necessary to use PTC (positive temperature coefficient) thermistors in place of (or in addition to) Rp1 and Rp2. Also known as 'Polyswitches', these will become high impedance if there's a fault, protecting the INA and the protective zeners. Care is needed to ensure that the zener junction capacitance doesn't cause problems such as reduced CMRR at high frequencies due to mismatched capacitance. It's likely that a circuit intended for harsh conditions may use both the filtering in Figure 5 and the protection shown above. In some cases even more protection may be needed before the circuitry shown. This might include MOVs (metal oxide varistors) as shown above, or 'Transorb' diodes, which are designed for very high peak currents.
The selection criteria for any and all protection circuits are application specific, and the designer is expected to know (or find out) the likely fault conditions for the equipment. It's beyond the scope of this article to provide any further details. It can be surprisingly easy to end up with protection systems that are more complex and/or costly than the circuitry they protect, but there's no choice if the equipment is required to be 'fault tolerant'.
CMRR is an important part of any INA, but it's not always necessary for it to apply at all frequencies. As noted above, the 2-opamp INA has rather poor CMRR at high frequencies, but if your application is DC (or very low frequency), this is not a limitation at all. The incoming signal leads can have a (relatively) vast amount of noise, but it can be filtered out so that only the DC component (and perhaps some low frequency noise) remains. Unlike the circuit shown in Figure 5, the tolerance of the filter capacitors isn't a major problem, because there is no need for good high frequency performance.
There are many applications where the system speed is such that no-one cares about high frequencies. A weighbridge (for example) doesn't have to work at high frequencies, and if it takes a couple of seconds before the reading is stable, that's usually preferred. For this type of application, a relatively slow response is essential to prevent the reading from moving around too much. Even 'lesser' applications (such as bathroom scales) usually have a fairly slow response so the reading doesn't jiggle around (essential when the display is digital, because you can't read rapidly changing digits easily - if at all).
Figure 7 - Wheatstone Bridge Using A Strain Gauge
A very common use for INAs is for strain gauges. These can be part of anything from a weighbridge to 'bathroom' scales, and the only real variable is the sensitivity of the strain gauge. A detailed discussion of strain gauges is outside the scope of this article, but they are common in many weighing systems, for monitoring stresses in bridges or buildings and torque measurements for machinery.
The Wheatstone bridge is a very good example of a system where there is a large common mode signal, and INAs are ideal candidates to measure the small variation of resistance while a comparatively large DC offset is present. The strain gauge changes its resistance ever so slightly when it's under stress, and the INA is used to detect the resistance change. However, it must ignore the common mode signal, and react only to the differential component created by the Wheatstone bridge. VR1 is used to balance the bridge when there is no strain applied to the gauge. Values have not been shown because of the wide variability of static resistance for strain gauges, which may be anything from a few ohms up to 10k or more. Note that no temperature compensation is shown, but it's usually essential.
A typical 'load cell' (a strain gauge in a specially designed housing to monitor force/ weight) may only provide an output of 2mV at full load with an excitation voltage of 10V. Although only a single strain gauge is shown in Figure 7, it's common to use at least two and sometimes four, with strain gauges for all four sections of the Wheatstone bridge. This requires that two will be in compression and two in tension, and the output is increased by a factor of four. In this case, the 4 strain gauges form the Wheatstone bridge, so there are no other parts. These are usually (but not always) temperature compensated because all 4 sections of the bridge are matched, and at the same temperature.
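The figures quoted make it easy to estimate the INA gain required. A sketch assuming the load cell described above (2mV output at full load with 10V excitation, i.e. 0.2mV/V sensitivity) and a hypothetical 2V full-scale output for the following circuitry:

```python
def bridge_output(v_excitation: float, sensitivity_mv_per_v: float,
                  fraction_of_full_load: float = 1.0) -> float:
    """Differential output (volts) of a load-cell bridge."""
    return v_excitation * sensitivity_mv_per_v * 1e-3 * fraction_of_full_load

def required_gain(v_out_wanted: float, v_bridge: float) -> float:
    """INA gain needed to bring the bridge output to a usable level."""
    return v_out_wanted / v_bridge

v_fs = bridge_output(10.0, 0.2)           # 2mV at full load
print(round(required_gain(2.0, v_fs)))    # -> 1000
```

A gain of 1000 from a 2mV source is exactly the situation where the INA's CMRR, noise and offset specifications (and the PCB layout) really matter.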
As this article has (hopefully) demonstrated, the instrumentation amplifier is a specialised device, and is particularly suited to situations where there is (or may be) a significant common mode voltage along with the desired signal. INAs are also used as microphone preamps, and basically can be used anywhere that requires good common mode rejection. The choice of INA is critical for applications where there may be high frequency common mode noise. Not all are effective across the audio band, so it's essential that you look at the datasheet closely before making a decision.
The specs can be a little daunting for the uninitiated, but once you are acquainted with some of the terms and how they apply you'll be able to work through them easily enough. It can be helpful to search for a device that is specifically designed for your application. It's unrealistic to expect that there will be an INA that's an exact fit for everything, but you can get something that suits your needs once you understand the devices well, and know how they can be adapted.
One thing that can be very important is the earthing (grounding) scheme used in an application. Improper earthing arrangements can cause serious errors, so PCB layout is often very important. This is especially true when very small signal levels are available, and high gain is needed to bring the signal to a level that can be used by the following circuits. Datasheets and application notes are essential reading if high accuracy is needed.
There are several INAs that are not designed specifically for instrumentation, but are optimised for very low noise. These can have different titles, but there are some that are described as 'self contained audio preamps' or similar. These don't use opamp based front-ends, and are intended for microphone preamps and other low-level preamps, with the emphasis on audio rather than instrumentation. I haven't listed them here, and some are now classified as obsolete so you wouldn't be able to get one even if you wanted to.
References
1   INA128 Datasheet (3-opamp INA)
2   INA126 Datasheet ('Micropower' 2-opamp INA)
3   INA103 Datasheet (Very low noise 3-opamp INA)
4   AD623 Datasheet (Low power, low voltage 3-opamp INA)
5   What's The Difference Between Operational Amplifiers And Instrumentation Amplifiers? - Kevin Tretter (Electronic Design)
6   Measuring Strain with Strain Gauges - National Instruments
7   A Designer's Guide to Instrumentation Amplifiers, 3rd Edition, 2006 - Analog Devices
8   INA217 Datasheet (3-opamp INA)
Elliott Sound Products - Australian (Worldwide?) Ban on Incandescent Lamps
PLEASE NOTE: My apologies for the length of this article, but this has turned into something of a horror story. Only a short while ago, I thought that the power factor issue was most important, then that a vast number of enclosed light fittings (probably hundreds of millions worldwide) cannot be used with CFLs was critical. Now, it turns out that dimmers are a far bigger issue than first imagined. What happens in houses where dimmers are fitted? These must be removed completely, not simply set to maximum and left there. Who's going to pay to have millions of dimmers worldwide removed by electricians? You, the homeowner - that's who.
+Power factor is still very important ... while you only pay for the actual energy used (as shown on the packaging), power companies have to provide the full voltage and current (also shown on many packages and/or other literature). The relatively poor power factor increases distribution losses and therefore the cost of getting electricity to your house.
+Now, we also have the European Union (EU) singing the same silly song. It was recently announced that the 490 million citizens of the 27 member states will be expected to switch to energy-efficient bulbs after a summit of EU leaders yesterday told the European Commission to "rapidly submit proposals" to that effect. I wonder just how much research was done before this piece of lunacy was announced? None, perhaps?
+Speaking of the EU, these mental giants have recently decided to ban mercury altogether. Apart from the considerable annoyance to people who use it for manometers, barometers, certain antique clocks, etc., the ban is inconsistent. While they will probably eliminate a few kilograms of mercury from those who would use it responsibly, there will be hundreds or perhaps thousands of kilograms (in CFLs and conventional fluorescent lamps) in the hands of the general public. Most will end up in landfill unless there is a very comprehensive education campaign for the householders throughout the EU and elsewhere. So far, there appears to be little or no effort anywhere to ensure that the public are made fully aware of the risks involved. As of early 2010, there are still people who remain blissfully unaware that CFLs contain mercury!
+Nothing in this article is conjecture or CFL bashing (I like CFLs used sensibly, and have (had) installed them wherever possible in my home and workshop), merely simple facts that a great many people have overlooked. The reasons are described below (yes, it's mostly technical), and for those who want to know more about power factor, the use of CFLs in existing luminaires, or any of the other factors involved, please read on ...
+(External links in this article are for information only, and do not necessarily reflect the opinions of the author of this page.)
Please Note: Since this article was written, I have made the transition to LED lighting almost exclusively. All linear fluorescent lamps have been changed out for LED 'tube' lights, and there are now only three CFLs and not even one 'high efficiency' halogen incandescent lamp in my house and workshop. The Australian 'phase out' of incandescent lamps appears to have stalled, and products that were slated for exclusion from sale are still available. The selection of lighting from supermarkets now includes several LED types, not so many CFLs, and quite a few halogen 'bulbs'. While these have higher luminous efficacy than standard incandescent lamps, they don't come close to LEDs, which are now commonly providing better than 100 lumens/ Watt (including the power supply losses).
However, little has changed regarding suitable luminaires, and it's still challenging to find fittings that have adequate ventilation. The public's understanding of thermal performance hasn't improved, and many LED 'bulbs' run far too hot for the good of the internal electronics. Dimming continues to be a problem with all forms of electronic lighting, because home users in particular don't understand why legacy dimmers are unsuited to electronic lighting power supplies. Please see the articles on dimmers for detailed information ...
    Lighting Dimmers
    Lighting Dimmers - Part 2
    Dimmers And LEDs
As described in the above articles, conventional leading-edge dimmers (by far the most common) are completely unsuitable for use with any electronic load. Trailing-edge dimmers are much kinder to the components in the lamp, but whether they work properly is a lottery. The only dimmer that provides predictable performance and causes minimum stress is a 3-wire trailing edge type. These are not common, and most home wiring is done in such a way that some re-wiring is needed so that a 3-wire dimmer can be used - if you can find one!
Project 157 is (at the time of writing) the only design on the Net for a complete 3-wire trailing edge dimmer. It's been built and tested, and works with any dimmable LED or CF lamp, as well as incandescent lamps and even some non-dimmable electronic lamps (with varying degrees of success, depending on the design of the lamp's power supply, aka 'ballast').
While LED lighting is currently the best choice for efficacy and longevity, not all problems are 'solved'. In particular, properly ventilated housings are essential to ensure that the temperature is kept as low as possible. Budget LED lights can't be expected to last very long, because the makers will skimp on all essential parts, especially the heatsink. LED 'replacements' for 12V halogen downlights have been a disaster, because the form factor of the standard MR16 downlight lamp is too small to allow a decent heatsink. Several have been made with tiny fans inside, but that's not a solution, it's a band-aid.
As of 2020, many of the problems have gone away, since CFLs are becoming less common, some outlets don't even sell them any more, and LED lighting has taken over for the most part. As a result, many of the problems with CFLs described below are no longer relevant. Luminaire ventilation (for fittings with replaceable 'bulbs') still hasn't been addressed, but the LED lamps you can get today seem to be very reliable. There will always be early failures ('infant mortality' as it's known in industry), but most will happen during the warranty period.
My house and workshop are now illuminated exclusively with LEDs, either tubes or 'bulbs'. I'm hard-pressed to recall the last time I had to change one due to failure, but the odd few have been swapped around to get the balance right in a couple of rooms. The energy savings are easily calculated, and forgetting to turn a light off occasionally doesn't make any discernible difference to my total energy use. I haven't removed the earlier information because it may still be interesting, even though most of the issues have gone away with the plunge in use of CFL lamps.
Several sections have been moved to separate sub-pages to try to reduce the size of this article. For the links to work properly, you must have Javascript enabled on your browser. The sub-pages use script to create popup windows with a 'close' button. You can open the files in a new window by right-clicking the link if you prefer. Because you might easily miss some of the sub-sections, there is an index of these extra pages below. These links do not rely on Javascript.
Index of Sub-Pages
It is now illegal for anyone to import conventional incandescent lamps (light bulbs) into Australia, except for a few specialty types. In most shops, there isn't an incandescent lamp to be seen, although some have small fancy types as might be used in some specialty chandeliers or similar fittings. Insanely, halogen downlights are still readily available because they pass the MEPS (Minimum Energy Performance Standards) criteria ... just. They have also caused a number of house fires because ceiling insulation was too close to the fitting (there are special clearance requirements for downlights and any type of insulation).
So far, it's fairly safe to say that few households will have seen a dramatic reduction in their power bills, and the governments (local, state and federal) have remained stoically silent regarding any form of mandatory recycling scheme to prevent a build-up of mercury in landfill waste disposal facilities. The Copenhagen conference came and went with no firm commitment by anyone.
No-one seems to have noticed that it is immaterial if global warming/ climate change is man-made or not. The fact is that we cannot continue the way we are because the resources we are depleting will eventually be gone. It won't have a major impact on those who are around now, but future generations will have good reason to curse us to eternal damnation for the massive waste of valuable resources over the last hundred years or so. And rightly so - what we have done (and are still doing) is nothing short of shameful. Governments are more interested in being seen to be doing something (large, highly visible projects) than actually doing anything ... like switching off unused lights in government buildings at night.
Meanwhile, the cry to ban the humble incandescent lamp (also known as GS - general service or GLS - General Lighting Service) may not seem like such a bad idea at first glance, but there are a number of issues that have not been addressed (or even thought about, based on what has been heard so far). Incandescent lamps are inefficient; typically over 95% of all energy consumed is converted into heat - not light. By comparison, the CFL (compact fluorescent lamp) has a dramatically higher efficiency, although it falls well short of a full sized (18W or 36W) standard fluorescent tube. The latest tube-type fluorescent lamp is the T5 - thinner than the traditional T8 we mostly use, and it can only be used with an electronic ballast. These have greater efficiency (more lumens/Watt) than all earlier fluorescent tubes, and use a ballast that doesn't get thrown away with the tube.
Many people have tried CFLs in any number of locations, but they are not always liked because of their colour rendition (many colours look wrong under all forms of fluorescent lighting), and because they are considered by many to be rather ugly. These dislikes are not necessarily major issues of course, although there are many users who would disagree.
Lighting is actually a very complex topic, and although it seems pretty simple on the surface, there are many factors to consider that proposed legislation will utterly fail to address. Just look at the European RoHS (restriction of hazardous substances) legislation as an example of how wrong things can get when governments become involved in things they don't understand.
This article is not intended to be a complete and final discussion - because lighting is so complex, I am bound to miss things, and I can only rely on the information I can get my hands on. There is undoubtedly a great deal that I won't find. Hopefully though, this article may get a few people thinking of the long term implications of the proposed ban (which is almost completely meaningless in real terms).
As a side issue, although I have (mostly) used the term 'efficiency' in this article, this is actually relatively meaningless for lights. The correct term is luminous efficacy, usually expressed in lumens/Watt. While not strictly accurate, comparing the relative efficiency of different light sources does make it easier to comprehend - few people outside of the lighting industry will really have a proper grasp of the concept of luminous efficacy, so I have elected to keep the term 'efficiency' in the interests of making the article as easy to understand as possible.
The nice people at LV Lighting (now gone) saw this article some time ago, and sent me a LED lamp to trial. The lamp is excellent, and I would have no hesitation recommending these to anyone. The colour temperature is good, and the lamp doesn't get excessively hot in use. This is not to say that it runs cool - it doesn't. The front bezel is a heatsink, and this gets quite hot after it's been on for a while. Now over three years later, it's just as good as when new (it gets used for up to 4 hours a day), and I now have quite a few LED lights around the house too.
As with CFLs, the current crop of LED lamps need good ventilation, because the electronics (and LEDs) must always be kept cool in order to obtain maximum life. With a minimum rated life of 20,000 hours (up to 50,000 hours is also claimed on the pack, 80,000 to 100,000 hours elsewhere on the Net), no CFL can even come close. There's also no mercury involved, so disposal is less of an issue. While it pains me to see perfectly good electronic parts being thrown away, reality indicates that it will happen whether I like it or not - at least there's no risk of contaminated landfill. Wasting perfectly good aluminium (used as the heatsink) is cause for some concern though, because aluminium production is extremely energy-intensive.
LED PAR20 Lamp
The lamp I was sent is a PAR20 style. PAR (Parabolic Aluminised Reflector) lamp sizes are based on the number of units of 1/8 inch that indicates the diameter, so this lamp is 2½ inches in diameter (or 63mm in real measurements). The metal section around the front is the heatsink for the LEDs, and given that the lamp's rating is only 8W, it dissipates a surprising amount of heat. The main difference between this and a CFL is that the heat is predominantly external, and the electronics are not subjected to the main heat source ... with CFLs, the source of most heat is the tube filaments, and these are inside the tiny housing that holds all the electronics. The light source is 6 x 1.3W Cree XRE LEDs, and the LEDs are powered by a fairly conventional (but very nicely built) switchmode power supply (yes, I've had the lamp apart).
There seems little doubt that this is the way of the future. By comparison, the CFL that's presently installed in the same lamp standard comes a rather poor second, even though it's also rated at 8W. At around $50-70 each, the biggest disadvantage with the LED lamps is their cost, however that can be expected to fall as production and demand increase. Even at the current price, the LED lamp is actually a better (although not yet cheaper) choice than a CFL. While it may not appear so at first glance, the LED based lamps can be expected to outlast up to eight CFLs, while suffering few of the disadvantages.
Lamp Type | Power | Life | Cost | Total Cost | Per Hour
Incandescent | 75W | 1,000 Hours | $0.50 | $11.75 | 1.175 Cents
CFL | 8W | 10,000 Hours | $4.00 | $16.00 | 0.16 Cents
LED | 8W | 50,000 Hours | $60.00 | $120.00 | 0.24 Cents

Total cost is purchase price plus electricity cost based on $0.15 / kWh for the total rated hours of operation. Per hour cost is total cost divided by rated life in hours. Should a CFL or LED lamp last less than the rated number of hours, the cost per hour will increase. It should be noted that the actual cost of electricity has risen dramatically since this article was written, but I have not updated every calculation for obvious reasons.
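The table's figures can be reproduced directly from the stated assumptions ($0.15/kWh, purchase price plus energy over the rated life):

```python
def lamp_cost(watts: float, life_hours: float, purchase: float,
              tariff: float = 0.15):
    """Total lifetime cost ($) and running cost (cents per hour)."""
    energy_cost = watts / 1000 * life_hours * tariff   # kWh used x $/kWh
    total = purchase + energy_cost
    return total, total / life_hours * 100             # cents/hour

for name, w, h, p in (("Incandescent", 75, 1_000, 0.50),
                      ("CFL", 8, 10_000, 4.00),
                      ("LED", 8, 50_000, 60.00)):
    total, per_hour = lamp_cost(w, h, p)
    print(f"{name}: ${total:.2f} total, {per_hour:.3f} cents/hour")
# Incandescent: $11.75 total, 1.175 cents/hour
# CFL: $16.00 total, 0.160 cents/hour
# LED: $120.00 total, 0.240 cents/hour
```

Changing the tariff argument shows how quickly rising electricity prices widen the gap between incandescent lamps and the alternatives.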
While the LED lamp appears more expensive, remember that unlike the CFL, it is immune from premature failure due to switching cycles and does not need to be on for up to 5 minutes while you wait for the light output to reach the normal level. LED lamps are also not bothered by low temperatures, so extremely low light output (or none at all if it's cold enough) isn't an issue. As the price falls, expect the total cost to fall significantly. Also, note that only a 'mid priced' CFL was used for this comparison. A premium brand may actually last as long as claimed, but will be more expensive than shown above. So far, I seriously doubt that any CFL I've used has lasted (or will last) the rated number of hours.
Overall, there is good reason to expect that CFLs are merely an interim solution. While they are presently very cheap (unrealistically so in my view), the ever-increasing demands from environmental groups to force proper recycling will ultimately drive up the cost. Meanwhile, the LED lamps will get cheaper as production methods and technology improve their cost effectiveness. In Europe, the WEEE directive will apply regardless, but recycling LED lamps will be far cheaper than recycling CFLs, because there is no requirement for capturing and storing mercury and mercury vapour.
However, even LED lamps fail to address all the issues. Just like CFLs, they can't be used in very hot environments (such as oven lights), and they can't be used in completely sealed luminaires as are required for outdoor or hazardous/explosive atmosphere lighting. However, LEDs are so easy to use (no fragile glass or high voltages) that these problems can be solved by producing specialty lamps with provision for external heatsinks (for example). These are now available from many sources, as streetlights, floodlights for home and industry, and many other specialised applications.
Speaking of heat, there's a bold warning on the pack that the LED lamp must not be used in sealed light fittings. Just like CFLs, the electronics don't take kindly to being overheated, and doing so will cause premature failure. Although I didn't test it, I expect that this lamp would also be completely unsuitable for use with a dimmer (even turned to the maximum setting). I didn't run a test because of the fact that the SMPS (switchmode power supply) is rated for use with any voltage from 100 to 260V - so reducing the voltage with a dimmer will have little or no effect.
Power factor (see below for more on this topic) is still an issue. The SMPS used in low cost LED lamps has around the same power factor as typical CFLs, so the peak current drawn will be of a similar order. This means that mains waveform distortion remains a problem, but this can be solved. Nothing will happen until supply companies start charging residential customers for Volt-Amps (VA) used, rather than power. It is worth noting that quite a few recent LED lamps are using active PFC (power factor correction), which reduces the high peak current and reduces mains harmonics. A typical 9W LED lamp with active PFC will only draw around 10VA - a power factor of 0.9.
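Power factor is simply the ratio of real power (watts) to apparent power (volt-amps). A quick check of the figures quoted, plus an illustration of the uncorrected case (the 0.55 figure is an assumption for a cheap non-PFC supply, not a measured value):

```python
def power_factor(watts: float, volt_amps: float) -> float:
    """Power factor: real power divided by apparent power."""
    return watts / volt_amps

# 9W LED lamp with active PFC drawing 10VA, as quoted above
print(round(power_factor(9, 10), 2))   # -> 0.9

# Without PFC, a power factor of ~0.55 (assumed, illustrative) means
# the same 9W lamp loads the mains with far more volt-amps:
print(round(9 / 0.55, 1))              # -> 16.4 (VA)
```

This is why the supply network cares about VA even though the householder is only billed for watts.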
Because LEDs are low voltage devices, the SMPS used to drive them is very easily made to have complete isolation to double-insulated standards. The LEDs and their heatsink can be accessible without fear of electric shock, making the construction of LED based lamps far more flexible than can ever be achieved with CFLs or even traditional incandescent lamps.
Many people would have seen the story circulating the Net (some time ago) about a woman in Maine (US) who broke a CFL in her daughter's bedroom, and was quoted $2,000 to clean up the mercury. This is what happens when bureaucrats become involved in things they don't understand (like lighting for example). This story is scare-mongering at its lowest. While I have no doubt that the figure is correct, it would be plain stupid to involve bureaucrats in something as trivial as a broken CFL.
Yes, mercury is a potent neurotoxin, but metallic mercury is relatively safe. The real danger comes from the vapour and various salts and compounds (particularly methyl mercury as may easily be created in landfill for example) ... not from 5mg of mercury buried in the carpet. Having said that, I'm not sure I'd be happy letting a small child play on the floor where any fluorescent lamp had been broken. Kids have enough things to cause them damage or injury without adding tiny glass shards and mercury to all the other concerns.
Perhaps governments and CFL manufacturers could provide the necessary cleanup procedures that should be undertaken to ensure that the area is reasonably safe after 'contamination'. At present, you will find a great many conflicting opinions as to how best to clean up after a breakage, but almost no usable information about the possible risk from the mercury itself. For myself, I'd probably not be at all concerned, but my kids are grown up and have their own homes. With small children around, I'd want to know with reasonable certainty that a recommended cleanup process would make the area safe enough for them to play on.
A correspondent of mine has visited Chinese factories where CFLs are made, and tells me that mercury spillage is common during the manufacturing process, and that the workers have zero protective clothing, masks or anything else to safeguard their health. This means (as many could easily have predicted) that while our environment may benefit by using CFLs, the Chinese environment and factory workers most certainly do not.
In years to come, there will be massive clean-up bills to decontaminate factories and surrounding areas where CFLs were made, and with spillages happening regularly the long term health of the workers is certainly at risk. This is not confined to just one factory either - the same thing has been seen in several facilities visited by my correspondent.
+ +It seems that no-one cares (or wants to care) about things they cannot see. Until governments world-wide can ensure that proper safeguards and decent safe working conditions are a requirement for 'environmentally friendly' products, these products should simply be banned from sale.
It is also well known that Chinese test houses will cheerfully fake the test results required for certification of products in the countries where they are sold (Australian Standards, UL, CSA, VDE, etc.). On Australian TV only recently, it was shown that Chinese-made air conditioners (with full test documentation) failed the mandatory Australian 'Minimum Energy Performance' criteria - despite Chinese lab test results that clearly showed that they passed. Does anyone really think that all products that come from China will match the test results that come from Chinese laboratories? I certainly hope not, because one would have to be extremely naive to believe that these overseas labs will be as rigorous and thorough as those in the target importing country.
There is one thing that could have been done, and it could easily have been implemented. Needless to say, nothing was done that was even remotely sensible. A surcharge (indexed each year) on all lamps below a given luminous efficacy could have been used to finance a carbon 'offset' programme, with all money collected devoted 100% to planting trees or other viable efforts towards reducing our 'carbon footprint'. The extra (and increasing) cost of low efficiency lamps of all types would encourage people to use CFLs (or other high efficiency lighting) wherever it is sensible to do so, and would help to ensure that as light fittings are replaced by new ones (during remodelling or because of breakage etc.), the replacements would be designed to be CFL friendly. Some people may even want to use tinted glass to recover the 'warm' glow they are used to. We would also get more trees, something that many areas throughout the world have depleted to depressingly low and aesthetically unappealing numbers.
+ +Such an approach causes the minimum disruption, minimises waste from CFLs that fail prematurely because of inappropriate light fittings, and is a far more sensible approach than imposition of a blanket ban that will cause many people much grief. The surcharge can be altered as CFL (or better still, LED) technology improves (allowing better dimming ability for example), and eventually, only a few lamps in most households will use incandescent globes because CFLs cannot be used (see the rest of the article to find out why CFLs cannot be used in some areas).
+ +This approach is sensible (one good reason for government avoidance), and over a period of only a few years has the potential to exceed the (claimed) benefits of an outright ban by an order of magnitude. Such a programme will have real and immediate results - something that is suspect at best (and possibly substantially negative) with the present plans to simply ban incandescent lamps.
Eventually (my guess is within 5 years, but see above if you missed it), LED lighting will have improved so much that the whole mess may be resolved anyway. However, even LED (light emitting diode) lamps cannot be used at high temperatures (such as oven lights), so the incandescent lamp will never really die. LEDs do allow full dimming, but this capability needs to be included in the circuitry. There are already some LED based lamps available that are more than just a usable alternative - one I was given is excellent, and I'm highly impressed.
Consider too that lighting is normally used at night (this will surprise no-one). In Australia, electricity companies offer very cheap rates at night, because they have megawatts of capacity just spinning around with not much to do (known as spinning reserve). The lights that we use domestically present very little load, so where's the saving in greenhouse gases? The alternators aren't just shut down, because it takes up to 12 hours to get a large coal-fired alternator on-line. Incentives are offered to get people to use the spare capacity at night for storage hot water systems (for example). This isn't to say that electricity should be squandered, but merely to put it into some perspective. (Note that the off-peak system does not operate in many parts of the world.)
Wherever possible, sensible and safe, I highly recommend using CFLs. You will reduce your power bill and save electricity. If you are mindful of the limitations, there are real benefits and these should be embraced. As noted in several places, I now use LED lamps everywhere I can, both in the house and my workshop. None of my main lighting uses conventional fluorescent lamps; all are now fitted with LED tubes, which provide the maximum efficiency for domestic lighting. There are still a couple of CFLs in use though.
+ + +If the powers that be (wherever in the world they are) are serious, then the obvious answer to working out if there are any genuinely worthwhile benefits to a ban on incandescent lamps is fairly simple. Conduct a trial. Select a small town, and choose 50% of randomly selected dwellings to continue the way they are already, and get the other 50% to use CFLs exclusively. No modifications to light fittings, no changes to anything other than the type of lamps used.
+ +With careful monitoring of both sets for lamp failures, total energy usage (electricity, gas, heating oil, etc.) and overall satisfaction or otherwise, a realistic set of statistics can then be developed to show exactly what the outcome of a wholesale ban would achieve. This is real science, using a controlled test environment to gather information that can be expected to be reasonably representative of the benefits to the area tested and anywhere else that has similar climate. Data may be extrapolated to determine a realistic potential outcome for other localities.
+ +While businesses may be included, many (if not most) will be found to be using conventional tube fluorescent lamps, because of the necessity for good lighting in most areas of business (cinemas, nightclubs and many restaurants being notable exceptions).
Such a trial needs to be run for 1 year, and at the end, people will have real data from real homes in a realistic environment. This is a far cry from the situation at present, where we have a few zealots spouting figures that either make no sense, are often obviously false, or are simply the same as the (often wrong) figures spouted by other zealots. I'm getting rather fed up with some of the claims, as they seem to be based entirely on fantasy. One I saw claimed that "Changing one incandescent lamp for a CFL will save £9 in one year, or £100 over the life of the lamp." (or along those lines - I can't find the quote this time around). Based on those figures, the lamp has to last for over 11 years - a fairly unlikely scenario. In common with many such claims, the lamp power wasn't mentioned, what it replaced wasn't mentioned, and no supporting data was mentioned either. In other words, the figures claimed have no substance at all - pure horse-feathers.
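For what it's worth, the arithmetic behind that claim is trivial to check (the figures are as quoted in the claim; a minimal sketch in Python):

```python
# Sanity-check the quoted claim: £9 saved per year, and £100 saved
# over the life of the lamp. The implied lamp life follows directly.
saving_per_year = 9.0     # £ per year, as claimed
saving_over_life = 100.0  # £ over the lamp's life, as claimed

implied_life_years = saving_over_life / saving_per_year
print(f"Implied lamp life: {implied_life_years:.1f} years")
```

With those figures the lamp has to last just over 11 years of service - as noted above, a fairly unlikely scenario.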
+ + +Because most of this section seemed to create more distraction than benefit, it has been removed. However, some sections are still worthy of inclusion.
+ +One thing I have seen in countless forum sites, blogs and other areas is especially disconcerting. Some people seem to have a completely black and white approach to many things related to CFLs. There is often a complete refusal to accept that anyone else's experience is valid, because it either disagrees with published data, the experience of others, or for reasons unknown.
+ +Some people may delude themselves, albeit unintentionally. They may grossly overestimate the life of the lamp ("Well, it says on the package that it lasts 10,000 hours, so it must have done.") - in fact, the lamp may have lasted a great deal less than its rated life. In reality - unless you keep a log - how does anyone really know how many hours of use any lamp in their house has actually lasted if it's switched on and off? We don't. We make estimates, based on what seems to be the case, tempered by expectations and boosted by advertising (or other) promotions. After a year or more, we are very unlikely to remember when it was changed last.
+ +The same 'logic' has been used to proclaim that CFLs work "just fine" with motion detectors and/or timers. Others have claimed that they don't work at all. Neither is right ... see below for more information.
+ +Similar arguments are applied to colour rendering index, the 'human friendliness' of the light and almost any other area that pertains to the debate. This topic - like any other of importance - needs to be examined dispassionately. The points laid out below are a combination of measured data, simple and demonstrable facts, and information from manufacturers and lighting professionals. Passion and personal preference carry little weight (either for compact fluorescent or incandescent lamps) in what follows here. This article has its basis in facts, not any personal vendetta against one technology or the other.
+ +You can find more information at any number of sites on the Net, and if anyone doubts that there really are problems, then a web search should disabuse you of such notions pretty quickly. Make sure that the information has basis in reality - anyone who simply raves or rants (for or against) with no technical information is not a source of useful information.
As noted above, the term 'efficiency' is fairly meaningless for lighting. Luminous efficacy is a measure of how much light one obtains for a given power input. If one uses the maximum theoretical luminous efficacy figure (683 lumens / Watt) as a starting point, then an approximate efficiency figure can be worked out easily enough. The following table is condensed from that shown on Wikipedia [1].
Lamp Type | Power | Luminous Efficacy (lm/W) | Efficiency ¹ |
Tungsten incandescent | 40W | 12.6 | 1.9% |
Tungsten incandescent | 100W | 17.5 | 2.6% |
Quartz halogen | n/a | 24 | 3.5% |
Fluorescent (compact) | 5W - 24W | 45 - 60 | 6.6% - 8.8% |
Fluorescent tube (T8 120cm / 4 ft) | 36W | 93 (max, typical) | 14% (max, typical) |
Fluorescent tube (T5 115cm / 45 in) | 28W | 104 | 15.2% |
LED (various formats) | n/a | 60 - 110 | 7.5% - 15.5% (approx) |
Xenon arc lamp | n/a | 30 - 50 (typical) | 4.4% - 7.3% |
High pressure sodium | n/a | 150 | 22% |
Low pressure sodium | n/a | 183 - 200 | 27% - 29% |
Ideal white light source | n/a | 242.5 | 35.5% |
Theoretical maximum | n/a | 683.002 | 100% |
¹ - The term 'efficiency' is actually fairly meaningless. This is a measure of the 'overall luminous efficiency', and is included as a comparative figure only, calculated such that the maximum possible efficiency is 100%.
Where the power rating is indicated as 'n/a', this indicates that luminous efficacy is not affected significantly by the power rating. Many lamps become more efficient as their power rating increases, with incandescent and CFLs being good examples. While it is easy enough to imagine that this will be so with traditional lamps, it is a little more subtle with a CFL. Essentially, the electronic circuitry has limited efficiency, and will consume some current just to operate. For low power lamps, this basic operating current is a higher percentage of the overall, so the effective efficiency of the assembly is reduced accordingly.
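The 'efficiency' column in the table is nothing more than luminous efficacy divided by the 683 lm/W theoretical maximum; a quick sketch (using figures from the table above) reproduces it:

```python
# 'Overall luminous efficiency' as used in the table: efficacy
# relative to the theoretical maximum of 683 lm/W.
MAX_EFFICACY = 683.0  # lm/W, theoretical maximum

def luminous_efficiency(efficacy_lm_per_w):
    """Return 'overall luminous efficiency' as a percentage."""
    return 100.0 * efficacy_lm_per_w / MAX_EFFICACY

for name, efficacy in [("100W incandescent", 17.5),
                       ("T8 fluorescent tube", 93.0),
                       ("Low pressure sodium", 183.0)]:
    print(f"{name}: {luminous_efficiency(efficacy):.1f}%")
```

The results (2.6%, 13.6% and 26.8% respectively) match the table to within rounding.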
+ +As becomes readily apparent from the above, even a CFL with a reasonably high efficiency still discards over 90% of the energy supplied as heat. While the total input energy is less than for an equivalent incandescent lamp, the maximum temperature to which the lamp may be subjected is also a great deal lower because of the electronic components. This means that CFLs can only be used successfully with well ventilated fittings (see below for more information on this topic).
+ +With all lighting types, something that is of particular interest to HVAC (heating, ventilation & air-conditioning) engineers is the total heat load from lamps. In general, this is actually the full power rating. All light (whether visible or not) that is emitted eventually lands on surfaces and is converted to heat. After all, light is energy, and that energy is simply converted to heat when the light is absorbed. More efficient lighting means less total power for the same light output, so overall luminous efficacy is the only really important factor to consider.
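As a sketch of the point about HVAC heat load (the fitting counts and wattages below are illustrative assumptions, not measurements):

```python
# Lighting heat load for HVAC purposes is simply the sum of the
# electrical input power - every watt in ends up as heat in the room.
# Fitting counts and wattages are hypothetical examples only.
fittings = [("incandescent 75W", 75, 10),
            ("CFL 15W", 15, 10)]

heat_load = {name: watts * count for name, watts, count in fittings}
for name, load in heat_load.items():
    print(f"{name} x10: {load} W of heat load")
```

For roughly comparable light output, the lower-powered lamps add far less heat for the air-conditioning to remove - which is why total input power is the figure HVAC engineers care about.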
LEDs are improving all the time, and prices are coming down. The LED chips themselves are commonly better than 100 lumens/watt, and the lower figures for a complete system are largely due to losses in the power supply. High quality supplies will obviously cost more than those that will just do the job, and the sensible approach is to keep the LED array and power supply separate. This allows either the LED array or the power supply to be replaced independently. This arrangement is common in commercial lighting, but most consumers just want something that looks like the old style 'bulb' they are used to seeing. While many of these are now a viable alternative to other lighting, they are still a compromise.
+ +LED lighting is covered in more detail in several of the other articles on the ESP site. See the Lamps & Energy Index for more details.
Interestingly, there is a 'standard' table of equivalence (a PowerPoint presentation) for CFLs vs. incandescent lamps (supposedly accepted worldwide). It is interesting in that the figures claimed are much less than the above table would lead one to believe. The table is shown below. For example, a 100W incandescent is shown as having an output of 1,246 lumens, yet the above table indicates that it should be 1,750 lumens, and a 40W incandescent should provide 504 lumens, yet is downgraded to 386 lumens. The problem is that no-one seems to disagree that 17.5 lm/W is reasonable for a 100W incandescent lamp, so how did it get changed? I find this kind of deception very annoying (to put it mildly). The US Energy Star programme has a different standard, as well as a set of standards that few CF lamps will meet (of those sold in Australia, at least). See ENERGY STAR (Criteria, Reference Standards and Required Documentation for GU-24 Based Integrated Lamps) to read the requirements in full. Their equivalence table is more in line with reality than the previous reference, but other Australian government departments (see below) and CFL manufacturers seem to prefer the other.
+ +One maker may claim their 13W CFL to be the equal of a 60W incandescent, another says their 13W CFL is equal to 75W - the lack of any standardisation allows huge leeway for makers and advertisers (as well as politicians). The consumer loses out by getting less light than expected, tarnishing their opinion of CFLs in general.
Power (W) | 150 | 100 | 75 | 60 | 40 | 25 |
Lumens | 2009 | 1246 | 874 | 660 | 386 | 214 |
Lumens (Energy Star) | 2600 | 1600 | 1100 | 800 | 450 | 250 |
So who decided on the 'standard' equivalence table? Why are the figures so different from what we should expect? I would be very interested to know who decided to downgrade a 100W lamp from 17.5 lm/W to 12.46 lm/W - could it have been the CFL manufacturers perchance? This is but one of many anomalies that you'll come across when you start to look into the subject carefully. Even the Energy Star ratings have been criticised as too low! Luminous efficacy does increase with reduced operating voltage (120V vs. 230V for example), and this is because the filament is thicker and stronger and can be run at a higher temperature (12V halogen downlights are a good example).
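Dividing the table's lumen figures by the corresponding lamp power makes the downgrade explicit (a quick check using the figures as published above):

```python
# Implied luminous efficacy (lm/W) from the 'standard' equivalence
# table, for comparison with the commonly accepted 17.5 lm/W for a
# 100W incandescent lamp.
table = {150: 2009, 100: 1246, 75: 874, 60: 660, 40: 386, 25: 214}

for watts, lumens in table.items():
    print(f"{watts}W: {lumens / watts:.2f} lm/W")
```

Every entry works out well below the efficacy figures in the earlier table - the 100W entry gives 12.46 lm/W, and the 40W entry only 9.65 lm/W.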
+ +Many people have complained that CFLs that supposedly replace various incandescent lamps are not as bright as expected (these gripes can be found all over the Net). Remember too that CFLs lose light output as they age - the effect is sometimes very noticeable, and I've replaced several that were too dim to be useful, but had no more than around 500-1000 hours of use.
Two documents from Australian government groups ('EnergyAllStars' and the 'National Appliance and Equipment Energy Efficiency Program') both use the table shown above. Based on any tests that anyone might want to perform themselves, these figures inflate the light output from CFLs compared to incandescent lamps. Most people who have used a CFL know that the light output is often not equivalent to the claimed 'equal' incandescent lamp. There are (supposedly) perfectly good reasons for the discrepancies, but so far I remain unconvinced.
+ + +Most industry websites claim that the CFL is simply an equivalent product to the GLS (General Lighting Service) incandescent lamp. This is extremely misleading. The only 'equivalence' is that both are designed to produce light, but the technology involved in CFLs makes them an altogether new product. As a new product they should be subjected to new tests, based not only on their ability to produce light, but also on the impact of the new technology on safety - electrical and environmental. This has not been recognised by politicians, and appears to have also been missed by the regulators - whether intentionally or otherwise is unknown.
+ +As one unfortunate homeowner (see Fire Risk below) pointed out to the press ... "I don't read light bulbs, I wouldn't think I'd ever have to.". This is in large part because no-one has ever had to do so before, and since the CFL is marketed (and promoted) as simply a more efficient light bulb, it is assumed to be equivalent in all significant respects. The marketing has concentrated almost exclusively on the advantages of power savings and less heat, but has never explained that this is new technology (from the consumer's perspective), and that there are major differences that must be considered. A CFL is not simply a different type of lamp - given the amount of technology embedded in the small housing, it's an appliance in its own right.
+ +Because this isn't explained, people are expected to read the packaging (which no-one ever did before), and decipher often cryptic symbols that indicate certain criteria that determine the life and possible safety of the product. People don't. They are sold an 'equivalent' product that they are told will save them money, and lacking any detailed knowledge of what is involved in the replacement will commonly assume that this 'new' lamp can be used in the same way as the old.
+ +Because CFLs run at much lower temperatures, possible risks are no longer obvious. A CFL (even connected to a dimmer) may operate apparently normally for weeks or months before it fails, and it's impossible to predict exactly how it will fail. The ultra-cheap electronic ballast is a new development, and is something that 99% of the populace is unaware even exists. Lamps are such commonplace commodities that the average consumer will simply assume that they are all equivalent products - indeed, the lighting industry and government bodies alike insist that the CFL is an equivalent to a normal GLS light bulb.
+ +If there was ever any doubt, this article should disabuse you of that notion pretty quickly. That CFLs have their place is obvious - everyone likes to save money and help the environment if they can do so with little or no sacrifice, and there are many applications where CFLs are perfect replacements for GLS lamps. There are also many situations where CFLs are absolutely not appropriate, but this is rarely stated other than occasionally in fine print on the packaging (that almost no-one reads anyway). Even though this article has been on-line since 2007, it's hardly mainstream.
+ +About 2 months before these latest amendments were written, a local TV station raised a fuss about CFLs. They had only just 'discovered' that CFLs contain mercury. This is information that is readily available, but you'd need to know what to look for in order to find it. Put another way, if you already knew that CFLs contain mercury you'd have no difficulty finding out that they do, but, if you didn't know, the information hardly leaps out at you. Even if you are the type to read the packaging, in many cases there is little mention of mercury, although it seems that new regulations insist that it be stated.
+ +Overall, the CFL is not simply a different type of light bulb - it's an entirely new product, with an entirely new set of rules for safe operation. This has not changed at all since this article was first published in 2007.
+ + +Although it appears simple, the modern incandescent lamp is the result of many years of research. Small but important improvements have been made over the years, and considering the minimal cost of a typical 75W lamp, they are quite remarkable value for money. There is a veritable feast of available information on the development of the incandescent lamp, and it would be foolish of me to even attempt to cover the topic. A web search is recommended for those who want to know more. I only intend to cover the topics that I feel are important to the discussion at hand ... should they be banned?
+ +The light source is simply a filament - a coil (or a coiled coil) of thin tungsten wire. This is supported on a pair of wires, and the whole is enclosed in a glass bulb. When an electric current passes through the filament it gets hot, in fact it gets to such a high temperature that it glows white - the operating temperature (closely related to colour temperature) is typically around 2,400 - 3,100 K (about 2,130 - 2,830°C). It is standard practice to rate colour temperature in Kelvin (the term 'degree' is not used). Zero Kelvin is about -273°C, and represents the complete absence of heat (absolute zero).
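The conversion between the two temperature scales is a simple offset, and a one-liner confirms the quoted ranges:

```python
# Colour temperature conversion: 0 K (absolute zero) is -273.15 degC.
def kelvin_to_celsius(kelvin):
    return kelvin - 273.15

for k in (2400, 3100):
    print(f"{k} K = {kelvin_to_celsius(k):.0f} degC")
```

This gives roughly 2,127°C and 2,827°C, matching the approximate 2,130 - 2,830°C range quoted above.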
+ +In the early days, the bulb was evacuated (a vacuum), but this leads to the tungsten 'boiling off' and being deposited on the glass. Because of the loss of metal, these early lamps had a short service life, and most standard lamps are now filled with a low pressure inert gas (argon, nitrogen, etc.). The use of an inert gas does not prevent the liberation of molecules of tungsten, but it does slow the process significantly.
+ +Because of the presence of the gas, modern lamps usually have a section of the support deliberately thinned to act as a fuse. When the filament breaks, it can cause an arc or fall across the support wires, and the fuse prevents excessive current flow.
+ +By using a halogen (typically either iodine or bromine gas), an interesting phenomenon occurs - the halogen causes vaporised tungsten to be re-deposited on the filament. This is one of the main reasons that quartz-halogen lamps last so long (as well as usually having much thicker filaments than conventional high voltage lamps). As a side issue, quartz is used because ordinary glass would soften or melt at the typical operating temperature of a quartz-halogen lamp. Halogen lamps are usually far more efficient than traditional incandescent lamps, and may reach 9-10% efficiency. Not wonderful, but better [1].
+ +Because incandescent lamps are pure resistance, they have unity power factor. This means that the electricity meter registers exactly the power drawn by the lamp. A 75W incandescent lamp (traditional or halogen) draws 75W from the mains (326mA at 230V). Where any electrical device has reactance, the power factor will be less than unity. This means that it may draw 75W (and that's what you will be charged for), but might draw a current of 652mA (again at 230V). This is 150VA, and although you don't pay for the extra current, the supply utility still has to generate and supply that current through the entire grid. This 2:1 ratio of VA to Watts represents a power factor (PF) of 0.5 - generally considered to be the lowest acceptable PF for normal usage.
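The relationships used in that example are straightforward; a minimal sketch using the 75W / 230V figures from above:

```python
# Real power (W), apparent power (VA) and power factor for the
# 75W / 230V examples in the text.
def current_amps(va, volts):
    """Load current is apparent power divided by supply voltage."""
    return va / volts

def power_factor(watts, va):
    """Power factor is real power divided by apparent power."""
    return watts / va

# Pure resistance (incandescent lamp): 75W is also 75VA.
print(f"Resistive: {current_amps(75, 230) * 1000:.0f} mA, "
      f"PF = {power_factor(75, 75):.1f}")

# Reactive load drawing the same 75W, but 150VA.
print(f"Reactive:  {current_amps(150, 230) * 1000:.0f} mA, "
      f"PF = {power_factor(75, 150):.1f}")
```

This reproduces the 326mA / 652mA currents and the power factors of 1.0 and 0.5 quoted above.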
+ +The constituent materials in a standard incandescent lamp are all used in small quantities, and nothing is toxic by normal definitions. The basic ingredients are ...
Although it is possible to recycle incandescent lamps, the small amounts of material involved and the lack of anything even remotely toxic probably make the process uneconomical. IMO this is a pity, because I prefer to recycle everything I can, but economics must intrude somewhere I suppose.
Incandescent Lamp Characteristics
+Benefits ...
Deficiencies ...
It may be premature to write off the poor old incandescent lamp anyway. General Electric (GE) is apparently developing an incandescent lamp that matches the efficiency of typical CFLs [4], and no doubt others will follow before too much longer.
+ +One site I looked at claimed that it takes about 1kWh to manufacture an incandescent lamp. No further details were given.
+ + +The Compact Fluorescent Lamp (CFL) also seems simple from the outside - you can't see what's inside, but there is quite a bit of technology involved (see below).
The tube itself contains around 5mg of mercury and mercury vapour (mercury is an extremely potent neurotoxin), and various phosphors that emit visible light when stimulated by the intense ultraviolet radiation emitted by a mercury arc discharge. There is still some conjecture regarding the toxicity of the phosphors, with various claims and counter-claims. It is generally better to err on the side of caution with any chemical compound, so a designated recycling program is essential before the mandatory use of CFLs becomes a reality. Such a program should be in place now to deal with standard fluorescent lamps, as these also contain the phosphors and the mercury. In Europe, the WEEE Directive (Waste Electrical and Electronic Equipment) has already addressed the issue of recycling, but it has not been mentioned so far for Australia. Interestingly, some CFL manufacturers have even stated that the expected boom in CFL sales will create problems with the mercury (it's true - look it up).
+ +Proponents of the anti-incandescent lamp stance will point out that the reduction in energy usage by using CFLs will prevent far more mercury entering the atmosphere than will be liberated by the (inappropriate) disposal of defunct CFLs. While this may be true at present, there are serious moves afoot to reduce mercury emissions from coal-fired power stations [2], so the point may be lost to scientific advances before too long. Consider too that mercury from power stations is distributed, not concentrated in landfill.
+ +The CFL is not as efficient as a standard full-size fluorescent lamp, but still manages to achieve quite respectable performance. An efficiency of around 6-10% seems to be indicated, but there are so many factors that influence the apparent efficiency that direct comparisons are difficult.
+ +The technology used in modern CFLs is quite astonishing for a throw-away product. The incoming mains is rectified to obtain DC, and there is some degree of ripple reduction by a filter capacitor. A switchmode inverter is then used to obtain the necessary voltage to strike the arc within the tube, and additional circuitry is included to limit the current to the nominal value needed to produce the required power. All of this must fit into the base of the lamp itself. Dedicated lamp housings are becoming available so that only the tube itself needs to be replaced (at present they seem aimed primarily at commercial applications).
+ +The disadvantage of all this is that the power factor is far worse than an incandescent lamp. You don't pay for the extra current drawn, but the power utility must still provide cabling, transformers and generating plants that can handle the total load current, regardless of the power factor. There is still a significant saving, but this could easily be eroded because of two significant failings of CFL technology as it exists at present.
+ +Readily available CFLs cannot be dimmed effectively with a normal wall-plate dimmer, so must run at full power at all times (some provide a low power setting by switching off and back on quickly). Incandescent lamps are often dimmed to very low power levels for extended periods (while watching TV for example), so their power usage will be perhaps 20% of the rated power, in some cases even less.
+ +CFLs will fail prematurely if switched on and off many times a day. Many people already know this, so may be tempted to leave lights on that would otherwise be switched off, so a household might have 4 or 5 CFLs running for hours at a time, where they may have had only 1 or 2 incandescent lamps switched on (and possibly on dimmers, thus reducing power significantly).
Another area where CFLs cannot be used is at very low or very high temperatures. Most will not start at all at temperatures below -20°C, and a lot will refuse to start (or will have very low light output) at even higher temperatures. Because of the electronics in the base of the lamp, temperatures above around 50°C will shorten their working life considerably. Electronic components have highly accelerated failure rates as the temperature rises above the standard 25°C 'reference' ambient.
+ +The constituent materials in a typical CFL vary widely, because there are many technologies that can achieve the same (or at least similar) results. In general though, the basic ingredients are ...
+ +Items marked with * are in addition to the basic ingredients for an incandescent lamp. Although it is possible to recycle CFLs, there is little or nothing in Australia geared towards any form of recycling of these (or any other) fluorescent lamps. This must be addressed and fully functional before any ban on incandescent lamps can be implemented.
+ +Items marked ** are in addition to materials used in conventional lamps, but are either toxic, or may be toxic when mixed with other chemicals in landfill and/or when heated to high temperatures.
+ +Compact Fluorescent Lamp Characteristics
+Benefits ...
Deficiencies ...
+ +Note that premature failure (* above) is very difficult to judge unless the switching is logged. Some makers quote switching cycle data, most don't. Some newer models of CFL use active inrush current limiting, so will not stress switching systems when CFLs are used in large numbers (from the same switch). A standard CFL has the potential for an inrush current of up to around 4 to 5A, since it is limited only by the equivalent series resistance (and to a lesser extent the capacitance) of the filter capacitor, along with any series resistance. Series resistance will usually be kept to a minimum, as it contributes nothing more than heat (and reduces overall efficiency).
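As a rough illustration of where the 4 to 5A figure comes from - the component values below are assumptions for the sake of the example, not measurements from any particular lamp:

```python
import math

# Rough worst-case inrush estimate for a simple rectifier plus filter
# capacitor ballast, switched on at the crest of the mains waveform.
# The resistances are illustrative guesses.
mains_rms = 230.0                   # V RMS
v_peak = mains_rms * math.sqrt(2)   # ~325V at the waveform peak

cap_esr = 5.0    # ohms - filter capacitor equivalent series resistance (assumed)
r_series = 68.0  # ohms - deliberate series resistance (assumed)

i_inrush = v_peak / (cap_esr + r_series)
print(f"Peak inrush: {i_inrush:.1f} A")
```

With these (assumed) values the peak inrush is about 4.5A, consistent with the 4 to 5A range mentioned above; a lower total series resistance pushes the peak higher.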
From much of the above, the reader could be excused for thinking that I dislike CFLs - I don't! I use them wherever possible (or practical), and for the most part I will never change back to incandescent lamps in the places where CFLs (or LEDs) are ideally suited. At the same time, I will not change to CFLs where it is obvious that traditional lamps are more practical (such as the lights in my house that have dimmer controls). My workshop is almost exclusively LED tubes now, but with CFLs used in most of the desk lamps I use for drill presses, lathe, milling machine, etc. Many other light fittings in my home are also fluorescent - there are actually only a few incandescent lamps used (down to about 2 as of early 2013). I strongly recommend that others use fluorescent, CF or LED lamps wherever possible - modern CFLs are considerably better than the originals that people may have tried, and they should be used wherever it is sensible to do so. LED technology is advancing in leaps and bounds, prices are falling and quality is improving - this has to be a good thing.
However, an outright ban on incandescent lamps is simply foolish - as has been demonstrated in the UK and California, where calls for a ban have been largely met with the contempt they deserve. The moronic government in Australia, on the other hand, has simply trampled on our right to choose without even asking us.
The site I mentioned above that claimed 1kWh to manufacture an incandescent lamp also claimed 4kWh for a CFL. I would expect this figure to be less than half the real (total) energy usage. The ceramics and semiconductors alone would easily account for that figure. My guess (and that's all it is) is that somewhere between 10 and 20kWh would be needed to produce all the materials used and make the lamp. Distribution cost is higher because the CFL weighs a lot more.
A very common question in forum sites is along the lines of "My light fitting says that the maximum lamp power is 60W. Can I use a 20W CFL that has the same light output as a 100W lamp?"
The standard answer given in Q&A sites is an unqualified "yes", however there is one major factor that must be considered but rarely gets a mention. Some CFL packaging states that the lamps must not be used in fully enclosed light fittings, but in reality, no CFL is suitable. The reason is temperature. Because of the electronic circuitry, all CFLs can only be used where they have reasonable ventilation to prevent overheating. Excess heat doesn't bother an incandescent lamp, and temperatures well in excess of 100°C won't cause them any problems at all. Remember that the filament is already operating at around 2,000°C, so a bit more won't hurt (although wiring insulation and even the lamp socket itself will be damaged eventually). Some sealed light fittings use high temperature wire internally, because they get too hot inside for ordinary PVC insulation - which will fail quite quickly at elevated temperatures.
Because of the electronic circuitry, the maximum ambient temperature for a CFL should remain as low as practicable, with most manufacturers warranting their products to a maximum of 50°C. This has forced a complete re-design for recessed downlights [7], and many other light fittings are completely unsuitable. If the heat from the tube and the electronics cannot escape, the temperature will potentially rise to well over 50°C, and the lamp's life and light output will be badly affected.
There are far too many factors that need to be considered to even try to answer the question here, but as a guide, if the light fitting is completely sealed (or recessed into the ceiling with no way for hot air to escape) then the answer is no. Not simply "no" to the question, but no to the use of any CFL in a completely sealed (or even just poorly ventilated) light fitting.
Many of the sites that offer advice have zero technical expertise, and a lot seem to assume that CFLs emit almost no heat at all. Anyone who has used one knows that this most certainly is not the case. This is shown very clearly below ...
Figure 2 - CFL Killed by Overheating [A]
This is a perfect example of what happens. The photo was sent to me from the US, and the lamp failed after about 200 hours - somewhat shy of the typical claimed life (to put it mildly). You can see that the electrolytic capacitor is bulging at the end, and it had ruptured its safety seal and leaked electrolyte. The heatshrink tubing around the inductor got so hot that it split, and the 'Greencap' capacitors are all seriously discoloured.
So, what would cause this? Simple. Most existing home light fittings are designed for conventional incandescent lamps, and have little or no ventilation. Many of the popular fittings typically have no ventilation at all - especially the 'oyster' style, which has a glass dome clipped over a metal ceiling mount unit. There are many other styles of light fittings (luminaires) that are either fully enclosed, or are open only at the bottom.
The heat will build up quite quickly, and because it has nowhere to go, will remain in the fitting. Since the maximum ambient temperature for an operating CFL is 50°C, it will only take a few minutes to reach this temperature. Test results for this are shown below. The result is quite clear, although (for whatever reason) some CFLs will manage to survive in some enclosed fittings. Unfortunately, quite a few people who have commented on this particular topic seem to think that because they have not had a failure, this somehow implies that no-one else will either. One word sums up my response to these claims ... bollocks!
Do not use CFLs in fully enclosed light fittings!
As an example of the ratings of one of the key components in any CFL electronic ballast, we can examine the typical specifications for aluminium electrolytic capacitors. These are supplied in either 85°C or 105°C temperature grades, and the manufacturers usually claim a 'typical' life of 1,000 - 2,000 hours when operated at the maximum temperature. This is obviously far shorter than the 'typical' life of most CFLs, and the only way the capacitors can be made to last longer is to operate them at a lower temperature. Should the temperature exceed the maximum rated, then the life of the capacitor will be reduced dramatically. The same principle applies to most of the other components used too.
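The temperature sensitivity follows directly from the usual electrolytic capacitor lifetime rule of thumb - life roughly doubles for every 10°C below the rated temperature. A quick sketch (the rated life and temperatures below are typical catalogue values, not from any specific part):

```python
def electro_cap_life(rated_life_h=2000.0, rated_temp_c=105.0, ambient_c=50.0):
    """Common rule of thumb (manufacturers publish their own curves,
    so treat this as an approximation): electrolytic capacitor life
    roughly doubles for every 10 degrees C below the rated temperature."""
    return rated_life_h * 2 ** ((rated_temp_c - ambient_c) / 10.0)

life_50 = electro_cap_life()                 # kept at the 50 C CFL limit
life_80 = electro_cap_life(ambient_c=80.0)   # cooked in a sealed fitting
print(f"At 50C: ~{life_50:,.0f} h, at 80C: ~{life_80:,.0f} h")
```

The difference between a ventilated and a sealed fitting is dramatic: with these assumed figures, running 30°C hotter cuts the expected capacitor life by a factor of eight.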
Semiconductors (transistors or MOSFETs) will run fairly hot in most CFL circuits - in fact they are responsible for a fair proportion of the total losses within the system as a whole. These components must never be allowed to exceed a junction temperature of (usually) 150°C - and this means that the case temperature must be somewhat lower than the maximum permissible. The only way to get the maximum life from any CFL is to keep the electronics as cool as possible - preferably well under the manufacturers' recommendation of 50°C.
Ultimately, this is the biggest downfall of the technology, and means that if incandescent lamps are banned, there will be an enormous consumer backlash when long-life lamps fail well before their supposedly short-lived incandescent predecessors ever would. The environmental impact of thousands of prematurely failed compact fluorescent lamps is also a disaster - especially when you consider the energy that went into making them. This will (not might) result in exactly the reverse of what governments are 'planning' - with a net energy loss and a huge consumer outcry.
Figure 3 - Light Fitting Suitability
The above is a small example of fittings that are (or may be) suitable, and some of those that are not. Needless to say, there are hundreds of different styles, and only a full inspection (and perhaps a controlled test) will reveal if the fitting is or is not capable of being used with a CFL. The key factor is ventilation. Any fully sealed fitting is almost certainly unsuitable, because there can be no air circulation, and the temperature will rise sufficiently to cause premature failure. Elevated temperature also reduces the light output, so you will not be able to get as much light as you hoped for, as well as shortening the life of the electronic circuitry (probably drastically).
There isn't a technological breakthrough around the corner that will fix this - electronic equipment cannot be made to function properly and reliably at severely elevated temperatures. Householders will be faced with the rather daunting (and very expensive) requirement to replace all non-ventilated light fittings with new ones that have sufficient airflow to maintain a safe temperature. Because the fittings must be installed by a licensed electrician in most countries, this is yet another expense.
Any potential saving in energy bills is gone ... for quite a few years, until the cost of the fittings and their installation is amortised. There is also the enormous waste of replacing perfectly good light fittings with new ones, so the environmental impact is negative - probably by a large margin. It will take many, many years before the household or the environment start to get any real benefit, because of the vast waste that was created to impose an 'environmentally friendly solution'.
In the UK, the Market Transformation Programme stated that ...
    "The availability and the current stock of light fittings heavily influence what types of lamp are being used in the domestic sector. Research showed that less than 50% of the existing light fittings are suitable to fit a compact fluorescent lamp."
This means that, on average, householders will have to replace more than half of all their light fittings to accommodate CFLs. I doubt they will be pleased if this is forced upon them, either in the UK, Australia, Europe, or anywhere else! It is also likely that lighting retailers will be rather annoyed, since a great deal of their existing stock will have to be scrapped. Doesn't sound quite so environmentally friendly now, does it?
One other area has been pointed out as well - spot lamps. While these are primarily decorative and it can be argued that they are not essential, the fact remains that people use them, and will want to continue doing so. Because of the large radiating surface of a CFL, it is not possible to focus them to anything like the same degree as a bi-pin quartz halogen lamp. These are almost a point source, and are easily focused to a very narrow beam. Even a conventional incandescent lamp can be focused fairly well - far better than any CFL.
For displays and in some home decorating schemes, designers use the 'sparkle' effect that one can get with point source lighting. This simply cannot be achieved with CFLs because of their large area. While LED lamps can achieve point source sparkle effects, they are not seen as a mature technology yet, and luminous efficacy is only marginally better than incandescent lamps for the majority. Colour temperature and colour rendering index of LEDs are currently not well controlled, and in general are far worse than CFLs. While one can argue that none of these special effects are needed to sustain life as we know it, people have expectations. They get very annoyed if they can no longer do what they want - or used to be able to do. Whether this is important or not depends entirely on your job or personal tastes.
This is an important area, and tests were conducted to find out just how hot a CFL would get in a sealed enclosure. I fabricated a test jig that would show the effects and ran two versions of the test simultaneously, using two temperature sensors. The temperature was measured at 10 minute intervals. The main test had the CFL set up as shown in the sub-page, with a bead thermocouple taped to the lamp socket. This was installed in a housing. The second set of test results were obtained with a probe thermocouple that was used to measure the air temperature inside the test fitting, with the very tip of the probe just touching the metal top cover. The probe was inserted into the hole where the bead thermocouple lead exits the housing. Measured temperatures were ...
| Time (minutes) | Bead (°C) | Probe (°C) |
|---|---|---|
| 0 | 23 | 23 |
| 10 | 48 | 34 |
| 20 | 55 | 39 |
| 30 | 58 | 40 |
| 40 | 58 | 42 |
This is not a good result. A 10W CFL in a 3 litre enclosure is over temperature in just over 10 minutes. The full test details and a photo of the test jig used have been moved to a sub-page ... Click Here to View.
The temperature inside the plastic housing of the CFL's electronics will be 20-25°C higher than measured by either probe or bead thermocouple. A higher power CFL in a smaller (or even the same size) housing that is completely airtight (as required for outdoor use) will get far hotter (and faster) than shown in the table. Any claim that more than 50% of existing light fittings are unsuitable for use with CFLs is completely justified on the basis of this test.
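The bead readings follow a simple first-order (exponential) heating curve quite closely. The sketch below fits such a model by eye - the ~8 minute time constant is my estimate, not a measured value - and also estimates when the 50°C limit is crossed:

```python
import math

T0, T_FINAL = 23.0, 58.0   # starting and settled bead temperatures (deg C)
TAU_MIN = 8.0              # time constant, fitted by eye to the bead data

def bead_temp(t_min):
    """First-order thermal model: T(t) = Tf - (Tf - T0) * exp(-t/tau)."""
    return T_FINAL - (T_FINAL - T0) * math.exp(-t_min / TAU_MIN)

for t, measured in [(0, 23), (10, 48), (20, 55), (30, 58), (40, 58)]:
    print(f"t={t:2d} min  model={bead_temp(t):5.1f} C  measured={measured} C")

# Time to cross the 50 C manufacturer limit, from the same model:
t_to_50 = TAU_MIN * math.log((T_FINAL - T0) / (T_FINAL - 50.0))
print(f"50 C limit crossed after ~{t_to_50:.0f} minutes")
```

The simple model tracks the measured bead data to within about 1°C, and puts the 50°C crossing at roughly 12 minutes - matching the "just over 10 minutes" observation above.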
There have been a few reports of CFLs literally exploding when switched on. While it isn't really possible to give a detailed analysis without having such a unit in my possession, there are a few reasonable explanations that may cover the issues.
Moisture: If a CFL is operated where it is exposed to outside air and/or moisture, there is a chance that condensation may collect on the PCB. Because of the high operating voltages (325V DC for 230V mains), even a small amount of moisture may be enough to cause significant current between PCB tracks. This is commonly called 'tracking' and the initial discharge carbonises parts of the printed circuit board substrate - commonly a paper-epoxy or paper-phenolic material.
Once an arc is started, it will continue until enough material has been burnt away that there is no longer a conductive path - this means that PCB tracks, components, or anything else in the circuit can be blown apart - often quite violently and usually with lots of black soot and at least some smoke.
Insects, Dust, Etc.: Small insects can get into most CFL housings easily enough, and are more than capable of starting an arc. This will have the same results as moisture, with a violent cascade of failures within a few milliseconds.
Component Failure: Because the CFL is a throwaway product, the cheapest components that will work with the voltages involved are used. For example, the CFL intestines shown in Figure 3 (below) include a 150nF, 400V DC capacitor directly across the mains. This device is absolutely not designed for direct connection across 230V AC, and it will fail. Usually, specifically designed mains film capacitors die quietly, simply losing capacitance as more and more of the metallised layer is damaged. Most CFLs use non-mains rated capacitors, and these commonly fail with acrid smoke and pyrotechnics - especially since they are often used well above their safe current ratings.
Inrush Current: As noted above (and below), most CFL electronic ballasts have a rather high initial inrush current, which can easily exceed 5A and often a lot more. This current is limited in some electronics by one component - the main filter capacitor. If it has a relatively high ESR (equivalent series resistance), this will hopefully be enough to limit the current to a safe value. The problem is that as an electrolytic capacitor gets hot, the ESR falls. When the lamp is cool, the ESR should be high enough to prevent problems, but if the lamp is switched on while still hot, the ESR will be very much lower - as little as one tenth of its original value. A much higher initial current flows (it could easily reach 20A or more), and may cause a diode to fail, for example. One diode failure in a bridge rectifier circuit will always be followed by another, and within milliseconds, the filter capacitor may be connected directly across the mains. Spectacular failure is guaranteed within well under 100ms (one tenth of a second).
The failure modes described above are educated guesses, but the lamp failures are not. The possible causes listed are all quite plausible, and all can be demonstrated. Which is the most likely or most common is unknown at this stage, and will remain so until one of mine fails so it can be analysed, or I find some additional information ... either by more searching on the Net, or if some kind reader lets me know what was found in a few failed CFLs.
On this basis, use of CFLs in bathrooms is obviously not a good idea - some manufacturers warn against using CFLs in bathrooms, most don't. Lots of water vapour from hot showers is likely to cause condensation that could cause spontaneous failure. Likewise (well apart from the known problems of CFLs not even starting if it is cold enough), outdoor use means that water may enter the lamp itself, or insects, small spiders and other matter may also get inside. As detailed above, using a fully sealed housing will shorten the life of the lamp dramatically, so our options are very limited.
It seems that as far as many manufacturers are concerned, melted plastic, evil-smelling smoke and other similar issues are considered normal modes of failure at the end-of-life of a CFL. According to EnergyIdeas, one manufacturer stated that "some overheating after a lamp fails and the ballast remains energised may cause minor melting of plastic and leakage of non-toxic glue." He indicated that there is a fuse that will blow before fire danger develops. The implication is that this kind of failure is within normal expectations.
A so-called 'normal' failure is shown in the image - I must say that I do not consider such damage to be normal by any definition of the word. More information may be seen here. There are a couple of photos of failed CFLs, some additional information, and technical data that was moved from this page.
These claims make the CFL the only consumer product ever made that is expected to fail in a comparatively spectacular manner. When any other product fails, smoke, melted plastic and/or small fires (whether seen or not) are considered abnormal - protection devices are fitted to ensure that any normal failure is 'silent' - the device simply stops working, and you don't need to ventilate your house afterwards. There will be exceptions, but these should be rare, and triggered by an abnormal failure - not by a reasonable percentage of units that have simply reached their end-of-life.
Various bodies have reported consumer concerns about CFL failures, and we know for a fact that a significant number of these lamps have not failed silently, but have advertised their demise by making noises, emitting smoke, or other behaviour that is simply not expected of any normal consumer product. None of this is helped by the fact that most packaging fails to make it absolutely clear whether the lamp is suitable for various light fittings (luminaires), if it can be used outdoors, or even state that the lamp must not be used with a dimmer ... even if set to maximum.
There are countless examples of failed CFLs at Doug Hembruff's site, and some of the photos and descriptions are sufficient to make one think that perhaps it is the CFL that should be banned. Because of misleading advertising and packaging, and a number of zealots insisting that CFLs can (or must!) be used anywhere at all (but never citing any proof or documentary evidence), users often have unrealistic expectations. No-one expects the lamp to fail and burn - this is simply not anticipated with any consumer product, and nor should it be.
Above, it was indicated that CFLs are fitted with a fuse. Unfortunately, the most commonly used 'fuse' is a fusible resistor, and these are simply not suitable for use in close proximity to a thermoplastic enclosure. The photo on the right is a 10 ohm 1W fusible resistor, subjected to 8W. That this is more than capable of melting and charring plastic is pretty obvious - the temperature is roughly the same as a car cigarette lighter, and will easily set fire to any flammable material that comes in contact with the resistor body. (Please note that the resistor is not quite as hot as it looks in the photo. It was enhanced a little so everything was more visible, and that makes it look hotter.)
The ESA (Electrical Safety Authority, Canada) is concerned that it can be difficult for consumers to distinguish between what is normal and what may be a precursor to fire or some other hazardous condition. As a safety precaution, they encourage consumers to replace CFLs at the first sign of failure or aging - not always easy with a lamp in a light fitting on the ceiling! The early warning signs to look for include flickering, a bright orange or red glow, popping sounds, an odour (typically a burning smell), or browning of the ballast enclosure (base). The ESA has pointed out that CFLs should not be used in exactly the same places as indicated elsewhere in this article, and I suggest you read their information on the topic.
The ESA is well ahead of Australia (as well as many other countries), and there's not even talk of an incandescent ban in Canada. They are encouraging product manufacturers to review packaging information to support consumers in making safe product decisions. The ESA also has plans underway to update the existing Canadian safety standard for CFLs to address consumers' end-of-life product issues.
Many of the parts used in CFLs are simply not suited to the purpose. There is more technical detail in the Exploding CFLs and Other Failures sub-page, along with the failed CFL photos (Updated 17 Dec 2012).
It has to be said that the current situation is not merely intolerable, it is a disaster waiting to happen. Many people use lights (often on timers) when they are away, so that it looks like there's activity in the home. I wouldn't use any CFL as supplied for unattended use, because there is no way to know when (or how) it will fail, or what exactly will happen when it does fail. I'd be perfectly happy to use a conventional fluorescent lamp in this role, as I have never seen one 'crash and burn' when it fails. Likewise, an incandescent lamp will fail silently - no smoke, fire or brimstone - they just stop working. Unattended operation may not pose a big risk, but it's something we never had to worry about before.
There's a surprising amount of information that needs to be understood to realise the full implications of a complete ban of incandescent lamps, and more information will be supplied as it comes to hand. In the meantime, as I continue research, I hope that the amount I have been able to supply so far helps you to understand some of the potential problems.
It's interesting to see how much electronics has been packed into such a tiny space. It is also worthwhile to perform some measurements. I also recommend a web search - there is much to learn and a vast amount of information is available.
It is worthwhile to look at the circuit (or equivalent circuit) of a CFL and an incandescent lamp ...
Figure 6 - CFL Simplified Circuit [3]
Figure 7 - Incandescent Equivalent Circuit
The level of technology for the CFL (even simplified) is quite clearly vastly greater than for a conventional lamp. Likewise, the potential waste material at the end of its life means that recycling is not optional for CFLs. Suitable initiatives should be put into place immediately, if not sooner, and should be mandatory for all forms of fluorescent lamp.
It is interesting to look at the guts of a typical CFL. The photo in Figure 8 was sent to me; Figure 9 is a standard incandescent lamp.
Figure 8 - Inside a modern CFL
Figure 9 - Typical Incandescent Lamp
There is some additional info and another photo in a new sub-page ... Click to View. The newer CFL featured has some measure of power factor correction - not perfect, but a lot better than most. The old one shown (formerly Figure 10) is purely for interest's sake.
As you can see, there's no contest as to which takes more energy to produce, and look at all the parts that will be discarded when a CFL fails. The incandescent lamp uses so few materials (and so little of them) that it weighs only 31 grams, vs. 98 grams for a 15W CFL. The incandescent lamp shown is 100W, so is a little heavier than a lamp with claimed equal light output to that of a 15W CFL.
Most CFLs cannot be dimmed, so I tried an 8W, 10W and a 15W CFL (plus a few others) attached to a Variac (variable voltage transformer). Using this, they can be dimmed, but the effect is generally completely unsatisfactory. Some CFLs claim that they can be dimmed, but only with 'rheostat' type dimmers, which should have been phased out worldwide many, many years ago. Although the measured light output is approximately linear, our vision (like our hearing) is logarithmic. A dimmed CFL varies (visibly) from 'bright' to 'a bit less bright' to 'off'. Below around 90V on most I tested, they become erratic and/or go out. So, even using expensive dimmers intended for use with CFLs, the user's experience will probably not be a happy one. The 15W unit I tested would function down to a bit less than 80V and actually dimmed quite well. This is the first I've seen that will do so, but it can't be used with a normal wall-plate dimmer.
Judging from some of the bizarre comments I have seen when the topic of dimmers is raised, this is something that needs to be addressed. Modern (i.e. less than ~30 years old) conventional lamp dimmers use a TRIAC (bi-directional thyristor), in a phase control circuit. The dimmer works by preventing any voltage from getting to the lamp's filament until a certain point is reached, when it switches on. As the applied AC passes through zero, the TRIAC switches off again, and waits for the next pulse to turn on. This process takes place 100 times a second (120 / second for 60Hz countries). The circuitry is very simple, but also very effective, allowing lamps to be dimmed from almost nothing right through to full brightness.
The losses in the dimmer itself are very low - typically around 1W or so for a 100W lamp, since it is either on or off. The intermediate states (the transition between off and on) are so fast that power loss is minimal. The voltage waveform is shown below, and there is a great deal more information available on the Net. I did run a simulation and a test with a real lamp though, because it is important that people understand that a dimmer does reduce the power consumption of an incandescent lamp, and does so very effectively. There is significant waveform distortion though, and this remains cause for concern.
I used a circuit simulator to see the effect of changing the phase angle, and was easily able to measure the power, VA and power factor. The results are shown here for those who are interested to know more.
With an incandescent lamp, there is a complication ... the resistance of the filament changes depending on its temperature. At low settings of the dimmer, the filament is cooler, so has a lower resistance. Like most metals, tungsten has a positive temperature coefficient of resistance, so resistance increases with higher temperatures. This means that at low dimmer settings the lamp draws more power than the table may indicate (the table is based on a constant resistance). Nevertheless, it is obvious that the power delivered falls as the phase angle is reduced and the lamp is dimmed. A setting that just gives a slight glow (a bit less than a candle) is pretty much ideal for watching TV, and that will correspond to about 18W for a 100W lamp (from measured data below).
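As an aside, the temperature dependence can be used the other way around: the filament temperature can be estimated from the cold and full-brightness resistances in the measured data below (44 and 552 ohms). The sketch uses the common approximation that tungsten resistance rises roughly as T^1.2 (absolute temperature) - a rule of thumb, not an exact law:

```python
R_COLD, T_COLD_K = 44.0, 293.0   # measured cold resistance at roughly 20 C
R_HOT = 552.0                    # measured resistance at full brightness

def filament_temp_k(r_hot, r_cold=R_COLD, t_cold_k=T_COLD_K, exponent=1.2):
    """Estimate tungsten filament temperature from its resistance ratio,
    using the textbook approximation R proportional to T^exponent
    (T in kelvin). The exponent of 1.2 is a rule of thumb."""
    return t_cold_k * (r_hot / r_cold) ** (1.0 / exponent)

t_k = filament_temp_k(R_HOT)
print(f"Estimated filament temperature: ~{t_k - 273:.0f} C")
```

With these figures the estimate comes out a little over 2,100°C - consistent with the "around 2,000°C" operating temperature mentioned earlier in the article.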
Figure 10 - Phase Angle Control of Lamp Voltage
Although this section is not intended to be a complete lesson on dimmers or how they work, I have included Figure 10 to show the incoming (mains) voltage, and the voltage applied to the lamp. The phase angle is set at 72°, so the lamp gets 130V RMS from a 230V RMS incoming mains supply. The dimmer works as a switch, and blocks the incoming voltage for a number of milliseconds, at which point it turns on. The 'switch' automatically turns off as the current to the lamp falls to zero.
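The 130V figure can be sanity-checked with the closed-form RMS of a phase-cut sine wave. This sketch assumes an ideal dimmer and treats the quoted 72° as the conduction portion of each half-cycle (i.e. a firing delay of 108° after the zero crossing):

```python
import math

def dimmed_rms(v_mains_rms, firing_angle_deg):
    """RMS voltage delivered by an ideal TRIAC phase-control dimmer that
    blocks each half-cycle until firing_angle_deg, then conducts until
    the next zero crossing (closed-form integral of the chopped sine)."""
    a = math.radians(firing_angle_deg)
    frac = (math.pi - a) / math.pi + math.sin(2 * a) / (2 * math.pi)
    return v_mains_rms * math.sqrt(frac)

# Firing ~108 deg into each half-cycle leaves ~72 deg of conduction:
print(f"72 deg conduction: {dimmed_rms(230, 108):.0f} V RMS")
print(f"Fully on:          {dimmed_rms(230, 0):.0f} V RMS")
```

With a 72° conduction angle the computed value is about 127V RMS, agreeing with the roughly 130V quoted above (the small difference is measurement tolerance and the idealised dimmer model).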
This example shows quite definitively that dimming incandescent lamps most certainly does reduce their power consumption, your electricity bill, and greenhouse emissions. Anyone who claims otherwise is talking through their hat - the effects can be simulated or measured easily, and the results are perfectly clear.
| Brightness | Volts RMS | Current RMS | Power | Resistance |
|---|---|---|---|---|
| Off | 0 | 0 | 0 | 44 Ohms |
| Just On | 42 V | 167 mA | 7 W | 252 Ohms |
| Dull Glow * | 79 V | 231 mA | 18 W | 342 Ohms |
| Half Brightness ** | 169 V | 350 mA | 59 W | 483 Ohms |
| Fully On | 239 V | 433 mA | 103 W | 552 Ohms |
The above table shows measured data, using a TRIAC dimmer and a 100W (nominal) incandescent lamp. The total power is higher than the simulation (on the sub-page) shows, because the tungsten filament has a strong temperature dependence, so the resistance varies with the lamp's brightness. Even so, at a dull glow (marked *) typical of the level we use when watching TV, a 100W lamp is using just over 18W, or less than one fifth of the rated power. The level marked as 'Half Brightness' ( ** ) is a visual estimate, but corresponds well with the setting that gives a conduction angle of 45° - power at this conduction angle is just under 60W as shown. It is also worth noting that using an incandescent lamp at a slightly lower voltage than rated will give significantly increased life. Operation at around 90% of rated voltage will increase life by a factor of 3, but light output is reduced to about 70% of the normal level [9]. The overall efficiency of a filament lamp is reduced even further by using a dimmer, but there are very few options that provide the versatility offered by the combination ... and you do still save power.
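The life and output figures quoted from [9] follow the commonly published incandescent scaling laws. The exponents below are textbook rules of thumb (sources quote roughly 11-13 for life and 3.4-3.6 for light output), not exact values:

```python
def lamp_scaling(voltage_fraction, life_exp=11.0, light_exp=3.4):
    """Commonly quoted incandescent lamp scaling laws (approximate):
    life varies as V^-life_exp, light output as V^light_exp.
    Exponents differ between sources, so treat results as estimates."""
    life_factor = voltage_fraction ** -life_exp
    light_factor = voltage_fraction ** light_exp
    return life_factor, light_factor

life_x, light_x = lamp_scaling(0.90)   # run at 90% of rated voltage
print(f"At 90% voltage: life x{life_x:.1f}, light output {light_x:.0%}")
```

With these assumed exponents, 90% of rated voltage gives roughly a three-fold life increase and about 70% of rated light output - in line with the figures from [9] quoted above.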
Dimmers can reduce the power consumption of incandescent lamps significantly, and are a (reasonably) environmentally sound proposition if lighting needs to be adjustable. Conventional dimmers cannot be used with CFLs, and dimmers designed for CFLs cannot come even close to the range available from a traditional lamp and dimmer with the current basic CFLs available to consumers. Dimmable CFLs do exist, but are more expensive and don't work very well according to my own experiences and many forum posts worldwide (this will probably change though).
This is not an area that anyone seems to have looked at closely, so some tests were run to find out exactly what happens if a CFL is connected to a dimmer. The results were a complete surprise. The assumption is that the CFL probably won't work at all, but most do (although they don't dim). What you can't see is the RMS and peak current drawn from the mains!
The symbol on the left means 'no dimmers', but may or may not be understood by users - most of whom are normal householders with little or no technical know-how. I don't think it's clear enough, and it certainly doesn't make the point as strongly as it should.
Warning! ... If one decides to test what happens if a CFL is used with a dimmer, at some settings (with possibly most CFLs) it may actually appear to work perfectly normally. One could easily be excused for imagining that there is no problem, as long as the dimmer is set to the maximum and left there. There's no visual clue, with normal light output and no nasty noises. Certainly, the lamp can't be dimmed, but that may not seem a major concern. I have seen this done - the dimmer knob was taped to hold it at the maximum setting.
Don't do it! While it may appear to work normally, the current drawn by a typical CFL used this way increases up to 5-fold, to the point where it is potentially very dangerous. The current spikes are very narrow, but can exceed 8A with an 18W CFL. The RMS current drawn can be as high as 0.5A - over 5 times that drawn with no dimmer in the circuit (and that's with dimmer set to maximum!).
Where the CFL has a fusible resistor at the mains input, this is present to limit the maximum (peak) current, and prevent internal short-circuit failures from blowing the main circuit breaker or fuse. Fusible resistors do not open (fuse) in response to sustained excess dissipation, so if the lamp is used with a dimmer (even if set to maximum), there is a very real chance that the fusible resistor (and/or other parts) will overheat due to the massively increased current, possibly leading to a (hopefully) small fire. The fusible resistor value can vary widely. Some have a very low resistance, so the chance of serious overheating is small. Others can use values ranging from 10 ohms up to 22 ohms. Some don't use one at all, but you don't know from the outside.
This is also a potential issue with electronic timers, motion sensors and home automation systems as discussed below. One thing is of great concern in all cases - either the lamp will have a very short life (assuming it doesn't choose to catch on fire), or the dimmer or other switching circuit will be severely damaged - or both!
While many CFL packages do state that they should not be used with dimmers, some don't, and others use a rather cryptic symbol (shown above) that users may or may not understand. While we still have a choice there isn't a major problem, because people will use incandescent lamps where they have dimmers (after all, that's why the dimmer is there). Once incandescent lamps are banned, that option disappears. Those in rented premises can't remove dimmers without the owner's approval, and those who own their home (or have permission) will usually have to get an electrician to remove the dimmer and its wiring and blank off the hole. Many will find that the lamp seems to work fine, so will leave it there. The consequences are potentially very dire, if seemingly somewhat remote at first glance.
At anywhere between 3 and 5 times the normal current, the chance of a fire may seem pretty small, but even if only one house burns down or is badly damaged as a direct result, what if it's yours? Will your insurance even cover it ("You caused the fire yourself by using a CFL with a dimmer")? What if someone dies? This isn't idle speculation - several CFLs have been tested, and the same problem showed up with all of them. The chance may be 1 in 1,000,000, but with several million CFLs being forced upon people following a ban, that is far too many opportunities for a disaster.
Tests I ran showed that the operating (RMS) current could easily increase from a normal 90mA up to 300mA, with peak currents as high as 3A measured. Other tests [10] showed higher currents because a different dimmer was used - a standard wall-plate dimmer, as found in most households (the one I used is a unit I built many years ago, designed for heavy loads). Those measurements (tabulated below) also showed current spikes of around 4.4 amps worst case, reduced to 2.2 amps with the dimmer on full (peak currents are not shown in the table). The RMS current measured 0.45 amps at the 75% setting and 0.24 amps at 100% - this equates to 110 VA and 59 VA respectively.
| CFL Power (Nominal) | Normal Current (RMS) | Dimmer at 75% | Dimmer at 100% |
|---|---|---|---|
| 13 W | 83 mA | 450 mA | 245 mA |
| 11 W | 80 mA | 420 mA | 240 mA |
| 8 W | 80 mA | 330 mA | 190 mA |
| 5 W | 40 mA | 260 mA | 200 mA |
These test results are from real CFLs, connected to a dimmer set to 75% and 100%. Why test at 75%? Because it will happen - people (especially children) will fiddle with the dimmer, and they may be highly amused by the CFL becoming a flashing lamp at some settings (although not all do so). With the dimmer in circuit, a setting of 75% looks alright, in that most CFLs don't flicker or flash and have more or less normal light output, so it could easily remain like that for some time. Even if the dimmer is glued, taped or nailed at the maximum setting (not that I recommend the latter), the current is still much, much higher than it should be. At the very least, lamp life will be reduced; at worst ... ?
Just look at the current drawn! The average increase is 5 times, which means that 25 times more heat is generated in any current limiting resistor in the lamp's ballast circuit. It is inevitable that this will cause a failure, and probable that the circuit board will be badly charred or set on fire. While there is no guarantee that the lamp will catch on fire, there is likewise no guarantee that it won't. The waveform of a CFL with a dimmer in circuit is shown below, along with the normal waveform for comparison.
+ +If the fusible resistor is rated at 1W (fairly typical) and has a value of 15 ohms (also not uncommon), it will normally dissipate about 100mW - a perfectly safe power level. In the worst case shown above, the same resistor with 450mA through it will dissipate 3W, so it will get extremely hot. Certainly hot enough to cause failure in adjacent components, hot enough to melt the solder holding it into the PCB, and very likely hot enough to cause the PCB to catch on fire. I've seen boards that have caught alight because of overheated resistors enough times to know that there is a real possibility of the same thing happening in a CFL drawing 5 times its normal current.
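The figures above follow directly from P = I²R. A minimal sketch in Python (purely illustrative), using the 15 ohm value and the measured currents from the table - 83mA for normal running, 450mA with the dimmer at 75%:

```python
# Power dissipated in a CFL's fusible resistor, normally and with a
# dimmer in circuit. 15 ohms is the example value quoted in the text.
R_FUSIBLE = 15.0  # ohms

def dissipation(i_rms, r=R_FUSIBLE):
    """P = I^2 * R, in watts."""
    return i_rms ** 2 * r

normal = dissipation(0.083)   # 83 mA RMS, no dimmer
dimmed = dissipation(0.450)   # 450 mA RMS, dimmer at 75%

print(f"Normal: {normal * 1000:.0f} mW")   # about 103 mW - safe
print(f"Dimmed: {dimmed:.1f} W")           # about 3.0 W in a 1W part
```

A 30-fold increase in dissipation in a resistor rated for 1W is exactly why the part (and the board around it) chars.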
To reiterate ... never use a CFL with a dimmer in the circuit, even if it is set (and kept) at the maximum setting. Doing so places you at risk of fire, and at the very least will dramatically shorten the life of both the lamp and the dimmer. Remember that these figures were all measured using a normal dimmer and a variety of different CFLs - nothing is guessed, surmised or imagined - this is real data!
You probably won't find information this detailed anywhere else on the Net - there are brief mentions of the topic, but almost no-one has done the tests (although many people have experienced burn-outs, melt-downs and even fires).

If tests have been done, the results have not been publicised. Anyone with the skills and test equipment can verify the results, and I encourage those who are able to do so. Your results will almost certainly differ slightly because of differing mains voltage and lamp types, but the general trend will be the same. These results are compiled from tests run independently by two people, using different lamps and test gear, but with very similar results. Again, the total lack of any comprehensive mandatory standard means that no-one knows which lamps will just die quietly and which may exit in a blaze of glory (see below for suggested standards).
+ + +Firstly, it is important to understand that the above section on dimmers may also apply with any electronically switched lighting circuit. Unless you have extensive electrical and electronics experience, there is no way to know for certain, and the packaging or instructions will probably not say whether the switching system is suitable for use with CFLs. Unless it is specifically stated that the equipment is designed for use with CF lamps, it is far safer to assume that it is not suitable. While it may appear to work fine, you can't normally measure the current, so excessive current may be drawn and you'd never know.
+ +Several articles, many people, and some CFL packaging claim that CFLs cannot be used with time switches, motion sensors or other automated switching systems. This is only partially true - many auto-switching systems will work perfectly with CFLs, while many others will not. Some may appear to work, but will have the same problems as described above when a CFL is used with a dimmer (because of simple TRIAC switching circuits).
+ +Any switching system that uses a relay (an electro-mechanical switch) or a contact closure to operate the load will work with CFLs. Unfortunately it is not always easy to know, but the following might help ...
The above is nearly all 'should', 'may' and 'probably' for a reason. Because of the vast range of motion switches, timers, home automation systems, etc., it is very difficult to know whether they will work with CFLs unless it is specifically stated on the packaging or in the instructions. This is a very grey area, and it is simply impossible to provide a simple way to know beforehand whether CFLs will work with a particular auto-switching system. The only way is to test it - but even if it appears to function, there can still be a potentially serious risk. Either get a new switch that specifically states it is suitable for low energy lamps, or use incandescent lamps.
+ +You may not know if your system is really working properly, or only seems to. Unfortunately, there is no easy way to determine if any given electronic switching circuit is causing the problems referred to in the dimmer section. This is a very insidious and potentially dangerous area, usually with no tell-tale signs that anything is wrong. If there is any doubt whatsoever, please do yourself and your family a favour and stay with traditional lamps.
With any AC load, there are two things that can influence the power factor (the ratio of the actual power used, in watts, to the volt-amps (VA) drawn from the mains). Most of the formulae available only deal with sinusoidal voltage and current waveforms, because the maths are simple and the result is quite clear. To refresh the memories of those who 'used to know this stuff' and to help those who never did, the following should help ...
+ +I recommend that anyone who doubts that power factor is an issue reads (and understands) the information from Integral Energy ...
+ Harmonic Distortion in the Electric Supply System
This technical note describes the ramifications of harmonic current and its implications for all power supplies that have a poor power factor caused by the non-linear current waveform. As more and more switching power supplies are added to the network, the problem simply becomes worse.
At the power station, the alternators produce power in watts (or more commonly megawatts). A 1MW alternator can provide 1MVA (a million volt-amps), and that is its maximum. A bad power factor means that the maximum real power available from the alternator is reduced, because part of its capacity is taken up supplying VA rather than true watts. If an alternator is faced with a power factor of 0.5, its output power is reduced to 500kW, but it will get just as hot as it would if generating 1MW. Like all electrical machines, the alternator is heated by losses in the wiring; if the maximum current is 1,000A at 1,000V (1MW), a poor power factor will still cause 1,000A to flow, but the power delivered is reduced in proportion to the power factor. In theory, less input power is needed, but now more machines are required to supply the same real power.
While we can be sure that the power companies will have measures in place to correct the power factor wherever possible, they cannot correct for waveform distortion caused by 'discontinuous' load current. This adds harmonic components to the mains waveform that are extremely difficult to remove once the waveform has been distorted. Harmonic waveform distortion can only be fixed by using power factor correction in the power supply of the offending appliance(s).
Every transmission line and every transformer in the grid is subject to resistive losses in the wire, related to the current being drawn by every customer attached to the power grid. A bad power factor increases the losses in inverse proportion to the square of the total power factor of the attached loads. A total PF of 0.5 means that twice the current is drawn for the power delivered, so the losses are not merely doubled, they are quadrupled. This is in addition to the reduction of alternator capacity described above. Because of the transmission losses, more power must be delivered to the grid in order to deliver the same power to the customers.
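The 'quadrupled' claim is easy to check: for a fixed real power delivered, line current scales as 1/PF, so I²R line losses scale as 1/PF². A sketch (the 240V figure and the 0.5 ohm line resistance are arbitrary assumptions for illustration only):

```python
# Resistive transmission loss for a given delivered real power,
# line voltage, power factor and line resistance (all illustrative).
def line_loss(power_w, volts, pf, r_line):
    i = power_w / (volts * pf)   # RMS line current needed for that real power
    return i ** 2 * r_line       # watts lost as heat in the line

loss_unity = line_loss(1000, 240, 1.0, 0.5)
loss_half  = line_loss(1000, 240, 0.5, 0.5)
print(loss_half / loss_unity)    # 4.0 - losses quadruple at PF 0.5
```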
This is not a trivial issue.
Most CFLs have a claimed power factor of around 0.52 (where the figure is given at all), so a 15W CFL will actually draw just under 29VA. Because the load is not linear, the current waveform is in phase with the applied voltage, but is discontinuous. This simply means that current is only drawn at the peak of the waveform, and this effect causes a poor power factor just as readily as a phase shift between voltage and current. It's actually quite easy to obtain a power factor of less than 0.35 with a simple rectifier and filter cap circuit, but how many CFLs are that bad? I don't know, but the almost complete lack of any form of standard doesn't help matters.
+ +Remember, the supply companies must provide the total load in VA, not Watts. Based on a reasonably typical quoted PF of 0.52, each CFL in use requires almost double its rated power, because of the poor power factor. Therefore, rather than talking about a 15W CFL, we should be thinking in terms of a 30VA CFL. Just because we don't have to pay for the power doesn't mean that coal, uranium or some other fossil or non-renewable fuel isn't being used up to cover the total RMS voltage and current distribution losses caused by each and every load. You won't find this mentioned in too many articles (and none by politicians lobbying for green votes), but it is true nonetheless. Since these distribution losses can reach 20% easily (and I have even heard as high as 50% in some cases where extremely long feeders [several hundred kilometres] are used), the power factor is very important. A poor power factor will also reduce the capacity of power generating equipment, so more machines are needed to provide the same total load power.
Figure 11 - Current Waveform of a Modern CFL
13W spiral CFL: peak current = 410mA, IRMS = 93mA, crest factor = 4.4, VA = 22.4, PF = 0.58

Figure 12 - Current Waveform With Dimmer In Circuit
13W spiral CFL: peak current = 2.2A, IRMS = 245mA, crest factor = 9.0, VA = 59, PF = 0.22
The waveform shown in Figure 11 above gives a power factor of around 0.58, so the 13W CFL tested will actually draw 22.4VA - not as big a saving as is usually claimed in real terms. As you can see, it is a great deal worse if a dimmer is in circuit. Sure, you don't pay for the extra current, but the power company and the environment most certainly do. Larger transformers and heavier gauge distribution cables are needed to handle the extra current, plus more coal (or whatever) to generate the volts and amps needed to overcome the distribution losses.
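The figures quoted for the two waveforms are internally consistent, and can be checked from first principles: PF = W / VA and crest factor = Ipeak / IRMS. A quick verification sketch, assuming 240V mains (the article's Australian context):

```python
# Sanity check of the Figure 11/12 numbers: apparent power, power
# factor and crest factor from the measured currents.
MAINS_V = 240.0  # assumed nominal mains voltage

def apparent_power(i_rms):
    return MAINS_V * i_rms            # VA

def power_factor(watts, va):
    return watts / va                 # dimensionless, 0..1

def crest_factor(i_peak, i_rms):
    return i_peak / i_rms             # 1.414 for a pure sinewave

# Figure 11: 13 W CFL, no dimmer
va = apparent_power(0.093)                    # ~22.3 VA
print(round(power_factor(13, va), 2))         # 0.58
print(round(crest_factor(0.410, 0.093), 1))   # 4.4

# Figure 12: same lamp with a dimmer at 100%
va_dim = apparent_power(0.245)                # ~58.8 VA
print(round(power_factor(13, va_dim), 2))     # 0.22
print(round(crest_factor(2.2, 0.245), 1))     # 9.0
```

Note how the dimmer roughly doubles the crest factor and more than halves the (already poor) power factor.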
+ +The arguments assume that the number of CFLs used is the same as with incandescent lamps, but this may not be the case. There is a very real chance that the load will increase by the use of many more CFLs than anticipated. Because CFLs are known by many to be adversely affected by switching them on and off all the time, we may find that users decide to leave them on for much longer 'in case' they may need to go into the room later - the loo (dunny/toilet/restroom etc.) is a prime candidate for just that. Silly though it may sound, the widespread use of CFLs could actually increase the power generating requirement unless people are convinced that the lamps can be switched on and off repeatedly without damage. The only way people will be convinced is when manufacturers actually solve the problem. Note that just because you don't have a problem switching CFLs on and off, that doesn't mean that others don't. When it's cold, CFLs are rather dim when first turned on - yet another reason to leave them on rather than put up with very low light levels when you next go to the loo.
+ +The nasty waveform created by CFLs is another thing that is going to come back and bite us on the bum. Any spike waveform means that significant harmonics are added to the mains waveform, and although CFLs are only a small percentage of 'nasty waveform generators' at present, the situation will get a lot worse. Power factor correction is possible (some up-market CFLs have it now, and have done for some time), but it does add to the cost - plus more electronics to end up in landfill.
+ +Harmonics generated by non-linear loads are causing major problems in power distribution, as well as increasing the overall level of RF energy that surrounds us all. Many people think this is dangerous, others say there is no problem - exactly the same situation that existed when global warming/climate change was first raised as an issue. While older people may not need to concern themselves, what of small children? No-one really knows, but caution is well advised.
+ +There is a little more on this topic here.
+ +It is worth noting that mains waveform distortion is now becoming big business. There are more and more companies selling large inductors for use as mains filters for critical applications. Likewise, complete filter units are becoming more readily available than ever before, because the cost of replacing motors that fail because of high harmonic currents is considerable ... both the cost of the motor and machine down-time make failures very expensive.
+ +This topic is of sufficient importance that a new article will be written to describe the problems and their impact on equipment. It's high time that governments stopped messing about with things that will only annoy people, and started making rules that will have a positive effect on the whole power grid. The potential savings are a great deal more significant than banning incandescent light bulbs!
+ + +I measured the 8W CFL referred to above. I also measured a 10W version from the same manufacturer. Interestingly, both claim 80mA at 230V, and they can't both be right! Since some people have complained about strobing (which can make a moving machine appear stationary), I also measured the light modulation ('flicker') frequency. The results were tabulated so we can get a better idea of what's happening ...
| Power | Claimed Current | Measured Current (RMS) | Peak Current | VA | Power Factor |
|---|---|---|---|---|---|
| 8 W | 80 mA | 70 mA | 270 mA | 16.8 | 0.48 |
| 10 W | 80 mA | 80 mA | 338 mA | 19.2 | 0.52 |
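The VA and power factor columns follow directly from the measured RMS currents, again assuming 240V mains: VA = V × I and PF = W / VA. A quick check:

```python
# Reproduce the VA and PF columns from the measured RMS currents.
MAINS_V = 240.0  # assumed mains voltage

def va_and_pf(watts, i_rms):
    """Apparent power (VA) and power factor from rated watts and RMS amps."""
    va = MAINS_V * i_rms
    return va, watts / va

for watts, i_rms in [(8, 0.070), (10, 0.080)]:
    va, pf = va_and_pf(watts, i_rms)
    print(f"{watts} W: {va:.1f} VA, PF = {pf:.2f}")
# 8 W: 16.8 VA, PF = 0.48
# 10 W: 19.2 VA, PF = 0.52
```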
The 'flicker' frequency of the 10W lamp I measured was 38kHz, so strobing is very unlikely. The light intensity was modulated at 100Hz (120Hz in 60Hz mains countries) to a depth of about 50% - I say 'about' because the nature of the light waveform makes an accurate measurement difficult. In general, strobing is not a problem with modern compact fluorescent lamps, although it may get worse as the lamp ages.
+ +The 38kHz flicker frequency has caused infrared remote controls to malfunction - a reader described this exact issue to me some time ago. While this is hardly a common gripe, it can and will happen in some instances, and will most likely be intermittent. Sometimes the remote will work, other times not. Most users will not make the connection, because they will be unaware that CFL electronic ballasts and IR remote controls operate at similar frequencies.
+ +I also measured inrush current - the current drawn at the instant the lamp is switched on. Inrush current always varies, depending on the exact moment the lamp is switched on, but the measurements I took showed that the minimum was 2.6A, and the maximum (that I saw) was 5.8A. If you had (say) 20 CFLs in a festoon (using multiple lamps all attached to a single cable), the average current will only be 1.6A for 20 x 10W CFLs, but the peak inrush current could easily be as high as 116A for a couple of milliseconds. Your circuit breaker may or may not allow you to turn the lamps on - it will probably be intermittent.
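The festoon arithmetic is worth spelling out, because it's the peak (not the average) that trips breakers. A sketch using the figures above:

```python
# Festoon example: 20 CFLs on one cable. Running current is trivial,
# but if all lamps are switched on at the worst instant, the combined
# inrush is enormous for a couple of milliseconds.
N_LAMPS = 20
AVG_PER_LAMP = 0.080     # A RMS, measured running current of one 10 W CFL
INRUSH_PER_LAMP = 5.8    # A, worst measured inrush per lamp

avg_current = N_LAMPS * AVG_PER_LAMP       # 1.6 A total, running
worst_inrush = N_LAMPS * INRUSH_PER_LAMP   # 116 A total at switch-on
print(round(avg_current, 1), round(worst_inrush, 1))   # 1.6 116.0
```

Whether a breaker holds depends on its magnetic trip curve, which is why the behaviour is intermittent.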
As noted above, there is a definite fire risk with so-called 'normal' failure modes of CFLs (although it is much, much lower than with 12V halogen downlights, for example). Several news reports on the Net have described fires (or the imminent danger of fire), the worst so far being a description of a US$165,000 fire (the article was linked, but has now disappeared) apparently caused by a CFL connected to a circuit with a dimmer. The details in the article are somewhat sketchy, and it's rather unlikely that a follow-up will be done when the real cause has been identified. Fortunately no-one was hurt in this case, but there will undoubtedly be a fatality at some stage (if not already). It's not possible to sell tens of millions of products such as CFLs without any incidents, but it would be nice if someone actually took the risk seriously.
Part of the problem is that a CFL is a time bomb - the house fire above happened around 12 months after the CFLs were installed. That fires have been caused by incandescent lamps is not disputed - they get hot enough to cause fires quite easily - but in most cases the likelihood of a fire is almost immediately apparent.
+ +Any GLS lamp installed where it's in contact with flammable materials will only take a few minutes before the risk is obvious, and corrective action can be taken before a serious fire ensues. Fairly obviously though, the risk may not be present at the time when the lamp is installed, but can occur at any time thereafter. Yes, incandescent lamps are a potential fire hazard, but most people are aware of this because the heat is delivered immediately, and nearly everyone is aware of the risk because they know that traditional lamps get very hot.
+ +The press and most of the websites that talk about CFLs and LED lamps often mention heat - but usually only the relative lack of it compared to incandescent lamps. There's almost no information about using CFLs or LEDs with enclosed fittings, dimmers, light/movement activated switches or home automation systems, other than in articles such as this. The websites that extol the virtues of the energy saving lamps are hardly going to discourage people by disclosing the facts - largely because they don't even know. Since governments either don't understand the risks or choose to ignore them, no-one's about to be prosecuted for misleading advertising when governments and their own agencies are providing exactly the same misleading 'information' to consumers.
+ +As always, be careful with news stories and comments you find on the Net. Unless the article has good technical details and really does describe the problem in detail, it's often best to ignore it. Too many such stories are based on hearsay or journalistic 'license', and few have sufficient real information to be credible.
+ + +Well, not really, but another issue has been raised. I ran some tests, and sure enough, you can have your own little disco strobe lamp given the right (or wrong) set of circumstances. Although the flashing effect is usually quite faint, it certainly won't look faint if you are trying to sleep! In a bedroom, even the smallest amount of flashing light may disturb sleep patterns, and is definitely not recommended.
+ +In many households, you will find ceiling fans with a light under the fan. Ignoring the fact that these are usually fully sealed (so will overheat the lamp as described above), some have remote control units. This usually means that the switching is performed using a solid state switch (typically a TRIAC). When a TRIAC is used for switching, it is customary to add a small (typically 47nF or so) capacitor in parallel with the TRIAC to suppress extremely fast pulses that can cause the TRIAC to 'spontaneously' trigger. Capacitors may also be connected in parallel with relay switching systems to help prevent arcing.
Where used, this capacitor will cause quite a few CFLs to flash at a rate between around 2Hz and 6Hz - as noted, the flash is very dim and may not even be visible in bright lighting, but you most certainly will see it if the room is dark. I tested 3 CFLs I had in my workshop, and 2 of them flashed quite cheerfully. The other seemed immune (I tried a range of capacitor values). This tallies with the test details I was given, and means that there will be installations where a CFL simply cannot be used because of the low-level flashing. On the basis of the two sets of tests run, I would guess that around 50-60% of CF lamps may be affected to some degree.
+ +Further reports (including a response from an Australian CFL distributor to someone who had the problem) acknowledge this issue, and in some cases CFLs will flash simply because of the cable capacitance or the way the switch is wired. Some have described the effect as a bright flash, but this seems unlikely - it probably just seems bright in a dark room.
It isn't known if this will shorten the life of the lamp - my guess is probably not, because the energy is so low. Even so, it represents a tiny amount of wasted energy, and it may transpire that lamp life is reduced by means not yet understood. Most certainly, the flash will be sufficiently irritating in a dark room to force the lamp's removal - especially in a bedroom. The fact that many lamps will flash if there is a capacitor in parallel with the switching device (or because of the way the house is wired) means that there are further applications where CFLs cannot be used at present. While the number of people so affected will probably be small, if incandescent lamps are ever banned, users will have to search for a lamp that doesn't flash, or have their ceiling fan(s), home automation system or house wiring modified or replaced. I doubt they will be amused.
+ +It is likely that CFL manufacturers will fix this problem once it becomes well known to the public - it is already known to many (most?) of the manufacturers. In the meantime, it is a real issue, and will affect a lot of people who want to do the right thing. I have CFLs installed in my main bedroom (in a 3-lamp fitting), and had to install a small incandescent lamp in one socket to prevent the flashing at night. No LED lamp I've tested has this problem.
The most recent information to hand (from a reader whose wife has Lupus) indicates that there most certainly are health issues. There are several autoimmune diseases (Lupus being one of them) where UV light and/or light flicker can cause extreme physical pain. See Wikipedia (Fluorescent Lamp, Other Health Issues) for more on this. A web search will quickly show that there are several conditions that create extreme sensitivity to UV light ... once you know what to look for.
+ +For the remainder of the population, there is no evidence to suggest that humans are adversely affected by high frequency modulated light, but there is also no evidence that there are no long term effects. There is a great deal of concern (and a certain degree of conjecture) that fluorescent lamps of all types may contribute to health problems, in particular cancer. Try a web search for many articles that describe possible reasons in some depth.
+ +Of particular concern is the amount of UV (ultraviolet) light that all discharge lamps emit - it is significantly higher than that from an incandescent lamp. Are these claims real or just scare mongering? Based on the information above, it seems that the claims are indeed real, and will affect a considerable number of people. Some additional scientific study certainly wouldn't go astray - preferably before government lunatics impose any bans on GLS lamps. This is a fairly hot topic on the Net, and a search finds a great many sites (over 1.2 million results) discussing the link between artificial lighting and breast cancer in particular.
+ +Ultimately, it is better to err on the side of safety, but modern realities can make that extremely difficult. This information has been included in the interests of completeness, and the reader is advised to read the available information and make any decision based on that. It may prove that incandescent lighting is no better (or worse) than fluorescents in this respect, but it is not my intention to discuss this in any more depth than has been done in this section. I have neither the research material nor the medical skills needed to be able to make any recommendations, but it seems plausible that the claimed link between lighting and cancer may have some credibility - what we can do about it is another matter altogether.
+ +If a lamp decides to fail and let the magic smoke out, there is definitely a serious health risk. Despite claims that the smoke poses no danger, it depends entirely which component it comes from. A burning polyester capacitor is very bad news - the smoke is toxic, as with most burning plastics. This also applies if the PCB is severely overheated and smokes or burns. Although it is uncommon for electrolytic capacitors to catch on fire, it has happened ... I've seen the results on a number of occasions. The fumes from burning ethylene glycol (part of the electrolyte in electrolytic capacitors) should not be inhaled - ethylene glycol itself is toxic, and the smoke is unlikely to be beneficial.
+ +Never use a CFL as an all-night light for small children. Lamp failure could result in toxic fumes and possible serious injury.
This problem is well known - even a politician supporting the ban commented that the CFL in her hallway didn't give enough light when first turned on, and this made it "difficult to find something dropped on the carpet" (this was the (then) E.U. Council president Angela Merkel!). The solution is easy - just leave the light on for 5 minutes, right? An incandescent lamp may need to be on for no more than 30 seconds while one descends stairs or finds something dropped on the carpet, so its energy usage will be (say) 30s x 100W = 0.8Wh (watt-hours). Leave a 23W CFL on for 5 minutes until its light is adequate for the task, and you use 5min x 23W = 1.9Wh - more than twice as much as the incandescent! Not so energy efficient now, is it?
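The comparison is simple energy arithmetic (watt-hours = watts × hours), and is easy to verify:

```python
# Energy used by a briefly-on incandescent versus a CFL left on
# until it warms up to full brightness.
def wh(watts, seconds):
    """Energy in watt-hours."""
    return watts * seconds / 3600.0

incandescent = wh(100, 30)   # 100 W lamp on for 30 seconds
cfl = wh(23, 5 * 60)         # 23 W CFL on for 5 minutes (warm-up)
print(round(incandescent, 2), round(cfl, 2))   # 0.83 1.92
```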
+ + +It must also be considered that there are some websites that are guilty of serious scare-mongering. While the material presented may have some (often tenuous) basis in fact, it is often exaggerated to the point of being somewhere between 'quite silly' to 'insane'. One such site has made the most outrageous claims I've ever seen, and this kind of idiocy will only make it harder for people to have a sensible argument on the topic.
+ +It should be remembered that we've been using fluorescent lighting for a very long time, and the CFL is simply a compact version of the traditional tubes that are ubiquitous in offices and shopping centres. Despite the use of fluoro tubes, the world has not ended, and huge sectors of the population do not have panic attacks nor get serious UV burns as a result of working beneath them. I have 2 x 1,200mm (4') tubes directly above my workbench (as well as a LED tube), and they are no more than 500mm from me while I'm working. My eyesight has not been ruined and I've never even had the tiniest sunburn as a result of working so close to them. Likewise CFLs - I (used to) use lots of them in my workshop!
+ + +Dirty Electricity
+A favourite for a while (and it's still a topic) is so-called 'dirty electricity'. The CFL electronics supposedly pollute the normal mains sinewave and this is claimed to have serious health issues. This seems on the surface to be utter bollocks! Digging deeper doesn't help either.
I have never seen any technical data that describes the 'scientific' meter that is used to measure this 'dirty electricity', and as far as I'm concerned the entire topic is just nonsense. I'm sure that some frequencies above 50Hz (or 60Hz for North America) can be measured, but most will probably be barely above the frequencies that are typical for audio equipment. All countries have strict rules about the amount of RF interference that electronic products can produce.
+ +No claim for this 'dirty electricity' causing harm has been proved to my knowledge, and I'd be very surprised if anyone managed to show any connection whatsoever. This is exactly the kind of disinformation that makes any attempt at credible criticism very difficult. There may actually be a connection somewhere, but no-one is going to try to look for it or defend their results against the nonsense that's being generated by these pseudo-scientific purveyors of snake-oil.
In short, there are some issues: there is a small amount of UV that may affect some people, and CFLs do contain mercury. They don't contain enough for one broken CFL to pollute the water supply of a small city, and 'dirty electricity' is a myth until someone explains what they are measuring. They won't do that though, because their claims could then easily be challenged. As long as they keep their tests in the realms of magic, no-one can level any sensible complaint against them.
A topic commonly raised by proponents of a ban on incandescent lights is that the generated heat is wasted. In many areas (even in Australia), the heat is not wasted at all - it supplements other heat sources (radiators, reverse-cycle air conditioners, convection heaters, etc.).
+ +Even in temperate regions like Sydney, the little bit of extra warmth is perhaps usable for about 5 months of the year, or around 7 months in places like the UK. Small though it may be, having a 100W lamp switched on for a few hours will probably make some difference, even if only to make up for heat lost through window glass, ceilings, etc. In colder climates, the heat will hardly ever be 'wasted' - it is a usable form of additional heating for the home. Not much, but a number of people have brought this up on forum sites and elsewhere.
Because this really is (or seems to be) a relatively trivial issue, the original material from this section has been moved. Click here to see the entire topic. It turns out that it's not so trivial after all though - see 'However' below for more.
+ +A link in the sub-page may look a bit silly if launched from the popup window, so you can access it here ...
+ +Building Research Establishment
+ +As noted in the introduction, the only way to prove or disprove the wasted heat argument is a properly conducted trial. Humans don't normally apply maths and science to their everyday activities, so using these tools to prove a subtle point is, well ... pointless. It may turn out that the heat from conventional lamps is completely wasted, it could be that it makes more difference than anyone thought, or it may not make any difference either way. This can only be determined by testing real people in a real environment - not by someone dragging out reams of data and using maths to prove a point one way or another.
+ +There are now special non-combustible 'hats' available that can be placed over the downlight that prevent heat loss (or heat gain during summer) due to convection, but in my opinion this is a stop-gap measure only. While these devices can make a very big difference, the traditional 50W halogen downlight is likely to fail the next round of energy performance standards amendments, and because the enclosure is now sealed it may not be suitable for CFL or LED downlights.
+ + +Australia is way behind the US in this area. At this stage it's only a draft but I recommend you read ENERGY STAR (Criteria, Reference Standards and Required Documentation for GU-24 Based Integrated Lamps) to see the requirements in full. While Australia has adopted the Energy Star system, there appears to be little or no activity with CFLs.
+ +Although the packaging may claim 10,000 hours or more for a CFL, there is usually no guarantee that this will ever be achieved. I've used quite a few CFLs in the house and workshop, and I seriously doubt that the claimed life will ever be reached. While some manufacturers will provide detailed technical data sheets, most don't, and even for the ones that do you have to really search for the information. The stated lifespan is generally taken as that where 50% of lamps are still working - in other words, half have failed. What is not stated is the light output at end-of-life - it may be as little as half that when the lamp was new. Claims that incandescent lamps also get dimmer as they age are complete rubbish. When was the last time you saw a (mains operated) filament lamp that was blackened on the inside of the glass, but was still working? It can happen, but I haven't seen one for many years. The old style inductive ballast CFL shown in the sub page above still works, but its light output is uselessly low compared to when it was new.
+ +We can reasonably safely assume that the life is 'typical', based on other similar products, and with the lamp running continuously. Manufacturers are not going to test lamps for over a year before selling them to verify that the claimed life is accurate. It is well known that switching reduces life, but there is usually very little (or no) information provided on the pack, the maker's website or anywhere else.
+ +Standards covering these important questions appear to be somewhere between few to none. Likewise, there appear to be no standards (at least in Australia) that specify what 'warm white' actually means. It is not at all uncommon to have a number of 'warm white' CFLs from different makers all having quite different colour temperature and colour rendition. Even lamps of different vintages from the same manufacturer (and with the same claimed colour temperature) can be quite different from each other - especially those from supermarkets or department stores, which will nearly always be at the low end of the price range.
+ +Actually, there are standards, but they are completely voluntary. The only mandatory standard that applies is electrical safety, because CFLs are a 'prescribed product' in Australia. Two sets of standards (also referenced earlier) from Australian government groups (EnergyAllStars and National Appliance and Equipment Energy Efficiency Program) describe minimum energy requirements and other standards, but there is no obligation for these products to comply. According to these documents, a CFL should retain 80% or more of its light output after 2,000 hours - that is obviously a joke if a lamp is claimed to last for 10,000 hours or more. According to these 'standards', at end of life for a CFL, you'd be better off with a candle.
+ +Mandatory standards that specify minimum performance criteria should be in place now - waiting until incandescent lamps are banned (as has already effectively happened in Australia) is too late. At the minimum, these mandatory standards should cover the following ...
+ +Of these, only colour temperature is commonly provided, but this is not meaningful without the CRI figure. For example, low pressure sodium vapour lamps have a colour temperature of perhaps 2300K (my guess, but it's actually an irrelevant figure for these lamps), but have a CRI of almost zero (they are the most efficient lighting source currently available, but are an orange/yellow colour, and are typically used for street lighting).
+ +A (now empty) CFL pack I have states the colour temperature as 3500K, and says that the lamps are 8W (equivalent to 40W). It also claims the current to be 80mA (but I measured it as 60mA, a rating of ~17VA ... not 8W at all, giving a power factor of 0.47). The actual generating capacity needed is therefore closer to ½ that of the 40W incandescent lamp, not ¼ as claimed. People are being seriously misled by the term 'power' - as noted above, this may be what you pay for, but it is not what must be generated and distributed.
+ +If it seems that I'm really pushing the power factor issue, that's because I am! It is important, and almost no-one has commented on it (or even seems to know the problem exists). Power factor is real, and it reduces the actual savings in CO2 generation to significantly less than claimed.
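The power factor arithmetic above can be sketched as follows. This is a minimal illustration using the figures quoted for the 8W CFL pack; the function name is mine, not from any library.

```python
# Power factor is the ratio of real power (watts, what you pay for) to
# apparent power (volt-amperes, what must be generated and distributed).
def power_factor(real_power_w, apparent_va):
    """Return PF = P / S for the given real and apparent power."""
    return real_power_w / apparent_va

real_power_w = 8.0        # the 'power' claimed on the CFL pack
apparent_va = 17.0        # measured volt-amperes (the ~17VA figure above)

pf = power_factor(real_power_w, apparent_va)   # ~0.47

# Generating capacity needed, relative to a 40W incandescent (PF of ~1):
generator_ratio = apparent_va / 40.0           # ~0.43, i.e. closer to half than a quarter
```

This is why an 8W CFL is "half" a 40W incandescent from the generator's point of view, even though it's one fifth at the electricity meter.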
+ +For power savings, we've all seen wide variances for apparently equivalent CFLs. A 100W incandescent lamp gives a total of around 1,800 lumens. Assuming 60 lm/W for a CFL, that means you need a 30W compact fluorescent lamp to replace a 100W incandescent. My maths tells me that the CFL uses 30% of the energy of an incandescent - not 25%, not 20% and definitely not 12.5%. Anyone claiming that an 18W CFL is equivalent to a 100W incandescent lamp is trying to trick you - a 100W incandescent lamp will provide around 1,800 lumens as noted above ... not 1,350 lumens or thereabouts as is often stated in various websites and other so-called 'information'.
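The lumen arithmetic in the paragraph above can be checked directly. The 60 lm/W efficacy is the assumption stated in the text; nothing else is introduced.

```python
# Comparing a 100W incandescent lamp with a CFL of equal light output.
incandescent_w = 100.0
incandescent_lm = 1800.0            # ~1,800 lumens total, as noted above
cfl_efficacy_lm_per_w = 60.0        # assumed CFL efficacy from the text

# CFL wattage needed to match the incandescent lamp's output:
cfl_w = incandescent_lm / cfl_efficacy_lm_per_w    # 30W

# Fraction of the incandescent lamp's energy that the CFL uses:
energy_fraction = cfl_w / incandescent_w           # 0.30, i.e. 30% - not 12.5%
```

An "equivalent" 18W CFL at the same efficacy would produce only 18 × 60 = 1,080 lumens, well short of the 1,800 lumens from the lamp it claims to replace.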
+ +As a side issue, the claimed colour temperature of all fluorescent lamps is meaningless, because they are not 'black body' radiators of light (look it up).
Many CFLs used to come in a plastic 'blister' pack with a thin cardboard sheet with printed details. This has now changed (for the most part) to thin cardboard boxes similar to those that were common for incandescent lamps. Almost none has a recycling symbol or disposal warning to be seen anywhere. Some show a wheelie bin with a cross through it - that's pretty clear, but there is no information about disposal. Presumably we simply hoard the old lamps until someone offers a means of disposal.
+ +With the rapid acceptance of LED lamps, recycling is even more important. There are no nasty chemicals, but there is an aluminium heatsink which is perfectly recyclable - as long as someone is willing to take the trouble to separate the lamp into its constituent parts. It's also worth noting that in many cases the LED module is fine, and the fault that made it stop working is in the power supply. I've seen this with commercial offerings on many occasions. Provided replacement power supplies are available (and can be removed without destroying the lamp) the lamp can be repaired - sometimes by the customer! For this reason, there's a great deal to be said for using separate LED modules and power supplies (common with commercial products), as the full life of the LEDs can be obtained.
+ +Dedicated end user recyclers could potentially reclaim the steel (and possibly the glass) from incandescent lamps, but the quantities produced by the average household are so small that it would take a very long time to make the effort worthwhile. As noted above, this same thin cardboard package is becoming more popular for new CFLs, which is a step in the right direction.
+ +Many people all over the world have commented that the push to use CFLs really has little to do with the environment and a lot to do with consumption. I don't necessarily agree with this (it is rather cynical), but sometimes you wonder when you see all this packaging and electronics, with no indication as to whether it can be recycled or not. It becomes even harder to disagree when you consider the millions of existing fittings that will ruin a perfectly good CFL in as little as a couple of hundred hours, because so many light fittings are inadequately ventilated. Almost no-one who is pushing for a ban of incandescent lamps even mentions the limitations of CFLs, or any precautions that users should observe to get the maximum life from them. Now, to me, that is cynical beyond belief.
+ +As noted above, recycling is imperative, and can do a great deal to reduce CO2 production and waste. With CFLs, it must be mandatory, but what are those supporting bans on incandescent lamps doing about it? Bugger all as near as I can find.
+ +The latest from governments (in Australia at least) is the claim that we need to reduce our consumption of electricity to minimise 'carbon pollution'. A figure of 33% was mentioned recently ... pie-in-the-sky (with sauce) if ever I've heard it. It's unclear exactly how we will be able to reduce consumption by such a massive amount - many home appliances are already reasonably efficient, and the remainder can never reach the gains expected. It's really easy to make a blanket statement like that, but much harder to implement it. In this case, I'd suggest that it's impossible unless our lifestyles are drastically modified, and that simply is not going to happen other than by catastrophe.
+ +I have no idea who is advising governments these days, but they are undoubtedly either seriously over- or under-medicated.
+ +![]() |
Elliott Sound Products | Inrush Current Testing / Project 225
While this article is listed primarily as an article, it's 'dual purpose', and it has both construction details (like a project) and the detailed explanations common to an ESP article. Consequently it's listed in both the projects and articles indexes. If anyone wishes to construct it, be aware that it requires considerable experience with mains wiring, and there are some elements that may require some experimentation because of the way it works.
Inrush current is often a big problem, and although there are easy ways to mitigate it (see Project 39) it's a great deal harder to measure it accurately. A web search finds a depressingly large number of clamp-meters and other 'solutions', but methods for activating the mains waveform at a predictable phase angle are few and far between (the most useful are zero-crossing and 90°). A random power-on will certainly show the current measured, but it will almost always be different for every measurement because power-on is random. You need to be able to ensure that the mains connects at the worst time every time, or the measurement is not useful. I tried a number of search terms without success, so it's safe to assume that the specialised zero-crossing and peak switching circuit described here is unique. I have no doubt that there are lab instruments that can do much the same thing, but they will come with lab equipment prices. This is a DIY solution that works very well.
Many years ago I designed a tester that lets me apply power at the zero-crossing point of the mains waveform or at the crest of the waveform (5ms for 50Hz, or 4.17ms for 60Hz). Mine is calibrated for 50Hz since that's the mains frequency in Australia, and I don't need to test at 60Hz. The switching is predictable, with zero-crossing being the worst for transformer and other 'inductive' loads, and peak switching is the worst for switchmode supplies and other 'capacitive' loads. Note that 'capacitive' is in quotes because an SMPS is not electrically capacitive; the capacitor is isolated from the mains by the bridge rectifier. Likewise, 'inductive' is in quotes because the inductance is not a problem, it's core saturation caused as the magnetic field is initiated.
The need for a tester such as the one I built is probably minimal. For test labs and the like, they'll most likely use a programmable supply, capable of generating a sinewave at either 50 or 60Hz, and designed to turn on at the selected phase angle. One I saw was the Kikusui PCR500M AC power supply, but with an estimated cost of over AU$3,000 I doubt there will be a queue of people waiting to buy one. As with any supply that generates the voltage needed, its output current will be limited. The datasheet says it can output a peak of 3 times the rated current - that's only 6.5A at 230V, so high-current tests are not possible. A switched-mains solution may lack the ultimate precision of a dedicated lab supply, but it has the advantage of being able to provide very high current so the measurement is representative of reality.
[Figure: AC mains waveform, showing the zero-crossing and peak (90°) switching points]
Please note that the descriptions and calculations presented here are for 230V 50Hz mains. This is the nominal value for Australia and Europe, as well as many other countries. The US and Canada, along with a few other countries, use 120V 60Hz. This is not a problem - the timer can be calibrated for whatever mains frequency is appropriate.
It is likely that testing and/ or background knowledge will be needed before you will be able to make use of the tester described here. In addition, the reader/ prospective builder will have to select parts that are readily available, rather than those suggested. Component suppliers do not always provide information in the same way, and some info included by one supplier is omitted by others. This can make selection a challenge at times.
WARNING: This article describes circuitry that is directly connected to the AC mains, and contact with any part of the circuit may result in death or serious injury. By reading past this point, you explicitly accept all responsibility for any such death or injury, and hold Elliott Sound Products harmless against litigation or prosecution even if errors or omissions in this warning or the article itself contribute in any way to death or injury. All mains wiring should be performed by suitably qualified persons, and it may be an offence in your country to perform such wiring unless so qualified. Severe penalties may apply. The circuit described is designed for testing at very high instantaneous currents, and these may cause damage to test fixtures or other equipment because tests will usually involve multiple 'worst-case' events. It is the readers' responsibility absolutely to determine the suitability of the tester itself and any/ all other test fixtures.
In many cases, you can estimate the probable 'worst-case' inrush current, but if it's a manufactured product an estimation isn't good enough. In spec sheets, people generally expect measured values rather than estimations, 'educated guesses' or simulations. While these can be just as accurate as a measurement, they are theoretical rather than actual. Mostly it doesn't matter that much, since no-one knows the final location of the product, along with potentially location-specific factors such as the mains impedance. I've measured it at my workbench as ~1Ω, but it can vary, even within the same installation (house, factory, test-lab, etc.).
The two switching points of interest are shown above. Switching at the peak of the AC waveform is best for transformer loads, as well as many other inductive loads such as (some) motors, AC powered solenoids/ electromagnets, etc. (see Fig. 1.1). For capacitive loads, switching at the zero-crossing point results in the lowest inrush current. This includes switchmode power supplies, where the capacitance is isolated from the mains by a bridge rectifier, but the capacitor(s) still present a capacitive load at power-on (see Fig. 1.3). Once operating, an SMPS is not a capacitive load, it's non-linear. Failure to understand this is common, and often leads to wildly inaccurate assumptions.
Note: If your mains are at 60Hz, the timing for the waveform peak is 4.167ms, not 5ms as shown above. The tester is designed to let you select the exact timing needed, by altering the threshold voltage of an opamp wired as a comparator.
The tester is designed to let you switch between peak and zero-crossing, so the worst-case condition for a load can be determined. When equipment is used normally (with turn-on at a random point in the AC waveform), the peak inrush current can never exceed the worst-case condition. A simple power switch is effectively random, as the user cannot synchronise the moment of turn-on to any point on the AC waveform other than by accident.
When performing inrush current tests, you ideally should know the expected range in advance. The estimate doesn't need to be too precise, but without it you may spend a silly amount of time trying to make sense of the measurement data you get. A rough idea of the series resistance will get you a reasonable estimate of the worst possible inrush current. For example, a transformer with a primary resistance of 10Ω cannot draw more than 23A from 230V mains (based on the RMS value of the AC waveform). Switchmode power supplies often have minimal series resistance before the main filter caps, and the current can be much higher than you expect. Even a 220nF X2 capacitor can draw over 100A, but it lasts for such a short time (typically less than 2μs) that they don't cause a problem for switching systems.
The inrush current is usually specified as peak, and converting it to an RMS measurement is (IMO) inappropriate for a transient event. Often, you can get a fair idea by simple calculation, depending on the nature of the DUT. Switchmode power supplies are fairly easy if you have access to the internals (with power off of course). If you measure the series resistance of all parts between the mains input and the filter capacitor(s), that is the limiting resistance when power is applied at the crest (peak) of the mains waveform. The diode bridge makes this a little harder unless you know the dynamic resistance of the diodes, but an educated guess of around 1Ω each will do for most. If you measure a total resistance of (say) 5Ω, add 2Ω for the diodes (two are in circuit for each polarity of the mains waveform). The ESR of the input capacitor can be added, and this will be somewhere between 100mΩ and perhaps 2Ω, depending on value and voltage rating.
Since I use 230V, 50Hz, I know that the peak of the AC mains waveform is ~325V at 5ms after the zero-crossing. We calculated a resistance of 7Ω, so the worst case peak current is about 46A. Should the AC be turned on at any other phase angle, the inrush current will be lower than the calculated value. The value calculated this way will always be somewhat pessimistic, but you need to consider that the mains voltage is not a fixed value. The tolerance is generally ±10%, but it can (and does) vary by more on occasion.
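The worst-case estimate described above is simple enough to express directly. This is a sketch of the calculation only; the 7Ω figure is the SMPS example from the text (5Ω measured plus 2Ω for the two conducting diodes).

```python
import math

def worst_case_peak_current(v_rms, r_series_ohms):
    """Peak inrush current if power is applied at the crest of the mains
    waveform, limited only by the total series resistance."""
    v_peak = v_rms * math.sqrt(2)      # ~325V for 230V RMS mains
    return v_peak / r_series_ohms

# SMPS example: 5 ohms measured, plus ~2 ohms for the bridge rectifier diodes
i_peak = worst_case_peak_current(230.0, 7.0)   # ~46A
```

As noted, the real figure will usually be a little lower (the estimate is pessimistic), but mains voltage tolerance of ±10% can push it either way.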
With most SMPS there's another influence as well. There will invariably be an X2 capacitor in parallel with the mains, and generally another following an input common-mode inductor (choke). Even in circuits that have an inbuilt inrush limiter, the input filter is not subject to any such limitation, so if the mains is turned on at the peak of the AC waveform, the current through the X2 caps is limited only by the mains impedance and the ESR of the X2 caps. This is usually very low, so the input current is very high, but only for a very brief period. This is sometimes included in any inrush current specification, but in reality it's immaterial because the current peak is so brief.
An 'ideal' 220nF X2 capacitor will theoretically draw as much as 300A when power is applied at the mains peak, but the duration is less than 1µs. This is real, but of no real consequence because it's so brief. The only limitation is the mains impedance and any series inductance. As little as 1µH of inductance will reduce the theoretical peak current to around 110A, and spreads the current pulse to about 1.5µs. This is still immaterial, as no fuse or circuit breaker can possibly react to such a short current pulse.
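The effect of a small series inductance on the X2 capacitor's current pulse can be estimated with the standard step response of a series R-L-C circuit. This is a sketch under assumed values (1µH of wiring inductance, ~1Ω of mains impedance); it reproduces the ballpark figure quoted above.

```python
import math

def rlc_step_peak_current(v_step, r, l, c):
    """First current peak when a voltage step is applied to a series R-L-C.
    Assumes the underdamped case (R < 2*sqrt(L/C))."""
    alpha = r / (2 * l)                     # damping coefficient
    w0 = 1 / math.sqrt(l * c)               # undamped resonant frequency
    wd = math.sqrt(w0**2 - alpha**2)        # damped natural frequency
    t_peak = math.atan2(wd, alpha) / wd     # time of the first current maximum
    return (v_step / (wd * l)) * math.exp(-alpha * t_peak) * math.sin(wd * t_peak)

# 220nF X2 cap, ~1µH series inductance, ~1Ω mains impedance, switched at the
# 325V mains peak:
i_peak = rlc_step_peak_current(325.0, 1.0, 1e-6, 220e-9)   # ~110A
```

With zero inductance and only the 1Ω resistance, the same step would give 325A, matching the 'ideal' figure in the text.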
The process is less well defined for mains transformers. Ideally, they would be turned on at the peak of the AC waveform, as this minimises the inrush current (quite the opposite of what many people believe). Unfortunately, the transformer is almost always followed by a bridge rectifier and filter capacitor(s), and to get minimum capacitive inrush current zero-crossing is better. These two are in opposition to each other, so neither is optimal.
In theory, the worst-case transformer (saturation) inrush current is limited only by the primary resistance of the transformer. However, it's impossible to get this under any realistic test conditions, and you can get a rough idea of the maximum by dividing the AC RMS voltage by the primary resistance. For a transformer with a 10Ω primary, you'll probably measure a maximum inrush current of up to 23A or so.
Figure 1 has two captures combined into one, and shows the inrush current waveform captured when power is applied at both the mains zero crossing point and at the peak. The transformer is a single phase, 200VA E-I type, with a primary resistance of 10.5 Ohms. The scope scale is 5A/division. Absolute worst case current for a transformer is simply the peak value of the mains voltage (325V), divided by the circuit resistance. This includes the transformer winding, cables, switch resistance, and the effective resistance of the mains feed. The latter is usually less than 1Ω, and allowing an extra Ohm for other wiring, this transformer could conceivably draw a peak of about 28A. My inrush tester also has some residual resistance, primarily due to the TRIAC that's used for switching. Although it's bypassed with a relay, there is a time delay before the relay contacts close and this reduces the measured inrush current slightly. Peak switching quite obviously reduces the inrush current dramatically, from a measured 19A down to 4A.
A toroidal transformer will be much worse than the one shown. They saturate much harder than E-I transformers, and usually have a lower primary resistance for the same ratings.
For reference, Fig. 1.2 shows a 3.3µF cap switched on at 90° (325V peak). Look carefully at the scale - 10V/division. The current transformer's output was 100mV/A, so with 23V peak shown on the scope trace, that makes the current 230A! That means there's a total series resistance (mains, test unit and leads, etc.) of about 1.4Ω, which is in line with my expectations. The pulse lasts for only 500ns (give or take) and the 'wobbles' after it are a direct result of parasitic inductance. A very rough estimate is about 11nH of inductance, which is actually less than I would have guessed. This isn't important of course, but it is slightly interesting. Note that I recommend a 10Ω burden, giving 10mV/A.
Of greatest interest is the peak current. The current transformer I used is an AC1005, which is sold as a 5A CT, but I already knew it extended well past that. I didn't expect it to give a reading with 230A though - that is rather extraordinary performance from a cheap 5A CT. The measurement is almost certainly fairly accurate, although the current transformer will saturate at this current. The peak current can be estimated, given the known mains impedance at my workshop and the internal resistance of the tester and its mains leads. The capacitor used was designed for pulse operation, so has minimal internal resistance.
The stress placed on a bridge rectifier when subjected to such high current (even briefly) is considerable. Given the current measured, you can imagine how bad it can be with a switchmode power supply. The very high inrush current applies for both 'standard' (no PFC) and supplies with active PFC. These almost always use a NTC thermistor to limit the peak current. While effective, there are some serious compromises needed in most cases. For more information, see Inrush Current Mitigation, which discusses the techniques and limitations.
Fig. 1.3 shows a capture for a small SMPS. It's only rated for 60W (24V at 2.5A), and it has active PFC. There's a narrow spike that extends to 44A (partly obscured by the trigger marker), which is due to the input EMI filter cap (330nF X2). The main PFC input capacitor draws 32A peak. This is one of several captures, and the result was identical each time. Mains was applied at the peak of the AC waveform (325V nominal), indicating that the total series resistance in the SMPS is around 9Ω (the mains impedance is roughly 1Ω at my test bench). Repeatability is very important for tests, and the consistent results I obtained are what should be expected. If the supply is turned on at the zero-crossing, the peak current is only 6.2A (again, absolutely repeatable, but not shown).
A simulation of the circuit using an 82µF main filter cap gave almost identical results to those I measured. Given that a simulator is capable of accuracy unmatched by any test equipment this is a good result. Importantly, theory and practice are in agreement. This is always a requirement when performing tests - if you get wildly different results for a simulation or estimation and the measurement, something is wrong.
A measurement system relies on a defined turn-on behaviour. While it is possible to detect the peak of the mains waveform, using it as a reference is imprecise and it's not practical. The zero-crossing point (where the mains voltage reverses polarity) is easy to detect with acceptable accuracy, and is the method used. Zero-voltage detectors (ZCDs) are common in many phase-controlled devices, particularly dimmers and other circuits where AC phase control is required. This circuit is no different.
There are only two possibilities of any great interest; turn-on at zero or 90°, where 90° takes the turn-on point to the peak of the waveform. Zero-voltage switching is the optimum for SMPS and other capacitive loads, as the maximum rate of change is that created by the mains voltage itself. A 230V 50Hz waveform has a maximum rate-of-change (aka slew rate, ΔV/Δt or dV/dt) of 102,102V/s (0.1V/µs). A 120V 60Hz sinewave has a maximum ΔV/Δt of 63,968V/s (0.06V/µs). When power is applied at the peak, the ΔV/Δt depends on the TRIAC used, but will typically be several hundred volts per microsecond. The maximum slew rate of a sinewave is at the zero-crossing, and is calculated by ...
ΔV/Δt = 2π × f × VPeak
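The formula is easily verified for the two mains systems mentioned. Nothing is assumed here beyond the nominal voltages and frequencies in the text.

```python
import math

def max_slew_rate(v_rms, freq_hz):
    """Maximum dV/dt of a sinewave (V/s), which occurs at the zero-crossing:
    dV/dt = 2 * pi * f * Vpeak."""
    v_peak = v_rms * math.sqrt(2)
    return 2 * math.pi * freq_hz * v_peak

dvdt_230_50 = max_slew_rate(230.0, 50.0)   # ~102,000 V/s (0.1 V/us)
dvdt_120_60 = max_slew_rate(120.0, 60.0)   # ~64,000 V/s (0.06 V/us)
```

By comparison, a TRIAC switching at the 90° point imposes several hundred volts per microsecond - three to four orders of magnitude faster.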
At least in theory, if the mains is connected at the zero-crossing, the inrush current into any capacitive load (including rectifier-capacitor loads) is mitigated. However, this isn't always as easy as it sounds. If a relay is used, there will be contact bounce when it operates. This will create multiple 'events'. A TRIAC can be used, but it will be unable to turn on with zero volts across it, as no current can be drawn. TRIACs are also unsuitable for most 'electronic' loads, because there is inevitably a 'dead' period. The solution I adopted was to use a TRIAC, along with a relay in parallel. The TRIAC triggers immediately (or when the voltage is more than ±25V or so). Near the zero-crossing point, the voltage across the TRIAC reaches 50V within 500µs as indicated by the ΔV/Δt formula above. The relay contacts will close after about 2-10ms (relay dependent), bypassing the TRIAC to prevent misbehaviour with electronic loads. It's important to ensure that the relay operate time is as fast as possible. If the relay is too slow, there may be a momentary 'disconnect' if the TRIAC turns off due to a difficult load. I used a perfectly ordinary 12V relay and have never seen a 'problem' waveform.
To trigger at a particular point on a waveform you need a reference. It's possible to detect the peak, but it's poorly defined and subject to small voltage variations all the time. Almost all processes that need a specific reference for any AC waveform use a zero-crossing detector (see Zero Crossing Detectors and Comparators). Although there is always some 'dead-time' around the zero-crossing point, as long as it's predictable it doesn't matter.
The next part of the system is a timer. The zero-crossing detector discharges the timing capacitor when it goes high, and the timer starts when the voltage falls low again. The timer is set for just under 5ms to obtain peak switching (corresponding to the peak of a 50Hz waveform), or 10ms for zero-crossing. The timing needs to be set with a suitable monitoring system (a small isolation transformer and an oscilloscope), and carefully adjusted to get as close as possible to zero and peak. For reasons that escape me now, my tester uses a PIC rather than an analogue timer. It does reduce the parts count a little, but it also means that the PIC has to be programmed for the proper delay. (Yes, I know that I could have used an ADC input and a trimpot, but if I did that the parts count advantage all but disappears.)
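For an analogue timer of the kind suggested, the delay is set by the threshold at which a comparator trips while an RC network charges. The following is a sketch only: the 100k/100nF values and 12V supply are my assumptions for illustration, not component values from the article.

```python
import math

def comparator_threshold(vcc, r, c, delay_s):
    """Threshold voltage a comparator must use to trip after delay_s, for a
    capacitor charging from 0V towards vcc: v(t) = vcc * (1 - exp(-t/RC))."""
    return vcc * (1 - math.exp(-delay_s / (r * c)))

R, C, VCC = 100e3, 100e-9, 12.0     # assumed: 100k, 100nF (RC = 10ms), 12V supply

v_thresh_50hz = comparator_threshold(VCC, R, C, 5e-3)     # ~4.7V for 5ms (50Hz peak)
v_thresh_60hz = comparator_threshold(VCC, R, C, 4.17e-3)  # ~4.1V for 4.17ms (60Hz peak)
```

This is also why the timing must be calibrated against a scope as described: the exponential charge curve means small RC tolerances shift the delay noticeably.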
The idea proposed here is to use an analogue system, as it's then possible for anyone to build it without having to devise the program for a PIC. The circuit consists of a zero-crossing detector, timer, and a switch that controls both a TRIAC and a relay. The current waveform is obtained using a current transformer. I know that this arrangement works, as evidenced by the waveform captures shown in Fig. 1. It also requires a power supply, and the innards of a switchmode plug-pack (aka 'wall wart') is by far the easiest way to provide power with the minimum of fuss. As noted in the introduction, experience with mains wiring is an absolute requirement.
Note: Do not use an external plug-pack/ wall-wart with a DC connector to power this circuit. Even though the supply and low-voltage circuitry is isolated from the mains, it's a particularly (potentially) hazardous piece of test gear, and maintaining total isolation between the mains, control circuit and current transformer output is (IMO) an absolute requirement.
There are a number of zero-crossing detectors shown in the ESP application note linked above, but the one selected is the simplest arrangement that works well. It uses an optocoupler to detect when the AC voltage is within about ±5V of zero, which gives a pulse just under 1ms wide, centred on the mains voltage passing through zero. Any small timing error is easily dealt with by calibrating the timer. Since the TRIAC can't turn on until it has at least a few volts across it (depending on the load), it's not possible to turn on at exactly zero, but the error is small. Ideally, the relay should be selected to ensure its contacts close in less than 10ms.
The block diagram shows the sections used. There are three main blocks, excluding the power supply. The zero-crossing detector (ZCD) is powered directly from the mains, using an optoisolator for safety. The output pulses trigger the timer, which produces a high output 5ms (or 4.17ms) after the zero-crossing. The output switch uses a TRIAC and a relay connected in parallel. Monitoring is provided by a current transformer, with a pair of terminals for an oscilloscope. All current transformers require a 'burden' resistance (usually 100Ω for small ones) indicated as RB on the drawing.
RE and CE are used to ensure that the electronics and accessible terminals aren't floating with respect to the mains safety earth. This prevents the likelihood of getting 'tingles' if you touch the CT output terminals or the optional trigger output, and the resistance is high enough to prevent ground loops that may induce hum into the measurement. The small switchmode supply suggested will have a 'floating' common output voltage of up to half the mains voltage, and the network prevents this.
For zero-voltage switching, the TRIAC and relay are turned on with the first zero-crossing pulse detected after the 'on' button is pressed. The 5ms (or 4.17ms for 60Hz) delay timer produces an output pulse about 250µs wide, and this delayed pulse is used for 90° switching. The first pulse to arrive from either the zero-crossing detector or the timer (after the 'operate' switch is pressed) activates the TRIAC and relay. I suggest a momentary normally-closed pushbutton, because there's no reason to keep the external device turned on once you've captured the inrush waveform. Phase selection is simply a switch that connects the desired impulse. There is a very good reason for having the output indicator LED powered directly from the switched output, and this is discussed in detail below.
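The 5ms and 4.17ms delays are simply one quarter of the mains period, which is easy to verify (a quick Python sanity check, not part of the tester itself):

```python
# Delay from a zero-crossing to the following AC peak: one quarter of the period.
def peak_delay_ms(freq_hz):
    return 1000.0 / (4.0 * freq_hz)

print(peak_delay_ms(50))            # 5.0 ms for 50Hz mains
print(round(peak_delay_ms(60), 2))  # 4.17 ms for 60Hz mains
```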
If you think you need it for some obscure reason, VR1 can be installed as a front-panel control, allowing you to set the phase angle to something other than the default 90°. If you were making a dimmer or power controller this might be seen as 'useful', but in this application it's not. The two phase angles needed for any inrush test are 0° (zero-crossing) and 90° (AC peak voltage). Other phase angles will give a result partway between the two, but that's rarely necessary or desirable. The circuit is designed to apply power only when the 'Operate' button is pressed, and it turns off again when the button is released.
The heart of the system is the ZCD, as this determines the timing of the output switch. An optocoupler (4N25, 4N28, LTV817, etc.) has its LED powered from the mains, via a resistor string and a bridge rectifier. The output is buffered by an emitter-follower (Q1) so it can provide enough current for the timer and latch circuits. See Fig. 2.3 or 2.4 for more accurate ZCDs, which will work better with optocouplers having a low CTR. Please note that the two brown terminals (active/ live) are joined, as are the two blue (neutral) terminals.
Note: R15 is shown as 10Ω and I've verified that the AC-1005 CT can handle up to 70A without saturation with that burden resistance. If you need to measure higher current, R15 should be reduced further (to 1Ω), and the output will be 1mV/A. Alternatively (and preferably), use a CT designed for higher current. I've tested to over 150A (the maximum I can achieve as a continuous current using Project 207 - High Current AC Source) with a 10Ω burden (R15), with only minor signs of saturation. I'd expect linear output up to at least 200A, and probably more. R14 can be switched between 100Ω and 10Ω if you need to measure very high current.
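The burden resistor sets the output scaling directly. The quoted figures (10mV/A with 10Ω, 1mV/A with 1Ω) imply a 1000:1 turns ratio for the AC-1005; the sketch below shows the relationship, with that ratio inferred from the stated sensitivities rather than taken from a datasheet:

```python
# Output sensitivity of a current transformer with a resistive burden.
# turns_ratio=1000 is inferred from the article's 10mV/A @ 10 ohm figure.
def ct_sensitivity_mv_per_a(burden_ohms, turns_ratio=1000):
    # Secondary current is primary/ratio; output volts = secondary current x burden.
    return burden_ohms / turns_ratio * 1000.0   # mV per primary ampere

print(round(ct_sensitivity_mv_per_a(10), 3))   # 10.0 mV/A (as shown, 70A max)
print(round(ct_sensitivity_mv_per_a(1), 3))    # 1.0 mV/A (for higher currents)
```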
The timer capacitor (C1) is shown as 1µF, and it must be a film cap. Don't use a multilayer ceramic or electrolytic cap, as they are not stable over time and temperature. Test equipment must be reliable and predictable, but the wrong type of capacitor will be neither. The exact value isn't critical, as VR1 has plenty of range to account for tolerance.
The first comparator (U2A) is the timer, which runs continuously and is reset by the zero-crossing signal. The second comparator (U2B) is configured as a latch. When the output switch is in the 'off' (not pressed) position, the latch is inhibited (zero volts output). When the switch is opened the first impulse to arrive turns on the latch, the relay and TRIAC. The impulse can be selected for 'zero' or '90°' as required with Sw1. The optional scope trigger output is derived from the output of the latch. Power is only supplied to the DUT while the 'Operate' button remains pressed. Note that you cannot substitute 'any old' opamp for the LM358. It was specifically selected because its outputs can get down to (close to) zero volts, so it can switch the transistors with the minimum of parts. You can use a dual comparator (e.g. LM393), but then you must add resistors from the outputs to +12V. A value of 1k will be fine.
The output from the ZCD and timer are positive-going pulses, at a 100Hz (or 120Hz) repetition rate. Because the timer is adjusted to 5ms (or 4.17ms), this corresponds to the AC waveform's peak voltage, which may be positive or negative. While it's not difficult to set up the circuit so that it will always trigger on (say) positive peaks or an initial positive half-cycle, this was considered to be unnecessary. In 'real life', switching phase angle and polarity are random, so at least the polarity is allowed to be random in this design. The trigger output will ensure that the scope always triggers at the instant the mains is applied.
The suggested TRIAC is a BTA41, rated for 40A continuous or 400A peak (non-repetitive). You could use a 'lesser' TRIAC such as a BT139, but they are only rated for a peak current of 140A. As seen above (Fig. 1.2) this may be limiting. The TRIAC is turned on using an MOC3021 (~10mA trigger current). Any of the MOC302x series photo-TRIACs can be used, but you may need to adjust the LED current. These devices have been with us for a very long time, and are available from most suppliers. All wiring (including Veroboard traces if applicable) has to be capable of withstanding a peak current of at least 400A without vaporising.
The zero-crossing detector's optocoupler isn't critical, but some have a rather poor CTR. If you're prepared to modify the circuit a little, you can use whatever you have available. The CTR (current transfer ratio) is simply the ratio of transistor (collector) current to LED current, so a CTR of 1 (or 100%) means 1mA through the LED will cause 1mA of collector current. The relay shorts out the TRIAC, and they are activated at the same time. Most relays will close within around 10ms, and while I didn't measure mine, it's been used on a wide variety of products and hasn't missed a beat.
Note: Do not use a zero-crossing TRIAC driver (e.g. MOC303x or MOC306x series). The device used must be classified specifically as 'random phase' or 'non-zero crossing'. Zero-crossing optocouplers will not work in this circuit, because they cannot be triggered at the peak of the waveform. This should be obvious, but it's also easily missed.
One thing that's (surprisingly) important is that the 'Mains On' indicator is powered directly from the switched mains output. During some tests, I discovered that the TRIAC can 'self-trigger', so the output is still on, even after the relay and MOC are no longer powered. Capacitive loads are the most troublesome in this respect, and the 3.3µF capacitor I tested (see Fig. 1.2) would cause the TRIAC to re-trigger itself reliably. Had the indicator been powered from the DC control signal, there would have been no indication that something was amiss. It's a minor point, but IMO a fairly important one. Some transformers will also cause re-triggering, although most don't.
You may wonder why I specified 1N4148 small-signal diodes for what look like mains rectifiers. Because they come after the resistors (series strings to ensure there's no voltage breakdown), the maximum voltage across them is only a couple of volts, as determined by the LED forward voltage. These resistors should be ½W. For 120V operation, simply omit one resistor from each leg of the network. This reduces the total resistance from 40k to 20k.
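The resistor values can be sanity-checked with a little arithmetic (a Python sketch; the ~2.6V combined drop of the LED and two conducting 1N4148s is my assumption):

```python
V_RMS = 230.0
V_PK = V_RMS * 2 ** 0.5        # ~325V mains peak
R_TOTAL = 4 * 10_000           # four 10k resistors in series (two per leg)
V_DROP = 2.6                   # assumed LED plus two 1N4148 forward drops

i_led_peak_ma = (V_PK - V_DROP) / R_TOTAL * 1000.0
p_per_resistor = V_RMS ** 2 / R_TOTAL / 4   # dissipation shared by four resistors

print(round(i_led_peak_ma, 1))   # ~8.1 mA peak LED current
print(round(p_per_resistor, 2))  # ~0.33 W each, hence the 1/2W rating
```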
One thing you'll almost certainly have to test thoroughly is the ZCD. With the values shown, peak LED current is 8.1mA. There's a balancing act with a simple ZCD, because we want the pulse to be as narrow as possible, but the LED current has to be as low as we can get, consistent with reliable operation. The optocoupler I used is an LTV-817C, which has a claimed CTR of 200-400%. This is much higher than the more common 4N25/8 which has a CTR of 1 (100%) at 10mA LED current.
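CTR simply scales LED current into available collector current, so the difference between devices is easy to see (illustrative Python, using the figures above):

```python
# Collector current available from an optocoupler for a given LED current and CTR.
def collector_ma(led_ma, ctr_percent):
    return led_ma * ctr_percent / 100.0

print(round(collector_ma(8.1, 100), 1))  # 8.1 mA from a 4N25/8 (CTR ~100%)
print(round(collector_ma(8.1, 200), 1))  # 16.2 mA from an LTV-817C (CTR 200% minimum)
```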
The ZCD shown above is adapted from the ESP app. note AN005 - Zero Crossing Detectors and Comparators. It's a little more complex, but it does have the advantage (at least in theory) of a very narrow output pulse (less than 500µs). It's also more dependable than the simple version shown in Fig. 2.2, and is less likely to need any adjustment to get it right. Because the optocoupler you use will be different from the ones I've used, using a circuit that doesn't rely on a high CTR has many advantages. Having bench-tested this ZCD, I found that the minimum pulse width is about 650µs - not quite as good as a simulation indicates, but better than the simple version.
It also draws far less current from the mains, so you can use high-value input resistors. The 200k resistors shown will be 2 x 100k in series. For 120V operation, the total resistance needed is 100k on each 'leg', so omit 2 x 100k resistors. If you need a higher LED current (unlikely but possible), reduce the value of R4. As shown the LED current is about 9mA, and if R4 is reduced to 100Ω, the peak LED current is 12mA. The output is taken from the emitter of the optocoupler because the optocoupler is normally off, and it's turned on when the mains voltage passes through zero. This has the advantage of keeping the average LED current low, ensuring a long life.
It is possible to make a ZCD with a pulse width of less than 150µs, using a Schmitt-trigger logic IC. It can be powered directly from the mains (no additional supply needed), but it's overkill for this application. The ZCD's timing is immaterial for zero-crossing switching, because it's the MOC and TRIAC that determine the switching point. For 90° switching, the timing is adjustable, so an ultra-high-precision ZCD serves no purpose there either. This is definitely a case of not letting 'perfect' be the enemy of 'good'. The ZCD shown in Fig. 2.2 is quite sufficient.
Note: For what it's worth, my unit uses a BTU2540 TRIAC (long obsolete, but I have a bag full of them). This device is rated for 25A continuous and 250A peak, so it was severely overloaded when I did the 3.3µF capacitor test. It has managed to survive every abuse I've thrown at it in the 12 years or so since it was built, so using a BT139 TRIAC is likely to be quite alright for the majority of tests. I leave it to the reader to decide if a peak current of 400A (BTA41 TRIAC) is really necessary. The BTU2540 finally blew up when subjected to a 1.5kVA toroidal transformer (a total series resistance of less than 2Ω).
The supply voltage must be regulated, as the timer relies on a consistent voltage for the comparator (U2A). The simplest approach is to 'cannibalise' a plug-pack/ 'wall wart' supply and extract the PCB from it. This must be mounted securely within the enclosure. I leave it to the constructor to work out the details, but it's not at all difficult. The current needed is minimal, and will be less than 100mA under all conditions. Test the supply to make sure that it maintains regulation at low current, as some regulate poorly below about 12mA. That's roughly what you'll get when the tester is 'idle' (waiting to be activated). Once the relay is activated the current will be about 50mA.
Using a supply such as that shown means that most of your circuit testing can be done safely, and you don't need switches rated for the mains voltage. Make sure that the supply board is securely mounted, and that any wiring below the acrylic (or other plastic) is protected. This style of mounting is very secure if done properly, but there are live terminals that are accessible. If possible, use an enclosure to house the PCB, to ensure that accidental contact with the mains is not possible.
Be very careful when running initial tests, because the ZCD is powered from the mains. Insulation between the control circuit and the mains switching devices is critical. I suggest that the mains side of the ZCD be assembled as a unit and enclosed in heatshrink. Do the same for the output 'on' LED. Note that the LED leads are still regarded as being at a 'hazardous voltage', and contact with them may be fatal.
Note that the mains earth/ ground should not be directly connected to the common of the circuit. This allows you to use an oscilloscope without creating a ground loop which may cause false triggering in some cases. If the idea of 'floating' electronics doesn't appeal to you, you can use a 10Ω resistor between the mains/ chassis earth and the common. One side of the current transformer must be connected to the circuit common so there's a reference for the trigger output (if included).
The only thing needed for setup is to change the resistors feeding the optocoupler and 'Power On' LED. These circuits both use 4 x 10k resistors for 230V operation, or 2 x 10k for 120V. No other changes are needed, as the power supply will have been selected to suit the mains voltage applicable for where you live. The timer has plenty of adjustment range and will work with either 50Hz or 60Hz without changing any parts.
For calibration, you need a dual-trace oscilloscope. One channel monitors the zero-crossing pulse ('Zero' output to the switch) and should be set as the trigger channel. The second channel is used to monitor the output of the differentiator (the junction of C3 and R6). Adjust VR1 until the delay is exactly 5ms for 50Hz or 4.17ms for 60Hz. The 'Operate' switch should be pressed ('on') for this step. If the switch is closed (output off) the differentiator output is greatly reduced, but it should still be visible on the scope.
Verify that the power supply's regulation is within about ±20mV over the full likely mains voltage range, as the supply voltage determines the current into C1 and therefore affects the timing. If the supply isn't good enough, use a 10V 1W zener diode to clamp the voltage at the 'top' of R1. You'll need a resistor of about 100Ω between +12V and R1 to get 20mA of zener current, and R1 will need to be reduced to 47k. This isn't shown in Fig. 2.2, but it's easily added if need be. Most small SMPS are fairly good, but you can add the zener if you choose.
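The zener feed values follow directly from Ohm's law (a quick check of the arithmetic, not an addition to the circuit):

```python
v_supply, v_zener, r_feed = 12.0, 10.0, 100.0
i_zener_ma = (v_supply - v_zener) / r_feed * 1000.0   # current through the 100R feed
p_zener = v_zener * i_zener_ma / 1000.0               # dissipation in the zener

print(round(i_zener_ma, 1))   # 20.0 mA, as stated
print(round(p_zener, 2))      # 0.2 W, comfortably inside a 1W rating
```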
The simulation capture shows the voltages and timing you're looking for. Once this is set, the tester will trigger the TRIAC and relay at the appropriate time. This can be verified using a small transformer at the output and monitoring the secondary voltage. Because the mains voltage waveform is always subjected to some distortion (usually 'flat-topped'), there is a little leeway for the 90° setting, but if necessary you can 'fine-tune' VR1 to get the trigger point as close as possible to the actual (as opposed to theoretical) peak voltage. In real terms it makes little difference, but it's worth getting it as accurate as possible.
The latch will be turned on about 150µs early with the zero-crossing signal, because it's not a particularly narrow pulse. While this can be improved by using the Fig. 2.3 ZCD circuit, it makes no difference to the measurement. The zero-crossing turn-on point is controlled by the MOC3021 and the TRIAC, and these can't be changed. I originally experimented with the idea of using just the relay, with the timing adjusted to suit. Relay contact bounce caused far more problems than the TRIAC, so that approach was abandoned.
Because it's self-contained, this tester is very easy to use. Connect the DUT to the output socket (typically a standard mains outlet), ensure that the 'Operate' switch is off (if you use a toggle switch), and select the desired phase angle. Connect an oscilloscope to the 'Meas.' (measure) and GND terminals, and set it for single sweep. You'll need to set the scope's trigger appropriately to capture the waveform. If used, connect the 'Trig' output to the scope's trigger input and set the trigger input as needed (rising edge). Note that a digital scope is required so the trace will be displayed after it's been captured. You will almost always have to run some preliminary tests first so the scope is set to capture the entire start-up event. If you're testing a complete power supply ('linear' or SMPS), remember to either use a load or discharge the filter caps before each measurement (a load is much easier). You need to test with no residual charge in the filter caps, or the test results will not be accurate.
With the suggested current transformer, the output is 10mV/A, so a 100mV peak signal means a current of 10A. Ideally, you'll capture a number of 'events' to ensure that the waveform you see is representative and consistent. It is possible (although actually quite difficult to do even if you try) to switch the output 'on' part-way through a trigger pulse, and this may give an inaccurate reading. Normally, if the circuit doesn't trigger on the first impulse received, it will trigger on the next - the latch ensures that this occurs. The output is normally unambiguous, and the waveforms shown above were easily reproduced many times. There will often be minor variations in the peak current displayed, usually as a result of not waiting long enough before repeating the test (especially important with switchmode power supply testing).
Because the circuit operates with random phase, if you don't include the 'Trig' output you'll find that not all activations are shown on the scope, depending on its trigger settings. For example, if the scope trigger is set for positive-going pulses of greater than zero, you may not see negative-going activations. This is solved by using the optional 'Trig' (trigger) output. This is a positive-going signal that's synchronised to the mains switching point.
A test example is shown above. The power supply is a 1kVA transformer with 22,000µF filter caps for each rail (±90V) and a 1.2Ω primary resistance. Without an inrush limiter it will trip the circuit breaker for my workbench nearly every time, but it's normally only powered on via a Variac so the voltage is ramped up and there's no stress. The test waveform shown was obtained with zero-voltage switching, and only reached a bit under 10A peak. This is reduced to 7.5A peak with peak switching at 90° (also with 30Ω inrush limiting). The current is a bit less than the theoretical maximum due to added resistance in the test fixture I used. The current jumps up again after the bypass operates at ~350ms, but it only increases to about 6.5A and falls quickly after that.
This demonstrates both the tester and the inrush limiter in action with a real power supply, something you don't normally see at this level of detail. The effectiveness of both the tester and inrush limiter are quite obvious. Without inrush limiting, the power supply can draw over 100A when powered on, and the current 'surge' lasts long enough to trip my circuit breaker.
Unlike AC, DC inrush tests are easy. DC is continuous, and you can apply power whenever you like to measure the inrush. Consequently, there is no specific circuitry needed, just a switch and a means of measuring/ monitoring the peak current. You can't use a current transformer to measure steady-state DC because they only work with AC, although you can use one to measure inrush. I've tested this, and the results are pretty accurate.
Lest you think I must be mistaken, the capture shows DC inrush measured using a 100mΩ resistor (yellow) and a current transformer (violet). Neither was calibrated and there's a small error, but even so it's minimal and not worthy of much comment. The capacitor used was 220µF 400V, charged via a switch from a 50V supply. The 'wobble' at the beginning of the trace is due to contact bounce in the switch. Note that the RMS voltages shown are not accurate, and should be ignored.
The only resistance used was in the test leads and 100mΩ resistor. Both traces show 100mV/A, so with a peak voltage of 3V, the peak current is 30A. That implies an effective series resistance of 1.67Ω. This was set up as a quick test, so I made no real effort to minimise resistance. However, it still shows the trend very clearly, and all switched DC circuits with a capacitor following the switch will do the same. Without external resistance (or peak current limiting), the only limitation is the ESR of the capacitor.
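The 1.67Ω figure falls straight out of the measurement (a Python sketch of the arithmetic described above):

```python
v_supply = 50.0          # capacitor charged from a 50V supply via the switch
sense_mv_per_a = 100.0   # 100 milliohm shunt gives 100mV/A (the CT trace matched)
v_peak_mv = 3000.0       # 3V peak seen on the scope

i_peak = v_peak_mv / sense_mv_per_a     # peak inrush current
r_effective = v_supply / i_peak         # total effective loop resistance

print(i_peak)                  # 30.0 A
print(round(r_effective, 2))   # 1.67 ohms
```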
The benefit of using a current transformer is that there's no additional series resistance. It works with DC, but only to measure the inrush current, and any current drawn by the load is only registered if it changes suddenly. I suspect that few readers would have thought that this could work, but Fig. 6.1 shows that it displays the current change (Δi/Δt) very well indeed. If a larger capacitor were used with minimal series resistance, it's easy to see that the peak current can exceed 100A. DC inrush is a very real phenomenon, and regularly catches people unaware. It should come as no surprise that a simulation shows nearly identical results (but without the contact bounce).
In some cases it may be inconvenient to have to use an oscilloscope to measure the peak inrush. If this is the case, a sample and hold detector can be used, with a multimeter to measure the peak. Since we only have a single supply available, this is made harder than it would be otherwise, but a good reading is still possible within around 5 seconds after the 'event', while still using a budget dual opamp. There are limitations of course, but it should be easy enough to get a reading within 2% or so. Be aware that the circuit shown is not super-fast, so very short transient impulses may not be captured accurately. The output is only 10mV/A, so it won't capture and hold low current accurately.
It's probably easy to look at the circuit and think "that can't possibly work", but it does. See the Precision Rectifiers application note, Fig 8. It's not much use with very low current, and with the suggested current transformer and 10Ω burden resistor (R15, Fig 2.2) the lower limit is around 10A (100mV peak output). Operation up to 900mV peak (90A) is possible with fairly good accuracy. The biggest issue is the LM358 opamp, which has a significant input current. This causes the hold capacitor (C1) to charge, so the voltage increases with time. If R15 is 10Ω, the maximum current is about 100A (1V = 100A). To improve low-level accuracy, R15 can be increased to 100Ω - you can use a switch to change the range.
Readings have to be taken fairly quickly (ideally within about 5 seconds) or the measured result may be incorrect. Within the 5-second 'window', you should get a reading that's well within 1%. The worst-case input current for an LM358 is 100nA, which will cause C1 to charge at a rate of 10mV/s. For the 'typical' value (45nA), the cap will charge (at least in theory) at a rate of 4.5mV/s. This is not a linear relationship though, and it's hard to compensate accurately. However, a bench test showed that it's not likely to be an issue, largely due to the much larger than 'typical' hold capacitor.
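The drift figures come from dV/dt = I/C. Taking the quoted 10mV/s at 100nA implies a 10µF hold capacitor (an inference, not a stated value), and the same arithmetic predicts the behaviour of the 33µF capacitor used in the bench test described below:

```python
# Voltage creep on the hold capacitor due to opamp input bias current.
def drift_mv_per_s(bias_na, cap_uf):
    return (bias_na * 1e-9) / (cap_uf * 1e-6) * 1000.0   # dV/dt = I / C

print(round(drift_mv_per_s(100, 10), 1))  # 10.0 mV/s, LM358 worst-case bias
print(round(drift_mv_per_s(45, 10), 1))   # 4.5 mV/s, 'typical' bias
print(round(drift_mv_per_s(45, 33), 1))   # ~1.4 mV/s with a 33uF hold cap
```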
By using a much larger cap than normal, the 'voltage creep' is minimised. To use this circuit, press the 'Reset' button, then trigger the inrush tester. The peak voltage will be held for long enough for you to read it, and the output is 10mV/A. If you read 3.5V on your multimeter, the peak current was 35A. Don't delay between the reset and operate processes, as C1 will start to charge as soon as the Reset button is released. If you're really lucky, the input current from the LM358 will match the leakage of the capacitor, but I think that's probably too much to ask for.
D3 also has some (temperature dependent) leakage, and that helps to prevent voltage drift. Ideally, U2A would be a JFET input opamp (e.g. TL071), but they won't work properly in this role with a single supply. You can use a CMOS opamp (e.g. TLC277) which will maintain the peak measurement for a very long time, but it's an added part that probably isn't worth the extra trouble. If you were to use a TLC277 or similar, D3 should be removed.
I tested an LM358 as a buffer with a 33µF 25V cap, and it started at 6.4mV (the output can't get exactly to ground). It took 42 seconds for the voltage to rise to 50mV (1.2mV/s), much better than I expected. With the cap charged to 5V (equivalent to 50A), the voltage remained quite stable, so capacitor leakage and bias current were presumably balanced. Provided the measurement is taken fairly quickly, you can expect it to have more than acceptable accuracy. A scope is still the best of course, as it lets you see just how the inrush current progresses with time.
Something you'll see advertised is a clamp-meter, specifically for measuring current without having to break the wire. Many have a 'peak hold' function, which is alleged to let you measure inrush current. I have one, and it can't - despite the claims. Yes, it has 'Hold', 'Min' and 'Max' functions, but unless it samples at the exact moment the inrush surge occurs, it can't capture it. Try as I might, I couldn't get that to happen. I tested it with the same capacitor used for the Fig 6.1 capture, and it failed every time. My scope assured me that the full 30A was delivered, but the highest reading I obtained was 1.8A, well shy of reality. These meters will measure inrush for long-duration events (a motor starting for example), but they cannot (and do not) handle short-duration events at all.
Normally, this would be published as a project, because it includes a full schematic and lots of construction info. However, because it's so specialised I decided that it was more appropriate for it to be presented as both a project and an article. I don't expect that many people will actually want to build one of these testers, because it's not needed for any 'normal' hobbyist testing. However, I may be mistaken, and as it's not particularly expensive to build, it is educational to see just how much current some products draw when switched on. Whether this justifies construction or not is up to the reader.
It's also ideal for verifying that a soft-start circuit (aka inrush limiter) works properly, and has an acceptable delay. I've used mine both to test commercial products (mainly lighting) and to verify that the Project 39 soft-start circuit works properly with a variety of transformers and filter capacitors. I know that I can easily estimate (and/ or simulate) the results fairly easily, but being able to prove it on the workbench is always useful. It's also been used to show scope captures of transformer inrush in other articles and projects.
Without a tester like this, it's very difficult to capture the worst-case inrush current. Most electronics are turned on with a manual switch, so the switching phase is random. There are also connections and disconnections caused by contact bounce (which occurs with all mechanical switches, including relays). The TRIAC ensures that there is no 'bounce', because it turns on cleanly. If the relay is a bit slow to pick up it's theoretically possible for the TRIAC to cease conduction, but I've not seen any evidence of that happening when using my unit. The 5A current transformer may seem like a serious limitation, but as shown above I had no difficulty measuring 230A with a 3.3µF capacitor, and every measurement I've taken was well within expectations.
There's a likelihood that some people will be misled by claims that clamp-meters can measure inrush. If they do manage to get a reading, it's taken at a random phase angle, and may underestimate the true peak very badly. They do not capture the peak value accurately unless it occurs at the instant the meter takes a sample, something that's highly unlikely in most cases. The sample & hold circuit shown in Fig. 7.1 may come in handy, but an oscilloscope is needed to get a meaningful reading. Inrush testing isn't easy, but most of the time it's not essential either.
Information on how to build your own inrush tester is virtually non-existent (I found nothing at all), and my tester was designed using basic principles, so there are no external references. Although the original unit I built uses a PIC, the version shown here uses discrete parts throughout. The PIC I originally used required a delay loop determined by experiment, as it had an inherent delay that skewed the results. The discrete version uses a simple timer that can be set easily. The few references throughout this article are all to other pages on the ESP website.
Elliott Sound Products: Inrush Current Mitigation
Inrush current, explained very simply, is the current drawn by a piece of electrically operated equipment when power is first applied. It can occur with AC or DC powered equipment, and can happen even with low supply voltages. There is now a second 'instalment' on this topic - see Soft Start Circuits, which covers some areas in greater detail than found here. The two share some information, but the second instalment has more oscilloscope captures of some of the less obvious approaches. Also, see Project 39, which is a well-used and very popular inrush limiter, used by hundreds of ESP customers.
By definition, inrush current is greater than the normal operating current of the equipment, and the ratio can vary from a few percent up to many times the operating current. A circuit that normally draws 1A from the mains may easily draw 50 to 100 times that when power is applied, depending on the supply voltage, wiring and other factors. With AC powered equipment, the highest possible inrush current also depends on the exact time the load is switched on.
In some cases, it's best to apply power when the mains is at its maximum value (peak voltage = RMS voltage × 1.414), and with others it's far better to apply power as the AC waveform passes through zero volts. Iron-core transformers behave best when the mains is switched on at the peak of the waveform, while electronic loads (a rectifier followed by a filter capacitor for example) prefer to be switched on when the AC waveform is at zero volts.
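The peak values for the two common mains voltages are easily worked out (a quick Python check of the × 1.414 relationship):

```python
import math

def peak_volts(v_rms):
    # Peak of a sinewave is RMS x sqrt(2) (~1.414)
    return v_rms * math.sqrt(2)

print(round(peak_volts(230), 1))   # 325.3 V
print(round(peak_volts(120), 1))   # 169.7 V
```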
This is a surprisingly complex topic, and one that is becoming more important than ever before. More and more household and industrial products are using switchmode power supplies, and they range from just a few Watts to many hundreds of Watts. Almost all of these supplies draw a significant over-current when power is applied, and almost no-one gives any useful information in their documentation.
Please note that the descriptions and calculations presented here are for 230V 50Hz mains. This is the nominal value for Australia and Europe, as well as many other countries. The US and Canada, along with a few other countries, use 120V 60Hz. This is not a problem - all formulae can be recalculated using whatever voltage is appropriate. For most examples, the frequency is (more or less) immaterial.
It is not possible to provide a lot of detail for every example, so in many cases a considerable amount of testing or background knowledge may be needed before you will be able to make use of the information here. In addition, component suppliers do not always provide information in the same way, and some info included by one supplier is omitted by others. This can make selection a challenge at times.
WARNING: This article describes circuitry that is directly connected to the AC mains, and contact with any part of the circuit may result in death or serious injury. By reading past this point, you explicitly accept all responsibility for any such death or injury, and hold Elliott Sound Products harmless against litigation or prosecution even if errors or omissions in this warning or the article itself contribute in any way to death or injury. All mains wiring should be performed by suitably qualified persons, and it may be an offence in your country to perform such wiring unless so qualified. Severe penalties may apply.
While this is not an article that describes any construction, it does involve measurements that are hazardous, and that may require specialised equipment to ensure safety. If you do not have the required equipment, please do not attempt any of the measurements shown. Never connect oscilloscope probes to the mains, as destruction of the 'scope is probable. Under no circumstances should an oscilloscope be operated without a safety earth/ ground connection via its mains lead.
All current measurements were taken using the Project 139A and/or Project 139 current monitors, which ensure that no direct connection to the mains is needed. Switching at the zero-crossing and peak AC waveform was done using a specialised test unit that I designed and built specifically for assessing inrush current on a variety of products. For details of the tester, see Inrush Current Testing Unit (added in May 2022).
+ + +Inrush current is also sometimes known as surge current, and as noted above is always higher than the normal operating current of the equipment. The ratio of inrush current to normal full-load current can range from 5 to 100 times greater. A piece of equipment that draws 1A at normal full load may briefly draw between 5 and 100A when power is first applied.
This current surge can cause component damage and/or failure within the equipment itself, blown fuses, tripped circuit breakers, and may severely limit the number of devices connected to a common power source. The following loads will (or may) all have a significant inrush current, albeit for very different reasons ...

The list above covers a great many products, and with modern electronics infiltrating almost every household and industrial item, it actually covers just about every product available. Few modern products are exempt from inrush current - at least to a degree. Some of the most basic items we use do not have an issue with inrush current at all - most are products that use heating coils made from nichrome (nickel-chromium resistance wire) or similar. The current variation between cold and full temperature is generally quite small. This applies to fan assisted, column and most radiant heaters, toasters and electric water heating elements. Apart from these few products, almost everything else will have a significant inrush current.

In some cases, we can ignore the inrush current because it is comparatively small, and/or extremely brief. A few products may draw only double their normal running current for a few mains cycles, while others can draw 10, 50 or 100 times the normal current, but for a very short time (often only a few milliseconds). Some products can draw many times their normal current for an extended period - electric motors with a heavy starting load or power supplies with extremely large capacitor banks being a couple of examples.
Although they are being banned (either by decree or stealth) all over the world, there are still many incandescent lamps in use, and this will not stop any time soon. Most traditional filament (incandescent) lamps are known to fail at the instant of turn-on. This is for two reasons - the filament is cold so has much lower resistance than normal, and the thermal shock can cause a fracture.

When power is applied, there is a high current 'surge', along with thermal shock and rapid expansion of the tungsten. This doesn't affect the lamp initially, but as the filament ages it becomes thinner and more brittle, until one day it just breaks when turned on. For very large lamps used for theatrical lighting (amongst other things), the solution is to preheat the filament - just enough power is applied to keep the filament at a dull red. Full power is almost never applied instantly - it is ramped up so the lamp appears to come up to full brightness very fast, but this is a simple trick that works because the response of our eyes is quite slow.

The cold resistance of a tungsten filament is typically between 1/12 to 1/16 of the resistance when hot. Based on this, it might be expected that the initial inrush current for a cold filament would be 12 to 16 times the current at rated power. The actual initial inrush current is generally limited to some smaller value by external circuit impedance, and is also affected by the position on the AC waveform at which the voltage is applied.

I measured the cold resistance of a 100W reflector lamp at 41 ohms, and at 230V (assuming the power figure is accurate) the resistance will be around 530 ohms - a ratio of 12.9:1 and comfortably within the rule of thumb above.
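The cold/hot figures above can be checked with a couple of lines of arithmetic. This is only a sketch of the calculation, using the measured values quoted in the text:

```python
# Hot (operating) resistance implied by a lamp's power rating: R = V^2 / P.
def hot_resistance(power_w: float, volts: float) -> float:
    return volts ** 2 / power_w

r_hot = hot_resistance(100, 230)   # ≈ 529 ohms for the 100W / 230V lamp
r_cold = 41.0                      # measured cold resistance from the text
ratio = r_hot / r_cold             # ≈ 12.9:1, within the 12-16 rule of thumb

print(f"hot ≈ {r_hot:.0f} ohms, ratio ≈ {ratio:.1f}:1")
```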
The time for the initial inrush current to decay to the rated current is determined almost entirely by the thermal mass of the filament, and ranges from about 0.05 seconds in 15W lamps to about 0.4 seconds in 1500W lamps [1]. This varies with the rated voltage too - a 12V 50W lamp has a much thicker (and therefore more robust) filament than a 230V 50W lamp for example. If incandescent lamps are always either faded up with a dimmer or use some kind of current limiter, they will typically last at least twice as long as those that are just turned on normally.

Traditional (iron-core ballast and starter) fluorescent lamps also draw a higher current during the switch-on cycle. During the startup process, there are filaments at each end of the tube that are heated, and this draws more current than normal operation. Contrary to what you might hear sometimes, this startup current is typically only between 1.25 and 1.5 times the normal current, and it is not better to leave fluorescent lamps on than to switch them off when you leave the room. However, constant switching will reduce the life of the tube, so there is a compromise that depends on the application.
Power factor correction (PFC) capacitors are used in parallel with many fluorescent lamp ballasts, especially those designed for commercial/industrial use. These are necessary to minimise the excess current drawn by a passably linear but reactive load. When power is turned on, the inrush current may be very high - typically up to 30 Amps or more depending on the exact point in the mains cycle when power is applied! This is many times the operating current of the PFC capacitor (as determined by the capacitance, voltage and frequency).
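To see why the inrush dwarfs the running current, consider a hypothetical fitting with a 6µF PFC capacitor. The capacitor value and the effective series impedance below are assumptions made for the sake of the example, not figures taken from any particular fitting:

```python
import math

# Assumed values for illustration only - not from the article.
V_RMS, F = 230.0, 50.0
C = 6e-6            # assumed PFC capacitor for a hypothetical fitting
R_SERIES = 10.0     # assumed effective wiring impedance, ESR, contacts etc.

i_operating = V_RMS * 2 * math.pi * F * C   # ≈ 0.43 A steady-state cap current
v_peak = V_RMS * math.sqrt(2)               # ≈ 325 V
i_inrush = v_peak / R_SERIES                # ≈ 32 A if switched at the peak,
                                            # consistent with '30 A or more'
```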
Many fluorescent tube lights are now using the relatively new T5 tubes, and these are specifically designed to use electronic ballasts. Even the older T8 tubes will give more light output with a high frequency electronic ballast, and we will eventually see the iron-core ballast disappear completely. The electronic versions can be made to be more efficient, but the circuitry won't last anywhere near as long. Some of the efficiency gained will be lost again when the ballast (or the entire fitting) has to be replaced because a $0.10 part has failed.

Many other lamps also have (often very) high inrush currents, but these will not be covered here.
This is such an important topic that some explanatory notes are essential. "Why is it essential to know?", you may well ask. Simply because so few modern loads are resistive, and power factor correction (PFC) is (or will be) used in a vast array of products. Many loads that currently have little or no PFC will be required to perform very much better in the future, and this has already happened with some categories of equipment. Many PFC circuits draw very high inrush current when switched on. If you want a more in-depth explanation of power factor, see Power Factor - The Reality.

Power factor is not well understood by many people, and even some engineers have great difficulty separating the causes of poor power factor. Simply stated, power factor is the ratio of 'real' power (in Watts) to 'apparent' (or imaginary) power (in Volt-Amps or VA). It is commonly believed (but only partially correct under some specific circumstances) that power factor is measured by determining the phase angle between the voltage and current, commonly known as CosΦ (Cosine Phi - the cosine of the phase angle). This is an engineering shorthand method, and does not apply with any load that distorts the current waveform (more on this shortly).
An inductive load such as an unloaded transformer will draw current from the mains, but will consume almost no power (note that a loaded transformer passes the load seen by the secondary back to the mains, and it's usually not inductive). Fluorescent lamps use a 'ballast' - an inductor that is in series with the tube. Similar arrangements are used with other types of discharge lighting as well. For the sake of simplicity, we will use a resistive load of 100 ohms in series with a 1H (1 Henry) inductor. Voltage is 230V at 50Hz, so the reactance of the inductor is 314 ohms. Total steady-state circuit current is shown in Figure 1, for both the inductive and capacitive sections. The inductive current lags the applied voltage by about 72°, and the capacitive current leads the voltage by 90° (voltage is not shown as it would make the graph too difficult to read).

Without the capacitor (C1), the mains current is 698mA (700mA near enough) in this circuit - an apparent power of 161VA. However, the power consumed by the load (R1) is only 48.7W - 698mA through 100 ohms (I²R). Therefore, the power factor (PF) is ...
PF = PR / PA          where PR is real power and PA is apparent power

PF = 48.7 / 161 = 0.3
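The Figure 1 numbers above can be reproduced with basic AC circuit arithmetic. This is a sketch of the calculation only (230V, 50Hz, R1 = 100 ohms, L1 = 1H, as stated above), not a circuit simulation:

```python
import math

V, F, R, L = 230.0, 50.0, 100.0, 1.0

XL = 2 * math.pi * F * L               # inductive reactance ≈ 314 ohms
Z = math.hypot(R, XL)                  # series impedance magnitude ≈ 330 ohms
I = V / Z                              # mains current ≈ 698 mA
P_real = I ** 2 * R                    # ≈ 48.7 W dissipated in R1
P_apparent = V * I                     # ≈ 161 VA
PF = P_real / P_apparent               # ≈ 0.3, as in the text
phase = math.degrees(math.atan2(XL, R))  # current lags by ≈ 72°
```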
This is considered very poor, because the power company and your wiring must supply the full 700mA, but only a small fraction is being put to good use (about 213mA in fact), and only about 70V of the input voltage is available for the 100 ohm load. The majority of the current is out of phase with the voltage, and performs no work at all. This type of load is very common (all inductive loads in fact), and is easily fixed by cancelling the inductance with a parallel capacitor. Scams that claim that a silly power factor correction capacitor will make "motors run cooler" are obviously false - the inductive current is not changed!
For the above circuit, the capacitance needed is about 9µF and it will draw around 650mA (again at 230V, 50Hz). Because the capacitive and inductive currents are almost exactly 180° out of phase with each other, the reactive parts cancel as shown by the graph. As a result, the generator only needs to supply the 48.7W used by the load, and the supply current falls to 213mA - exactly the value needed to produce 48.7W in a 100 ohm load (ignoring losses). The current we measured in the inductor (698 mA) does not change when the capacitor is added. The difference is that the majority is supplied by the capacitor and not the mains.
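The 9µF value can be derived by sizing the capacitor so that its leading current cancels the lagging (reactive) component of the load current. A sketch of that arithmetic, using the same circuit values as above:

```python
import math

V, F, R, L = 230.0, 50.0, 100.0, 1.0

XL = 2 * math.pi * F * L
Z = math.hypot(R, XL)
I = V / Z                                  # total load current ≈ 698 mA
I_reactive = I * (XL / Z)                  # lagging component to be cancelled
C = I_reactive / (2 * math.pi * F * V)     # ≈ 9.2 µF - 'about 9µF' as stated
I_cap = V * 2 * math.pi * F * C            # capacitor current ≈ 650 mA
I_mains = I * (R / Z)                      # in-phase residue ≈ 213 mA
```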
One of the greatest problems with the idea of power factor is that many of the claims do not appear to make sense. The above example being a case in point - it seems unlikely that adding a capacitor to draw more current will actually cause it to fall. To understand what is going on requires a good understanding of reactive loads, phase shift and phase cancellation - even though some of it might seem nonsensical, it's all established science and it does work. For example, a leading phase angle implies that the current occurs before the voltage that causes it to flow, and while this might seem impossible, it is what happens in practice. It usually only takes a few cycles to set up the steady state conditions where this occurs.

Inrush Current: The capacitor will be discharged when the mains is off, but when power is applied, the cap appears to be close to a dead short at the instant of switch-on. The inrush current is limited only by the mains wiring resistance and the ESR (equivalent series resistance) of the capacitor. Fluorescent lamps also require a starting current to heat the filaments, and this adds to the inrush current.

Where some of the old-timers (and the not-so-old as well) get their knickers in a twist is when the load current is distorted. It has been argued (wrongly) on many a forum that the voltage and current are in phase, so power factor is not an issue. This is completely wrong - those who argue thus have forgotten that the CosΦ method is shorthand, and only applies when both voltage and current are sinewaves. It has also been argued (and again wrongly) that the capacitance following the bridge rectifier creates a leading power factor. It doesn't! By definition, a reactive load returns the 'unused' current back to the mains supply, but this cannot happen because of the diodes. Non-linear circuits have a poor power factor because the current waveform is distorted, not because of phase shift.

Note in the graph below that there actually is a 'displacement', with the maximum current peak occurring slightly before the voltage peak. However, this is not a leading power factor as many might claim, it's just a small displacement in an otherwise distorted waveform and doesn't mean anything even slightly interesting.

Figure 2 shows the test circuit and waveforms for a non-linear load. These are extremely common now, being used for countless small power supplies, computer supplies, etc. Most power supplies below 500W will use this general scheme. The load won't be a simple resistor, but rather a switchmode power supply used to power the equipment. Note that current flow starts just before the AC waveform peak to 'top-up' the partially discharged filter cap (C1). Input current ceases just after the peak voltage, as the cap is fully charged and discharges much slower than the rate-of-change of the mains voltage.
The power provided by the above circuit is 48W - as close as I could get to the previous example. Input current is 454mA, so apparent power is a little over 104VA. Power factor (calculated the same way as above) is therefore 0.46 - again, not a good result. Most power companies prefer the PF to be 0.8 or better (1 is ideal).
The big problem we have with this circuit is that adding a capacitor does no good at all, nor does adding an inductor. Adding both (called a passive PFC circuit) will improve things a little, essentially by acting as a filter to reduce the current waveform distortion. Passive PFC circuits are physically large and expensive, because they require bulky components. The above circuit can have a considerably improved power factor (perhaps as high as 0.8 without becoming too unwieldy), but the inductance and capacitance needed will still be quite large. In a simulation, I was able to achieve a PF of 0.83 by adding a 1.5µF capacitor and a 100mH inductor, but these are neither cheap nor small. The inductor will also be quite heavy.

Because of the severe waveform distortion (which the power companies hate), many new switching power supplies (especially those over 500W) use active PFC. This requires special circuitry within the supply itself, and if well done can achieve a PF of at least 0.95 - I've seen some that are even better. This is not without penalty though - there is more circuitry and therefore more to go wrong, and the cost is higher. Efficiency is usually slightly lower because of the additional circuitry needed - no circuit is 100% efficient.

It is expected that all switchmode power supplies above perhaps 20W or so will eventually require basic PFC circuitry to achieve at least 0.6 or so without serious current waveform distortion. As most people are well aware, the cost of power is increasing all the time, and anything that increases distribution costs (such as poor power factors) will be passed on to the consumer. The effects of the PFC circuits on inrush current are described further below.
While incandescent lamps have always been a common source of inrush current (fairly modest by modern standards), until fairly recently motors and transformers were the only other sources of very high inrush currents. A 500VA transformer is hardly a behemoth, but is easily capable of an instantaneous current of over 50A if the external circuit will allow it. Even relatively small electric motors can draw very high instantaneous currents, and also draw a higher than normal current during the time taken for them to come up to speed.
This is a real issue for power transformers used for amplifiers and power supplies, but it is far worse when large distribution and sub-station transformers are involved. At the voltage and power levels involved, simple techniques that are quite effective with small transformers cannot be applied without significant additional cost and complexity. Ultimately it comes down to the design of the transformers, which is decidedly non-trivial for distribution and sub-station units. To minimise losses (which can become very expensive), these transformers must be as efficient as possible, which tends to make the problems worse.
There are several added complications with electric motors that would fill a sensibly sized article by themselves, so I will concentrate on transformers - these are very near and dear to the hearts of DIY people everywhere. Some of the factors for motors are almost identical, but others are too complex to explain for the purposes of this article.
I have described a transformer soft start circuit (see Project 39), and this is specifically designed to limit the inrush current of a large transformer. It is recommended for any tranny of 500VA or more, as these draw a very heavy inrush current. In common with anything that draws much more current at switch-on than during normal running, the maximum inrush is determined by (amongst other things) the point on the AC waveform where power is applied.

When we switch on an appliance, in 99% of cases it's just a simple switch, and there is no control over the point where power is connected. It may connect as the mains waveform passes through zero, or it may connect at the very peak of the voltage waveform. Mostly, it will be somewhere between these two extremes, and the first partial (or half) cycle could be positive or negative. AC circuits (including power supplies with full-wave rectifiers before the main circuitry) don't care about the polarity, but they do care about the instantaneous voltage.

Transformers and other inductive circuits behave in a manner that is not intuitive. Should the power be applied at zero volts (the zero crossing point), this is the very worst case. As the voltage increases the core saturates, and peak current is limited by one thing only - circuit resistance. Since a 500VA toroidal transformer will have a typical primary resistance of around 4 ohms (usually less than 2 ohms for 120V countries), the worst case peak current is determined by ...
IP = VPeak / R

IP = 325 / 4 = 81A
External circuit resistance can be added into the formula, but in total it is unlikely to be more than 1 ohm in most cases, so the worst case peak current is still around 65A. Consider that a 500VA transformer at full load will draw a little under 2.2A, so inrush current may be up to 30 times the normal full load current. This is significantly worse than a typical incandescent lamp. Note that the transformer winding can never draw more current than is determined by Ohm's law - it will usually be less, but the formula above is for the worst possible situation. The situation would be different if there were a way to prevent saturation, such as using a core that is many times larger than necessary, but this is clearly not an option due to size and cost.
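The worst-case arithmetic above can be collected into a few lines. This is only a restatement of the figures quoted in the text (500VA, 230V, 4 ohm primary, 1 ohm of external wiring):

```python
import math

V_RMS, VA = 230.0, 500.0
R_PRIMARY, R_EXTERNAL = 4.0, 1.0

v_peak = V_RMS * math.sqrt(2)                       # ≈ 325 V
i_peak_winding = v_peak / R_PRIMARY                 # ≈ 81 A, winding alone
i_peak_circuit = v_peak / (R_PRIMARY + R_EXTERNAL)  # ≈ 65 A with wiring
i_full_load = VA / V_RMS                            # ≈ 2.2 A at full load
ratio = i_peak_circuit / i_full_load                # ≈ 30 times full load
```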
Figure 3A is two captures combined into one, and shows the inrush current waveform captured when power is applied at both the mains zero crossing point and at the peak. The transformer is a single phase, 200VA E-I type, with a primary resistance of 10.5 Ohms. Absolute worst case current is simply the peak value of the mains voltage (325V or 170V), divided by the circuit resistance. This includes the transformer winding, cables, switch resistance, and the effective resistance of the mains feed. The latter is usually less than 1 Ohm, and allowing an extra Ohm for other wiring, this transformer could conceivably draw a peak of about 28A. My inrush tester (see Inrush Current Testing Unit) also has some residual resistance, primarily due to the TRIAC that's used for switching. Although it's bypassed with a relay, there is a time delay before the relay contacts close and this reduces the measured inrush current slightly. Peak switching quite obviously reduces the inrush current dramatically, from a measured 19A down to 4A.

In the above, worst case inrush has been based on the peak value of the AC waveform, and in theory this is correct. However, a more realistic peak inrush current figure is obtained if the RMS voltage is used. When working out something like inrush current, there are many things we don't know that affect the final value, including information about the steel used. Using the RMS voltage will usually give a final value that's closer to measured results. Not especially scientific I know, but for small transformers (up to 1kVA or so) the answer is likely to be closer to reality and not quite as scary.

It is always better to close the switch at the peak of the input AC line voltage. Since the inductor's current is initially zero (as it was before power was applied), switching at the AC peak puts the applied voltage and the inductor's current immediately (very close to) being in quadrature (i.e. at 90° phase displacement) with each other. This minimises the inrush current, as can be seen clearly in Figure 3A. Normally, we don't build mains switches to do this (it's possible, but not simple), so random switching is normal, and is always better than zero-voltage switching that maximises inrush every time the transformer is turned on. Peak switching SSRs (solid state relays) are (or perhaps were) made, but it's unlikely that you'll be able to buy one for a sensible price.

Note that transformer inrush current is unidirectional - all pulses are one polarity until the inrush 'event' has settled and normal operation is attained. This typically takes between 10 and 100 cycles, depending on the transformer. Some very large transformers as used in electrical sub-stations (for example) may take a lot longer to reach normal operation. Although you might expect otherwise, the DC 'event' occurs both with zero-voltage and peak switching.

When the power is connected to a transformer at the very peak of the AC voltage waveform, this is (surprisingly) a much better alternative. Inrush current will usually be quite low, generally less than 1/4 of the worst case value. Without additional relatively complex circuitry, it is not possible to choose when power is applied, so any provision for inrush current must assume the highest possible value - that which is limited only by the winding (and external) resistance.

Note that the following graph shows the capacitive inrush only, and does not include the inrush current caused by the transformer. The reason for this is simple - it is extremely difficult to simulate transformer inrush - as shown in Figures 3A and 3C it is easy to measure though if one has the equipment. The 'ideal' transformer shown doesn't saturate, a real one does! Without a suitable test system it also differs significantly each time power is applied because there is no predictable time within a mains cycle where the power is connected or disconnected. Inrush current may vary from the nominal full load current of the transformer, up to a value limited only by the winding resistance of the primary and external wiring.
+ +++ ++
++
This is a complex area, and is not one that is adequately covered for the most part. The basics of inrush current are generally explained well enough, but the effects when a heavy load is present at the same time are mainly covered in passing only, with the transformer and capacitive inrush most often covered separately. In reality, they are almost always present at the same time, which makes everything far more complex. The effects are easy to measure, but are a great deal harder to simulate or prove with a few maths formulae.
Things become far more complicated when the secondary feeds a rectifier, followed by a large bank of filter capacitors. Worst case inrush current is still limited by the winding (and other) resistances, but the capacitor bank appears to be a short circuit at the output of the transformer. Depending on the size of the capacitors, the apparent short circuit may last for some time. During this period, the transformer will be grossly overloaded, but this is of little consequence. Transformers can withstand huge overloads for a short period with no damage, and they will normally last (almost) forever even when subjected to such abuse many times a day.

The optimum switching point on the mains waveform is at the zero-crossing for a capacitor bank, and this would appear to be in direct conflict with the transformer's requirements for minimum inrush. This can only ever apply if you have a source of ideal transformers, which of course only exist in theory (and simulators). In reality and as seen below in Figure 3C, the transformer inrush is dominant - the 'ideal' point on the AC waveform to apply power is still at the AC mains peak, something you would not expect. Lacking a sensible way to ensure that power is only ever applied at the voltage peak, the use of an inrush mitigation circuit is the only real alternative for transformer-based power supplies. This can be a thermistor (with reservations) or a high power resistor with a bypass circuit. See Project 39 for details of a tried and proven inrush current limiter that is very effective.

Figure 3C is again two oscilloscope captures in one. The yellow trace shows the inrush current (14.5A peak) when the mains is switched at zero, and the blue trace shows the inrush current (8.5A peak) with switching at 90° (peak mains voltage of 325V). The same transformer was used as for the Figure 3A capture, but with a full-wave rectifier (2 diodes), 10,000µF capacitor and a 16 ohm load, with ~38V DC output. It's obvious that peak voltage switching is still preferable, and it shows a much smaller inrush current than zero-voltage switching.
Perhaps unexpectedly, the presence of a load that appears to be close to a short circuit at switch-on actually tames the worst-case inrush current somewhat, and also minimises the unidirectional (DC) effect seen when an unloaded transformer is switched on. Although I don't have a mathematically proven explanation for this, there are two different effects ...

Firstly, the load damps the inductance of the transformer so it no longer behaves like a 'pure' inductance. Consider too that the core is saturated in one direction, so transformer action is impeded. A fully saturated core is not capable of providing magnetic coupling between the windings, so the efficient transfer of energy between primary and secondary can only exist when the core is pulled out of saturation by the AC input voltage. The capacitive load doesn't actually get much charge at all in the first half-cycle.

You can see in the above waveform that in the second half cycle, the current is higher than when the transformer is unloaded. This is because the cap is now charging. The steady state input power of the Figure 3C waveforms measured 120W and the power factor was calculated to be 0.83 - better than expected. Total system losses are about 30W - higher than I expected.

Note that these tests were performed using a 'conventional' E-I lamination transformer. All peak currents will be much higher with an equivalent toroidal, because of reduced winding resistance, better magnetic circuit and the extremely low leakage inductance that is typical of toroidal transformers. However, the general trends seen above will still be apparent.

As you can see, once a capacitor bank is connected to the secondary of a transformer (via a rectifier of course), it doesn't matter a great deal when power is applied. A fairly large inrush current will occur regardless of the exact point on the AC waveform where the switch closes. The previous examples show the possible combinations, and predictably, more capacitance and/or lower winding resistances mean higher peak current. The inrush current settles down quite quickly, and after 100ms it has all but disappeared as you can see from Figure 3C. Much of what remains after 4 cycles is normal load current (about 600mA).
If one transformer on a mains circuit is turned on and has a 'significant' inrush event, other (operating) transformers on the same circuit may saturate as well. This phenomenon is known as 'sympathetic inrush', and the combined effect can be very pronounced. Even transformers that normally don't cause problems can be affected, due to the effective DC component that's superimposed onto the mains. This is clearly visible in Figure 3C, with the majority of the current being unidirectional.
Even if a transformer is fitted with an inrush limiter, this is most likely not in circuit when the next transformer is energised, so a momentary DC offset causes saturation. The current magnitude depends on the inrush current drawn by the second transformer. It's also dependent on the DC resistance of the primary windings - large transformers have lower resistance, and are more susceptible to sympathetic inrush current.
If you have this problem you'll hear the first-powered transformer growl when the second is turned on. The solution is to use an inrush limiter on all equipment (with transformers of 300VA or more) that is powered from the same AC mains feed. That way, the inrush current is limited and there's far less momentary DC offset.
A vast number of small appliances now use what is known as an 'off-line' switchmode power supply (SMPS). This means that the mains voltage is rectified, smoothed (at least to a degree) with an electrolytic capacitor, then the DC is fed to the switching power supply circuitry. This type of power supply is found in everything from compact fluorescent lamps to DVD players, mobile (cell) phone chargers to TV receivers. They have become truly ubiquitous, and are used to run just about all mains powered appliances that need low voltage DC for operation.

Larger power supplies are also very common, used for PCs, some microwave ovens, high power lighting and numerous other tasks. Many of these now use active power factor correction, which makes them far more friendly to the electrical grid than those with no PFC at all. Many do not use PFC of any kind, and these always present a very unfriendly current waveform to the supply grid.

The majority of these power supplies (both with and without PFC) have high inrush current - often far greater than anything we have used before. Even little compact fluorescent lamps (CFLs) and many LED lamps have such a high inrush current that people have been surprised that large numbers of them can't be used on a single switch (or circuit breaker). A typical CFL may be rated at 13W and draw around 95mA (assuming a PF of 0.6). In theory, it should be possible to have over 80 of these lamps on a single 8A lighting circuit, but even with as few as 20, it may be impossible to switch them all on at once without tripping the circuit breaker.
Predictably, the reason is inrush current. Some CFLs and other small power supplies with similar ratings use a fusible resistor (typically around 10 ohms) in series with the mains, both as a (lame) attempt to limit inrush, and as a safety measure (a fusible resistor will act like a fuse if abused - or so we are led to believe). Even with a relatively small capacitor (22µF is not uncommon), the worst case inrush current may be as high as 30A, and that's allowing for wiring impedance.
Clearly, any power supply that draws up to 315 times the normal running current at switch-on is going to cause problems. Standard circuit breakers are rated for peak (inrush) currents of around 6 to 8 times the running current, so on that basis switching on just 2 CFLs at the same time and at the worst moment could theoretically trip an 8A breaker. This normally never happens, because there is enough wiring impedance (both resistance and reactance) to limit the current to a somewhat saner maximum. The fact does remain though that at least in theory, attempting to switch on just two or three CFLs at the same time could trip a standard 8A breaker.
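The CFL figures above can be pulled together in a few lines of arithmetic. This is a sketch using the values quoted in the text (13W lamp, power factor 0.6, 30A worst-case inrush):

```python
V_RMS = 230.0
P_LAMP, PF = 13.0, 0.6

i_run = P_LAMP / (V_RMS * PF)        # ≈ 94 mA running current per lamp
lamps_by_rating = int(8.0 / i_run)   # ≈ 84 lamps on an 8A circuit, in theory

i_inrush = 30.0                      # worst-case peak per lamp (from text)
ratio = i_inrush / i_run             # ≈ 315-320 times the running current
```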
Figure 4 shows the typical rectifier circuit, along with the waveform. For the sake of being a little more realistic, the switch was closed 0.5ms after the zero-crossing point of the AC waveform, when the voltage has only risen to 51V. As you can see, the peak is still just under 11A, and is over 100 times greater than the RMS operating current. Note that if power is applied at the voltage peak and the capacitor and wiring were perfect (no internal resistance at all) the current would be equal to 32.5A as dictated by Ohm's law (325V peak / 10 ohms).
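The 0.5ms switching point is easy to verify - the instantaneous voltage of a sinewave at time t is VPeak × sin(2πft). A quick check of the Figure 4 numbers:

```python
import math

V_RMS, F = 230.0, 50.0
t = 0.5e-3                                     # 0.5 ms after the zero crossing

v_peak = V_RMS * math.sqrt(2)                  # ≈ 325 V
v_t = v_peak * math.sin(2 * math.pi * F * t)   # ≈ 51 V at the switching instant
i_worst = v_peak / 10.0                        # 32.5 A worst case into the
                                               # 10 ohm fusible resistor
```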
+ +When a manufacturer has gone to all the trouble of including active power factor correction, you might expect that the inrush current will be minimal because there is no large capacitor following the rectifier (see Figure 5). Unfortunately, the PFC circuitry generally will not start until there is a reasonable charge in the bulk capacitor. This issue is addressed by the diode (D6) as shown, and it conducts fully when power is first applied - this diode is always included, but here it serves a dual purpose. Sometimes there may be another diode in parallel with L1. C1 is the filter cap for the PFC controller, and is coupled via D5 to prevent it from being discharged when the AC waveform falls towards zero volts.
+ + + +The switch and inductor form a high frequency switched boost regulator, and the DC output is usually around 400V. The inductor has almost no effect at DC (or 100/120Hz) though, so the bulk (storage) capacitor C2 is charged directly from the mains, via L1 and D1. It is only after the MOSFET (Q1) starts switching at high speed that L1 starts to function normally - at low frequencies (100 or 120Hz) it does nothing at all. It is well beyond the scope of this article to explain switching boost regulators in any further detail, but suffice to say that this is a very common arrangement.
+ +The value of C2 depends on a number of factors, but for even a small power supply of perhaps 150W or so, C2 will be around 150µF. Most manufacturers will use a negative temperature coefficient (NTC) thermistor to limit the inrush current, but it's not at all uncommon for them to get the value horribly wrong. One that I recently came across used a 4 ohm thermistor - completely useless, and I was able to measure 80A inrush peaks easily.
+ +This type of power supply behaves very differently from what we expect with normal linear loads. The operating mains current depends on the voltage, and if voltage increases the current decreases! This is not expected unless you are used to working with switchmode power supplies (SMPS). As a result, a 100W supply will draw 435mA at 230V, and 830mA at 120V when operating at maximum input power (output power will typically be around 10% less than input power due to circuit inefficiencies).
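The constant-power behaviour described above can be captured in one line of arithmetic: input current is input power divided by voltage (and power factor), so current rises as voltage falls. A minimal sketch using the 100W figures from the text:

```python
def mains_current(p_in, v_rms, power_factor=1.0):
    """RMS input current for a constant-power (switchmode) load.

    Unlike a resistive load, the current RISES as the mains voltage
    falls, because the supply regulates its output, not its input.
    """
    return p_in / (v_rms * power_factor)

# 100 W input power, unity power factor assumed
print(f"{mains_current(100, 230) * 1000:.0f} mA at 230 V")   # ~435 mA
print(f"{mains_current(100, 120) * 1000:.0f} mA at 120 V")   # ~833 mA
```

This is the opposite of a resistor, where halving the voltage halves the current; with an SMPS, halving the voltage roughly doubles it.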
+ +Some of the recent switchmode supplies I've seen have active inrush limiting - an electronic soft-start built into the power supply. There are several ways this can be done, and some basic ideas are shown below in section 7. If done well, inrush current can be almost completely eliminated, and the mains current gently ramps up to the full load value with no evidence of a current 'surge'.
+ +There is an expectation now that everything should work anywhere in the world without change, so universal power supplies (90 - 260V AC, 50/60Hz input range) are common. It is unfortunate that this may make the circuitry to reduce inrush current far more of a compromise than would otherwise be the case.
+ +While thermistors are cheap and effective for inrush current suppression, they have a number of serious limitations, as discussed in the next section.
+ + +One simple choice for reducing inrush current to an acceptable value is to use a resistive component. This needs to present sufficient impedance at switch-on to prevent potentially damaging current surges, but must not waste power needlessly during normal operation. The amount of current drawn during the first few milliseconds should ideally be no more than perhaps double the normal running current, but some switchmode power supplies will refuse to start if the voltage fails to rise above a preset lower limit within a specific time period.
+ +There are all kinds of reasons that may limit the range of choices for the start-up current, but most are limitations (either deliberate or otherwise) within the design of the power supply. Very simple supplies will try to start working as soon as any voltage is present, but may be completely unable to operate even after the limiting resistance is out of circuit if the inrush protection is not designed correctly.
+ +Other more sophisticated designs will use protective circuits that prevent the power supply from operating if the input voltage fails to reach a preset minimum, and/or does not rise quickly enough. In such cases, it may be necessary to accept a higher than desirable inrush current. Things become more complicated when equipment is "universal" - having a power supply range of 90-260V AC at 50 or 60Hz.
+ +An inrush limiter that works perfectly at 230V may prevent the supply from starting at 120V, but if set for 120V operation the inrush current at 230V (or above) becomes excessive. Ideally, this should signal that the power supply itself requires a redesign, but that may not be possible if the PFC integrated circuit used has limitations of its own.
+ +Some of the latest switchmode power supplies use an active inrush limiting scheme, and I have seen several examples where there is no inrush current at all. The input current (relatively) slowly increases from zero up to full operating current, with the input current never exceeding the maximum loaded input current for the power supply. Active inrush limiting has only been seen so far on power supplies that also have active power factor correction, and the additional complexity is necessary to prevent start-up problems. One area where this is becoming common is LED lighting, where many lamps are likely to be wired into a single circuit.
+ + +NTC (negative temperature coefficient) thermistors (aka surge limiters) are a common way to reduce inrush. They are readily available from many manufacturers and suppliers, and are well established in this role.
+ +There is a very wide choice of values and power ratings, and a thermistor is just a single component. Nothing else is needed ... at least in theory. Indeed, manufacturers make a point of explaining that their thermistor is the most economical choice, and that additional parts are not required. They may (or more likely may not) point out the many deficiencies of this simple approach.
+ +Thermistors range in value from less than 1 ohm to over 200 ohms and have surge current ratings from around 1A up to 50A or more. It is the designer's job to pick the thermistor that limits the inrush current to an acceptable value, while ensuring that the power supply starts normally and the thermistor resistance falls to a sufficiently low value to minimise losses.
+ +It is useful to look at the abridged specification for what might be considered a fairly typical NTC thermistor suitable for a power supply of around 150-300W depending on supply voltage (From Ametherm Inc. [3]).
+ +Resistance                            10 ohms ±25%
Max Steady State Current up to 25°C     2 A
Max Recommended Energy                  10 J
Actual Energy Failure                   30 J
Max Capacitance at 120V AC              700 µF
Max Capacitance at 240V AC              135 µF
Resistance at 100% Current              0.34 ohm
Resistance at 50% Current               0.6 ohm
Body Temperature at Maximum Current     124°C
It is important to note that the resistance tolerance is very broad - this is common with all thermistors. Expecting close tolerance parts is not an option. The maximum capacitance values shown are for a traditional capacitor input filter following a bridge rectifier. Direct connection to mains is assumed. At rated current, the resistance is 0.34 ohm, so power dissipated is 1.36W which doesn't sound like much, but note the body temperature - 124°C. I would suggest that optimum operation is at 1A, where dissipation is only 0.6W and the temperature will be somewhat lower.
+ +The good part is that the surge energy is specified - in the above case it's 10 Joules. This means that it can withstand 10W for one second, or 100W for 100ms. It can also theoretically handle 1kW for 10ms or 10kW for 1ms, and unless stated otherwise this should not cause failure. Although there is some butt-covering with the maximum capacitance specification, this is largely a guide for the end-user. Based on this I'd suggest that 1kW for 10ms would probably be quite alright, as it's still only 10 Joules. Be warned though - there are probably as many ways of specifying thermistors as there are manufacturers, and not all provide information in a user friendly manner.
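The maximum capacitance figures in the table are consistent with the 10 Joule energy rating: charging a capacitor through a series resistance dissipates roughly the same energy in the resistance as ends up stored in the capacitor (½CV², taken at the mains peak for the worst case). A quick check:

```python
import math

def surge_energy(c_farads, v_rms):
    """Approximate energy (J) the thermistor absorbs while charging C.

    Worst case is switch-on near the waveform peak; the energy dumped
    into the series resistance roughly equals the 0.5*C*Vpeak^2 that
    ends up stored in the capacitor.
    """
    v_peak = v_rms * math.sqrt(2)
    return 0.5 * c_farads * v_peak ** 2

# Figures from the Ametherm table above
print(f"{surge_energy(700e-6, 120):.1f} J at 120 V")   # ~10.1 J
print(f"{surge_energy(135e-6, 240):.1f} J at 240 V")   # ~7.8 J
```

Both values land at or comfortably under the 10 J 'max recommended energy' rating, which is almost certainly where the capacitance limits came from in the first place.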
+ +While it is a fairly common suggestion (and used by some people), thermistors by themselves are completely useless in any equipment that draws a widely varying current during normal operation. Power amplifiers are a case in point - certainly the transformer and filter caps will cause a high surge current when the amp is switched on, but at low listening levels the thermistor has so little current through it that its resistance will be much higher than it should be. This can lead to power supply voltage modulation, and while that might lower the output transistor dissipation slightly, the thermistor is undergoing consistent stress - heating and cooling constantly whenever the amp is operating.
+ +Thermistors should only be used by themselves where the protected equipment draws a relatively constant power after it has settled down after power is first applied. While very convenient, NTC thermistors have a number of limitations.
+ +They dissipate power constantly while equipment is operating, and normally operate at a relatively high temperature (~125°C for the example shown in the table). This means that they must be kept well clear of temperature sensitive parts (semiconductors, capacitors, etc.). Because they run hot, they add to the heat load inside enclosures and lower the overall efficiency of the product.
+ +Because thermistors normally run hot for minimum resistance, they must have time to cool down between power being removed and restored. This cannot be guaranteed, because momentary power outages are fairly common worldwide. If the power is off for only a couple of seconds, the thermistor will not have had time to cool, and there is almost no inrush protection when the power is restored. Most NTC makers suggest that a cool-down period of 30 seconds to a couple of minutes is needed, depending on the size of the thermistor, surrounding air temperature, etc.
+ +The use of thermistors is fine, but only if there is a bypass circuit that shorts them out after 150ms or so, and this is my recommendation for any audio equipment.
+ + +Thermistor makers like to point out that using an NTC thermistor is so much better than a resistor, because they are physically smaller for the same energy absorption. While this is certainly true, they are fairly wide tolerance devices and unsuited where the application may be subject to strict specifications. The best you can hope for is ±10%, available from some suppliers for some of the range.
+ +Resistors (which will be wire-wound for this application) are a very viable alternative, but they must have some method of bypassing once the surge has passed and the circuit is operational. The alternatives for this are described below.
+ +Resistor selection must be made on the basis of the maximum permissible current, but this is usually an unspecified value. To an extent, experienced engineers can estimate the allowable maximum for reliable operation over an extended period, but this is always a variable and may change if the resistor supplier changes the design.
+ +Some wire wound resistors are capable of astonishing surge currents, while others of apparently equivalent size and value will be destroyed instantly the first time they are used. Nevertheless, resistors remain a commonly used and extremely reliable means of protecting against inrush current. If properly sized and perhaps used in parallel to obtain the power and value needed, there is no reason that an inrush protector using resistors cannot outlast the equipment it protects.
+ + +As noted for resistors, a bypass scheme must be used to remove the series resistance from the circuit after the surge current has passed. The humble relay is a popular choice, because they are extremely reliable and are available for almost any application known. The voltage across relay contacts is negligible when they are closed, so contact power loss is close to zero. There is a small current needed for the relay coil though, but for equipment rated at less than 1kW the relay coil should consume no more than about 1W.
+ +Another alternative is a so-called 'solid-state relay' (SSR). These are usually more sensitive than traditional relays (less energising power is needed), but they dissipate some power across the TRIAC or SCR switching component (typically around 1-2W for each amp of continuous current). Cost is usually significantly higher than traditional relays, but they are used in some cases because they are often seen as being more convenient.
+ +It is also possible to make a solid state relay using a TRIAC or SCR directly controlled by a suitable opto-coupler. This is what's inside a 'real' SSR anyway, but making it from discrete parts gives much greater flexibility. The general bypass schemes used are shown below, but other alternatives are possible.
+ +Many of the main complaints against NTC thermistors are completely eliminated if the thermistor is bypassed shortly after power is applied. The thermistor gets to do its job, and they are fully specified for the instantaneous power dissipation (unlike resistors). Once the circuit is operating normally, the relay shorts out the thermistor, so it is allowed to cool and adds no heat into the enclosure. This means that it is ready immediately after power is removed - no cooling time is needed at all.
+ +It is very important that the relay (or other device) removes the short from the thermistor or resistor very quickly after power is removed. If not, a momentary power outage will cause all equipment to draw a very large surge current when power resumes. The bypass circuit ideally needs to disconnect within a few milliseconds, and certainly well before the power supply 'hold-up' time expires.
+ +Note: Many power supplies are designed to continue functioning and providing output for up to 500ms or so after mains power is removed. This is intended to guard against data loss (for example) during a momentary power outage. General purpose supplies may function properly for only a few missing cycles before the regulated DC voltages start to sag. Hold-up time also depends on the load - a lightly loaded supply will maintain voltage for much longer than one operating at maximum output current.
With a proper bypass arrangement, resistors and thermistors are both equally suitable for circuit protection from inrush current surges. Thermistors have an advantage in that they will fall to a low resistance state even if the bypass system fails to operate, so if there is a fault they will not usually be subjected to massive power dissipation and possibly destroyed.
+ +Resistors do not have this fail-safe advantage, so it may be necessary to add a thermal fuse to protect against fire. Consider a 10 ohm resistor effectively connected directly across the 230V mains. If the bypass relay doesn't work, power dissipation may be as much as 5kW. Current will be close to 23A, so the fuse (if fitted !) should blow, but the resistor may fail first. Higher resistance values are worse - the current is not high enough to cause the fuse to blow straight away, but the resistors will get exceptionally hot and may set the PCB on fire. I generally suggest a soft-start resistance of around 33 ohms in series with the power supply. This is typically in-circuit for about 100ms, after which it is bypassed by a relay (see project 39 for an example).
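The failure-mode numbers above follow directly from Ohm's law, and it is worth tabulating a few resistance values to see why higher resistances are actually more dangerous (the 10 and 33 ohm values are from the text; 100 ohms is added for illustration):

```python
V_RMS = 230.0   # mains voltage across the un-bypassed resistor

def fault_dissipation(r_ohms):
    """Current (A) and power (W) in a soft-start resistor if the
    bypass relay fails to close."""
    return V_RMS / r_ohms, V_RMS ** 2 / r_ohms

for r in (10.0, 33.0, 100.0):
    i, p = fault_dissipation(r)
    print(f"{r:5.0f} ohm: {i:5.1f} A continuous, {p / 1000:5.2f} kW")
```

At 10 ohms the ~23A should blow the fuse quickly; at 33 or 100 ohms the current may never blow the fuse, yet the resistor is still dissipating hundreds of watts to kilowatts - more than enough to start a fire, hence the thermal fuse suggestion.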
+ + +Electronic power supplies are becoming more common every day, but a great many have extraordinarily high inrush current. In all cases the peak input current at switch-on is created as the main filter capacitor charges. There might be an NTC thermistor or current limiting resistor in the circuit, but neither is particularly useful at maintaining the peak current to a manageable value. This is not generally a problem where the appliance is a one-off, such as an amplifier, DVD player or even a PC, because it's not normal to have a very large number of devices on the same circuit.
+ +With lighting (CFL, fluorescent tube with electronic ballast or LED) it's a very different matter. For example, a 50W ceiling lamp is expected to draw around 220mA at 230V. This assumes a unity power factor, but the actual current may be up to 440mA (power factor of 0.5). It's unlikely that the power factor will be taken into account, so based on the rated power and the common use of a 16A circuit in commercial premises, an electrician could easily be fooled into thinking that you could safely have maybe 50 (or more) fittings on a single circuit (a total of 2,500W, drawing just under 11A). However, unless all the lamps have a very effective inrush limiter and power factor correction, the peak current when turned on will trip the circuit breaker every time someone tries to turn on the lights. Without power factor correction, the total current may be as high as 22A - the breaker will trip due to continuous overload. Where the power supply is rated at more than perhaps 25W, some form of active inrush protection system is essential.
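The trap described above is easy to quantify. Using the figures from the text (50W fittings, 230V, a 16A commercial circuit), the circuit looks fine at unity power factor and is continuously overloaded at a power factor of 0.5 - before inrush is even considered:

```python
LAMP_W, V_MAINS, BREAKER_A = 50.0, 230.0, 16.0
N_FITTINGS = 50

def circuit_current(n, power_factor):
    """Total steady-state RMS current for n identical lamp fittings."""
    return n * LAMP_W / (V_MAINS * power_factor)

for pf in (1.0, 0.5):
    i = circuit_current(N_FITTINGS, pf)
    status = "OK" if i <= BREAKER_A else "OVERLOAD"
    print(f"PF {pf}: 50 fittings draw {i:.1f} A - {status}")
```

The same 2,500W of lamps draws just under 11A at unity power factor but nearly 22A at a power factor of 0.5, which is why ignoring power factor when loading a lighting circuit is so dangerous.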
+ +We need to examine the worst-case inrush current, and then figure out how it can be limited to a safe value. 'Safe' in this context means that the circuit breaker won't trip when lights are turned on, only when there is a fault. In general, it should be possible to ensure that inrush current is no more than 4-10 times the nominal operating current, with the inrush duration limited to a single half cycle (10ms at 50Hz, 8.3ms at 60Hz). This keeps the inrush current below the trip threshold for most typical breakers. It will not be possible to load the circuit to its maximum though - the maximum operating load might be as low as half the circuit breaker's current rating.
+ +The risetime of the mains (commonly called dV/dt - delta voltage/ delta time, ΔV/Δt) depends on how the mains is switched. Normal mains switches of all kinds create extremely fast risetimes, but the dV/dt may be tamed somewhat by the building wiring. At (or near) the zero-voltage point, the dV/dt is only about 100mV/µs, but if switched anywhere else during a half-cycle, the dV/dt can easily be several hundred volts/µs.
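The ~100mV/µs figure at the zero crossing follows directly from differentiating the mains sine wave - its slope is V_peak × ω at the zero crossing and falls off as the cosine of the phase angle. (Switching mid-cycle is different: the applied step's risetime is then set by the switch and wiring, not by the sine itself.) A quick check:

```python
import math

V_RMS, FREQ = 230.0, 50.0
V_PEAK = V_RMS * math.sqrt(2)
OMEGA = 2 * math.pi * FREQ

def sine_slope(phase_deg):
    """Slope of the mains sine itself at a given phase angle (V/s)."""
    return V_PEAK * OMEGA * math.cos(math.radians(phase_deg))

print(f"Zero crossing: {sine_slope(0) / 1e6:.3f} V/us")   # ~0.102 V/us
print(f"45 degrees:    {sine_slope(45) / 1e6:.3f} V/us")
```

For 230V 50Hz mains this gives about 0.102 V/µs at the zero crossing, matching the ~100mV/µs quoted above.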
+ +The magnitude of the impulse depends on the exact time between a zero-crossing and the switching point. Worst case is at 90° after zero-crossing, where the mains is at its peak voltage. At other phase angles, the risetime doesn't change, but the amplitude of the transient is lower.
+ +Figure 7 is very similar to the circuit shown in Figure 4, but is a new circuit for this specific test. Note that the load shown will normally be a DC/DC converter that powers the circuitry and this applies to all the following diagrams.
+ +The load is 50W, with a 230V supply, and the 10 ohm input resistor is a lumped component that includes the mains impedance, diode forward resistance, capacitor ESR and any current limiting resistance (or thermistor) that may be fitted. 10 ohms is not an unreasonable figure, and even if that were used as a physical component its dissipation would be about 1.7W with the normal distorted current waveform created by the diode bridge and filter capacitor.
+ +If power is applied at the zero-crossing of the AC waveform (zero volts, green trace), the peak current is a passably friendly 8A, compared to the RMS operating current of 415mA for 50W output. Remember that this power supply example does not include power factor correction so current is higher than expected. So, inrush current will be about 20 times the operating current - not wonderful, but it might be acceptable. The magnitude of the inrush current is almost directly proportional to the capacitance, which in turn is determined by the output power. For example, a 50W supply will typically use a 100µF capacitor while a 100W supply will need 220µF (and so on). The value used also depends on the supply voltage, with more capacitance being needed for 120V operation than 230V.
+ +While zero-voltage switching does cause a significant inrush current, things rapidly become serious when the mains happens to be switched at the very peak of the AC waveform (red trace).
+ +Inrush current is now over 30A, just because the switch was closed at the AC waveform peak rather than the zero-crossing. In use, the current will always be somewhere between the two currents measured, depending on the exact moment the switch is closed. 30A is over 72 times the operating current, and as few as 5 loads using this power supply switched on at once will cause intermittent circuit breaker tripping. Should the series resistance be less than the 10 ohms shown, then the peak current will be proportionally greater - up to 100A is not out of the question! Larger capacitance values cause the inrush event to last longer, but do not increase the magnitude of the current because that's limited by the series resistance.
+ +There is a hint in the above as to one method of limiting the inrush current - arrange for some electronic switching to ensure that the power supply is not connected to the mains unless the voltage is close to zero. Zero-voltage switching is still not ideal, but is far better than random.
+ + +An easy way to ensure zero voltage switching is to use a 'solid state relay' (aka SSR) [8]. Many of the common SSRs are already designed for zero-crossing switching, and they do not activate unless the voltage across the relay is below around 30 volts or so. Because of this, they are completely unsuited for use with transformers, because transformer inrush current is at its very worst if the power is applied at zero volts. Never use a zero crossing SSR with transformers!
+ +It's relatively simple to incorporate an SSR (either packaged or discrete) into an electronic power supply, and if done properly this will ensure that the inrush current is limited to around 20 times the normal operating current, but this is still a significant inrush event and limits the number of appliances using the power supply on a single circuit. There are other issues when using any form of SSR as a switch for electronic power supplies, which may make this technique more difficult to implement than it might seem at first. The main problem is that SCRs and TRIACs don't conduct at all unless there is enough current, and this can cause continual spike currents to be generated because the switching is so fast. This is similar to the problem seen when CFLs are dimmed using a standard TRIAC dimmer (see CFLs - Dimming for more info and waveforms) [7]. Provided a TRIAC or SSR is supplied with a continuous gate current after it's first triggered, there should be no major issues.
+ +Zero voltage switching is easily accomplished using discrete parts, such as the MOC3043 zero-voltage switching optocoupler and a suitable TRIAC (as shown above). No special circuitry is needed, because the MOC3043 has the sensing and switching circuits built-in. While this technique can (and does) work, it carries a risk for any power supply that only draws current at the peak of the AC waveform. The zero-voltage sense circuit will try to turn the TRIAC on, but nothing will happen because there's no current drawn until the input voltage exceeds the stored voltage in the capacitor.
+ +This means that the circuit might not work properly, and the same applies to a SSR that incorporates zero voltage switching. Ideally, such an arrangement should be bypassed once the power supply inrush event is over. This adds even more complexity, and it's not really very effective anyway. Trying to find useful info on this method isn't easy, because there's not a great deal available on the Net.
+ +As with the following MOSFET circuit, the risetime of the voltage waveform (dV/dt) must not be so fast that it causes the TRIAC or SCR(s) to conduct (static dV/dt). The mains EMI filter needs to be designed to keep the risetime below the critical limit. This can range from as low as 50V/µs up to several hundred volts/µs, depending on the device. As always, it is better to err on the safe side, and it's not that difficult to limit the risetime to around 50V/µs. This will probably happen automatically simply due to the distributed capacitance, resistance and inductance of the mains wiring. The TRIAC used must have a static dV/dt rating that's greater than the actual dV/dt so it doesn't self-trigger. A resistor should always be used between the gate and T1 (aka MT1) to maximise the static dV/dt performance (this resistor may be included in some TRIAC packages).
+ +A note of caution: a TRIAC presents some 'interesting' challenges when used with electronic loads. A TRIAC will stop conducting when the current falls below the holding current for the device used. Likewise, it cannot start to conduct unless there's enough current (called latching current) to ensure reliable commutation from the 'off' to the 'on' state. This means that the TRIAC has to be selected with great care, and tested thoroughly with the load. Very high (but short duration) pulse currents will be created on each half-cycle if the TRIAC and load are not matched perfectly.
Ideally, the TRIAC will be bypassed with an electromechanical relay to prevent any issues with TRIAC conduction. One thing that is very obvious is that the exercise is not trivial. While it might be imagined that you could just give up and use an NTC thermistor, it should be very clear by now that this is rarely a workable solution in real life. Apart from anything else, there will always be a significant amount of excess heat within the enclosure that must be disposed of, and this alone can be a daunting prospect for a compact power supply.
+ + +There are several schemes to use MOSFETs as the current limiter. These can be used in linear or switched modes, and there are quite a few variations on the theme. Linear mode is the easiest to implement, but the MOSFET has very high dissipation for the first few half-cycles. Switching mode causes much lower dissipation in the soft-start MOSFET, but requires more circuitry. As power supply design becomes more sophisticated with dedicated ICs, the added complexity isn't as great as it might have been just a few years ago. However, it's still not as straightforward as we might hope.
+ +The greatest advantage of using a MOSFET is that the start-up inrush current can be made to be no greater than the normal operating current, so there is effectively no inrush current at all. The current waveform simply increases smoothly over a few cycles then settles at the running current with no high current peaks. Look at Figure 11 as an example.
+ +The biggest problem with the linear scheme is that peak power dissipation in the MOSFET can easily reach several hundred Watts. While it's not difficult to get rid of the heat (it only lasts for about 200 milliseconds or less), the stress on the MOSFET may be high, which may lead to premature failure. However, it is still fairly easy to ensure that the MOSFET remains within its safe operating area (SOA), and it is by far the easiest scheme to implement. The arrangement shown above reaches a peak dissipation of about 120W, and the average over the 170ms turn-on period is under 30W. This is not at all stressful, and as seen below, inrush current is all but eliminated.
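The thermal figures quoted can be collapsed into a single-shot energy number, which is the quantity to compare against the MOSFET's transient thermal and SOA data:

```python
# Figures quoted in the text for the linear soft-start example
P_PEAK = 120.0    # peak MOSFET dissipation (W)
P_AVG = 30.0      # average dissipation over the turn-on period (W)
T_START = 0.170   # soft-start duration (s)

energy = P_AVG * T_START
print(f"Energy dissipated during turn-on: ~{energy:.1f} J")   # ~5 J
```

Around 5 Joules spread over 170ms is a modest ask for any MOSFET in a TO-220 or larger package, which is why the linear scheme, despite its alarming peak dissipation, is not especially stressful when the timing is controlled.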
+ +The very narrow spike just after switch-on is caused by the EMI filter's input capacitor. While the peak current can be rather high (8-10A is not uncommon), it only lasts for a few microseconds. The same thing is visible in the oscilloscope capture shown in Figure 12. X-Class caps are supposedly capable of withstanding this surge easily, but I have seen some that have degraded in use and show less capacitance than the marked value (allowing for tolerance). It's not known if the degradation was due to switch-on current surges or a 'dirty' mains supply, as the affected units were from an industrial complex.
+ +There is one point that is extremely important, but is also likely to be unexpected. When the power is applied via a switch, the dV/dt (rate-of-change of voltage vs. time, aka ΔV/Δt) is extremely high. The input filter and MOSFET drive circuit must be configured so that the drain-to-gate capacitance of the MOSFET doesn't cause spontaneous conduction. This generally means that at least two mechanisms must be in place so the MOSFET is never forced into unexpected conduction because of the extremely fast voltage rise when the switch is closed.
+ +The first line of defence is to limit the maximum risetime of the applied voltage, and the second is to ensure the gate has a low impedance path to the source (via a large capacitor for example) so the instantaneous current that flows in the drain-gate capacitance cannot raise the gate above the conduction threshold. Parasitic inductance must be kept to an absolute minimum, and the capacitor must be located as close to the gate and source pins as possible. While it's probably not well known that MOSFETs will switch on due to high dV/dt between the drain and source, it's very real - it can happen even when the gate is connected to the source via any impedance! [9].
+ +Any capacitively coupled energy is absorbed by C3 in Figure 9, which is very large compared to the drain-gate capacitance. It will easily absorb any current spike without the voltage changing appreciably. Local inductance between the gate and source must be kept very low indeed, or problems may still occur. This means very short tracks on the PCB from the MOSFET to the capacitor. If the voltage risetime is fast enough, nothing can prevent spontaneous conduction, but 'real-life' circuits will never be able to reach the ΔV/Δt required to overcome a suitably low gate impedance.
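A rough charge-division estimate shows why a large gate-source capacitor works. All component values here are illustrative assumptions (the text gives none): a typical Miller capacitance of ~100pF and a 100nF gate-source capacitor are assumed.

```python
# Illustrative values only - none of these are given in the text
C_DG = 100e-12    # assumed drain-gate (Miller) capacitance, farads
C3 = 100e-9       # assumed external gate-source capacitor, farads
V_STEP = 325.0    # worst case: mains switched at the 230 V peak

# Charge injected through the Miller capacitance by the voltage step...
q = C_DG * V_STEP
# ...divides onto the much larger gate-source capacitance
v_gate = q / (C3 + C_DG)
print(f"Gate lift: {v_gate * 1000:.0f} mV")
```

Even an instantaneous 325V step lifts the gate only ~325mV with these values, well below a typical 3-4V threshold - but only if the stray inductance between gate, source and capacitor is low enough that the capacitor actually sees the injected charge.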
+ +The circuitry needed for a proper switched MOSFET inrush limiter is relatively complex, but easily within the capability of a fairly straightforward IC. One may already exist, but if so the details are not available (at least nothing that I could find). The requirement for avoiding spontaneous conduction due to high dV/dt is just as important with a high-speed switching limiter as with a linear version. The mains current waveform during the inrush period is similar to that shown above. The PWM controller starts off with narrow pulses and increases the pulse width over a period of perhaps 100ms, after which it applies a continuous gate current for Q1. Once the inrush period has elapsed, Q1 remains fully on, so losses are minimal.
+ +Figure 12 shows the measured inrush current for a LED lighting power supply fitted with active current limiting. As you can see, it is very effective, and inrush current is virtually non-existent. The very short spike is the point where power was applied, and is simply the EMI filtering capacitor (typically 100nF, X2 Class) charging. This spike is high current (~8A) but is so short that it will never cause a problem. Power was applied at the peak of the input waveform (~325V for nominal 230V mains), using a purpose-built tester that I designed and built [10]. It allows me to select zero-crossing or 90° (peak) switching. The start-up waveform doesn't change, but the sharp spike disappears when the mains is switched at the zero-crossing.
+ +The above image is a direct capture from a digital oscilloscope, and has not been altered other than resizing and cropping. As you can see, the technique is 100% effective. The inrush current is suppressed so effectively that the worst-case peak current is only fractionally more than the running current. This is such an important part of electronic power supply design that it can be expected to become standard for any application where large numbers of supplies may be turned on simultaneously.
+ + +We have discovered above that the dV/dt of the switched mains can cause problems both for MOSFETs and TRIACs (spontaneous conduction), and that even the EMI filter can create a large current spike. If designers were to use a zero-crossing SSR or TRIAC, coupled with a MOSFET based 'true' soft-start circuit, it becomes fairly easy to ensure that there can never be a current impulse at the moment of switch-on. It would no longer make any difference if the mains were switched on at the peak or anywhere else on the waveform, because a zero-crossing SSR always applies current only when the voltage is close to zero, and the MOSFET ramps up the current in a controlled and entirely predictable manner.
+ +This approach would provide the best possible result, with no components being subjected to high impulse current. It's even possible to use this arrangement with transformers, because although zero voltage causes the worst case inrush current, the active MOSFET circuit can provide a smooth voltage increase so saturation effects can be eliminated completely. However, to do so means that we must include the MOSFET soft-starter circuit within a bridge rectifier so that it works with AC. This is a design exercise in itself, and still cannot address all issues.
+ +This idea isn't as 'over-the-top' as it might seem at first. There will never be a high dV/dt waveform applied to the MOSFET circuit, and this simplifies the design. For a manufacturer there is only a small additional cost, and it can be based on a dedicated controller IC that does all the hard work. Even the simple act of switching on a large bank of lights (for example) places stresses on the switch contacts. If the switching is done electronically using a zero-volt switching relay as shown above, small-gauge wiring can be used from a master controller to the switch and the switch only handles low current. For high-current loads, be aware that the TRIAC will dissipate about 1W/ amp of load current, and it may be advisable to bypass it with a relay if the load current is more than a couple of amps.
+ +As noted above (Zero Voltage Switching) a TRIAC is often sub-optimal for switching electronic loads. Great care is needed to ensure that the load appears to be sufficiently 'resistive' to prevent potentially very high current spikes on the input current waveform. This will lead to the early demise of the filter capacitor if it's not addressed carefully. TRIACs and electronic loads are often mutually exclusive! The alternative is to include an electromechanical relay in parallel with the TRIAC (not just for high-current loads!), but this involves additional cost. It does work though - I built a test-set that lets me switch at the zero-crossing or peak, and that uses a TRIAC with a paralleled relay.
+ +If incorporated into individual power supplies, the TRIAC and MOSFET can be comparatively low power, and inrush current is kept to an absolute minimum. Somehow I doubt that extremely cost-sensitive manufacturing would ever consider such an approach though, because it would inevitably add cost to the power supply. Unfortunately, there are suppliers, distributors and manufacturers who don't even know that inrush current is a problem. They certainly won't change anything to fix an issue they either don't know exists, or choose to ignore in the hope that no-one notices.
+ + +For a great many small appliances, inrush protection should be mandatory, simply because overall power consumption is falling, so people think (not unreasonably, I might add) they can use more appliances on the same circuit than was previously possible. As this article has shown, you may actually end up with far fewer than you might imagine. As explained here, getting a good inrush suppression system is actually quite difficult, and involves aspects of electronic devices that people generally don't think of because they are so obscure - especially dV/dt.
+ +Limited ability to connect multiple devices at once is especially troublesome with lighting - the inrush current is often much higher for most modern 'equivalents' to traditional incandescent lamps or fluorescent tubes, regardless of whether the replacements use CFL or LED technology. However, it must be noted that incandescent lamps have a significant inrush current too! For the first half-cycle, it will (typically) be around 12 times the normal operating current. The iron cored transformer of old for halogen lighting has given way to an electronic equivalent, which may be more efficient, but will never last as long. Indeed, a great many of the small efficiency gains that are mandated upon us by government decree can vanish in an instant if the 'new, improved' replacement device fails prematurely. Yes, I know this is a separate topic, but it's important here too. For what it's worth, the inrush current for 'electronic transformers' is generally fairly modest, and is almost entirely due to the lamp itself.
+ +Until manufacturers strive to minimise surge (inrush) current, installers will continue to have issues - especially if there is no inrush information provided in any of the documentation. It's actually very uncommon for any manufacturer to provide this info, even though it can (and does) cause some fairly major headaches when an installer gets caught out. The problem affects everyone - the manufacturer and/or distributor gets a bad name, installers have to change their wiring, and customers are inconvenienced. OEM power supplies generally do provide inrush information, but this usually doesn't get through to the 'user manual'.
+ +At the very least, details of inrush current should be provided with documentation. Installers need to know how many 'things' can be connected to a single circuit breaker, even if they are grouped into individually switched banks. This applies especially for lighting equipment, because the lower power demands of many modern lights can easily mislead people into thinking that large numbers of lamps/ luminaires can be used on a single circuit breaker. While there might not be a problem if lights are switched on in some kind of sequence, restoration of service after a power outage will cause all lights to try to come on at once.
+ +The circuit breaker may trip every time someone tries to reset it, until some of the individual banks are turned off with their individual switches. This is untenable in the workplace - unless there are regular power failures so everyone learns and knows the proper sequence, it could easily take some time before anyone figures out what needs to be done. Momentary interruptions will simply trip the circuit breaker when power resumes, and this is quite unacceptable. Doubly so because someone will try to reset the breaker over and over again, until by chance it manages to stay on. I know of two installations where this has occurred, exactly as described. The only fix was to rewire some of the lights onto an additional circuit breaker, and to change the circuit breakers to delayed action (D-Curve) types.
+ +All manufacturers of appliances that use switchmode power supplies need to provide useful information to installers or users, so that people know that these new products may behave differently from what might be expected. Even traditionally resistive loads like electric stoves are sometimes using an SMPS to power induction cooking systems, so there are very few things that are not affected. Even electric hot water systems are now available using a heat-pump (an air conditioner in reverse), and this will also have a significant inrush current.
+ +Imagine the load on the electricity grid if there is a short service interruption, and tens of thousands of high inrush current appliances all try to come back on simultaneously. As we've seen above, a 50:1 ratio is not uncommon, so if a fully loaded electrical substation suffers a momentary break in supply, how can it possibly cope with a 50 times overload when it tries to come back on-line?
+ +The answer, of course, is that it probably can't, unless the total load is significantly less than the rated capacity of the substation. Likewise, possibly hundreds of switchboard circuit breakers that are close to their limit will drop out. All this because no-one will tell installers and users that these items draw 20, 40 or 50 times the normal current when power is applied. In fact, it may only be because hundreds of switchboard breakers trip that the substation can be reconnected at all, because the peak load is reduced. I've not heard of this happening (yet), but it's inevitable as electronic loads become the standard and their power ratings increase.
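The scale of the problem in the last two paragraphs is easy to put into numbers. The 50:1 ratio is from the text; the substation rating and loading figures below are invented purely for illustration:

```python
# Back-of-envelope: reconnection demand on a loaded substation when every
# connected switchmode load draws its inrush at once.  The 50:1 ratio is
# from the text; the substation figures are illustrative assumptions.

rated_capacity_kva = 10_000   # assumed substation rating
steady_load_kva = 8_000       # assumed steady-state load (80% of rating)
inrush_ratio = 50             # 50:1 inrush, as cited above

peak_demand_kva = steady_load_kva * inrush_ratio
overload = peak_demand_kva / rated_capacity_kva
print(f"instantaneous demand at reconnection: {peak_demand_kva:,} kVA")
print(f"that is {overload:.0f} times the substation rating")
```

Even allowing for the very short duration of each individual inrush event, the aggregate peak dwarfs the rating.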
+ +I expect that most manufacturers will eventually get it right, but based on what I've seen so far it's likely to take a while. It's not hard to imagine how this problem has come about - I'm fairly sure that they simply haven't thought about the likely consequences, and any inrush current limiting is simply to protect the product itself. There appears to be little thought for the installation or the grid with many products. One only needs to remember where most products are manufactured now, and that the supplier is often selected on price alone. Unfortunately, it should come as no surprise that problems exist.
It is worthwhile mentioning that many of the latest (as of 2017, with some earlier) LED power supplies do incorporate an active soft-start function, and I've now tested quite a few that simply ramp up their input current over a number of mains cycles, with the current finally settling at the normal operating level. This is fairly recent, with most of the older lighting products failing rather dismally. This is obviously an issue that caught out a lot of installers (I know of several from immediate contacts in the industry), but the problems that used to be common are now largely a thing of the past. LED lighting has come a long way in a short time, and continued improvements are to be expected for some time to come.
Elliott Sound Products | Intermodulation Distortion
Introduction

1 Demonstration Sound Files
2 DIY Signal Analysis
3 Method 'B'
4 Bench Tests
5 'True' Intermodulation Distortion

Conclusions
Strictly speaking (and in particular from an RF (radio frequency) engineering perspective), the effects that produce sum and difference frequencies are not considered to be a part of intermodulation distortion, but are the result of mixing two frequencies. However, they are included in IEC standards for Difference Frequency Distortion (DFD), which is described in the standards IEC60118 and IEC60268. This test is referenced by Audio Precision, Tektronix and on the Analog Devices website, amongst many, many others. This test specifically refers to the production of the difference frequency as a result of IMD present in an amplifying circuit. From an audio perspective, any frequencies that result from intermodulation are counted as intermodulation distortion, including sum and difference frequencies (should they occur).
+ +For a more in-depth look at intermodulation distortion (IMD) as it is measured in amplifying devices, see Intermodulation Distortion (IMD). The article here concentrates primarily on the production (or otherwise) of sum and difference frequencies, and uses a 'brute force' approach so that the effects are clearly audible. IMD is normally far more subtle, although it's demonstrated in much the same way, but with reduced levels and using the test frequencies that are standardised for IMD testing.
+ +While the primary intent of this article is to allow the reader to demonstrate the phenomenon described for themselves (using ears, and optionally test equipment), it has ramifications for some of the tests that are commonly performed to measure intermodulation distortion. The DFD test referred to above is a prime example, and despite this being entrenched in standards documents, the test itself may fail to consider whether a circuit distorts symmetrically or not.
+ +The most interesting thing about this test method is how it behaves differently depending on whether the waveform and distortion are symmetrical or asymmetrical. Trying to verify this elsewhere is no easy task - I searched many different sources and found few references to this most interesting behaviour. However, a reader did find one other text that mentions the effect, "Audio Measurements" by Norman H Crowhurst (1958, pp98-102) [ 1 ]. The effects described are not 'new', but this is one of very few articles you will find that describe the difference between symmetrical and asymmetrical distortion and how it affects sum and difference frequencies.
This article is not about intermodulation distortion in general. While some of the text does refer to intermodulation, the article is intended to describe a specific case ... sum and difference frequencies. It can be argued that these aren't really intermodulation products but the result of frequency mixing, however the two are pretty much interdependent in a non-linear circuit (at least when asymmetrical). The observations herein are almost completely based on the differences between frequency products that result from symmetrical vs. asymmetrical distortion.
With a symmetrical input waveform, symmetrical distortion does not create the sum and difference frequencies. At least, it doesn't generate a significant level at either the sum or difference frequency, and as noted below, each can be 75dB below the level that's created by an equivalent amount of asymmetrical distortion. For reasons that are very unclear to me, I can find no reference to this phenomenon in any text that I've seen so far other than the one described above. Almost every discussion of distortion, intermodulation, and sum and difference frequencies implies (though it's very rarely stated) that the distortion mechanism and waveform are inconsequential. They are not, and in fact make a big difference to the behaviour.
+ +So, to get both the sum and difference frequencies, the non-linear part of the circuit must be asymmetrical. If you actually want the sum or difference frequency, you will be bitterly disappointed if you use nice, low distortion input signals and a pair of back-to-back parallel diodes (as shown in Figure 1), because you'll get neither! The sum and difference frequencies are present, but at such a low level that they are inaudible and in real terms, probably immeasurable because they will be buried in noise. If you remove one of the diodes, the sum and difference frequencies reveal themselves immediately.
+ +It is important to understand that just because the sum and difference frequencies are not created by symmetrical distortion, this does not mean that there is no intermodulation. Because of the non-linearity in the circuit, if we examine only the fundamental frequencies (1kHz and 1.1kHz), we see these new intermodulation frequencies generated (all in Hertz) ...
+ +Refer to Figure 4 to see these frequencies for yourself. In the list above, I only included frequencies that are greater than 100µV (-74dB with respect to the two fundamental frequencies). Anything below that level will be swamped by noise in any real test - either when listening or with an oscilloscope/ spectrum analyser. There are additional harmonic and intermodulation products too, of course. Figure 4 shows clusters of distortion at 3, 5, 7 and 9kHz and well above that if the graph were extended. Asymmetrical (supposedly 'nice') distortion also shows similar clusters of frequencies as seen in Figure 3.
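The clusters just described are the low-order intermodulation products of 1kHz and 1.1kHz. A symmetrical (odd) nonlinearity can only generate products m1·f1 + m2·f2 where |m1| + |m2| is odd, which is exactly why the sum (2.1kHz) and difference (100Hz) are absent. This sketch (my own enumeration, not from the article) lists the odd-order products up to 9th order:

```python
# Enumerate the odd-order intermodulation products |m1*f1 + m2*f2| of
# 1 kHz and 1.1 kHz, up to 9th order.  A symmetrical (odd) nonlinearity
# generates only these; even-order products such as the 100 Hz
# difference and 2.1 kHz sum (|m1| + |m2| = 2) cannot appear.

f1, f2 = 1000, 1100
products = set()
for order in (3, 5, 7, 9):                 # odd orders only
    for m1 in range(-order, order + 1):
        m2 = order - abs(m1)               # so that |m1| + |m2| == order
        for s in (m2, -m2):
            f = abs(m1 * f1 + s * f2)
            if 0 < f <= 10000:
                products.add(f)
print(sorted(products))
```

The result reproduces the clusters near 1, 3, 5, 7 and 9kHz, including the third-order products at 900Hz and 1,200Hz and the fifth-order products at 800Hz and 1,300Hz mentioned later, and contains nothing at 100Hz or 2,100Hz.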
+ + +I don't expect anyone to believe the findings examined here, because no-one else (apart from the single instance referred to in the intro) appears to have mentioned it. Electronics as we know it has been mainstream for almost 100 years, yet in all that time this phenomenon has gone either un-noticed or been deemed to be of no consequence. I consider that it is of great consequence (well, at least some consequence) - hence this article. So, assuming that you don't believe me, I suggest that you build the circuit shown in Figure 1, and feed a couple of clean sinewaves into it. You can monitor the output with an oscilloscope fitted with FFT capability, but it is essential that you also connect an amplifier and speaker so that you can hear it for yourself. You won't be able to hear the sum frequency because it gets swallowed up by harmonics, but the difference is definitely audible when the circuit is switched to asymmetrical distortion. Actually, build the circuit and test it for yourself anyway - there's nothing like hands-on testing as a learning tool.
There are many free spectrum analysis tools available on the Net that can be used with your computer and sound card. While the 20kHz upper frequency limit would normally be a problem, it's more than enough for these tests. Such tools are great for just looking at things that you may not have seen before, and are highly recommended for interesting experiments, speaker tests and anything else that requires spectrum analysis in the audio band.
+ +There are two files that you can download - one is a two-tone signal that you can use for testing, and the other is a demonstration of the intermodulation distortion that you can listen to. While this may be convenient to get a basic idea, there is nothing like actually experimenting with the very simple circuit to convince yourself that the phenomenon described is real. I set the difference between the two signals below to 200Hz, as that proved to be clearer through PC and 'real' speakers.
There are two files you can use to test this phenomenon. Both are MP3 format ...

900+1100-intermod.mp3 - 20 seconds, 370KB. This is the end result. First 10s asymmetrical, second 10s symmetrical.

900+1200-clean.mp3 - 20 seconds, 613KB. Use this to do your own tests.

intermod-900+1200-clean.mp3 download (You may need to right-click and select 'Save File As...')
In the first 10 seconds of the 'intermod' file, the 200Hz difference tone is clearly audible, and it all but disappears when the clipping becomes symmetrical. I didn't match the diodes, so some small asymmetry still remains, but I think it demonstrates the point pretty well. The sounds here have been re-recorded (again), changing the 1,200Hz tone to 1,100Hz, with a 200Hz difference frequency. Having experimented with a few speakers, this version is definitely clearer than the 1,200Hz + 900Hz version. The recordings are direct from the clipping circuits.
+ +The second (clean) tone is the original, using 1,200Hz and 900Hz. The 300Hz difference tone is audible when distortion is added, but I think the new recording shows the results more clearly, using 1,100Hz and 900Hz. The original distortion recording is still available ... 900+1200intermod.mp3. It's only 10s duration, with 5s for each form of distortion.
+ + +With two signals, one at 1kHz and the other at 1.1kHz, apply around 1V RMS from each of the signal generators - keep the two signal generators at roughly the same level. Advance the pot until you have enough level to obtain 'just visible' (or just audible) distortion. With the circuit shown the pot needs to be set for about half level. With the second diode connected, you don't hear any 100Hz tone, but as soon as you disconnect it, the 100Hz tone is clearly audible, and it will show up in the signal spectrum. You might want to use 900Hz and 1.2kHz (as used in the above sound files), as the 300Hz tone is more audible than 100Hz with small speakers.
+ +If you have a simulator program on your PC, you can also run a simulation. You'll see exactly the same thing if you set up the circuit as shown in Figure 1.
+ +This test actually came about while I was testing something completely different. Many people wax lyrical about the 'nice' distortion created by certain amplifiers, not realising that asymmetrical distortion creates not only the allegedly nice even harmonics, but also creates plenty of (allegedly nasty) odd harmonics as well. The total intermodulation products are actually greater with only one diode than with both (D1 and D2, Figure 1). I freely admit that no amplifiers normally show the type of distortion I used, but I was testing a theory, not looking for absolute specifics.
+ +The green trace shows the asymmetrical distortion, while the red trace is symmetrical. The input voltage is the same for both, but the peak amplitude is slightly lower with symmetrical distortion because both positive and negative peaks are (soft) clipped equally. The soft clipping behaviour is clearly visible on the negative peaks of the red trace.
+ +Soft clipping notwithstanding, this is a rather brutal demonstration, because it challenges so much of what has become 'common wisdom'. It is especially challenging when you run the tests for yourself. Even if you don't have an oscilloscope (let alone a digital model with FFT), the effects are immediately audible. If you don't have a signal generator, you can use your computer sound card, and generate the tones in software (using Audacity for example), or you can cheat and just download the file I already created (with Audacity ... intermod-1k0+1k2.mp3, 20 seconds, 315KB, right-click and select 'download'). You will need fairly heavy distortion to make the effect audible, otherwise the difference frequency is masked by all the other frequencies.
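If you'd rather generate the clean two-tone test signal yourself, a short script will do it. This sketch (filename, levels and duration are arbitrary choices) writes a 20-second 1kHz + 1.2kHz two-tone WAV file using only NumPy and the Python standard library:

```python
import wave
import numpy as np

# Generate a clean 1 kHz + 1.2 kHz two-tone test file (16-bit mono WAV).
# Filename, levels and duration are arbitrary choices for illustration.

fs, seconds = 44100, 20
t = np.arange(fs * seconds) / fs
x = 0.4 * np.sin(2 * np.pi * 1000 * t) + 0.4 * np.sin(2 * np.pi * 1200 * t)
pcm = (x * 32767).astype('<i2')          # scale to 16-bit signed PCM

with wave.open('two-tone-1k0+1k2.wav', 'wb') as w:
    w.setnchannels(1)                    # mono
    w.setsampwidth(2)                    # 16-bit samples
    w.setframerate(fs)
    w.writeframes(pcm.tobytes())
print('wrote', len(pcm), 'samples')
```

Play the file through the clipping circuit and listen (or watch the FFT) while switching between one and two diodes.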
+ +As you can see, there is a never ending stream of harmonic and non-harmonic frequency content. I stopped the trace at 10kHz, but it extends to infinity. After 8kHz the levels are less than 1mV (~70dB below the fundamentals). This also applies to the next trace, and there's no point continuing after the harmonic content is buried in noise as will be the case with a 'real' (as opposed to simulated) test.
+ +To make it easier to see which is which, I made the FFT green to match the green trace in Figure 2 and the FFT below is red to match the red trace. I also indicated where the sum and difference frequencies either are or should be - in the symmetrical test it's quite obvious that they are missing, although there is a vestige of the difference signal at 100Hz (but it's only at around 5µV and can be ignored).
+ +So, having looked at the simulated harmonic structure shown in Figures 3 and 4 we can make some observations. As you can see, both charts have exactly the same voltage and frequency ranges, so it's easy to compare the levels of all harmonics and intermodulation products. You can see the difference frequency at 100Hz, along with additional harmonics at 200Hz, 300Hz, etc. These diminish up to 500Hz, after which they start getting larger again. The sum frequency is 2.1kHz - again, clearly visible in the asymmetrical case (Figure 3) but missing entirely with symmetrical distortion.
+ +The difference frequencies are also visible with symmetrical distortion, but at significantly lower levels. At 100Hz, the difference is 75dB - the symmetrical distortion circuit produces a difference frequency (100Hz) that is almost 75dB lower than the 100Hz component from the asymmetrical circuit. Now, look closely at Figure 4 again.
+ +Right where the sum frequency should be (2.1kHz), there is ... nothing. Zip. Bugger all. So, symmetrical distortion not only eliminates the difference frequency, but there is no sum frequency either. Everyone keeps saying that intermodulation distortion creates sum and difference frequencies, but it only does so when either the non-linear circuit or the input waveform is asymmetrical!
+ +You can easily see that the FFT for symmetrical distortion is much less cluttered than that for asymmetrical distortion, yet it still manages to sound slightly harsher, despite having fewer harmonic and non-harmonic artifacts and comparatively little by way of sum and difference frequencies. However, there are very obvious sidebands around the input frequencies, and if you look closely at the third order IMD (900Hz and 1,200Hz) and fifth order IMD (800Hz and 1,300Hz) you can see that the true intermodulation products are higher with symmetrical distortion.
+ +The apparent reduction of 'clutter' is counter-intuitive and unexpected, especially since the harmonic distortion levels are quite similar (12.85% symmetrical vs 12.67% asymmetrical). It is likely that all non-linear circuits will show similar performance, at all levels. I also simulated a simple one-transistor amplifier and got very similar results (asymmetrical of course): with less than 1% THD, the sum and difference frequencies were very prominent, as expected. It is probable that many people have seen the effects described when looking at FFT measurements of audio frequency (and radio frequency) signals, but have not realised the implications. In reality, while the 'clutter' is less apparent with symmetrical distortion, the levels of IMD are greater.
+ +When the level is reduced, so too is the distortion, but the ratios will remain much the same between symmetrical and asymmetrical distortion mechanisms. Bear in mind that music is rarely symmetrical, and it doesn't matter if the signal or distortion mechanism is asymmetrical - either gives the same result. It's only when THD is reduced to less than 0.1% that intermodulation products are reduced to an acceptable level. It's important to note that any amplifying device that introduces harmonic distortion will also create intermodulation products - the two are inextricably interrelated. As THD is reduced, so too is IMD. No amplifier of any kind has ever been built that has significant (ie. measurable) THD but no IMD or vice versa.
+ + +After a minor epiphany, I thought I'd test another possibility, namely that the sum and difference frequencies are created with symmetrical distortion, but with equal amplitude and opposite phase. This would mean that all of the signals that 'disappear' when the distortion is symmetrical do so because they are equal and opposite, and therefore cancel each other. The simulator was the obvious choice, and the circuit was tested.
+ +Sure enough, each individual signal had sum and difference frequencies, but when summed they vanished. This is proof enough for a valid theory that supports exactly what we see and hear. The test circuit is shown below, and it can easily be entered into any simulator so you can verify it for yourself. You can also build it, but be warned that the setting of VR1 and VR2 will be very sensitive. Even a small difference will result in visible (on an FFT) and audible sum and difference frequencies. You will almost certainly have to use separate 10 turn trimpots to be able to set them with enough precision to get a good result.
+ +With something as bizarre as this, normal methods of investigation don't work. The time domain (normal oscilloscope display) doesn't give us enough information, and the frequency domain leaves out the all important phase information. The chosen technique involves separating the two signals. One has distortion on the positive peaks and the other has the same amount of distortion on the negative peaks. One might apply theoretical maths to the problem, but my background is practical, not theoretical (at least not at this level), and I don't have the maths skills to even attempt a mathematical solution.
+ +Each signal by itself has sum and difference frequencies (readily detected at 'A' and 'B' in Figure 5), but when summed they vanish. The only way that can happen is if these frequency components have exactly equal amplitude and are 180° out-of-phase. Since they do indeed vanish, we can conclude that the signals must be as we imagined.
+ +Now we know the exact mechanism that causes the sum and difference frequencies to disappear with a symmetrical distortion circuit. If there is slight asymmetry (unequal diode forward voltage or pot settings for example), the sum and difference frequencies will still be 180° out of phase, but will not have identical amplitudes. Therefore, cancellation is not complete, and the sum and difference frequencies will pop right up again, but at a lower than normal level.
+ +I also tested this in the simulation, and even a tiny amount of difference between the two distorted signals will cause the sum and difference frequencies to rise up out of nowhere. Since correlation between simulation and physical testing was extremely good (see the next section), it's quite safe to expect that the simulated results of the expanded test will be matched by reality. I didn't test this physically because I'm happy with the simulated results, which would simply be duplicated but with less precision.
+ + +There are a few other traces for you to look at. The following were captured using my digital oscilloscope, and show the waveform and FFT for each. I used 1.0kHz and 1.2kHz for these because the 200Hz difference frequency is easier to hear with small speakers. You can also listen to the waveforms - intermod.mp3. The first 3.5 seconds is with both diodes, and the remaining 3.5 seconds uses only one diode. Listen for the 200Hz tone that becomes (more) audible at the halfway point in the file.
+ +The first two traces were captured directly across the diodes, and the second two were captured from the wiper of the pot. Even though there is a resistor between the output and the diodes, the distortion is quite visible and was also audible. (I actually used 7.5k resistors because they were the first to hand). Note that the signal waveform is different in some of the following traces. This does not mean that anything significant is different, only that the phase relationship between the two sinewaves has changed ever so slightly. This is not audible.
+ +You can see the first set of harmonics at 3kHz and 3.6kHz and the next set at 5 and 6 kHz. Seventh harmonics are also just visible before everything disappears below the noise level. In the next trace, you can now see both odd and even harmonics, as well as the difference frequency (200Hz) at the left of the screen. Note that the 4th harmonic is not present. If you refer back to Figure 3 you will see that the 4th harmonic group is less than for the 5th harmonic group too - the simulation and real life are remarkably close.
+ +The next two traces were captured at the pot wiper, with a 7.5k resistor between the pot and the diode(s). The oscilloscope and monitor amp were connected to the pot wiper, and the diode(s) are partially isolated by the resistor. Distortion is not readily visible on the waveform, but can still be seen in the FFT and is clearly audible. The first set of harmonics (around 2kHz) is 30dB below the fundamentals rather than ~18dB in the first two examples in this section.
+ +The amplitude of the two fundamentals is greater because the measurement was taken from the pot wiper. Distortion is not visible on the waveform, but is clearly evident in the FFT trace. There are two things that tell you instantly which is which - the first is the presence or otherwise of the difference frequency (200Hz), and the other is the nature of the harmonics. Symmetrical distortion has no even harmonics (or at least much lower levels because perfect symmetry is hard to achieve in practice).
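The 'which is which' test in the last paragraph is easy to verify numerically: a symmetrical clipper produces only odd harmonics of a single tone, while an asymmetrical one produces both. The hard clipping at 0.5 below is an arbitrary stand-in for the diodes.

```python
import numpy as np

# A single 1 kHz tone through symmetrical and asymmetrical hard clippers
# (the 0.5 clip level is an arbitrary assumption): the symmetrical version
# has no even harmonics, the asymmetrical version has both odd and even.

fs, N = 48000, 48000
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 1000 * t)

sym = np.clip(x, -0.5, 0.5)        # both 'diodes'
asym = np.minimum(x, 0.5)          # one 'diode'

spec_sym = np.abs(np.fft.rfft(sym)) / (N / 2)
spec_asym = np.abs(np.fft.rfft(asym)) / (N / 2)

for name, spec in (("symmetrical", spec_sym), ("asymmetrical", spec_asym)):
    levels = {n: f"{spec[1000 * n]:.2e}" for n in (2, 3, 4)}
    print(name, "harmonics 2/3/4:", levels)
```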
+ +You will most likely find that the asymmetrical distortion actually sounds less harsh than the symmetrical version, despite the fact that it has more harmonics overall. Like so many other things, this is probably not what you would normally expect. I suspect that the reason is purely psycho-acoustical in nature - anything with bass (or at least a low frequency) will tend to sound 'nicer' than an otherwise similar sound without a low frequency.
+ + +Having established that looking for sum and difference frequencies won't work with any amplifier that is truly symmetrical, it's worth looking at the real nature of intermodulation distortion. There are many standards (such as IEC60118 and IEC60268) that do refer directly to the difference frequency, and it's a test that's often used. As described above, if the device under test (DUT) is symmetrical, it won't show anything, even though the DUT may have considerable intermodulation distortion. The test is known as a 'difference frequency distortion' (DFD) test.
From the Audio Precision website that explains the test ... "The DFD stimulus is two equal-level high-frequency tones f1 and f2, centred around a frequency called the mean frequency, (f1+f2)/2. The tones are separated by a frequency offset called the difference frequency. The two tones intermodulate in a distorting DUT to produce sum and difference frequencies."
Intermodulation causes a degree of amplitude modulation of one or both frequencies. If the SMPTE RP120-1983 standard is applied, the DUT is subjected to a 60Hz tone and a 7kHz tone at the same time, with a ratio of 4:1 respectively. The results are obtained by examining the 7kHz frequency, which should be a pure tone. If intermodulation is present, sidebands will appear. These indicate that amplitude modulation of the 7kHz tone is present, with the sidebands spaced at 60Hz intervals. An FFT of the result might look like the following graph.
The sidebands are clearly visible, and show that the 7kHz tone is amplitude modulated. This is intermodulation distortion, and is shown at a representative level for an amplifier with a THD (total harmonic distortion) of around 0.0075% (THD+N measured 0.013%). An amplifier with less distortion may show only the 7kHz tone, with the sidebands (they will be present) buried in the noise. The graph shown is somewhat optimistic, with a noise level of about -132dBV (250nV output noise, which was deliberately injected so that the FFT was closer to reality). Note that the sidebands are spaced at 60Hz intervals.
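The SMPTE-style stimulus and result are simple to mimic numerically. In this sketch (Python/NumPy), a one-sided clipper with an assumed 0.9 threshold stands in for an asymmetrically distorting DUT, and the sidebands appear at 7kHz ± 60Hz as described:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs                      # 1 s -> FFT bins exactly 1Hz wide
# SMPTE-style stimulus: 60Hz and 7kHz mixed at a 4:1 amplitude ratio
x = 0.8 * np.sin(2 * np.pi * 60 * t) + 0.2 * np.sin(2 * np.pi * 7000 * t)

y = np.minimum(x, 0.9)                      # asymmetrical (one 'diode') clipper
spec = np.abs(np.fft.rfft(y))

carrier = spec[7000]
lower, upper = spec[6940], spec[7060]       # first sidebands, spaced at 60Hz
print("sideband/carrier ratios:",
      round(lower / carrier, 5), round(upper / carrier, 5))
```

Bins away from the 60Hz-spaced product frequencies stay at the numerical noise floor, so the sidebands are unambiguous even for small amounts of clipping.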
It's important to understand that the majority of this page describes only sum and difference frequencies, and the conditions under which they are (or are not) created. The amount of distortion is deliberately much greater than you'd normally see to highlight this particular issue. IMD is a complex topic, and the reader is advised to look at articles that describe it in detail. That doesn't diminish the points made here though - attempting to measure IMD by looking for sum and difference frequencies will only ever work when the distortion mechanism is asymmetrical.
It's interesting (to me anyway) that it seems that no-one else has ever thought to verify that sum and difference frequencies are not created when symmetrical distortion is applied to a complex (but symmetrical) waveform. It's important to understand that if the input waveform(s) is/are asymmetrical, then even a symmetrical clipping circuit as demonstrated here will still cause asymmetrical distortion, so in some respects it's a moot point. I also tested this by clipping one of the input signals before it reached the pot, and as expected the difference between one and two clipping diodes disappeared.
The additional info described in Method 'B' has provided the solution to how this happens. It's no longer a mysterious phenomenon, but is now explained in a way that makes perfect sense. Although I simulated the test circuit rather than building one, all the simulations above were very closely matched by bench testing, so I am confident in the results. We now have a reasonable explanation as to exactly why completely symmetrical distortion doesn't produce sum and difference frequencies. It is even (theoretically) possible to determine the degree of asymmetry based on the proportions of the sum and difference frequencies compared to the fundamentals, but this would be rather pointless. We can see that far easier by just looking at the harmonic structure - even harmonics indicate asymmetrical distortion.
This has been a very interesting bit of research that has revealed something I've never seen mentioned (until advised of reference 1). Like much research, the end result isn't terribly useful to anyone because 'real world' signals will rarely be perfectly symmetrical when looked at over any sensible period of time. Just like it's entirely possible for all the violins in an orchestra to be in phase for brief periods (which will seriously mess up a surround decoder [ 2 ]), it's also a given that there will be brief periods where an audio waveform (for example) is perfectly symmetrical.
Just in case anyone is wondering just how I came across this intriguing phenomenon, it was while I was testing the hypothesis that even-order distortion sounds 'nicer' than odd-order distortion. In the process, I was listening to two tones and heard the difference frequency disappear when I switched from asymmetrical to symmetrical clipping. "That's interesting" I thought, and the rest is history.
There is another way to look at this issue as well. The same frequencies as described above are assumed - 1kHz and 1.1kHz. Symmetrical distortion means that there are only odd-order harmonics - even order harmonics are suppressed/cancelled. Since the sum frequency (2.1kHz) lies midway between the second harmonics of the two original signals (2kHz and 2.2kHz), by definition it should not exist. It can be argued that 100Hz (the difference frequency) is also an 'even' frequency or sub-harmonic, and therefore also should not exist. I'm not completely happy with this explanation, but it may help readers to understand the processes involved.
Despite the marginally 'softer' sound with one diode, the whole exercise belies the claims made by those who say that second harmonic distortion is pleasant and adds to the music. It doesn't do anything of the sort. Well, it does add to the music, but the additions are unwanted. It is also important to understand that second harmonic distortion by itself never exists in any real-world amplifying device - it is invariably accompanied by higher orders of even harmonics (4th, 6th, 8th, etc.) and odd harmonics. The odd harmonics simply come with the territory, and are free. Using push-pull circuitry (symmetrical) can all but completely cancel even harmonics, leaving only the odd harmonics. These are somewhat less intrusive - primarily because the levels of all harmonics are reduced significantly. Adding feedback (proper negative feedback, not simple emitter/cathode degeneration) will reduce distortion further.
Provided the amplifier has reasonable open-loop bandwidth and is fairly linear to start with, negative feedback will reduce all harmonics and IMD. The latter is by far the most crucial, and only when intermodulation distortion is minimised can you really claim to have a transparent amplifying device.
On the basis of this information, it is likely that many intermodulation distortion tests are meaningless. Using a 19kHz and 20kHz tone and expecting to see 1kHz is fine (a 'difference frequency distortion' test as described above), but it will only work if the amplifier being tested has asymmetrical distortion. With most real-world push-pull amplifiers, or any other topology that has very low levels of second harmonic distortion, this test will be decidedly optimistic, and may fail to show the real IMD caused by the amplifier.
I'd love to have been able to include some references, but there is only one ... plus the next three snippets (2-4)
There are countless references to using the DFD test to measure intermodulation, but very few (only one that I found) point out that there is a difference between symmetrical and asymmetrical distortion. Some of those I located are shown below (I no longer show most complete links because they have an annoying habit of being moved, so often don't work).
Elliott Sound Products | Intermodulation Distortion (IMD)
I suggest that you also read Intermodulation (?) - Something 'New' To Ponder, which was the forerunner to this article. It covers the specific case of sum and difference frequencies in detail, showing how they may not appear at all, despite considerable harmonic and intermodulation distortion.
IMD (intermodulation distortion) is one of the main culprits that can make amplifiers sound 'bad'. It's heavily reliant on harmonic distortion (THD - total harmonic distortion), but not in any easily calculated way. Any amplifier that has harmonic distortion has intermodulation distortion as well, and the converse is also true. Harmonic distortion (as its name suggests) generates harmonics of the original signal. A 1kHz tone will have 2kHz, 3kHz, 4kHz, 5kHz (etc.) harmonics, but in some cases the even harmonics are suppressed (2nd, 4th, etc.). Push-pull amplifiers (regardless of topology) fall into this category, and in theory have odd harmonics only. This is never completely true in real life - all amplifiers, regardless of what they use as an amplifying device, have both odd and even harmonics, but the even harmonics can be below the noise floor. Just as it's impossible to design an amplifier that produces only the second harmonic, all amplifiers will have some of every harmonic present. Hopefully, most will be far enough below the noise floor that the distortion is not intrusive.
Unlike harmonic distortion, intermodulation generates frequencies that are not harmonically related. This makes them far more objectionable, because the frequencies generated are unrelated to the input frequencies. It's important to understand that the process of distortion (of all forms) is simply due to non-linearity. The amplifying device does not 'generate' the new frequencies directly, but they are an inevitable by-product of distortion. When the shape of a waveform is changed, the harmonics (and/or other frequencies) are created simply due to the physics of waveforms. A pure sinewave has (by definition) no distortion, and consists of a single frequency - the fundamental. Distortion is due to non-linearity, and that modifies the shape of the waveform.
An amplifier does not have to deal with multiple frequencies simultaneously. There is one value of voltage and current present at any instant in time, and the 'signal' does not pass through an amplifier and its feedback network multiple times (yes, this claim has been made many times elsewhere). The instantaneous value of the input is amplified, and that amplified version is attenuated by the feedback network and compared to the value present at the input. Should the two differ, the error amplifier (the first stage of the amplifier) tries to correct the difference in real time. Should the input signal change faster than the amplifier can react, the amplifier is no longer in its linear range, and the results are not useful.
There also seems to be a 'difference of opinion' between RF (radio frequency) and audio engineers as to what actually constitutes IMD. Strictly speaking (for an RF engineer), IMD does not include the sum and difference frequencies, whether they are measurable or not. Audio engineers do consider sum and difference frequencies to be part of intermodulation distortion, since they are produced by the same mechanism that creates other IMD products.
In a circuit with symmetrical distortion, these products are not evident at measurable levels, and they only appear if the distortion is asymmetrical. In RF work, the sum and difference frequencies are the expected result of mixing two frequencies together (in a deliberately non-linear circuit). Over the years, it appears that these sum and difference frequencies have been classified as part of IMD for audio, but not for RF. For audio applications the sum and difference frequencies are unwanted, and thus should be (and are) included as IMD products.
Another term you'll come across is PIM (passive intermodulation), most often associated with junctions of dissimilar metals and/or oxide layers at a connection. This is rarely a problem with home audio, because the conditions that cause serious oxidation are (usually) not present. PIM is most likely to occur with high RF power levels, combined with the effects of corrosion caused by atmospheric pollutants, oxygen and moisture. This is a separate topic, and is not part of this article.
The effects of harmonic distortion are generally benign, provided the total measured distortion is less than 0.01%. In some cases it can be greater without becoming audible, but anything over 0.5% is decidedly 'lo-fi' (as opposed to 'hi-fi'). The actual value where it becomes objectionable depends on many factors, not the least of which is the type of material. For a single guitar played through an amplifier deliberately driven into clipping (aka 'overdrive'), the distortion levels are extreme, but this forms part of 'the sound'. Play music through the same system (and with the same amount of clipping) and the end result is intolerable.
Any amplifier that has harmonic distortion also has intermodulation distortion, and of course the converse is also true - the two are inseparable. While harmonic distortion (THD) is less obtrusive than IMD, high levels of THD also mean there will be high levels of IMD. Many in audio consider THD tests to be 'pointless' because they don't describe 'the sound' or because sinewaves are 'too simple' to perform a meaningful test. These attitudes are based on the false premise that an amplifier is somehow subjected to multiple simultaneous frequencies, which is simply untrue.
Intermodulation is vastly more complex than THD, and the result is that (usually higher) frequencies are amplitude modulated by lower frequencies. This is immediately apparent when an amplifier clips, but there will always be some degree of IMD even at low levels, well below maximum power. IMD acts not only on the original frequencies, but also on their harmonics, as well as the new (non-harmonic) frequencies created by IMD itself. It doesn't take a great deal of IMD to render the signal objectionable to listen to, and it's naturally worse with complex passages.
The optimum way to measure IMD is to use the SMPTE (Society of Motion Picture & Television Engineers) standard RP120-1994, which uses a 60Hz tone and a non-harmonically related (usually 7kHz) tone, with an amplitude ratio of 4:1 (low:high). This test looks for sidebands around the 7kHz tone, the presence of which indicates amplitude modulation, and therefore intermodulation distortion. The 7kHz signal is shown below, showing the presence of IMD.
Figure 1 - 7kHz Tone Showing IMD Sidebands
The sidebands are clearly visible, and show that the 7kHz tone is indeed amplitude modulated. This is intermodulation distortion, and is shown at a representative level for an amplifier with a THD+N (total harmonic distortion plus noise) of around 0.013%. An amplifier with less distortion may show only the 7kHz tone, with the sidebands (they will be present) buried in the noise. The graph shown is somewhat optimistic, with a noise level of about -132dBV (250nV output noise, which was deliberately injected so that the FFT was closer to reality). The sidebands are spaced at 60Hz intervals if distortion is asymmetrical or at 120Hz with symmetrical distortion (the 60Hz tone is the 'modulating' frequency). In RF parlance, the 7kHz tone would be the 'carrier' frequency.
The SMPTE test will show the sidebands regardless of whether the distortion is symmetrical or asymmetrical. However, if the distortion is symmetrical, the even-order sidebands won't exist, and they will be spaced at 120Hz intervals, not 60Hz. Although the number of visible sidebands is reduced with symmetrical distortion, the amplitude of those remaining is slightly higher for the same measured THD. If the same circuit is subjected to the CCIF (now ITU-R) IMD test, a 1kHz (difference frequency) signal will only be seen if the circuit is asymmetrical. Information on who may or may not use the difference frequency as a 'metric' for IMD is hard to come by. In some cases it's called a DFD (difference frequency distortion) test, which implies that the measured level of the difference frequency is used, but a few sentences later that changes again.
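The doubling of the sideband spacing with symmetrical distortion is easy to check numerically. In this sketch (Python/NumPy), a hard symmetrical clipper with an assumed ±0.9 threshold stands in for the DUT; the 7kHz ± 60Hz bins vanish while 7kHz ± 120Hz remain:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs                      # 1 s -> FFT bins exactly 1Hz wide
x = 0.8 * np.sin(2 * np.pi * 60 * t) + 0.2 * np.sin(2 * np.pi * 7000 * t)

sym = np.clip(x, -0.9, 0.9)                 # symmetrical (two 'diode') clipper
spec = np.abs(np.fft.rfft(sym))

# An odd (symmetrical) nonlinearity only produces odd-total-order products,
# so 7000 +/- m*60 survives only for even m: the sideband spacing is 120Hz.
print("7060Hz:", round(spec[7060], 4), " 7120Hz:", round(spec[7120], 4))
```

Swapping the `np.clip` for `np.minimum(x, 0.9)` restores the 60Hz-spaced sidebands, matching the asymmetrical case.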
Given that the ITU-R IMD tests based on IEC60118 and IEC60268 (apparently) rely on detection of the difference frequency, as discussed in Intermodulation (?) - Something 'New' To Ponder, amplifiers with symmetrical distortion will fail to provide a reliable result (they may fail to give a result at all - even though the amplifier may have considerable IMD). It's difficult to know how this standard came about, since we know that symmetrical distortion will not produce the sum and difference frequencies. As noted in the referenced ESP article though, this is something that many people seem not to know (which puzzles me, but so do many other things (at least vaguely) related to audio).
Figure 2 - 19kHz + 20kHz Tones Showing IMD Sidebands & Difference Tone (Inset: Composite Tone)
The inset shows the waveform of the composite (19kHz + 20kHz) tone. The amplitude variation you see is not amplitude modulation, but shows the beat frequency (which is at 1kHz). It's rather important that you don't confuse beat frequencies with intermodulation, as they are completely separate phenomena. While beat frequencies can easily be audible, that will only occur if both tones are audible individually. The 'beat' is simply the result of alternating constructive and destructive pressure waves arriving at your ear, or (and as shown) constructive and destructive reinforcement of the electrical signal. The inset is also a demonstration that there are never two separate signals - when added together you get a composite waveform, and only one value of voltage is present at any point in time.
As a sidenote, beat frequencies are often used by guitarists to tune their instruments (e.g. 'harmonic tuning'), and by piano tuners who favour the 'old fashioned' way to tune, by counting the number of beats for specific harmonics when two notes are sounded together. Predictably, this is outside the scope of this article, but most musicians will be well aware of beat frequencies and their importance.
With an amplifier having asymmetrical distortion, the graph above shows the difference tone, as well as sidebands spaced 1kHz apart at 18kHz and 21kHz. The two sidebands are a good indicator of IMD, but the difference tone at 1kHz is not. This is shown clearly in the following graph. The sidebands are created by any non-linearity in the system (including the measurement equipment!). If the sidebands are present, the signal(s) have been subjected to a degree of amplitude modulation, which is the direct result of non-linearity. Whenever a circuit is non-linear, the amount of amplification will change depending on the amplitude of the input signal. For example, a particularly poor amplifier may show a gain of 10 (20dB) at 1V input, and a gain of 11 (20.8dB) at 100mV. It doesn't matter if the voltage is AC or DC.
Figure 3 - 19kHz + 20kHz Tones Showing IMD Sidebands Only
When the distortion is symmetrical (as will be the case with almost all push-pull amplifiers), the difference tone disappears. The IMD can still be judged by the sidebands at 18kHz and 21kHz (third-order IMD products), and that is supposed to be what is measured when the ITU-R test is applied. Therefore, it's safe to say that this method works, provided sum and difference frequencies are not considered. Unfortunately, obtaining any standards document anywhere in the world is expensive, so I don't have access to all the details of the test procedure.
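The level-dependent gain of the hypothetical 'particularly poor amplifier' mentioned earlier (a gain of 10 at 1V input, 11 at 100mV) can be modelled with a simple cubic nonlinearity - an illustrative choice of my own, not a model of any real amplifier:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
f = 1000                                    # test tone on an exact FFT bin

def gain_at(amplitude):
    """Effective gain (fundamental out / fundamental in) of y = 11x - (4/3)x^3."""
    x = amplitude * np.sin(2 * np.pi * f * t)
    y = 11 * x - (4 / 3) * x ** 3           # hypothetical compressive 'amplifier'
    spec = np.abs(np.fft.rfft(y))
    return spec[f] / (amplitude * fs / 2)   # a unit sine's fundamental bin = fs/2

print(round(gain_at(1.0), 2), round(gain_at(0.1), 2))   # → 10.0 10.99
```

Because sin³θ = (3·sinθ − sin3θ)/4, the fundamental gain works out to 11 − A² at amplitude A, so the gain is 10 at 1V and ~11 at 100mV - and the same A² term is what generates the third harmonic and the IMD products.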
Amplifier non-linearity is a fact of life, but most competent designs manage THD and IMD levels that are below audibility. Earlier I stated that an amplifier has no 'understanding' of the waveform at its input, only the instantaneous voltage at any point in time. Based on this, you may wonder how (and why) more complex musical passages can create more complex intermodulation products. The answer is as simple as it is complex, and doesn't change the basics one iota.
The circuit I used to produce the three graphs shown above is very simple indeed. It consists of nothing more than two 'ideal' (i.e. distortion-free) sinewave voltage sources, and an attenuator ultimately leading to a pair of diodes. The distortion is deliberately very low - this article is about IMD specifically, and high levels of distortion are neither necessary nor useful to produce the results. However, to be able to demonstrate exactly what happens with waveforms you can see requires a more savage test, and the results are shown here.
Figure 4 - 7kHz Tone Showing IMD Sidebands (Exaggerated For Clarity)
The sidebands are much more prominent now. Whereas the tests described earlier would show no sign of visible amplitude modulation, in the graph below it can be seen. There's still not very much, with the peak variation being from 82mV to 72mV (about 12%), and it only occurs briefly during the 60Hz cycle. A single diode was used, but you can see that the amplitude of the positive and negative peaks are affected, although not equally. The waveform is shown below, with the 60Hz component removed by a notch filter.
Figure 5 - 7kHz Tone Showing Amplitude Modulation
Only a part of the residual 7kHz tone is shown, purely because trying to show it all simply results in a solid block of red with the waveform itself not visible. The results shown are simulated, but if I were to set up a workbench example the same way, I'd get exactly the same result. The circuit used for the simulation is shown next, and it's apparent that the effect of the diode is quite small, given the circuit impedances seen in the schematic. The buffer is included so the notch filter doesn't affect the two voltages applied to the distortion circuit.
Figure 6 - 60Hz + 7kHz Test Circuit
The test circuit I used for the simulations is shown above. Although the frequencies and amplitudes shown are intended for the SMPTE test method, they can be replaced by two equal amplitude generators spaced 1kHz apart (e.g. 19kHz and 20kHz). The switch allows the use of a single diode (asymmetrical) or two diodes (symmetrical) so the effects of both can be examined. The 1k pot allows the amount of distortion to be controlled; at maximum resistance the distortion is low, and it increases as the resistance is reduced. The 60Hz notch filter is not required for ITU-R tests. In case you are wondering why there are no component values for the notch filter, that's because they are very precise and decidedly non-standard values to get a perfect notch at 60Hz. R and C values are determined using the standard RC filter formula ...
f = 1 / ( 2π × R × C )
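As an illustration of the formula, the sketch below (Python; the 10nF capacitor value is an arbitrary assumption of mine) solves for R at a 60Hz notch. For a twin-T filter, the usual arrangement is two series resistors R with a 2C shunt capacitor, and two series capacitors C with an R/2 shunt resistor:

```python
from math import pi

f0 = 60.0                      # notch frequency (Hz)
C = 10e-9                      # chosen capacitor (10nF - an assumed value)
R = 1 / (2 * pi * f0 * C)      # rearranged from f = 1 / (2*pi*R*C)
print(f"R = {R:.0f} ohm")      # ~265.3k - decidedly non-standard, as noted above
```

The result (≈265.3k) illustrates the point about non-standard values: in practice you'd trim one arm of the twin-T to centre the notch exactly.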
Without the notch filter in circuit, it becomes difficult to measure the IMD sidebands of the 7kHz tone because the amplitude of the 60Hz component forces a higher level setting for the FFT analyser to accommodate the low frequency without creating additional distortion. If you have a distortion meter, the 60Hz tone can be removed with the internal notch filter, and the distortion residual output will show the level and distortion present on the 7kHz tone. The distortion will only be visible on a scope if it's greater than (around) 5%, but you might be able to see amplitude modulation if the IMD is great enough.
The above test circuit can be used if you wish to take measurements. The notch filter has to be tuned very carefully, and if the DUT (device under test) is an amplifier or preamp, no buffer is necessary. With this arrangement and a competent scope (or a PC sound card with an appropriate input attenuator), you can see the IMD products fairly easily by running an FFT to examine the signal in the frequency domain.
The notch filter isn't essential, but it makes it far easier to see the IMD products. The only thing missing will be the 60Hz signals, but THD products will still be present at 180Hz, 300Hz, 420Hz, 540Hz (etc.) assuming symmetrical distortion. With asymmetrical distortion, even harmonics of 60Hz will also be visible. Be aware that if a simple notch filter as shown (without feedback) is used, the harmonics of the 60Hz tone will be attenuated. The second harmonic (120Hz) is attenuated by 9dB, the third by 5.1dB, fourth by 3.3dB, etc. These are harmonic distortion artifacts and can be ignored for IMD measurements.
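Those attenuation figures follow directly from the transfer function of a passive (no feedback) twin-T notch, H(s) = (s² + ω0²) / (s² + 4ω0·s + ω0²) - assuming that is the 'simple notch filter' in question. A quick check:

```python
import math

def twin_t_db(n):
    """Gain (dB) of a passive twin-T notch at n times the notch frequency."""
    s = n * 1j                                  # normalised s = j*n*w0 (w0 = 1)
    h = (s**2 + 1) / (s**2 + 4 * s + 1)         # H(s) of the un-bootstrapped twin-T
    return 20 * math.log10(abs(h))

for n in (2, 3, 4):
    print(f"{n} x f0: {twin_t_db(n):.1f} dB")
# 2f0 ≈ -9.1dB, 3f0 ≈ -5.1dB, 4f0 ≈ -3.3dB - the figures quoted above
```

A bootstrapped (feedback) twin-T narrows the notch and leaves the harmonics essentially untouched, which is why distortion analysers use that arrangement instead.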
Figure 7 - Intermodulation Distribution (ITU-R Method)
The areas on a brown background do not appear with symmetrical distortion, while those on a green background are always there when IMD is present. This makes complete sense once you know that sum and difference frequencies aren't generated with symmetrical distortion, nor will there be any second harmonic distortion (which also requires an asymmetrical distortion mechanism). Therefore, with symmetrical distortion, any IMD products are limited to odd-order effects only. With a symmetrical distortion process (two diodes in Figure 6), the third-order products are greater than with the asymmetrical version of the same basic arrangement, which also makes sense because both peaks are affected rather than one. This results in more amplitude modulation.
I expect that for many readers, the production of AM (amplitude modulation) may be somewhat mysterious, and it's necessary to look at a somewhat more radical example so that it can be easily understood. The example I'll use is rather drastic, but it is the clearest possible example. When two signals are added together (I'll use 60Hz + 1kHz here, rather than the normal 7kHz tone), the peak amplitude is the sum of that of the two individual frequencies. If you have peak levels of 1V at 60Hz and 250mV at 7kHz, the total peak level is ±1.25 volts. In reality, these can be any two voltages, or frequencies that are far enough apart that the composite waveform shows each as individual signals, with the high frequency 'riding' on the lower frequency. This is shown below. I used 60Hz and 1kHz so the two signals would be visible. The level of the final signal is ±412mV peak, due to the three resistors. VR1 is reduced to its minimum value for this test, to get maximum 'visibility' of AM in action.
Figure 8 - Composite Waveform (60Hz, 1kHz, SMPTE)
If we now imagine that the amplifier in question cannot pass the full ±412mV peak due to the diodes, some of the 1kHz waveform will be attenuated at the peak of the 60Hz waveform. Apart from the fact that the 1kHz tone is now distorted, its level is also reduced to less than it should be. So an 83mV peak signal (after attenuation) is reduced to (a distorted) 72mV peak. It has been amplitude modulated. The resulting 1kHz tone is shown next, replete with distortion (which is very hard to see). The 60Hz tone has been removed with the notch filter.
Figure 9 - Amplitude Modulation
The AM on the 1kHz tone is visible above, and the tone is also distorted. The simulator says that THD is 0.78%, which almost seems as if it should be audible, but not necessarily objectionable. The problem isn't simple harmonic distortion though, it's the IMD products generated. Two plots are shown below, the first with asymmetrical distortion (which produces supposedly 'nice' even harmonics), and the best you can say for it is that it's a mess.
Figure 10 - IMD, 1kHz Tone, Asymmetrical Distortion
In contrast, the following plot is for symmetrical distortion (two diodes). Although the peak amplitude of some of the sidebands is slightly greater due to a little more AM, the overall picture is far less 'cluttered', with fewer IMD products overall. However, even though there are fewer IMD frequencies produced, that doesn't mean that it will sound any better. IMD always sounds bad, regardless of the mechanism that produces it.
Note that although 1kHz was used in this test, there are IMD products at 2, 3, 4 and 5kHz. When the high frequency tone is 7kHz, that results in IMD at and around 14, 21, 28 and 35kHz, and the IMD 'artifacts' extend to much higher frequencies as well. Whether these can be measured depends on the test equipment available, and frequencies above 21kHz (the third harmonic of 7kHz) can be ignored because they are inaudible. The distortion is still present though, and may cause further intermodulation, potentially including difference frequencies of IMD products with wide band material (as opposed to a 'simple' two-tone test signal). IMD is complex, and the more frequencies present at the input, the more IMD is developed.
Figure 11 - IMD, 1kHz Tone, Symmetrical Distortion
Interestingly (and you may not have expected it), the 60Hz tone is also distorted by the circuit shown in Figure 6. Depending on whether there are one or two diodes connected, the distortion will be either symmetrical or asymmetrical. Remember that at any point in time, there is only one value of voltage, and the 60Hz and 1kHz tones are simply added together and become one single composite waveform. It doesn't matter if the 60Hz tone is visibly 'clipped' or not, because the composite tone comprises both waveforms, which are now inseparable in a wide band system. That means that anything that happens to one, also happens to the other. The distortion of the 60Hz tone is low compared to the AM and distortion of the 1kHz tone, and it measures 0.5%. That adds even more harmonics to the overall waveform, and with that comes even more intermodulation.
In Figures 10 and 11, there are clearly IMD frequencies well below 1kHz, and these are products of the distortion of the 60Hz tone. As more harmonic frequencies are added, the IMD becomes more and more complex, and with material containing a multiplicity of different frequencies (which still only result in a single voltage at any point in time!), the IMD products become overwhelming and turn the music into 'mush'. There is one way (and only one way) to minimise this, and that's to ensure the amplifier is as linear as possible. Like it or not, that means it will have low measured THD. As noted earlier, THD and IMD go hand-in-hand, and it's not possible to have one without the other.
In any 'real' amplifying device, there should normally be no significant non-linearity while the amplifier is in its linear range. For designs with little or no feedback, non-linearity is guaranteed, because there is no single discrete active component that has perfect linearity. At very low signal levels (compared to operating voltage) the degree of non-linearity is generally quite small, and can be well below audibility. In general (at least with single-ended circuits), as the signal level increases, so too does non-linearity. The transfer curve of an ideal amplifying device is a straight line, with no discontinuities. In reality, the transfer curve is never a simple straight line, it's always curved. It doesn't matter if it's a valve (vacuum tube), transistor, FET or MOSFET, none has a perfect transfer curve.
The graph only shows the low-level performance, but any amplifying device will also reach saturation at some point. As the current increases, the gain falls, so there is non-linearity at both low and high output current. Once any amplifying device is fully saturated, further increases at the input make no difference, as it's fully turned on. This is clipping. There must be some mechanism to protect solid-state devices from over-current conditions that will damage the device or the power supply. The minimum 'resistance' for valves is usually quite high, and damage is less likely (it can still happen though).
Figure 12 - Transfer Curve, 2N2222 Example
The red curve shows the actual transfer curve, and the narrow green trace shows the ideal straight line. Even though the non-linearity seems pretty small, it's there, and as such shows that the circuit will cause distortion. The red trace was obtained using the most ideal conditions possible. The collector voltage remains fixed at 12V, and the base current is derived from an 'ideal' current source. In any real circuit, the conditions are worse, as is the non-linearity. Even though it may appear that the upper part of the red curve is straight, it's not. There is a continual but slight curvature over the full range of collector current.
Despite the very gradual curve you see, biasing the transistor with 30µA and applying a 10mV input signal to the base results in a 3.7mA (peak to peak) collector current variation. The distortion is then almost 5%. This is improved by feeding the base with a variable current rather than a voltage, but with the same (as close as I could get it) collector current variation. Doing so reduces distortion to 0.15%, demonstrating the importance of topology when maximum linearity is essential. This article is not going to investigate ways to obtain maximum linearity - that's an altogether different topic, but it does show that even a small curvature in the transfer curve can result in surprisingly high distortion. Where there is harmonic distortion, there is also IMD.
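The figure of almost 5% is consistent with the ideal exponential BJT law, Ic ∝ exp(Vbe/VT). The sketch below (Python/NumPy, my own illustration) assumes the 10mV signal is peak-to-peak (5mV peak) and VT ≈ 26mV, and estimates the second-harmonic distortion of the collector current by FFT:

```python
import numpy as np

VT = 0.026                                  # thermal voltage, ~26mV at room temp
fs = 48000
t = np.arange(fs) / fs
vbe = 0.005 * np.sin(2 * np.pi * 1000 * t)  # 5mV peak (10mV p-p) base drive

# Ideal exponential law; Is and the DC bias scale out of the harmonic ratios
ic = np.exp(vbe / VT)
spec = np.abs(np.fft.rfft(ic))

hd2 = spec[2000] / spec[1000]               # 2nd harmonic relative to fundamental
print(f"HD2 ≈ {100 * hd2:.1f}%")
```

For small drives the result approaches the well-known small-signal estimate HD2 ≈ Vpeak / (4·VT) ≈ 4.8%, which is why even a 'tiny' 10mV swing on a voltage-driven base gives percent-level distortion.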
Valves are actually worse than most transistors (as seen in any valve datasheet), as are JFETs and MOSFETs. It's a myth that "valves are more linear than transistors" - oft quoted, but wrong. The difference is that valves (usually) operate over a smaller range of their operating voltage than transistors, but that doesn't help at all with valve power stages, which have a great deal more non-linearity than modern power transistors. Project and commercial amplifiers alike use many techniques to keep transistor linearity well within the range where open loop (i.e. without feedback) gain is good enough to ensure low distortion, and when feedback is applied it improves both harmonic and intermodulation distortion figures. These aren't just numbers - they are essential figures for any amplifier, especially IMD.
Modern design practices (well, they aren't really that modern any more) result in pretty good linearity before the addition of feedback. Power transistors are now much better than they were even 10 years ago, with linearity that's far better than earlier devices. This makes it comparatively easy to design an amplifier that has a low open-loop (before feedback is applied) distortion, and feedback improves that to the point where distortion is (usually) not an issue. Combined with a good overall design that can still be surprisingly simple, it's rare for amplifiers to have distortion (THD/ IMD) that is audible.

By using bi-amping (or even tri-amping), each power amp has reduced power demands and a narrower bandwidth, which can also improve IMD. This is another tool in one's arsenal that can be applied to a system to get the maximum linearity. There are countless amplifiers (and opamps) that are so good that they test the limits of analysis systems, so unless your favourite amplifier is a zero feedback design or a single-ended triode, then IMD is usually not a problem. This doesn't mean that all amplifiers are free of IMD or even particularly low THD, but it's rare for any competent design to have audible flaws.

Early transistor amplifiers often had crossover distortion, where the signal becomes distorted as it passes from one output transistor to the other. This is particularly insidious, because distortion was nearly always measured at close to full power, and the effects of crossover distortion are most objectionable at low power levels. When neither output transistor is conducting, the amplifier has extremely low open loop gain (it can even be negative), so feedback cannot remove this type of distortion. This gave transistor amps a bad name compared to valve amps of the era, and some people still use this as the reason that valve amps sound 'better'. They don't. They often do sound different, but 'different' is not equal to 'better'.

Reviews may lead you to believe that one amplifier is "markedly superior" to another, but they are generally full of hyperbole, terms and phrases that have no meaning in engineering (or no meaning at all in some cases). Since most reviews are based on sighted tests (not blind or double-blind), the results are pretty much meaningless. Listening tests are important, and so are measurements. Ideally, the two will be in agreement, and if both are conducted sensibly, that will almost always be the case. It's very rare that an amplifier measures badly but sounds good, and the converse is also true.

It's worth a short paragraph to discuss an amplifier's 'linear region'. An amplifier that's clipping is outside the linear region because the output voltage cannot exceed the supply voltage. Feedback is irrelevant, because the output is not a linear function of the input voltage. If the input voltage to an amp changes so quickly that the amplifier is subjected to slew-rate limiting, then again, this cannot be corrected by feedback. The linear region is where the amplifier's output voltage is in direct proportion to the input voltage, and where the feedback path ensures that the error amplifier (the input stage) has almost identical voltages at each input. Most amplifiers spend most of their time in this linear region (there are exceptions of course).
All amplifiers (including opamps) have a frequency response that's tailored to ensure stability. That almost invariably means that while the gain at DC or low frequencies is very high (it may be more than 100dB for some opamps), as the frequency increases, gain decreases. As the gain decreases, there is less negative feedback, so it is less able to reduce non-linear distortion. What that means for the circuit is that distortion rises with increasing frequency, simply because there is less feedback (closed loop). Since distortion of all forms is directly related to the feedback ratio (i.e. the difference between open loop and closed loop gain), as the open loop gain falls, distortion must rise.
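The arithmetic behind this can be sketched with a few assumed numbers (100dB DC gain, a 10Hz dominant pole, 0.5% open-loop distortion and a closed-loop gain of 30 are illustrative values, not measurements of any particular amplifier). Closed-loop distortion is simply the open-loop figure divided by the feedback factor 1 + Aβ:

```python
import math

A0 = 10 ** (100 / 20)    # 100 dB of open-loop gain at DC (assumed)
fp = 10.0                # dominant pole frequency in Hz (assumed)
beta = 1 / 30            # feedback network for a closed-loop gain near 30
d_open = 0.5             # open-loop THD in percent (assumed)

d_closed = {}
for f in (100, 1_000, 10_000, 20_000):
    A = A0 / math.hypot(1, f / fp)            # open-loop gain magnitude at f
    d_closed[f] = d_open / abs(1 + A * beta)  # distortion reduced by feedback factor
    print(f"{f:>6} Hz: closed-loop THD approx {d_closed[f]:.4f} %")
```

With these numbers the distortion is vanishingly small at 100Hz but over 100 times greater at 20kHz, even though nothing about the amplifier's non-linearity has changed - only the available loop gain.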
Fortunately for most audio applications, the frequency where this starts to have any real impact is generally well outside the audio band. However, it also depends on the amount of gain set by the feedback loop itself, and expecting very high gain from a single opamp (for example) can lead to unexpectedly high distortion at the top end of the spectrum. In some cases, people have claimed that this is 'proof' that feedback is 'bad' and ruins the sound, but countless commercial (and DIY) amplifiers show performance levels that exceed the ability of basic test equipment to measure any 'defects' - real or imagined. Some opamps are so good that even the best laboratory grade instruments struggle to measure their distortion, be it THD or IMD. In many cases, there is certainly the possibility that high-order IMD (and THD) products are greater than lower order distortion, but it's often the case that these supposedly 'troublesome' high order harmonics and/ or IMD products are below the noise floor.

There is no doubt at all that we can hear tones that are well below the background noise, but not if they are at 25kHz or more! Provided all harmonic and non-harmonic distortion components are either well below the noise floor and/ or are at frequencies above 20kHz, we don't need to be concerned. The situation is different if one is designing ADSL line drivers or other circuits that operate at low radio frequencies, and likewise if designing RF transmitters. Audio is neither of these, but it does present some unique difficulties of its own.

Most RF equipment is narrow-band, and even relatively wide-band equipment may only cover a range of a small fraction of an octave. Audio operates over a range of ten octaves, so the tricks that are used for RF equipment can't be used. However, audio equipment can be considered 'mature', in that the principles have been well known and (at least reasonably) well understood for close to 100 years. In that time, circuitry has been refined over and over again, with major manufacturers producing semiconductors that are close to being as good as one can get. This may continue to improve, but the rise of digital signal processing (DSP) and Class-D amplifiers means that there are significant changes to the way audio is processed.
TIM (transient intermodulation distortion, aka TID - transient induced distortion) has been the topic of considerable research, and although it's now considered to be largely irrelevant, it's certainly possible to re-create it quite easily. First proposed in 1976 in a paper presented to the AES (Audio Engineering Society), its very existence has been called into question. The original test stimulus proposed was a 3.18kHz squarewave and a 15kHz sinewave, with a ratio of 4:1 respectively. The squarewave is provided via a low-pass filter, with a -3dB frequency of 30kHz or 100kHz "depending on quality requirements of the equipment being measured" (sic). While it's hard to criticise the authors for their enterprise, it should be apparent to most readers that the test is particularly severe, and doesn't represent the structure of 'normal' musical programme material.

It's not my intention to cover this in great detail, largely because I believe the test conditions proposed to be so far outside the normal parameters of audio signals that the results are not a reliable test for amplifier 'sound'. For any amplifier to be classified as 'low TIM', it requires not just high linearity, but far greater speed than is actually necessary to ensure high fidelity reproduction. This makes an amplifier more complex, but more importantly, it may have marginal stability due to the requirement for a wide open-loop bandwidth and high slew rate. Neither is necessary for an amplifier (or preamplifier) that is expected to handle frequencies up to 20kHz with 'normal' programme material. If an amplifier has wide bandwidth and a high slew-rate without compromising stability, then there is no reason to avoid it.
There is no doubt whatsoever that many opamps and power amps will provide apparently 'dismal' performance when subjected to the TIM test methodology, but there is no actual requirement for them to be able to handle such a severe test. The use of a squarewave is always a good test of an amplifier's performance (and one that I routinely use), but expecting normal audio circuitry to handle a signal that isn't created by any known musical instrument is not really a fair test. The more conventional IMD tests that are used are a better predictor of performance than a test that subjects the DUT (device under test) to operating conditions that are far outside the bounds of realistic audio.

It's worth considering that few (if any) mixing consoles or other elements in the recording chain have been verified as 'low TIM', nor have the majority of DACs (digital to analogue converters) or ADCs (analogue to digital converters). The LM4562 opamp (one of the lowest distortion opamps around) has specifications for just about everything, in particular THD and IMD. TIM is not mentioned. Whether it's because it's considered irrelevant or is below the measurement capabilities of the National Semiconductor (now Texas Instruments) laboratory is not known.

The net result is that TIM can certainly be induced in many circuits, but the parameters are so far beyond what is actually necessary for good sound reproduction that it's not a test that needs to be performed. Provided THD and IMD are within reasonable limits, the chances of 'serious degradation' of the signal can be considered negligible. Quite obviously, any circuit has to be fast enough to remain within its linear region for any audio programme material, and if that is achieved then nothing more needs to be done.

The TOI (third order intercept) is often quoted for RF equipment (transmitters etc.). Because the IMD ratio depends on the power level of the fundamental input tones, this is a useful test to determine how much power can be delivered before results are unsatisfactory. The fundamental principle of TOI is that for every 1dB increase in the power of the input tones, the third-order products will increase by 3dB.
As the power of a two tone stimulus is increased, the IMD ratio will decrease as a function of input power. At some arbitrarily high input power level, the third-order distortion products would theoretically be equal in power to the fundamental tones. This theoretical power level where first-order (fundamental/ stimulus) and third-order (IMD) products are of equal power is called the third-order intercept.
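Because the gap between the fundamentals and the third-order products closes at 2dB for every 1dB of input, the intercept can be extrapolated from a single two-tone measurement. The numbers below are purely illustrative:

```python
def third_order_intercept(p_fund_dbm, p_im3_dbm):
    """Extrapolated output intercept point from one two-tone measurement.
    IM3 rises 3 dB for every 1 dB of fundamental, so the gap closes at
    2 dB per dB:  OIP3 = Pout + (Pout - Pim3) / 2."""
    return p_fund_dbm + (p_fund_dbm - p_im3_dbm) / 2

# Example: fundamentals at +10 dBm, IM3 products measured at -50 dBm
print(third_order_intercept(10, -50))    # 40.0 (dBm)
```

The intercept is never reached in practice - the amplifier clips long before - which is exactly why it is quoted as an extrapolated figure of merit rather than a measured operating point.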
This is not relevant to audio applications, and is mentioned here purely because it's a term that you'll come across when looking for information on IMD. There are many other terms you may also come across for RF applications, but they are 'application specific', and not something you need to be concerned about. Searches will bring up a lot of information, and it can be difficult to work out which things you need to know vs. those that are irrelevant to audio frequency amplifiers.

Intermodulation distortion is without doubt the most complex form of distortion in any amplifying equipment, and is the most difficult to understand. Measurement isn't easy either, because test equipment that's generally well out of range for hobbyists is required. There are several test systems that can do a very good job, but ideally you still need a good notch filter to remove the low frequency component. Without that, it can be very difficult to see intermodulation products clearly, because the range of most affordable test gear is not great enough to be useful. Most digital oscilloscopes have an FFT (fast Fourier transform) function, but it lacks the range and precision necessary to be able to identify IMD unless it's so high as to be audible.
Amplifier IMD often pales into insignificance compared to that created by some loudspeaker drivers, with 'full-range' speakers being among the worst because they have to handle the entire frequency range. Some people have worked around this by horn loading the driver for low frequencies, but bass horns are impracticably large for most listening rooms. Usefully, while loudspeakers in general have much higher distortion levels than most amplifiers, the effects are generally low-order because the drivers are generally used over a relatively narrow bandwidth. Even wide range drivers can sound very good, provided power levels are modest.
IMD is far more difficult to measure and quantify accurately than 'simple' THD, but you can generally rest assured that if the THD level is sufficiently low, IMD is unlikely to be a serious issue. Despite all the claims that harmonic distortion measurements are 'pointless', they are nothing of the sort. Low THD means high linearity through the circuit, and if a circuit is sufficiently linear it's unlikely to generate serious IMD. There are factors that can change this, as linearity can deteriorate at higher frequencies (which may not be measured for THD). In general, if you get a good THD figure at 1kHz and low IMD with an SMPTE and/or ITU-R test, then it's time to listen critically to ensure that the measurements and what you hear are in agreement. Few amplifiers will disappoint if they provide good test results.
+ + +![]() ![]() + |
![]() | + + + + + + + |
Elliott Sound Products - Inverter AC Power Supplies
Inverters are used in all kinds of places and for all kinds of reasons. One very common application is to convert 12V from a car DC outlet to 230 or 120V AC to power small appliances. These are very common, especially with travellers with motor-homes or caravans. Another is for 'uninterruptible' backup supplies (UPS - uninterruptible power supply) for computers, either in the home or in large data centres. Inverters are also used with solar systems and wind generators, with some being very large and powerful indeed. This article only looks at the technologies commonly used for small and medium power systems - those up to a few hundred watts, but the techniques used can be scaled to almost any power level.
The basic requirements and the most common types are described. It is not meant to provide a design process, but to inform the reader what the various terms mean, how different types of inverter interact with common appliances, and how they work. There are many aspects of the design process that are far too complex to attempt to explain in detail however, so don't expect to see every possible variation described in full.

Please note that waveforms and voltages are shown based on 50Hz and 230V RMS output. 60Hz 120V systems use identical technology, and simply use a transformer with a different turns ratio and a 60Hz oscillator. DC input current is virtually unchanged for a given output power. While a 60Hz inverter can theoretically use a slightly smaller transformer than a 50Hz unit, the difference is so small that it can be ignored for all practical purposes.

Circuit examples show MOSFETs used for switching, but many high power inverters use IGBTs (insulated gate bipolar transistors) because they are more rugged and are designed for very high current operation. Some budget inverters may use standard bipolar transistors if they are only low power, because they are cheaper than the alternatives.

The idea of an inverter is simple enough. We use an oscillator to generate the required frequency (50 or 60Hz), and use that as the input to a power amplifier. Because the amplifier's working voltage is generally fairly low (typically 12 or 24V DC), a transformer is used to step up the voltage to 230V or 120V as required. Most inverters will use the transformer as part of the power amplifier itself, because this makes the overall design much simpler, especially for modified squarewave designs.
Let's assume that the circuit is 100% efficient just for the moment. This makes calculations nice and simple, and also gives us a rough idea of what the final circuit has to be able to do in real life. 12V DC is a very common input voltage, and this is suited for use in cars, motor homes and for computer UPS applications. The first thing we now need to know is how much output power we need. For the sake of the exercise, let's assume 1,000W (1kW).
To obtain 1kW at 120V requires an output current of 8.33A, or 4.35A at 230V. Unfortunately, 1kW at 12V means that we need 83.33A from the battery, ignoring all losses. If you wanted to be able to provide 1kW for 1 hour, you'll quickly discover that you need a 12V battery rated at around 120AH (amp hours). Lead-acid batteries are the most economical choice for a UPS, and that's what you already have in the car (make sure that you don't fully discharge the battery). Lead-acid batteries (including gel-cell and AGM types) provide a reduced capacity if they are discharged quickly. For example, a 120AH battery will usually only provide its claimed capacity if discharged at the 10 hour rate (i.e. 10 hours at a current of 12A for a 120AH battery). Higher discharge current means that the capacity of the battery is reduced.
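The current figures above are simple power arithmetic, sketched below. The 85% efficiency figure is an assumption of mine for illustration, not a value from the text:

```python
def dc_input_current(p_out_w, v_batt, efficiency=1.0):
    """Average battery current for a given AC output power.
    Set efficiency below 1.0 to include conversion losses."""
    return p_out_w / (v_batt * efficiency)

p = 1000.0                              # 1 kW output, as in the example
print(p / 230, p / 120)                 # AC output current: ~4.35 A / ~8.33 A
i_dc = dc_input_current(p, 12.0)        # lossless case, as in the text
print(i_dc)                             # ~83.3 A from the 12 V battery
print(dc_input_current(p, 12.0, 0.85))  # ~98 A with an assumed 85% efficiency
```

Once losses are included, the battery current climbs well past the 'ideal' figure, which is one reason the 120AH battery in the example will not really deliver 1kW for a full hour.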
The above current requirements refer only to the RMS output current (AC), and the average input current (DC). For 230V output from a 12V source, the average DC input is typically around 20 times the RMS output current for a modified squarewave inverter. DC input current is higher than the rough calculation, because it must include an allowance for losses in the system. In reality it is wise to lower your expectations.

It's probably fair to say that inverters are a fairly evil load for any battery, especially if you expect more than a few watts output. It's equally fair to say that the output of any inverter that isn't a sinewave ('pure' sinewave) is also a pretty evil source for a great many loads. It's not even possible to give a list, because so many loads are now electronically controlled. Once electronics is involved with a load (especially motors and transformers), it's only possible to know what's involved if you have detailed specifications and/or a circuit diagram.

Some products might state whether they are suitable for use with various inverters, but most don't. Most switchmode power supplies will be happy enough, but they may be subjected to higher peak current than normal if the input is not a sinewave. PCs should be alright - they are the very load that most UPS systems are designed for. If in doubt, seek advice from the appliance manufacturer.
Inverters are commonly classified by their output waveform, so you will typically see the following types offered ...

* Squarewave
* Modified squarewave (commonly, but incorrectly, sold as 'modified sinewave')
* Modified sinewave
* Sinewave ('pure' sinewave)
Note that 'modified sinewave' and 'modified squarewave' inverters are actually quite different, but it's common for the two to be lumped together and the terms used interchangeably. This is partly because there is no strict definition of the terms, and advertising material is notorious for bending the rules to make a product seem more appealing. Claiming that an inverter is modified sinewave sounds much better than saying it's modified squarewave - particularly for people who have a little knowledge of such things. The three most common types have their waveforms shown below. In each case the RMS value of the voltage waveform is 230V, but only the modified squarewave and sinewave types maintain the correct peak voltage of 325V.
Figure 1 - Inverter Waveforms, All 50Hz, 230V RMS
For the squarewave and modified squarewave waveforms, I added the sinewave as an overlay so you can see the difference clearly. The 'modified sinewave' waveform isn't shown here because it's somewhat more complex and harder to produce. There are also several different ways to create a modified sinewave, and these are discussed below. As noted above, in many advertisements you will see the modified squarewave type referred to as modified sinewave. This is false advertising, but some people really don't know the difference.
All squarewave based inverters will cause stress to interference suppression components fitted to the connected appliance. A sinewave has a relatively gentle rate of change of voltage (dv/dt - the change of voltage over time). Squarewaves (modified or otherwise) have a very high dv/dt, and additional filtering is needed on the inverter output to reduce it to something acceptable to the most common loads.
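To put rough numbers on the difference (the 1µs rise time below is an assumed figure for a typical unfiltered switching edge, not a measured one):

```python
import math

v_peak = 325.0    # peak of a 230 V RMS sinewave
f = 50.0          # mains frequency, Hz

# Maximum slope of a sinewave occurs at the zero crossing: 2*pi*f*Vpeak
sine_dvdt = 2 * math.pi * f * v_peak      # in V/s
print(sine_dvdt / 1e6, "V/us")            # roughly 0.1 V/us

# A squarewave edge swinging from -325 V to +325 V in an assumed 1 us
square_dvdt = 650.0 / 1e-6                # in V/s
print(square_dvdt / 1e6, "V/us")          # 650 V/us
print(square_dvdt / sine_dvdt)            # several thousand times faster
```

It's this enormous ratio that stresses X-class suppression capacitors and similar parts, and why the output filter matters so much more for squarewave types.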
Filtering is also needed so that products will pass EMI (electromagnetic interference) tests that apply in most countries. It's not at all uncommon for inverters to cause radio interference, especially on the AM bands. You can also expect to be told that this interference will cause cancer, your belly-button will fall off and you'll get ingrown toenails as a result of 'dirty electricity' as it's become known. Maybe bad things will happen, but it's not like we use inverters pressed close to our bodies all day. Most 'pure' sinewave inverters also create interference because they operate at high switching frequencies.

The simplest inverter is a squarewave type. The oscillator is very basic, and they are fairly easy to build. Unfortunately, the ratio of peak to RMS voltage is very different from a sinewave, and this will cause stress to some appliances. Motors and transformers in particular will usually draw much higher current than they are designed for, so they may run hot enough to cause premature failure. Most switchmode power supplies don't care, and will operate quite happily from a squarewave input. Interference suppression capacitors will be stressed by the fast rise time of the squarewave.
A sinewave has a peak to RMS ratio of 1.414 (√2), so a 230V sinewave has a peak value of 325V and a 120V sinewave has a peak of 170V (close enough in each case). A squarewave with a peak value of 325V has an RMS voltage of ... 325V. Peak and RMS are the same. If the voltage is reduced so that the RMS voltage is correct, then many electronic power supplies will see a greatly reduced input voltage because many charge filter capacitors to the peak of the voltage. So where the load expects to see peaks of 325V (or 170V), it will only get 230V or 120V peaks. Some loads will not power up properly if the voltage is too low.
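The peak/RMS relationship is easy to verify numerically. The sketch below samples one cycle of each waveform (normalised to a peak of 1) and computes the RMS value directly from its definition:

```python
import math

n = 100_000
sine = [math.sin(2 * math.pi * i / n) for i in range(n)]
square = [1.0 if s >= 0 else -1.0 for s in sine]   # same peak value

def rms(samples):
    """Root of the mean of the squares."""
    return math.sqrt(sum(v * v for v in samples) / len(samples))

rms_sine = rms(sine)      # 0.707..., so peak/RMS = 1.414 (sqrt 2)
rms_square = rms(square)  # exactly 1.0 - peak and RMS are identical
print(rms_sine, rms_square)
```

Scaling by 325V: the sine delivers 230V RMS, while the squarewave of the same peak delivers 325V RMS, which is exactly the mismatch described above.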
The above notwithstanding, I will explain a basic squarewave inverter first, because the same switching circuitry is used for the modified squarewave converter as well. The simple squarewave is easy to understand, and will make it easier to follow the more complex options. The most common arrangement for simple inverters is to use a transformer with a centre-tapped low-voltage primary. The centre tap is connected to the 12V DC supply, and each end of the winding is connected to earth/ ground in turn. This is shown in Figure 2. It is important to understand that there must be no time when both MOSFETs (or transistors) are turned on at the same time, so there is a short period where both are turned off. This is known as 'dead time'.
Figure 2 - Basic Squarewave Inverter
The inverter shown in Figure 2 is very basic - it has been simplified to such an extent that it is easy to understand, but it does not work very well. The biggest problem is mentioned above - the peak and RMS voltages are the same, and this limits its usefulness. However, the same basic circuit operated at a higher frequency (25kHz or more) is exactly what's used with a great many DC-DC converters. See Project 89 as an example. R1/C1 and R2/C2 are snubber circuits that reduce high voltage spikes from the transformer.
Even operated at 50Hz, the circuit is fairly efficient. It's very important to choose transistors or MOSFETs that have a very low 'on' resistance. It is imperative that losses in the switching devices are minimised, and heavy wire is needed for all interconnections and on the transformer's primary. Even small resistances add up quite quickly in a high current circuit, and it's easy for losses to become so great that overall efficiency is reduced dramatically. This is not what you want when operating equipment from a battery, because amp-hours cost money.
As shown, the output stage is very similar to that used in a great many different inverters. The only difference between the circuit shown and a modified squarewave inverter is the oscillator and the transformer voltage ratio. For the squarewave inverter, the transformer ratio is determined by ...

Rt = Vout / Vin   (where Rt is the transformation ratio, Vin is the input voltage and Vout is the RMS output voltage, equal to the peak voltage with a squarewave inverter)
Rt = 230 / 12 = 19.17 (a turns ratio of 1:19.17)
The above does not make any allowance for losses, and the ratio would need to be between 1:20 and 1:22 (for each primary winding) to allow for losses across the MOSFETs and in the transformer windings. This type of inverter has no mechanism for regulation, so the output voltage will vary with the load. To keep the variation to a minimum, all losses must be kept as low as possible.
An AC waveform swings positive and negative, so the peak-to-peak voltage is double the peak voltage. This is accomplished by the transformer, which has a dual primary with a centre-tap. Because of the dual primary, the ratio may also be written as 1+1:20 (for example). The ratio based on the voltage across the entire primary is 1:10 and the peak-to-peak input voltage is actually 24V. This is the voltage across each switching MOSFET - it varies between close to zero and +24V. This is simple transformer theory - if you don't understand, then please read the articles Transformers, Part 1 and Transformers, Part 2.

To provide a waveform that has the same RMS and peak voltages as the mains, we need to modify the waveform to that shown in Figure 1B. The remainder of the circuit remains exactly the same, but the transformer ratio is changed so that the peak voltage is created.

Rt = Vpeak / Vin
Rt = 325 / 12 = 27.08 (a turns ratio of 1:27.08)
Again, allowances must be made for switching and transformer winding resistance, so the final ratio will be around 1:30 to obtain the required 325V peak voltage for a 230V RMS voltage under load. A lot of common loads rely on the peak voltage, in particular simple switchmode power supplies. Unfortunately, it's not feasible to regulate the peak voltage with a basic design, but it is relatively easy to regulate the RMS voltage simply by changing the width of the voltage pulses. As the pulse width is increased, the RMS voltage is increased, even though the peak voltage may be reduced.
For a waveform with exactly 325V peaks, each positive and negative going pulse needs to be exactly 5ms wide. This means that for a 50Hz waveform (20ms for one complete cycle) the voltage will be as shown in Figure 3. This is the same waveform as that shown in Figure 1B, but expanded for clarity.
Figure 3 - Modified Squarewave Waveform In Detail
Naturally, for 60Hz mains the timing is different, but the essential part is that the waveform period is divided evenly into 4 discrete segments that are exactly equal. For 50Hz, the period is 20ms, so the waveform is made up of 4 × 5ms segments. It might not be immediately apparent, but this gives the same 1.414 peak/ RMS value as a sinewave. The RMS value is 230V and the peak is 325V (give or take a fraction of a volt). The distortion is a rather high 47% (THD), and although it can be reduced by changing the width of the pulses, doing so changes the voltage. The best distortion figure (28% THD) is achieved when the pulses are about 7ms wide (instead of 5ms), but the RMS voltage is increased to over 270V. All in all, equally timed pulses and dead time are far simpler to generate and give a fairly good overall result.
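The RMS and distortion figures can be checked numerically. The sketch below is my own verification, not the author's calculation: it builds one cycle of the modified squarewave and analyses it with an FFT. Note that the exact THD figure depends on whether harmonics are referenced to the fundamental or to the total RMS, so the results land near, rather than exactly on, the quoted values.

```python
import numpy as np

def modified_square(width_ms, v_peak=325.0, period_ms=20.0, n=20000):
    """One cycle: a +Vp pulse centred in the first half-cycle,
    a matching -Vp pulse centred in the second."""
    t = np.arange(n) * period_ms / n
    half = period_ms / 2
    lo = (half - width_ms) / 2    # pulse start within each half-cycle
    v = np.zeros(n)
    v[(t >= lo) & (t < lo + width_ms)] = v_peak
    v[(t >= lo + half) & (t < lo + half + width_ms)] = -v_peak
    return v

results = {}
for w in (5.0, 7.0):
    v = modified_square(w)
    rms = np.sqrt(np.mean(v ** 2))
    spec = 2 * np.abs(np.fft.rfft(v)) / len(v)    # one-sided amplitude spectrum
    thd = np.sqrt(np.sum(spec[2:] ** 2)) / spec[1]  # harmonics vs fundamental
    results[w] = (rms, thd)
    print(f"{w} ms pulses: {rms:.0f} V RMS, THD {thd * 100:.0f}%")
```

The 5ms case gives 230V RMS with high distortion; widening the pulses to 7ms lowers the distortion but pushes the RMS voltage past 270V, exactly the trade-off described in the text.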
The transformer requires a different turns ratio as described above. Apart from the oscillator, the inverter circuit is identical to that shown in Figure 2. The oscillator must be more complex to produce the waveform, but it's not difficult and can be done in many different ways. One of the easiest is to use a PIC (or any other programmable micro-controller), which also means that frequency stability can be extremely good if the controller uses a crystal oscillator.

Regulation of the RMS voltage can be achieved by making the voltage pulses wider or narrower, but the peak voltage cannot be regulated without extreme circuit complexity. For a simple inverter that's suitable for many common loads, the additional circuitry will never be added because the circuit would not be simple any more.

Since it is easy to regulate the RMS value by simply changing the width of the pulses, you may think of this as a very (very!) crude form of PWM (pulse width modulation). And so it is. It is theoretically possible to add a filter that will give a passable sinewave at the output, but because the frequency is so low it would be uneconomical and would actually create far more problems than it would ever solve.

While the modified squarewave inverter can be seen as a very crude form of PWM, one form of modified sinewave uses low speed PWM to achieve a rough approximation to a sinewave (discussed below). Another variation is to build a step waveform, by switching different transformer windings in and out of circuit. This is shown below, and you can see that it is starting to resemble a rather piecemeal sinewave. This is a crude form of pulse amplitude modulation (PAM), a technique that was common for a brief period before fully digital systems were economically feasible.
Figure 4 - Modified Sinewave Waveform
This waveform cannot be created using the simple switching shown above, and requires a transformer with more primary windings to generate the output voltage. By carefully adjusting the number of turns and switch timing it's possible to get a waveform with distortion of around 20% or better. Because of the relative complexity of the waveform, it has to be created using discrete logic (cheap but inflexible) or a programmable microcontroller (PIC or similar), which allows fine timing adjustments if necessary.
This type of inverter is not common, because its transformer is more complex and it needs additional switching transistors and driver circuits. With Class-D amplifier technology now commonplace, it's easier and cheaper to build a 'true' sinewave inverter than to mess around trying to implement a workable modified sinewave. To give you an idea of the relative complexity, Figure 5 shows a simplified circuit.
Figure 5 - Simplified Modified Sinewave Schematic
It's no longer appropriate to call the frequency generator an oscillator, because it has to generate a relatively complex waveform. This makes it a waveform generator, rather than a simple oscillator. It may not be immediately apparent how this circuit works, so first let's assume that we are about to generate a positive half cycle followed by a negative half cycle.
+ +It is imperative that no two MOSFETs are ever on at the same time, or extremely high and possibly destructive current will flow. This means that there will be small glitches in the output waveform, but most loads will be unaffected. Some basic filtering will remove the highest harmonic frequencies, and is essential to prevent radio frequency interference. Snubber circuits have not been shown, nor has the fuse.
The waveform timings described are only intended as an example. To optimise the peak to RMS ratio and distortion performance it will be necessary to make small adjustments to the timing of each pulse and off period. Timing changes are also needed to alter the frequency - the timings described provide a 50Hz output. Changes to the transformer winding ratios can help further optimise the peak vs. RMS voltage and output distortion. It should be possible to get distortion below 20% with a peak to RMS ratio very close to 1.414:1 with this arrangement.
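The link between switch timing and the peak-to-RMS ratio can be checked with a few lines of Python. This is a sketch for the simplest two-level modified squarewave (not the multi-step waveform of Figure 4), where the output sits at ±V for some fraction of each half cycle and at 0V otherwise:

```python
import math

def crest_factor(duty):
    """Peak-to-RMS ratio of a two-level 'modified squarewave' that sits at
    +/-V for 'duty' fraction of each half cycle and at 0V otherwise.
    RMS = V * sqrt(duty) and peak = V, so the ratio is 1 / sqrt(duty)."""
    return 1.0 / math.sqrt(duty)

# A 50% on-time reproduces the sinewave's 1.414:1 peak-to-RMS ratio
print(round(crest_factor(0.5), 3))   # 1.414
```

This is why modified squarewave inverters aim for roughly half-cycle conduction: any other on-time gives the wrong crest factor, so either the peak or the RMS voltage is incorrect for the load.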
There is another variant of the 'modified sinewave' inverter that uses low-speed pulse width modulation (PWM). Rather than use a switching frequency of 25kHz or so, it can be done with a frequency of around 550Hz. The 'sampling' frequency should be an odd harmonic of the desired fundamental frequency to ensure a symmetrical output waveform.
Figure 6 - Low-Speed Pulse Width Modulation Waveform
There is very little point trying to filter this waveform, because the sampling frequency is far too low and no sensible filter can remove the harmonics. I have no personal experience with this type of inverter, so I can't be certain how most common loads will behave. Because of the very high harmonic content, most motors and transformers are likely to be stressed and may overheat. With 96% harmonic distortion, it's by far the worst so far, and if you are going to go to the trouble of PWM, then it might as well be the real thing from the outset. Like the other 'modified sinewave' variant shown above, it will cost so little more to implement a true sinewave that low-speed PWM is not worth considering.
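The claim of very high harmonic content is easy to verify numerically. The sketch below models an idealised two-level (bipolar) comparator PWM with a 550Hz carrier - a simplification, since the real Figure 6 waveform may use three levels - and estimates distortion from the energy left over once the 50Hz fundamental is removed:

```python
import math

F_SIG, F_CAR, M = 50.0, 550.0, 1.0   # 50Hz signal, 11th-harmonic carrier, full modulation
N = 110000                            # samples covering exactly one 50Hz cycle
dt = (1.0 / F_SIG) / N

def triangle(t, f):
    # unit triangle wave, swinging -1..+1 at frequency f
    x = (t * f) % 1.0
    return 4.0 * x - 1.0 if x < 0.5 else 3.0 - 4.0 * x

# two-level comparator PWM: +1 when the sine exceeds the triangle, else -1
pwm = [1.0 if M * math.sin(2 * math.pi * F_SIG * i * dt) > triangle(i * dt, F_CAR) else -1.0
       for i in range(N)]

# fundamental amplitude via a single-bin DFT at 50Hz
re = sum(s * math.cos(2 * math.pi * i / N) for i, s in enumerate(pwm)) * 2.0 / N
im = sum(s * math.sin(2 * math.pi * i / N) for i, s in enumerate(pwm)) * 2.0 / N
fund_rms = math.hypot(re, im) / math.sqrt(2.0)

total_rms = 1.0   # a +/-1 waveform always has unity RMS
harm_rms = math.sqrt(max(total_rms ** 2 - fund_rms ** 2, 0.0))
thd = harm_rms / fund_rms   # very high - of the same order as the fundamental itself
```

The distortion comes out in the region of 100% for this idealised bipolar case, the same order as the article's 96% figure for the actual waveform.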
Making a pure sinewave inverter is (in theory) not especially difficult. All you need is a sinewave oscillator of the right frequency, a power amplifier to provide the current you need, and a transformer to increase the voltage to 230V or 120V RMS. Unfortunately, this is very inefficient and makes poor use of the battery's capacity. This used to be fairly common for sinewave laboratory power supplies, and I have one in my workshop. It's very large, extremely heavy (two very large transformers and a big heatsink), and although the waveform is extremely good it runs hot enough at full load to make full use of the heavy-duty fans that are fitted. Forget battery operation entirely, because it operates from relatively high voltage to keep the current within sensible limits. This power supply (it's inappropriate to call it an inverter) uses a vast number of power transistors to allow it to drive 'difficult' loads.
Although it is possible to use much the same power amplifier arrangement as shown in Figure 2, a great deal of feedback is needed to obtain good linearity. It's generally easier to use a more-or-less conventional power amp (but remember that it has to be fully protected against accidental shorts, normal momentary overloads and possibly very reactive loads). This makes the amplifier complex and expensive, and more so if you want to operate it from a low supply voltage.
When the supply voltage is only 12V DC, it's almost essential to run two amplifiers in bridge (BTL) mode, since that effectively doubles the supply voltage. Using a linear power amplifier is not viable for an inverter or a UPS, because the efficiency is poor (expect no better than ~60% for 'real-world' circuits), although it can be increased slightly at the expense of some distortion. Expecting better than 70% overall efficiency is generally unrealistic unless the sinewave is clipped to the extent that it resembles a squarewave.
Figure 7 - Clipped 'Pure' Sinewave Waveform
With distortion of just over 5% (the mains can be worse than that), an RMS voltage of 231.5V and a peak value of 310V, the above waveform is very close to that obtained directly from the mains. Because of the clipping, the efficiency will be in the vicinity of 70-75% - somewhat better than the theoretical maximum with a pure sinewave. The transistors still need substantial heatsinks, and of course every Watt of heat has to be supplied by the battery.
As should be apparent, this is not an ideal circuit. The relatively low distortion is good for motors and other inductive loads, and causes little stress to any load because it's close to what comes out of a wall outlet. However, the extra battery drain is high enough that you lose at least 30% of the battery's capacity as heat.
Because this is not a viable option, no representative circuit is provided. If anyone wanted to build an inverter using linear amplifiers, it is feasible and potentially useful if the power levels are low. One example that comes to mind is to use a crystal-controlled sinewave oscillator, IC power amplifier and a suitable transformer to create up to 10W or so. Such an arrangement is ideal for driving synchronous clocks or turntable motors that generally only use 2-3W at the most. Ensuring that the amplifier does clip will help to reduce the total power dissipation.
PWM is the technology of choice for maximum efficiency and a clean sinewave output. The modulation frequency should be high enough to ensure no-one can hear it, which typically means at least 25kHz. Lower frequencies can be used, but the noise from the transformer or filter inductor may be intolerable and the filter components will be larger and more expensive. There are countless chip-sets available for making PWM circuits, and it's not difficult to get very high performance with high efficiency. It's possible to get a properly designed Class-D amplifier to have an efficiency of between 80% and 90%, but there will also be transformer losses that must be considered.
For power output of more than perhaps 200W, the Class-D amplifier will almost certainly use discrete components. IC amplifiers are available that can do more, but an inverter is a special case when it comes to the load. Many common loads will present close to a short-circuit when first powered on (motors, toroidal transformers and simple mains rectifier-filter capacitor power supplies for example), and this causes extreme stress on the amplifier.
For an output of 500W (for example) at 230V, the load impedance is 106 ohms. Since the transformer will need a 1:30 ratio (1:900 impedance ratio), the effective load on the power amplifier is only 118mΩ - 0.118 ohm! This is an extraordinarily low impedance, and gives you an idea of the kind of load experienced. Remember that this can drop to almost zero, limited only by the resistance of the transformer windings, and so far has only considered a resistive load. There's more info on the transformer ratios below. To combat the high losses experienced at such low impedances, it's wise (and more efficient) to include a boost converter to increase the available 12V to something more manageable. Naturally, there will be losses involved in the boost converter, but with careful design they will be less than the losses incurred without it.
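The reflected impedance arithmetic is worth showing explicitly. This short sketch simply restates the figures from the paragraph above:

```python
V_OUT, P_OUT = 230.0, 500.0
z_load = V_OUT ** 2 / P_OUT            # 230^2 / 500 ~= 106 ohms at full power
TURNS_RATIO = 30.0                     # 1:30 voltage ratio
z_primary = z_load / TURNS_RATIO ** 2  # impedance transforms by the ratio squared
print(round(z_load, 1), round(z_primary * 1000, 1))   # 105.8 (ohms), 117.6 (milliohms)
```

The squared relationship is the key point: doubling the step-up ratio quarters the impedance seen by the amplifier, which is why low-voltage inverters are so punishing on their switching devices.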
To examine the processes needed for a Class-D power amp for inverters, I suggest that you read the Texas Instruments application note [ 2 ]. This recommends the use of a 'tri-level' PWM waveform, generated by dedicated logic, and uses a bridged output stage. A highly simplified explanation is shown here as well, and I expect that it will be somewhat easier to understand. It's also worth looking at the Class-D article on the ESP website [ 3 ].
Figure 8 - Derivation Of PWM (Blue) From Input (Red) And Reference (Green)
Generating the PWM waveform is (at least in theory) delightfully simple. A sinewave is fed into one input of a comparator, and a linear triangle waveform into the other. When the signal voltage is greater than the reference, the output of the comparator is high, and vice versa. The comparator output will look like the blue trace in Figure 8. Being a simple on/off waveform, it's easy to amplify, and the original sinewave can be reconstructed using a relatively simple inductor/capacitor (LC) filter. Naturally, reality is different. Dedicated chipsets that are available to generate PWM signals will generally give far better results than discrete ICs, and will provide many of the other support functions as well. These include MOSFET gate drivers and cycle-by-cycle current limiting, both essential for an inverter expected to deliver significant current.
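The comparator process just described can be sketched numerically. The code below (a plain Python model, using an assumed 2.5kHz triangle rather than the 25kHz of a practical design) generates the PWM stream and then shows that averaging it over each triangle period - a crude stand-in for the LC filter - recovers the original sinewave:

```python
import math

F_SIG, F_CAR = 50.0, 2500.0   # assumed: 50Hz signal, 2.5kHz triangle reference
N = 100000                     # samples covering one full 50Hz cycle
dt = (1.0 / F_SIG) / N

def triangle(t, f):
    # unit triangle wave, swinging -1..+1 at frequency f
    x = (t * f) % 1.0
    return 4.0 * x - 1.0 if x < 0.5 else 3.0 - 4.0 * x

# comparator: output high (+1) while the sine exceeds the triangle reference
pwm = [1.0 if math.sin(2 * math.pi * F_SIG * i * dt) > triangle(i * dt, F_CAR) else -1.0
       for i in range(N)]

# averaging over each carrier period (a crude low-pass filter) recovers the sine
CYCLES = int(F_CAR / F_SIG)   # 50 carrier periods per signal cycle
per = N // CYCLES             # samples per carrier period
worst = 0.0
for k in range(CYCLES):
    avg = sum(pwm[k * per:(k + 1) * per]) / per
    mid_t = (k + 0.5) * per * dt
    worst = max(worst, abs(avg - math.sin(2 * math.pi * F_SIG * mid_t)))
# 'worst' stays small: the duty cycle faithfully tracks the input sinewave
```

The duty cycle of the comparator output is a linear function of the instantaneous signal voltage, which is why a simple low-pass filter is all that's needed to get the sinewave back.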
The essential functions are shown below, but without including a full schematic. Figure 9 is highly simplified, because a complete schematic is too complex to follow easily. The two oscillators are shown in the next section - one 50Hz sinewave oscillator and one 25kHz triangle wave oscillator. These are used to generate the PWM waveform. Note that in switchmode power supply language, a bridged output stage like that shown below is commonly referred to as an 'H' bridge, and is drawn so that the switching devices and transformer form the shape of the letter 'H'.
Figure 9 - Simplified PWM Sinewave Inverter
As shown above, it is preferable to use a bridged amplifier to drive the primary. This has the effect of doubling the supply voltage, so the maximum swing across the transformer is almost 8.5V RMS (24V peak-peak) rather than just under 4.25V that can be obtained from a single 12V supply. The current that each MOSFET stage must control is extremely high, and MOSFETs with extremely low RDSon (on resistance) are needed. At an output of just 1A peak into the load, each MOSFET will be switching a peak current of at least 30A DC.
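The voltage and current figures above follow directly from the bridge arrangement. A quick sketch (assuming the 1:30 step-up ratio implied by the 30A figure):

```python
import math

V_SUPPLY = 12.0
v_pp_bridged = 2 * V_SUPPLY                 # bridging doubles the available swing: 24V p-p
v_rms = v_pp_bridged / (2 * math.sqrt(2))   # ~8.49V RMS across the primary
TURNS = 30                                   # assumed 1:30 step-up to 230V
i_primary_peak = 1.0 * TURNS                 # 1A peak at the output -> 30A peak in the primary
print(round(v_rms, 2), i_primary_peak)       # 8.49 30.0
```

Note that the 30A figure is the minimum; magnetising current and losses add to it, which is why MOSFETs with the lowest possible RDSon are essential.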
The bridged PWM amplifiers are driven just like any other bridged amp, but with a PWM signal. Because the high frequency switching may play havoc with an attached transformer, it might be necessary to use output low pass filters so that the switching signal is isolated from the transformer. If the transformer is made to have very low leakage inductance, it will be possible to place the low pass filter at the output, but this means that the required inductance will be greater than that needed if the filter is in the low voltage circuit. The MOSFET driver sections are responsible for level shifting (high side) and for providing the required dead-time to ensure that the vertical MOSFET pairs (Q1, Q2 and Q3, Q4) are never on at the same time.
For any high power inverter, the transformer becomes a major part of the unit, in size, weight and cost. If the inverter uses a switchmode boost supply to obtain the peak voltage needed for the output, it can use a much smaller transformer because it will switch at 25kHz or more, rather than 50Hz. The output stage then works with the full peak voltage, either 325V or 170V DC, to suit 230V and 120V mains respectively. A basic diagram of this kind of inverter is shown below. By using a higher DC voltage (e.g. 400V for 230V output), it becomes possible to provide regulation that can be as good as you need it to be.
Figure 10 - DC-DC Converter, High Voltage PWM
This arrangement allows the DC-DC converter to be optimised, and the transformer can be a great deal smaller than would otherwise be the case. Although only two IGBTs are shown for the DC-DC converter, ideally it would use several high current devices in parallel so that the extremely high current can be handled with minimum losses. This arrangement may be used with inverters of any power, but it only becomes economical for an output of perhaps 250VA or more (typically allowing for a 500VA peak or 'surge' rating). At an output of just 500VA (or 500W), the average DC current will be around 47A allowing for losses.
The output stage will be an 'H' bridge so that the DC voltage is only half that otherwise needed for a full AC cycle. It may seem silly to use two separate stages, having a DC-DC converter followed by a PWM sinewave generator at the full mains voltage, but it has many advantages and if done properly will be more efficient than a single switching stage. This approach also makes regulation easier, but it requires very comprehensive protective circuits around the output switching devices (not shown in Figure 10).
Providing protection isn't especially difficult, but it needs to be fast enough to protect the switching devices under worst case conditions. Mains loads can be very hard on inverters, because there are so many that appear to be close to a short circuit when power is applied. Most switchmode power supplies, large transformers and motors are especially difficult, with motors being one of the hardest of all. Start current for typical motors is very high, and if the motor has to start under load (refrigeration compressors being one of the worst offenders) the problem is greater still. If the inverter can't supply enough current for the motor to start, either the inverter or the motor (or both) may be damaged.
Figure 11 - Photo Of 300W High Voltage PWM Inverter
The photo above shows the insides of a 300W inverter that follows the block diagram shown in Figure 10 pretty much exactly. The output section is driven by a PIC microcontroller and two IR2110 combined high-side and low-side MOSFET drivers, each driving a pair of IRF840 high voltage MOSFETs. The PIC is responsible for generating the sinewave, probably using a simple table to determine the pulse width needed for each transition. It's crystal controlled, so the frequency will be fairly accurate, but this wasn't tested. Distortion is very low - all harmonics are below -40dB, so total distortion is unlikely to exceed around 2% - this is an excellent result for an inverter.
The main inverter section uses a pair of IGBTs to handle the high current. The large yellow core marked PSI-300W is the inductor for the output filter, along with a 2uF 300V AC capacitor. The other core you can see is the switching transformer that converts the 12V input to approximately 350V DC, switching at ~40kHz.
There are many different ways to make oscillators that are suitable for generating sinewaves and triangle waves. In a highly integrated commercial design, they will probably both be digital, and preferably crystal locked so the frequency is accurate. For a UPS, the situation is complicated if you want the output of the generator to be in phase with the mains so the changeover is free of glitches. In the case of a stand-alone sinewave generator, we don't care, especially as the system can also operate as a frequency changer. Producing 60Hz mains in a 50Hz country (or vice versa) is a fairly common testing laboratory requirement for example.
The oscillator described in the first reference [ 1 ] and shown in Figure 12 is fairly straightforward, and has good frequency stability. Amplitude stability is determined by the saturation voltage of the first opamp, and may vary slightly with temperature. For a more comprehensive look at various sinewave generator techniques, see Sinewave Oscillators - Characteristics, Topologies and Examples. For an AC source, distortion below 1% is more than acceptable, and even a Class-D stage can benefit (slightly) by allowing it to clip the peaks. For most applications it doesn't matter at all if the generated mains waveform has up to 5% total distortion, and this eases the demands on the 50/60Hz oscillator. In particular, it means that accurate amplitude stabilisation techniques aren't needed, simplifying the design.
Figure 12 - Three Stage Phase-Shift Sinewave Oscillator
While the design is straightforward and has fairly low distortion, the amplitude will vary a little as the frequency is changed via VR1. The amplitude can be varied to some extent by changing the ratio of R3 and R4, but this also changes the frequency and is not useful. U1 operates as an amplifier with gain controlled by R3 and R4. As shown it has a gain of 10 (100k / 10k), and if the gain is reduced by much it won't oscillate. Higher gain makes oscillation a certainty, but at the expense of higher distortion. With a 12V supply, the output level is about 460mV RMS with a distortion of 0.8%. Frequency is 50Hz with VR1 set to 52k. Because the output sinewave is taken from the output of an opamp, it has low impedance. To obtain a higher level, U4 can be wired as an amplifier, or the output can be taken from U3 (930mV with 2% distortion).
This oscillator is usable for either linear or Class-D inverters. There's obviously not much point making a sinewave oscillator for a modified squarewave inverter. A good sinewave can also be created using digital synthesis, and that has the advantage that it can be crystal controlled. While absolute frequency stability is usually not very important for an inverter, it doesn't hurt anything and if it comes (virtually) free then what's not to like? A PIC can be used to generate the sinewave, and also monitor circuit performance, temperature, etc.
Figure 13 - Schmitt Trigger + Integrator Triangle Generator
The triangle wave generator can also be done many different ways, but as shown above is fairly simple and has good linearity. U1 is wired as a Schmitt trigger, having positive feedback applied to its non-inverting input. U2 is an integrator. The output from U2 increases until the non-inverting input of U1 is forced higher than the reference voltage (Vref) at the inverting input. It rapidly switches its output high, causing the output of U2 to fall linearly until the non-inverting input of U1 is forced lower than Vref. The cycle repeats indefinitely. With the values shown and a 12V supply, the output amplitude is 4V peak-to-peak at a frequency of 25.8kHz. VR1 allows you to set the level to match that from the sinewave generator for the optimum modulation level. C2 is used at the 'bottom' end of VR1 so that the 6V reference voltage is retained, and doesn't vary with the pot setting. R6 ensures that the triangle wave and DC reference level cannot be lost, even if the pot becomes open-circuit.
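The behaviour of the Schmitt trigger/integrator loop can be modelled numerically. The article gives no component values for Figure 13, so the figures below (6V effective swing, ±2V trip points, 10k/1nF integrator) are purely illustrative; the point is that the simulated frequency matches the simple ramp-time prediction:

```python
# Hypothetical values - the article doesn't give component values for Figure 13
VSW = 6.0        # Schmitt output swing either side of Vref, volts
HYST = 2.0       # Schmitt trip points at Vref +/- 2V
R, C = 10e3, 1e-9

slope = VSW / (R * C)                   # integrator ramp rate, V/s
f_pred = VSW / (4.0 * HYST * R * C)     # one full period ramps through 4*HYST volts

# crude Euler simulation of the integrator/Schmitt loop (voltages relative to Vref)
dt = 1.0 / (f_pred * 20000.0)
v, direction, t, flips = 0.0, 1.0, 0.0, []
while len(flips) < 6:
    v += direction * slope * dt
    if direction > 0 and v >= HYST:     # Schmitt trips high: integrator ramps down
        direction = -1.0
        flips.append(t)
    elif direction < 0 and v <= -HYST:  # Schmitt trips low: integrator ramps up
        direction = 1.0
        flips.append(t)
    t += dt
f_meas = 1.0 / (2.0 * (flips[-1] - flips[-2]))   # two flips per period
```

With real component values the same relationship holds: frequency is set by the integrator's R and C and by the Schmitt hysteresis, which is why a single trimpot in either position is enough to tune the carrier frequency.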
Figure 14 - Comparator To Create PWM Waveform
By combining the circuits of Figure 12 and Figure 13 and adding a comparator, we get a complete pulse width modulator - and yes, it really is that simple. For a better idea of the exact waveforms involved, refer to Figure 8. The output is PWM, and is ready to send to the switching MOSFETs via a suitable level shifter and gate driver IC. These are readily available, with the International Rectifier IR2110 being one of the most common. This part is specifically designed to drive the gates of MOSFETs for Class-D amplifiers.
Figure 15 (Left) - PWM Waveform, 2.5kHz with 50Hz Modulation
Figure 16 (Right) - Recovered 50Hz Signal With Spectrum
Figure 15 shows the output of a pulse width modulator along the lines of that shown in Figure 14. The main difference is that I used an opamp (which works, but isn't really fast enough), and I had to reduce the triangle waveform frequency to 2.5kHz so the waveform could be seen properly on the oscilloscope.
The recovered waveform is shown in Figure 16, along with the frequency spectrum in the lower violet trace. The 50Hz waveform is the spike at the extreme left, and the 2.5kHz residual (with its sidebands) is seen in the centre of the frequency domain measurement. The filter used was just a simple resistor-capacitor low-pass type with a -3dB frequency of 159Hz (10k resistor and 100nF capacitor), so there's more of the 2.5kHz signal than you'd normally see. If the modulation carrier frequency is increased to 25kHz, the 50Hz waveform is very clean indeed - even with such a crude filter and slow opamp.
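The filter figures quoted can be confirmed with the standard first-order RC response:

```python
import math

R, C = 10e3, 100e-9                   # 10k resistor, 100nF capacitor
fc = 1.0 / (2.0 * math.pi * R * C)    # -3dB point, ~159Hz

def gain_db(f):
    # magnitude response of a first-order RC low-pass filter
    return 20.0 * math.log10(1.0 / math.sqrt(1.0 + (f / fc) ** 2))

print(round(fc), round(gain_db(2500), 1))   # 159, -23.9
```

A single-pole filter only attenuates the 2.5kHz carrier by about 24dB, which is why residual carrier is clearly visible in Figure 16; moving the carrier to 25kHz buys another 20dB from the same filter.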
Many inverters offer 'regulation', but it's often not proper regulation that maintains both peak and RMS at the designated output voltage. For modified squarewave inverters, the regulation circuit will attempt to maintain the RMS voltage as the peak sags under load and/or as the battery discharges. This is done by making the 'on' periods longer, and the output voltage starts to resemble that from a squarewave inverter as the load is increased.
True sinewave inverters using PWM will use a variety of techniques, but the easiest is simply to allow the output waveform to clip. The alternative is to ensure that the PWM amplifier has some headroom, and to apply a comprehensive feedback circuit to ensure that the AC output remains within preset limits.
With all inverters, it is essential to realise that the current on the input side will be very high. That means that everything in the chain can affect the regulation, from the battery, supply leads, switching devices and transformer primary windings. Even a rather paltry 100W inverter will draw 8.33A DC at 12V, but the instantaneous current is higher and losses haven't been considered. The actual (average) current will be closer to 10A, and peak current will be almost 20A. Even a small resistance causes a serious voltage drop - for example just 0.1 ohm will cause a loss of 2V at 20A, so 12V is now only 10V.
It is quite obvious that if 12V is reduced to 10V at the peak current, then the output voltage must fall at least in proportion, and there may be a bit more loss due to internal resistances. The required peak of 325V will fall to only 270V and the RMS value will be down to about 190V. The only way that proper output regulation can be achieved is with feedback. A high voltage PWM inverter is likely to be the only one that can offer both acceptable regulation (better than 5% from no load to full load) while maintaining the correct peak to RMS ratio - see below.
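The sag arithmetic from the last two paragraphs can be collected into one short sketch:

```python
import math

P_OUT, V_BATT = 100.0, 12.0
i_dc_ideal = P_OUT / V_BATT        # 8.33A average before losses are considered
v_drop = 0.1 * 20.0                # 0.1 ohm at 20A peak loses 2V
v_sag = V_BATT - v_drop            # only 10V left at the current peaks
peak = 325.0 * v_sag / V_BATT      # required 325V peak falls to ~271V
rms = peak / math.sqrt(2.0)        # ~191V RMS - well below the nominal 230V
```

The proportionality is the important part: every volt lost on the 12V side costs roughly 27V of output peak, so feedback around the whole chain is the only way to hold the output steady.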
The transformer used for a low frequency inverter is invariably a step-up type. The primary must have very low resistance because of the high current involved, and in all cases the transformer has to be designed for the mains frequency in use. This means that it will be comparatively large - at least the same size as a normal step-down transformer intended for the same VA rating.
Depending on the intended usage (intermittent or permanently connected, for example) the allowable losses will be different. A transformer that will only be used for occasional UPS duties may be smaller than the ideal case, and it will therefore be cheaper, smaller and lighter. Of course, it will also have higher losses. The primary inductance is of little real consequence, but it must be high enough to keep the magnetising current at 50 or 60Hz low enough that losses are within sensible limits. Calculating the inductance of a mains transformer is not an exact science. Much of the magnetising current will be due to partial saturation, so the calculated value will be lower than expected.
As an example, a fairly basic (i.e. nothing special) 30:1 ratio (230 to 7.67V RMS) mains voltage transformer may draw 50mA from a 230V 50Hz mains supply with no load. This is the magnetising current, and the effective inductance is therefore calculated using the normal inductive reactance formula ...
XL = V / I
XL = 230 / 0.05 = 4.6k ohms
L = XL / ( 2π × f )
L = 4.6k / ( 6.283 × 50 ) = 14.64H
It follows that with a turns ratio of 30:1 (7.67V RMS output) the effective secondary inductance will be about 16.3mH. When used in reverse for an inverter, the best case magnetising current will be 1.5A, but it will usually be more and will vary widely depending on the transformer's construction. As always with transformer design, it's really only the core saturation limit that needs to be addressed, and this depends on the core material, the type of core (E-I, toroidal, etc.) and the maximum allowable dissipation at idle. Contrary to popular belief, the core flux of any transformer is at a maximum when there is no load. The flux always reduces as the load current is increased [ 5 ].
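The reactance and reflection arithmetic above can be bundled into one short sketch:

```python
import math

V_PRI, I_MAG, F = 230.0, 0.05, 50.0    # HV winding, no-load magnetising current, mains frequency
xl = V_PRI / I_MAG                      # 4.6k ohms inductive reactance
L_hv = xl / (2.0 * math.pi * F)         # ~14.64H seen at the 230V winding
TURNS = 30.0                            # 30:1 ratio
L_lv = L_hv / TURNS ** 2                # inductance scales as turns squared: ~16.3mH
i_mag_lv = I_MAG * TURNS                # ~1.5A magnetising current when driven in reverse
```

The turns-squared scaling explains why a perfectly ordinary mains transformer becomes so demanding in reverse: the same core draws 30 times the magnetising current from the low-voltage side.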
For a step-up transformer, it is essential that the low voltage primary has enough turns to prevent core saturation. It's a much bigger problem with step-up transformers because the primary resistance is very low, and even slight saturation will cause a dramatic increase in the current drawn from the battery. Unlike a conventional mains transformer, the primary resistance is too low to provide any current limiting. You will have noted that I suggested a secondary voltage of only 7.67V (10.8V peak), and this is necessary because the transformer will be used in reverse, and there is only a 12V supply available. Expecting at least 1.2V loss is realistic for a small inverter, although it may be greater.
As always, transformer design is a compromise, and to get the lowest resistance means few turns of thick wire. However, if the wire is so thick that you can't get enough turns, the core will saturate and no-load losses become excessive. The designer's task is to work out the thickest wire possible for the turns needed, and to choose a core that's big enough to avoid saturation, but not so big that it becomes too heavy and expensive.
Perhaps surprisingly, even if the amplifier is PWM at high frequency, the transformer can't be a small ferrite core type. The low frequency content (i.e. the mains frequency) is the dominant factor, and the transformer has to be able to handle that, not the switching frequency. This limitation applies even if there is no low pass filter between the amplifier(s) and the transformer's low voltage primary.
Naturally, this is not the case where the PWM is done at high voltage and the PWM stage supplies the AC output directly. In HV PWM inverters, the high voltage is generated by a high frequency switchmode supply, and that can use a much smaller transformer core because it operates at 25kHz or more. Most of these inverters are fan cooled, even when they are only fairly low output power types (100-200W or so).
It's not at all uncommon for commercially available (low voltage, step-up transformer) inverters to have a transformer that is clearly too small. In order to get the number of turns needed to avoid saturation, the transformer must use wire that is thinner than that required to remain cool under load. This is usually addressed by fan cooling the transformer. Although this certainly works and prevents the transformer from melting down, it doesn't prevent the losses that cause the transformer to get hot in the first place. The result is decreased efficiency.
As should now be obvious, an inverter is not trivial. Many of the cheap ones that are available are only low power, and if they claim to be more than around 100VA then you can be assured that they won't be the size of a drink can. Remember that the transformer alone will be rated for the full load current, so even a small inverter (100VA, or 230V at 430mA) needs a transformer rated for at least 100VA. Most will make claims of up to double the rated output for 'surge' or 'peak' output, but this will almost invariably mean that the transformer is overloaded during this period. A common method to allow a smaller than ideal transformer is to fan cool it, and this is quite common for cheap inverters.
Frequency accuracy and stability are rarely quoted. Although relatively unimportant for most applications (5% accuracy will usually be quite sufficient) there are a few cases where both stability and frequency are extremely important. Don't imagine that any budget inverter is stable enough to drive synchronous clock or timer motors for example. An error that's insignificant for most applications is extremely significant for clocks and mechanical timers that use the mains as a reference frequency.
In case anyone was wondering, there is no project for a sinewave inverter and there's not about to be. On-line auction sites will have many listings for inverters, some will be modified squarewave (but claim 'modified sinewave'), and others shown as true sinewave. This may or may not be the truth. Either way, at the prices they sell for, it's not worth trying to build one. In general, I'd suggest that you halve the claimed rating, as I suspect that very few are capable of their advertised power ratings, but even after doing that, they are still cheap.
Because of the very high currents involved, the switching devices must be extremely rugged, and good protection is needed to ensure that momentary overloads don't cause failure. It is also necessary to include battery protection, so that if the voltage falls below a pre-determined minimum, the inverter turns off. If this isn't included, the battery will be ruined because all current chemistries are damaged if they are discharged too far. As a guide, you can assume about 10A for each 100W of output with a 12V input. This assumes an overall efficiency of around 83%, which will cover most budget inverters and quite a few up-market types as well.
For those so inclined, it can be amusing to look through some of the advertisements for inverters. I've seen (claimed) 2,500W (5,000W peak) inverters, where it's stated that the unit has a 40A fuse. With a 12V supply, the inverter can be expected to draw up to 500A (peak) and around 250A at full rated continuous power (at 12V input and allowing for losses) *. I wonder what the 40A DC fuse is for. Perhaps they are telling naughty fibs.
* 40A at 12V is 480W input power, and does not allow for losses. Actual output would be around 460W assuming 'typical' losses in the circuitry. At 13.8V (battery under charge) 40A is 552W input power, nowhere near 2,500W.
Elliott Sound Products - Designing With JFETs
JFETs (junction field-effect transistors) are beloved by many, but unfortunately the range has shrunk dramatically in the past few years. This has made it very difficult to build some of the more esoteric designs from readily available types, but Linear Systems produces a range that's ideal for many designs. I mention this because they kindly sent me some samples (full disclosure here) of two different types. One of these is the LSK170B (equivalent to the revered 2SK170, but with graded maximum drain current). Having received these, I decided that it was a worthwhile exercise to look at the basic design processes for JFET stages in general.
JFETs provided by Linear Systems notwithstanding, most of the designs shown use a rather pedestrian 2N5484. I used this because it's one of the few low-cost JFETs that you can still get from (some) major suppliers, and it has basic specifications that make it ideal for general-purpose low current applications. It doesn't excel at anything in particular (although it does have fairly low noise of around 4nV/√Hz), but it also has few 'bad habits'. This is important when experimenting, as it makes it more likely that you'll have a successful outcome.
I have avoided the more complex designs, simply because they are complex, and because you need to go to considerable trouble to match the JFETs closely enough to get a working circuit. While JFETs have many desirable features, they also come with many challenges. One of the advantages is that because the gate is a reverse-biased diode, there is far less likelihood that any stray radio-frequency signals will be detected and amplified, as can happen easily with BJTs (bipolar junction transistors) and many opamps (operational amplifiers). The challenges are covered below, and they are not insignificant.
As noted within this article, I have very few designs that use JFETs. This is not because I dislike them (quite the opposite), but because the range from most of the larger suppliers has been reduced to a few devices intended for switching, rather than linear operation. They do work as amplifiers, but some have so much input capacitance that they are unusable with high-impedance signal sources. The few remaining devices from the major suppliers are often only available in a surface-mount device (SMD) package, making it next to impossible to use traditional prototyping systems such as a breadboard or Veroboard. While you can use a small adapter board (available from a few suppliers), this is still a nuisance, as each device you wish to test or experiment with needs its own adapter.
+ +The really low noise devices such as the 2SK170 are gone ... other than on eBay, where you might get a JFET of one type or another, but it's unlikely to be genuine. Linear Systems makes the LSK170, which is pretty much a direct equivalent, but they aren't available from most major distributors. 'General Purpose' JFETs such as the once-ubiquitous 2N5459 might show up in a search, but be designated 'non-stocked' or similar, with orders accepted only for large quantities with a significant lead-time.
+ +The information in this page is intended to show both the advantages and disadvantages of simple JFET stages. The process is complicated by the wide parameter spread that is unique to JFETs. Other 'linear' amplifier devices are far more predictable, including valves (vacuum tubes). However, this doesn't include MOSFETs, as they are not intended for linear applications. Inherent non-linearity is a 'feature' of all amplifying devices, and it's generally dealt with by using a combination of good engineering practice and negative feedback. The latter is not a panacea though, and if performance is lacking before feedback is applied the results are usually uninspiring.
+ +With suitable device selection, one of the biggest advantages you gain with JFETs is noise. The 2SK170/ LSK170 devices are particularly good in this respect. We tend to think that JFETs are optimised for high impedances, but even with low impedances (as low as 100Ω or so), JFETs can beat bipolar transistors. Noise is (usually) minimised by operating a JFET with zero gate voltage (and therefore maximum drain current), but this is not always feasible.
+ +I've shown many circuit variations below, but not all are useful. The idea is that you can experiment to find circuit topologies that do what you need, and push the boundaries to see what can be achieved. All of the circuits shown will work (every variation has been simulated as 'proof of concept'), but functionality depends on the individual characteristics of the JFET you use. If you want to try some of the more 'interesting' variations, you'll need to have a range of trimpots to hand, as fixed resistors are too limiting.
+ +These circuits aren't projects, but rather a collection of ideas that can be incorporated into other designs if required. No simple circuit will ever beat an opamp for overall performance, and the gain with these simple circuits isn't easily set by a couple of resistors. However, not every circuit has to be 'perfect', and getting the gain you need to within a fraction of a dB is not always essential. The exception is with a stereo system, where a gain difference between channels will shift the stereo image.
JFETs have some unique features, but unfortunately, one of those is a very large parameter spread. Often, a circuit that's designed based on 'typical' parameters for a given device will simply refuse to work properly, especially if the supply voltage is fairly low (such as a 9V battery). As a result, you either have to hand-pick the device(s) that meet your criteria from a larger batch, or it's necessary to include a trimpot to adjust the operating conditions.

While a trimpot certainly works, it usually also means that the gain is different between two (supposedly) identical circuits. This is one of the many reasons that I rarely specify JFETs in projects. The other major reason is that the range has shrunk so much that there are few alternatives. Most of the 'linear' JFETs have disappeared from the inventory of suppliers worldwide, leaving a few devices that may be designed for switching (e.g. as mute circuits in amplifiers or preamps). Another common area is RF, although this doesn't preclude a JFET from being used for audio.

As a result, I will continue to avoid JFETs except where there is no other choice. This is a shame, because they are really quite nice devices for the most part, but the parameter spread will always be a challenge. If you have plenty of voltage to spare (typically around +24V DC) this isn't a major issue, but with low supply voltages they are always tricky. There are a few designers who love JFETs, and consider them to be 'better' in all respects than BJTs (bipolar junction transistors). However, you need to consider that the chance of picking any difference whatsoever in a proper double-blind test is likely to be zero!

There are three basic topologies - common source (a 'normal' amplifier), common drain (source follower) and common gate. The common gate arrangement is generally only used for radio-frequency circuits, and won't be covered in this article. Nor will I be covering some of the more 'esoteric' configurations that seem to be loved by some designers. This isn't because they don't work, but because they can become fairly complex, without ever managing to approach the performance of a $5.00 opamp. They are interesting, but getting them to work as well as possible isn't easy. This is mainly due to the lack of availability of JFETs suitable for audio, and the wide parameter spread, which makes design harder with more complex circuits.

This article should be read in conjunction with FETs (& MOSFETs) - Applications, Advantages and Disadvantages. There is some commonality between the two, but this article concentrates more on specific parameters, what they mean, and how to design with them. Reading both will increase your understanding of the design issues faced due to 'parameter spread', and the 'Applications' article covers more options, but with less detail.

JFETs are roughly equivalent to a triode valve (vacuum tube), although in some cases it may be claimed that they are equivalent to a pentode. This only appears to be the case, due to the conduction curves of JFETs resembling those of pentodes. However, in terms of stage gain they fall into the triode region - pentodes usually have a gain of over 100 in 'typical' circuits, but a single JFET stage can't even get close to that. The available gain can usually be directly compared to common triodes such as the 12AT7, 12AU7 and 12AX7. Unfortunately for the JFET, its parameters are far less predictable than they are for valves, making the design process more complicated. Small-signal MOSFETs are a lot closer to pentodes, having much higher gain in a typical circuit. However, most common MOSFETs are enhancement-mode, and require a different biasing scheme. They are also comparatively noisy, and IMO they are not suitable for low-level audio applications.

With a valve stage expected to handle a relatively low-level signal (e.g. around 100mV), it's hard to make it not amplify. Because of the high voltages used, even a fairly badly designed valve stage can work perfectly well in a given application, as the output voltage swing will be a small percentage of the supply voltage. When using JFETs with typical supply voltages from 12V to 24V or so, a biasing error will cause considerable distortion because the output voltage swing is limited by the low voltage available. The lower the supply voltage, the more accurate the biasing needs to be.

One thing you won't find in this article is pages of formulae, transfer (and other) characteristic graphs, equivalent circuits and a few more pages of formulae. These may be 'interesting', but they only apply to the specific JFET that was tested to obtain the graphs or values in formulae. The next JFET you remove from the bag (or wherever you keep them) will be completely different, and it's only by chance or (often tedious) testing that you'll find two the same.

The most important parameters are the gate-source cutoff voltage, and the maximum current with zero gate voltage (referred to the source). These are designated VGS (off) and IDSS respectively, and they determine the usable bias points. As most JFET circuits use 'self-biasing' (in the same way as valves using cathode bias), the bias is achieved by using a resistor in the source circuit. The voltage developed across this resistor gives the gate a negative voltage in the same way that a valve's grid is made negative with cathode bias. If the source is more positive than the gate, then the gate has a negative voltage referred to the source. Table 1 shows the cutoff voltage (the negative gate voltage for the specified drain leakage current).

JFETs are depletion-mode. This means that a negative (assuming N-channel) gate voltage is needed to turn the JFET off. With no gate voltage (VGS = 0), the JFET will be turned on. In contrast, most (but not all) MOSFETs are enhancement-mode, so without any gate voltage they remain off. There are depletion-mode MOSFETs, but they are nowhere near as common as enhancement-mode devices. A major supplier I looked at shows 109 depletion-mode MOSFETs (of all types) vs. 9,919 enhancement-mode types. There are no enhancement-mode JFETs.
Table 1 - Gate-Source Cutoff Voltage

Symbol | Parameter | Test Condition | Type | Min. | Typ. | Max. | Units
-------|-----------|----------------|------|------|------|------|------
VGS (off) | Gate-Source Cutoff Voltage | VDS = 15.0V, ID = 10nA | J111 | -3.0 | | -10.0 | V
 | | | J112 | -1.0 | | -5.0 | V
 | | | J113 | -0.5 | | -3.0 | V
 | | VDS = 5.0V, ID = 1.0µA | 2N5457 | -0.5 | | -6.0 | V
 | | | 2N5458 | -1.0 | | -7.0 | V
 | | | 2N5459 | -2.0 | | -8.0 | V
 | | VDS = 15.0V, ID = 10nA | 2N5484 | -0.3 | | -3.0 | V
 | | | 2N5485 | -0.5 | | -4.0 | V
 | | | 2N5486 | -2.0 | | -6.0 | V
 | | VDS = 20.0V, ID = 100pA | J201 | -0.3 | | -1.5 | V
 | | | J202 | -0.8 | | -4.0 | V
 | | VDS = 15.0V, ID = 2nA | MPF102 | | | -8.0 | V
 | | VDS = 10.0V, ID = 100nA (Note 1) | 2SK209 | -0.2 | | -1.5 | V
 | | VDS = 10.0V, ID = 1nA (Note 2) | LSK170 | -0.2 | | -2.0 | V

Note 1: The 2SK209 is available in an SMD package only (TO-236/ SOT-346, 2.9×1.5mm). It's included as an example, but could be useful in audio circuits.
Note 2: VGS (off) is the same for all variants of the LSK170.
I included some JFETs that used to be common (the 2N545x series), readily available types (J11x series), the J201/202, MPF102, 2SK209 and the LSK170. Each series specifies a different drain-source voltage and minimum current. As you can see from the table, VGS(off) varies over a wide range (from 3.3:1 up to 6:1 ratio is typical, but the 2N5457 has a ratio of 12:1). This is greater than any other small-signal amplifying device, and herein lies one of the biggest issues. There are ways around it, but they can add considerable complexity. It's worth noting that the J201/202 data varies from one vendor to another (I have two datasheets for these, and they are quite different), so not only must you look at the datasheet, but you need to ensure it's from the actual manufacturer of the JFETs you have. Note that VGS (off) is also known as the 'pinch-off' voltage (VP), where drain current is reduced to some very low value (typically < 1µA). This is indicated as VP where used.
Table 2 - Zero Gate Voltage Drain Current

Symbol | Parameter | Test Condition | Type | Min. | Typ. | Max. | Units
-------|-----------|----------------|------|------|------|------|------
IDSS | Zero Gate Voltage Drain Current | VDS = 15.0V, VGS = 0 | J111 | 20 | | | mA
 | | | J112 | 5.0 | | | mA
 | | | J113 | 2.0 | | | mA
 | | VDS = 15.0V, VGS = 0 | 2N5457 | 20 | | | mA
 | | | 2N5458 | 5.0 | | | mA
 | | | 2N5459 | 2.0 | | | mA
 | | VDS = 15.0V, VGS = 0 | 2N5484 | 1.0 | | 5.0 | mA
 | | | 2N5485 | 4.0 | | 10 | mA
 | | | 2N5486 | 8.0 | | 20 | mA
 | | VDS = 25.0V, VGS = 0 | J201 | 0.2 | | 1.0 | mA
 | | | J202 | 0.9 | | 4.5 | mA
 | | VDS = 15.0V, VGS = 0 | MPF102 | 2.0 | | 20 | mA
 | | VDS = 10.0V, VGS = 0 | 2SK209 | 1.2 | | 14 | mA
 | | VDS = 10.0V, VGS = 0 | LSK170A | 2.6 | | 6.5 | mA
 | | | LSK170B | 6.0 | | 12 | mA
 | | | LSK170C | 10 | | 20 | mA
 | | | LSK170D | 18 | | 30 | mA
Depending on the datasheet (and the expected use of the JFET), the transconductance may or may not be specified. Along with the other parameters, transconductance (measured in mhos [the 'mho' is 'ohm' spelled backwards], mA/V [uncommon] or siemens, and sometimes indicated with ℧) is also variable, with a more-or-less typical range from 1mS to 10mS. For any given JFET type, expect a range of roughly 2:1 from the highest to the lowest. mS is roughly equivalent to 'mA/V' for valves, but it doesn't tell the whole story and is (for the most part) pretty much irrelevant. One of the reasons for this is that it's so hard to actually design a stage using a JFET, because all of the parameters are so variable. You can apply all the theory you like, examine the graphs in the datasheet until you're bored or bewildered, design the stage based on the theory you just applied, and find it doesn't work. Not because of anything you did, but simply because the wide variation of VGS(off) (in particular) makes most calculations pointless.
You won't see this mentioned elsewhere, and many sites will show all the theory needed to get a working design. Some will use 'load line' graphs to show the optimum bias point, and others will describe a number of formulae (often a vast number). With few exceptions, these are only useful if the FET you have is identical to the one used to make the graphs (or describe the parameters) shown. The author may (or may not) point out somewhere that you'll need to change one component (usually a resistor) or another, but most don't seem to have noticed that this makes the whole 'design' process redundant. There are relatively few things that need to be considered, and after that you have to either select the JFET to suit the design, or change the design to suit the JFET. There are obviously a few things that make a difference in otherwise identical circuits, with transconductance being but one.

For example, if a given JFET has a transconductance of 3mS (3 milli-siemens, or 3mA/V), you'd expect it to vary the output current by 3mA for each volt of input signal. This is rarely possible, since the drain current change will almost always be a small fraction of 1mA. I ran a simulation using a (servo assisted) 2N5484, 5 & 6 in an identical circuit. The servo ensured that the drain voltage was ½ the supply voltage. Applying a signal of 10mV, I measured the drain current change (ΔI) to arrive at the transconductance figure. I obtained transconductance figures of 4.18mS (2N5484), 3.52mS (2N5485) and 3.98mS (2N5486). The measured gain was (in the same order) 23.5dB, 21.76dB and 20.20dB. This correlates with the measured transconductance, but it varies from one device to the next, even of the same type! Transconductance also changes with drain current. The variation can be as much as 2:1 under identical conditions, thus making detailed analysis somewhere between useless and pointless.
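The two conversions used in that comparison are trivial, but worth having to hand. This is a minimal sketch (Python; the helper names are mine, not from any standard library or the article):

```python
import math

def transconductance_mS(delta_id_mA, delta_vgs_mV):
    """gm = change in drain current / change in gate voltage, in mS (mA/V)."""
    return delta_id_mA / (delta_vgs_mV / 1000.0)

def ratio_to_db(voltage_gain):
    """Convert a voltage-gain ratio to decibels."""
    return 20.0 * math.log10(voltage_gain)

# A 10mV gate signal producing a 41.8uA drain current change is 4.18mS
print(round(transconductance_mS(0.0418, 10.0), 2))   # 4.18
# A voltage gain of x15 is close to the 23.5dB measured for the 2N5484
print(round(ratio_to_db(15.0), 1))                   # 23.5
```

Nothing more than Ohm's-law-level arithmetic, but it saves slips when comparing simulated and measured figures.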
I'm all for showing readers how to design an amplifying stage, but when there's so much variability between devices it becomes a moot point. The only way you will ever know how a circuit will perform is to build it. A simulation is not helpful, because the simulator models will have a set of parameters that are 'typical', except they usually aren't typical at all. You'll notice that in all cases shown in the tables, only a minimum and maximum is specified - there's nothing that falls into the 'typical' column because they are all different.

The parameters shown in the tables are still important though. If you have a 9V supply, there's no point selecting a JFET that needs a -10V gate voltage to turn off. Likewise, if you have a design drain current of 1mA, selecting a JFET that can deliver up to 20mA with zero gate voltage would probably be unwise. It can still be made to work, but may need such a high negative gate voltage that it's impractical. This is where datasheets are helpful, but only if you understand the implications of each parameter.

It's not included in the tables, but you must ensure that the maximum rated voltages (VGS and VGD) are not exceeded. These will usually be somewhere between 25V and 50V (they are always shown in the datasheet), and exceeding them can destroy the JFET. The breakdown is between the drain or source to the gate, which normally forms a reverse-biased diode. Like any diode, if the voltage is too high it will cause a relatively large current to flow when the junction cannot withstand the voltage. Power dissipation will be high, and the JFET will probably fail.

Most JFETs are symmetrical, so drain and source can be exchanged with little or no change in performance. Another thing that won't be covered here (but can be useful) is that the 'on-resistance' (rDS (on)) can be made lower than the datasheet value by making the gate positive with respect to the source. There's a limit though, because if the gate-source or gate-drain voltage exceeds +0.65V the gate diode will conduct. Common practice is to keep the maximum positive voltage to around 300mV or so. Likewise, never exceed the 'absolute maximum' values shown in the datasheet. These include reverse gate-source voltage, maximum gate current, the rated maximum power dissipation and temperature limits.
As a side-note, JFETs can be used as low-leakage diodes in critical applications. However, a BJT 'diode' (base to collector) is usually better than a JFET. Choose your JFET/ BJT wisely though, as some are better than others. A BC549 at 12V will have a leakage of around 17pA (706GΩ!), vs. 5nA for a 2N2222 (simulated but not tested). Most 'typical' JFETs will be around 25pA at 12V (only 480GΩ). These data are usually not shown in datasheets.
The tables also don't show the intrinsic gate-source capacitance, CISS. This is shown in some datasheets, ignored in others, while in a few cases it will be specified in a way that may not be particularly useful. It would be nice if all datasheets had the same info in the same units, but I fear that's asking too much. Expect the gate-source capacitance to be between 5pF and 10pF, although some may be higher or lower than this. JFETs designed for switching will often have much higher CISS than other JFETs, so be very wary of using them in high-frequency applications. Note that CISS is not a fixed value for any JFET, and it's non-linear. The value changes with drain current and bias voltage.
All datasheets specify the maximum allowable drain current and gate current (with the gate forward-biased). Power dissipation is also shown, and for TO92 devices it's generally no more than 500mW, with SMD parts generally having a reduced maximum power for the same device type. It's uncommon for either of these figures to be exceeded, as most (but certainly not all) JFET circuits are low current. The gate is a reverse-biased PN junction in normal operation, and there's a limit to the maximum voltage between the gate and the source and/ or drain. This determines the maximum operating voltage.

The above doesn't show all possibilities, but it does cover those discussed here (plus the J30x types). I have no idea why manufacturers failed to standardise the pinouts - they managed to do it with popular valves (vacuum tubes) and many other devices, but for some reason when you give someone three pins to play with, they will use every combination possible. The three indicated with part numbers appear to be the most common. While most JFETs are symmetrical (so drain and source can be swapped with no change in performance), it's always better to use the 'proper' orientation to minimise confusion later on.

Before using any JFET, make sure that you have a copy of the datasheet, and acquaint yourself with the terminology (and pinouts) used. Not all manufacturers use the same terms for the various parameters, some include data that is not shown in other datasheets (e.g. rDS (on) is shown for the J11x series, but not most others), and noise may be stated as 'noise figure' in dB or provided in nV/√Hz. Just because a JFET is designated as a 'switching' type, this doesn't mean it won't work as an amplifier and vice versa. However, be aware that an amplifier JFET usually won't be as effective for switching as a 'true' switching device (where the rDS (on) will be specified). Likewise, a switching JFET may perform poorly as an amplifier, particularly due to a higher CISS, which will limit the high frequency response with high impedance signal sources.

Before you can start working with JFET circuits, you need the values for VGS(off) and IDSS. That's why I included these two tables, because these two parameters are the most critical for any circuit design. They are also the values that require matching, so a simple method for measuring any JFETs that you have (or buy) is fairly important. The test circuit shown below relies on a simple measurement technique. The 1MΩ resistor will cause a small current flow for VGS(off) tests (the meter will show a positive voltage, but it is a negative value). It will be different from the datasheet value, but the 'error' will be tiny and can be ignored. Your multimeter must be able to measure down to millivolts, as the voltage across R1 (1Ω) will show 1mV/mA. If your meter can't measure below 1mV, you will need to increase the value of R1. If you make it 10Ω, the voltage reading is divided by ten to get the current. For example, if you measure 0.012V (12mV), the current is 1.2mA.

'DUT' means 'device under test'. This test is easily performed, needing only an external power supply. Ideally it will have a current limiter so that a shorted device doesn't cause smoke, but if not you can use a 'safety' resistor in series with the supply. You can use up to 100Ω for the safety resistor, and while its inclusion will change your readings, all devices tested will have the same 'error', so the results will balance out. P-channel JFETs can also be tested, simply by reversing the supply polarity.

When the pushbutton is open, the reading shown on the meter is VGS (off), that voltage where the JFET does not conduct more than the few microamps passed by R2. With the pushbutton pressed, you'll measure IDSS, the maximum current with zero gate voltage. Many JFETs are fully symmetrical, so drain and source can be swapped and you'll get the same readings.

By measuring JFETs before use, you know what voltage range you need for the desired drain current. Because the parameter spread is so large, you need to design each JFET amplifier based on the measurements you take. No other amplifying device requires this step. To complete this example, we'll test a 2N5484 (I'll use the simulator, but I have run tests on 'real' JFETs too).
VGS (off) = -1.255V
IDSS = 3.37mA
The value obtained for VGS (off) is within the range given in Table 1 (-0.3 to -3.0V), and IDSS is also within the range (1 - 5mA). Another (seemingly identical) device will almost certainly be quite different from the one you just tested. This is quite normal with JFETs, hence the need for an easy way to test them.
Note: JFETs are normally operated in the 'saturation' region, which is to say that the device will draw the maximum current possible for a given (negative) gate voltage. This doesn't mean that changing the drain voltage won't affect the current, because it will. The amount of change depends on the JFET itself, and (like all parameters) it will vary from one device to the next - even of the same type. The maximum current is defined by IDSS, the current drawn with zero gate-source voltage. That's why Table 2 shows the manufacturer's test voltage, which varies from one device type to the next. It should be apparent that expecting to operate a JFET with an IDSS of 1mA at a drain current of more than 1mA won't work - ideally the quiescent drain current will be somewhere between 50% and 85% of the rated (or measured) IDSS for minimum distortion. This isn't always possible.
Now we can look at a design for the JFET just tested. We need to choose a drain current, and fairly obviously it must be less than 3.37mA because that's the maximum possible drain current, obtained with zero volts between the gate and source. While the current can be increased, that requires that the gate is driven positive with respect to the source, and this should be avoided in a linear circuit. A reasonable current would be a bit under half the maximum, so we'll settle for 1.5mA as an initial test. Since that's pretty close to half the maximum, we can try a negative bias voltage that's close to half the voltage measured for VGS (off), which gives us -625mV. The drain voltage (no signal) should be about 6V.
In the design shown (as well as the other examples that follow), an input capacitor is optional. The gate voltage is nominally zero, and an input cap is only necessary if the source has a DC offset. If present, the DC offset will disturb the bias point and the circuit may no longer work. The value of the input cap is determined by the gate resistor (R1) and the lowest frequency of interest, usually 20Hz. The value should be 5 times that indicated by the usual formula ... C = 1 / ( 2π·R·f ). A value of 39nF is fine with a 1MΩ input impedance.
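That sizing rule is easy to script. A minimal sketch (Python; the function name is mine, and the 5× margin is simply the rule of thumb described above):

```python
import math

def coupling_cap_farads(r_ohms, f_hz=20.0, margin=5.0):
    # C = margin / (2*pi*R*f) - five times the single-pole value,
    # so the -3dB point sits well below the lowest frequency of interest
    return margin / (2 * math.pi * r_ohms * f_hz)

c = coupling_cap_farads(1e6)        # 1M gate resistor, 20Hz
print(f"{c * 1e9:.1f} nF")          # 39.8 nF -> the standard 39nF value is fine
```

Drop `margin` back to 1.0 if you only want the raw -3dB-at-20Hz value (about 8nF for 1MΩ).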
The important thing to note is that if you increase the value of R3, that increases the negative bias, reduces the JFET current, and in turn increases the drain voltage (due to the reduced current through R2). The converse also applies of course. The optimum drain voltage is half the supply voltage, with a suitable offset to account for the source voltage. Don't expect to get much more than 2V peak from a circuit such as that shown without significant distortion!

Since (at least initially) we are primarily interested in the DC potentials, no input signal is used. When power is applied, the drain and source voltages can be measured to see how close we get to the original design figures. These are determined as follows ...
RS = 625mV / 1.5mA = 416Ω
RD = 6V / 1.5mA = 4k
These aren't standard values, so we'll use 390Ω and 3.9k. Remember, this is a first attempt, and we shouldn't expect it to be right first time around. The simulator gives the following values, which aren't too bad for a first guess ...
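The 'calculate, then snap to the nearest preferred value' step can be sketched as follows. This is illustrative only - the helper names and the E12 snapping logic are mine, not part of any formal design procedure:

```python
import math

# E12 preferred-value mantissas (10% resistor series)
E12 = [1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8, 8.2]

def nearest_e12(value):
    """Snap a calculated resistance to the nearest E12 preferred value."""
    decade = 10 ** math.floor(math.log10(value))
    candidates = [m * decade for m in E12] + [10 * decade]
    return min(candidates, key=lambda c: abs(c - value))

def bias_resistors(vgs_v, id_a, v_drop):
    """RS sets the bias (VGS / ID); RD sets the drain voltage drop."""
    return nearest_e12(vgs_v / id_a), nearest_e12(v_drop / id_a)

rs, rd = bias_resistors(0.625, 1.5e-3, 6.0)   # the worked example above
print(round(rs), round(rd))                   # 390 3900
```

As the text says, don't expect the first pass to be right - the snapped values are only a starting point for measurement or simulation.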
ID = 1.225mA
VGS = -495mV
VD = 7.2V
We can also measure the gain, as it's then possible to calculate the transconductance. There really isn't much point, but it's worthwhile for a better understanding of the JFET being used. The source resistor causes degeneration (as is the case with a BJT design), but JFETs don't have very high gain, so while we'd expect a gain of ten from a BJT, with the JFET it's only 5.2 (for a BJT in the same configuration, the gain would be [almost] RD / RS).
To measure transconductance, RS must be bypassed by a capacitor with a reactance of RS / 10 at the lowest frequency of interest. For good response at 20Hz, that means a 204µF cap - we'll use 220µF as it's a standard value. With the capacitor in place, the gain is 10.7 (20.6dB).
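The reactance rule can be scripted the same way as the input cap. A minimal sketch (Python, function name mine), assuming the RS/10 target just described:

```python
import math

def bypass_cap_farads(rs_ohms, f_hz=20.0):
    # Xc = 1 / (2*pi*f*C) = RS / 10  ->  C = 10 / (2*pi*f*RS)
    return 10.0 / (2 * math.pi * f_hz * rs_ohms)

print(f"{bypass_cap_farads(390) * 1e6:.0f} uF")   # 204 uF -> use 220uF
```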
Transconductance (aka gm ['adopted' from valve terminology] or gfs - forward transfer conductance) is determined by dividing the change of drain current by the change in gate voltage. A gate voltage change of 10mV gives a drain current change of 29.2µA, so gm is 2.92mS (or 2.92mA/V). The datasheet for the 2N5484 claims 3,000µmhos (3mS) to 6,000µmhos (6mS), so again, we're probably close enough (it varies with drain current amongst other things). The datasheet chart for transconductance indicates that with 1.2mA drain current, it should be around 2.5mS at 25°C.

Note: I do hope no-one thought that these parameters weren't affected by temperature - JFETs are no more immune to thermal changes than any other semiconductor.
This basic technique will work most of the time, but will only give acceptable biasing conditions if you know the actual values for VGS (off) and IDSS. If you work from averaged figures in the datasheet it will probably work, but it won't be optimised. Of course, the simple way to do the design is to select a 'suitable' drain current (that's well within the range shown), calculate the drain resistor, and use a trimpot to adjust the source resistance to get the maximum undistorted output from the drain.
gm = ΔID / ΔVGS   (Δ means change)
The change of gate voltage should be kept small to ensure that non-linearity doesn't mess up the measurement. The change of drain current should be such that you can ignore it compared to the quiescent (no signal) current. Every time you use a different drain current, you'll measure a different transconductance, even on the same JFET. The curve is not linear, but tends to be parabolic, following what's generally referred to as a 'square law'. This can be defined by the following formula ...
ID = IDSS × ( 1 - [ VGS / VGS (off) ] ) ²
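As an illustration only (real devices follow the law imperfectly), here is the square law and its inversion in Python. Using the measured 2N5484 figures from earlier (VGS (off) = -1.255V, IDSS = 3.37mA), the ideal law predicts a somewhat smaller bias voltage for 1.5mA than the 'half of VGS (off)' starting point used above - a reminder that the formula is only a first guess:

```python
import math

def drain_current(vgs, vgs_off, idss):
    """ID = IDSS * (1 - VGS / VGS(off))^2, valid for VGS(off) < VGS <= 0."""
    return idss * (1 - vgs / vgs_off) ** 2

def gate_bias_for(id_target, vgs_off, idss):
    """Invert the square law: the VGS needed for a target drain current."""
    return vgs_off * (1 - math.sqrt(id_target / idss))

vgs = gate_bias_for(1.5e-3, -1.255, 3.37e-3)
print(f"{vgs * 1000:.0f} mV")   # -418 mV (vs. the -625mV first guess above)
```

Either estimate still needs checking against the real device, for all the reasons already covered.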
You don't need to remember this, as (like all JFET parameters) it varies. However, let's do an example, using the same JFET as before (2N5484), but we'll use the datasheet minimum IDSS of 1mA as our maximum, so quiescent current should be 500µA. For a 12V supply, we expect around 6V across the drain resistance, so by Ohm's law that works out to be 12k. This should work with any example of a 2N5484, and we'll simply use a trimpot instead of a source resistor. Based on the previous design exercise, the drain resistance is about four times the previous value (12k vs. 3k9) so the source resistance should also be about four times that used in the previous example. That means ~1.2k so a 5k trimpot gives plenty of adjustment capability. There's no need to be exact when a trimpot is used.
We end up with the circuit shown above. To get 6.7V at the drain (with ~820mV at the source), TP1 will be set for 1.8k (in the simulator), and the value of C1 can be reduced since the resistance is lower (the calculated 56µF cap is likely to be unobtainable, so you'd use 100µF). The gain without C1 is about ×5.1, and with C1 in place it's ×19.7 (25.9dB). The transconductance has changed too, and is reduced to 1.79mS. From this you can (rightly) deduce that transconductance alone does not determine the gain of a circuit, as the first example had a measured gfs of 2.92mS, but lower gain!

In general, operating any JFET at a higher current will usually result in lower distortion, but you also get lower gain if the drain load is resistive. As noted, the drain current must be less than IDSS or you may cause gate current to flow, causing distortion (valves are no different in this respect). Recommendations vary widely, but my suggestion is to stay within 50% to 85% of IDSS for most circuits. Some circuits may perform better at higher or lower current, depending on the output amplitude. You also achieve the maximum output swing by using the Figure 3.2 circuit, although distortion will be quite high (> 1% is typical) if the output level is more than ~500mV RMS. Modifying the bias point is likely to make this worse, not better.

Input impedance is roughly equal to the value of R1, but it's frequency dependent. The input impedance is affected by input capacitance (CISS) and any gate leakage current. Input impedance is always higher at low frequencies, where the input capacitance has negligible effect. The output impedance is (again roughly) equal to the value of the drain resistor (R2). It's actually in parallel with the drain resistance (equivalent to plate resistance in a valve), but the drain resistance is normally very high and can be ignored. Remember that the drain resistor is also in parallel with the external load resistance (shown as R4), and if the load is low impedance, you'll lose voltage gain. The JFET 'sees' the combined resistance of R2 and R4 in parallel as its effective drain resistance, and this affects the gain. These caveats apply to all JFET amplifiers, regardless of topology.
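These relationships can be collected into a short sketch (Python). The gain expression is the standard first-order approximation rather than anything specific to this article, and the gm value is inferred from the earlier measurements (a bypassed gain of 10.7 with RD = 3.9k implies gm ≈ 2.74mS):

```python
def parallel(r1, r2):
    """Two resistances in parallel."""
    return r1 * r2 / (r1 + r2)

def stage_gain(gm, rd, rl, rs_unbypassed=0.0):
    """First-order common-source gain: A = gm*(RD||RL) / (1 + gm*RS)."""
    return gm * parallel(rd, rl) / (1 + gm * rs_unbypassed)

# Earlier example: RD = 3.9k, RS = 390 ohms, negligible external load
print(round(stage_gain(2.74e-3, 3.9e3, 1e9, 390), 1))   # 5.2  (C1 omitted)
print(round(stage_gain(2.74e-3, 3.9e3, 1e9), 1))        # 10.7 (RS bypassed)
```

The predicted figures land close to the measured gains of 5.2 and 10.7, but only because gm was taken from the same measurements - a fresh device will shift both.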
In some designs you may see a voltage divider used at the input (gate) which is claimed to allow the JFET to operate over a wider range. While this may be true, it's far easier to use a trimpot, as this removes the likelihood of supply noise being coupled to the input, and it means that the input capacitor is optional if there's no DC present from the signal source. Even if you do use a voltage divider as shown above, you'll still need to use a trimpot or select the JFET. Distortion performance is not changed compared to the Figure 3.2 version.

If you don't include the source resistor bypass capacitor (C1), the signal to noise ratio (SNR) will be reduced. The resistor makes noise (see Noise In Audio Amplifiers for details), and this will be amplified by the JFET, acting as a grounded gate circuit for noise voltage at the source. C1 bypasses this noise and increases the gain, and the circuit will always have better SNR with C1 in place than it will without it. This isn't always possible or desirable, but you need to be aware of it.

Section 5 covers active load arrangements, but the one shown next is actually a better choice. The circuit is simple, and it has no issues with stability because the DC conditions are set with resistors, and not active devices. This happens because the 'current source' is only active for AC, and it does nothing at very low frequencies or DC. Once the trimpot is set to get symmetrical distortion and/ or clipping, it biases itself like any other simple JFET amplifier, but has exceptionally high gain. The bootstrap section involves Q2, C2, and the centre-tap of R2 and R3. Output impedance is low (only a few ohms) so the JFET isn't affected by the next stage's input impedance.

The bootstrap circuit ensures that the voltage across R3 doesn't change, therefore the current through it doesn't change either. This is a constant-current drain load that gets the maximum possible gain from the JFET. My simulation says that the gain is ×100 (40dB), which is very good indeed for a single amplifying device. It's far higher than you'll ever get with a JFET in a 'conventional' circuit such as those shown above. Distortion performance is disappointing, with the simulation showing 2% at 1V peak (700mV RMS) output. This is obtained with only 10mV peak input!

The source bypass capacitor (C1) is optional, but if it's omitted the JFET will amplify the noise from R4 (trimpot). The gain variation is less than 3dB with it connected/ disconnected. Active current-source loads are discussed in Section 5, and while the theoretical advantages are clear, the practical realisation of an active load is difficult. The benefit of the scheme shown above is that the gain for DC remains small (around ×4.5 for the example shown), so setting up the DC operation parameters is no more critical than for any other simple JFET amplifier stage.

Earlier on, I mentioned using a servo around a JFET to set the operating conditions to the same drain voltage, regardless of the JFET used. The idea is quite valid, but is also silly - adding an opamp to a JFET circuit makes no sense, as you can just use the opamp to amplify. It will have predictable gain, lower distortion and much better performance than any JFET. However, we shall not let this deter us.
Sw1 (Hi/ Lo) lets you select the drain current so that low-current devices can be tested. In the 'Hi' position, the drain current is 1.82mA, reduced to 600µA in the 'Lo' position. Some JFETs will only work satisfactorily with drain current below 1mA. You can change the value of R2a/ R2b to suit your requirements. The tests described were all performed using the 'Hi' setting.
As you can see, this is completely over the top for a simple JFET amplifier circuit. The opamp uses ½ the supply voltage as a reference, so the drain of the JFET will always be at exactly 6V with a ±12V supply. The only case where it may fall down is with a FET with much higher than normal VGS (off), where the opamp's output can't swing far enough to compensate. With a J111 in circuit, the opamp's output voltage was +7.4V, needed to force the source voltage to +4.6V (-4.6V negative bias referred to the gate). The supply voltages should be a minimum of ±12V. The negative supply is only used for the opamp, so it can apply negative correction where required (this will be the case with low VGS (off) devices, where the 1k resistor would result in too much bias). With the simulator's 2N5484 in place, the opamp's output is -3.73V, with -48mV at the source of Q1 (yes, it's very slightly reverse biased).

If I use a J113 in my simulation, the output of U1 is -72mV, with the source voltage for Q1 then set to +871mV. The current through R2 never changes with no signal. Because the voltage is fixed at half the supply voltage by the servo (6V for this demo), the drain current is always 1.82mA, regardless of the JFET used. It's highly unlikely that anyone else will build this circuit, not because it doesn't work, but because it's not sensible to throw so many parts at a simple JFET amplifier stage. Note that C2 is optionally a bipolar electrolytic capacitor, only necessary if you wish to test BJTs or MOSFETs, as the emitter/ source voltage will be negative. For testing only JFETs it will be a standard polarised electro as shown.
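The 'always 1.82mA' claim is easy to verify: with the drain servoed to half the supply, the current depends only on the drain resistor. The resistor values in this sketch are inferred from the quoted currents (3k3 and 10k), not read from the schematic.

```python
# The servo pins the drain at half the (positive) supply, so the drain current
# is simply the voltage across the drain resistor divided by its value.
# R2 values (3300 / 10000) are inferred from the quoted currents, not
# taken from the article's schematic.
V_SUPPLY = 12.0
V_DRAIN = V_SUPPLY / 2            # servo reference: half the supply

def drain_current(r_drain):
    return (V_SUPPLY - V_DRAIN) / r_drain

print(f"{drain_current(3300)*1e3:.2f} mA")   # 'Hi' setting: ~1.82 mA
print(f"{drain_current(10000)*1e6:.0f} uA")  # 'Lo' setting: 600 uA
```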
For what it's worth, the circuit shown will work with an NPN BJT or a small-signal N-Channel MOSFET just as well as a JFET. Provided the opamp can supply sufficient voltage to bias the device used, it can be used to make direct comparisons between devices. I'm not entirely sure that this is useful, but some people may wish to put one together just for the fun of it. It's not often that you see a circuit that will automatically bias almost any device you choose to try. I measured the transconductance of a BC550C BJT in the same circuit, and it managed 22mS - significantly higher than any JFET and the 2N7000 MOSFET (13.7mS). The BJT had significantly lower distortion than the JFET or MOSFET. Testing PNP or P-Channel devices will require the circuit to be rewired to suit.

Some readers may choose this arrangement to quantify JFETs into categories in much the same way as the Figure 2.1 circuit, but providing the ability to measure gain as well as the required source voltage for operation at the selected current. For anyone often working with JFETs, this could be an invaluable tool. Maybe it's not quite as silly as I first thought!

The basic idea shown here is now available as a project, with plenty of detail so you can adapt it for your tests. See Project 237 for all the details. Test results are also presented, and it's the best JFET tester I've used.

The transconductance measurements are based on using the device in its intended mode of operation, and may not agree with the value stated in the datasheet. The figures quoted are usually based on a constant drain voltage and at a specified drain current. This almost certainly will be at a voltage and current that are different from the values you will use, and the figure will be different.
Figure 3.4 shows a bootstrapped drain load, which makes the JFET's drain current almost constant. As with BJTs and valves, using a current source load improves performance, usually resulting in higher gain and better linearity. When using JFETs, the simplest approach (although this may be debatable) is to use a second FET as the load, configured as a current source. JFETs aren't particularly wonderful in this role, but they are a simpler solution than a BJT current source. While the performance of the latter is a great deal better than the JFET, it's also more complex. A JFET needs just one resistor, as shown below. This will only work with well-matched JFETs.

If the JFETs are matched, the voltage across each will be close to identical, and the circuit will bias properly. With unmatched JFETs, you are in for a world of pain - unless the characteristics of both are close to identical the circuit will not work as expected. Perhaps surprisingly, the source resistor bypass capacitor (C1) makes very little difference to the gain. Depending on the JFETs used and operating conditions, it may increase the gain by between 3dB and 6dB, but that's all. Omitting C1 reduces distortion, and the degree can be significant (as much as 10:1). However, you may experience an increase in noise levels, as the resistor (R3) noise will be amplified.

An alternative is to use a current sink load on the source pin. This uses more parts, and looks like it would offer superior performance. However, the JFET doesn't really care how its drain current is defined, so an active or passive source circuit should make no difference. A simulation shows this to be the case, and both frequency response and distortion are almost identical. Note that there's no actual difference between a current source and current sink - it's merely a matter of semantics, not circuit behaviour. I haven't shown a circuit for this, as there's really no point.

This arrangement might appear to be ideal, since it provides much higher gain than any other variation. However (and you just knew there would be a down-side), it is extraordinarily sensitive to the value of R3 in relation to R2. Even a tiny parameter variation (such as will happen to the JFET with temperature) throws everything out, and it will either distort or may even stop amplifying altogether. The gain is ×222 (47dB), reduced to ×136 (42.6dB) without C1. Distortion is reduced by a factor of 2.7 if C1 is omitted. Interesting, but not useful in a 'real world' amplifier without a servo circuit (Figure 4.1). While that will work (very well) it's starting to get very silly indeed. All that for a JFET amplifier that still can't beat a couple of opamps!

Any active current source load will make the setup of DC conditions very difficult. This is because the JFET is forced to have extremely high gain for all frequencies including DC, so it's inevitable that even small changes (due to time and temperature for example) will cause large changes to the DC conditions. The simple way around this is to use bootstrapping instead, as shown in Figure 3.4. The loss of performance is measurable, but the circuit will actually perform better if the DC conditions are made less critical.
Having examined constant current, now we can examine constant voltage. The arrangement shown isn't one that I've seen, but it's inevitable that it has been used before. The opamp is used to 'current load' the JFET's drain, and (for an ideal opamp) there is (close to) zero AC voltage at the opamp's inverting input. The opamp obtains its reference voltage from the same connection (the JFET's drain terminal), and while it may seem unlikely that this will work, it does work very well. The opamp must have a high input impedance, and a FET-input type is recommended. However, due to the biasing scheme used, you can use a bipolar opamp (R3 should probably be reduced to around 100k).

Because of R4, which applies feedback and makes the inverting input a 'virtual earth' stage, the opamp has an input impedance of close to zero ohms. Biasing is provided via R3, and is bypassed with C3. C2 ensures that the opamp's DC gain is unity, to prevent serious offset problems. The opamp is wired as a transimpedance amplifier, meaning its output voltage is directly proportional to the input current (but inverted). The gain of the opamp stage is determined by the transconductance of the JFET and R4, and R2 only affects the gain by varying the transconductance a little. As simulated, the overall gain is ×23, and it can be increased or reduced by increasing (or decreasing) the value of R4. As shown, you'll almost certainly need to use a trimpot in place of R3 so the operating conditions can be set for the JFET you have (the value was 1k for the simulation).

There is almost zero voltage variation at the drain of Q1, so the only thing that changes is the current through the JFET. Since the voltage across R2 doesn't change, nor does the current through it. This arrangement sets up the JFET to operate as a 'true' square-law device, and it has only 2nd harmonic distortion. There is a tiny amount of 3rd harmonic distortion but it's 100dB below the fundamental. The 2nd harmonic is at -34dB, with a THD of just over 2% with 810mV RMS output (50mV peak input). The gain is directly proportional to the value of R4, so if it's doubled, so is the circuit gain. The JFET is operating with a transconductance of 2.23mS.
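The gain relationship for the transimpedance stage is easy to check numerically. In this sketch, R4 = 10k is an assumption chosen to land near the quoted ×23; the transconductance is the 2.23mS figure given above.

```python
# The opamp is a transimpedance stage, so the overall voltage gain is set by
# the JFET's transconductance and the feedback resistor: Av ~= gm * R4.
# R4 = 10k is an ASSUMED value chosen to land near the quoted gain of x23.
gm = 2.23e-3   # 2.23 mS, as quoted for the simulated operating point

def overall_gain(r_feedback):
    return gm * r_feedback

print(overall_gain(10e3))   # ~22.3, close to the simulated x23
print(overall_gain(20e3))   # doubling R4 doubles the gain: ~44.6
```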
This circuit doesn't even approach 'ordinary', and it most certainly is not hi-fi. However, some experimenters might like to play with it, and it will be found that 2nd harmonic distortion is not as 'nice' as claimed by so many, because it still generates intermodulation distortion. Intermod is particularly troublesome with complex musical passages due to the vast number of additional frequencies generated. Having said that, I tried it as a guitar preamp, and it had plenty of gain with R4 set to 33k, and can drive a power amp directly. It tested much better than the simulation, and sounded great. However, it's still not hi-fi.

Source followers (aka buffers) are intended to adapt high impedance sources to lower impedance loads. Unlike an opamp buffer, they always have a measurable voltage loss, so gain is typically around 0.9 rather than unity. There's no practical limit to the input impedance, but it will rarely be more than 10MΩ for most common designs. Although the gate current is small, it's not zero, so you must expect a small DC voltage to appear across the input resistor (R1). When properly designed, most source followers will elevate the gate to some positive voltage, so an input capacitor is mandatory (unlike the other circuits shown above).

A JFET source-follower has one significant advantage over a BJT emitter follower, in that the input impedance is not affected by the load (Note 1). Likewise, the output impedance isn't affected by the signal source (another odd characteristic of emitter followers). However, the output impedance of a source follower is nowhere near as low as that from an emitter follower. In this case, it's 330Ω, and while it's close to the same value as TP1 in this example, the two are not related. You must also remember that output impedance has nothing to do with a circuit's ability to provide current to a load. In a circuit that draws around 1.2mA from the supply, the maximum negative current will be less than 1mA, after which it will clip the negative half-cycles (a circuit such as Figure 7.2 is assumed).
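The quoted 330Ω output impedance can be estimated from the usual follower approximation, Zout ≈ 1/gm in parallel with the source resistor. The gm and source-resistor values below are assumptions picked to land near that figure, not values from the schematic.

```python
# A source follower's output impedance is roughly 1/gm in parallel with the
# source resistor. gm and Rs here are ASSUMED values chosen to land near the
# 330 ohm figure quoted in the text.
def follower_zout(gm, r_source):
    r_gm = 1.0 / gm                          # intrinsic output resistance
    return (r_gm * r_source) / (r_gm + r_source)

print(follower_zout(2.75e-3, 3900))          # ~333 ohms
```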
¹ This isn't strictly true, because if you rely on bootstrapping to increase the input impedance, the following load reduces the output level. As the level is reduced, bootstrapping becomes less effective.
While it is (sometimes) possible to use a JFET with nothing more than an input and source resistor, mostly this gives woeful performance. The source will be at a voltage determined by the JFET's characteristics, which generally means the voltage is quite low (a little less than the VGS (off) voltage). With the 2N5484 I've used for other examples, the source voltage may only be around 700mV, and that is the absolute limit for a negative-going input signal. If the amplitude is greater than 1.4V peak-peak, the negative half-cycle will clip and the positive half-cycle will draw gate current. Distortion will be high even before clipping, so this is usually not an option. An example is shown below - this is not the way to build a source-follower!
Provided the input signal is less than 100mV RMS or so, the circuit shown will work, but it's just wrong, and has almost no headroom. Even with a mere 100mV input, the distortion is more than 0.5%, where it should be less than 0.01%. Fortunately, it's not at all difficult to get it right, with the addition of one resistor and an input capacitor. The requirement to know the specific values for VGS (off) and IDSS is just as important for a source follower as it is for a common source amplifier.

You'll see this used, but it's not the best example of design. The input impedance is 1.1MΩ, but of course R1 and R2 can be made higher values. However, having a resistor tied to the supply rail makes it susceptible to any supply noise. It also misses an important improvement that is provided by the next circuit. It is easy to set up, and will 'self-bias' quite well as long as R3 is a suitable value. 'Suitable' in this context means that it should pass no more than 85% of the minimum IDSS figure shown in the datasheet. The source voltage will be slightly higher than the gate, as required to bias the JFET properly. If R3 is too low in value, the input will draw gate current, which will cause distortion with high impedance sources. I used 3.9k, which will pass around 1.6mA (quiescent) and is a reasonable compromise between drain current and output drive capability.
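The quiescent current claim can be sanity-checked with a little arithmetic. In this sketch, the gate voltage (6V, roughly half the supply) and the VGS figure (-0.24V) are illustrative assumptions chosen to match the quoted current, not values from the schematic.

```python
# For the divider-biased follower, the source sits at (gate voltage - VGS)
# and the quiescent current is simply that voltage across R3.
# V_GATE = 6 V and V_GS = -0.24 V are ASSUMED illustrative values.
V_GATE = 6.0
V_GS = -0.24          # a small negative bias, plausible near this current

def follower_id(r_source):
    v_source = V_GATE - V_GS      # source sits slightly above the gate
    return v_source / r_source

print(f"{follower_id(3900)*1e3:.1f} mA")   # ~1.6 mA, matching the text
```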
In the following drawing, the values for R2 and R3 are the same as those shown in Figure 3.1 (390Ω and 3k9). Again, as a first guess (and based on the same calculations), it's pretty good. However, distortion performance is not quite as good as that for Figure 7.2. This also depends a great deal on the JFET used, so distortion figures are intended as a rough guide only.

The Figure 7.3 circuit (when set up properly) can handle an input of 10V P-P (3.54V RMS) with distortion below 1%. That's by no means wonderful, but at lower voltages (e.g. 1V RMS) it falls to around 0.17%. This is still pretty poor, and we need a more complex topology to improve it any further. It may come as a surprise, but bypassing R2 with a capacitor (220µF or so) actually increases the distortion, but has very little effect on the output impedance.

There's an unexpected change to the input impedance with the Figure 7.3 circuit. Because R1 is effectively bootstrapped (from the junction of R2 and R3), the input impedance is not 1MΩ as you'd expect, but is raised to over 6MΩ. If the input voltage is 1V, the voltage across R1 is only 160mV, implying an input impedance of 6.25MΩ (by calculation). The simulator also shows the input impedance to be 6.25MΩ, so input impedance has been increased by a factor of more than six. This isn't usually considered, but it's quite real. When R2 (trimpot) is bypassed as discussed above, input impedance is increased further, but over a limited frequency range.
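The 6.25MΩ figure follows directly from Ohm's law on R1, using the voltages quoted above: the input impedance is the input voltage divided by the current that actually flows through R1.

```python
# With R1 bootstrapped, only the difference between the input voltage and the
# fed-back voltage appears across R1, so the effective input impedance is the
# input voltage divided by the current in R1. Figures are those quoted above.
R1 = 1e6              # 1M input resistor
V_IN = 1.0            # 1 V input
V_ACROSS_R1 = 0.16    # quoted: only 160 mV appears across R1

i_r1 = V_ACROSS_R1 / R1
z_in = V_IN / i_r1
print(f"{z_in/1e6:.2f} Mohm")   # 6.25 Mohm, as the simulator reports
```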
Much better performance is obtained by using a second (matched) JFET as a constant current sink, in the source circuit of Q1. For convenience, I used the same values shown in Figure 5.1, just rearranged to make it a source follower instead of a common source amplifier. Distortion (at 1V RMS) is reduced to 0.0039%, output impedance is around 550Ω and it's about as good as you can reasonably expect without a buffer stage.

Because the bootstrapping of R1 is more effective due to Q2, the input impedance has increased to over 70MΩ. However, it falls off as frequency increases, and is 'only' 5.4MΩ at 30kHz. In my simulation, the input impedance starts to fall beyond 1kHz, but it's unlikely that will cause the slightest problem in use. If you need very high impedance at low frequencies, this is the circuit you need. Unlike the 'standard' arrangement shown in Figure 7.2 which has an output level of 923mV (for 1V input), the Figure 7.3 circuit's output is 995mV. This is still less than unity gain, but it's close enough for most purposes.

If you have a negative supply available, biasing a JFET follower is a great deal easier. You don't have to worry about the bias at all, as it looks after itself. You do lose the bootstrapping of course - it can still be done by adding another resistor and capacitor, but isn't shown here. The circuit shown can be used with just the JFET - simply leave out the transistor and replace R2 with a direct connection to the positive supply. Do not leave R2 in the circuit without Q2 - the circuit doesn't work properly if it's in place. Omitting the BJT from the circuit increases distortion and output impedance, and also reduces the gain to ~0.93 (vs. 0.99 with the BJT).

The BJT gives you the best of both worlds - high input impedance and a commendably low output impedance. The circuit shown has an output impedance of only 4Ω, but obviously cannot supply any useful current into such a low impedance. There are several variations on this theme, and it is preferred over a single-supply circuit. The values for R2 and R3 aren't critical, but need to be selected to suit the IDSS of the JFETs being used. The circuit shown can normally handle an input voltage of around 4V RMS with vanishingly low distortion. The addition of Q2 dramatically improves performance, reducing distortion by up to two orders of magnitude, a significant advantage.

The biggest limitation to extended frequency response is due to the gate-drain capacitance (CGD) and the Miller effect. The effective capacitance seen at the gate is equivalent to CGD multiplied by the AC voltage gain. Because most JFETs are symmetrical, the value provided for CISS is the sum of CGD and CGS, although some datasheets also provide different values for 'on' and 'off' conditions (particularly for switching types such as the J11x series). However, the gate capacitance is not a fixed value, and it varies with gate, drain and signal voltage (and of course between different JFETs - even of the same type).

Provided the source has a relatively low output impedance, the effects of input capacitance are negligible. However, when a JFET is used with a high impedance signal source, you can easily run into problems with poor high frequency response. Depending on the application, the only solution available may be to use a JFET as a source-follower before the amplifying stage. This negates CGS completely (as it's effectively bootstrapped) leaving only CGD which is referred to the supply rail. The 2N5484 that I used for many of the examples here has a CISS of 5pF, so you have 2.5pF for both CGD and CGS. As an amplifier with a gain of 21dB, the -3dB frequency is 159kHz with a 100k source impedance. This is extended to over 1MHz for a source follower under the same conditions.
Given that the Miller effect multiplies the gate-drain capacitance by the voltage gain, you might expect that the -3dB frequency should be much lower than the simulator calculated, but the Miller effect does not necessarily give an exact figure because the JFET's capacitance varies with drain current. However, you can prove it for yourself by adding external capacitors (around 1nF is a good value to try), and you'll find that CGD is indeed multiplied by the voltage gain.
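For comparison, the naive textbook Miller calculation goes like this. As the text explains, it won't match the simulated 159kHz exactly, because the real device's capacitance varies with operating point; the figures below use the datasheet-style values quoted above.

```python
# Naive Miller estimate for the common-source case: the gate sees
# C_GS + C_GD * (1 + Av). This textbook figure predicts a LOWER -3dB point
# than the simulated 159 kHz, since the real capacitance varies in operation.
import math

C_GS = 2.5e-12                # half of the 2N5484's 5 pF CISS
C_GD = 2.5e-12
AV = 10 ** (21 / 20)          # 21 dB of voltage gain ~= x11.2
R_SOURCE = 100e3              # 100k signal-source impedance

c_in = C_GS + C_GD * (1 + AV)
f3 = 1 / (2 * math.pi * R_SOURCE * c_in)
print(f"{c_in*1e12:.1f} pF -> {f3/1e3:.0f} kHz")  # ~33 pF -> ~48 kHz estimate
```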
At the time of writing I don't know if I need to provide diagrams to demonstrate this or not, so I've chosen not to include them. If readers want me to add the necessary diagrams I'll do so, but rest assured that the effects are completely real. Whether or not the input capacitance causes anyone any grief depends on the application, and for most audio applications it's usually not a consideration. However, be warned that switching devices such as the J11x series perform far worse than JFETs designed for amplification. Unfortunately, these are the very ones that are now the hardest to obtain.

One thing that you don't want is gate current, as this can lead to serious distortion. It's far less common with JFETs than valves that draw grid current, but it can cause the same problem, known as 'blocking'. This problem is almost always due to heavy overdrive of the stage, where the input signal's peak amplitude is greater than the FET's gate diode voltage. For this reason, it's not a good idea to use a JFET stage for a guitar distortion circuit, unless you ensure that it's properly configured to prevent blocking. If you don't include an input capacitor (C1 below) you will never have an issue with blocking, but if the preceding stage has a DC offset, then C1 must be included. Other measures may then be necessary to prevent blocking.

To achieve the blocking state, all that's needed is a transient input level high enough to forward-bias the gate diode. This causes the input capacitor to charge, which forces the gate voltage to become more negative. If the input signal is at a high enough level and from a relatively low impedance, when the level returns to 'normal', the JFET remains cut-off, and only a distorted remnant of the signal gets through until the gate voltage has returned to zero. The drawing shows a simulated circuit that works quite well. The distortion is fairly high, but that wouldn't be an issue with a guitar preamp for example. (Note that the JFET and/ or source resistor [R3] would need to be selected.) As simulated, the quiescent DC level on the drain is about 8V. This is higher than the ideal, but it's still within reasonable limits.

Fortunately, although blocking is likely with many configurations that are commonly used, it isn't a problem unless there's an input capacitor and the likelihood of high-level input transients that cause severe overload. With no input cap (or with a high impedance signal source) it's highly unlikely. That is not to say that it can't occur, as shown in the following simulation. For the first 4ms, the signal is at a 'normal' level (at around 460mV peak, 330mV RMS), and is amplified as expected. After just a 6ms burst of high-level signal (2V peak), the input capacitor (C1) charges to -1V. When the input returns to its previous level, the output is highly distorted, and shows a significant DC level shift. The circuit is fairly conventional, but is not optimised. Any JFET stage (with an input capacitor) can be forced into blocking, although it's usually not quite as severe as the waveform shown below.

The graph shown is directly from the simulator, and with an input voltage of ~330mV RMS it has a gain of ×22.5 (27dB). When the signal level is raised to 1.4V RMS, the gate voltage is driven to -1V via the gate diode, turning off the JFET. The high-level signal still gets through, but the amplifier clips heavily. Once the signal is returned to its former value, the JFET remains off until C1 discharges. With the values shown, this will take close to 50ms. While recovery is fairly fast, the sound is dreadful.
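The ~50ms recovery figure is consistent with a simple RC discharge estimate. The R1 and C1 values in this sketch are assumptions (not read from the schematic), chosen only to show how a recovery time of that order arises.

```python
# Rough recovery estimate: after blocking, the input capacitor discharges
# through the gate resistor exponentially, so the time to decay from -1 V to
# some small residual is t = R*C*ln(V_start/V_end).
# R1 = 1M and C1 = 22n are ASSUMED values for illustration only.
import math

R1 = 1e6
C1 = 22e-9
t_recover = R1 * C1 * math.log(1.0 / 0.1)   # decay from 1 V to 100 mV
print(f"{t_recover*1e3:.0f} ms")            # roughly 50 ms
```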
The blocking effect isn't necessarily limited to the JFET stage itself. In the waveform, you can see that the output voltage shows a significant positive swing, and this may cause the following stage to saturate (clip) until C3 discharges. The rate of discharge is determined by the input impedance of the next stage (rarely just a simple 100k resistor as shown), and while it may be fairly brief, it can still cause gross 'non-harmonic' distortion. This distortion is non-harmonic because it's based on a crude timer that is not affected by the input frequency. Complete blocking of a JFET stage is not necessary to cause problems with the following stage(s). A momentary bias shift may be all that's needed to cause havoc.

Note that this simulation deliberately makes blocking more severe than you are likely to experience with most designs. The example shown has been exaggerated for clarity.

Blocking isn't something you come across often, but when it happens, unless you know the cause, it may take some time to track down. The symptoms are obvious if you know what to look for, but if you've never come across it before, it can be difficult to work out what's happening. I came across it first about 50 years ago, when a colleague couldn't work out what was wrong with a valve amplifier that cut out after a brief transient overload. It could take up to 30 seconds before it would produce sound again, but most cases only involve very brief blocking behaviour (which sounds awful). You have to know what to look for! Once learned, problems like this are forgotten at your peril. Even blocking lasting a few milliseconds is enough to cause what should be guitar 'fuzz' to sound like 'fart'. As a guitar effects pedal, no-one has ever longed for a fart-box (to my knowledge, at least).
You'll often see JFETs used for muting. They are ideal in this mode, because by default the JFET is turned on (shorting the signal to ground) with no power, and when a suitable negative voltage is applied to the gate, the JFET turns off, and lets the signal through. The following is taken from the article Muting Circuits For Audio, and is shown again here because it's relevant.
Junction FETs (JFETs) can also be used, and like the relay they mute the signal by default. To un-mute the audio, a negative voltage is applied to the gate, turning off the JFET and removing the 'short' it creates. Unlike a relay, JFETs have significant resistance when turned on. The J11x series are often used as muting devices, and while certainly effective, the source impedance has to be higher than with a relay. The typical on-resistance (RDS-on) of a J111 is 30Ω (with 0V between gate and source). The J112 has an on-resistance of 50Ω, and the J113 is 100Ω (the latter is not recommended for muting). I tested a J109 (which is better than the others mentioned, but is now harder to get) with a 1k series resistor, and measured 44dB muting, and that's not good enough, so two JFETs are needed as shown.

Note that JFETs will generally not be appropriate for partial muting (for a 'ducking' circuit for example), because when partially on they have significant distortion, unless the signal level is very low (no more than around 20mV), and/or distortion cancelling is applied. This application is not covered here.

To un-mute the signal, it's only necessary to apply a negative voltage to the gates. There is no current to speak of, and dissipation is negligible. JFETs are ideal for battery powered equipment, but there has to be enough available negative voltage to ensure that the JFET remains fully off ... over the full signal voltage range. If you use a J111 with a 10V peak audio signal, the negative gate voltage must be at least -20V (the 'worst-case' VGS (off) voltage is 10V), and the gate must not allow the JFET to turn on at any part of the input waveform.
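The -20V figure follows directly from the worst-case numbers: the gate must stay at least VGS (off) below the most negative signal excursion. A small helper, using the figures quoted in the text:

```python
# The gate must sit at least VGS(off) below the most negative signal peak,
# otherwise the JFET starts to conduct on parts of the waveform.
def required_gate_voltage(v_signal_peak, vgs_off_worst):
    """Most positive allowable gate voltage for a guaranteed-off mute JFET."""
    return -(v_signal_peak + vgs_off_worst)

# J111 worst-case VGS(off) is 10 V; 10 V peak audio as in the text:
print(required_gate_voltage(10.0, 10.0))   # -20.0 V, as quoted
```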
Using a JFET to get a 'soft' muting characteristic works well. The JFET will distort the signal as it turns on or off, but if the fade-in and out is fairly fast (about 10ms as shown) the distortion will not be audible. You may be able to use a higher capacitance for a slower mute action, but you'll have to judge the result for yourself. I tested the circuit above (but using a single J109 FET) and the mute/ un-mute function is smooth (no clicks or pops) and no distortion is audible. Measured distortion when the signal is passed normally is the same as my oscillator's residual (0.02% THD).

If a JFET has an on-resistance of 30Ω, the maximum attenuation with a 2.2k source impedance is 37dB. This isn't enough, and you will need to use two JFETs as shown to get a high enough mute ratio. This is at the expense of total source resistance though. With the dual-stage circuit shown above, the mute level will be around -70dB. It is possible to reduce the value of the two resistors (to around 1kΩ) which will reduce the muted level to around -60dB, which is probably sufficient for most purposes. An alternative is to use two or more JFETs in parallel. Two J111 FETs will have a total 'on' resistance of 15Ω, four will reduce that to 7.5Ω, etc. Consider that a set of electromechanical relay contacts will have a resistance of a few milliohms!
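The 37dB figure is just the voltage divider formed by the series resistor and the JFET's on-resistance, and paralleling JFETs improves it as you'd expect. A quick check using the values quoted in the text:

```python
# A shunt JFET forms a divider with the series resistor, so the muted level
# is 20*log10(Ron / (Rs + Ron)). Values are those quoted in the text.
import math

def mute_db(r_series, r_on):
    return 20 * math.log10(r_on / (r_series + r_on))

print(f"{mute_db(2200, 30):.1f} dB")    # one J111 stage: ~-37.4 dB
print(f"{mute_db(2200, 15):.1f} dB")    # two J111s in parallel: ~-43.4 dB
```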
You can improve the attenuation by applying a small positive signal to the gate, but it should not exceed around +400mV. Any more will pass DC through to the signal line as the (normally reverse-biased) gate diode conducts. In general I would not recommend this, as it adds more parts that have to be calculated for the mute control circuit, and the benefit isn't worth the extra trouble.

There is also the option of using a JFET based optocoupler (the datasheet calls it a 'symmetrical bilateral silicon photo detector') such as the H11F1. These are claimed to have high linearity, but I don't have any to test so can't comment either way. According to the datasheet, low distortion can only be assured at low signal voltages (less than 50mV). They might work as a muting device, but the FET is turned off by default, and turns on when current is applied to the internal LED. This means that the internal FET would need to be in series with the output for mute action when there's no DC present. The on resistance of the FET is 200Ω with a forward current of 16mA through the LED. I don't consider this to be a viable option.

Analog Devices used to make ICs called the SSM2402 and SSM2412 that included a three JFET 'T' attenuator and a complete controller circuit for a two channel audio switching and/or muting circuit. They have been discontinued, and there doesn't appear to be a replacement. They were aimed at professional applications such as mixers and broadcast routing, and would be useful parts if still available.
+ + +It would be easy enough to include the design formulae that are shown on many other sites that cover JFET design, or to show graphs that allow you to determine the optimum bias point for the device you plan to use. Unfortunately, these are pretty much redundant, because every device you use will be different from the others in your parts bin - even those of the same type and from the same manufacturing batch. Unless you measure your JFETs and put each into a separate bag marked with the two main parameters (VGS (off) and IDSS), every design is pretty much a lottery.
+ +This doesn't mean for an instant that I don't like JFETs, or that they should not be used. Apart from anything else, they can be quite good fun to play with, and they are ideally suited to applications that require a high input impedance. While most of the desirable JFETs are now difficult to obtain, they are still available from some major vendors, albeit with a reduced range. If you are willing to use SMD parts the choice is a little better, but these are hard to work with in experimental 'lash-up' circuits. RF (radio frequency) types are usually a little easier to get than the 'traditional' audio devices that were the mainstay of so many early designs, but these work perfectly at audio frequencies.
+ +Using a trimpot to set the operating conditions (which may need to be altered to get maximum undistorted output level) is by far the easiest approach, but even that doesn't mean the design is optimal. The extreme variability of JFETs means that you either accept that every simple amplifier you build will be slightly different, or the circuit becomes far more complex than almost any other solution. This isn't a problem if the input level is so small that the required output swing remains tiny compared to the supply voltage, but it becomes an issue when the input (and output) levels are both high enough to create significant distortion.
+ +The decision to use a constant voltage at the drain (as shown with Figure 6.1) or constant current (Figures 5.1 and 5.2) is easy. A constant current load gives the best performance and lowest distortion, but is usually very difficult to bias properly. So much so that unless you use a dual matched JFET, the chances of it behaving itself are fairly slim. This is solved by using bootstrapping as shown in Figure 3.4, which also provides a low output impedance. Constant voltage is more predictable and quite stable, but distortion performance is usually mediocre at best.
+ +Contrary to what some people may claim, JFETs are not linear. For a given gain and output swing, a simple BJT stage will nearly always beat an equally simple JFET stage hands down. This is very much dependent on signal level though, and at low output levels (typically less than 500mV), a JFET may have lower distortion. Of course, this also depends on the JFET itself, the signal level and the supply voltage. Distortion performance of JFETs (and BJTs) can be improved dramatically by using a constant current source in place of a drain (or collector) resistor (as shown in Figure 5.1), but that's not always feasible with JFETs (parameter spread strikes again).
+ +A common claim is that due to a JFET's square law behaviour, it is capable of producing only the second harmonic, with no higher order harmonics at all. While this is true in principle, it only happens under very specific conditions that may not be achievable in your circuit. The conditions can be met if the output is taken as drain current modulation with the drain voltage unchanged (Figure 6.1). Unfortunately for those who may believe that this is somehow 'musical' or 'pleasant', it's not easy to achieve with any realistic (i.e. usable) circuit. I can think of other ways it might be exploited as well; what I can't think of is why anyone would bother. Such a circuit will still create intermodulation distortion, which is far more objectionable than the harmonic distortion that causes it.
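The 'second harmonic only' claim is easy to demonstrate numerically. This sketch (using hypothetical values for IDSS and VGS (off)) pushes a sinewave through the ideal square-law equation ID = IDSS × (1 - VGS / VGS(off))², then inspects the spectrum of the drain current: there is a strong second harmonic, but essentially nothing at the third or above:

```python
import numpy as np

# Hypothetical square-law JFET: Id = Idss * (1 - Vgs / Vgs_off)**2
Idss, Vgs_off = 10e-3, -2.0     # assumed device parameters
bias, amp = -1.0, 0.5           # gate bias and signal amplitude (volts)

n = 1024
t = np.arange(n) / n
vgs = bias + amp * np.sin(2 * np.pi * t)        # exactly one cycle
i_d = Idss * (1 - vgs / Vgs_off) ** 2

spectrum = np.abs(np.fft.rfft(i_d)) / n
fund, h2, h3 = spectrum[1], spectrum[2], spectrum[3]
print(h2 / fund)    # substantial second harmonic (12.5% with these values)
print(h3 / fund)    # third harmonic: numerical noise only
```

Any curvature beyond a pure square law (e.g. allowing the drain voltage to vary into a load) immediately introduces higher-order products, which is why the condition is so hard to meet in a usable circuit.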
+ +Another furphy is that complementary JFET circuits (using N and P-Channel JFETs) are actually complementary. Having looked at the parameter spread of only N-Channel devices, it's obvious that getting perfectly matched complementary JFETs will be somewhere between extremely difficult and impossible. This would mean that not only are the JFETs of each polarity matched, but their opposites are matched to each other as well. That would mean identical VGS (off), IDSS and transconductance for N-Channel and P-Channel devices. This is very unlikely indeed.
+ +All things considered, JFETs are useful when you need very high input impedance, along with wider bandwidth than you can get with (affordable) opamps. You do need to be aware of the gate-source capacitance (CISS) though, as that is often high enough to cause premature high frequency rolloff with high impedance sources. As a means of 'general purpose' or 'pure audio' amplification, JFETs should be one of the last choices, after other possibilities have been exhausted. Wide parameter spread, lack of availability of good amplifying types, and limited gain mean that JFETs should only be used when you have no other choice - this is rare for audio, but there are a few cases where a JFET is a sensible choice. JFET input opamps are usually a different matter, as some are very good indeed (the TL07x series are 'utilitarian' examples). It's worth noting that JFET opamps claiming 'superior audio performance' are engaging in hyperbole (or wishful thinking) - this is 'marketing speak' and doesn't necessarily represent reality!
+ + +You need to be careful with some references, as the claims may not stand up to scrutiny. Despite claims, JFETs don't sound 'better' than BJTs or opamps in well designed circuits. There are three main things that affect sound quality - frequency response, noise and distortion. Some of the latest opamps will beat any discrete circuitry in all three categories when properly implemented, and absolutely do not somehow 'mangle' your audio in ways that cannot be explained (without resorting to snake oil).
Elliott Sound Products | +Loudspeaker L-Pad Calculations |
As regular readers will be aware, I don't like passive crossovers, and for any serious listening I'll always recommend using a fully active system. However, there are countless situations where people can't justify an active crossover and multiple amplifiers. Despite my own general preference for active systems, I still have three passive speakers in everyday use. One is my PC sound system, another is the clock radio in the bedroom (I simply cannot tolerate the pissant internal speakers, so have external boxes hooked up), and the last one is in my workshop.
+ +A great many people prefer passive boxes so they can 'mix-and-match' power amplifiers, as that's very inconvenient with active systems. One of the passive systems I have has remained passive simply because I can't perform power amp listening tests without it. While not strictly ideal, there is no doubt that a well designed passive system can perform extremely well, and the processes described here are intended to let you get the best results possible.
+ +L-Pads are used with passive crossover networks to adjust the sensitivity of one or more loudspeaker drivers. The least sensitive driver sets the overall system efficiency (in dB/W/m), and any others must be padded back so their sensitivity is the same. For example, a bass driver may have a sensitivity of 89dB (at 1W/1m), midrange may be 92dB and the tweeter 95dB. The padding needed is therefore ...
+ +
+ Bass      89dB (reference)
+ Mid       92 - 89 dB (3dB attenuation)
+ Tweeter   95 - 89 dB (6dB attenuation)
The amplifier power that's dissipated in the pads is completely wasted. While transformers could be used, this would become very expensive very quickly, and although far less power is wasted it's not an economical approach. There are a few 'high end' speaker systems that do use auto-transformers to provide level matching, but they are in the minority. The power dissipated (lost) in the pads is not easily calculated, because it depends on a great many variables. Some on-line calculators also work out power ratings for the L-pad resistors, but the figures given are grossly inflated and fail to consider the energy levels at the frequencies where the pads are working.
+ +The calculator here will not attempt to work out resistor power ratings, and it's up to you to make adjustments as required, based on the info provided below. If you are building a high power system or expect the highest fidelity, I strongly recommend that you do not use passive crossovers at all - they should be active, with separate amplifiers for each driver. When you consider the time and effort needed to design and build a quality passive network, it becomes apparent fairly quickly that an active system is likely to be cheaper, with far fewer compromises.
+ +L-Pads are a useful arrangement though, and when set up properly they ensure the crossover network 'sees' the desired impedance. You will still need impedance compensation circuits though, because nearly all loudspeaker drivers have an impedance curve that interferes with the crossover network, causing (sometimes severe) frequency response and phase anomalies. Impedance correction can minimise aberrations, but can be both difficult and expensive.
+ +
Figure 1 - Typical 2-Way Crossover With L-Pad
The drawing above shows a 2-way impedance corrected 12dB/ octave network. The values are shown for 8 ohm (nominal) drivers, and are described in detail in the article Design of Passive Crossovers. The impedance correction networks must be designed specifically for the loudspeaker drivers. There is some additional info that you'll need to know in the article Measuring Loudspeaker Parameters. These networks are not 'generic', but even if not fully optimised the results will usually be better than nothing. After correction, the impedance curve should be fairly flat across the crossover frequency and for at least an octave either side. For a 3kHz xover, the woofer and tweeter's impedances should be flat from 1.5kHz to 6kHz. That is the minimum requirement - a wider bandwidth is preferable.
+ +After correction, the driver impedances will be slightly greater than the voicecoil's DC resistance. Expect around 6 ohms for more-or-less 'typical' 8 ohm drivers. The correction networks are commonly found by experimentation unless the driver parameters are particularly comprehensive. The crossover network is designed to suit the corrected impedance, and not the nominal driver impedance.
+ +Note: L-Pads are not limited to speakers, and can be used anywhere that you need an attenuator with a defined input impedance. The load impedance must be used in place of speaker impedance, and can be any value desired.
+ + +There are two calculators, one to determine the values needed to obtain a given attenuation, and the other to work out the attenuation of a network you may find in a commercial (or DIY) crossover. It's important to understand that the speaker impedance is the actual (i.e. measured) value, including any impedance correction networks. If the nominal impedance is used, the attenuation may not be accurate, and without impedance correction the response will often be anything but flat.
+ +The very first exercise is to determine the resistive drop caused by the low pass inductor (this step is almost always forgotten!). A typical coil of around 600µH using 0.8mm wire will have a resistance (Ri) of about 0.53Ω. We can calculate the low frequency loss in dB with the formula ...
+ dB = 20 log (( Ri / Z ) + 1 )
For our example, this gives ...
+ dB = 20 log (( 0.53 / 6 ) + 1 ) = 20 log ( 1.088 ) = 0.735 dB
Alternatively, just insert the speaker impedance and inductor resistance into the 'Calculate Attenuation Of Existing Network' calculator below. Due to rounding, it will show 0.74dB but that's accurate enough for all practical purposes.
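If you'd rather script the calculation than use the web page, the formula translates directly; a minimal sketch:

```python
import math

def inductor_loss_db(r_coil, z_speaker):
    """Sensitivity loss (dB) caused by the crossover inductor's
    series resistance, for a given (corrected) speaker impedance."""
    return 20 * math.log10((r_coil / z_speaker) + 1)

# 0.53 ohm coil feeding a 6 ohm (corrected) driver
print(round(inductor_loss_db(0.53, 6.0), 2))    # 0.74 (0.735dB)
```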
+ +The inductor's series resistance reduces the woofer's sensitivity slightly, in this example by 0.74dB. The tweeter therefore needs to be attenuated by an additional 0.74dB, over and above the amount indicated by the different driver sensitivities. In a 3-way system, the midrange will also have a series inductor, and the same process is necessary to ensure that its resistance and the resulting loss of sensitivity are also considered. There is no requirement to compensate for series capacitors, as their ESR (equivalent series resistance) is low enough that its effect is well below the limits of audibility.
+ +[Interactive L-Pad calculator - the working version is on the ESP website; the formulae are given below.]
When analysing an existing network, you can leave the Rpar field empty to calculate the attenuation with just a serial resistor (Rser). This is not recommended for design, but some commercial crossover networks are made to a price with little concern for accurate response. Use of a proper L-Pad allows the impedance presented back to the crossover network to be maintained at the design value. Because of the resistances used, an L-Pad may help make loudspeaker impedance correction a little less critical than for a 'raw' driver.
+ +For example, a driver with a high resonance peak will be tamed, because the total driver + L-Pad impedance is limited by the parallel resistance. It's common to see a resonant peak of 40 ohms or more for some drivers, but that becomes impossible if there's a lower value resistor in parallel. This does not mean that impedance correction is not needed. No passive crossover can perform properly if the driver impedance changes with frequency.
+ +
+ Vr = 10^( A / 20 )      (antilog of A / 20) - Where Vr is the voltage ratio and A is attenuation in dB
+ Rs = Z × (( Vr - 1 ) / Vr )      Where Z is impedance and Rs is the series resistance
+ Rp = Z × ( 1 / ( Vr - 1 ))      Where Rp is the parallel resistance
+
+ A = 20 × log (( Rs + Z ) / Z )      Attenuation when only a single series resistance (Rs) is used (not recommended)
+ A = 20 × log ( Rs / (( Z × Rp ) / ( Z + Rp )) + 1 )      Attenuation with given impedance, Rs and Rp
You can use the above formulae in a spreadsheet if you don't want to keep referring to the web page. These formulae give the same answers as the calculators shown, but with no limit to the number of digits displayed. The calculators are limited to 3 decimal digits, but it's rare that you'll ever need to use more than one decimal place. When the speaker system is driven, voicecoil temperature rise will cause the crossover frequencies to change - hopefully only slightly, but possibly dramatically at high power. You can't do too much about the temperature rise, and this is one of the reasons I generally dislike passive networks. Loudspeaker parameters will also change with time, so expecting precision is impractical.
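As suggested, the formulae drop straight into a spreadsheet - or into a few lines of Python. This sketch also verifies the L-pad's defining property: the input impedance seen by the crossover remains exactly Z:

```python
import math

def l_pad(z, atten_db):
    """Series (Rs) and parallel (Rp) resistors for an L-pad giving
    atten_db of attenuation while presenting an input impedance of z."""
    vr = 10 ** (atten_db / 20)          # voltage ratio (antilog)
    rs = z * (vr - 1) / vr
    rp = z / (vr - 1)
    return rs, rp

def l_pad_attenuation(z, rs, rp):
    """Attenuation (dB) of an existing network across a load z."""
    p = (z * rp) / (z + rp)             # Rp in parallel with the speaker
    return 20 * math.log10(rs / p + 1)

rs, rp = l_pad(6.0, 6.0)                # 6dB pad for a 6 ohm driver
print(round(rs, 3), round(rp, 3))       # ~2.993 and ~6.029 ohms

# The crossover still 'sees' 6 ohms, and the attenuation round-trips
print(round(rs + (6.0 * rp) / (6.0 + rp), 3))       # 6.0
print(round(l_pad_attenuation(6.0, rs, rp), 3))     # 6.0
```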
+ +IMO, passive crossovers should only be used for low to moderate power (up to ~100W amplifier power), and for home listening at 'reasonable' volume levels. This helps to minimise temperature rise in the drivers and crossover components (especially inductors), and therefore limits the audible changes to the sound when everything is at an elevated temperature. This isn't an area that gets a lot of attention, but it should because the audible changes can be quite pronounced.
+ + +Power ratings for the L-Pad resistors are not particularly easy to calculate. There are so many variables that it's virtually impossible to provide a simple (or even a complex) formula. It's generally easy enough for tweeters, because they are relatively low power. A 100W system's tweeter will generally be capable of no more than about 10W (continuous programme material), so we know instantly that no resistor in the L-pad can exceed 10W, and 5W will most likely be enough. For the small extra cost, 10W resistors are preferred.
+ +A 100W system into 8 ohms requires 28V RMS (continuous), but will rarely exceed around 15V RMS above 3kHz. Since music has dynamics (well, not all perhaps), the average power is a lot lower. For a tweeter L-Pad, the resistors (series and parallel) should be rated for around 10W. This is overkill, but they are not expensive and you have a reasonable safety margin. Most of the time, they probably won't even get more than lukewarm. While it may be possible to get a rough idea of the power needed for an L-Pad used for a midrange driver by calculation, you will almost certainly need to verify it by measurement.
+ +The power ratings depend on the type of programme material, the amplifier's power rating and the crossover frequency. As shown above, tweeter L-Pads are easy enough, but the midrange driver in a 3-way system can be expected to sustain considerable power, especially if the crossover frequency from the woofer is less than 500Hz. If at all possible, get a midrange driver with a sensitivity that's not too different from that of the woofer. This won't always be feasible of course. If they are close to being the same, far less power needs to be thrown away by the L-Pad.
+ +Also, remember that the bass driver's series inductor will dissipate power. While it will be moderate at low levels (up to perhaps 10W average), it may be considerable if the system is driven hard from a large power amp (or driven hard with a smaller amp that's well into clipping). High continuous power (regardless of how it's produced) will cause everything that dissipates power to get hot. The effects range from power compression (see Loudspeaker Power vs. Efficiency) to response changes due to crossover misalignment caused by resistance changes.
+ +The effects are not particularly subtle - at a temperature of 200°C, the resistance of copper has increased by a factor of around 1.65, so a nominal 6 ohm (DC) voicecoil will have a resistance of 10.2 ohms. The impedance is increased by a similar margin, so an 8 ohm driver will become 13 - 14 ohms. It's unrealistic in the extreme to imagine that the crossover network can tolerate such a change without shifting its characteristics. Thermal effects are not usually apparent in the L-Pad resistors, because the resistance wire is designed to have a very low thermal coefficient of resistance.
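The voicecoil heating figures can be checked with copper's temperature coefficient of resistance (about 0.00393/°C, referenced here to 20°C - the exact factor depends on the reference temperature used, which is why this lands slightly above the ~1.65 quoted). A quick sketch:

```python
ALPHA_CU = 0.00393      # copper tempco per degree C (approx., ref. 20°C)

def copper_resistance(r_ref, t_c, t_ref=20.0):
    """Resistance of a copper conductor at temperature t_c (linear model)."""
    return r_ref * (1 + ALPHA_CU * (t_c - t_ref))

print(round(copper_resistance(1.0, 200.0), 2))   # factor of ~1.71 at 200 deg C
print(round(copper_resistance(6.0, 200.0), 1))   # 6 ohm voicecoil -> ~10.2 ohms
```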
+ + +The purpose of this article was primarily to provide the calculators, but it's also necessary to point out the other requirements for passive systems. While most people think that passive crossovers are the most suitable for loudspeaker systems, this is not the case. Today, it's actually easier (and often cheaper) to use a vastly superior electronic crossover. The cost penalty for a DIY system is minimal, and it may even work out cheaper. The capacitors and inductors in the Figure 1 crossover would not be cheap, and the impedance correction circuits add considerably to the overall cost.
+ +As noted earlier, this article should be read in conjunction with Design of Passive Crossovers, because that's one of the very few articles on the ESP site that discusses passive crossovers. Also see Series vs. Parallel Crossover Networks and Measuring Loudspeaker Parameters, as these are relevant to crossover design in general.
Elliott Sound Products | +LC Oscillators |
LC (inductor/ capacitor) oscillators are almost a thing of the past now, with digital synthesis having taken the lion's share of modern applications. While digital synthesis can cover a wider range and do things that 'simple' LC oscillators cannot, it requires far greater effort to design and build. Like most things analogue, LC oscillators may be considered 'old hat' by many people, but they are easy to build and can provide very good performance.
+ +While LC oscillators aren't particularly useful for audio applications, with large enough inductance and capacitance they can be made to work at any (low) frequency you like. High frequencies are the most common place you'll find these circuits, with typical frequencies ranging from a few hundred kilohertz up to around 100MHz or so. Early radio ('wireless' as it was known in the early days) and TV receivers used LC oscillators, as did many other circuits (wireless remote control being a common application).
+ +Crystal oscillators are actually a specialised version of the more traditional LC tuned circuit, except that the tuning is done by mechanical resonance rather than electrical resonance. In this role, the inductance is roughly equivalent to compliance, and the capacitance is (equally roughly) equivalent to mass. Both of these are physically very small, so the frequency is high. The equivalent inductance is generally high (in excess of 1H) and the capacitance very low (less than 1pF), and the Q ('quality factor') is extremely high. Quartz is a piezo-electric material, so when it's flexed it generates a voltage, and conversely when a voltage is applied, the quartz flexes. Metallised electrodes are deposited onto the thin quartz crystal (which is 'cut' to suit the application).
+ +Many audio enthusiasts have (albeit inadvertently) built LC tuned circuits when constructing a graphic equaliser for example. These originally used physical inductors until the development of the gyrator - a simulated inductor (see Active Filters Using Gyrators - Characteristics, and Examples). Gyrators perform as an inductor, by 'reversing' the action of a capacitor. They can be built to simulate a very large inductance (many Henrys), but aren't subject to magnetic fields as would be the case with a physical inductor. However, they are not suited to (and aren't necessary for) radio frequencies.
+ +Passive loudspeaker crossover networks use inductors and capacitors, and they follow the same rules as any other LC circuit. A 12dB/ octave crossover network with an open-circuit (or missing) driver will act as a series tuned circuit at the resonant frequency. It will appear as close to a short-circuit across the amplifier at resonance! This is something that everyone should know, but most constructors are blissfully unaware of the damage it can do.
+ + +When an inductor and capacitor are wired in series or parallel, they have a response that's dictated by the value of capacitance and inductance. With parallel resonance, the tank circuit's impedance is (theoretically) infinite, but naturally it can never achieve this due to circuit losses (resistance in particular). In contrast, a series LC network has (again theoretically) zero impedance at resonance. Resistive loss (particularly the resistance of the wire used to make the inductor) is the limiting factor. A series network has one other (very important) characteristic, in that the input voltage is multiplied by the Q ('quality factor') of the circuit. A series LC circuit with a Q of 100 and an input voltage of 1V RMS, will produce a voltage of 100V RMS at the junction between the capacitor and inductor. An example would be a 100µH inductor in series with a 10nF capacitor, having a total series resistance of 0.99Ω.
+ +The resonant frequency is determined by the formula ...
+ f = 1 / ( 2π × √( L × C ) )      More commonly shown as ...
+ f = 1 / ( 2π √ LC )
For the example described above, resonance is 159.155kHz. In radio frequency work, the inductance and capacitance will be a great deal lower. To resonate at 1.59MHz, the inductor could be reduced to 10µH and the capacitance reduced to 1nF (in RF circles this would often be referred to as 1,000pF). At resonance, the capacitive reactance and inductive reactance are equal, but the signals across each are 180° out of phase. For the 159kHz example, the inductive reactance (XL) is 100Ω, as is the capacitive reactance (XC).
+ XL = 2π × L × f
+ XC = 1 / ( 2π × C × f )
In each case, f is in Hertz, L is in Henrys and C is in Farads. The values are selected so they are 'sensible' for the impedance of the surrounding circuitry. What constitutes 'sensible' varies widely, and most radio (and TV) circuits will have impedances of several kΩ. This means comparatively large inductance, and equally comparatively small capacitance. In most of the examples that follow, I've aimed for an impedance at resonance of around 100Ω, but some of the other examples will be different as the values can become inconvenient.
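The worked example (100µH, 10nF, 0.99Ω series resistance) can be checked directly from the formulae above; a minimal sketch:

```python
import math

L, C, R = 100e-6, 10e-9, 0.99   # the series example from the text

f0 = 1 / (2 * math.pi * math.sqrt(L * C))   # resonant frequency
xl = 2 * math.pi * f0 * L                   # inductive reactance at f0
xc = 1 / (2 * math.pi * f0 * C)             # capacitive reactance at f0
q = xl / R                                  # Q of the series circuit

print(round(f0))                    # 159155 Hz (159.155kHz)
print(round(xl, 1), round(xc, 1))   # 100.0 100.0 - equal at resonance
print(round(q))                     # ~101, so 1V in gives ~100V across L or C
```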
+ +The tank circuit is fundamental to LC oscillators, and it's also used for transmitters of all powers. In many cases it uses a tapped inductor (basically an autotransformer), but it can also have a second winding, often to enable the circuit to boost the signal level to ensure reliable oscillation. In other cases, the transformer action is used to match the impedance of the oscillator to an external circuit or transmission line. As interesting as these topics can be, they will only be mentioned in passing, as the possibilities are endless, and this is an article, not a book about RF circuits.
+ +
Figure 1.1 - Basic LC Tank Circuits
The basic tank is simply an inductor and a capacitor, along with parasitic resistance. They can be classified as being in series or parallel, depending upon the way the signal is applied. For a series circuit, the signal is in series with the loop, and for parallel circuits the signal is applied from an external source (i.e. outside the loop). The impedance of the source should be high for a parallel circuit, and low for series. Coil (and wiring) resistance is parasitic, and needs to be carefully controlled to obtain high Q. Note that inducing a current into the inductor with a secondary winding is effectively the same as inserting a physical voltage source.
+ +
Figure 1.2 - LC Tank Circuit With Inductive Coupling
The drawing poses an important question - is it a series or parallel resonant circuit? While it looks like a series tuned circuit, it's actually parallel. I added a 10Ω resistor to the drive winding, and monitoring the input current shows that the current falls to a minimum at resonance, which tells us that the impedance is at maximum. It's important to understand this, as many oscillators may look like the tuned circuit is a series type, but it almost always behaves like a parallel network. We would understandably expect that the input should be a sinewave, but usually it's not. This is because many oscillators operate in Class-C, so the active device may only conduct for a small fraction of a cycle. So, while the input distortion is very high, the output distortion will be low. A high-Q tuned circuit reduces the distortion more effectively than a low-Q circuit.
+ +The amount of coupling (k) determines how much of the magnetic flux from the drive coil passes through the resonant coil. I used a value of 0.5, which indicates loosely-coupled coils. The actual value depends on how close the coils are to each other, the type of core (ferrite or air) and whether the magnetic circuit is open or closed. A closed magnetic circuit can only be achieved with a high-permeability core that encloses the windings (e.g. a toroidal or E-I type core). The maximum coupling coefficient is unity, meaning that the flux cuts through both coils with no 'leakage'.
+ +There is always phase shift through any frequency-selective circuit. Below resonance, the inductor is dominant, so amplitude increases with increasing frequency. Above resonance, the capacitor is dominant, so amplitude decreases with increasing frequency. At resonance, there is no phase shift with either a series or parallel resonant circuit. Because inductive and capacitive reactances are equal and opposite they cancel, leaving only the coil's winding resistance (plus any other stray resistance).
+ +
Figure 1.3 - Basic (Fig. 1.1) LC Tank Circuit Responses
The two examples shown provide a capacitive and inductive reactance of 100Ω at 159kHz. Increasing the feed resistance (Rfeed) for the parallel circuit improves the selectivity (Q) of the tuned circuit. With 1k, the bandwidth (-3dB from the peak) is 17.4kHz and the Q is 9. If Rfeed is increased to 10k, the bandwidth is 3.17kHz, a Q of 50. Figure 1.3 shows the response with the values shown in Fig. 1.1. The main limitation for the Q is the coil's series resistance. The signal source is assumed to have zero impedance for the two tests.
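The Q figures quoted follow directly from the -3dB bandwidth (Q = f0 / BW); a quick check:

```python
def q_from_bandwidth(f0_hz, bw_hz):
    """Loaded Q of a resonant circuit from its -3dB bandwidth."""
    return f0_hz / bw_hz

print(round(q_from_bandwidth(159.155e3, 17.4e3)))   # 9  (Rfeed = 1k)
print(round(q_from_bandwidth(159.155e3, 3.17e3)))   # 50 (Rfeed = 10k)
```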
+ +In theory (and assuming no losses), once the tank circuit is triggered into oscillation, it will continue to transfer energy from the capacitor (stored in the electric field) to the inductor (stored in the magnetic field) and back again indefinitely. The real world does not allow this of course, as losses are ever-present (even if the coil is a superconductor, as used in MRI systems). Resistance is inevitable in other parts of the circuit, and in the MRI case energy is also absorbed by the patient. All 'ordinary' resonant circuits are subject to greater losses (lower Q), and the oscillation dies out quickly.
There's another factor that affects the Q as well - the dielectric losses of the capacitor. Ceramic caps (usually NP0/ C0G, zero temperature coefficient) are common, but in the early days silvered mica was popular. Silvered mica caps are still available, but at a serious cost penalty. Polystyrene (also expensive) is also very good, and polypropylene can be used when high values are required. The so-called 'tuning gang' (a [usually dual] variable capacitor) was used extensively in AM and FM radios until fairly recently. Air is a particularly good dielectric, and has very low losses.
+ +In some cases, the ceramic cap would be chosen to have a particular temperature coefficient to offset the effects of temperature on the tuned circuit. For example, an N750 ceramic cap will show a 2.2% capacitance increase at 0°C, and a decrease of 2.2% at +50°C (25°C is the reference temperature) [ 1 ]. Other common temperature compensation classifications are N450, N330, N220, N150 and N75. The 'N' number specifies PPM/ °C. Somewhat predictably, I won't be covering temperature compensation further as it's very specialised.
+ +When it comes to coils (inductors), many RF circuits use Litz wire (multiple thin individually insulated strands woven together) to minimise skin effect. This is a problem at radio frequencies, because the current tends to flow on the outer 'skin' of the conductor, increasing its effective resistance. Many RF coils are made with silver plated wire to give high conductivity for the outer skin, especially for higher power applications. You may have seen RF coaxial cable with a single copper-plated steel inner conductor, and this is done for the same reason. The steel is 'incidental' - it's there to provide support for the copper plating, and of course it adds strength to the cable too.
+ +For very high-power circuits, it used to be common to wind coils with copper tube, with cooling water pumped through the tube. The water was (is) generally de-ionised or distilled to ensure low electrical conductivity. When you're dealing with transmitters delivering hundreds of kilowatts, you need all the help you can get. Needless to say this is outside the scope of this article, and is only mentioned in passing. For completeness, I must include a quote taken from the Radiotron Designer's Handbook ...
+ With any valve oscillator an exact analysis of the method of operation is very difficult, if not impossible, and it is usual to treat the circuits as being linear (at least for simple design procedure) although they depend on conditions of non-linearity for their operation. This simplification is valuable because the mathematical analysis which can be carried out yields a great deal of useful information concerning the behaviour of the circuits. That the circuit operation is non-linear can be readily appreciated by considering the fact that the amplitude of the oscillations, once started, does not continue to build up indefinitely. The energy gain of the system reaches a certain amplitude and then progressively falls until equilibrium is established. The limits are usually set by the valve-plate current cut-off [which] occurs beyond some value of the negative grid voltage swing, and plate current saturation or grid current damping will limit the amplitude of the grid swing in the positive direction.
The above is not specific to valves, and it applies regardless of the type of amplifying device. These days, full analysis is possible and we have the benefit of very powerful computers, simulators and other tools that didn't exist at the time. However, it's often pointless anyway, because the tools cannot take the physical conditions into consideration unless each is specifically accounted for. In particular, stray capacitance, mechanical rigidity and 'incidental' losses via radiation due to the coil's magnetic field interacting with its surroundings remain difficult to model.
Figure 1.4 - Requirements For An Oscillator
The requirements for all oscillators are the same. It doesn't matter if they are audio or RF, tuned with caps and coils or caps and resistors; they all share the same basics. The first is an amplifier. This must have sufficient gain at the tuned frequency to ensure that oscillation is continuous. If the gain is too low, oscillations will not start, or may be triggered at power-on but die out quickly. The frequency or phase sensitive network is designed to provide zero phase shift at the required frequency of oscillation, so the output of the amplifier is fed back to the input as positive feedback. Finally, there's a non-linear element (explicit or implied) that prevents the oscillation amplitude from increasing forever. In most RF oscillators, this is an 'implied' part of the circuit, so no additional parts are needed, but the power supply voltage (or available current in some cases) is the limiting factor.
For audio oscillators or RF oscillators requiring high purity sinewave output (low distortion), other methods are used. This will typically take the form of an automatic gain control system, which can be as simple as a thermistor or lamp (at least for audio), or a secondary tuned circuit to remove harmonics generated by the oscillator. In most cases, RF oscillators rely on the tuned circuit to get acceptably low distortion. That's certainly the case with all of the circuits shown below. Distortion (as simulated at least) is less than 3% for all examples. This can be improved by operating the transistor in Class-A (all circuits shown use Class-C, where conduction is less than 180°).
Class-C is very common with RF circuitry, as there is (almost) always a tank circuit that completes the cycle, and it only needs a small 'injection' of energy to maintain oscillation. RF is very different from audio, even though the two may seem to be similar in many ways. The greatest difference is bandwidth, and in most cases this makes audio far more challenging. An AM broadcast receiver has a frequency range of (roughly) 3.2:1, and an audio bandwidth of about 5kHz - a very small fraction of the radio carrier frequency (which is converted to [typically] 455kHz in a superheterodyne AM receiver). The audio bandwidth is a mere 1% of the RF signal. Audio covers the range from (nominally) 20Hz to 20kHz, a ratio of 1,000:1. Note that due to broadcasting requirements, AM radio has an upper frequency limit of about 7kHz, but even that is rarely achieved by most receivers.
Probably the two most common RF oscillators are the Hartley and Colpitts. A variation on the Colpitts oscillator is the Gouriet-Clapp, which provides higher frequency stability. The Hartley circuit gets its name from the inventor, Ralph Hartley, in ca. 1915. The Colpitts oscillator is a variation on the Hartley, in that it uses a capacitive signal 'splitter' instead of a tapped inductor. It was invented in ca. 1918 by Edwin Colpitts. The Gouriet-Clapp is a variation of the Colpitts circuit, and it looks very similar but for the addition of an extra capacitor.
Preceding the circuits mentioned above was the Armstrong oscillator, invented by Edwin Armstrong in 1912. It's a little more complex, and uses two coils, often with some adjustment of the mutual coupling between the two. The 'feeder' coil (connected in the plate or collector circuit) is referred to as a 'tickler' coil. Frequency stability is acceptable, but isn't as good as the Hartley, Colpitts or Clapp (in ascending order of stability). The Armstrong oscillator was the basis for two of Armstrong's greatest contributions to radio - continuous-wave transmission using an oscillator to set the frequency, and the superheterodyne (aka superhet) receiver (after some controversy, the earliest patent for the invention is now credited to French radio engineer and radio manufacturer Lucien Lévy) [ 2 ]. Prior to that, 'regenerative' receivers were common, using positive feedback to increase the available gain and selectivity.
When all of these circuits were devised, the only amplifying device available was the valve (vacuum tube). They can also be made using bipolar transistors (BJTs), junction FETs (JFETs) or MOSFETs. Any device capable of amplification will work, including opamps, although they generally have limited frequency response. All oscillators can be driven with common emitter, common collector or common base transistor topologies, or the equivalent for JFETs, MOSFETs or valves. Most of the circuits shown use the common emitter connection, although common collector (emitter follower) connections are shown for Hartley and Colpitts circuits.
The three most common circuits are shown in the next section. These are all common-emitter designs, and each is tuned for 159kHz. In some cases there will be minor frequency deviations caused by coupling capacitors (in particular the cap to the base of the transistor), but these are not considered in the circuits shown. All circuits use the same bias and emitter resistances. Most of these oscillators operate in Class-C (much less than 180° conduction time), and signal purity is provided by the tank circuit. For optimum performance, this requires the highest Q possible.
Strictly speaking, the capacitance is a combination of the actual capacitance used, along with inevitable stray capacitance. This may include the collector (or drain/ plate) capacitance, along with any capacitance between the wiring and chassis. There is also inter-turn capacitance in the coil itself. It's important to ensure mechanical rigidity, which in the early days often meant using single-core wiring, suspended between circuit nodes. This isn't an issue with a PCB, but at very high frequencies the losses inherent in fibreglass can play havoc. Ceramic or other low-loss materials are necessary when the frequency is greater than 1GHz, and at lower frequencies when significant power is involved.
In the examples shown here, the coil is indicated as being air-cored. However, this would be rather large for 100µH, and it would almost certainly use a ferrite core to keep the size down. The choice of ferrite composition is highly dependent on the expected frequency, and a core intended for use with lower frequencies (including audio) would show high losses with RF. A coil calculator for air cored coils can be found here: Single Layer Air Core Inductor Calculator. A 25mm diameter coil with 100µH inductance will be 5.1mm long using 0.1mm enamelled wire, with 51.6 turns. This would require about 4 metres of wire.
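If you'd rather check the numbers than rely on a web calculator, Wheeler's 1928 approximation is the formula behind most single-layer air-core calculators. A minimal sketch using the dimensions quoted above (accuracy degrades for very short coils, but it still lands close to the quoted 100µH):

```python
def wheeler_uH(radius_cm: float, length_cm: float, turns: float) -> float:
    """Wheeler's approximation for a single-layer air-cored solenoid.

    L (uH) = 0.394 * r^2 * n^2 / (9r + 10l), with r and l in centimetres.
    """
    return 0.394 * radius_cm ** 2 * turns ** 2 / (9 * radius_cm + 10 * length_cm)

# 25 mm diameter (r = 1.25 cm), 51.6 turns of 0.1 mm wire -> winding length ~5.16 mm
L = wheeler_uH(radius_cm=1.25, length_cm=0.516, turns=51.6)
print(f"{L:.1f} uH")   # very close to the 100 uH quoted above
```
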
Figure 2.1 - Armstrong/ Meissner Oscillator
The Armstrong or Meissner design is really the 'grandfather' of all oscillators. Invented in 1912 by Edwin Armstrong and independently by Alexander Meissner in 1913 [ 3 ], this was the most important contribution to radio of all. Transmitters rely on an oscillator to provide a known frequency for transmission. Early (on-off only) transmitters used just a spark-gap, followed later by a tuned circuit excited by a spark gap or a high frequency alternator, but the spark-gap transmitters were very wide bandwidth and could not be adapted for voice transmission. While theoretically possible, alternators were not used for voice transmission either.
Like all LC oscillators, the Armstrong/ Meissner oscillator uses a tuned circuit. The location of the tuned circuit varies; it can be in the plate (for a valve) or collector (using a transistor), with a loosely coupled ('tickler') coil to provide feedback to the grid/ base. There are several variations, and in some, the tuned circuit is in the grid or base circuit, allowing the tuning capacitor to be grounded. Note the coil's 'polarity' marks in Figure 2.1 - the dot signifies the start of the winding. The coils are loosely-coupled, meaning that they are usually wound side-by-side on the former (with a small gap between them). Amongst other things, this prevents a high degree of interaction between the two, and helps to reduce distortion. Determination of the tickler coil's turns and spacing from the tuned coil was done (usually empirically) in the design phase.
As with all of the designs shown, the amplifying device can be a valve, bipolar transistor, JFET or as part of a dedicated IC. In the simulation I ran, a JFET was the most stable, as the much higher gain of a BJT caused the circuit to misbehave. This was the only oscillator I simulated that was 'finicky' about the coupling between the coils, and if the coupling is too great the circuit has very high distortion. All others simulated pretty much perfectly from the outset, and required no tweaking. I suspect that 'real life' would be similar, one of the reasons the Armstrong/ Meissner circuit isn't used often any more.
In some cases (particularly when impedance matching was required), a third coil was used for the output. This could be close-coupled to L1, or use loose coupling to prevent unwanted interactions between the tank circuit and the 'outside world'. A separate output winding can be added to any of the circuits shown below as well.
While the Armstrong may be the grandfather of LC oscillators, it's been superseded for the most part. One reason is that it's sensitive to the coupling between the tuned circuit and 'tickler' coil, which isn't an issue in the circuits that followed. Stability is a very important parameter for an RF oscillator, because the frequency is high, and even a small drift (in percentage terms) means a large frequency change. AM radio stations are spaced only 9kHz or 10kHz apart, so a drift of a few kHz is the difference between your signal being received clearly, subjected to heterodyning (adjacent channel interference causing high-pitched whistles along with the audio) and/or not being picked up at the expected frequency. Prior to crystal oscillators being used, this would be a major problem if your transmitted signal changed frequency with time, temperature or whim.
If a transmitter drifts, everyone tuned in is affected, and that's a real problem. This is one of the reasons that temperature sensitive ceramic capacitors were developed, so that temperature changes wouldn't affect the tuning (for transmitters and receivers). Modern transmitter and receiver designs render these points moot for the most part, but these designs have mainly been evolutionary, not revolutionary. One thing that did revolutionise transmission and reception was the crystal oscillator, followed by digital synthesis, and these have made most LC oscillators a part of history. That doesn't mean that they are useless or pointless, as it's far easier to build a fully tunable LC oscillator than it is to put together a digital frequency synthesiser!
The basic Hartley oscillator is shown in Fig. 3.1. The total inductance is 100µH, with 10nF in parallel. The circuit oscillates at 159kHz, as determined by the inductance (L1) and capacitance (C1). The coil has a tap that is usually somewhere between 50% and 25%. The tap means that the signal to the base is inverted so it's in phase with the collector signal (positive feedback).
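The quoted 159kHz follows directly from the standard resonance formula, fo = 1 / (2π × √(L × C)). A quick check:

```python
from math import pi, sqrt

def resonant_freq(L_henry: float, C_farad: float) -> float:
    """Resonant frequency of an LC tank: fo = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2 * pi * sqrt(L_henry * C_farad))

f0 = resonant_freq(100e-6, 10e-9)   # 100 uH in parallel with 10 nF
print(f"{f0 / 1e3:.1f} kHz")        # ~159.2 kHz, as stated
```
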
Figure 3.1 - Hartley Oscillator
The tap simply provides enough positive feedback to ensure reliable oscillation. Reducing the drive level into the base lowers distortion, which is important for many RF applications. The output level will be from (close to) zero to +24V due to the tuned circuit (an output of almost 8.5V RMS). The reactance of C2 and C3 is only 10Ω and 100Ω (respectively) at 159kHz. These caps can be made smaller for higher frequencies.
With a Colpitts design, a single inductor is used, with two capacitors, each with twice the required tuning value, to split the signal. The centre-tap of the tuning caps is grounded, and the resulting signal at the base is inverted, providing positive feedback.
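The 'two caps of twice the value' rule is just the series-capacitance formula at work. A sketch (the 20nF values are chosen here to match the 10nF/159kHz example above, not taken from the schematic):

```python
from math import pi, sqrt

def series_c(c1: float, c2: float) -> float:
    """Effective capacitance of two capacitors in series."""
    return c1 * c2 / (c1 + c2)

# Two 20 nF capacitors in series give the 10 nF tuning value,
# so the tank still resonates at ~159 kHz with L = 100 uH.
c_tune = series_c(20e-9, 20e-9)
f0 = 1.0 / (2 * pi * sqrt(100e-6 * c_tune))
print(f"{c_tune * 1e9:.0f} nF, {f0 / 1e3:.1f} kHz")
```
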
Figure 3.2 - Colpitts Oscillator
The collector load resistor (R2) may be accompanied by an RFC in series, to provide a higher impedance at radio frequencies. This isn't necessary at low frequencies such as 159kHz, but it helps as the fT (transition frequency) of the transistor is approached. As shown, the circuit is perfectly happy at up to 20MHz and likely beyond. The fT of a BC549 is around 100MHz, if that helps at all. Expecting higher than perhaps 30-40MHz would almost certainly be unwise.
Tuning a Colpitts oscillator may seem like a challenge, but a common approach is to add a tuning cap in parallel with the coil. The tuning frequency is easily calculated, as it's based on the existing series caps in parallel with the tuning cap.
James K. Clapp published his design in ca. 1948, and provided a full paper on the oscillator in 1954 [ 4 ]. The formula to determine the frequency is more complex than the others, as it uses a combination of series and parallel capacitors in the tuned circuit. The circuit is also (and preferably) referred to as the Gouriet-Clapp oscillator, because it was independently developed by Geoffrey Gouriet for the BBC in Britain in ca. 1938. The Gouriet circuit was not published until ca. 1947.
Figure 2.3 - Gouriet-Clapp Oscillator
Although the circuit is superficially similar to the Colpitts oscillator, the primary tuning capacitor (C1) is in series with the tuning coil. The frequency is also influenced by the series combination of C2 and C3. With the values shown, the effective capacitance of C2 and C3 in series is (roughly) 18nF, and the frequency is determined by the following formulae ...
Cp = C2 × C3 / ( C2 + C3 )
fo = 1 / ( 2π × √( L × C1 × Cp / ( C1 + Cp )))
fo ≈ 198kHz for the example shown in Fig 2.3
If (for example) C2 were reduced to 47nF, the series combination is 14.98nF (15nF is close enough) and the frequency is increased to 205.5kHz. This has been verified by simulation, with the calculated and simulated frequency being almost identical. While the tuning frequency is more difficult to calculate, the Gouriet-Clapp circuit has the best frequency stability of the three major types, so is a very good choice. The two parallel caps (C2 and C3) must be high-stability types. The ratio between them is somewhat arbitrary, but common usage indicates that it will be within the range of 2:1 to 5:1, with the upper capacitor (C2) being the smaller of the two.
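The arithmetic above can be checked in a few lines. Note that not all component values are stated in the text; C1 = 10nF, C2 = 100nF, C3 = 22nF and L = 100µH are assumed here because they reproduce the quoted 18nF series combination, 198kHz, and 205.5kHz figures:

```python
from math import pi, sqrt

def clapp_freq(L, c1, c2, c3):
    """Gouriet-Clapp oscillation frequency.

    The effective tuning capacitance is C1 in series with (C2 in series with C3).
    """
    cp = c2 * c3 / (c2 + c3)           # ~18 nF for C2 = 100n, C3 = 22n
    c_eff = c1 * cp / (c1 + cp)
    return 1.0 / (2 * pi * sqrt(L * c_eff))

# Component values assumed from the quoted results, not stated explicitly:
f1 = clapp_freq(100e-6, 10e-9, 100e-9, 22e-9)   # ~198 kHz
f2 = clapp_freq(100e-6, 10e-9, 47e-9, 22e-9)    # C2 reduced to 47 nF -> ~205.5 kHz
print(f"{f1 / 1e3:.1f} kHz, {f2 / 1e3:.1f} kHz")
```
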
It may seem odd to use an emitter follower (or its equivalent with any other device, including a valve). Follower circuits have a voltage gain of less than unity (typically between 0.9 and 0.98). The small loss is easily compensated though, since it's easy to make the tuned circuit provide the necessary gain to ensure oscillation. Followers have an advantage, in that the output impedance is low, making it easier to drive following circuitry.
Figure 4.1 - Hartley Oscillator (Follower)
An emitter-follower Hartley oscillator is shown above, and the tuning coil provides the necessary voltage step-up via transformer action. As before, the tapping point is around 25%, so the AC voltage at the base of Q1 will be greater than the voltage at the emitter. The idea is to provide enough step-up via transformer action to get reliable oscillation, but not so much that the transistor is overdriven, as that will cause excessive distortion.
Figure 4.2 - Colpitts Oscillator (Follower)
For a Colpitts oscillator using the tapped capacitance, we get the same step-up action as before. This will require you to perform some calculations or experiments, as it may not be immediately apparent. The capacitors are normally equal, providing a (nominal) 2:1 step-up. While this may seem excessive, it usually works well enough in practice.
The most common tuning method is to use a variable capacitor. Examples can be found in older (valve or transistor) radios. You can still get them, but most now use a plastic film dielectric, which is much less stable than air. The advantage is that the capacitor is a lot smaller for the same capacitance. Some are seriously expensive, particularly anything classified as 'vintage'. Most AM radios used a dual-gang variable capacitor, with one section used for the local oscillator and the other to tune the incoming RF signal.
There's a 'gotcha' when tuning an oscillator. If the capacitance (or less commonly the inductance) is changed, the frequency changes as expected, but so does the tuned circuit's Q. You should recall from Section 1 that the resonant frequency occurs when the capacitive and inductive reactances are equal. If one is varied, the effective impedance of the circuit is altered, so the Q and (by implication) amplitude are affected.
Figure 5.1 - Variable Gouriet-Clapp Oscillator
Note that I made no attempt to optimise the above circuit, other than to provide values that allowed it to oscillate across the range shown. With an output frequency of 523kHz the amplitude is 5.12V RMS, falling to 3.27V (RMS) at 1.7MHz. The variable output level was (comparatively) easy to cure with valves, because many were available with a 'remote cutoff' grid construction, meaning the gain could be varied by changing the bias (this was also used for AGC - automatic gain control).
In most cases, the range needed isn't particularly great, as AM radio only spans the frequency range from 530-1700kHz (this differs slightly by country). That's a ratio of 3.2:1, and while the oscillator level will vary, manufacturers made an effort to minimise the amplitude variation. This article isn't about AM radio receivers though, so I won't be providing any more detail. All of the oscillators shown above can be made variable, but the Colpitts is harder than the others because there are two capacitors. A small frequency variation is possible by changing only one cap, but it's not a common approach. As noted above, it's more common to reduce the value of the series caps, and place a variable capacitor in parallel with the coil.
As noted above, a crystal (aka xtal) is an electro-mechanical resonator, using quartz as the piezoelectric medium. Crystals have extremely high Q (tens of thousands is common), and consequently very good frequency stability. Where necessary, crystals are housed in a temperature-controlled mini-oven (OCXO - oven controlled/ compensated crystal oscillator). A variation is the TCXO - temperature compensated crystal oscillator (the term TCXO is sometimes also used for 'temperature controlled crystal oscillator'). The frequencies available range from a few tens of kHz up to about 200MHz, but that's at the extreme end. Without compensation, the frequency drift is typically around 0.6PPM/°C (just over 1 second per month).
In modern circuits, there is often provision for direct connection of a crystal to the IC (many microcontrollers have this feature). The necessary circuitry is internal, and only requires the connection of the desired crystal and (usually) a pair of loading capacitors. These can often be 'tweaked' to pull the crystal frequency a little - the range is small though. The most common 'cut' applied to crystals is known as the AT-cut. I don't propose to go into details here, as there's a great deal of information elsewhere.
One of the most common circuits is the Pierce oscillator, using a CMOS inverter. The feed resistor (R1) is commonly left out, as a CMOS inverter has enough output resistance to limit the drive level to a safe value. C1 and C2 depend on the crystal itself, and datasheets will usually provide the optimum value for a given crystal. The range is typically between 10pF and 100pF, but higher values are sometimes used. In some cases a trimmer capacitor is used (usually in place of C1) to allow a small amount of variation. Some CMOS ICs will require R2 to force the inverter into 'linear' operation, but this is usually not needed. If included, the value will typically be at least 1MΩ.
Figure 6.1 - Xtal Equivalent Circuit And Basic Pierce Oscillator
The values in the equivalent circuit are ... unusual, and would never be found in a 'traditional' tuned circuit. The inductance is very high (250mH) and the capacitance extremely low (40fF - femto-farad, or 0.04pF), but be aware that these are not physical values, but are used for modelling the crystal's behaviour. The high inductance and low series resistance contribute to a very high Q, with the circuit shown resonating at 1.59MHz with a Q of around 50,000!
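The resonance and Q figures quoted can be reproduced from the motional parameters. The series resistance isn't stated above; 50Ω is assumed here because it yields the quoted Q of about 50,000:

```python
from math import pi, sqrt

L = 250e-3    # 250 mH motional inductance (from the equivalent circuit)
C = 40e-15    # 40 fF motional capacitance
R = 50.0      # series resistance: assumed value, chosen to match the quoted Q

f_series = 1.0 / (2 * pi * sqrt(L * C))   # series-resonant frequency, ~1.59 MHz
Q = 2 * pi * f_series * L / R             # Q = X_L / R at resonance, ~50,000
print(f"{f_series / 1e6:.2f} MHz, Q = {Q:,.0f}")
```

For comparison, a 'traditional' LC tank with a few µH and a loaded Q of a couple of hundred is doing well; the crystal's enormous L/C ratio is what makes its Q so extreme.
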
Because there is so much available material for crystal oscillators (and Reference 7 is recommended reading) I don't intend to go any further on this topic. Crystal oscillators are mentioned here simply because the crystal itself is equivalent to a series resonant circuit, with the crystal providing the inductance and capacitance. Today, these are probably the most common radio frequency oscillators used, because they remove the tedium of winding coils and perfecting circuits to get acceptable frequency stability.
Of course, crystals aren't perfect, and this is particularly true for the 32.768kHz crystals used in quartz clocks. The cheap movements are often less accurate than a decent mechanical clock, and they have fairly poor temperature stability and initial accuracy. It used to be that (quality) quartz clocks were very accurate, but those are now consigned to the dustbin of history.
Although LC oscillators aren't used for 'true' audio applications, they are still an important analogue building block. Since radio is intended for audio, the topic is relevant. Even if it weren't, oscillators in general are an interesting topic, and while you may not need to use an LC oscillator any time soon, knowing the basics of how they work is an important part of general electronics knowledge. RF circuits seem rather mysterious to many people, and sometimes they seem to defy the laws of physics. They don't, but RF is a very different world from that of audio.
This article is intended only as a short introduction to the world of RF oscillators. While countless hobbyists have built oscillators, this is often an unwanted byproduct. Any time you have sufficient positive feedback (due to excessive gain, poor shielding between preamps and power amps, etc.) you risk creating an oscillator. Its frequency won't be stable or predictable, because there's no defined resonant circuit, other than by accident. Audio oscillators are very different, and many examples are shown in the article Sinewave Oscillators - Characteristics, Topologies and Examples.
Radio would never have been possible without the contributions of the early pioneers who devised the circuits described here, and (as always) a great deal of the development was done to facilitate telephone systems, which were the basis of all modern electronics. As with many other articles, this one is not something you'll need very often, and many in electronics will never need to know anything other than how to construct a crystal oscillator. Even that's becoming uncommon, as most microcontroller boards have already done the hard work, and all that's left is to provide power and some code, along with interfaces to the outside world.
References
1 - Ceramic Capacitor Data (Tecate Group)
2 - Superheterodyne Receiver (Wikipedia)
3 - Armstrong Oscillator (Wikipedia)
4 - Frequency Stable LC Oscillators (J.K. Clapp)
5 - Oscillators (Oregon State)
6 - LC Oscillators (Modern Ham Guy)
7 - Crystal Oscillator Circuits (Robert J. Matthys)
8 - Crystal Oscillators (Prof. Ali M. Niknejad, University of California, Berkeley)
9 - AWV Radiotron Designer's Handbook (Edited by F. Langford-Smith, 1955)
Elliott Sound Products - LDO Regulators
Introduction
1 - LDO Fundamentals
2 - Conventional Regulator
3 - Noise
Conclusions
References
Given the vast number of application notes, design guides and other material concerning low dropout (LDO) regulators, anyone would think they were complex. As it turns out, they are. There are many things that must be considered to ensure stability, and not the least of these is due to the use of a PNP series pass transistor or a P-Channel MOSFET, which means the output is from the collector or drain, and not the emitter or source as with most conventional regulators.
This imposes several design constraints, and especially affects stability. In some cases, the output capacitor must be of a particular type, often one having greater than normal ESR (equivalent series resistance). Should you be sufficiently gung-ho and think that you can use any old cap you like, you may or may not get away with it. In many cases, the LDO simply becomes an oscillator, so rather than the nice clean DC you expect, you have DC, but with a high frequency superimposed. Adding more capacitance can make matters worse rather than better. These devices can be finicky, as a quick search through forum posts or application notes will confirm. There are some that are quite happy as long as a few basic conditions are met, but there are other LDOs that insist that you follow the maker's recommendations to the letter.
The circuits and results shown are exactly as simulated in each case. Note that these are not recommended circuits, but are simplified so that operation is easy to understand. While you probably could build them and get reasonable results, that's not the purpose here. This isn't a project or construction article, it's for information only.
The general scheme is shown (highly simplified) below. The output must come from the collector (or MOSFET drain) to ensure that the regulator can function with only a few hundred millivolts of input-output differential. Compare this to a conventional emitter-follower output, which needs an absolute minimum of around 1V, but more commonly 3 to 4 volts between the input and output. The reason for the voltage differential is quite simple. There has to be enough voltage across the regulator to ensure it can reduce the incoming DC to the value required, but all the while ensuring that there is a source of base current for the series pass transistor or gate voltage for a MOSFET. For a 5V regulator, that means that the input voltage has to be at least 7V, and usually more.
The two circuits that follow are conceptual - they both work in the simulator I use, and will (probably) work in practice as well. However, I don't suggest that you build them because they can never work as well as a commercial IC version. As should be apparent, the PCB real estate needed is significantly more than an IC too, as there are more parts used. Both circuits will have a -2mV/°C temperature coefficient because the emitter-base junction of Q3 is part of the voltage regulating feedback network. The zener diode will add more temperature dependence unless its voltage is around 8 volts where the tempcos cancel (zener diodes have a positive temperature coefficient above ~6.8V).
Figure 1 - Simple Discrete LDO Regulators (A - MOSFET, B - BJT)
The two versions function as follows ...
With a P-Channel MOSFET output device, the gate voltage is derived from the negative side of the supply, via Q2. R1 is needed so the MOSFET can be turned off. Normally, Q2 is turned on via R2, and when the output voltage is such that the zener conducts, Q3 turns on (slightly), removing some base current from Q2 and turning the MOSFET partially off to the degree necessary to keep the voltage at the preset level. The BJT version functions in a very similar manner, except that Q2 provides base current to Q1, rather than gate voltage. The base current is limited by R2 (2.2k in version B). If Q1 (BJT) has a gain of 50, the maximum output current will be about 120mA, but it varies with input voltage. In both circuits, C2 is a 'dominant pole' compensation capacitor, and it is required for stability (over and above other stability considerations).
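The '120mA' figure can be estimated from the base drive available through R2. This is only a rough sketch: the V_EB and saturation-voltage figures below are assumed typical values, not taken from the schematic, and the real limit will shift with temperature and device spread:

```python
def max_output_ma(v_in, r2=2200.0, beta=50.0, v_eb=0.7, v_ce_sat=0.2):
    """Rough upper bound on output current for the BJT pass stage (version B).

    Q1's base current is limited by R2; the voltage available to drive R2
    is roughly Vin less Q1's emitter-base drop and Q2's saturation voltage.
    (v_eb and v_ce_sat are assumed typical values, not stated in the text.)
    """
    i_b = (v_in - v_eb - v_ce_sat) / r2     # base current into Q1, in amps
    return beta * i_b * 1e3                  # collector (output) current, in mA

# The estimate rises with input voltage, matching the text's caveat:
print(f"{max_output_ma(6.0):.0f} mA at 6 V in")   # ~116 mA, i.e. 'about 120 mA'
```
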
For an LDO regulator, the necessary base current (or voltage for a MOSFET) comes from the negative supply (earth/ ground/ common). The series pass device can therefore operate with very little voltage across it. If the differential voltage is too low, the regulator will be less able to reject supply ripple. It's obviously a requirement that the lowest voltage (including negative-going ripple peaks) must always be greater than the output voltage.
Note the 100mΩ resistor (simulating a higher than normal ESR) in series with C3 in the MOSFET circuit. The gate-source capacitance of the MOSFET inserts another (unwanted) filter pole into the closed loop circuit, and without the increased ESR the MOSFET regulator is marginally stable. Small variations in components can trigger oscillation at the DC output unless the output cap has a higher than normal ESR. This is not an issue with the BJT version because the base capacitance is much lower than the gate capacitance of a MOSFET. This is exactly the behaviour that we need to be aware of. In practice, nearly all LDO regulators are fussy about the ESR of the output cap, and manufacturer's recommendations should always be followed. Using low-ESR caps is generally unwise.
The performance of the two is shown next, and the graph is for output voltage vs. input voltage at an output current of 50mA. The MOSFET regulator provides 5V out with only 5.05V in, while the BJT circuit starts to regulate with an input of 5.25 volts. The 4.3V zener may seem to have a voltage that's too high, but it operates at a very low current, and the voltage across it is lower than expected. Normally a precision voltage reference would be used here, such as a TL431 or similar. The zener was used for convenience when I set up the simulation.
Figure 2 - Simple Discrete LDO Regulators Dropout Voltage (Red - MOSFET, Green - BJT)
A MOSFET design requires enough input voltage to turn on the MOSFET for the current needed. This may be 3V or more, and can be seen in the above chart - the MOSFET circuit has no output at all until the input has reached 4 volts. With a P-Channel device, the gate voltage is provided from the negative supply, and there only needs to be enough voltage between source and drain to ensure that the MOSFET has full control of the output.
A commercial LDO might be quite happy with no more than 100-200mV DC between input and output, so for 5V out, the input only needs to be perhaps 5.2V (depending on output current). This is a big advantage in battery powered equipment, because the battery can discharge to a much lower voltage before the regulator ceases to function properly. A standard emitter-follower based regulator can't come close to that, because apart from the emitter-base voltage, some additional overhead is needed to allow base current to be provided with minimal losses. This is a trade-off, in that allowing lots of base current at low voltages means high quiescent current at elevated voltages. The base current will usually be provided by fairly complex design to ensure that the regulator itself draws no more current than it needs to. This means a higher overhead voltage.
The next issue is output impedance. An emitter/ source follower has a low output impedance, even without feedback. Depending on the other parts used, an emitter follower can easily show an output impedance of less than 1 ohm. Conversely, the output impedance from the collector of a BJT or drain of a MOSFET is extremely high - typically several megohms. LDO regulators therefore rely on feedback to reduce the output impedance to something reasonable. Ideally, a voltage regulator has a zero output impedance, so that a change of load current does not affect the voltage. A simple emitter follower regulator (with no feedback) might show an output impedance of (say) 1 ohm, but LDOs will often struggle to get much below 0.5 ohm with feedback. They cannot be used without feedback.
As noted above, some LDOs are very susceptible to oscillation if the output capacitor has too much or too little ESR. A length of PCB track may introduce enough inductance to cause problems as well. Before you commit to using an LDO, you need to decide if that's what you really need, and make sure that you follow the maker's recommendations to the letter. In some cases, that means using a 'high-K' ceramic capacitor of perhaps 10µF or so, and/ or using a tantalum capacitor (something to be avoided if at all possible IMO).
Because of the inherently high output impedance before feedback, if your load current changes rapidly over a wide range of current, you will need to use more output capacitance than you may have imagined. However, it must still fulfil the stability criteria for the LDO in all respects. It is useful that very high values of output capacitance (100µF or more) are usually less likely to cause oscillation than low values (see below).
Figure 3 - 'Conventional' LDO Regulator Internal Representation
The drawings in Figure 1 demonstrate a simple discrete LDO, but a manufacturer's 'equivalent circuit' is going to look more like that shown above. The essential parts aren't really changed, but the error amplifier (Q2 and Q3 in Figure 1) is shown as an opamp. In reality, the internal circuit may (or may not) be very different from the discrete circuits above, but even the equivalent circuit here is a simplification. There will be provision for short-circuit protection, over-temperature shutdown, and other functions that vary from one device to the next. The voltage setting components (R3 and/ or R4) will be external for adjustable types.
The idea here is to show the basics, primarily from the perspective of stability. The following chart is adapted from Texas Instruments' SLVA115 [ 4 ] for the TPS76050 LDO voltage regulator IC. It shows the range of output capacitance vs. ESR that ensures stable (or unstable) operation. This general trend is very common, although the specifics vary depending on the LDO. Some LDO regulators may even need a low value resistor in series with the output cap to ensure stable operation. Note that it is equivalent series resistance (ESR) that can be critical, not the value of capacitive reactance, which of course falls with increasing frequency. In some cases, ESL (equivalent series inductance) may also be an issue, but few bypass caps have more than a few tens of nano-Henrys of ESL at the very most. It can be significantly less than 10nH for most SMD caps. PCB tracks can add a lot more. For reference, 10mm of straight 0.1mm wire has an inductance of about 10nH.
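The ~10nH figure for 10mm of 0.1mm wire can be reproduced with the usual textbook low-frequency approximation for the self-inductance of a straight round wire (a sketch only; the formula is a standard approximation, not from any datasheet):

```python
import math

# Low-frequency approximation for a straight round wire:
#   L (µH) ≈ 0.002 * l * (ln(2*l/r) - 0.75), with l and r in cm.
# Used here only to sanity-check the ~10nH quoted for 10mm of 0.1mm wire.

def wire_inductance_nH(length_mm, diameter_mm):
    l_cm = length_mm / 10.0
    r_cm = diameter_mm / 20.0
    return 1000.0 * 0.002 * l_cm * (math.log(2.0 * l_cm / r_cm) - 0.75)

print(f"{wire_inductance_nH(10, 0.1):.1f} nH")  # close to the 10nH quoted above
```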
Figure 4 - LDO Regulator Stability Vs. Output Capacitance
As is obvious, if you were to use a very low ESR 10µF capacitor (e.g. some multilayer ceramics), the circuit will oscillate, as it's in the 'unstable' region of the graph. You would need to ensure that a 10µF capacitor had an ESR of at least 100mΩ to remain well within the 'stable' section. Larger value caps have a bit more leeway, but a 2.2µF cap is marginal regardless of its ESR. The stability can also be affected by load current, so it is essential that you are fully acquainted with the requirements of the LDO you plan to use before you choose the support components. Ensuring that the capacitance and ESR remain stable over the long term is also important, so capacitors have to be chosen with care. You also need to be aware that high-K ceramic dielectrics also suffer from capacitance loss due to aging (time and temperature).
There are significant risks when using 'high-K' ceramic caps (common in SMD), as most have a significant voltage coefficient, and can lose 50% or more of their stated capacitance just because they are operated near their rated voltage. These caps also have a high thermal coefficient, so have to be tested over the full temperature range. Much as it pains me to have to say so, sometimes tantalum is the only sensible option for the output cap. I have avoided them for many years because of their undesirable characteristics (not the least of which is 'exothermic ignition failure' - they can (and do) catch on fire!). However, sometimes nothing else provides the specific conditions needed for stability [ 5 ].

Most articles will go into some detail about the filter poles that exist within the LDO itself, and the additional pole created by the output cap. Phase diagrams and other details are often very helpful for those who fully comprehend the closed loop stability criteria for feedback circuits, but the datasheets usually provide the specific information you need to ensure that you don't build an oscillator. Full phase analysis is not essential, but it is important to know that problems will be created if you don't follow the recommendations.

Bear in mind that it's not just the cap at the output of the LDO regulator itself that must be considered - bypass caps across the supply lines of ICs that the circuit uses are also part of the output capacitance, and may cause unforeseen problems if you are unaware of this.
For some perspective, the drawing below shows a simple discrete 'conventional' (emitter follower) regulator. This will be stable with almost any imaginable combination of output capacitance, and used to be a very common design before the advent of 3-terminal regulator ICs.

If the transistor is NPN (and assuming a positive regulator), the base current has to be supplied from the input. That means the input must be at least a couple of volts higher than the base of the series pass transistor, and the emitter (the output) is 700mV lower than the base voltage. The circuit shown below will normally be unconditionally stable with any value of output capacitor, and ESR is usually irrelevant.
Figure 5 - Simple Discrete Conventional Regulator
The output is from the emitter of the series pass transistor (Q1). Should the output voltage rise above the preset voltage, Q2 is turned on a little harder via R2, causing it to 'steal' base current from Q1, which turns off just enough to keep the voltage steady. The reverse happens if the output voltage falls. If a zener diode is used for the reference, it should get most of its current from the regulated output (via R4), ensuring a stable output. This is not required if a precision reference diode is used. The version shown is hampered somewhat by its (deliberate) simplicity. This type of circuit was commonly used with output voltages of 12V or more, and wasn't normally used for 5V supplies in the general form shown. The main reason for their use was to minimise ripple - there was rarely a need for a very accurate regulated supply.
The circuit is the basis of the regulator shown in Project 96 (48V phantom feed power supply), and this topology was used because it can withstand a much higher input voltage than a 3-terminal regulator. This basic circuit can have surprisingly good performance with a few additional parts, but these days it actually costs more to make than it does to buy a 3-terminal regulator, which outperforms it for most tasks. (It's still useful for high voltages though.)
Figure 6 - Discrete Regulator Performance
The performance of the regulator is shown above. It's not as good as the LDO versions shown above because it has lower gain. Ideally, R1 would be replaced by a constant current source which improves the circuit's regulation markedly - but also increases complexity. However, it shows the principle, and it's obvious that it can't start regulating until the input voltage reaches 6.2V, over 1V more than the LDO type. In most circuits, the required differential is higher than this - in general, assume that the input should be at least 3V greater than the regulated output.
Like any other active device, regulators make noise. Because of the way LDOs are so often used (at low voltages and with sensitive analogue to digital converters for example), noise can cause problems. It's important to distinguish between internal noise (including the noise contribution of resistors used to set the output voltage) and power supply rejection ratio (PSRR). Supply ripple rejection is usually not an issue with a battery powered or a pre-regulated supply, such as a 3.3V supply derived from a regulated 5V source.

Internal noise includes thermal, flicker (1/f) and shot noise. The most significant contribution is usually from the band-gap used to provide the reference voltage, and while it's difficult to get a straight answer from most of the published material, it would appear that LDOs are generally quieter than conventional regulators. Comparing quoted output noise figures isn't always easy, because they are often specified (very) differently.
As an example (directly from datasheets), the LM317 adjustable (conventional) regulator has a noise figure of 0.003% / V output, while a TPS76425 (2.5V) LDO has a noise output of a little over 60µV with a 10mA output current. Noise increases with increasing output current. Specifying the output noise in completely different ways doesn't help anyone - if you work out the noise for the LM317 at 2.5V output, it's 75µV.
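To compare the two specifications on equal terms, the percentage figure converts to microvolts directly (a sketch of the arithmetic only; the function name is illustrative):

```python
# Converting the LM317's '0.003% of V-out' noise spec into microvolts,
# so it can be compared directly with an LDO spec quoted in µV.

def noise_uV(v_out, percent_of_vout=0.003):
    return v_out * (percent_of_vout / 100.0) * 1e6

print(f"LM317 at 2.5V out: {noise_uV(2.5):.0f} µV")  # 75 µV, as stated above
```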
When it comes to determining the actual amount of noise generated by the regulator, you need to consult data sheets, and/or put one together and measure it. This is a surprisingly complex area, and doubly so with LDOs, because noise often varies with output voltage as well. Noise is a particularly important parameter for LDO regulators because of the way they are used. The output voltage is usually low, with 5V being towards the upper end of where they are typically used.

Because LDO regulators are common with sensitive electronics (especially ADCs and DACs), noise performance is critical to the performance of the circuit being powered. Unlike opamps which have very high power supply rejection, most ADCs and DACs rely on the supply being noise free to ensure an accurate (and hopefully low noise) output.
In this short article, I have tried to highlight the potential issues with LDOs, especially compared with 'conventional' voltage regulators. There is no reason to be put off using one if that's what your circuit requires, but if you have plenty of 'spare' voltage available (such as with most mains powered power supplies) then a conventional voltage regulator is almost always a better choice.

LDOs can be finicky, and because of their topology they are not inherently stable. It's certainly possible to make a standard regulator oscillate too, but it usually requires rather sub-optimal (or simply misguided) design and PCB layout to cause issues. With an LDO, you could easily run into trouble just by using an output capacitor that's different from the one that was used for initial tests. It's also necessary to ensure that the capacitor used will not degrade over time in such a way as to cause problems after a few years of operation.

It's a clear sign that a part is likely to be tricky to use when manufacturers offer evaluation boards. You won't find any for conventional regulators, but there are many available for LDOs, with most provided by the manufacturers and/or major distributors. Provided you follow the maker's guidelines and test the design thoroughly, there is no reason not to use an LDO if that's what you need, but as I hope is now obvious, they are not as predictable as the 3-terminal regulators you are used to.

For battery powered circuits, the LDO is by far the best solution unless there is already a higher voltage supply available. For example, a circuit using a combination of analogue and digital circuitry might need 11.1V (LiPo, 3 cells in series) for the analogue side (typically opamps) and 5V for the digital side. An LDO isn't needed here because the main supply can't be allowed to fall below around 9.5V to prevent battery damage, so there is still plenty of headroom for a standard regulator. If you need 3.3V from a single LiPo cell, then you have no choice.
Elliott Sound Products - High Voltage Audio Systems
There is a great deal of confusion about the use of high voltage (aka 'constant voltage' or 'high impedance') speaker lines for commercial applications. Common voltages used are 25V, 50V, 70V and 100V, and some are country dependent. In some cases, this is due to regulatory restrictions on the use of voltages deemed hazardous - typically anything over 32V RMS. For this discussion, I will use the common 70 V line in the examples, although 100V is the de-facto standard in Australia. The calculations (and the problems) are no different for any line voltage, and conversion is simple.
Some installations may require that the cables be installed in conduit if over a specified voltage, and it may also be a requirement that one side of the speaker line be earthed in the same way (and presumably for the same reasons) that the neutral is earthed in mains distribution systems. Most US installations require that one high impedance feed line be earthed (grounded).

In the US, 70V lines are the most common, and the voltage is based on a requirement that the AC peak voltage must be no more than 100 volts - I don't know if this is still current, but it's probably too late to change now regardless. Higher voltages may require conduit, which increases the cost and difficulty of the installation. The RMS value of a 100V peak sinewave is 70.7V, hence the 70V limit. In Australia, Europe and many other countries, 100V lines are more common. The choice will always depend on local regulations and system requirements.

There is a trend towards renaming 'constant voltage', '70V' and '100V' to 'high voltage' or 'high impedance' audio. This appears to be due to the confusion caused by the more familiar terms, because the uninitiated will be unaware that the voltage is not constant, nor is it actually 70V or 100V, other than for the occasional peak. I tend to support the change, as there is no doubt that the traditional terms are somewhat untidy - the implication and reality are very different. However, I will still use the 'old' terminology for most of this article because I'm used to it.
Before discussing the issues and difficulties faced, first we'll look at why commercial sound systems use high voltage lines in the first place. The general idea is 'borrowed' from the way mains power is distributed from the power station. If power were simply delivered at 230V (or 120V) from power station to homes and businesses, the current would be extremely high, requiring very heavy gauge conductors to minimise losses. This is uneconomical in the extreme, so power is delivered at high voltage (such as 330kV) to local sub-stations, where it's reduced to a lower voltage (e.g. 11kV) for local area distribution, and finally reduced again by pole transformers or similar to the normal voltage we expect from power outlets.
This process might seem inefficient, but transformers have low losses and the system is the most efficient way to distribute power over long distances. As an example, if a local area draws 4,350 amps from the mains at 230V (1 MW), this is transformed to only 91A at 11kV. Line losses with sensibly sized conductors are far lower, and thinner wires can be used with comparatively low loss due to cable resistance. At 330kV the initial 4,350A is only 3A, and we are still delivering a megawatt! Note that I have made no allowance for losses, but you get the idea.
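The currents quoted can be checked with nothing more than I = P / V, ignoring losses exactly as the text does:

```python
# Current required to deliver 1MW at each distribution voltage,
# with transformer and line losses ignored (as in the text).

def current_amps(power_watts, volts):
    return power_watts / volts

power = 1e6  # 1 MW
for volts in (230, 11_000, 330_000):
    print(f"{volts:>7,} V : {current_amps(power, volts):,.0f} A")
```

Since cable loss is I²R, dropping the current by a factor of ~48 (230V to 11kV) cuts the loss in a given conductor by a factor of over 2,000.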
It's exactly the same with distributed audio signals, except (of course) the voltages and power levels are far lower. The basic idea is that the output from the amplifier will be 100V/ 70V RMS at full power, and small transformers are used at each speaker to reduce the voltage to get the desired power from the speaker. Figure 01 shows the general wiring scheme. I have included parts that I consider essential, but seem to be ignored elsewhere, and the drawing also shows tapped line transformers and an attenuator.

Tapped transformers are common, as they allow different zones to have different power (and hence SPL). Re-entrant horns are popular for outdoor areas and for large indoor areas where background noise is a problem. Attenuators are a bit more complex than the simple pot shown, but serve the same purpose - occupants of an area can set the volume to suit the environment. Attenuators are available for both 70/100V lines or nominal 8Ω circuits, but are normally not permitted in emergency systems.

Each of the sections shown in the above diagram is covered in detail below. The high pass filter and DC protection are not normally even mentioned by suppliers of 'constant voltage' line components, but are essential to obtain maximum fidelity and to protect the amplifier from the rather evil load presented by the line output transformer at low frequencies. There are also additional modules that should be used, depending on the application. One of the most important of these is an input clipping circuit and a good peak limiter, both set to ensure that the maximum line voltage is not exceeded. Neither is suggested by most suppliers, and they are not commonly offered as part of the system. Some suppliers do have peak limiters, usually as an option.

As noted earlier, it must be made clear that 70V, 100V (or other voltage) and 'constant voltage' are somewhat misleading terms - the claimed voltage is (intended to be) reached at the limit of the amplifier's output, namely at full power. The actual maximum voltage on the line can be variable, especially if the amplifier and line drive transformer are not correctly matched. With normal programme material, the measured RMS voltage will be somewhere between 10 - 30V at the onset of clipping, depending on the programme material itself and the line voltage being used. In this respect, the term 'high voltage audio' is less ambiguous.
The term 'constant voltage' comes from the fact that the line voltage doesn't change significantly as speakers are added or removed - it remains constant, regardless of load. In reality there are changes of course, but they are small because the distribution line has a low impedance source. The installer must understand the difference between source impedance and load impedance!

Finally, why was this article written? There seem to be many people who are firmly convinced that you can hook a line output transformer up to "any old amplifier" and get a 70V line. "Nothing to it" they say. Well, that's true, but only if you don't mind blowing up amplifiers or don't care what the end result is like. This is an industry that's been going strong for many years, but not based on hooking up a transformer to just any amplifier. It might work for a while, but there's a lot more to it than you might expect.

It's probably worth pointing out that one of the main failure points of 70/100V 'constant voltage'/ high impedance systems is caused by others working in the same ceiling space as the wiring (and often speakers). A favourite seems to be using a staple gun to fix wiring to timber battens, with the staple promptly shorting out the wiring. It's easy to tell when this has happened (the resistance is far too low), but finding the problem can take a great deal of time and effort. Note that insulated staples are often used, but angled just right to ensure that the staple penetrates both conductors. The short can be almost anywhere within the system, so use the most robust cable you can get and hope that the idiot with the staple gun never comes near your installation!

Of course you could use Pyrotenax cable (aka 'pyro' or 'mineral-insulated copper-clad cable' - look it up if you've never heard of it), which is a copper tube, with mineral insulation surrounding the inner conductor(s). It's fire-rated and very expensive, both to buy and install. It may be a requirement for fire alarm/ alert systems in some sensitive installations. I seriously doubt that any customer (other than a secure government department perhaps) would accept the price of the cable and its installation.
(Note that parts of this section were adapted from the Lenard Audio site with permission).
There is much info on the Net about the use of audio in shopping centres, lifts (elevators if you insist) and the like. I have no intention of delving into the psychological or psycho-acoustics of these systems, this is a purely technical article and whether (or not) background noise (sometimes called 'music' by the installers) constitutes an attempt at mind control is not open for discussion.

However, there is no doubt whatsoever that if the background 'noise' is distorted or suffers from poor fidelity in other areas, it will have exactly the opposite effect from that intended. Anything that subjects employees or shoppers to awful sound quality will either drive them away or insane - perhaps both. Placement of speakers is usually limited to ceilings, and whether or not that's ideal isn't even worth discussing - in most cases there's no other choice.

Many of the installed systems also serve as emergency evacuation alarms, and these are subject to strict regulation in most developed countries. Three things are of primary importance ...
To ensure intelligible speech, distortion has to be reasonably low - no more than perhaps 5% THD (total harmonic distortion) or so. Alarm tones may be severely distorted (if permitted by the regulations), and this increases both loudness (real and apparent) and penetration.

Background 'music' should be reproduced with the lowest distortion possible, and at a level that does not impose on anyone. If the source is a radio station (which I hear fairly often), the tuner must have a decent antenna and be properly tuned to the station! Sounds easy enough, but it's not uncommon to hear radio station background where neither of these requirements has been met. The result is grating, to put it mildly.

For new installations, it would be useful if architects consulted with a reputable installer before deciding on speaker placement. Acoustics is not a simple field, and unless one is experienced the results can easily be a disaster.

There are many excellent books on commercial sound installation, and one of the most respected is 'Sound System Engineering' by Don and Carolyn Davis.
Quality cable and connections are essential, including clear and detailed installation documentation that remains on site. Many small ceiling speakers have poor fidelity, but there are exceptions. Cost is not a guide to audio quality - many 'big brands' are just as bad as cheaper alternatives, and may even come from the same factory.

It is wise to audition speakers before installation. This allows some measurements to be made to verify sensitivity and transformer performance, and to ensure that the dimensions are as claimed. It's not at all uncommon to order products, only to find that there are significant changes from those supposedly identical units used at previous installations.

The technical requirements are often for many small speakers to be spread over large areas. Cable length can be hundreds of metres. To minimise cable loss, the amplifier output is increased to a higher voltage through a step-up line transformer. Each speaker has a step-down line transformer.
The line system (assuming 70V lines) normally operates at around 10-30V RMS but can peak at 70V. All line transformers have a limited bandwidth that restricts low frequency performance in particular. A skilled electronics technician should always check samples of line drive transformers (if separate from the amplifier) and speaker transformers. 100V lines operate with a peak of 100V RMS, and will typically be operated at 14-40V with programme material.

The specifications provided for power amplifiers and speakers with line transformers are commonly referenced only to power (in Watts) and the rated line voltage. This information is not sufficient for accurate calculations. An electronics technician will require an hour or more to take necessary measurements and to determine the missing information. This is essential to make accurate calculations for the installation.

Note that the steps described above are an outline of the basic procedure. All are explained in much greater detail below. The steps shown next do not follow the numbering from the above list.
1 - Determine the amplifier power needed for the installation, based on the number of loudspeakers, loudspeaker efficiency and the SPL that is expected in each area. If total power is too high (over 200W), use several smaller amplifiers rather than one very large one. Once speaker power is known, add a minimum of 1.5dB (about 40%). If you worked out that you'd need 70W in total, use a 100W amp. There are losses in the system, and this accounts for typical losses and allows some room for overall level adjustment. Where more power is needed, it's better to use a number of smaller amps than one powerful one. Four 100W amps are usually a better proposition than a single 400W amp, as you have some redundancy in case of amplifier failure.
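The 1.5dB allowance can be sketched as a power ratio of 10^(dB/10), about 1.4× (the function name is illustrative):

```python
# Step 1's 1.5dB allowance on the calculated speaker power.
# A power gain in dB corresponds to a ratio of 10^(dB/10).

def power_with_headroom(watts, headroom_db=1.5):
    return watts * 10 ** (headroom_db / 10.0)

print(f"{power_with_headroom(70):.0f} W")  # ~99W, so use a 100W amplifier
```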
2 - Measure the voltage ratio (which is the same as the turns ratio) of the amplifier output transformer, all taps unloaded. The voltage ratio (turns ratio) on some transformers varies widely, depending on rated power and line voltage. There is usually an allowance for insertion loss (see 3.4), so voltage will be slightly higher than expected. For example, the turns ratio may be 1:3.6 instead of the theoretical 1:3.5 for a 100W 70V transformer.

It is essential to know the actual load impedance for the line transformer. This information is rarely quoted in specifications that come with the amplifier or transformer. Deducing this from the power-load specifications supplied with amplifier and speakers is educated guess work at best, and rarely accurate. The amplifier line transformer must actually be measured! Guessing isn't good enough.

3 - Measure the amplifier 'rail to rail'. If the amplifier has a rail supply of ±30V (60V total) the output will be about 20V RMS. If the transformer has a turns ratio of 1:3.5 the secondary voltage at full power will be 70V RMS. The amplifier's output voltage can also be measured using an audio oscillator set to around 400Hz and a speaker driven from a line transformer as a monitor. You will be able to hear the onset of clipping as a harshness on top of the tone. When you hear that, reduce the level until the harshness just disappears, and measure the RMS voltage at the amp output.
4 - Measure the power of the amplifier under load. Determine whether the amplifier's output is designed for 4 or 8Ω loads. The output transformer impedance ratio is the square of the turns ratio. If the amplifier is designed to give 100W (20V RMS into 4Ω) and you need a 70V line, the turns/ voltage ratio of the transformer must be 1:3.5 (20 × 3.5 = 70) or slightly higher to account for losses. The combined load impedance must be no less than 49Ω. This is easily calculated ... the impedance ratio is 3.5², or 1:12.25, so the secondary impedance is 4 × 12.25 = 49Ω. Otherwise, you can use the chart shown below.
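The minimum load calculation in step 4 can be sketched in a couple of lines (function name is illustrative):

```python
# Step 4: minimum line load impedance. The impedance ratio is the
# square of the turns ratio, so a 4 ohm amplifier driving a 1:3.5
# step-up transformer must never see less than 4 * 3.5² on the line.

def min_line_impedance(z_amp_ohms, turns_ratio):
    return z_amp_ohms * turns_ratio ** 2

print(min_line_impedance(4, 3.5))  # 49.0 (ohms)
```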
Figure 1.1 - Power Vs Impedance Chart
Note that too many speakers on the line will overload the amplifier. The total load impedance presented by all speakers must never be lower than the value calculated by the above formula. A lower than recommended load impedance can easily destroy an amplifier. The total number of speakers must represent a load no lower than that for which the amplifier is designed, regardless of power. The load directly affects the running temperature of an amplifier, and therefore its reliability.
Recorded music is usually compressed and has a limited dynamic range. Assuming 6dB dynamic headroom (or peak to average ratio), the amplifier can be driven at a maximum average voltage no greater than ½ of that for full power. The power delivered is ¼ of the maximum. It is generally wise to allow for amplifiers to run at no more than 70% of the rated peak power, so there is some room to make adjustments to compensate for losses. For example, a 100W amp should never have to deliver more than 70W peak.
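The ½-voltage/¼-power relationship for 6dB of headroom follows from power being proportional to voltage squared; as a sketch:

```python
# 6dB of headroom means half the full-power voltage on average,
# and therefore one quarter of the maximum power (P ∝ V²).

def average_power(p_max, headroom_db=6.0):
    voltage_ratio = 10 ** (headroom_db / 20.0)  # 6dB is very nearly 2x in voltage
    return p_max / voltage_ratio ** 2           # power falls as the square

print(f"{average_power(100):.0f} W")  # a 100W amplifier averages about 25W
```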
5 - For this example, a 100 Watt amplifier is required to deliver an average of 25V RMS of music on the 70V line, so the line should be connected to the 1:3.5 turns ratio (70V) tapping of the output transformer. This will allow the total number of speakers connected to provide a load impedance no lower than the value determined in [4] above. If the total number of speakers to be connected will reflect a load impedance of less than that calculated, then the choices are ...

6 - Audition the speakers. Assume the speakers have no association with the specifications supplied, regardless of the model number or brand that is printed on the box. It is usually impossible to know in which factory the speakers were made or how many re-selling agents they encountered on their way to you. The brand does not necessarily indicate the actual manufacturer, nor does it signify a level of quality!
7 - The speaker line step-down transformer. This must be measured for ...

- Saturation frequency
- Actual turns and impedance ratios
8 - Decide on the power to be delivered to each speaker in the system. This is the maximum power that will be delivered to each speaker - the average will be around ¼ of the maximum. If the speakers are 8Ω, then 2W requires 4V RMS. To obtain 4V from a 70V line a step-down ratio of 17.5:1 is required, and you'll need to select the closest available tapping. Be prepared to adjust your calculations to suit - especially if the transformer is intended for a different line voltage or is not accurate.
9 - Calculate the reflected impedance on the line, from each 8Ω speaker on the 17.5:1 tap. The impedance is calculated from the square of the turns ratio. (17.5² = 306.25) × 8Ω = 2450Ω. If the total number of speakers connected to the line is (say) 50, and each speaker is connected to its 17.5:1 transformer tap, then the total impedance presented to the line is ...

2450Ω ÷ 50 = 49Ω
The 100W amplifier is driving the line through its transformer from the 1:3.5 tap. The lowest load it can drive is 49Ω, so no more speakers can be added. Note that there is no allowance for losses at this stage. Nor is there any scope for later additions, so the number of speakers should be reduced.
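Steps 8 and 9 condense to a few lines using the worked figures from the text (a sketch; function name is illustrative):

```python
# Steps 8-9 with the worked figures: an 8 ohm speaker on a 17.5:1
# tap reflects 17.5² x 8 = 2450 ohms onto the line, and 50 of them
# in parallel present 49 ohms - the amplifier's minimum load.

def reflected_impedance(z_speaker, step_down_ratio):
    return z_speaker * step_down_ratio ** 2

z_each = reflected_impedance(8, 17.5)
z_total = z_each / 50  # 50 identical speakers in parallel
print(z_each, z_total)  # 2450.0 49.0
```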
10 - From these calculations, this 100W amplifier is operating at an average of around 13W, and allows around 6dB of headroom for the music transients. The transients will peak at 70W, so there is a little headroom but no more speakers can be added to the system. It is wise to ensure that the amplifier is not fully utilised - it is better to have perhaps 40-45 speakers on the line rather than the full 50. Doing so gives you some 'wiggle room' should it be needed.

Repeat steps 1 to 10 until you have a sensible setup that uses equipment you can actually buy and that is within budget. There's little point having a theoretically perfect system if it relies on equipment that doesn't exist or is so expensive that no-one will pay for it.

This procedure is broken down into more precise steps in the next section, and includes examples for two line drive transformers and a representative speaker transformer. The one I used is designed for 100V lines, which is a small advantage in some respects.

11 - After all speakers are connected, verify the line impedance with an impedance meter. You cannot use a multimeter because they measure resistance, not impedance. Impedance meters test using AC, typically at 1kHz. If you don't have one (or don't want to spend the money - they aren't particularly cheap), you can use the amplifier as a source, along with a 100Ω resistor. The resistor is temporarily installed between the amp's output and the speaker line, and you measure the voltage from the amp, and that to the line. Use around 5V RMS at somewhere between 200Hz and 1kHz. The current drawn is shown as a voltage (AC) across the resistor.
For example, if the amp has an output of 5V RMS, and you measure 2.5V across the 100Ω resistor, that's a current of 25mA. The voltage across the speaker line is 2.5V (5V minus 2.5V), so the line impedance is 100Ω. This is nothing more than Ohm's law, except your measurements are AC, not DC. It's more messing around than an impedance meter, but the results are just as accurate. Alternatively, you could use a 500Ω pot (preferably wirewound, but carbon will do if you keep the voltage low), and adjust it until the voltage across the speaker line is exactly half the voltage from the amplifier. When that's done, you measure the resistance of the pot with a multimeter, and the result is the line impedance. Feel free to change the frequency to see how much the impedance changes.
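The series-resistor measurement just described reduces to Ohm's law. A minimal sketch, assuming (as the text does) that the line is essentially resistive so the voltages subtract arithmetically:

```python
# Line impedance from the series-resistor method described above.
# v_amp: amplifier output (RMS), v_r: voltage measured across the resistor.
def line_impedance_from_resistor(v_amp, v_r, r_series=100.0):
    i = v_r / r_series        # current through resistor and line (Ohm's law)
    v_line = v_amp - v_r      # voltage remaining across the speaker line
    return v_line / i

print(line_impedance_from_resistor(5.0, 2.5))  # 100.0 ohms, as in the example
```

A strongly reactive line makes the simple subtraction optimistic, which is another reason to test at a few frequencies as suggested.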
Some installations will require the use of column speakers, which may be glorified by referring to them as 'directional arrays', for example. These will almost always operate at significantly higher power than ceiling speakers, and typically consist of a vertical row of small speakers.
Column speakers are often seen in churches, shopping centres, travel terminals, gymnasiums etc. Their intended application is for announcements and background music. The advantage of a column is its simplicity and being visually unobtrusive. The fidelity of a column can be no better than that of the individual speakers, regardless of marketing claims. Some may include a horn loaded compression driver to reproduce the high frequencies, and this will give better overall dispersion and fidelity if done correctly.
Column Directivity
A single speaker has a varying conical dispersion. As more speakers are added vertically, the sound from each speaker is 'compressed' by the ones above and below. This results in increased horizontal dispersion and reduced vertical dispersion.
In reality the horizontal directivity is limited by wavelength and is inconsistent. High frequencies (wavelengths less than the distance between speakers or the diameter of the speakers) result in intense vertical lobes. These lobes cause phase cancellations and loss of intelligibility, and the high frequency energy is decreased. One solution is to cross over the high frequencies to a single tweeter or a small horn.
Some small (and often expensive) columns have a complex passive crossover network that progressively reduces the high-frequency energy fed to the outside speakers as the frequency increases, so only the centre speaker remains working at the highest frequencies. This is sometimes known as a 'tapered' array. At lower frequencies (wavelengths longer than the column length) dispersion control is no longer effective. Typical column speakers all have limited and inconsistent horizontal dispersion.
There are many things that must be considered for any installation. Many of these are totally dependent on the specific installation and cannot be determined without knowing all distances, required SPL (sound pressure level, in dB), signal sources and the environment. Outdoor systems will usually need much more power than those indoors, but every installation will be different.
The following steps are intended to provide a starting point, and give you enough information to be able to quantify the many different pieces of equipment that will be needed for the install. The examples are just that ... examples. You need to be able to adapt the examples to the gear you have to work with - this will often be specified by someone else, and it may not be accurate.
If you have all the information, it becomes a relatively easy matter to verify the original design and/or make adjustments as needed to make the system perform as expected.
Remember that a 70V line will have 70V RMS at the point of amplifier clipping (or limiting). It doesn't matter if the amp is 10W or 500W, the voltage is unchanged, and only the power changes - based on the maximum available current from the amplifier. Predictably, and at least in theory, you can connect 10 × 1W line speakers to the 10W amp, and 500 of the same speakers to the 500W amp. The actual number will always be lower than the theoretical limit. Each 1W speaker presents an impedance of 4,900Ω to the line.
Likewise, 0.5W speakers will have an impedance of 9,800Ω, 2W speakers are 2,450Ω, 5W speakers are 980Ω, etc. Simply divide 4,900 by the power in Watts. The actual impedance of each speaker will vary significantly though, and the savvy installer will test each piece of equipment so there are no surprises. Reality will be different from theory!
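The divide-by-power rule follows directly from Z = V²/P on a 70V line (70² = 4,900). A quick sketch, with the helper name being my own:

```python
# Nominal impedance a line speaker presents, from its tap power.
# On a 70V line, V^2 = 4900, hence the "divide 4900 by Watts" rule.
def tap_impedance(p_tap, v_line=70.0):
    return v_line ** 2 / p_tap

for p in (0.5, 1, 2, 5):
    print(p, tap_impedance(p))   # 9800, 4900, 2450, 980 ohms
```

The same function gives 100V-line figures by passing `v_line=100.0` (so a 1W tap presents 10kΩ).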
Some installers use impedance meters so the total line impedance can be verified before the amp is connected (or to locate system faults), and such an instrument may be very useful if you do a lot of work with high voltage audio systems. However, there's a lot more involved, and that's why this article has so much information that you just don't see elsewhere.
Before you even start, you need to know the expected sound pressure level (SPL) within the installation. Re-entrant horns with compression drivers are very efficient (up to ~110dB/1W/1m), but they don't usually sound wonderful. They are well suited for alarms and speech announcements in very noisy environments, but have a very limited frequency range - typically around 250Hz-8kHz. Normal ceiling speakers almost always have a wider response, but are more likely to be around 90dB/W/m or perhaps even less. You need to know if the system simply provides background 'music', or does it do dual duty as a paging system and/or emergency evacuation alarm?
How loud does each different input signal (music, announcements, alarms) have to be at the speaker locations? This determines the number of speakers needed, and that determines how many speakers can be driven by an amplifier of a given power rating. There are no easy guidelines for any of the above - some are determined by government regulation, others by the client's expectations and the installation environment. Ceiling height, distance between speakers, shop or other fittings and background noise all affect the amount of power that's needed. This is especially true for announcements, paging or emergency evacuation sirens.
There are installations where high voltage lines are run, but with very high power levels (500W or more). While these are not covered here, the principles are no different. For high power system installations you may find voltages of 200V RMS or more being used, as this reduces cable losses. Transformers at both ends become much larger and heavier, especially if frequencies down to 40Hz are required.
Distortion is to be avoided for speech and music, but may not be an issue for siren tones, as these are normally rather distorted already, and if the amplifier clips (distorts) due to being over-driven, the effect will usually be inaudible. Often, it will even help, because the distorted tone is not only louder, but far more irritating (good for getting attention!). A small amount of peak clipping is generally acceptable for voice announcements, but it should not exceed around 3dB. This means that a 100V peak speech signal can be clipped at 70V without serious loss of intelligibility.
In some cases, individual wall-mounted level controls may be used so that the level can be adjusted for some areas. There are suitable controls available for either the 70V line or the 8Ω speaker output from the line speaker transformer. These are not likely to be permitted for emergency alarm or evacuation systems.
If re-entrant horns are used, they must be mounted in such a way as to ensure no-one can ever be very close to them, because of the very high SPL they can generate. In other cases (such as railway stations for example), they are typically operated at very low power, but in larger numbers than may be the case elsewhere. Automatic gain control may be used to raise the level when there is noise, such as a train entering or leaving the station. Some suppliers provide the equipment to do this.
Sirens and emergency announcements must be audible regardless of noise, and possible hearing loss is secondary to people being severely injured or killed because they couldn't hear the warnings. I don't know about my readers, but I'd rather suffer some temporary hearing loss than be burnt to death. Call me odd if you must.
As noted in the overview, it is wise to measure the amplifier's output swing. It's not at all uncommon to find that the maximum line voltage is quite different from the nominal voltage - it may be higher or lower, mainly because the line drive transformer is not configured properly to the amplifier power or voltage swing. The result is that the amp may either be overloaded, or not delivering the power you expected and paid for. Remember that you need to keep at least 1.5dB of reserve - if you need 70W, use 100W (etc.). If the job is tricky or involves long cable runs, you may need to allow 3dB reserve, so 100W becomes 200W.
To measure the voltage swing properly, you need an oscilloscope so you can see exactly when the amp clips. A 100W/ 4Ω amplifier should deliver 20V RMS when connected to a 4Ω load - that's a peak-to-peak swing of just under 57V (±28.3V). It's common for most amps to deliver a bit more than their rated power, so you might measure up to 25V RMS or so (a bit over 150W), depending on the amp. If the transformer is designated as being suitable for a 100W/ 4Ω amplifier, then with 25V RMS output, the line voltage will be higher than the nominal value.
A 70V tapping will give you 87.5V, and the 100V tapping will provide 125V at the onset of clipping. The clipping voltage will also vary slightly with mains voltage, but although that changes during the day the effects are minimal. If the amp's output is higher than the calculated value for the power delivered, then you should use the actual maximum line voltage for the calculation examples that follow - not the 'nominal' voltage.
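Since the transformer ratio is fixed, the actual line voltage simply scales with the amp's measured output relative to its rated output. A small sketch of that scaling (function name is illustrative only):

```python
# Actual line voltage when the amp clips above (or below) its rated swing.
# The transformer ratio is fixed, so line voltage scales proportionally.
def actual_line_voltage(v_nominal, v_amp_actual, v_amp_rated):
    return v_nominal * v_amp_actual / v_amp_rated

print(actual_line_voltage(70, 25, 20))    # 87.5V  on the nominal 70V tap
print(actual_line_voltage(100, 25, 20))   # 125.0V on the nominal 100V tap
```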
Likewise, one would hope that all transformers designed for line systems would be clearly marked, and that detailed specifications would be provided as a matter of course. Unfortunately, this is not the case, and most manufacturers and suppliers give the minimum amount of information possible. Perhaps they assume that installers are sufficiently skilled to know how to determine all the parameters, or maybe they don't care much either way.
Line output transformers are commonly rated for power, voltage taps, and sometimes frequency response. They almost always neglect to state if response is at full power or some lower 'reference' level - I suspect the latter, as any audio transformer that can handle 150-200W at 20Hz is a very large piece of kit indeed. The voltage taps allow the installer to select the desired (or specified) line voltage when connected to an amplifier of the stated power rating. As described above, the actual voltage will almost certainly be different from the nominal value. Few if any - I've not seen it anywhere - transformers have provision for an additional winding to add or subtract a few volts to ensure the line voltage is within specification.
Likewise, the speaker transformers will usually provide taps for different power levels (e.g. 0.5W, 1W, 2W, 5W, etc.). If the 1W tapping is used, that is the maximum power that can be delivered to the speaker before the driving amplifier reaches its limits and starts to clip - assuming of course that the amplifier and line driver transformer are perfectly matched in all respects. The average power with programme material will usually be no more than around 250mW.
The reference system used for this section assumes the following ...
Regardless of amplifier power, the maximum voltage on the line will be 70 volts (RMS) when driven to the onset of clipping with a sinewave signal - but only if the amp and transformer are perfectly matched. The actual line impedance can be calculated, but the idea of the system is that (in theory at least) you don't need to know - the speaker transformer taps determine the power to the speaker, and you simply ensure that the sum of all the transformer taps never exceeds the amp's rating (100W for these examples).
For example, a 100W high voltage line system can have ...
While this is all fine in theory, there are many, many things that can go wrong. These include, but are not limited to ...
Most of these are easily addressed by following the appropriate procedures outlined by the manufacturers of the various parts, or making repairs as needed. However, unless you have already run tests on the components you won't know if the ratings are accurate, nor will you know the limitations of the power amplifier when supplying power to a transformer load. There is very little anyone can do to stop later modifications or additions though, nor can you know who will do the work.
Speaker efficiency can easily bring you undone. One I looked at stated that "high efficiency means less power is needed" - this was for a speaker rated at 88dB/1W/1m. If they think that's efficient then I'd hate to see their 'inefficient' models. Above 90dB/1W/1m is acceptable, 95dB/1W/1m or better is pretty good, but below 90dB is woeful. Another 'high efficiency' speaker I looked at was only 86dB/1W/1m!
While most dedicated systems have the output transformer as an integral part of the amp, many after-market transformers are readily available. While most are true transformers with isolated primary and secondary windings, some are auto-transformers. These have a single winding, with taps for input and various output voltages.
Auto-transformers have no galvanic isolation - the primary is simply part of the overall winding, but wound with heavier wire. As a result, they cannot be used where fully floating inputs or outputs are needed, nor can they be used safely with bridged amplifier outputs. In all other respects, they should be subjected to the same tests as a 'real' transformer. Auto-transformers are an economical alternative when the transformation ratio is less than 1:2.
The line drive transformer takes the comparatively low (20V RMS for example) output from the amplifier, and steps it up to provide the desired output voltage - there are more turns on the secondary than the primary, and the voltage is increased by the ratio of primary to secondary turns. While you'll rarely ever find out just how many turns are used, the ratio is easily determined by the method described in the sections below.
It is possible to work out exactly how many turns have been used, but there's little point and I won't bother with a description of the process. For those who are really interested, have a read through the articles about transformers on the ESP site. The technique is explained for anyone who wants to go that far.
The following table shows the measurements that were taken on the two line output transformers I have. The input voltage was a 10V RMS sinewave at 1kHz. All measurements are with the secondary unloaded. Rp is primary resistance, and Rs is secondary resistance.
Toroidal Output Transformer, Rp = 0.16Ω
Tap     Rs (Ω)   Volts Out   Turns Ratio   Z Ratio
50V     1.36     25.87       1 : 2.59      1 : 6.71
70V     2.44     36.14       1 : 3.61      1 : 13.03
100V    4.00     51.50       1 : 5.15      1 : 26.52

E-I Output Transformer, Rp = 0.19Ω
Tap     Rs (Ω)   Volts Out   Turns Ratio   Z Ratio
50V     1.00     26.73       1 : 2.67      1 : 7.13
70V     1.54     37.30       1 : 3.73      1 : 13.91
100V    2.38     53.20       1 : 5.32      1 : 28.30
From this, it's easy to see that for 70V output at maximum power, the amplifier needs an RMS output voltage of 19.4V for the toroidal transformer, and 18.8V for the E-I transformer. These are both suited to an amplifier that can produce 20V RMS undistorted into a 4Ω load. Any amp that has a greater output voltage will increase the maximum line voltage from the nominal value.
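The table's derived columns come straight from the 10V test signal: turns ratio is output volts over input volts, impedance ratio is the square of that, and the required amp voltage is the line voltage divided by the turns ratio. A sketch using the toroid's 70V-tap figures:

```python
# Derived figures from the 10V unloaded test (toroid, 70V tap).
def turns_ratio(v_out, v_in=10.0):
    return v_out / v_in          # e.g. 36.14V out / 10V in = 3.61

def z_ratio(n):
    return n ** 2                # impedance ratio is the turns ratio squared

n = turns_ratio(36.14)
print(round(n, 2))               # 3.61 (matches the table)
print(round(70 / n, 1))          # 19.4V RMS needed from the amp for 70V out
```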
For example, a 200W/ 4Ω amp will provide 28V RMS, so the nominal 70V line will be around 100V with both transformers. This may be unacceptable in some installations that must comply with strict regulations.
As shown below (see Saturation), a 'traditional' EI laminated transformer will almost always be a safer option than a transformer with a toroidal core. Toroidal cores have a very 'tight' magnetic circuit, and with a high enough voltage at low frequencies, the onset of saturation is sudden and vicious. An EI transformer has 'built-in' tiny air gaps and a lower permeability core, so the effects are tamed - at least to an extent. An EI transformer will have a slightly higher insertion loss, but that's usually nothing to worry about. Given the choice, I'd go with an EI transformer every time.
Although it's not provided and is theoretically not needed, it's very useful to know the impedance of the line, and that presented by the speakers with their transformers. You can also work out the maximum line current and characterise transformers with minimal markings. Note that 'nominal' impedance is used - it will vary with frequency as with all loudspeakers. This procedure also helps verify that the traditional method works and is accurate - provided of course that the line voltage and transformer taps are also known and accurate. In reality, this is possible but unlikely. Expect a deviation of up to ±1dB at best, but allowing for greater variations is a good idea.
The photo shows a 'typical' small speaker transformer for 100V line applications. It can also be used with 70V lines, but with all power taps reduced by 3dB (so 2.5W, 1W, 0.5W and 250mW). The lower voltage also means that the low frequency response is extended by one octave, although this is not useful for most applications. The transformer is small, with a core measuring only 40 × 33 × 14mm. The lowest power tap determines the total primary turns, and fewer turns are needed for higher power ratings. If you perform full voltage saturation tests on this type of transformer, the secondary must be terminated with the design impedance. Saturation will occur at a higher frequency if the secondary is not terminated!
A point made in the Transformers articles is relevant here too - for any transformer, the maximum flux density is obtained when the transformer is idle (no load). This is the exact opposite of what many people expect!
Using our 100W amp and 70V line example as before, the impedance is easy to calculate ...
Zl = Za × Rt²
Zl = 4 × 3.5² = 49Ω
Where Zl = Minimum Load Impedance, Za is the amp's load impedance and Rt is the transformer's turns ratio. You can get the same result another way too, and if both agree you know you didn't make a mistake with the calculations.
Z = V² / P, so ...
Z = 70² / 100 = 49Ω
Now it's easy to determine the high voltage line current at full rated power ...
I = V / Z, so ...
I = 70 / 49 = 1.43A
Knowing the current allows you to calculate the voltage drop caused by the speaker line(s), so the proper cable can be installed to minimise losses. From the speaker side, if we use 8Ω speakers set to the 2W tap, we know that we should get 2W maximum at the speaker, so the voltage ratio can be determined, and from that we can calculate the impedance ratio and then the impedance presented to the line.
V = √( P × Z )
V = √( 2 × 8 ) = 4V

Vratio = Vline / Vspkr
Vratio = 70 / 4 = 17.5 : 1

Zratio = Vratio²
Zratio = 17.5² = 306.25 : 1

Z = Zspkr × Zratio
Z = 8 × 306.25 = 2,450Ω for each speaker
Using the generalised method that's traditionally used, we established that we could have 50 × 2W speakers, and 2,450Ω / 50 speakers = 49Ω. We can use simple division because the speakers are all in parallel and all the same impedance. This is exactly the load impedance we calculated for the line at full power, so the two methods give an identical result. Having determined that the shorthand method is indeed accurate, we may assume that's all we need, and it's much simpler.
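The agreement between the two methods is easy to demonstrate numerically. A sketch of the whole chain for the 100W amp, 70V line, 4Ω tap and 2W speakers discussed above:

```python
import math

# Route 1: amp load impedance times the square of the drive transformer ratio.
z_line_from_ratio = 4 * 3.5 ** 2        # Za x Rt^2 = 49 ohms

# Route 2: line voltage squared over total power.
z_line_from_power = 70 ** 2 / 100       # V^2 / P = 49 ohms

# Speaker side: 2W into 8 ohms needs 4V, giving a 17.5:1 voltage ratio.
v_spkr = math.sqrt(2 * 8)               # 4.0 V
v_ratio = 70 / v_spkr                   # 17.5
z_each = 8 * v_ratio ** 2               # 2450 ohms reflected per speaker

print(z_line_from_ratio, z_line_from_power, z_each / 50)  # 49, 49, 49
```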
Something that we will almost certainly be unsure of is the speaker transformer, especially if purchased as a 'general purpose' line transformer. One that I checked is claimed to be a 100V line transformer, but how can we be sure? Can we use it with a 70V line, and what will happen if we do? What about the unmarked transformers you have in your junk box? Can we use them, perhaps? There is an easy way to find out.
It was established above that for 2W, we need 4V output, and a voltage ratio of 17.5 : 1, but the voltage rating for the transformer I have is 100V, not 70V as required. It's easy enough to measure the voltage ratios - just inject a sinewave signal into the speaker side at around 1kHz and 1V, and measure the voltages on the primary taps. The input voltage doesn't have to be accurate, as long as you can read all voltages accurately. From that, you can work out the turns ratio.
While you can measure inductance as I did for the table, it's not generally useful. Ideally it will be as high as possible, but reality will almost always show the bare minimum. On the 5W tap, the transformer's magnetising current will be ~55mA at 50Hz without a speaker connected (ignoring saturation!). That's more than the 'ideal' current that would be drawn with the speaker connected. If used with the 5W tap, the transformer pictured above has a low-frequency limit of about 150Hz at 100V. Without tests you'll never know. For what it's worth, I did try it with 100V at 50Hz on the 5W tap, and saturation was gross. The transformer also got quite warm quite quickly.
You can calculate the reactance of any inductance easily, as it's simply 2π·f·L. For 5.7H, that's ~1.8k at 50Hz, so the magnetising current with 100V applied is ~55mA. This is unacceptable for operation at 50Hz, and remains marginal at 100Hz with a 70V line. With a 100V line, the 5W tap is pretty much unusable! You don't need to do any of these calculations, as the saturation test is the only thing you can rely on, and that's by far the most important parameter.
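The magnetising-current estimate above is just reactance arithmetic, and can be sketched as follows (this deliberately ignores saturation, as the text notes, so it is optimistic at low frequencies):

```python
import math

# Idealised magnetising current of a primary inductance, ignoring saturation.
def magnetising_current(v, f, L):
    x_l = 2 * math.pi * f * L     # inductive reactance, ohms
    return v / x_l                # amps

# 5W tap (5.7H) on a 100V line at 50Hz, as in the text.
print(round(magnetising_current(100, 50, 5.7) * 1000, 1), "mA")  # ~55.8 mA
```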
100V Speaker Transformer, Rs = 0.47Ω, 1V Input
Tap     Resistance (Ω)   Inductance   Volts Out   Turns Ratio ¹   Z Ratio
5W      61               5.7 H        16.5        15 : 1          225 : 1
2W      99               13 H         26.0        24 : 1          576 : 1
1W      143              28 H         36.6        34 : 1          1156 : 1
0.5W    208              52 H         51.4        49 : 1          2304 : 1
The measured turns ratios will be different from the theoretical value as shown in the table. For example, to get 2W we need a ratio of 25:1 (100V line), but the transformer shown has a ratio of 26:1 (3.86V out from the 2W tap, 100V input). This may be done to compensate for the resistive losses, or it's simply the result of the manufacturer's winding techniques. Never expect it to be especially accurate. The nominal 2W output may only provide ~1.7W in reality (a loss of less than 1dB). The overall losses add up though, with the primary resistance (and inaccurate turns ratios) being the most troublesome. You can quite easily lose up to 300mW (5W taps) in each speaker transformer due to winding resistances. However, if you design a system based on everything being 'just right' and you have no reserve power available you will have problems.
If used on a 70V line, the closest to our required ratio (to obtain around 2.5W) is obtained from the 5W tap, but now we must revise the number of speakers because the turns ratio is different. If we neglect this the amp will be overloaded. With 8Ω speakers, we'll have a lower line impedance (1,800Ω instead of 2,450Ω), so we can use a maximum of 36 speakers - not 50 as we could before ...
Z = Zspkr × Zratio (5W tap)
Z = 8 × 225 = 1,800Ω
Apparently small differences multiply quickly, and it's all too easy to miscalculate and overload the amplifier unless you know how to calculate the voltage and impedance ratios.
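The revised speaker count falls out of the same arithmetic: divide the per-speaker reflected impedance by the minimum line load and round down. A sketch for the 5W-tap case just described:

```python
# Speaker count on a 70V/100W line when the 5W tap (15:1 ratio) is used
# instead of the intended 17.5:1 ratio.
z_each = 8 * 15 ** 2            # 1800 ohms reflected per speaker
z_min = 70 ** 2 / 100           # 49 ohms minimum line load for the amp
max_speakers = int(z_each // z_min)
print(max_speakers)             # 36 - not the 50 the taps might suggest
```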
It is accepted by all amplifier manufacturers that the nominal impedance of a speaker is simply a marketing figure, and the real impedance will be higher at some frequencies and lower at others. There is a safety margin included in all amp designs to accommodate this fact, but it is very unwise to deliberately overload the amp just because you think it has an inbuilt margin for error. It probably does, but the amp has to work harder, and may overheat and fail if used with a lower than rated total load impedance.
If you used all 50 speakers with the transformer described above set to the 5W tap, the amp is now expected to deliver a maximum power of over 136W into a 36Ω load (instead of the 49Ω load it was designed for). It might survive, or it might not. It will certainly have to work harder (and thus get hotter), but the lowered impedance may also cause the amp's protection circuits to operate, something that must be avoided unless there is a real fault.
The above represents absolutely the least of anyone's concerns though. There are much more serious matters that need to be addressed, but it seems that even many established manufacturers are unaware of the issues (or perhaps marginally aware at best). It is incredibly easy to destroy an amplifier if it's allowed to push a transformer into saturation. Even protection schemes that will prevent failure with a short-circuited output may be unable to save the amp when driving a saturated transformer.
You can run these tests on any transformers you happen to have handy, including mains transformers. Indeed, some small mains transformers can give much better results than 'proper' speaker transformers if they happen to have the right voltage ratio.
The final number of speakers that can be added will always be lower than expected due to transformer insertion loss. This assumes that you really do expect exactly 5W from each 5W speaker - in reality it doesn't matter much. The final SPL you need is determined by a great many factors that can never be controlled and may even vary during the day. Loss of a Watt here or there is meaningless in the greater scheme of things.
I was recently asked why the speaker transformers use a tapped primary, when it would presumably be better if the primary were fixed, with taps on the secondary. This seems quite reasonable, but fails to account for winding resistance due to the number of primary turns needed. If we look at the transformer described above and in Table 2.4.1, we see that for 5W output (100V line), the primary resistance is 61Ω, rising to 208Ω for the 0.5W tap. Ultimately, it's all about saturation! If the transformer had a single primary with 61Ω resistance, then with a speaker connected to a secondary 0.5W tap, the transformer would be close to unloaded. This increases the risk of saturation quite dramatically.
For a given low frequency limit, an unloaded transformer will saturate at a much lower voltage than one that's loaded to its design rating. To prevent this, the transformer would need more primary turns (and therefore a higher winding resistance). This makes little difference for 0.5W output, but the extra resistance will cause greater losses when the 5W tap is used. The manufacturers of these transformers worked out a long time ago that the easiest (and cheapest) method is the one used - a tapped primary.
As a side note, if you use the 5W tap with a 100V line, the 0.5W tap will have a voltage of over 300V at maximum level (~220V for a 70V line). Unused primary taps should be insulated with a suitable cover, perhaps a piece of heatshrink tubing or a purpose-designed insulator cap. This is rarely done, but it does present a potential (sorry) hazard. I'm not aware of any regulations that cover this, but the risk is quite real.
The situation is simpler with 100V lines. For 100W, the maximum line voltage is 100V, but all calculations are pretty much the same as for 70V lines. For 100W, the line impedance is 100Ω. This reduces the current compared to the 70V system. Transformers intended for 100V systems can be used with a 70V system, but the converse may not be true. Transformers wound for 70V will have less inductance, and may saturate earlier than expected if the lower frequency limit isn't increased by a factor of 1.4 (meaning a lower limit of ~70Hz for a nominal 50Hz transformer).
Z = V² / P, so ...
Z = 100² / 100 = 100Ω
Now it's easy to determine the high voltage line current at full rated power ...
I = V / Z, so ...
I = 100 / 100 = 1.0A
Knowing the current allows you to calculate the voltage drop caused by the speaker line(s), so the proper cable can be installed to minimise losses. From the speaker side, if we use 8Ω speakers set to the 2W tap, we know that we should get 2W maximum at the speaker, so the voltage ratio can be determined, and from that we can calculate the impedance ratio and then the impedance presented to the line.
V = √( P × Z )
V = √( 2 × 8 ) = 4V

Vratio = Vline / Vspkr
Vratio = 100 / 4 = 25 : 1

Zratio = Vratio²
Zratio = 25² = 625 : 1

Z = Zspkr × Zratio
Z = 8 × 625 = 5kΩ for each speaker
As you can see, the calculations use the same formulae, but the end results are different. The power delivered to each speaker isn't changed though, and cabling losses are slightly lower for the same gauge of cable.
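The cable-loss advantage of the 100V line is easy to quantify: at the same total power the current is lower, and I²R loss in the cable falls with the square of the current. A sketch, where the 0.5Ω cable resistance is a hypothetical figure chosen purely for illustration:

```python
# I^2.R cable loss for the same 100W system on 70V vs 100V lines.
# r_cable = 0.5 ohm is a hypothetical round-trip cable resistance.
def cable_loss(v_line, p_total=100.0, r_cable=0.5):
    i = p_total / v_line          # full-power line current (resistive load)
    return i ** 2 * r_cable       # power dissipated in the cable

print(round(cable_loss(70), 2))   # ~1.02 W lost at 1.43A
print(round(cable_loss(100), 2))  # 0.5 W lost at 1.0A
```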
Despite the vast amount of information on the Net, there is very little that discusses transformer core saturation. One would expect that waveforms such as those shown in Figures 5, 6 and (perhaps) 7 would be plentiful, but one would be mistaken. The fourth reference [4] was only found after an extensive search (when the article was almost complete), and is the only one I've found that discusses transformer saturation in any depth. It's rather disturbing when one of the most important pieces of information about 70/100V line systems is so difficult to find.
Transformer saturation is simply an amplifier killer. It is essential that any amplifier connected to a transformer is designed specifically for the purpose, or is provided with enough external protection to limit the current so as to prevent damage. I was able to create peak saturation currents of over 50A into two perfectly ordinary line output transformers - both were designed to match a 100W power amplifier to a 70V or 100V line.
Simply stated, saturation is a function of voltage and time. Any transformer will saturate if the applied voltage remains at one polarity for long enough. This is why you see the saturation current rise to a peak at the zero-crossing point of the applied voltage waveform (see Figures 5, 6 & 7). Once the current rise is no longer limited by inductance (which approaches zero as the core enters saturation), it becomes limited only by the DC resistance of the winding. The longer the waveform remains at one polarity (as the frequency is reduced, for example), the further the core is driven into saturation - it matters not whether the voltage waveform is sine, square or anything else; the current will increase rapidly as the core saturates.
Higher voltages increase the rate-of-change of the magnetising current, so as voltage is increased, the frequency at which the core saturates also increases. These two parameters (voltage and frequency) are inextricably linked - if one is increased, so is the other and vice versa. Once the saturation point has been reached, a very small increase of voltage or decrease of frequency will cause the saturation current to increase alarmingly.
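The voltage/frequency link described above is the familiar volts-per-hertz rule: peak flux is proportional to V/f, so the voltage a core can stand scales directly with frequency. An idealised sketch (real cores, especially toroids, are less forgiving near the limit, as the measurements below show):

```python
# Constant volts-per-hertz (flux ~ V/f) scaling of the saturation limit.
# Example reference point: a core that just saturates at 70V and 40Hz.
def max_voltage(f, v_ref=70.0, f_ref=40.0):
    return v_ref * f / f_ref

print(max_voltage(20))   # 35.0V  - half the frequency, half the voltage
print(max_voltage(80))   # 140.0V - double the frequency, double the voltage
```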
It is important to understand that transformer saturation is not affected substantially by the power delivered to the circuit. Saturation effects are worse when the transformer is unloaded! While this may be counter-intuitive, it is true regardless of whether you believe it to be so or not. There is more information available in the article Transformers - Part 2. Although the article concentrates on mains power transformers, the essential properties are unchanged.
Most people will assume that a 'better' transformer (for example a toroid instead of an E-I laminated unit) will improve things, but in reality the exact opposite is true. An E-I transformer has losses that help to protect the driving amplifier, and the saturation curve is significantly less savage. Measurement data are shown below, and the values were measured with real transformers, driven from a real amplifier that can provide ±50A spikes with ease (the dual-board version of P68, but with a reduced supply voltage for these tests).
Note that all measurements were taken with the transformer(s) unloaded. This is the worst case situation, but it will happen in an installation. As the transformers are loaded, the saturation effects are reduced - indeed, if the secondary is short-circuited, the transformer will never saturate (but the amplifier will probably blow up). For any reasonably sized 70V line output transformer, the primary's winding resistance is so low that the difference between the loaded and unloaded saturation currents will be negligible in the greater scheme of things.

NOTE: Do NOT run these tests unless you are absolutely certain that the amp can handle saturation. These data are provided for information, and although easily duplicated, they can kill an amplifier very easily. Saturation tests are shown further down, and provide an easy way to take measurements without placing the amp at risk.
   Toroidal Core      |       E-I Core
 Frequency |  I sat   |  Frequency |  I sat
  40.0 Hz  |   1 A    |   40.0 Hz  |   1 A
  39.1 Hz  |   2 A    |   33.6 Hz  |   2 A
  38.5 Hz  |   3 A    |   31.0 Hz  |   3 A
  38.1 Hz  |   4 A    |   28.8 Hz  |   4 A
  37.6 Hz  |   6 A    |   27.0 Hz  |   6 A
  37.3 Hz  |   8 A    |   26.3 Hz  |   8 A
  37.1 Hz  |  10 A    |   25.6 Hz  |  10 A
  36.3 Hz  |  20 A    |   22.8 Hz  |  20 A
  35.2 Hz  |  30 A    |   20.5 Hz  |  30 A
  34.2 Hz  |  40 A    |   18.7 Hz  |  40 A
The test voltages I used produced around 75V on the 70V line output for the toroid, and 63V for the E-I transformer. These were chosen purely for consistency of measurement - at 40Hz the voltage is already causing saturation, and the input voltage and initial frequency simply establish a baseline. I don't have one to test (and they are fairly uncommon), but C-core transformers are almost as bad as toroidal types, and both should be avoided - regardless of sellers' recommendations to the contrary.
While the table above looks pretty scary, it becomes even more so when shown graphically. Because of the narrow frequency range, it was easier to graph the frequency linearly rather than with the traditional logarithmic scale. As you can easily see, the toroidal transformer has a much faster rise of saturation current as frequency is reduced, and few general purpose amplifiers can be expected to provide 40A of peak current and survive.

It's not apparent from the graph, but as shown below, the peak current occurs while the full supply voltage is across the output transistors! As you can see, the current spike is at its greatest when the output voltage is zero, ensuring the maximum possible dissipation in the output transistors. This is a disastrous situation for most power amps, because if the supply voltage is (say) 35V and there is a 40A peak current, instantaneous transistor dissipation is 1,400W (no, that's not a misprint). Few transistors (or parallel combinations) will tolerate that much peak power and survive the ordeal for very long.
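The dissipation figure quoted is simple arithmetic - at the output zero crossing, essentially the full rail voltage appears across the conducting output devices just as the saturation spike peaks:

```python
# Worst-case instantaneous dissipation in the output devices: the
# saturation current spike occurs at the output zero crossing, so the
# full supply rail is across the transistors at the moment of peak current.
supply_v = 35.0    # volts across the output devices at the zero crossing
peak_i = 40.0      # amps of peak saturation current
p_inst = supply_v * peak_i
print(p_inst)      # 1400.0 - watts of instantaneous dissipation
```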
First, let's look at the waveform at the onset of saturation. I took this to be 1A peak for these tests, but that's still too high in reality. The reasons will be made clear shortly. There's 100mV peak across a 0.1Ω resistor, giving 1A peak and 223mA RMS (from the oscilloscope readout).

This is a reasonable condition, and one that most amplifiers can handle easily, but look at the graph or table above. A very small reduction of frequency (or increase of voltage) will cause a huge increase in current. At 34Hz (only 6Hz lower than the frequency used above), the peak current for the toroidal transformer has risen to the point where most amplifiers will either fail, or their protection circuits will operate. The former ensures immediate silence, and the latter causes an extremely unpleasant-sounding distortion waveform. The E-I transformer fares better, but it's immediately obvious that low frequencies and high voltages will still cause major problems for the amplifier.
You may also see that there is a slight offset - the negative peaks are smaller than the positive peaks. This is due to a small DC offset from the power amp. Normally, it would cause no trouble at all, but because of the extremely low impedance of the winding it becomes quite noticeable.

As you can see in the above, the peak current is only 35A, not 40A at all. What you are looking at is the voltage developed across a 0.1Ω resistor in series with the transformer primary (yellow trace) and the voltage on the transformer secondary (blue trace). Peak saturation current occurs at the zero crossing point of the waveform, and you can see that the voltage waveform is distorted around the zero point too.

Since there's 3.5V peak across 0.1Ω, that's 35 amps peak, and the RMS value is 10.9A (1.09V RMS across 0.1Ω). Somewhat predictably, the transformer got warm during this test, although the amplifier didn't seem to be troubled. However, it would have been very easy to destroy the amp if I wasn't very careful, despite the very substantial and robust output stage. The supply voltage was reduced to the minimum possible without clipping using a Variac - otherwise I would have had an expensive repair job.

As should now be quite obvious, there is more to this than simply hooking up a line output transformer to any old amplifier that you have lying around (or purchased for the job). Any frequency that can be delivered to the transformer that causes heavy saturation places the amp at risk, and even the low primary resistance of the winding is cause for concern. If a transformer has a primary resistance of 0.1Ω and the amplifier has an offset of 100mV (high, but not normally a problem with a loudspeaker load), there will be 1A of DC flowing through the winding! This alone may be sufficient to cause partial saturation.

DC through the primary will cause the transformer to saturate earlier in one direction, making an already troublesome combination even worse. Any frequency below that which causes saturation must be attenuated heavily, using a filter with at least 24dB/octave rolloff. It is also wise to feed the transformer via a resistor and perhaps a fuse, and the amplifier must have exemplary protection circuits. Low frequency turn-on or off thumps must be eliminated completely, perhaps using a relay timed so that it can never close until the amp is 100% stable.
Contrast all of this against the recommendations of some (including well known) amp makers, who will cheerfully sell you a line transformer to go with their amplifier, but provide absolutely no information on how to do the job properly. One that I looked at claims a frequency response from 20Hz to 20kHz, but gives no power level at which that is measured. There is zero information about saturation or protection for the amplifier - just connect it up and away you go, apparently.

Note that when re-entrant horns are used, all of the measurements can usually be dispensed with. You will need to use a high pass filter to protect the compression drivers, and that means that the cutoff frequency will be somewhere between 200-300Hz. This is well above the frequency where any even passably acceptable transformer will saturate. However, I'd run the test (described below) anyway, just to be sure.
Figure 3.4 - Transformer Saturation Spike Waveform
Any amplifier that has full SOA (safe operating area) protection will generate spikes when the transformer saturates. This is shown above, and you can see not only the spike waveform, but also the voltage and frequency that were used for the test. This waveform, as well as the two shown below, was captured with my small test amp, which has an LM1875 power amp IC built in. This IC has full protection, and at a very low level of under 8.6V and a frequency of 33Hz the action of the protection is immediately apparent. In case you were wondering, it sounds just as bad as it looks.
When a resistor/capacitor network as shown in Figure 10 below was fitted (I used 8.2Ω in parallel with 235µF), the signal distorts, but there is no sign of amplifier distress. The image on the left shows the voltage waveform across the transformer, and the one on the right shows the amplifier output waveform. The amp is now properly protected, and although this technique does not prevent saturation, it does save the amplifier from enormous stress.

The protection circuits may well save the amp, but the DC protection network means that high level, low frequency signals cause comparatively subtle distortion rather than the really evil-sounding spiked waveform shown in Figure 7. The amp is also isolated from the very low DC resistance of the transformer primary. However, I still consider the use of a properly configured high order filter (at least 24dB/octave) to be absolutely essential - both are needed, always.
There is a fairly easy way that you can test a transformer to find the saturation limit. All you need is an amplifier with enough output voltage swing to drive the transformer to full output, a 10Ω 5W resistor, a signal generator (sinewave) and a multimeter (preferably true RMS). Connect everything up as shown below. The diagram also shows how to measure the saturation frequency of the speaker transformers ... provided it is higher than that for the output transformer!
It's safer to use the 100V output of the output transformer (if provided) for the speaker transformer test, with the amplifier output reduced until you have 70V at the output. You must use the highest power tap that is provided on the speaker transformer. Even if you don't plan to use it for your installation, that doesn't mean that someone won't change it later. The transformer will saturate at a higher frequency for the highest power tap, so testing at lower power taps will give you false hope.

The measurement details are shown in the next section. Don't attempt to run both tests at the same time, but it's alright to leave the 10Ω resistor in series with the output transformer when doing the speaker transformer tests, provided you can still get the required line voltage.

Apply a signal at around 1kHz, and increase the amp's output voltage until the secondary of the transformer gives 70V (for a 70V line - otherwise the desired line voltage). Measure the voltage across the 10Ω resistor and note it down. Slowly reduce the frequency until the voltage measured across the resistor is no more than 3 to 3.5 times the voltage you measured at 1kHz. This can be expected to be somewhere between 50Hz and 100Hz, depending on the size of the transformer compared to its power rating - bigger transformers will work at lower frequencies and/or higher power.

Note the frequency. This is the lowest frequency at which the transformer should be used at the voltage used. Include a filter with at least 24dB/octave rolloff (preferably 36dB/octave - see Project 99), set with a -3dB frequency that's no lower than the test frequency. For example ...
    0.65V across 10Ω at 1kHz
    2.1V at 70Hz
Therefore, the filter should be configured so that its -3dB frequency is 70Hz or above. The values above were taken from a test I did with the E-I line transformer. The selection of 70Hz allows the transformer to be driven to full power easily, with no risk of saturation unless the input voltage is well in excess of that required. This is why amplifiers should have the correct power rating for the transformer used.
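The acceptance criterion described above reduces to a simple ratio check. Using the measured figures from the E-I transformer test:

```python
# Saturation test: measure the voltage across the series 10 ohm resistor
# at 1kHz (baseline), then lower the frequency until the reading has risen
# by 3 to 3.5 times. That frequency sets the minimum -3dB point for the
# high-pass filter.
v_1khz = 0.65          # volts across 10 ohms at 1kHz (baseline)
v_low = 2.10           # volts across 10 ohms at 70Hz
ratio = v_low / v_1khz
print(round(ratio, 2))   # 3.23 - inside the 3 to 3.5 window
filter_f3 = 70.0         # so the filter's -3dB point must be 70Hz or above
```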
Note: Should the amp have more output voltage capability than required to get 70V/100V, the transformer may saturate due to the higher voltage, even at frequencies above the measured limit. A voltage increase of 6dB means that the saturation frequency is doubled!
Voltage

This brings us to the next issue - voltage. The saturation curve of a transformer will show that if the frequency is reduced by one octave, the applied voltage must be reduced by 6dB (half the voltage) for the same saturation current. While this might seem to demonstrate that a 6dB/octave high pass filter is sufficient, it simply cannot prevent excess voltage at low frequencies. High-level low-frequency noises from connected equipment being turned on and off, dropped microphones or just the simple act of adjusting the bass tone control can defeat the efforts of a simple filter. No filter of less than 24dB/octave is sufficient to protect the system.
A transformer that is on the verge of saturation at a particular voltage and frequency will saturate heavily if either voltage is increased or frequency is reduced. The effects are identical, as shown in the following table. The test frequency was 40Hz, I used the 70V tap, and the same two transformers were used as for the frequency test.
 Saturation Current | Toroid Output Voltage | E-I Output Voltage
        1 A         |        75.4 V         |       63.0 V
        2 A         |        78.9 V         |       76.8 V
        4 A         |        80.3 V         |       88.4 V
       10 A         |        82.1 V         |      100.0 V
       20 A         |        83.9 V         |      107.7 V *
       30 A         |        84.9 V         |      111.5 V *
       40 A         |        86.2 V         |      115.4 V *
* The power amp was clipping when these three tests were performed. Without clipping, the voltages would have been much higher. This also demonstrates clearly that just because the amplifier clips, this does not prevent or reduce saturation.
It is very clear that the E-I transformer is far more tolerant of excess voltage and low frequencies than the toroidal. It is fair to say that using a toroidal transformer for this application is a recipe for disaster - they are simply not suitable for the job because they have such a vicious saturation characteristic. The same applies for C-cores, which although uncommon, do exist for high voltage systems.

A fast peak limiter can be used to 'tame' the voltage output from larger than necessary power amps, but it has to be 100% effective, and not generate any low frequency artifacts when it operates. Use of a limiter should be considered mandatory anyway, as it will prevent the customer from driving the amp into clipping, which results in harsh distortion that is very unpleasant for those subjected to it. If the system is also used for emergency evacuation announcements and/or sirens, these should bypass the limiter in most cases.

Alternatively, re-test the transformer with the amplifier at the onset of clipping, provided the line output voltage is no more than 3dB greater than the nominal voltage (100V for a 70V line or 140V for a 100V line). This can only be done if the extra voltage does not cause a conflict with regulations or other conditions that may apply to the installation.
Now that the main step-up transformer has been covered, we can look at the speaker transformers. These will also be subjected to saturation, and although the effect of just one is insignificant, when there are perhaps 30 or more of them connected to the amplifier the effect is just as bad as for a saturating line drive transformer.

Using the same transformer discussed above, it's useful to check its voltage and frequency limits. Since the tranny is rated for 100V lines but is being used with a 70V line, we might expect it to work to a lower frequency than would be the case when used at full rated voltage. However, the typical speaker transformer is made to a price, and good low frequency response is not a parameter that's considered. Nor will it be included in the sales literature for cheap examples.

It was established that with a 70V line, the 5W tap was the best match for 2W output into an 8Ω speaker for the transformer I have. The primary resistance for the 5W tap is 133Ω and the impedance cannot fall below that, regardless of saturation. As with all transformers, the output will be grossly distorted when the core saturates, and this alone is reason enough to restrict the low frequency to something sensible.

We already know the impedance that needs to be reflected back to the 70V line (1,800Ω), and the maximum current from the line (ignoring losses for the moment) is therefore ...
    I = V / R
    I = 70 / 1800 = 38.9mA
When I tested this transformer with 70V at 50Hz, saturation was clearly evident - 200mA peak (83mA RMS). This is more than double the current that should be drawn by the speaker, without a speaker even being connected! Most speaker transformers are very basic - expect that few can manage anything below 70Hz - regardless of claims made!
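The comparison above is easily checked - the expected line current follows from the reflected impedance, and the measured no-load magnetising current can be set against it:

```python
# Expected current for a speaker transformer presenting 1800 ohms to a
# 70V line, versus the measured no-load magnetising current at 50Hz.
line_v = 70.0
reflected_z = 1800.0
i_expected = line_v / reflected_z          # amps drawn with the speaker loaded
i_magnetising = 0.083                      # amps RMS, measured at 50Hz, no load

print(round(i_expected * 1000, 1))         # 38.9 - mA expected from the line
print(round(i_magnetising / i_expected, 2))  # the magnetising current alone is
                                             # over twice the expected draw
```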
Again, the same test that was used for the amplifier transformer can be applied, except that we need a variable frequency source of 70V RMS - we'll use a 100Ω test resistor and the power amp line transformer, but wired for 100V out so it can't saturate during the test. If the same criterion is adopted as before, we will need to limit the LF response to no less than that which increases magnetising current by 3-3.5 times compared to the 1kHz value.
    0.445V across 100Ω at 1kHz (4.45mA)
    1.45V at 80Hz (14.5mA)
    4.08V at 80Hz, secondary loaded with 8Ω resistive (40.8mA)
As noted earlier, all tests were conducted with the transformers unloaded, but I included the load to double-check the result. Adding a load reduces the effects of saturation, so a higher voltage or lower frequency can be applied than the tests indicate. However, the no-load test is far safer for the installation.
No-load testing is also more realistic than you might imagine, because the saturation frequency of the transformer and the resonant frequency of the speaker will be at very similar frequencies (horn speakers not included). At resonance, the impedance of a cone speaker rises dramatically, so the transformer will be operating at a very light load, and will saturate earlier than you would measure with a resistive load (this has been tested and verified).

Based on the above, the transformer I have should have a cutoff frequency of 80Hz (70V line). While it is possible to get it down to 70Hz to match the main amp output transformer, there is a small risk. In this case, I would be inclined to accept the risk - it's highly unlikely that all connected speakers will become open circuit from the transformer secondaries, and the loaded performance at 70Hz was found to be acceptable with both resistive and speaker loads. This is primarily because of losses in the primary winding, which has a DC resistance of 99Ω for the 2W tap (I consider the 5W tap to be unusable), so the effective operating voltage is reduced slightly.

At the full 70V line voltage and with a speaker connected, some distortion at 70Hz was audible, and the total audio current was roughly the same whether the speaker was connected or not! Remember that this is worst case, with the amp on the verge of clipping, so everyday performance has a good safety margin.

If the speaker transformers saturate at a lower frequency than the output transformer, the filter for the output transformer determines the -3dB frequency. You cannot run the output transformer at a lower frequency than already determined just because the speaker transformers will handle it - the output tranny couldn't cope with lower frequencies before, and still can't.
Most transformers will attenuate the high frequencies to a degree. This is due to simplistic winding techniques; in general, none of the techniques for achieving good HF response with good quality valve output transformers are used. These techniques involve a process called 'interleaving', where the primary and secondary windings are split into sections and literally interleaved.

Because of the relatively small step-up and step-down ratios of line transformers, adequate HF response is achieved without expensive and time-consuming hi-fi winding techniques. Yes, there will be some loss, but it's rare that it will cause a problem. If it's found that the speakers sound a little dull, it's easy to add some treble boost to compensate. Response above 16kHz is not needed - the low frequencies are already rolled off, and extending to 20kHz is completely pointless. Very few ceiling speakers will reproduce 20kHz in a meaningful way, and no re-entrant horns can do so. Despite claims to the contrary by some who may wax lyrical about their 'extended top end', it's a waste of time and effort to even attempt 20kHz.

In some cases, you may see ringing on the line output if the amp is fed with a squarewave for testing. This is due to the transformer's leakage inductance, and can be cured with a snubber (basically a Zobel network, with a resistor and capacitor in series). It's usually safe to ignore the effects, but if it worries you, you will usually have to determine the optimum resistance and capacitance empirically (by experiment). It's certainly possible to calculate the values if you know the leakage inductance and the cable capacitance on the secondary side, but mostly you won't know either, and a bit of ringing is rarely a problem. This is public address, not hi-fi.
Since there is resistance in the transformer windings, there are losses. Insertion loss is normally quoted in dB, and indicates how much power will be lost by each transformer. Compared to other system losses (especially the resistance of long cable runs), transformer insertion loss is not insignificant, but it should not be an issue with a well designed system. This is especially true for the output (line driver) transformers.

There are many things that should be specified for line transformers, but insertion loss is pretty much standard fare, despite being only marginally useful. It's common for the loudspeaker sensitivity, or the SPL demanded by the surroundings, to vary by far more than the typical insertion loss. Attempting to set the system SPL to an exact figure is pointless, because all reproduced material has some dynamic range, and that means the level varies anyway.
 Toroid, Rp = 0.16Ω  |  E-I, Rp = 0.19Ω    |  Speaker, Rs = 0.47Ω
  50 V | Rs = 1.36Ω  |   50 V | Rs = 1.00Ω |  0.5 W | Rp = 208Ω
  70 V | Rs = 2.44Ω  |   70 V | Rs = 1.54Ω |   1 W  | Rp = 143Ω
 100 V | Rs = 4.00Ω  |  100 V | Rs = 2.38Ω |   2 W  | Rp = 99Ω
                     |                     |   5 W  | Rp = 61Ω
Table 3.4.1 - Measured Winding Resistances
Table 3.4.1 shows the winding resistances I measured for the three transformers I have on hand. The winding resistance for the output (step-up) transformers is low, but it can't be ignored. Measurement tells me that the insertion loss for the two output transformers is around 0.7dB at full load. This is reduced if the total load impedance is higher than the minimum 49Ω we determined in section 2.4.
Most speaker transformers have an insertion loss of around 0.5 - 1.5dB, but the figures quoted are often rather optimistic, especially for cheap transformers. To put this into perspective, if a speaker transformer has an insertion loss of 1.5dB and is connected to the 5W tap, the transformer/speaker combination will require around 7W of amplifier power in order to get 5W delivered to the speaker.
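Since insertion loss is quoted in dB of power, the amplifier power needed for a given tap works out as follows:

```python
def amp_power_needed(tap_power_w, insertion_loss_db):
    """Amplifier power required to deliver tap_power_w to the speaker,
    through a transformer with the given insertion loss (dB of power)."""
    return tap_power_w * 10 ** (insertion_loss_db / 10)

# 5W tap with 1.5dB insertion loss:
print(round(amp_power_needed(5.0, 1.5), 1))   # 7.1 - watts in for 5W out
```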
Insertion loss is entirely the result of winding resistance. Higher resistance means more insertion loss for a given speaker power. It's easy to test, and the test can be at any convenient input voltage. I used 10V, and the measured insertion loss of the speaker transformer was just over 1dB at 1kHz with an 8Ω resistive load. Given the typical impedance curve of most cone loudspeakers, the actual loss will typically be lower than the measured value, because the impedance will only match the rated 8Ω over a limited frequency range. Note that re-entrant horns present a relatively constant load across their frequency range, because their impedance usually doesn't vary as much as a cone speaker's.

The effects of insertion loss will reduce the number of speakers that can be used with a given amplifier, or the speakers will be up to 1.5dB quieter than expected. No installed system should be so close to the limits that a loss of 1.5dB can't easily be corrected by a small increase of output voltage ... via the volume control.
There are several things about the amplifier itself that must be considered. Under no circumstances can the amp be allowed to produce a low frequency thump when switched on/off. Any significant LF energy will cause instant saturation of the output transformer and possible failure of output transistors. If the amp uses a relay as part of its protection circuit, the simple action of the relay opening and/or closing at the wrong part of the AC waveform can cause extremely high saturation current and/or a high 'flyback' voltage. The most likely effect of this will be that the protection circuit trips again, and this can easily repeat until the amp fails completely.

An input clipping and muting circuit is also essential, and this should be after the high pass filter. The filter itself will likely produce a high offset as power is applied and removed, and if this gets through to the power amp it will be amplified and again cause heavy saturation. The reason for all these protection systems is simple - when a system is installed, no-one knows what the customer will do with it. Something as simple as a dropped microphone can cause a low frequency, high amplitude signal sufficient to cause output stage failure in the power amplifier.
Few commercial or high-power PA amps (other than those specifically designed for constant voltage line usage) will satisfy these requirements, and almost no domestic amps will even come close. It is highly recommended that a resistor/capacitor network is included to help protect the amp against the extremely low DC resistance of the transformer, as shown below ...
The DC protection network needs to be tuned so that only frequencies well below those which cause saturation are attenuated. This is not a filter; it simply isolates the amplifier from the low resistance of the transformer's primary winding. The capacitance should be 4 times the value you might expect, as this ensures that the voltage across the resistor is kept low, reducing heat. For example, if a 3.9Ω resistor is used and bypassed with caps as shown, the capacitors for a 4Ω transformer load need a combined capacitance of about 2,700µF so that operation at 70Hz is not affected - 4 x 2,700µF in series/parallel gives 2,700µF. While the use of 63V caps (preferably rated at 105°C) might seem like overkill, it's not. The ripple current rating has to be high enough to ensure that the caps never even get warm in use. The typical average ripple current with the values shown will be about 3A with a 100W amp at the onset of clipping. The resistor needs a rating of at least 10W for the example amplifier. Typical 2,700µF/63V caps should have a ripple current rating of greater than 2A RMS, so a series/parallel connection has some reserve.

The capacitors should not be situated close to the resistor, as the resistor may get very hot under some fault conditions and could overheat the electros. The circuit is easily made up on tagstrips and designed so it's easy to replace. The capacitor current may be rather high if the high voltage line is shorted, so good amplifier fault protection is a must to protect the capacitors as well. The network shown won't be especially cheap, but regardless of the cost, it's still far cheaper than having to send someone to replace the amplifier and then having the amp repaired (only to fail again if these precautions aren't adopted) - as well as finding and fixing the original fault, of course.
The capacitance is calculated using a slight modification of the traditional formula ...

    C = 1 / ( 2π × R × f ) × 4 (result is in Farads)
If the value calculated does not exist or can't be located easily, use the next larger size cap. For example, if you work out that 2,700µF is ideal but unobtainable at a sensible price, you can use 3,300µF or even 4,700µF. Remember to check the ripple current rating!
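Using the worked example from the text (4Ω transformer load, 70Hz lowest frequency), the modified formula gives:

```python
import math

def protection_cap_uF(load_ohms, f_low_hz):
    """Combined capacitance for the DC protection network: the usual
    1 / (2*pi*R*f) high-pass value, multiplied by 4 so that the bypassed
    resistor sees very little voltage at the lowest working frequency."""
    return 4 / (2 * math.pi * load_ohms * f_low_hz) * 1e6   # microfarads

c = protection_cap_uF(4.0, 70.0)
print(round(c))   # 2274 - microfarads; use the next standard value, 2,700uF
```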
Using this arrangement does not mean that the steep high-pass filter can be eliminated! This is in addition to any other frequency protection scheme. Likewise, the amplifier's output stage protection circuits still need to be extremely good - capable of protecting the amplifier against a long term short-circuit. Few can do so! With the high pass filter in circuit, the resistor will normally have very little voltage across it, so it will normally only get slightly warm. However, if the high voltage line is shorted or the amp fails, it may get very hot. The capacitors may be damaged by the high current if the HV line is shorted, but the amp's own protection circuits should normally limit the current to a safe value.

IMO, all commercial installations should use amplifiers that have been designed from the ground up for this purpose alone. The idea that PA or domestic amplifiers can simply be fitted with transformers and used is not sensible - there are too many variables, any one of which can render the system inoperable. The low resistance of the transformer makes it a very hostile load for any amplifier.

It's also very important to understand that just because the amplifier clips, this does not reduce saturation effects. These remain as bad or worse than when the amplifier is not clipped, because the voltage remains at the peak value for longer, allowing the current in the transformer to increase to dangerous levels.
To be certain that an amplifier/transformer combination will work reliably, it must be able to survive some basic torture tests. If the amp doesn't have the basic protection schemes that have been outlined here, there is every chance that it will fail. The tests can be done in a few minutes, and passing all of them will improve your confidence in the installation. In essence, the tests are ...

1 - This test simulates what happens if a mic lead develops an open circuit shield, or can be the result of connecting an auxiliary product (such as a CD player). In both cases it is possible to get remarkably high voltages, but at quite high impedance levels, so current is low. The amplifier's inputs must be tolerant of any real-world fault, and not suffer any damage. In some installations, mic leads can be very long, and faults are inevitable during the life of the equipment.

2 - Verify that there is no evidence of transformer saturation with any possible input signal. In some cases the protection circuits will operate, but the test must show that no damage occurs and that the amp continues to function after the test.
3 - At some time, the amp is going to get feedback from the speaker line back to the input. This test is capable of blowing up any amplifier ever made, so you may be understandably reluctant to destroy the amp for no good reason. It is quite difficult to ensure that an amplifier can still reproduce normal high frequencies with audio, yet cannot be destroyed by this test - however it can be done, by using a low pass input filter at ~16kHz and a peak limiter that limits high frequencies to a lower level than low and midrange frequencies. I know it can be done because I've done it.
4 - This test is used to simulate normal speech, but with the amp driven to clipping. Speech waveforms are almost always asymmetrical, and some amplifiers cannot cope with asymmetry without producing a (sometimes significant) DC offset at the output (see Power Amplifier Clipping). The input sinewave from the test oscillator is clipped using a diode in parallel with the signal, and the amp gain should be increased to the point where the clipped input peak just causes the amplifier to clip. The unclipped part of the input waveform will now be heavily clipped by the amplifier.

If the DC protection circuit is not included between the amp and transformer, this test will cause very heavy transformer saturation with some amps. The test can be bypassed if the DC protection circuit is included.

5 - The final test is self explanatory. It is inevitable that the high voltage speaker line will be shorted at some stage, so it's better to know what happens before the event, rather than having to figure out what went wrong afterwards. You need to be very confident of the amp's protection circuits, and use of the DC protection circuit may make the test less likely to cause amplifier failure.
A colleague used to work on line systems some time ago, and the faults he encountered included everything listed above. With most of the equipment, the only uncertainty was how long it would withstand the abuse before failing - everyone knew that it would fail, just not when!
+ + +As outlined throughout this article, there is an absolute need for some input signal conditioning to ensure that the amplifier is as reliable as possible. Some of the essential processing may be included within the amplifier if it's been designed specifically for high voltage line use, but that's not always something you can count on. The following items are not listed in order of importance - all should be used as a matter of course.
+ +While the list looks rather daunting, none of the items listed is expensive or difficult. The peak limiter is a possible exception, but savvy installers should look for amplifiers that include this feature. Many suppliers offer peak limiter modules for high voltage audio amplifiers.
+ +The drawing above shows a suitable diode clipping network, along with the high and low pass filters. The maximum signal level is limited to about 1.3V RMS before the clipper starts to limit the peak amplitude. Distortion with 1.3V RMS is under 2%. If more level is needed, just add more diodes - for example an additional 6 diodes will raise the maximum signal level to 2.7V RMS. Make sure that you use the smallest number of diodes possible. The number needed is determined by the power amplifier's input sensitivity with the volume control (if fitted) set to maximum.
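The clipping-level arithmetic above is easy to verify. The sketch below (Python, illustrative only) assumes silicon diodes with a forward voltage of about 0.65V and three diodes per side in the clipping network - both are assumptions, since the actual drawing isn't reproduced here. A sinewave whose peak just reaches the clipper threshold has an RMS value of the peak divided by √2.

```python
import math

VF = 0.65  # assumed silicon diode forward voltage (varies with current and device)

def max_rms_before_clipping(diodes_per_side: int, vf: float = VF) -> float:
    """RMS level of a sinewave whose peak just reaches the clipper threshold."""
    v_peak = diodes_per_side * vf
    return v_peak / math.sqrt(2)

# Assuming 3 diodes per side in the published drawing (an assumption):
level_3 = max_rms_before_clipping(3)   # roughly 1.4V RMS
level_6 = max_rms_before_clipping(6)   # roughly 2.8V RMS
```

The computed levels agree with the quoted 1.3V and 2.7V RMS figures within the normal spread of diode forward voltages.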
+ +The filter frequencies are as shown in the drawing for the values indicated, and the frequency can be increased or decreased by changing capacitors C1...C4. Lower capacitance gives a higher frequency and vice versa. If C1...C4 are changed to 82nF the -3dB frequency is increased to 85Hz and if the caps are increased to 120nF the -3dB frequency is reduced to 58Hz. The selection of parts for the 16kHz low-pass filter is based on a typical power amplifier or peak limiter input impedance of 22k, and should not need changing.
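Because a first-order RC filter's corner frequency is f = 1/(2πRC), the corner scales inversely with capacitance. A quick check (Python, illustrative only; the 100nF/70Hz reference point is inferred from the quoted figures, not stated in this excerpt):

```python
def scaled_cutoff(f_ref: float, c_ref: float, c_new: float) -> float:
    """First-order RC cutoff scales inversely with C, since f = 1/(2*pi*R*C)."""
    return f_ref * c_ref / c_new

# Assumed reference point: 100nF caps giving a ~70Hz high-pass corner.
f_82n = scaled_cutoff(70.0, 100e-9, 82e-9)    # ~85Hz, as quoted
f_120n = scaled_cutoff(70.0, 100e-9, 120e-9)  # ~58Hz, as quoted
```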
+ +A suitable compressor is shown in Project 152 (Part II) in Fig. 12. It was designed for bass guitar, but works very well with any programme material. It's not particularly fast, but the diode clipper will prevent rapid excursions. The input would ideally come from the output of the filter shown above, and the degree of compression should be just enough to ensure that the level is fairly constant. The output control determines the voltage applied to the power amplifier, and hence the maximum line voltage.
+ +With these circuits in place, you provide good protection for the entire system. All are easy to build and use low-cost parts throughout. Of course nothing is completely foolproof, partly because fools are often very ingenious. The job of a PA installer is to make it as difficult as possible for anyone to circumvent the protection systems, deliberately or otherwise.
There are countless amplifiers on the market now that can provide 70 or 100V RMS outputs when connected as BTL (bridge-tied-load). Each amp only needs to be able to supply 35V or 50V RMS, and because the outputs are 180° apart, the total voltage is the sum of the two outputs. If such an amplifier is designed to supply 4Ω loads, rated power output needs to be 306W/4Ω (70V total) or 625W/4Ω (100V total). Great care is needed, because if the amp is capable of more power, that means the line voltage can be higher than the design goal, and all the speaker transformers will saturate at higher frequencies than expected.
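The power figures can be checked directly: each bridged amplifier supplies half the line voltage, and the rated power into the design load follows from P = V²/R. A quick sketch (Python, illustrative only):

```python
def btl_rated_power(line_v_rms: float, load_ohms: float = 4.0) -> float:
    """Rated power each bridged amp needs into load_ohms so it can
    supply half the total line voltage."""
    per_amp_v = line_v_rms / 2.0
    return per_amp_v ** 2 / load_ohms

p70 = btl_rated_power(70.0)    # 306.25W, matching the quoted 306W/4R
p100 = btl_rated_power(100.0)  # 625W, as quoted
```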
+ +A fast peak limiter that can be set accurately to absolutely limit the maximum voltage is one answer, but it must be secured so that no-one can play with the settings. Because amps with this much power need relatively high voltage power supplies, they need extremely effective protection circuits. They must be fast acting, and capable of protecting the amp indefinitely with a shorted output. This is a big ask for any design, and few high power PA amps are suitable. Because of the relatively high output voltage, they will normally be rated for much more power than is sensible for a high voltage line system.
+ +Using BTL amps has another disadvantage too - many installations (particularly in the US) require that one side of the 70V or 100V line be earthed, and you can't do that with a BTL amplifier. Both speaker outputs must remain floating. While they do have an earth reference, that's not the same as having one side of the line earthed.
+ +Very few high power amps are designed so they can supply a short circuit load indefinitely (especially when connected in bridge mode), so a resistor should be used in series with each amplifier output. The resistor should have a value that brings the total DC circuit resistance (with all speaker transformers connected) to no less than 8Ω, split evenly between the bridged amplifiers.
+ +For example, if the DC resistance of the entire line is 4Ω, use a 2.2Ω 100W resistor in series with each output. The worst case load that the amp will ever 'see' is now 8.4Ω, which is safe for the amplifier. This doesn't address the possibility of a shorted line very close to the amp, but it's impossible to account for every possibility. 100% reliability hasn't been achieved in any electronic products thus far, and it's unlikely that perfection will ever be reached. Note that if there is a fault, the resistors will get extremely hot - consider using a thermal switch to disconnect the amp if (when) there's a fault.
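The series resistor sizing described above can be expressed as a simple calculation (Python, illustrative only; the 8Ω target and the even split between the two bridged outputs are taken from the text):

```python
def series_resistor(line_dc_ohms: float, target_total: float = 8.0) -> float:
    """Resistor needed in series with EACH bridged output so the total DC
    circuit resistance is no less than target_total ohms."""
    shortfall = max(0.0, target_total - line_dc_ohms)
    return shortfall / 2.0  # split evenly between the two amplifiers

r_each = series_resistor(4.0)   # 2.0 ohms; round up to the standard 2.2 ohm value
worst_case = 4.0 + 2 * 2.2      # 8.4 ohms, the worst-case load quoted in the text
```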
+ +When a high power amp is bridged and used in this manner, it is also unrealistic (and not very wise) to expect full power. Dedicated line amps are usually rated for 100-150W, and it's better to use multiple amps than one very large one, as there is some system redundancy when more than one amp is used.
+ + +Ideally, only amplifiers that have been specifically designed for line voltage use should even be considered. While it may be possible to save a little by cobbling amps and transformers together, the savings are likely to be short-lived unless all the precautions listed here are in place. The ideal system will use the transformer as part of the output stage, and this is especially useful when systems have to operate from a single 24V supply. These are standard for emergency evacuation systems, and use an output stage that is somewhat reminiscent of a valve output stage, but operating at much lower impedances and higher current.
+ +The circuit shown below is conceptual - it is not intended to be a real amplifier, however many may see the resemblance to a valve amp. The general principle of a 'real' transformer-coupled output stage is the same though, but it will include bias stabilisation, safe area protection for the output devices, etc. Although shown using lateral MOSFETs, most amps of this type use bipolar transistors as they are cheaper. Because of the comparatively low supply voltage, the safe area is usually much larger than for an amp using higher supply voltages ... have a look at the data sheets for a few high power BJTs to see the safe operating area of devices at various supply voltages. It is vitally important that the bias drawn by each output device is identical, or the transformer will saturate earlier than it should, and the saturation will be asymmetrical.
+ +The general idea shown above can easily provide up to 500W into a 70V or 100V line (with additional MOSFETs), even without the use of exotic output transistors. As noted above though, it's better to keep the power down to no more than perhaps 150W or so, and use multiple amplifiers. Lateral MOSFETs in the output stage are a better choice than bipolar transistors as they are more tolerant of difficult loads, but they are also a great deal more expensive. Providing protection for the stage shown above is not difficult - it's easier than for a traditional solid-state amplifier. The low supply voltage helps a lot, because it minimises the effects of second breakdown - a major cause of transistor failure even when fully protected.
+ +At full power, supply current is fairly high - as shown it will peak at over 14A (9.4A average) from a 24V DC supply when delivering 150W into the 70V line (a load impedance of 33Ω across 70V). There are many modern (and cheap) transistors that can be paralleled to get that power and current rating easily. While amps built this way are not usually capable of true high-fidelity, performance is more than acceptable for background music, announcements and alarms. Provided that low frequencies are filtered out to avoid transformer saturation, distortion can be well below 0.5% at any power level without difficulty.
+ +The most important factor is reliability. For example, a Class-D amp could be used to obtain maximum battery life for an emergency system, but the complexity may completely outweigh the advantages. The design of a Class-D amp is far more involved and consequently potentially less reliable, and esoteric and/or surface mount parts are used so it becomes difficult to service other than by replacement. When used at full volume (heavy clipping) for sirens, the amp shown is just as efficient as a typical Class-D design.
+ + +Anyone who thinks that commercial 70V/100V line installations are simple should be disabused of such notions by now. There are far more complexities and things that can go wrong than with any traditional system where amplifiers drive speakers directly. The transformers are the root cause of these problems, and failure to appreciate the ability of a transformer to destroy an amplifier will inevitably lead to tears.
+ +Neglecting losses throughout the system may lead to a system that doesn't live up to expectations. Speaker transformers are a special case, and while their losses aren't huge, they all add up. The same applies to the speaker wiring, so the gauge has to be selected to account for the length of your cable runs. Very light duty cables are alright for short runs (a few metres), but the gauge has to be increased for long runs. These can easily exceed 100 metres in a large layout, and placing the PA system centrally is rarely possible.
+ +While anyone can just follow the instructions as described on many websites and elsewhere, this is no guarantee that the system will work reliably. The processes themselves are simple enough, but unless the installer is aware of the risks (to the amplifier in particular), at some point the inevitable will happen and a low frequency signal will get through the amp with enough energy to saturate the transformer core. Even amps that have minimal protection might tolerate this a few times, but eventually the system will fail. As likely as not, the amp will be blamed ("that's the second time this month that the amp has failed - useless bloody thing!"), but this is quite unfair.
+ +The problem is that the installer didn't understand what can happen when an unsuitable amp is used to drive a transformer. The same amp may well survive for many, many years in a domestic hi-fi or as a PA amp driving speakers directly - it was simply never designed to drive a transformer! That's hardly the amp's fault.
+ +Naturally, there are already countless ordinary amplifiers connected to randomly selected transformers and without a measurement or calculation in sight. Some of these will operate for years without problems, others will fail as soon as a low frequency signal is applied. Should you choose to ignore the info presented here, you'll never know into which category your installation fits ... until it fails. Just because it doesn't fail immediately doesn't mean that it's right, or that it won't fail in a day, week, or a year. After it's installed, no-one has the slightest idea what the client will do with it, and it may end up pushed well past its limits without anyone being any the wiser.
+ +If the high pass filter and DC protection circuits haven't been included, it only needs someone to turn up the bass tone control to destroy the amp, or if it's well protected, cause horrific distortion as the protection circuits operate. To the customer, that's a fault, and it might be one that you have to fix. Now you know how to do so.
+ +Although it might seem that many of the suggested additions to the standard circuit are overkill, they are really just common sense. High voltage audio systems can have a very hard life, and are expected to work reliably for many years (for the customer, that means forever!). By ensuring that the amp is protected from all the common issues that arise when it's connected to a transformer you ensure the long-term reliability of the installation. That can't be a bad thing, especially since most of the things needed to ensure reliability add so little to the overall cost of the equipment.
+ + +Many of the topics examined in depth in this article are not mentioned anywhere, by anyone, so there are no references for transformer saturation measurements or test procedures. These were developed by experimentation and measurement of transformers I had available. The references are mainly to do with 70V line systems in general, but those shown originally have all vanished.
+ +Note that all links and references are provided so the reader can improve his/her understanding of the topic. ESP has no affiliation with any of the companies listed, and their inclusion does not imply that the information is accurate or is suitable for your requirements, nor does this note imply the opposite.
+ +![]() ![]() |
![]() | + + + + + + + |
Elliott Sound Products - Lithium Cell Charging
+ Introduction
+ 1 - Battery Management System (BMS)
+ 2 - Charging Profile
+ 3 - Constant Voltage And Constant Current Power Supplies (Chargers)
+ 4 - IC Single Cell Charging Circuit
+ 5 - Multi-Cell Charging
+ 6 - Battery Protection
+ 7 - State Of Charge (SOC) Monitoring
+ 8 - Battery Powered Projects
+ 9 - Appliance Batteries & Chargers
+ Conclusions
+ References
Charging lithium batteries or cells is (theoretically) simple, but can be fraught with difficulties, as has been shown by multiple serious failures in commercial products. These include laptop computers, mobile ('cell') phones, the so-called 'hoverboards' (aka balance boards), and even aircraft. Balance boards caused a number of house fires and destroyed or damaged many properties worldwide. If the cells aren't charged properly, there is a high risk of venting (release of high pressure gases), which is often followed by fire.
+ +Lithium is the lightest of all metallic elements, and will float on water. It is very soft, but oxidises quickly in air. Exposure to water vapour and oxygen is often enough to cause combustion, especially if there is heat involved (for example, from overcharging a lithium cell). Exposure to moist/humid air causes hydrogen gas to be generated (from the water vapour), which is of course highly flammable. Lithium melts at 180°C. Most airlines insist that lithium cells and batteries be charged to no more than 30% for transport, due to the very real risk of catastrophic fire. Despite the limitations, lithium batteries are now used in nearly all new equipment because of their very high energy density and light weight.
+ +Batteries have charge and discharge rates that are referred to 'C' - the battery or cell capacity, in Ah or mAh (amp or milliamp hours). A battery with a capacity of 1.8Ah (1,800mAh) therefore has a 'C' rating of 1.8 amps. This means that (at least in theory) the battery can supply 180mA for 10 hours (0.1C), 1.8A for 1 hour, or 18A for 6 minutes (0.1 hour or 10C). Depending on the design, Lithium batteries can supply up to 30C or more, so our hypothetical 1,800mAh battery could theoretically supply 54A for 2 minutes. Capacity may also be stated in Wh (watt hours), although this figure is usually not helpful other than in advertising brochures.
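The 'C' arithmetic is straightforward - theoretical runtime is simply capacity divided by current. A sketch (Python, illustrative only) reproducing the figures in the text:

```python
def runtime_hours(capacity_ah: float, current_a: float) -> float:
    """Theoretical runtime, ignoring the capacity loss that occurs
    at high discharge rates."""
    return capacity_ah / current_a

CAP = 1.8  # the 1,800mAh cell used as the example in the text
t_01c = runtime_hours(CAP, 0.1 * CAP)        # 10 hours at 180mA (0.1C)
t_10c = runtime_hours(CAP, 10 * CAP) * 60    # 6 minutes at 18A (10C)
t_30c = runtime_hours(CAP, 30 * CAP) * 60    # 2 minutes at 54A (30C)
```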
+ +In the US and some countries elsewhere, the Wh rating is required by shipping companies so they can determine the packaging standard needed. A single 1.8Ah cell has a stored energy of 6.7Wh [ 4 ]. Alternatively, the lithium content may need to be stated. The reference also shows how this can be calculated, although any calculation made will only be an estimate unless the battery maker specifically states the lithium content. The reason for this is the risk of fire - carriers dislike having shipments catch fire, and the lithium content may dictate how the goods will be shipped. When batteries are shipped separately (not built into equipment) they must be charged to no more than 30% capacity.
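The quoted 6.7Wh figure is just the cell capacity multiplied by the nominal cell voltage (3.7V for Li-Ion). A sketch (Python, illustrative only):

```python
def stored_energy_wh(capacity_ah: float, nominal_v: float = 3.7) -> float:
    """Approximate stored energy; shipping regulations use the Wh rating."""
    return capacity_ah * nominal_v

wh = stored_energy_wh(1.8)  # ~6.7Wh for the 1.8Ah cell in the text
```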
+ +Unlike some older battery technologies, lithium batteries cannot (and should not) be left on float charge, although it may be possible if the voltage is maintained below the maximum charge voltage. For most of the common cells in use, the maximum cell voltage is 4.2V, called the 'saturation charge' voltage. The charge voltage should be maintained at this level only for long enough for the charge current to have fallen to 10% of the initial or 1C value. However, this may be subject to interpretation because the initial charge current can have a wide range, depending on the battery and the charger.
+ +Unfortunately, while there are countless articles about lithium battery charging, there are nearly as many different suggestions, recommendations and opinions as there are articles. One of the main things that is essential when charging a lithium battery is to ensure that the voltage across each cell never exceeds the maximum allowable, and this means that each and every cell in the battery has to be monitored. There are many ICs available that have been specifically designed for lithium battery balance charging, with some systems being quite complex, but extremely comprehensive in terms of ensuring optimum performance.
+ +While the traditional lithium-ion (Li-Ion) or lithium-polymer (Li-Po) cell has a nominal voltage of 3.70V, Li-iron-phosphate (LiFePO4, aka LFP - lithium ferrophosphate) is an exception, with a nominal cell voltage of 3.20V and charging to 3.65V. Many commercial LiFePO4 batteries have in-built balancing and protection circuits, and only need to be connected to the proper charger. A relatively new addition is the Li-titanate (LTO) cell, with a nominal voltage of 2.40V and charging to 2.85V.
+ +Chargers for these alternative lithium chemistry cells are not compatible with regular 3.70-volt Li-Ion. Provision must be made to identify the systems and provide the correct charging voltage. A 3.70-volt lithium battery in a charger designed for LiFePO4 would not receive sufficient charge; a LiFePO4 in a regular charger would cause overcharge. Unlike many other chemistries, Li-Ion cells cannot absorb an overcharge, and the specific battery chemistry must be known and charging conditions adjusted to suit.
+ +Li-Ion cells operate safely within the designated operating voltages, but the battery (or a cell within the battery) becomes unstable if inadvertently charged to a higher than specified voltage. Prolonged charging above 4.30V on a Li-Ion cell designed for 4.20V will plate metallic lithium on the anode. The cathode material becomes an oxidising agent, loses stability and produces carbon dioxide (CO2). The cell pressure rises and if the charge is allowed to continue, the current interrupt device responsible for cell safety disconnects at 1,000-1,380kPa (145-200psi). Should the pressure rise further, the safety membrane on some Li-Ion cells bursts open at about 3,450kPa (500psi) and the cell may eventually vent - with flames!
+ +Not all cells are designed to withstand high internal pressures, and will show visible bulging well before the pressure has reached anything near the values shown. This is a sure sign that the cell (or battery) is damaged, and it should not be used again. Unfortunately, many of the articles you find on-line discussing balance boards (in particular) talk about the cell quality (or lack thereof) and/or the charger quality (ditto), but neglect to mention the battery management system (BMS) discussed next.
+ +This is one of the most critical elements of a lithium battery charger, but is rarely mentioned in most articles that discuss battery fires. In general, it's assumed (or not known to the writer) that the battery pack includes - or should include - a protection circuit to ensure that each cell is monitored and protected against overcharge. It's likely that cheap (or counterfeit) battery packs don't include a protection circuit at all, and any battery without this essential circuitry is generally to be avoided unless you have a proper external balance charger with a multi-pin connector. The problem is that sellers will rarely disclose (or even know) if the battery has protection or not.
+ +Unhelpfully, many sellers of batteries and chargers fail to make the distinction between battery monitoring and battery protection. These are two separate functions, and in general they are separate pieces of circuitry. Unfortunately, the term 'BMS' can mean either monitoring or protection, depending largely on the definition used by the seller, and/or their understanding of what is actually being sold.
+ +I will use the term 'balancing' to apply to the management of the charging process, and for batteries (as opposed to single cells), it's the balancing process that ensures that each cell is closely monitored during charging to maintain the correct maximum cell voltage. Protection circuits are usually connected to the battery permanently, and are often integrated within the battery pack. These are covered further below. In some cases, protection and balancing may be provided as a complete solution, in which case it truly deserves the term 'BMS' or 'battery management system'.
+ +For proper control of the charge process with more than a single cell, a battery balance system is absolutely essential. The balance circuits are responsible for ensuring that the voltage across any one cell never exceeds the maximum allowed, and is often integrated with the battery charger. Some have further provisions, such as monitoring the cell temperature as well. In large installations, the individual cell controllers communicate with a central 'master' controller that provides signalling to the device being powered, indicating state of charge (inasmuch as this parameter can be determined - it's less than an exact science), along with any other data that may be considered essential.
+ +For comparatively simple batteries with from 2 to 5 series cells, giving nominal voltages from 7.4V to 18.5V respectively, cell balance isn't particularly difficult. It does become a challenge when perhaps 110 cells are connected in series, for an output of around 400V (as may be found in an electric car for example). Cells can also be connected in parallel, most commonly as a series-parallel network. Common terminology (especially for 'hobby' batteries for model airplanes and the like) will refer to a battery as being 5S (5 series cells), or 4S2P (4 series cells, with each comprised of 2 cells in parallel).
+ +Operating cells in parallel is not a problem, and it's possible (though usually not recommended) for them to have different capacities. Of course they must use the exact same chemistry. When run in series, the cells must be as close to identical as possible. As the cells age they will do so at different rates - some cells will always deteriorate faster than others. This is where the balance system becomes essential, because the cell(s) with the lowest capacity will charge (and discharge) faster than the others in the pack. The majority of balance chargers use a regulator across each cell, and that ensures that each individual cell's charge voltage never exceeds the maximum allowed.
+ +In its simplest form, this could be done with a string of precision zener diodes, and that is actually fairly close to the systems commonly used. The voltage has to be very accurate, and ideally will be within 50mV of the desired maximum charge voltage. Although the saturation charge voltage is generally 4.2V per cell, battery life can be extended by limiting the charge voltage to perhaps 4.1 volts. Naturally, this results in slightly less energy storage.
+ +The two major components of a BMS will be looked at separately below. These may be augmented by performance monitoring (state of charge, remaining capacity, etc.), but this article concentrates on the important bits - those that maximise both safety and battery life. So-called 'fuel gauges' are a complete topic unto themselves, and they are only covered in passing here.
+ + +The graph shows the essential elements of the charge process. Initially, the charger operates in constant current (current limit) mode, with the maximum current ideally being no more than 1C (1.8A for a 1.8Ah cell or battery). Often it will be less, and sometimes a great deal less. Charging at 0.1C (180mA) would result in a charge time of 30 hours if the full saturation charge is applied. However, when a comparatively slow charge is used (typically less than 0.2C), it is possible to terminate charging as soon as the cell(s) reach 4.2V and the saturation charge isn't necessary. For example, based on the 'new' charging algorithm, the cell shown in Figure 1 may require somewhere between 12 and 15 hours to charge at 0.1C, and the charge cycle is ended as soon as the voltage reaches 4.2 volts. This is somewhat kinder to the Li-Ion cell, and voltage stress is minimised.
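The charge profile described above amounts to a simple state machine: constant current until the cell reaches the saturation voltage, then constant voltage until the current has decayed to a set fraction of the CC limit. A conceptual sketch (Python, illustrative only - not a real charger implementation; the 10% termination fraction follows the text):

```python
def charge_step(cell_v: float, charge_i: float, i_limit: float,
                v_max: float = 4.2, term_frac: float = 0.1) -> str:
    """Return the charger mode for the CC/CV profile: constant current until
    the cell reaches v_max, then constant voltage until the current has
    decayed to term_frac of the CC limit, then done."""
    if cell_v < v_max:
        return 'constant_current'
    if charge_i > term_frac * i_limit:
        return 'constant_voltage'
    return 'done'

# For a 1.8Ah cell charged at 1C (1.8A):
mode_early = charge_step(3.6, 1.8, 1.8)    # 'constant_current'
mode_mid = charge_step(4.2, 0.5, 1.8)      # 'constant_voltage' (saturation phase)
mode_end = charge_step(4.2, 0.15, 1.8)     # 'done' (current below 10% of 1C)
```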
+ +As is clearly shown in the graph, a fast charge means that the capacity lags the charge voltage, and 1C is fairly fast - especially for batteries designed for low consumption devices. After about 35 minutes, the voltage has (almost) reached the 4.2V maximum and charge current starts to fall, but the cell is only charged to around 65%. A slower charge rate means that the charge level is more closely aligned with the voltage. Like all batteries, you never get out quite as much as you put in, and you generally need to put in about 10-20% more ampere hours (or milliamp hours) than you will get back during discharge.
+ +Some chargers provide a pre-conditioning charge if the cell voltage is less than 2.5 volts. This is generally a constant current of 1/10 of the nominal full constant current charge. For example, if the charge current is set for 180mA, the cell will be charged at 18mA until the cell voltage has risen to about 3V (this varies depending on the design of the charger). Most systems will never need pre-conditioning though, because the electronics will (or should!) shut down before the cell reaches a potentially damaging level of discharge.
+ +In use, Li-Ion batteries should be kept cool. Normal room temperature (between 20° and 25°C) is ideal. Leaving charged lithium batteries in cars out in the sun is ill-advised, as is any other location where the temperature is likely to be higher than 30°C. This is doubly important when the battery is being charged. When discharged, some means of cutout is required to ensure that the cell voltage (of any cell in the battery) does not fall below 2.5 volts.
+ +It's usually better not to fully charge lithium batteries, nor allow a deep discharge. Battery life can be extended by charging to around 80-90% rather than 100%, as this all but eliminates 'voltage stress' experienced when the cell voltage reaches the full 4.2 volts. If the battery is to be stored, a charge of 30-40% is recommended, rather than a full charge. There are many recommendations, and most are ignored by most people. This is not the users' fault though - manufacturers of phones, tablets and cameras could offer an option for a reduced charge - there's plenty of processing power available to do it. This is especially important for items that don't have a user replaceable battery, because it often means that otherwise perfectly good equipment is discarded just because the battery is tired. Given the proliferation of malware for just about every operating system, it's important to ensure that battery charge settings can never be set in such a way that may cause damage.
+ + +During the initial part of the charge cycle, the charger supply should be constant current. Current regulation doesn't have to be perfect, but it does need to be within reasonable limits. We don't much care if a 1A supply actually delivers 1.1A or 0.9A, or if it varies a little depending on the voltage across the regulator. We obviously should be very concerned if it's found that the maximum current is 10A, but that simply won't happen even with a fairly crude regulator.
+ +For a purely analogue design, the LM317 is well suited for the task of current regulation, and it's also ideal for the essential voltage regulation. This reduces the overall BOM (bill of materials), since multiple different parts aren't needed. Of course, these are both linear devices, so efficiency is poor, and they require a supply voltage that's greater than the total battery voltage by at least 5 volts, and preferably somewhat more.
+ +As an alternative to using two LM317 ICs you can add a couple of transistors and resistors to create a current limiter. However, it doesn't work quite as well, the PCB real estate will be greater than the version shown here, and the cost saving is minimal. The circuit below does not include the facility for a 'pre-conditioning' or 'wake-up' charge before the full current is applied. This isn't essential if the battery is never allowed to discharge below 3V, and may not even be needed for a 2.5V minimum. Anything less than a discharged cell voltage of 2.5V will require a C/10 pre-conditioning charge. If you only ever charge at the C/10 rate, a lower charge rate is not needed.
+ +The arrangement shown will limit the current to the value determined by R1. With 12 ohms, the current is 100mA (close enough - actually 104mA), set by the resistance and the LM317's internal 1.25V reference voltage. For 1A use 1.2 ohms (5W is recommended), and the value can be determined for any current needed up to the maximum 1.5A that the LM317 can provide. At higher current, the regulator will need a heatsink, especially for the initial charge phase when considerable voltage will be across U1. The diodes prevent the battery from applying reverse polarity to the regulator (U2) if the battery is connected before the DC supply is turned on. D1 should be rated for at least double the maximum current, and will ideally be a Schottky device to minimise dissipation and voltage loss.
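The LM317 regulates so that its 1.25V reference appears across R1, so the current limit is simply 1.25/R1 (and R1 = 1.25/I for a desired current). A quick check of the quoted values (Python, illustrative only):

```python
VREF = 1.25  # LM317 internal reference voltage

def lm317_current(r1_ohms: float) -> float:
    """Current limit set by R1 in the LM317 current-regulator connection."""
    return VREF / r1_ohms

def lm317_resistor(i_amps: float) -> float:
    """R1 value required for a desired current limit."""
    return VREF / i_amps

i_12r = lm317_current(12.0)   # ~104mA, as quoted for R1 = 12 ohms
r_1a = lm317_resistor(1.0)    # 1.25 ohms; the nearest standard 1.2 ohm value gives ~1.04A
```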
This is simply the basic charger, which can be designed to fulfil the requirements described above. This is far from the full system though, as the management system and balancing circuits are missing at this stage. Each system will be different, but the basic circuit is flexible enough to accommodate most 2-4 cell battery packs. Charging can be stopped by connecting the 'Adj' pin of U1 to ground with a transistor as shown. When charging is complete, a voltage (5V is fine) is applied to the end of R3, and the current limiter is shut down. Be aware that the battery will be discharged by the combination of the balance circuits and the current passed through R4, R5 and VR1 (the latter is about 5.7mA).
A charger for a single cell (or a parallel cell battery) is conceptually quite straightforward. However, when the full requirements are considered it becomes obvious that a simple current limited precision regulator as shown above may not be enough. Many IC makers have complete lithium cell chargers on a chip, with most needing nothing more than a programming resistor, a couple of bypass capacitors and an optional LED indicator. One (of many) that incorporates everything needed is the Microchip MCP73831, shown below. Most of the major IC manufacturers make specialised ICs, and the range is vast. TI (Texas Instruments) makes a range of devices designed for full BMS applications ranging from a single cell to 400V batteries used for electric vehicles. Another simple IC is the LM3622, which is available in a number of versions depending on the end point voltage. A version is also available for a two-cell battery, but it lacks balancing circuitry, which makes it rather pointless (IMO).

Four termination voltages are available - 4.20V, 4.35V, 4.40V and 4.50V, so it's important to get the correct version for the cell type you will be charging. The constant current mode is controlled by R2, which is used to 'program' the IC. Leaving pin 5 ('PROG') open circuit inhibits charging. The IC automatically stops charging when the voltage reaches the maximum set by the IC, and will supply a 'top up' charge when the cell voltage falls to around 3.95 volts. The optional LED can be used to indicate charge or end-of-charge, or both using a tri-colour LED or separate LEDs. The status output is open-circuit if the IC is shut down (due to over temperature for example) or no battery is present. Once charging is initiated, the status output goes low, and it goes high when the charge cycle is complete. Note that this IC is only available in SMD packaging, and through hole versions are not available. The same applies to most devices from other manufacturers.
The charger shown is a linear regulator, so dissipates power when charging the cell. If the discharged cell voltage is 3V, the IC will only dissipate 300mW with a 100mA charge current. If increased to the maximum the IC can provide (500mA), the IC will dissipate 1.5W, and that means it will get very hot (it's a small SMD device after all). Should the cell voltage be less than 3V (deeply discharged due to accident or long term storage), the dissipation will be such that the IC will almost certainly shut down, as it has internal over-temperature sensing. It will cycle on and off until the voltage across the cell has risen far enough to reduce the dissipation to allow continuous operation. Switchmode chargers are far more efficient, but are larger, more complex, and more expensive to build.
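The dissipation figures above are just (input voltage − cell voltage) × charge current. A quick sketch - the 6V input is my assumption to match the quoted figures, and is not taken from the MCP73831 datasheet (the IC is more commonly run from 5V, which gives slightly lower numbers):

```python
# Linear charger pass-element dissipation: P = (Vin - Vcell) * Icharge.
# The 6 V input is an assumption chosen to match the figures in the text.

def pass_dissipation(v_in, v_cell, i_charge):
    """Watts dissipated in a linear charger's pass element."""
    return (v_in - v_cell) * i_charge

print(pass_dissipation(6.0, 3.0, 0.1))   # 0.3 W at 100 mA
print(pass_dissipation(6.0, 3.0, 0.5))   # 1.5 W at 500 mA - hot for a small SMD part
print(pass_dissipation(6.0, 2.5, 0.5))   # a deeply discharged cell makes it worse
```
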
Some controllers include temperature sensing, or have provision for a thermistor to monitor the cell temperature. ICs such as the LTC4050 will only charge when the temperature is between 0°C and 50°C when used with the NTC (negative temperature coefficient) thermistor specified. Others can be designed to be mounted so that the IC itself monitors the temperature. These are intended to be installed with the IC in direct thermal contact with the cell. The series pass transistor must be external to the IC to ensure that its dissipation doesn't affect the die temperature of the IC.

The current programming resistor is set for 10k in the above drawing, and that sets the charge current to about 100mA. The datasheet for the IC has a graph that shows charge current versus programming resistor, and there doesn't appear to be a formula that can be applied. A 2k resistor gives the maximum rated charging current of 500mA. As discussed earlier, a slow charge is probably the best option for maximum cell life, unless the cell is designed for fast charging. Unfortunately, the IC has a preset maximum voltage, and it can't be reduced to limit the voltage to a slightly lower value which will prolong the life of the cell. R1 allows about 2.5mA for the LED, so a high brightness type may be needed. R1 can be reduced to 470 ohms if desired.

For low current charging, there's probably no reason not to use an accurate 4.2V supply and a series resistor. The charge process will be fairly slow, but if limited to around 0.1C or 100mA (whichever is the smaller), a charge cycle will take around 15 hours. The resistor should be selected to provide the desired current with 1.2V across it (12 ohms for 100mA). There is little or no chance that the low current will cause any damage to the cell, and although it's a pretty crude way to charge, there's no reason that it shouldn't work perfectly well. I have tried it, and there don't seem to be any 'contra indications'.
While charging a single cell (or parallel cell battery) is fairly simple with the right IC(s), it becomes more difficult when there are two or more cells in series to create a higher voltage battery. Because the voltage across each cell must be monitored and limited, you end up with a fairly complex circuit. Again, there are plenty of options from most of the major IC manufacturers, and in many cases a dedicated microcontroller ends up being needed to manage the individual cell monitoring circuits.

There are undoubtedly products that don't provide any form of charge balancing, and these are the ones that are most likely to cause problems in use - including fire. Using lithium batteries without a proper balance charger is asking for trouble, and should not be done even in the cheapest of products. You might imagine that in a 2 cell series pack, only one cell needs to be monitored, and the other one will look after itself. This isn't the case though. If the cell that isn't monitored happens to have the lower capacity, it will charge faster than the other cell. It may reach a dangerous voltage before the monitored cell has reached its maximum.

The principle of multi-cell monitoring is simple enough in concept. It's only when you realise that fairly sophisticated and accurate circuitry has to be applied to every cell that it becomes daunting. Because cells are all at different voltages, the main controller needs level shifting circuits to each cell monitor. This may use opto-isolators or more 'conventional' level shifting circuits, but the latter are not usually suitable for high voltage battery packs.

Note: The circuits shown are conceptual, and are intended to show the basic principles. They are not designed for construction, and the ICs shown in 'A' are not any particular device, as the 'real' ICs used are often controlled by a dedicated microcontroller. There's no point sending me an email asking for the device types, because they don't exist as a separate IC. The idea is only to show the basics - this isn't a project article, it's provided primarily to highlight the issues you will be faced with when dealing with LiPo series cells.

There are two classes of cell balancing circuit - active and passive (both of those shown are passive). Passive systems are comparatively simple and can work very well, but they have poor power efficiency. This is unlikely to be a problem for small packs (2-5 series cells) charged at relatively low rates (1C or less). However, it's critical for large packs as used in electric bikes or cars, because they cost a significant amount of money to charge, so inefficiency in the BMS translates to higher cost per charge and considerable wasted energy.
I'm not about to even try to show a complete circuit for multi-cell balancing, because most rely on very specialised ICs, and the end result is similar regardless of who makes the chips. The system shown in 'A' uses a control signal to the charger to reduce its current once the first cell in the pack reaches its maximum voltage. The resistor as shown can pass a maximum current of 75mA at 4.2V, and the charger must not provide more than this or the discharge circuit can't prevent an overcharge. Each resistor will only dissipate 315mW, but this adds up quickly for a very large battery pack, and that's where active balancing becomes important.
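The 315mW figure comes straight from Ohm's law, assuming the 56 ohm bypass resistor implied by 75mA at 4.2V (the resistor value itself isn't stated in the text, so treat it as an inferred example):

```python
# Passive balancer bypass ('bleed') resistor arithmetic.
# R_BYPASS is inferred from the 75 mA at 4.2 V quoted in the text.

V_CELL_MAX = 4.2   # volts, fully charged cell
R_BYPASS = 56.0    # ohms (assumed from the 75 mA figure)

i_bypass = V_CELL_MAX / R_BYPASS          # maximum current the balancer can shunt
p_resistor = V_CELL_MAX ** 2 / R_BYPASS   # heat in each bypass resistor

print(round(i_bypass * 1000))    # ~75 mA - the charger must supply less than this
print(round(p_resistor * 1000))  # ~315 mW per cell, times N cells for a big pack
```
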
The implementation is very different for the devices from the various manufacturers, and depends on the approach taken. Some are controlled by microprocessors, and provide status info to the micro to adjust the charge rate, while others are stand-alone and are often largely analogue. The arrangement shown above ('B') is simplistic, but is also quite usable as shown. The three 20k pots are adjusted to give exactly 4.2V across each regulator. When balancing is in effect (at the end-of-charge), the available current from the charger must be less than 50mA, or the shunt regulators will be unable to limit the voltage. There is an important limitation to this type of balancer - if one cell goes 'bad' (low voltage or shorted), the remaining cells will be seriously overcharged!

However (and this is important), as with many other solutions, it cannot remain connected when the battery is not charging. There is a constant drain of about 100µA on each cell, and assuming 1.8Ah cells as before, they will be completely discharged in about 2 years. While this may not seem to be an issue, if the equipment is not used for some time it's entirely possible for the cells to be discharged below the point of no return.

Quite a few balance chargers that I've tested are in the same position. They must not be left connected to the battery, so some additional circuitry is needed to ensure that the balance circuits are disconnected when there's no incoming power from the charger. One product I developed for a client needed an internal balance charger, so a relay circuit was added to disconnect the balance circuits unless the charger was powered. See Section 8 for more details on this approach.

With any 'active zener diode' system as shown above, it's vitally important that the charger's output voltage is tightly regulated, and has thermal tracking that matches the transistors' (Q1 to Q3) emitter-base voltage. It would be easy for the charger to continue providing its maximum output current, but having it all dissipated in the cell bypass circuits. It also makes it impossible to sense the actual battery current, so it probably won't turn off when it should.
Battery and/or cell protection is important to ensure that no cell is charged beyond its safe limits, and to monitor the battery during discharge so it can be switched off if there is a fault (excess current or temperature for example) or if its voltage falls below the allowable minimum. Ideally, each cell in the battery will be monitored, so that each is protected against deep discharge. Li-Ion cells should not be discharged below 2.5V, and it's even better if the minimum cell voltage is limited to 3 volts. The loss of capacity resulting from the higher cutoff voltage is small, because lithium cell voltage drops very quickly when it reaches the discharge limit.
Because these circuits are usually integrated within the battery pack and permanently connected, it's important that they draw the minimum possible current. Anything that draws more than a few microamps will drain the battery - especially if it's a relatively low capacity. A 500mA/h cell (or battery) will be completely discharged in 500 hours (20 days) if the circuit draws 1mA, but this extends to nearly 3 years if the current drain can be reduced to 20µA.
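The trade-off described above is simple arithmetic - capacity divided by standby drain:

```python
# Time until a cell is flat from a constant standby drain.
# Ignores self-discharge, which only makes matters worse.

def runtime_days(capacity_mah, drain_ma):
    """Days until the protection circuit alone flattens the cell."""
    return capacity_mah / drain_ma / 24

print(round(runtime_days(500, 1.0)))           # ~21 days at 1 mA
print(round(runtime_days(500, 0.02) / 365, 1)) # ~2.9 years at 20 uA
```
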
Protection circuits often incorporate over-current detection, and some may disconnect permanently (e.g. by way of an internal fuse) if the battery is heavily abused. Many use 'self-resetting' thermal fuses (e.g. Polyswitch devices), or the overload is detected electronically, and the battery is turned off only for as long as the fault condition exists. There are many approaches, but it's important to know that some external events (such as a static discharge) may render the circuit(s) inoperable. Lithium batteries must be treated with care - always.

The drawing above shows a 3-cell lithium battery protection circuit. It doesn't balance the cells, but it does detect if any cell in the pack is above the 'overcharge' threshold, and stops charging. It will also stop discharge if the voltage on any cell falls below the minimum. Switching is controlled by the external MOSFETs, and the charger must be set to the correct voltage (12.6V for the 3-cell circuit shown, assuming Li-Ion cells).

These ICs (and others from the various manufacturers) are quite common in Asian BMS boards. The datasheets are not usually very friendly though, and in some cases there is a vast amount of information supplied, but little by way of application circuits. This appears common for many of these ICs from other makers as well - it is assumed that the user has a good familiarity with battery balance circuits, which will not always be the case. The S-8253 shown has a typical current drain of 14µA in operation, and this can be reduced to almost zero if the CTL (control) input is used to disable the IC when the battery is not being used or charged. The MOSFETs will turn off the input/ output if a cell is charged or discharged beyond the limits determined by the IC.

Battery 'fuel gauges' are often no more than a gimmick, but new techniques have made the science somewhat less arbitrary than it used to be. The simplest (and least useful) is to monitor the battery voltage, because lithium batteries have a fairly flat discharge curve. This means that very small voltage changes have to be detected, and the voltage is a very unreliable indicator of the state of charge. Voltage monitoring may be acceptable for light loads over a limited temperature range. It monitors self discharge, but overall accuracy is poor.
So-called 'Coulomb counting' measures and records the charge going into and out of the battery, and calculates the probable state of charge at any given time. It's not good at providing accurate data for a battery that's deteriorated due to age, and can't account for self discharge other than by modelling. Coulomb counting systems must be initialised by a 'learning' cycle, consisting of a full charge and discharge. Variations due to temperature cannot be reliably determined.
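As a concept sketch only (not any particular gauge IC's algorithm), Coulomb counting amounts to integrating the measured current over time:

```python
# Minimal Coulomb-counting sketch. The class and its names are illustrative;
# real gauges also model temperature, ageing and self-discharge.

class CoulombCounter:
    def __init__(self, capacity_mah):
        self.capacity_mah = capacity_mah   # learned from a full charge/discharge cycle
        self.charge_mah = capacity_mah     # assume we start fully charged

    def sample(self, current_ma, dt_hours):
        """current_ma > 0 is charging, < 0 is discharging."""
        self.charge_mah += current_ma * dt_hours
        self.charge_mah = min(max(self.charge_mah, 0.0), self.capacity_mah)

    def state_of_charge(self):
        return self.charge_mah / self.capacity_mah

gauge = CoulombCounter(1800)      # 1.8 Ah cell, as used earlier in the text
for _ in range(60):               # one hour of a 450 mA load, sampled each minute
    gauge.sample(-450, 1 / 60)
print(round(gauge.state_of_charge(), 2))   # 0.75 - a quarter of the capacity used
```
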
Impedance analysis is another method, and is potentially the most accurate (at least according to Texas Instruments who make ICs that perform the analysis). By monitoring the cell's (or battery's) impedance, the state of charge can be determined regardless of age, self discharge or current temperature. TI calls their impedance analysis technique 'Impedance Track™' (IT for short), and makes some rather bold claims for its accuracy. I can't comment one way or another because I don't have a battery using it, nor do I have the facilities to run tests, but it appears promising from the info I've seen so far.

This article is about proper charge and discharge monitoring, not state-of-charge monitoring. The latter is nice for the end user, but isn't an essential part of the charge or discharge process. I have no plans to provide further info on 'fuel gauges' in general, regardless of the technology.
The 18650 cell (18mm diameter × 65mm long) has become very popular for many portable products, and these are now readily available at fairly reasonable prices. They are not all equal of course, and many on-line sellers make rather outlandish claims for capacity. Genuine 18650 cells have a typical capacity ranging from 1,500mA/h (milliamp hours) up to 3,500mA/h, but fakes will often grossly exaggerate the ratings. I've seen them advertised as being up to 6,000mA/h, which is simply impossible. The highest I've seen is 9,900mA/h, and that's even more impossible, but no-one seems to care that buyers are being misled.
The 18650 cell is the mainstay of many laptop battery packs, with a 6-cell battery being fairly common. These may be connected in a series/ parallel combination to provide twice the capacity (in mA/h) at 11.1 volts. The battery enclosure contains the balancing and protection circuits, and the cells are not replaceable. This is (IMO) a shame, because it will always be cheaper to replace the cells rather than the entire sealed battery pack. However, the cells in these packs are generally of the 'tabbed' type, having metal tabs welded to the cells so they don't rely on physical contact to make the electrical connection. This means that it's not possible to make them 'user replaceable'.
One of the advantages of using separate cells is that many of the issues raised in this article can be avoided, at least to a degree. Being separate cells, they will normally be used in a plastic 'battery pack', typically wired in series. A set of four can provide ±7.4V nominal (each cell is 3.7V), and that's sufficient to operate many opamp circuits, including mic preamps, test equipment and most others as well. Recharging is easy - remove the cells from the battery pack and charge them in parallel with a designated Li-Ion charger. Provided the charger uses the correct terminal voltage (no more than 4.2V, preferably a bit less) and limits the peak charging current to suit the cells used, charging is safe, and no balancing is necessary.
As with all things, there are caveats. The circuitry being powered needs some additional circuitry to switch off the battery pack when the minimum voltage is reached. This is typically 2.5V/ cell, so the cutout needs to detect this fairly accurately and disconnect the battery when the voltage reaches the minimum. However, if you use 'protected' cells, they have a small PCB inside the cell case that will disconnect power if the cell is shorted, (usually) prevents over-charging, and (usually) provides an under-voltage cutout.
There's a catch though! While they still use the same size designation (18650), many protected cells are slightly longer. Some can be up to 70mm long, and they won't fit into battery compartments that are designed for 'true' 18650 cells. Others are the correct length, but have lower capacity, because the cell itself is slightly smaller so the protection circuit will fit. These cells also differ in the positive end termination - some use a 'button' (much the same as is seen on most alkaline cells), while others have a flat top. They are often not interchangeable.

Just to confuse the issue, there are also AA sized lithium cells (14500 - 14mm diameter × 50mm long). Because they are 3.7V cells, they are not 'AA' cells, even though they are the same size. You can also buy 'dummy' AA cells, which are nothing more than a AA sized shell (with wrapping like a 'real' cell) that provides a short circuit. These are used in conjunction with Li-Ion cells in devices intended to use two or four cells. One or two Li-Ion and one or two dummy cells are used, and most devices are quite happy with the result. My 'workhorse' digital camera is fitted with a pair of AA size Li-Ion cells and a pair of dummies, and it usually only needs recharging every few weeks (or even up to a couple of months if it's not used much). There is absolutely no comparison between the Li-Ion cells and the NiMh cells I used previously.

There are several ways that more 'traditional' Li-Ion batteries can be used safely. A project I worked on a while ago used a 3S Li-Ion pack (three series cells) with a nominal voltage of 11.1V. It was installed in the case along with the electronics, so removal for charging wasn't practical. A small balance charger was installed along with the battery, with the balancing terminals connected via relays. This was necessary because the balance circuits would otherwise discharge the battery. The cost of the balance charger was such that it wouldn't be sensible to try to build one for anything like the same money. Even getting hold of the parts needed can be a challenge!

By adding the relays and balance charger to the system, it was only necessary to connect an external supply (12V) to a standard DC socket on the back, and that would activate the relays and charge the battery. The relays dropped out as soon as the external voltage source was disconnected. This turned a potentially irksome task (connecting the charger and balance connector) into something that the 'average' user could handle easily. Those using the device would normally be (decidedly) non-technical, and expecting them to mess around with fiddly connectors was not an option. A photo of the arrangement I used is shown below. The battery normally used was rated for 1,500mA/h and could keep the data logging system running continuously for 24 hours. The charger could be plugged in or removed while the system was running.

The balance charger is designed specifically for 2S and 3S batteries, and cost less than $10.00 from an on-line supplier of various hobby batteries, chargers, etc. A diode is used to prevent the battery from keeping the relays activated when the charger supply is disconnected. Without the relay disconnection scheme used, the balance circuits would discharge the battery in a couple of days. The circuitry powered by the system shown had built-in voltage detection, and that was designed to turn everything off when the total supply voltage fell to around 8 volts. A fuse (½A) was included in line with the DC output as a final protection system, lest anything fail catastrophically on the powered circuitry.

In the photo you can see the balance charger board mounted above the relay and connector PCB. The LEDs were extended so they peeped out through the back plate, and the DC input connector is at the far left. The high-current leads from the battery aren't used in this application, because the current drain is so far below the maximum discharge rate. The two relays are visible on the right, and only three balance terminals are disconnected when external DC power is not present. The balance charger looks very sparse, but it has several SMD ICs and other parts on the underside of the board.

The circuit diagram shows how the system is connected. This is easy to do for anyone thinking of using a similar arrangement, and a small piece of Veroboard is easily wired with the relays and diodes. A diode is shown in parallel with the relay coils, and this is necessary to ensure that the back-EMF doesn't damage the charger circuit when the 12V input is disconnected. D1 must be able to carry the full charger input current, which for this example is less than 1A. All the complexity is in the balance charger - everything else is as simple as it can be. D1 prevents the battery voltage from being coupled back from the charger, so the relays will only be energised when external power is present. The fuse should be selected to suit the load. This circuit is only suitable for low current loads, because it doesn't use the battery's high current leads.

This is only one of many possible applications, and as described above, sometimes it's easier to use an 'off-the-shelf' charger than it is to build one from scratch. With other applications you may not have a choice, because 'better' chargers can become quite expensive and may not be suitable for reuse in the manner shown. For one-off or small production runs, using what you can get is usually more cost effective, but this changes if a large number of units is to be manufactured.

Sometimes you only need a single cell, and it may be uneconomical to get a dedicated charger. This is especially true if the Li-Ion cell is low-cost, but needs to be charged safely, possibly from a solar cell array or a 5V charger. Solar cell arrays are found in all manner of budget lighting, such as 'solar' path lighting and other similar products. I have an LED 'lantern' that's regularly used when I need to delve behind my computer system or anywhere else that doesn't get much light. When the original battery died (3 x Ni-MH cells) I went for this instead. The series diode scheme is intended for situations where you aren't too fussy about getting the cell to the full 4.2V, but it will reach 3.99V with 'typical' 1N4004 diodes. The main circuit just uses the diodes, with a transistor to disconnect them when the cell isn't being charged. Without D1 and Q1, the cell will be discharged to (about) 3V or so quite quickly, as the diodes will continue to conduct down to ~500mV. This is a true 'junk box' design, as it only uses parts that most people will have in stock.

A better scheme if you have to buy parts is to use a TL431 variable voltage reference. The trimpot (VR1) lets you set the voltage precisely, ideally to about 4.1V maximum. The transistor and D1 are still essential to disconnect the regulator when charging stops, or the cell will discharge through VR1, eventually becoming completely discharged. This will ruin the cell unless it has internal protection against over-discharge (some do, others don't). This circuit will win no prizes for accuracy, but it's cheap, and works quite well in practice.

Many appliances now use lithium based batteries, from household items (vacuum cleaners, massage guns, etc.) to recreational (e-bikes, scooters) and professional tools. These will almost always have an internal balance system, but often the current is somewhere between limited and very limited. Some people will be tempted to get a more powerful plug-pack charger power supply to replace the original for a faster charge. For example, I have a battery drill that uses a 17.5V, 1.7A external supply for charging. I know that the cells can take a great deal more, but using a higher current supply would be most unwise. Likewise, a small vacuum cleaner I own has a charger supply of 21.6V output at 300mA (nominally an 18V battery). Charging is (predictably) rather slow, but using a higher current supply would be a very bad idea.

The reason can be seen in Fig. 4. The balance circuits have limited bypass current, in this case about 75mA. That means that if one cell becomes fully charged (4.1V), the balance charger has to bypass current to that cell to prevent it from overcharging. There may be some (limited) leeway, but once one (or more) cells in the battery have deteriorated past a certain (highly unpredictable) point in the life-cycle, you are (perhaps literally) playing with fire if you use a power supply capable of more current than the balance circuit was designed for.

I don't know (and there seems to be little information available) how many battery fires are caused by this. Many websites will tell you to use only the power supply/ charger designed for the specific appliance, but they don't tell you why. The reason for this is fairly obvious, as most people don't understand electronics or battery technology, and will look at you blankly if you mention 'balance chargers'. Unfortunately, if you don't provide a valid reason (whether the user understands it or not), it may be ignored. Of course, some people will ignore it anyway.
While it might seem that charging at a higher rate 'saves time', even that is not necessarily true. When a cell/ battery is charged at a low rate (0.2C or so), once each cell reaches 4.1V it is most likely fully charged. When a higher rate is used (e.g. 1C), the battery needs to have a prolonged 'top-up' phase (see Fig. 1). It's apparent from the graph that if a cell is charged at 1C, its capacity will only be ~70% when it reaches 4.1V. For a 3,000mA/h cell, 1C is 3A, and the complete charge cycle will take about 3 hours. The great majority of that time is in the top-up phase [ 8 ].
Charging the same cell at 0.2C (600mA initially), the full charge cycle takes around 10 hours, but at the end of that time it will be fully charged, and not subjected to any stress. Internal temperature will not rise significantly (if at all), and you'll likely get longer life as a result. I suspect that there are very few Li-ion batteries sold now that don't have balancing circuitry included, because it's become very cheap to do so. ICs are expensive when made in small quantities, but when production is in the millions per year, the cost is (comparatively) insignificant.
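The arithmetic behind the comparison can be sketched as follows. The ~70% figure is taken from the text's reading of Fig. 1, not calculated; only the constant-current phase is simple arithmetic, as the 'top-up' (constant-voltage) phase depends on the cell:

```python
# Constant-current phase duration for the two charge rates discussed above.
# The CV 'top-up' phase time is not calculable this simply - it dominates
# at high rates, which is the point the text makes.

CAPACITY_MAH = 3000

def cc_phase_hours(c_rate, soc_at_cv):
    """Hours in the constant-current phase, ending at the given state of charge."""
    return (CAPACITY_MAH * soc_at_cv) / (CAPACITY_MAH * c_rate)

# At 1C the cell reaches 4.1 V at only ~70% capacity, so most of the
# ~3 hour cycle is spent in the top-up phase:
print(round(cc_phase_hours(1.0, 0.70), 2))   # ~0.7 h of CC charging

# At 0.2C the CC phase alone delivers essentially all of the charge:
print(round(cc_phase_hours(0.2, 1.0), 1))    # 5.0 h (the text quotes ~10 h in total)
```
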
The danger of over-charging is greatly increased with Li-Po cells. They lack the robust outer casing, and don't have a pressure vent to (hopefully safely) release internal pressure before the casing explodes.

Lithium cells and batteries are the current 'state of the art' in storage technology. Improvements over the years have made them much safer than the early versions, and it's fair to say that IC development is one of the major advances, since there is an IC (or family of ICs) designed to monitor and control the charge process and limit the voltages applied to each cell in the battery. This process has reduced the risk of damage (and/ or fire) caused by overcharging, and has improved the life of lithium battery packs.

In reality, no battery formulation can be considered 100% safe. Ni-MH and Ni-Cd (nickel-metal hydride & nickel cadmium) cells won't burn, but they can cause massive current flow if shorted which is quite capable of igniting insulation on wires, setting PCBs on fire, etc. Cadmium is toxic, so disposal is regulated. Lead-acid batteries can (and do) explode, showering everything around them with sulphuric acid. They are also capable of huge output current, and vent a highly explosive mixture of hydrogen and oxygen if overcharged. When you need high energy density, there is no alternative to lithium, and if treated properly the risk is actually very low. Well made cells and batteries will have all the proper safeguards against catastrophic failure.

This doesn't mean that lithium batteries are always going to be safe, as has been proved by the many failures and recalls worldwide. However, one has to consider the vast number of lithium cells and batteries in use. Every modern mobile phone, laptop and tablet uses them, and they are common in many hobby model products and most new cameras - and that's just a small sample. Model aircraft use lithium batteries because they have such good energy density and low weight, and many of the latest 'fad' models (e.g. drones/ quad-copters) would be unusable without lithium based batteries. Try getting one off the ground with a lead-acid battery on board!

It's generally recommended that people avoid cheap Asian 'no-name' lithium cells and batteries. While some might be perfectly alright, you have no real redress if one burns your house to the ground. There's little hope that complaining to an online auction website will result in a financial settlement, although that can apply equally to name brand products bought from 'bricks & mortar' shops. Since most (often unread and regularly ignored) instructions state that lithium batteries should never be charged unattended, it's a difficult argument. However, when the number of lithium based batteries in use is considered, failures are actually very rare. It's unfortunate that when a failure does occur, the results can be disastrous. It probably doesn't help that the media has made a great fuss every time a lithium battery pack is shown to have a potential fault - it's apparently news-worthy.

One thing is certain - these batteries must be charged properly, with all the necessary precautions against over-voltage (full cell balancing) in place at all times. Ensure that batteries are never charged if the temperature is at or below 0°C, nor if it exceeds 35-40°C. Lithium becomes unstable at 150°C, so careful cell temperature monitoring is needed if you must charge at high temperatures, and should ideally be part of the charger. Avoid using lithium cells and batteries in ways where the case may be damaged, or where they may be exposed to high temperatures (such as full sun), as this raises the internal temperature and dramatically affects reliability, safety and battery life.

As should be apparent, a single lithium cell is fairly easy to charge. You can use a dedicated IC, but even a much simpler combination of a 4.2V regulator and a series resistor will work just fine for a basic (slow) charger. Single cell (or multiple parallel cell) chargers can be obtained quite cheaply, and those I've used work well and pose very little risk. Even so, I would never leave the house while a lithium battery or cell was on charge. I have never personally had any problems with Li-Ion batteries or cells, and I use quite a few of them for various purposes. These are apart from the most common ones - phones, tablets and laptop PCs. Li-Ion chemistry has proven to be a far more reliable option compared to Ni-MH (nickel metal-hydride), where I recently had to recycle (as in take to a recycler, not 'cycle' the cells themselves) more than half of those I had!

When you need lots of power in a small, low weight package, with the ability to recharge up to 500-1000 times, there's no better material than lithium. If they are treated with respect and not abused, you can generally expect a long and happy relationship with your cells and batteries. They're not perfect, but they most certainly beat most other chemistries by a wide margin. There's a lot to be said for LiFePO4 (commonly known as simply LFP, LiFePO or LiFe), because they use a more stable chemical composition and are less likely to do anything 'nasty'. However, as long as they are not abused, Li-Ion cells and batteries are capable of a safe, long and happy life.

For a battery cutout circuit that will disconnect the battery completely when the voltage falls to a preset limit, see Project 184. This was designed specifically to prevent a damaging over-discharge if battery powered equipment is accidentally left turned on after use.
Elliott Sound Products - LM358/ LM324 Opamps
Everyone knows that the LM358 opamp (or the quad version, the LM324, which uses an identical internal circuit) can't be used for audio. You'll find countless forum queries and answers that tell you so, and in a way they are right. There are much better opamps available for sensible prices, and for the most part there's no good reason to use an LM358 in any audio circuit. However, this opamp has some useful characteristics, and it's very low power, which may well be just what you need. Note that the modification shown here means that the IC is no longer a true 'low-power' device, due to the extra current drawn by the resistor that's added to convert the output stage to Class-A.
The problem you'll face without modification is distortion, which can easily exceed 0.5% THD (total harmonic distortion). However, there is a way to use the LM358/ LM324 such that the distortion falls to the levels you'll get with 'audio class' opamps, and in some cases it may be less. The 'fix' is nothing more complex than a resistor! If it's selected properly, the opamp's output stage operates in Class-A, which (at least in theory) makes it very usable indeed. Interestingly, a couple of the application circuits in the datasheets show this extra resistor, but don't explain the reason it was included.

This does not make it a recommended device for audio though. It can be made to work very well with low distortion, but it's not particularly quiet (the opposite in fact), and it has a fairly leisurely slew-rate (i.e. it's not fast). However, if you happen to need a buffer or an extra 6dB or so of gain in a 'line level' circuit and have a spare ½ LM358 on a layout, it can be used without compromising the audio path. Any pretense at low power is lost though, because the Class-A current can be several milliamps - far more than the IC normally draws (about 500µA).

One of the more endearing features of the LM358 is its ability to operate from as little as ±1.5V up to ±16V. That's an operating range that is almost unheard of with most others, making it useful for a wide range of applications. By adding a resistor, we can improve it even further, by eliminating the inherent crossover distortion. While this is easily measured at 1kHz, it becomes clearly visible on a scope trace at frequencies above 10kHz or so.

The trick shown here isn't new - it's been demonstrated on several websites (which I found after I'd run my own tests), but it's hard to find the information if you don't understand the problem already. In addition, people have claimed for years that the same modification will 'improve' many other opamps, but that's generally completely untrue for any device that's characterised for audio performance. One would be ill-advised to use the technique described with an NE5532, OPA2134, LM4562 or any other very low distortion opamp. They don't need you to add anything, because they are already very well behaved, and have minimal distortion. Trying to force them into Class-A is more likely to increase distortion than improve matters.

The circuit itself is straightforward (compared to 'better' opamps at least), and it's designed specifically for low-current operation. There are four current sources/ sinks, numbered I1 to I4 on the drawing. The input stage is unusual, in that it allows the inputs to operate with a voltage that's up to 500mV peak below the negative supply voltage. This is (potentially) useful if the IC is used with a single supply, with the negative supply pin connected to ground. The LM358 is a dual opamp and the LM324 is a quad; both share the same internal circuit. The pinouts shown are for the LM358.
Figure 1 - LM358 (LM324) Internal Schematic (One Channel)
When you look at the schematic of the IC you can see why distortion is a problem. There's no bias network to ensure that Q6 and Q13 are 'pre-biased', and the output stage is Class-B. Crossover distortion is inevitable! You'd normally expect to see a bias transistor (or diodes) between the bases of Q5 and Q13, but in the LM358/ LM324 the bases are shorted together, so there is zero bias current for the output stage. I4 is a current sink which is intended to maintain conduction through output transistor Q6. However, with a rather measly 50µA, it doesn't take much signal level before it ceases to be effective. With a total load of 10k (the following load plus feedback network), the maximum signal level is only -500mV peak before I4 can no longer provide enough current to prevent distortion.
The solution is simple, and requires nothing more than the addition of one resistor. I took some measurements, and with 10k feedback resistors as shown in Figure 2, the distortion (without R4) was only a little higher than my oscillator's residual (<0.01%) with output voltage up to 400mV peak (280mV RMS). Above that, the distortion climbed rapidly. Adding the output 'Class-A' resistor from the output to the negative supply reduced the distortion back to the residual, with no evidence whatsoever of any 'excess' distortion. Of course, this is not a panacea, and doesn't magically convert the LM358 to a true 'audio class' opamp, but it does mean that it can be used in a non-critical area if needs be.

Q7 is the output current limiter, which sets the maximum output current to 40mA (typical), although it might be as low as 20mA according to the datasheet. It's obviously important to ensure that the extra current drawn by the added R4 doesn't push the peak current to any more than around 10mA with maximum positive output, or the current limiter will create a 'new' opportunity for distortion. The minimum value of R4 with ±15V supplies is 3.3k, but try to keep the value as high as you can, consistent with minimum distortion.
Figure 2 - Test Circuit With Class-A Operation
The resistor (R4, highlighted) needs to be selected so that it will always have some current flow. This is determined by the expected output voltage and the feedback network in parallel with the following load. If you expect to drive (for example) a 3.3k load (the following stage) with up to ±6V (4.25V RMS), then make R4 3.3k too. With ±15V supplies, the quiescent current through R4 will be about 4.5mA, and driving a 3.3k load the current through R4 won't fall below 2.7mA. That means that Q13 (the negative output transistor in the IC) is now redundant - it doesn't do anything. The positive output device (Q6) handles the full audio waveform, so is operating in Class-A for the full signal swing.
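The R4 arithmetic above is easy to verify. The sketch below reproduces the worked example from the text (±15V supplies, R4 = 3.3k to the -15V rail, a 3.3k following load, ±6V output swing); the variable names are mine, and the ~10mA peak-current target is the figure suggested earlier in relation to the internal current limiter.

```python
# Worked example of the R4 selection figures quoted in the text.
V_NEG = -15.0    # negative supply rail (V)
R4 = 3300.0      # Class-A resistor to the negative rail (ohms)
R_LOAD = 3300.0  # following load (ohms)

def r4_current(v_out):
    """Current through R4 (A) for a given output voltage."""
    return (v_out - V_NEG) / R4

i_quiescent = r4_current(0.0)    # ≈ 4.5mA with the output at 0V
i_min = r4_current(-6.0)         # ≈ 2.7mA at the -6V output peak

# Peak current Q6 must source at +6V: R4 current plus load current.
i_peak_q6 = r4_current(6.0) + 6.0 / R_LOAD   # ≈ 8.2mA, below the ~10mA target
```

As long as `i_min` stays comfortably above zero, Q6 conducts for the entire cycle and the output stage remains in Class-A; `i_peak_q6` confirms the current limiter isn't approached.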
Be careful though, because if the value of R4 is too low you'll cause excessive dissipation in Q6 of the opamp, which will lead to overheating and possible failure. Ideally the extra current will be just enough to handle the peak audio level expected with the load impedance you're using. As a result, this arrangement isn't recommended if you need to drive low impedance loads with any voltage greater than around 1V. It's up to you to work this out for yourself.

To show the crossover distortion, I first used a 10kHz sinewave, with the output adjusted for 2V RMS. The distortion can just be seen at 1kHz, but it's just a tiny glitch in the waveform. At 10kHz the distortion is much more visible because the opamp isn't fast enough to compensate. The distortion offset due to the 50µA current sink is clear, because the crossover distortion is shifted by -1V relative to the zero volt position. The internal current sink is almost exactly 50µA, as the total impedance of the feedback network is 20k (2 × 10k in series). 50µA with 20k is 1V, the exact offset you can see on the scope trace.
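The -1V offset can be checked with trivial arithmetic, as a sanity test of the scope observation above:

```python
# The internal 50µA sink flows through the 20k feedback network
# (2 × 10k in series), shifting the crossover point below zero.
I_SINK = 50e-6           # internal current sink (A)
R_FEEDBACK = 2 * 10e3    # feedback network impedance (ohms)

offset = I_SINK * R_FEEDBACK   # 1.0V, matching the shift seen on the trace
```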
Figure 3 - LM358, 10kHz Response
Although I created a simulator model of the LM358 using the IC schematic, the simulation results were not in line with my measurements, so the scope results are shown for the output signal (yellow trace) and distortion residual (violet trace). My distortion meter has a maximum sensitivity of 0.1% THD full scale, and it's possible to measure down to 0.01% with reasonable accuracy. The oscillator I used (Project 174) has a residual distortion that's well below my measurement limit. In fact it's so low that the distortion meter's output consists mainly of the fundamental, because the meter cannot null any further. This sets the lower limit for measurements at around 0.01% THD.
Figure 4 - LM358, Feedback Resistors Only (No Load)
Figure 4 shows the distortion without R4. It measured 0.31% on my distortion meter, with an output level of 2V RMS. The distortion meter's output is shown by the violet trace; it measures 760mV RMS, with a peak level of 2.2V (positive) and 3.2V (negative). The distortion itself is a nasty waveform, and the sharp spikes indicate a serious (and sudden) discontinuity. Although the distortion level might seem to be 'ok' (compared to some valve amps for example), the spiky nature of the waveform makes it very audible. This was (and is) a limitation of distortion measurements when the measurement is presented as a simple percentage, without the benefit of a waveform that lets you see the nature of the distortion. When this is omitted, you have no idea what to expect!
Figure 5 - LM358, With 'Class-A' Resistor Added (No Load)
Once the extra resistor (R4) is added, the distortion falls back to the residual of the oscillator and distortion meter. That shows as <0.01%, both at the input and output of the opamp circuit. The distortion waveform shown in Figure 5 is essentially identical to that from the oscillator, and consists primarily of the fundamental! There are no sharp discontinuities, only the residual fundamental plus some low-level harmonics. This indicates that the LM358 has contributed no measurable distortion (with the test equipment I have to hand) in this test. I also tested the circuit with a 2.7k load, and the distortion didn't change appreciably (it was a fraction higher, but remained 'benign').
Unfortunately, I'm not in a position to be able to afford an Audio Precision analyser, so my measurements are somewhat limited. However, it's a reasonable assumption that if the LM358 didn't contribute any excess distortion that I could measure, its actual distortion is well below 0.01%. Considering what came before, this shows that converting the LM358 to Class-A offers a benefit that belies the simplicity of the solution.
No matter what you do, the LM358 is never going to be an 'audiophile' opamp. However, by forcing its output stage to operate in Class-A, it is far better than most people give it credit for. It's no longer a 'low-current' opamp though, because the Class-A current needs to be greater than any expected load current. While it would (at least in theory) be 'better' to use a current sink in place of R4, that would add quite a few more parts, but without any tangible benefits.

The only reason you'd use an LM358 for audio circuitry is if there's no other choice (which is rather unlikely). However, if you happen to be stuck and have nothing else available, converting its output to Class-A is a workable solution. In case you were wondering, using a resistor from the output to the positive supply also works, but nowhere near as well. The PNP output transistor (Q13) has comparatively poor performance, so bypassing it with R4 gives much better results.

At one stage (the idea seems to have gone away for the most part), this mod was suggested for other opamps as well, with (completely unsubstantiated) claims that it would 'improve' performance. In the vast majority of cases with decent (low distortion) opamps, adding the resistor is more likely to make the performance worse, and especially for devices with a limited output current. The LM358 is a little different from many others, in that it can source up to 40mA during the positive part of the output seemingly without any noticeable stress.

The tests I did are not subjective, and require no BS explanation of how the mod will improve the 'sound stage' or any other parameter so beloved by the subjectivist brigade (to whom measurements are usually anathema). This is just simple, straightforward engineering, allowing an otherwise unusable opamp to perform well enough to be used in an audio circuit. It's also instructive in its own right, because it shows that a very basic opamp can still give very good results if you understand the problem properly in the first place.

You now have the ability to use that otherwise unused half of an LM358 in your project for something useful. With the addition of just one resistor, you can improve its distortion performance by at least an order of magnitude (×10), at a cost of only a few cents. One thing you do need to ensure is a very clean negative supply, as the opamp has great difficulty removing any supply noise passed through the added resistor.
Elliott Sound Products - Lock-In Amplifiers
I suspect the first question will be "WTF is a lock-in amplifier?". Fair question, and it's certainly not something that everyone needs. Indeed, most people will never have heard the term, so it's something that needs a good explanation. A lock-in amplifier (LIA) is designed to extract a usable signal from one that's otherwise almost all noise. Traditional lock-in amps have a DC output that's proportional to the 'buried' signal's amplitude. This may not be considered 'acceptable', but when you read an AC voltage on a meter, that's been converted to DC first anyway. We accept that (almost) without question, so there's no real reason to be suspicious of an instrument that simply produces a proportional DC signal.
Lock-in amps are a stock item in many laboratories, where there is often a requirement to measure signal levels that are so low that noise becomes the predominant factor. With a digital oscilloscope, it's sometimes possible to use the averaging function to see the waveform, but you need the scope to be triggered from the original (noise-free) input signal. Without that, the averaging process will fail because there's no fixed reference. This is demonstrated further below.
The LIA also uses averaging to obtain a DC voltage that represents the amplitude of the output, but without the noise. It's not just broadband (thermal) noise that will be removed - 50/60Hz hum and other unwanted frequencies will also be eliminated. It's common for lock-in amps to include notch filters to reject mains hum, but they (like all filters) introduce phase shift that can make the output voltage unpredictable. Actually, it is predictable, but only if you know just how much phase shift has been introduced so it can be compensated.
Early lock-in amplifiers were (predictably) all analogue, with the earliest versions using valves (vacuum tubes). The design is credited to Robert Henry Dicke (1916-1997), although this may be disputed. The first commercial LIA was developed by Princeton University and the Plasma Physics Laboratory in 1962. This was the model HR-8. One of the major suppliers today is Stanford Research Systems, but there are plenty of others.
You can buy a lock-in amplifier from eBay for around AU$200 (it was AU$100 until the price mysteriously doubled as this article was being written), and it's just a PCB with an AD630 (Balanced Modulator/Demodulator) from Analog Devices, plus a couple of opamps. The AD630 datasheet even includes the circuit for an LIA, and I suspect that the eBay version follows the circuit shown reasonably closely. The price seems high, but if you buy an AD630 from a distributor, that IC alone will cost roughly half the cost of the complete PCB. Is it any good? Quite frankly I was astonished - it's very good indeed! Similar (and identical) units are available from several other on-line outlets, invariably in China, at various prices.
I ran a test with a voltage divider that gave me 10μV (RMS) with a 2V input signal (a similar but slightly different attenuator was used for the scope capture), and I tested the module (after a calibration pot) at a number of frequencies, and down to 1μV input. The output is calibrated to provide 100mV DC for a 10mV RMS signal, and I used my low-noise test preamp with a gain of 60dB (×1,000) in front of the module - a total gain of 10,000 (80dB). With 10μV in, the signal after the preamp was full of 50Hz hum and broadband noise (see scope capture further below), and the output from the LIA was rock-steady at 98.7mV - that's 1.3% accuracy, measuring a 10μV signal. There's a zero-signal DC offset of about -100μV from the LIA, so your input signal must be high enough to make that irrelevant (I suggest ≥100mV of signal if possible).
If this happens to be something you desperately need (or just want), the module shown will take some beating. Obviously, I cannot vouch for specific eBay sellers nor make recommendations because things can be unpredictable, but if you get one that works properly you won't be disappointed. You will need a low-noise, high-gain preamp, and you need at least 60dB gain (switchable) for it to be useful. If there is sufficient interest I'll put together a project version of the complete system. The module I bought will be assembled into a case to become a complete (albeit basic) lock-in amp that I can use when I have very low voltages to measure. It won't be used very often, but it most certainly will be used!
The circuits shown below have been simulated, and they don't show everything in detail. The multiplier version is pretty easy, because it's only an 8-pin device and it's not ambiguous. The synchronous rectifier is trickier, because it's shown 'in principle' rather than a complete design. However, there is a version of the AD630 modulator/ synchronous rectifier shown as a complete circuit. That was adapted from an application note, and should work as shown. For anything else, you need to add amplifiers, DC offset correction, filters and phase correction to suit your specific application.
Mostly, if we need to measure a noisy signal we'd use a scope with averaging. My test setup used a 1V, 400Hz signal, with a direct feed to the scope's external trigger input. Triggering was set to use the external input, and I was very careful to ensure that triggering was 100% reliable. The signal was attenuated by a factor of 185k, using a 500k resistor feeding 2.7Ω. The voltage across the 2.7Ω resistor should be 5.4μV. The left-hand capture was done with a 4ms/ division timebase, the right at 1ms/ division. 50Hz (20ms period) is quite visible on the left capture. Note that averaging with a scope depends on the scope itself. Some do a poor job, even if set up properly.
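The attenuator figures quoted above are straightforward to check. This is just the standard voltage-divider formula applied to the 500k/ 2.7Ω pair; the variable names are mine.

```python
# Verify the attenuator used for the scope test: 500k feeding 2.7 ohms.
V_IN = 1.0      # 1V, 400Hz source
R_TOP = 500e3   # series resistor (ohms)
R_BOT = 2.7     # shunt resistor (ohms)

division = (R_TOP + R_BOT) / R_BOT          # ≈ 185,000 ("a factor of 185k")
v_out = V_IN * R_BOT / (R_TOP + R_BOT)      # ≈ 5.4µV across the 2.7Ω resistor
```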
The 5.4μV signal was passed through my low-noise test preamp (see Project 158), with the gain set to 1,000 (60dB). Predictably, the output from the preamp should contain 5.4mV of the wanted signal, plus vast amounts of 'stuff' we don't want. There's thermal noise aplenty, along with 50Hz hum. I quite deliberately didn't shield any of the outboard circuitry (two resistors plus scope probes and clip leads) because I wanted to see how much hum was picked up, even at such a low impedance. The non-averaged trace shows 25mV of noise, with 5.4mV at 400Hz hiding within after amplification. The 400Hz signal is (more-or-less) visible, but it's overwhelmed by broadband noise and 50Hz hum. Reading the voltage is not possible with all that noise.
The 5.4μV output is amplified by 1,000, but that leaves the signal obscured by noise and hum. Averaging makes the signal and its waveform visible, but if there are cycle-by-cycle waveform variations, they too are averaged, so what you see is not necessarily the signal as it really is. It might be, but you can never know for sure. This is similar to reading the voltage with a meter - you see only the amplitude, and have to guess at the waveform. This is why we use scopes, but they can't measure such tiny voltages without external circuitry and averaging.
[Scope captures: signal recovered by averaging - 4ms/ division (left) and 1ms/ division (right)]
The recovered waveform using averaging is more-or-less what one would expect, but the amplitude is a little low. I don't know if this is an artifact of the scope's averaging process, but I know that the reading should have been 5.4mV. The scope shows 5.2mV with 64 averages (which seemed to be the 'happy place' for this measurement). I think this is just acceptable, particularly when you look at what was retrieved after the ×1,000 preamplifier. It shows that the signal is present and at least close to the expected value, with most of the noise eliminated. This can't be considered a bad result when you look at what I started with. 5.4μV - that's not much! While averaging certainly works (provided it's available on your scope), there's a fair bit of messing around to get the triggering to be perfect, and you need to wait for the average to settle. Any disturbance during the averaging period causes it to mess up and start over, which can become tedious.
Measurements of such small voltages that are buried in noise will always be a problem, and the LIA is the most effective solution to date. Scope averaging works quite well with less noise, but the limitations are obvious. I set up a crude multiplier circuit to measure the voltage using the technique described below, and that gave a result of 5.46mV. I suspect that most of the error was due to DC offset - I included an offset adjustment pot, but setting it accurately is a chore (due to the integrator).
If you use a lock-in amplifier you have to be 100% confident that it has no DC offset, and that the displayed voltage is within expectations. A multiplier wired up on a bit of Veroboard with wires everywhere doesn't quite qualify (the multiplier board is for another project), but with a system that's set up properly you have the ability to measure voltages that would otherwise be quite impossible. My experiment wasn't quite an unqualified success, but it does show that the process works, even as a lash-up.
Using the Fig 0.1 module was significantly more successful than my test multiplier, even though there were still cables all over the place. On this basis, I'd have to recommend synchronous rectification over the multiplier approach, although there's no reason that a fully developed multiplier-based LIA can't be just as good. The two techniques are explained below.
The nice thing about AC is that it is AC, so at normal signal levels (and impedances) the results are generally unambiguous. If there is a DC offset it can be removed by using AC coupling on a scope or by adding a capacitor to the output of the circuit under test. When your signal is DC, there's no problem with normal supply voltages and small inaccuracies rarely matter much. When a measurement system provides a DC output to represent AC, DC offset becomes a real problem, especially at low levels. When your signal is only 5mV DC or so, otherwise inconsequential DC offsets can cause a very large error. An offset of only ±50μV with a 5mV signal is 1%, which may be your entire error budget!
Note: When taking a measurement with a scope or an LIA, the waveform from the preamp (including noise) must remain within the dynamic range of the scope's input preamp or the multiplier. If noise peaks exceed the allowable dynamic range (i.e. there's [internal] clipping), the reading will vary between wrong and terribly wrong! The scope is especially vulnerable, because when averaging is selected, you see the average, and the peaks are 'invisible'. You will probably be tempted to increase the gain to get a better waveform, and while you will see a higher level, it will be wrong. How do I know this? Predictably, my first scope tests gave me answers that were clearly incorrect, and the reason was realised fairly quickly. I tell you this so you don't make the same mistake.
While I have only covered low frequency operation, lock-in amps are available with frequencies up to 200MHz, although most are limited to 250-300kHz. The principles don't change, but phase alignment becomes critical. Many are also fitted with VCOs (voltage controlled oscillators) using phase locked loop techniques, and this allows for the measurement of harmonics. Many commercial instruments also include a quadrature detector, allowing phase to be measured. These are not covered here, so if you're interested you have lots of external reading to do.
Not all wanted signals are very low level, but that doesn't always mean that they're not noisy. The general principles work just as well with 1V as with 1μV (mostly better), but the most common use for lock-in amps is to extract a low-level signal from a comparatively large amount of noise. The 'noise' may not even be standard random ('white') noise, it can just as easily be a strong interfering signal or series of signals that are too difficult to filter out. In electronics (and to our ears) anything that we don't want to see or hear can be considered noise.
Noise is the enemy of all low-level measurements. It's a problem with both AC and DC, but AC is worse because the wanted signal is in amongst broadband noise, which spans a frequency range dependent on the measurement bandwidth. You can filter out the noise, but this can be difficult because narrow-band filters require a high Q (quality factor), and they have an extended settling time. Ensuring that the gain remains constant as the frequency is varied can also be a major challenge. The waveform is also changed, so what started as a squarewave will look like a sinewave if the filter is sharp enough. This isn't always a problem of course. DC is a little easier because an 'extreme' low-pass filter will remove most noise but allow the DC to get through. However, with DC you have other problems, notably opamp drift and other physical effects that create offset voltages that are often temperature dependent. Amplifying DC is covered in more detail in Section 5. Unfortunately, even comparatively small amounts of noise can make AC measurements difficult to interpret with accuracy.
Believe it or not, the next graph shows 70mV of 400Hz signal, 700mV of 50Hz hum and 1V of random noise (all values are RMS). The 50Hz component is quite obvious, as is the broadband noise. An LIA will have no difficulty extracting the signal (as a DC voltage) and almost complete rejection of everything else. This is almost impossible with any other method. Not being able to examine the wanted signal's waveform is a limitation, but it's not a game-changer. Having confidence that the DC output accurately represents the signal voltage is far more important for many measurements that are undertaken under extreme conditions. This is the signal I used for some of the simulations. The total voltage is 1.4V RMS.
In almost all cases, the reference and the wanted part of the 'dirty' signal are (or are assumed to be) sinewaves. You can't use a lock-in amp to look for waveform distortion through the DUT, because it can't be done. The output is DC, and while another multiplier can reproduce a sinewave of the same (or amplified) voltage as the input, it's fake. It may look the part, but the actual waveform may be badly distorted, and you won't know. This is where using a scope with averaging can save the day - provided the distortion is consistent from cycle to cycle over the averaging period (which may be from 2 to 1024 waveform cycles).
Unlike 'normal' voltage measurements where we can see the output immediately (or close to it), any system that uses averaging will take a long time. Greater accuracy is obtained by taking more averages, so you may not get a steady signal for 5 seconds or more. These are not everyday techniques though, and if you have 1μV of signal it must be amplified first so the multiplier has enough signal to work with, and in this case you'd need at least 100dB of gain (×100k). That gives a signal level of 100mV, but expect at least 31mV of noise from the amplifier (assuming a 'perfect' amplifier with 100kHz bandwidth and an input noise of 1nV√Hz), plus noise generated by the signal source and picked up from the surroundings. The total noise could easily exceed a couple of volts. Using more amplification to get a better signal level is advised, but of course that will also increase the noise.
Consider an amplifier with an equivalent input noise (EIN) of 1nV√Hz. If the bandwidth is limited to 100kHz and the gain is 60dB (×1,000), the output noise of the amplifier alone is ...
Input Noise = EIN × √BW = 1nV × √100k = 316nV
Output Noise = Input Noise × Gain
Output Noise = 316nV × 1,000 = 316μV
This is only for the amplifier (and 1nV√Hz is a very good noise figure). Add to this the noise from your external circuit, noise picked up from nearby switching power supplies (including LED lighting), 50Hz magnetic fields from linear power supplies, and general noise that's inevitable unless you have a very expensive Faraday cage at your disposal. If you're trying to measure even a 10μV signal, it's quite apparent that noise will dominate (see Fig 0.2A). My test preamp has less than 1.2mV of output noise with a 50Ω input termination and 60dB gain. Its bandwidth is restricted to about 50kHz (theoretical noise is ≈224μV). All external circuitry adds noise, including resistors. For an in-depth article on the topic, see Noise In Audio Amplifiers.
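The same calculation can be wrapped in a few lines of code, reproducing both the 100kHz worked example above and the ≈224µV theoretical figure quoted for my test preamp (1nV√Hz assumed, 50kHz bandwidth, 60dB gain). The function name is my own.

```python
import math

def output_noise(ein_nv_rthz, bandwidth_hz, gain):
    """Amplifier output noise (V RMS) from EIN (nV/√Hz), bandwidth and gain."""
    input_noise = ein_nv_rthz * 1e-9 * math.sqrt(bandwidth_hz)
    return input_noise * gain

n_100k = output_noise(1.0, 100e3, 1000)  # ≈ 316µV (worked example above)
n_50k = output_noise(1.0, 50e3, 1000)    # ≈ 224µV (test preamp, 50kHz BW)
```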
Noise does not add algebraically because it is random. When there are two (or more) noise sources, we use the square root of the sum of the squares. So ...
Noise = √( N1² + N2² + ... + Nn² )   For example ...
Noise = √( 1V² + 1V² ) = 1.414V
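The root-sum-of-squares rule is a one-liner in code. The function name is my own; it applies to any number of uncorrelated sources.

```python
import math

def total_noise(*sources):
    """Combine uncorrelated noise sources: root of the sum of the squares."""
    return math.sqrt(sum(n * n for n in sources))

combined = total_noise(1.0, 1.0)   # 1.414V, as in the example above
```

Note that this only holds for uncorrelated (random) sources; correlated signals add algebraically.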
As already discussed, the average value of noise is zero, but to eliminate DC errors capacitor coupling is recommended. The averaging process means that you have to expect it to take time before you get a usable result. For the graph shown at the beginning of this article, a usable output isn't reached for 5 seconds. A great deal depends on low frequency noise (aka 1/f or flicker noise). By its very nature, this type of noise increases as the frequency is reduced. It's always a part of the noise signature of opamps, and the corner frequency (where 1/f noise transitions to broadband noise) depends on the fabrication of the device. It's typically around 50-60Hz, but it can extend to 200Hz or more with some devices.
This is a pain, because it's precisely the kind of noise we don't want when integrating to obtain a DC level. Being 1/f noise, as the frequency is halved, the amplitude doubles. If you happen to need very high gain after your sensor, you can use a 'chopper' (aka zero-drift) opamp. This will effectively eliminate 1/f noise, but only that from the opamp itself. External 1/f noise is amplified as normal.
'Atmospheric' noise is a far bigger problem now than it used to be. This is due to the multiplicity of switchmode power supplies (SMPS) that power almost everything. Equipment like oscilloscopes and the like are carefully shielded internally, but problems will always arise with probes and other wiring. In theory, all SMPS have passed the required tests for conducted and radiated emissions, but the proliferation of low-cost Asian products means that we know that at least some will never have been tested and will generate far more noise than they should. Other sources include general background noise that exists everywhere, lightning, the neighbour's lawn mower, etc., etc. In short, noise is everywhere, and techniques to eliminate (or at least minimise) its influence are very useful. Enter the Lock-in amplifier.
Depending on the lowest voltage you expect to try to measure, you may (just) get away with an external gain of 1,000 (60dB), but you're more likely to need anything from 10,000 (80dB) to 100,000 (100dB). Then there's the LIA's input gain of 10, so you could easily have a total gain of 1,000,000 (1 million, or 120dB). The input noise (EIN) of the first stage will dominate, so if you have an EIN of just 1nV√Hz, that translates to 141mV of noise with 120dB of gain (assuming just 20kHz bandwidth). That should allow you to measure a signal level of just 100nV (0.1μV). Maintaining the stability of a preamplifier with a gain of 120dB will be a challenge!
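The gain-chain noise budget described above is easily reproduced. This sketch assumes the figures given in the text (EIN of 1nV√Hz, 20kHz bandwidth, 120dB total gain); everything else is just the decibel and bandwidth arithmetic.

```python
import math

# Noise budget for the full chain: preamp plus the LIA's input gain of 10.
EIN_NV_RTHZ = 1.0       # equivalent input noise, nV/√Hz (assumed)
BANDWIDTH = 20e3        # measurement bandwidth (Hz)
GAIN_DB = 120.0         # total gain: 100dB external + 20dB LIA input

input_noise = EIN_NV_RTHZ * 1e-9 * math.sqrt(BANDWIDTH)  # ≈ 141nV
gain = 10 ** (GAIN_DB / 20)                              # 1,000,000
noise_out = input_noise * gain                           # ≈ 141mV at the LIA
```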
The signal - including noise - must not clip! We may tend to think that the wanted signal is simply 'hiding' inside the noise waveform, but that's not the case at all. With any electrical waveform, there is one (and only one) voltage present at any given time. The wanted signal is 'riding' the noise, so if the noise is clipped, so is our wanted signal. This is easy to see with just two frequencies, but it's harder to visualise with broadband noise. So, the noise signal must not be clipped, and this will often set the limit for the lowest input voltage that can be measured. Using filters will reduce the noise, but will also add phase shift, so care is needed to ensure that any phase shift is equal but opposite. Phase shifts can then cancel, without attenuating the signal (2-octave spacing above and below the test frequency is the minimum for 2nd order filters).
The basic operation isn't too difficult to understand. There are two main approaches - a dedicated phase-sensitive detector or a multiplier. Both achieve the same result, but the multiplier is (marginally) easier to understand. Multiplier ICs such as the AD633 are cheaper (although still expensive for an IC - around AU$30 each), and the description that follows is based on this IC.
When two signals of the same frequency (and phase) are multiplied together, the output is always a positive value. Their amplitudes can be quite different, and that doesn't affect the outcome, but it does alter the relationship between the (signal) input level and the multiplier's output. If the signal to be measured is DC, it's standard practice to modulate it, because an LIA can't work effectively with DC inputs. The reference signal can be derived from (or used to drive) the modulator, so the two remain synchronised. For photodiode applications, a 'chopper' wheel is often used to modulate the light source (it is what it sounds like - a wheel with cutouts to 'chop' or modulate the light beam used to illuminate the photodiode).
NOTE: This is not a project, although if constructed as described it will work. There's probably very little requirement for most hobbyists to have to extract signals buried in noise. I've had to do it on occasion, but not for any audio project. Having said that, it's still interesting, hence this article.
One important point needs to be understood. The average of any AC-coupled waveform, no matter how simple or complex, is always zero. This is why audio equipment (for example) should always be AC coupled, using capacitors or transformers to remove the DC component. If the average value of a waveform is zero, then an extreme low-pass filter will leave only the average value - zero. This extreme low-pass filter is used at the output of lock-in amplifiers.
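This is easy to demonstrate numerically. The sketch below (Python/NumPy, purely illustrative) averages an arbitrary mix of sinewaves with no DC component, and the result is zero regardless of the frequencies and amplitudes chosen:

```python
import numpy as np

fs = 100_000                      # sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)     # one second of 'signal'

# An arbitrary mix of sinewaves with no DC term
waveform = (np.sin(2 * np.pi * 400 * t)
            + 0.5 * np.sin(2 * np.pi * 50 * t)
            + 0.2 * np.sin(2 * np.pi * 1337 * t))

print(f"average over one second: {waveform.mean():.9f} V")   # effectively zero
```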
The above is a small sample, and it's easy to show the waveforms of any number of frequencies. Each of those shown has a long-term average of zero, having been passed through a capacitor which as we know does not pass DC. The important part is 'long-term', as several waveforms (especially if asymmetrical like the one in the centre) can have a DC component that takes time to settle.
A simulation of the basic circuit shown in Fig 1.3 was done, using an input of 100mV at 500Hz, 1V of random noise with a 100kHz bandwidth, plus 1V of 50Hz. The output is amplified by 10 to get a 500mV nominal output level. There are six samples, and each is different because the noise is random. After 5 seconds, the output is passably stable and the measurement is accurate to about 10%. A much longer averaging time will improve that, but the simulation then takes too long to produce a result. If the averaging time is extended by a factor of 10, you can expect the error to be reduced, but it's not a linear relationship. Waiting for 30 seconds to get a reading sounds alright, unless you need to perform perhaps hundreds of measurements!
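The process can be sketched in a few lines of Python (NumPy), multiplying the noisy input by the reference and taking the long-term average. This is only an illustration of the principle - it ignores the AD633's internal divide-by-ten and the output gain stage - but it recovers the 100mV signal from noise that's ten times larger:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 200_000
t = np.arange(0, 5.0, 1 / fs)      # 5 seconds, as in the simulation above

f_sig = 500
signal = 0.1 * np.sin(2 * np.pi * f_sig * t)     # 100mV wanted signal
hum = 1.0 * np.sin(2 * np.pi * 50 * t)           # 1V of 50Hz
noise = rng.normal(0, 1.0, t.size)               # ~1V of broadband noise
dirty = signal + hum + noise

reference = 2.0 * np.sin(2 * np.pi * f_sig * t)  # clean 2V reference

# Multiply, then take the long-term average (the 'extreme low-pass filter')
recovered = (dirty * reference).mean()           # ≈ 0.1 (i.e. 0.1 × 2 / 2)

print(f"recovered level: {recovered * 1e3:.1f} mV")
```

Run it with a different random seed and the result shifts by a few millivolts, mirroring the sample-to-sample variation described above.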
The waveforms shown below used an EIN (equivalent input noise) of 5μV with a 1μV 400Hz signal. The signal was amplified by 120dB (1,000,000), and filtered with 2nd order filters at 10Hz and 25kHz. A usable average is reached in 800ms, and the fluctuations due to noise are less than 2% - an overall accuracy that is unthinkable using any other method. If the input voltage is reduced to 100nV, the accuracy does suffer, but you should be able to get a measurement within 5%. That can be improved by using closer filters (set for 100Hz and 1,600Hz for example) and a longer averaging period.
The noise signals (random noise plus 50Hz) aren't affected by a phase-sensitive detector (unlike the signal) because the frequencies are all different. They remain effectively random, uncorrelated and retain their zero voltage average. Noise is only amplified when the reference voltage is non-zero (any voltage multiplied by zero is zero). The noise is modulated by the reference signal, but doesn't change its average value because the two signals are unrelated. Note that expecting to retrieve a signal at 50/60Hz is unrealistic, because the signal and mains hum will correlate when they are in phase (and this will happen). This includes test frequencies that are within 10% of the mains frequency (you'll still get interference, but the average remains correct).
Note that the deviations seen in Fig 1.2 are somewhat exaggerated due to the characteristics of the simulator's noise generator. In what's laughingly known as 'real life' you will still see deviations, but hopefully they won't be as extreme. A great deal depends on the low frequency content, and at least some of it can be removed with a filter (see below for more details about the use of filters). Restricting the bandwidth (to around 20kHz for example) will also have a large impact on the noise, and will help greatly for low-level measurements.
In operation, an LIA is only usable for tests where an input signal is used to drive other circuitry, the output of which is very low amplitude. When I say low amplitude, we are referring to an output signal that may be well below 1μV, making it very hard to measure because thermal noise (as well as other man-made noise) will completely swamp the signal we wish to examine. This is something I've had to do, but it was for a project for a university. I don't have a lock-in amplifier, but the university does, and much of the original data I had to work with was measured using it.
Needless to say, this elevated my curiosity to the point where I had to know more, so that's why this article was written. I have no idea how many people will be interested, but gaining some 'new' knowledge can never be a bad thing. Apart from anything else, while you may not need to know any of this now, there may come a time in the future where you find yourself with a signal that's embedded in noise. That's exactly what an LIA is for.
As already noted, you can't use an LIA to resolve a signal that has no reference. They can only be used where your circuit/ device under test requires an input signal, and has an output that is the direct result of the input you provide. An example might be a LED driver (input) and a photo-diode (output), and indeed this is one of the areas where they are commonly used. The output from photo-diodes (and other photo-detectors) can be tiny, and noise will be a serious problem for measurements.
A lock-in amp has two inputs, one for the output of your signal source (the reference), and one for the output of the device under test. It's the reference that makes all the difference, in the same way that triggering a scope from a signal generator and using averaging for the DUT's output can remove much of the noise (this also provides a clue as to how you can often measure noisy signals with good accuracy without an LIA). The integrator shown has a -3dB frequency of about 0.6Hz.
The principle is deceptively simple. The two signals are applied to the inputs of a multiplier. One input is 'clean', direct from the signal generator, and the other is 'dirty', with broadband noise, hum (50/ 60Hz) and other noises. Of the multiplicity of different frequencies at the second input of the multiplier, only one is at the same frequency as the reference, with no (or minimal) phase shift. That signal is your circuit's output. The voltage can be as low as a few nanovolts, or it may be a current that's converted to a voltage with a transimpedance amplifier. The amplifier shown in Fig 1.3 could have a gain of 1,000 (60dB) or more, depending on the output signal from the circuit under test.
A system I worked with had an output current of around 120pA (yes, picoamps), and produced an output of about 150mV. Transimpedance amplifiers (V/I) are quoted as having a gain of volts per amp, and this had a gain of (and no, this isn't bullshit) 1.23GV/A. Without an ultra-quiet opamp for the current to voltage converter (transimpedance stage) and severe filtering to remove out-of-band noise, the output was pretty much unusable. The only way to get a reliable result was to use a lock-in amplifier.
At the wanted signal frequency, the multiplier has only two signals that are in-phase and at the required frequency, and these are multiplied together (effectively squared). When a signal is squared, it has one polarity - positive! All other frequencies are uncorrelated with the input signal except for instances of time when they just happen to coincide. These 'coincidences' are random, and if averaged, they will eventually cancel.
But what of the noise? It passes through the multiplier, but it's not squared, so will still have an average value of zero. All AC waveforms that are capacitively coupled have an average of zero, regardless of signal 'complexity'. 50/ 60Hz hum is also uncorrelated, and it also has an average value of zero. In short, any input that is not at the same frequency and phase as the reference is subjected to averaging, and has an average of zero volts. Only the wanted signal can pass.
For visibility, the 'noise' is a 50Hz sinewave and the signal is a 400Hz sinewave at 100mV. The reference voltage is 2V (400Hz), as this conveniently removes the 'divide by two' action of the multiplier. The 800Hz signal is a sinewave, but it's not symmetrical around 0V; it's symmetrical around the average DC - which is 100mV in this example.
If we take our wanted signal (400Hz) with the noise (50Hz) and multiply that by the 400Hz reference signal, we get several frequencies at the output. Multiplication causes sum and difference frequencies to be generated, so we see 350Hz, 450Hz, 800Hz and (most importantly) 0Hz (DC). 400Hz multiplied by itself will output the sum (800Hz) and difference (0Hz). This is shown in the next graph. The output requires scaling so that the DC voltage represents the RMS value of the input signal. With a multiplier, the output might be divided by 1.414 to display the RMS value of 100mV peak (70.7mV).
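The sum and difference frequencies are easy to verify with an FFT. The following Python/NumPy sketch multiplies the 400Hz signal (plus the 50Hz 'noise') by a 400Hz reference; the only components in the product are DC, 350Hz, 450Hz and 800Hz:

```python
import numpy as np

fs = 10_000
t = np.arange(0, 1.0, 1 / fs)      # one second gives 1Hz FFT resolution

dirty = np.sin(2 * np.pi * 400 * t) + np.sin(2 * np.pi * 50 * t)
reference = np.sin(2 * np.pi * 400 * t)
product = dirty * reference

spectrum = np.abs(np.fft.rfft(product)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Only DC (0Hz), 350Hz, 450Hz and 800Hz have significant energy
peaks = freqs[spectrum > 0.1]
print(peaks)    # → [  0. 350. 450. 800.]
```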
The DC value is directly proportional to the wanted signal and the reference, so if the signal is at 100mV and the reference is 2V (both peak values), the product is 200mV, and the recovered (average) signal is half that, or 100mV. This is a DC output, because the next stage takes the average by filtering out everything that isn't DC (clearly not possible, but we can get close).
This leaves one signal out of all the noise that has a non-zero value, and that's the wanted signal. Unfortunately, the process of squaring a waveform means that the effective amplitude is halved. For example, a 1V peak signal swings ±1V (2V peak-peak), but if squared, the output is only 1V p-p (you can prove that with a calculator). The output signal has double the frequency, but half the amplitude. Importantly, it is always of a single polarity - positive ((-1)² = +1).
If the noise (in all forms) has an average value of zero, then an ultra-low-pass filter (having a time constant of at least one second, preferably more, signal frequency dependent) will remove all the random 'stuff' leaving you with a DC offset that's determined solely by the multiplier acting upon the signal and reference frequencies. The secret is the integrator - it must be slow enough to remove random fluctuations at the lowest limit of your measurement bandwidth.
You also need to be aware of any phase shift within your test circuit. If the phase is displaced, the wanted signal is attenuated. A rough formula is as follows ...
Vout = 1/2 × Sig × Ref × cos( Φsig - Φref )
Make sure your calculator is set for degrees, not radians, or the formula doesn't make sense (yes, I got caught). We need the reference and the signal to be in phase, as closely as possible. A phase shift network is often used to get the best alignment (highest output level), and this will also be necessary if filters are included to minimise the noise. These may be band-pass, low-pass and high-pass, or band-reject (notch) filters, but they will all affect the phase unless widely spaced from the test frequency. This makes the use of filters less useful than they would otherwise be, unless phase compensation is included.
Small phase differences are of no great consequence. If you include a filter that causes a 5° lead or lag, you can expect an error of less than 0.5%. Should the lead or lag reach (say) 15°, the error increases to about 3.6%. Given that there will nearly always be perturbations caused by LF noise, this is likely to be fine in practice. A 45° phase shift will cause an error of 30%, which is quite unacceptable but should not be unexpected.
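These error figures come straight from the cosine term in the formula above. A trivial Python check gives values close to those quoted:

```python
import math

# Measurement error (%) caused by a phase misalignment Φ: output ∝ cos(Φ)
errors = {phi: (1 - math.cos(math.radians(phi))) * 100 for phi in (5, 15, 45)}
for phi, err in errors.items():
    print(f"{phi:2d} deg -> {err:.1f}% error")
```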
Consider that a multipole filter (high or low pass) and many circuits will cause a phase displacement of hundreds of degrees, but if the signal and reference are in phase (at the inputs to the multiplier), the gross phase shift is of no consequence. A 4th order filter generates a total phase shift of 360°, or 180° at the -3dB frequency.
There is a catch if you use a multiplier such as the AD633 (most others have the same 'catch'). The output is determined by the formula Out=X×Y/10, with the divide by 10 included to prevent internal overload. However, that means that if you have an input voltage of 5μV, amplified by 60dB (1,000), the multiplier's input voltage is only 5mV for the signal. With a 2V reference (all voltages are peak), the maximum output will be 5m×2/10, or 1mV. The average is only 500μV DC, so you are working at the lower limit of the multiplier's linearity. Ideally, the input gain will be at least 10,000 (80dB) and the gain stage will have a noise output of at least 2.5mV. So, while a multiplier works as an LIA, it's probably not the best solution. The tests I performed all worked, but DC offset proved to be a serious problem, even at moderate levels.
Using a multiplier is simple, but many lock-in amps use a different technique that is (at least in theory) better from a DC offset perspective. The result is much the same, but it's achieved differently. The module shown at the beginning uses an AD630 balanced modulator/ demodulator, and this uses a squarewave to alternately reverse the polarity of the wanted signal. When the input is positive (referred to the reference voltage) it's switched straight through, and when negative, it's inverted before being switched. The result is basically a synchronous full-wave rectifier (aka a synchronous demodulator), and the output is very similar to (but slightly different from) the output of a multiplier.
There is a significant difference between a multiplying lock-in and a synchronous rectifier lock-in amp. With a multiplier, you can change the gain of the lock-in amp itself by varying the reference voltage. You must ensure that there's no internal distortion or the result will be wrong. This has no effect with the switching/ synchronous rectifier version, because the reference is only used for switching and the amplitude doesn't change the output level. It must be high enough to get reliable switching, but it cannot affect the rectified output amplitude.
I thought this would be a little harder to simulate, but as it turned out the simulation was much easier than I expected. It gave results exactly according to the theory, which is always a bonus. Because of the synchronous rectification, only an input with the same frequency and phase as the reference signal is rectified properly, and everything else remains random (at least as far as the synchronous rectification process is concerned).
The red trace shows the result when the signal and reference are in phase, and the average is 0.641 'unit'. With a 90° phase shift between the signal and reference, you get the green trace, and its value is 0.447 unit. The red trace is correct, and the green trace shows a serious error. The importance of this will become clear when you read through Section 5.
As with the multiplier, the wanted level is expressed as a DC voltage, but there's one small difference. The average value of a full-wave rectified sinewave is 63.7% of the peak, not 50% as we obtained with the multiplier. There are still sum and difference frequencies (0Hz, 350Hz, 450Hz) as well as the 800Hz ripple we expect with a full-wave rectified 400Hz sinewave. However, the signal is switched, so we get a whole slew of additional frequencies, extending to several hundred kHz. These are of no consequence because they are all removed by the integrator.
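The synchronous rectifier is simple to model: the signal is multiplied by ±1 depending on the polarity of the reference. A minimal Python/NumPy sketch (in-phase case only) shows the 63.7% (2/π) average:

```python
import numpy as np

fs = 100_000
t = np.arange(0, 1.0, 1 / fs)

signal = np.sin(2 * np.pi * 400 * t)       # 1V peak wanted signal
reference = np.sin(2 * np.pi * 400 * t)    # in phase with the signal

# Synchronous rectification: invert the signal whenever the reference is negative
rectified = signal * np.sign(reference)

print(f"average: {rectified.mean():.3f}")  # ≈ 0.637 (2/π) of the peak
```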
All other factors are the same as with a multiplier configuration. The AD630 is an expensive device (local suppliers want over AU$90 for the DIP version), but the same results can be obtained with lower cost circuitry - at least in theory. CMOS analogue switches are perfectly capable of synchronous rectification, but of course there are more ICs involved, and DC offset may be a problem. This can be balanced out with the AD630, but it's likely to be harder with a semi-discrete circuit.
The primary difference is the reference voltage. With a multiplier, it's typically a sinewave, but the synchronous rectifier requires a squarewave, with the waveform changing polarity at the exact zero-crossing point of the reference sinewave. This is easy to achieve, and it's performed within the AD630 IC.
The above circuit is adapted from an application note (AN683 - Strain Gauge Measurement Using AC Excitation), with a number of simplifications so the intent is clear. The way it works is identical to the arrangement shown in Fig 2.1, with the only difference being that the signal inversion, switching and squarewave generation are all inside the AD630. This simplifies the circuit, but as noted above, at considerable cost. However, compared to a commercial LIA it's still a bargain (although you can't expect equivalent performance).
The strain gauge is driven with a 400Hz signal which is also used to synchronise the AD630. A differential amplifier provides a gain of 1,000 (60dB), the output of which goes to the AD630. The 3-stage averaging circuit removes (most) noise, leaving a DC signal that represents the unbalance of the strain gauge. A strain gauge can output positive (tension) or negative (compression), and either can be detected easily. Note that supply connections and decoupling caps are not shown, but are obviously necessary.
The final integrator is most often a series string of three to five R/C networks, and it's important to use film capacitors. You could (maybe) get away with low-leakage electrolytic caps, but film caps are a better choice overall. There are conflicting requirements, in that you need good averaging, the impedance should be low and the cap value(s) need to be 'sensible'. High-value film caps are large and expensive, so you don't want to have to use more of them than you must.
A multi-stage passive integrator is more effective than a single stage (with higher ultimate HF rolloff, -51dB at 100Hz), but the overall resistance is higher. The integrators I used are shown as 5.1k and 2.2μF, but there's no reason that you can't use 5.6k resistors. The -3dB frequency is 2.76Hz. If you were to use a single stage, the capacitance must be larger than expected, and HF rolloff is poor. With 5.1k and 12μF the -3dB frequency is the same as the 3-stage network, but it's only 31dB down at 100Hz.
In the circuits I've shown, a simple 3-stage integrator is included, but this is not the ideal way to remove the AC component. It's worthwhile to follow the synchronous rectifier (or multiplier) with a (very) basic R/C integrator to remove high frequency components, but a 'traditional' 3-pole (18dB/octave) filter provides a faster response. This has to have a response that is well below the lowest frequency you expect to encounter - including noise! This usually means that the filter will be slow, generally requiring at least a couple of seconds before the reading is stable.
With the values used (10k and 2.2μF), a 3-stage filter/ integrator has a -3dB frequency of 1.5Hz, whereas a single integrator using 10k and 12μF has the same risetime, with a -3dB frequency of 1.42Hz. The big difference is at higher frequencies. At 100Hz, a single-stage filter is only 37.5dB down, vs. just over 68dB for a 3-stage filter. A 3-pole active filter using the same capacitance but 33k resistors has a -3dB frequency of 0.8Hz, and is 100dB down at 100Hz (allegedly - this assumes ideal parts). This filter will be within 1% after 1 second (close enough).
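The attenuation figures are easy to reproduce. The sketch below (Python, assuming ideal, non-interacting R/C sections - i.e. buffered, with no loading between stages) gives the single-stage and 3-stage figures at 100Hz:

```python
import math

def rc_atten_db(r, c, f, stages=1):
    """Attenuation (dB) at frequency f for `stages` identical,
    non-interacting first-order R/C low-pass sections."""
    f0 = 1 / (2 * math.pi * r * c)                 # -3dB frequency per stage
    per_stage = 1 / math.sqrt(1 + (f / f0) ** 2)   # magnitude response
    return -20 * math.log10(per_stage ** stages)

print(f"single stage, 10k/12uF at 100Hz : {rc_atten_db(10e3, 12e-6, 100):.1f} dB")
print(f"3 stages, 10k/2.2uF at 100Hz    : {rc_atten_db(10e3, 2.2e-6, 100, 3):.1f} dB")
```

A real passive (unbuffered) string will differ slightly because each section loads the one before it, but the comparison between single and multi-stage filtering is unchanged.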
This is probably one of the few places where a simple ultra-low pass filter cannot use electrolytic capacitors. There are two problems, dielectric absorption (not normally a problem) and leakage (ditto). The filter is expected to give an accurate result, and anything that compromises that is not acceptable. The networks I showed use 10k and 2.2μF, with three in series. This will be within 1% in 535ms and has a -3dB frequency of 1.42Hz. This isn't bad, but it can be improved, especially if you need to measure low frequencies (2Hz or below).
As always in engineering, we need to compromise. If we simply use higher resistance values we will get a lower frequency, but we may also get DC offset due to opamp input current, and the circuit becomes susceptible to PCB contamination, humidity, etc. Much depends on the accuracy we're trying to achieve. If we're happy with around 2% (and that's not unreasonable) we can take a few liberties. A low-cost JFET-input opamp may have an input offset of around 3mV, but that can be removed with a simple offset adjustment. If our minimum signal level after amplification is (say) 100mV, the offset represents an error of 3%. That's easily adjusted to be below a few microvolts.
The circuit shown above is a good compromise, and will provide an output level of 1% accuracy within 900ms. The circuit uses a TL071 which includes an offset-null facility using VR1, and a gain control that can be adjusted to get exactly 1V output for a 1V input (a synchronous rectifier is assumed). The gain required is ×1.1, which demands odd-value resistors. A trimpot would generally be used, as that makes it easy to get an exact output. The final integrator isolates the opamp's output from capacitive loading and creates a 100Ω output impedance.
A precision opamp is another solution, such as an OPA627, with an input offset of around 100μV without adjustment (depending on the grade of the device). If this is arranged to have gain (as required for the detector), you can kill two birds with one stone (as it were). Adding gain will change the filter response very slightly, but not enough to affect its performance.
When the recovered signal is from a multiplier, the average DC value depends on both the signal amplitude and the reference amplitude. That means that if you arrange for the multiplier to have a gain of two, the average output amplitude is the same as the RMS value of the signal. With a synchronous rectifier, the average is 0.637 of the peak, or 0.9 of the RMS value. That means that you need a gain of 1.11 to ensure that the RMS value is represented accurately.
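The two scale factors fall straight out of the maths for a sinewave (a quick Python check):

```python
import math

peak = 1.0
rms = peak / math.sqrt(2)           # 0.707 of peak
fw_average = 2 / math.pi * peak     # 0.637 of peak (full-wave rectified average)

print(f"average / RMS : {fw_average / rms:.3f}")   # ≈ 0.900
print(f"required gain : {rms / fw_average:.3f}")   # ≈ 1.111
```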
Figure 3.2 shows the result when the signal and reference are in phase and out of phase. To get an accurate result, the phase error between the signal and reference signals should not exceed 5°. Even then there is a small error, but it's unlikely to cause anyone to lose sleep. The question is, how does phase shift occur and what do we do about it?
Note that this discussion is based on analogue filters and phase-shift delay networks. If you use a digital sub-system, you can use FIR (finite impulse response) filters, which can be configured to have no phase shift. Alternatively, you can use an IIR (infinite impulse response) filter and correct its phase with a digital delay. A digital approach requires a DSP (digital signal processor), and if you go that way much of the circuitry described can be implemented in the digital domain. Modern commercial LIAs use digital processing. This is completely outside the scope of this article, and will not be discussed any further.
With any filter comes phase shift, so for the frequency of interest you must be able to tweak the phase so the reference and recovered signals are in phase. Depending on the filter you use, the phase will be either leading or lagging. A filter will create 45° of phase shift at the -3dB frequency for each 'order'. A first order filter (6dB/ octave) contributes 45°, second order (12dB/ octave) 90° and so on.
A high-pass filter creates a leading phase shift, and a low-pass filter a lagging one. A leading phase shift simply means that the output signal comes before the input (which may seem impossible, but it's true nonetheless). This is a 'steady state' condition, meaning that it takes a number of cycles of the input before the long-term output is stable. Because a high-pass filter is leading, its output is moved forward in time compared to the input. The three waveforms are shown below, for simple 6dB/ octave filters at a frequency of ~400Hz (the filter frequency with 10k and 39nF). When phase-shift is used, it will generally be the reference waveform that's shifted, as this prevents any more noise from being introduced to the signal.
You can see how the two conditions (leading and lagging) are set up as the signal is applied from time zero (t0). It's obvious that the high-pass filter's output peak does indeed occur before the input signal peak from the second complete cycle and thereafter. This means that if you add a filter to remove noise, it may affect the phase of the wanted signal. Any phase displacement causes an error in the output as described for the multiplying lock-in amplifier.
If you need to add a filter you also need to correct the phase, so you'll need to include a phase shift network, along with an inverter. Phase shift networks are lagging, so if you need to correct a phase lag, you're in for a world of pain. It's far easier to delay the reference than it is to attempt to advance the phase. It can be done, but it's not intuitive and there don't appear to be any formulae that can be used to calculate the required advance as a time shift. You can calculate phase and arrange for a long phase shift, but the delay is not necessarily constant. I'm not going there!
You will need an inverter so you have flexibility with external circuits that alter phase/ polarity. Any phase-shift networks you add will probably require switched capacitor values if your tests are at anything other than a single frequency. Your external circuit may also introduce a phase shift that needs to be corrected, and it may be leading or lagging. Remember that the closer the filter frequency is to the test frequency, the harder it becomes to make the necessary phase corrections.
If you are measuring the output from a strain gauge or other simple circuit (and this is a good use for a basic LIA), keep the modulation frequency low (around 400Hz is ideal) as that minimises phase shifts through opamps and other circuitry. 400Hz is also high enough that if you do need to add a high-pass filter you can do so with little phase disturbance provided it's tuned to no higher than 20Hz (12dB/ octave Butterworth filter). This will cause a ~5.7° phase shift at 400Hz, resulting in less than a 1% error.
Trying to find a formula that works for determining phase shift in a network as shown is not easy. There are several candidates, but none gives the same phase shift/ delay as the simulator shows, and I know that the simulator is very accurate with such circuits. Note that the phase-shift network is lagging (so the output appears after the input). An inverter lets you switch the phase by 180° as needed. Importantly, the phase shift (in degrees) changes depending on the frequency, but the time delay does not. If you need a 50μs delay, it remains constant with frequency. The network shown has a delay from 24μs (minimum) to about 264μs (maximum). Different ranges are provided by using switched capacitors.
The frequency where a phase shift network provides a 45° shift is calculated using the standard formula ...
f45 = 1 / ( 2π × R × C )
f45 = 1 / ( 2π × 11k × 12n ) = 1.2kHz and
f45 = 1 / ( 2π × 1k × 12n ) = 13kHz (close enough)
The phase shift introduced by the network (Rp and Cp) at the 400Hz signal frequency is determined by ...
ω = 2πf
φ = arctan( 1 / ( ω × Rp × Cp )) or if you prefer ...
φ = tan⁻¹( 1 / ( ω × Rp × Cp ))
The above tells us that the network shown has a 45° phase shift (VRp + R1 = 11k) at 1.2kHz, but we knew that already. What we need to know is how much phase shift we get if Rp is reduced to 2.2k (as used in Fig 5.3). If you use the equation above, the answer is clearly incorrect, but it can be subtracted from 90° and doubled to get the right answer (after converting degrees to delay in μs). I did that and got a delay of 52.7μs. Too much faffing around, and no guarantee that it will work in all cases! The delay can be more easily approximated by ...
Delay = Rp × Cp × 2
For example ...
Delay = 2.2k × 12n × 2 = 52.8μs (The simulator says 52.4μs, but I won't argue the point)
We're not even slightly interested in the 45° phase shift frequency, provided it's at least 2-3 octaves removed from the test frequency. The amount of delay is varied by changing the resistance (VRp). Less resistance reduces the delay (and vice versa). Changing the capacitance also changes the delay (more capacitance, more delay). VRp is a pot to allow continuous correction over a reasonable range. The minimum resistance should be at least 1kΩ to avoid overloading the driving opamp. In general, you probably shouldn't need to change the delay by more than ≈100μs, but that may depend on your circuitry.
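The 'exact' figure quoted above can be reproduced by taking the full phase shift of the network (the arctan result subtracted from 90° and doubled, i.e. 2 × arctan(ωRC)) and converting it to time. A small Python sketch compares this with the Rp × Cp × 2 approximation:

```python
import math

def allpass_delay_us(r, c, f):
    """Delay (µs) of a first-order phase-shift (all-pass) network at
    frequency f.  Phase lag is 2 × atan(ωRC); delay = phase / (360 × f)."""
    w = 2 * math.pi * f
    phase_deg = 2 * math.degrees(math.atan(w * r * c))
    return phase_deg / 360 / f * 1e6

r, c, f = 2.2e3, 12e-9, 400
print(f"exact delay      : {allpass_delay_us(r, c, f):.1f} us")   # ≈ 52.7µs
print(f"2RC approximation: {2 * r * c * 1e6:.1f} us")             # 52.8µs
```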
The filter and phase correction shown will create almost perfect phase alignment at 400Hz. When a high-pass filter is included in the signal (plus noise) return path, the delay must follow the filter, but with a low-pass filter the phase is lagging, so you need to delay the reference signal. There is no equivalent to an all-pass filter (phase shift circuit) that can advance the phase of a signal. If we can't advance the signal, we can retard the reference. The net result is the same - the signal and reference are in phase, so the multiplier or synchronous rectifier will work properly.
The combination of a filter and phase-shift network will function over a limited frequency range. At ±½ octave, the phase shift between the signal and reference will be within ±6°, which is probably acceptable. The error is around 0.5%, rising to 1.7% if the phase discrepancy is 10°. Whether this is alright or not depends on your expectations. As noted elsewhere, a 15° phase difference leads to a measurement error of about 3.6%.
The values for the filter are approximate, as they have been simplified. You can use design software such as TI's 'FilterPro' software, which gives a very accurate result but with impractical values. A simplified version will be just as good for our needs, and far easier to build. It's obviously impractical to try to cover every eventuality, so from here on you'll have to work it out for yourself. Changing the filter, signal frequency, external circuit or anything else that affects phase will mean re-calculation of the values, but everything is easily scaled if you work with relative frequencies (e.g. double the frequency, half the phase shift). The trimpot (VR1) lets you adjust the delay, and phase alignment can be verified using a scope to look at the input and output.
Otherwise, you'll need to simulate or calculate the values needed to suit your circuitry and its phase shift. Determining the phase shift (or time delay) created by a simple R/C network is actually quite difficult to do, and you need to go through a few gyrations to get there. There is some info that can be useful in the article Using Phase Shift Networks To Achieve Time Delay For Time Alignment. It's aimed at loudspeaker time alignment, but the same principles apply.
You'll need to be able to re-patch the system so that you can measure a filter's phase shift at a given frequency. For example, the 2nd order, 40Hz high-pass Butterworth filter I used will introduce a 'time shift' (leading phase) of about 53μs at 400Hz. We can convert that to degrees with the formula ...
Delay = Phase° / f / 360
Phase = 360 × Delay × f
Phase = 360 × 53μ × 400 = 7.63°
Amplitude = sin ( 90 ± Φ° )
For example ...
Amplitude = sin ( 90 - 7.6 ) = sin ( 82.4 ) = 0.991 (a 0.9% error)
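These relationships are easy to verify numerically. A short Python sketch (the function names are mine, using the 53μs / 400Hz worked example from the text):

```python
import math

def delay_to_phase(delay_s, f_hz):
    """Convert a time delay (seconds) to a phase shift in degrees at f_hz."""
    return 360.0 * delay_s * f_hz

def detector_amplitude(phase_deg):
    """Relative detector output for a given phase error.
    Unity corresponds to perfect alignment, i.e. sin(90 degrees)."""
    return math.sin(math.radians(90.0 - phase_deg))

phase = delay_to_phase(53e-6, 400)   # 53us delay at 400Hz -> ~7.63 degrees
amp = detector_amplitude(phase)      # ~0.991, i.e. the output reads ~0.9% low
print(f"phase = {phase:.2f} deg, relative amplitude = {amp:.4f}")
```

Changing the phase argument to 45° returns 0.707, matching the -3dB point described below.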
When determining the amplitude, the 'reference' figure is unity, at 90°. As the phase is shifted from 90° the amplitude falls. At 45° it's down to 0.707, and that's the amplitude at the -3dB frequency for a filter. Feel free to play around with this, as it's helpful to understand the relationship between phase and amplitude at a more 'personal' level than we're used to.
If the phase shift (between signal and reference) is more than 5° you may need correction. As noted above, a 5° phase shift creates an error of less than 0.5%, but a 15° phase shift will cause an error of ~3.6%. The phase shift network introduces a lagging group delay that pulls the filtered signal back into alignment with the reference signal. A low-pass filter means that the reference signal must be delayed, as a phase shift network can only introduce a delay. The (group) delay is calculated by ...
Delay = 2.2k × 12n × 2 = 52.8μs
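If you want to check the 2RC approximation against the exact behaviour, the phase lag of a first-order all-pass stage is 2·arctan(ωRC). A short Python sketch (assuming the standard single-opamp all-pass topology implied by the values above):

```python
import math

R, C, f = 2.2e3, 12e-9, 400.0

# Low-frequency group delay approximation for a 1st-order all-pass: 2RC
approx_delay = 2 * R * C                                  # 52.8us

# Exact phase lag at f, and the delay it actually represents
phase_deg = 2 * math.degrees(math.atan(2 * math.pi * f * R * C))
exact_delay = phase_deg / 360.0 / f

error_deg = (approx_delay - exact_delay) * 360.0 * f
print(f"approx {approx_delay*1e6:.2f}us, exact {exact_delay*1e6:.2f}us, "
      f"error {error_deg:.4f} deg")
```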
The phase shift will be well below 0.5°, and that's close enough. The whole process can be made an exact science if you're willing to throw enough maths (and money) at it, but that just leads to very large formulae with a high chance of error. A simple, step-by-step approach is easier to follow and easier to implement for most people. Even an uncorrected phase error of 15° would only cause a 3.6% error.
Group delay is generally used to refer to a range of frequencies, but we're using it referred to a single frequency. If you use a simulator that can plot group delay, you'll see that it's very high at or near the filter's -3dB frequency, and is greatly reduced once your signal is a decade (1/10 or ×10) away from fo. I selected a Butterworth filter because it offers the flattest passband response. You can use any filter alignment, but you need to determine its group delay at the signal frequency.
Remember that in some cases you may have 180° phase shift caused by an inverting amplifier. This is easily compensated for by reversing the reference signal polarity by adding an inverting unity gain amplifier. If you don't correct a 180° phase shift/ inversion, the output will be negative, not positive.
Phase shifts and associated time delays can be very confusing (and not just for beginners), and if you can avoid adding filters or using circuitry that creates a phase shift it can save you a lot of grief. The frequency used for these examples is 400Hz, chosen specifically because most circuitry will have very little phase shift at this frequency. Where capacitor coupling is used to eliminate DC offsets, the coupling cap must be large enough to ensure that it causes no significant phase shift. The -3dB frequency should be around 1/100th of the frequency of interest (e.g. for 400Hz, rolloff at no more than 4Hz).
Ideally, you'd change the 40Hz filter shown above to a 4Hz filter, and that will remove the need for phase correction (but it lets more LF noise through). A 4Hz filter might use 1μF and 39k, and will cause a group delay of 4μs at 400Hz (less than 1° phase shift). If both the reference and signal inputs of the phase-sensitive detector use the same coupling or filter circuits, the delay is equal for both, and no error results.
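The 1/100th rule is easily checked: the phase lead of a first-order high-pass (coupling) network at frequency f is arctan(fc/f). A quick Python check of the 1μF/39k example:

```python
import math

R, C, f = 39e3, 1e-6, 400.0

fc = 1 / (2 * math.pi * R * C)               # -3dB frequency, ~4.1Hz
lead_deg = math.degrees(math.atan(fc / f))   # phase lead at the test frequency
lead_time = lead_deg / 360.0 / f             # equivalent 'time shift', ~4us

print(f"fc = {fc:.2f}Hz, lead = {lead_deg:.2f} deg ({lead_time*1e6:.1f}us)")
```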
If you plan on adding filters, I suggest rigorous testing, as relying on calculations can be misleading if you're new to this sort of thing. A simulator is highly recommended, because it's far easier than trying to take accurate measurements on the signal with a scope. It can be done - most modern scopes have cursors that let you take accurate measurements, but it can still get tedious. Using a trimpot with phase-shift networks allows you to see phase alignment in 'real time'.
Something you can do that will be very useful if you find yourself performing a lot of measurements is to use two identical filters (or sets of filters). One is included in the reference signal path, and the other after your test circuit and preamplifier. Provided the latter have no phase shift of their own, the phase of the reference and recovered signal will be the same, because both have passed through identical filter networks. There is phase shift, but it's the same for the two inputs to the LIA, so the net phase shift is zero. This allows you to use a high-pass filter (in particular) with a -3dB frequency that's much closer to your test frequency than could otherwise be the case. Low-pass filters can help reduce high-frequency noise that may otherwise overload the lock-in amp. If used, a frequency of around 10kHz is probably a good compromise (12dB/ octave should be enough), but if your test frequency is no higher than 400Hz, a 1-2kHz low-pass filter is better. You need two - one for the signal and one for the reference, so they will be in phase since they've both been subjected to the same filter.
This arrangement uses more components than a phase shift network, but with selected parts (mainly capacitors) you can be sure that there is little or no phase difference between the reference and signal. Adding DC blocking caps and equal-value resistors at the lock-in amp's inputs minimises the risk of DC offsets causing problems. The two caps and resistors should be used regardless, as they ensure that the average input level remains at zero volts. They aren't shown in the LIA circuits above for clarity, but they should always be used.
An ideal arrangement for fixed frequency signals is to use a pair of filters that are equal but 'opposite' (i.e. high-pass and low-pass, with the same order). If these are spaced at (say) 2 octaves below and above the test frequency, their phase shifts will cancel, and much of the noise will be removed. For a 400Hz test signal, a pair of 2nd-order filters at 100Hz and 1,600Hz will remove most of the noise but have almost no effect on the amplitude or phase of the wanted signal. A second set of identical filters is used in series with the reference, so there is no phase difference between the reference and the signal.
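The claimed cancellation is easy to demonstrate numerically. The sketch below (plain Python, no simulator needed) evaluates ideal 2nd-order Butterworth responses at the 400Hz test frequency; because the two cutoffs are symmetrical about the test frequency, the phase shifts cancel almost exactly:

```python
import cmath, math

def butter2(f, f0, highpass=False):
    """Complex response of a 2nd-order Butterworth filter at f (cutoff f0)."""
    s = 1j * 2 * math.pi * f
    w0 = 2 * math.pi * f0
    num = s**2 if highpass else w0**2
    return num / (s**2 + math.sqrt(2) * w0 * s + w0**2)

f = 400.0
hp = butter2(f, 100.0, highpass=True)    # 2 octaves below the test frequency
lp = butter2(f, 1600.0)                  # 2 octaves above
net = hp * lp                            # the two filters in cascade

net_phase = math.degrees(cmath.phase(net))
print(f"HP {math.degrees(cmath.phase(hp)):+.2f} deg, "
      f"LP {math.degrees(cmath.phase(lp)):+.2f} deg, "
      f"net {net_phase:+.4f} deg, gain {abs(net):.4f}")
```

Each filter contributes about ±20.7° on its own, but the net phase is essentially zero and the net gain is within 0.05dB of unity.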
Interestingly (or perhaps not), this section on filters and phase was by far the hardest part of this article to write. I won't blame anyone for thinking it's confusing, as I had difficulty at times making sure that what the graphs show and what I wrote were in agreement. Leading and lagging phase shifts can be hard to get your head around, because it's not easy to look at the graph and see it like it is. A leading waveform means that you see it occur before (i.e. at an earlier point in time) than the input. When you look at it, it appears to be following behind the input, but that's because of the way we perceive time. You need to look at the time intervals, and it becomes obvious that a leading waveform does really occur at a time before the input. Likewise, a lagging phase shift means that the output appears after the input signal (more time has elapsed before it rises to its peak). I know that others will struggle with this, hence this explanation. If you see an error please let me know.
In the context of this discussion, be aware that the phase shifts described have nothing to do with voltage vs. current as you find with power electrical installations. Power factor of reactive loads is a different topic altogether, so please don't conflate the two different forms of phase shift. Yes, in filters and phase-shift delay networks the voltage and current will have different phases, but in electronic systems (as opposed to electrical distribution) we don't care about the current, only the voltage.
Sometimes you will find that the output of your LIA is still lower than you'd prefer. This can make accurate measurements difficult if the voltage is much less than ~100mV, because some meters have poor resolution if the input is less than 10mV or so. Amplifying DC is always fraught with difficulty because of DC offset, even with precision opamps. One source of errors is the thermo-electric (aka Peltier) effect, where dissimilar metals generate a voltage at their junction. This can be minimised by keeping DC amplifiers in their own enclosure that ensures that everything is at the same temperature. Thermo-electric/ thermo-couple voltages are usually low - a few μV per °C - but some materials are worse.
The traditional way to amplify low DC voltages is to use a 'chopper' or 'zero-drift' opamp. These have internal switching that auto-zeros the opamp, at a frequency that varies from a few kHz to 200kHz or more. The 'original' chopper amplifier concept basically chopped the input voltage to AC, amplified it, then synchronously rectified the output. That approach can be compared to the AD630 style synchronous rectifier lock-in amp. The earliest chopper amplifiers used valve (vacuum tube) amplifiers and an electromechanical chopper/ synchronous rectifier.
There are many chopper stabilised opamps available, with prices ranging from AU$6.00 to AU$20.00 or more. Some are SMD only, others are through-hole. Many are low-voltage (5V), but they are available with a total supply voltage of up to 18V (recommended voltage is ±5V). If you expect to get output voltages in the order of a few millivolts (as I did with my test LIA) then further amplification will almost always be required.
The above is a very generalised representation of a chopper stabilised opamp. The input and output are continually switched to correct for DC offset. There are many different ways they are implemented, and the above is intended as a guide only. The internal circuitry depends on the manufacturer, and there are several different approaches. They all achieve the same end goal, but some create intermodulation distortion based on the chopper frequency, while others have gone to great lengths to eliminate AC signal errors.
Chopper opamps are used in much the same way as any other, and for many readers this will be the first they've heard of these as well. Most of the time, we don't need to amplify DC, and in the few cases where we do, an offset of a couple of millivolts is neither here nor there. If your signal is only a few millivolts, you need the offset to be a few microvolts, and a chopper is the only way you'll get that. For example, the ICL7650 has a claimed DC offset of 1μV. Most chopper stabilised opamps are only capable of modest output current, so after amplification they can be followed by a conventional opamp as a voltage follower.
The above is adapted from the Maxim ICL7650 datasheet, but is fairly representative of most chopper opamps. The output is integrated (again) to remove residual noise, and the buffer opamp provides a low output impedance. The 100Ω output resistor prevents oscillation if a shielded lead is used (it should not be omitted). The two capacitors are the recommended value of 100nF, and will ideally be polypropylene for minimum settling time, although Mylar (PET, polyester) caps are quite alright for most applications.
The stage is configured for a gain of 10, with a maximum recommended output of 5V DC. In some cases a gain of 100 may be required, and of course it can be switched if necessary. The extra gain stage isn't essential of course, and you may get perfectly good results without it. Remember that you have to compensate for the converted DC level too, and this is not shown. With a multiplying LIA, the DC output is half the original signal peak (multiplied by the reference voltage), and for a synchronous rectifier type the output is 63.7% of the input AC peak. This is probably most easily done using a trimpot.
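As a sketch of the compensation arithmetic (the function names are mine, not from the article): the multiplying detector averages to half the signal peak (times the reference amplitude), while the synchronous rectifier averages to 2/π (63.7%) of the peak:

```python
import math

def multiplier_dc(sig_peak, ref_peak=1.0):
    """Average (DC) output of a multiplying detector with in-phase inputs."""
    return sig_peak * ref_peak / 2.0

def sync_rect_dc(sig_peak):
    """Average output of a full-wave synchronous rectifier: 2/pi of peak."""
    return 2.0 * sig_peak / math.pi

sig_peak = 10e-3                           # e.g. a 10mV peak recovered signal
print(f"multiplier : {multiplier_dc(sig_peak)*1e3:.2f}mV DC")
print(f"sync rect  : {sync_rect_dc(sig_peak)*1e3:.2f}mV DC")
```

The trimpot mentioned above simply applies the inverse of whichever factor applies, so the meter reads the original signal level directly.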
This is very much a basic introduction to lock-in amplifiers. Commercial products often use a dual-phase detector (sine and cosine) to allow phase measurements, as well as many other functions not discussed here (or not in any detail). Few hobbyists will have one, and most people will never have the opportunity to use one. However, if you find yourself with an intractable noise problem with a measurement, an LIA may be the only solution. One task that springs to mind is measurement of very low resistance values. This is normally done with a low-ohms meter or a current source and multimeter, but a lock-in amp is another technique that can be used. Because they require an AC source, accurate measurement of a capacitor's ESR (equivalent series resistance) would be easy to do, even at very low values. Of course you can just use an ESR meter, but where's the fun in that?
Lock-in amplifiers are a very specialised tool for extracting the amplitude of very small AC voltages that are buried in noise. It's generally possible to get a usable signal with a 100dB 'noise to signal' ratio - that's 1V of noise and only 10μV of signal. There's a great deal to be gained if you can remove very low frequency noise, as that will always place a limit on the accuracy of your final measurement. However, the filter can't add significant (uncorrected) phase shift at the frequency of interest.
This will be a challenge if you expect to measure very low frequencies, so you'll generally have to put up with some variability as it's no easy feat to remove (close to) DC noise without affecting frequencies below 1Hz. For anything that most of us mere mortals will need to do, the fancier parts of commercial units will generally have to be dispensed with, because there are practical limits as to what can be done without spending months (and a great deal of money) to perfect the design.
That's not to say that it can't be done of course. The problem is that the commercial manufacturers have years of experience, earlier units that can be used for inspiration, dedicated test fixtures and everything else, including test gear that we can't afford. This makes a new design much easier for them, because they have everything they need to hand, which we don't.
This is not a reason not to play around with the idea, and you might have a requirement to measure low signal levels right now, and a fairly basic LIA such as those described here might be all you need to get a result. Sometimes, there are good reasons to try new techniques when working on project ideas. One that comes to mind immediately is the use of Hall-effect current sensors. These are quite useful, but very noisy, and a simple lock-in amplifier may be all you need to get a clean, noise-free output signal. However, that won't work if you expect fast response (e.g. overload detection), because the integrator will always slow down the measurement result.
Importantly (probably most importantly), it's additional knowledge that you can add to your arsenal, that may be of great assistance with a design or project that would otherwise be almost impossible to realise. This particular topic was new to me, but its importance was immediately obvious. Even though I've had very few projects over my long career in electronics that needed an LIA, there have been times when I've had to use averaging on a scope to see very low levels, with varying degrees of success. I've acquired one (I was going to build it using a multiplier), and while I don't expect to use it often, I know that it will come in handy.
One thing that you will see elsewhere is a lengthy list of formulae for every aspect of the lock-in process. These have been avoided here as is general practice on the ESP site. It's not that the formulae are not useful or necessary, but they do tend to deter those who are not 'classically trained' engineers or mathematicians. Some formulae are unavoidable of course, but I try to keep them to a minimum to ensure the material is readable without leaving out details that are important. Complete mathematical equations are available elsewhere if you need them.
The photo above (hover to view full size) is a 'typical' lock-in amp, the model SR850 from Stanford Research. It includes a selectable transimpedance stage (current to voltage converter) at the input, and can measure down to nanovolts or femtoamps. It measures phase and amplitude, and is far beyond anything described here. The info I've included will allow you to build a basic analogue LIA, but if you need all the bells and whistles, you just have to buy the 'real thing'. Expect it to cost at least US$6k, and it will come with a serious learning curve.
Prior to the lock-in amplifier, an instrument known as a 'boxcar integrator' (aka boxcar averager) was used to achieve a similar result. The system used sampling that was locked to the signal frequency (and phase), then it took a short sample at a defined point on the waveform. For coherent waveforms (i.e. same frequency for the signal and reference), the average would not be affected by random/ non-coherent noise or interference. These also used very clever circuitry. They were first used in ca. 1950 (according to Wikipedia), so it's almost certain that the earliest versions would have used valves (vacuum tubes). I don't propose to cover these here, but there is quite a bit of info on-line (including a video of one in use).
In some respects, the boxcar integrator could do more than an LIA, but they have faded into obscurity for the most part. There are still devices that do the same thing, but they're more likely to be called a 'gated integrator' (e.g. SRS SR250). Units available today are not all-inclusive, so external metering and signal generation will probably be required.
Noise has always been the enemy of accurate measurements, and methods of minimising its impact will continue to be developed. Modern oscilloscopes with averaging can be considered (at least to an extent) a reasonable equivalent to the boxcar integrator, but they are still limited - at least for those models that remain affordable. As seen in Figs 0.2A and 0.2B, averaging does work fairly well, and it's certainly much simpler to set up than a boxcar integrator, or even a lock-in amplifier.
It should be noted that not all reference material describing lock-in amplifiers is completely reliable. To function, a lock-in amp needs the input and reference signals to be present for long enough for a decent average to be obtained. If the noise and signal levels are comparable (e.g. 100mV signal and ~100mV of noise after amplification) then the number of averages required may be reduced.
There's a great deal of literature available for anyone who wants to know more, but I suggest that you don't take everything you read as gospel. Like any other topic on the interwebs, some info is good and some is terrible. The following sites/ publications were used in the preparation of this article.
Note that some of the links may break without notice. I have PDF copies of some of those that are most likely to vanish (or lose their drawings as happens regularly with EDN).
Just for a laugh, I asked ChatGPT to tell me about lock-in amps. The result was less than ideal, with a fair bit of detail, but nothing actually useful. The result is (mostly) technically correct, but it's the sort of response you expect from a politician - lots of words that say very little. It's worth reading, if only to prove that AI isn't quite ready to take our jobs.
Elliott Sound Products - Lock-In Amplifiers (ChatGPT 'Explanation')
A lock-in amplifier is a specialized electronic instrument used to extract a small signal that is buried in noise. It is particularly useful in situations where the signal of interest is much weaker than the surrounding noise. Lock-in amplifiers are widely used in fields such as physics, chemistry, biology, and engineering for applications such as spectroscopy, microscopy, and other sensitive measurements.
Here are the key features and principles behind lock-in amplifiers:
While lock-in amplifiers are powerful tools for signal extraction in noisy environments, they do have limitations and are best suited for specific applications where their unique capabilities are needed.
Me: How does it work?
ChatGPT: The operation of a lock-in amplifier involves several key steps:
The key to the lock-in amplifier's effectiveness lies in its ability to selectively amplify signals at a specific frequency (the reference frequency) and to reject noise at other frequencies. This makes lock-in amplifiers invaluable in situations where the signal of interest is much weaker than the surrounding noise, such as in many scientific experiments and measurements.
Hardly very helpful!
Elliott Sound Products - Beginners Guide to Logic
Logic ICs aren't what they used to be. In the early days, we had RTL (resistor-transistor logic) along with DRL (diode-resistor logic), DTL (diode-transistor logic), TTL (transistor-transistor logic), PMOS (P-Channel metal oxide field effect transistors - MOSFETs) then CMOS (complementary MOSFETs, both P and N-Channel). High speed circuitry used ECL (emitter coupled logic), which operated the transistors in the non-saturated region (not turned fully on) to obtain higher speed. Before that, valves (vacuum tubes) were used for either analogue or limited digital computers. They were limited because the number of valves needed became unrealistic, as did the power supplies needed to feed them electrons. As technology progressed, the size of components shrunk, until we now have a chip you can hold in one hand (with lots of room to spare) having millions of transistors.
It would be a pointless exercise to even attempt to describe the internals of a modern microprocessor, but the building blocks used to create the circuit perform the same jobs as the earliest circuits that were ever used. Likewise, it would not be helpful to try to describe how (very) early mathematical 'engines' worked. These were mechanical, with the best known being Charles Babbage's 'Analytical Engine'. Although it was never completed, Ada Lovelace wrote programs for it, and she is often regarded as the first computer programmer. For anyone interested, this is a fascinating area, and formed the basis of modern computing as we know it today. Early minicomputers were built using TTL logic, often helped by including a pair of ALUs (Arithmetic Logic Units), most commonly the 74181 (now long obsolete). These reduced the size of a basic computer from several large PCBs to just one (still large) PCB. I worked on these back when it was still economical to fix computer boards, during the mid 1980s.
The essential elements of any logic circuit are gates. These provide an output based on the voltage applied to their inputs, and the most common are the AND gate, OR gate and inverter (also called a NOT gate). There are many others as well, but these simple gates form the basis of most digital circuitry as we know it today. In order to gather and present data to and from the real world, there is also a need for analogue to digital converters (ADCs), and their opposites, digital to analogue converters (DACs). I don't propose to cover these here.
Most of the time, a logic 'low' ('0') is defined as being at (or near) zero volts. The logic 'high' ('1') voltage used to be 5V, but some CMOS ICs can use 15V, and many of the newer circuits use 3.3V or less (as low as 1.8V for many high density ICs). PMOS was really the 'odd man out' in all of this, because it used a negative supply voltage. All logic has a defined range where an input is accepted as a '1' or '0', and in between is no-man's-land. Voltages in this undefined region may be interpreted as a '1' or a '0', and good design ensures that all voltages are within the specified regions, except when switching from one state to the other of course. As logic becomes faster and faster, there are issues faced that take many digital designers well outside their comfort zone. This is covered in some detail in the article Analogue vs Digital - Does 'Digital' Really Exist?.
All of the circuits shown in this article can be built on a standard plug-in breadboard, or wired on Veroboard. Readily available transistors are used throughout, and any small signal NPN transistor can be used in any of the demonstration circuits. Mostly, you don't need to bother, because the operation of most is (almost) self explanatory if you understand basic transistor theory.
There are a number of different 'families' of logic. Standard TTL is generally prefixed by 74, with the 7400 being a quad 2-input NAND gate. 74L series are low-power versions, 74LS are low-power Schottky. Military or extended temperature range TTL is prefixed by 54, and TTL compatible CMOS uses 74HC or 74HCT numbers. Basic CMOS is usually designated as the 4xxx series, but there are many variations. The basic logic families are as follows ...
RTL Resistor Transistor Logic ¹
DTL Diode Transistor Logic ¹
TTL Transistor Transistor Logic
ECL Emitter Coupled Logic
PMOS P-Channel Metal Oxide Semiconductor ¹
CMOS Complementary Metal Oxide Semiconductor Logic
Those shown with '¹' are now obsolete. Two that continue in discrete form (generally for DIY projects) are DRL (Diode-Resistor Logic) and DTL, which are often used to create simple AND/ NAND and OR/ NOR functions. Neither is particularly efficient and both are usually slow, but for some simple tasks that's of no consequence.
Most logic circuits can be described by a 'truth table', which for a NAND gate looks like that shown below. I also included 'pseudo code', which is not intended to be seen as being in any specific programming language, but describes how the function would be implemented in code. 'DIM' means dimension, and 'BOOLEAN' (often abbreviated to 'BOOL') is a (usually) single bit that can be '1' (aka 'True') or '0' (aka 'False'). Some languages allow numerical values ('0' or '1'), others insist on 'true' or 'false'. The 'IF...THEN...ELSE' format is common to many different languages, but formatting may differ. In some languages, an 'ENDIF' statement is required to delineate the 'IF...ELSE' from other code ('THEN' is not required by all languages, or may be optional).
A  B  Y
0  0  1
1  0  1
0  1  1
1  1  0
DIM A, B, Y as BOOLEAN
IF A=1 AND B=1 THEN Y=0 ELSE Y=1
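The pseudo-code translates directly into any real language. A minimal Python version, which reproduces the truth table:

```python
def nand(a, b):
    """Two-input NAND: the output is low only when both inputs are high."""
    return 0 if (a and b) else 1

# Print the truth table
print(" A  B  Y")
for a in (0, 1):
    for b in (0, 1):
        print(f" {a}  {b}  {nand(a, b)}")
```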
A truth table is convenient for small logic devices, but becomes unwieldy very quickly with more complex gate arrays or other highly integrated logic circuits. For example, the truth table for a modern microprocessor could end up being almost infinite if every possibility of input and internal states were to be documented. Many datasheets include the truth table, and often have graphs to show the timing requirements - particularly for latches, counters, shift registers, etc. Many of these have a minimum 'set up' time, where (for example) data must be present for a small period before the clock signal is applied.
Internal circuits are shown only for TTL, because CMOS circuits are usually much more convoluted, and are difficult to build if you want to see how they operate. The internal circuits are not widely available, and tend not to provide an easily understood operation. In contrast, TTL is straightforward, there are many circuits on the Net, and they are (mostly) easy to understand if you have some basic transistor theory knowledge. The component values shown below are from the standard E12 series, and are not the same as shown in datasheets. The difference is minimal.
Over the years there have been countless TTL functions, but a vast number of those are now obsolete. Most of the old memory ICs are no longer made, and they were only common during the time where computers were built using TTL logic to build the processor itself. These are now fully integrated, ranging from PICs (Programmable Interface Controllers) through dedicated microprocessors and ending with the latest multiple core processor chips used in PCs, tablets and high end mobile phones.
All logic circuits suffer from propagation delay. Modern fabrication techniques use lower voltages (as low as 1.8V in some cases) which allows the logic to draw higher current for the same or less power consumption. Speed is always a function of power - if you want to go faster, you need more power (or in this case, current). As an example, the CMOS 4040 counter IC (more on this further below) has a propagation delay of up to 330ns with 5V, 160ns at 10V or 130ns at 15V. The difference is due to the current drawn - higher voltage, higher current.
Propagation delays occur because nothing is instantaneous. Transistors can't turn off instantly, as it takes time for the carriers in the doped silicon to disperse to the point where the transistor is truly off. This is made worse with TTL, because the transistors are driven to saturation (more base current than necessary to turn the transistor(s) on). ECL (Emitter Coupled Logic) overcame this by operating the transistors in their linear range (neither fully on nor off), and while that increases speed, it reduces noise immunity and increases power consumption. To provide a bit of 'nuisance value', ECL uses a negative supply, with the positive supply rail being earth/ ground.
There are countless compromises in all electronics, and logic is no different. CMOS is now the dominant technology, and is used in nearly all processors, from basic PIC microcontrollers up to the VLSI (Very Large Scale Integration) required for processors (including graphics processing), ASICs (Application Specific ICs, generally custom made for a particular purpose) and FPGAs (Field Programmable Gate Arrays). The latter can be configured to perform complex tasks very quickly, and are programmed by the customer/ end user to perform specific tasks as efficiently as possible. They are well outside the scope of this article!
There are CMOS variants of many of the 74 series TTL ICs, generally prefixed with 74HC... or 74HCT... The 'HCT' types are designed to be compatible with 74 series TTL, and cannot be used at the higher voltages common to the 4000 series of CMOS. 74HC types often allow (a bit) more than 5V, but that varies, so you must consult the datasheet. Making assumptions is unwise. If you need the full 15V (18V absolute maximum) CMOS ICs, then you need to stay with the 4000 series.
There isn't a digital circuit that doesn't use one or more of these basic gates. They are the basis for all 'higher level' functions, and the outcome is (in logic terms) either true ('1') or false ('0'). Boolean algebra is only mentioned in passing here, but it's the basis of modern computing - which is in turn based on the three functions AND, OR and NOT. Boolean algebra was first developed by English mathematician George Boole, and was described in his first book 'The Mathematical Analysis of Logic', published in 1847. To learn more on the topic, I suggest you check out the Wikipedia page, which goes into much greater detail.
The premise of an AND gate is that the output will be high (logical '1') only if both inputs are high (input1 AND input2). A NAND gate will output a logic '0' under the same conditions (NAND means 'NOT AND'). Although IC designers can do 'odd' things (such as build a transistor with multiple emitters), this can't be directly translated to a breadboard. The circuits shown can be built on breadboard, and they are presented in a way that makes it easy to do.
Figure 1 - TTL NAND Gate
Q1 and Q2 are shown as separate transistors, but in reality it's a single transistor with two emitters. Performance is the same either way. The output will go low when, and only when, both inputs are high. In other words, the output is the negation of 'A' AND 'B', so the circuit is a NAND gate. An AND gate is created by adding an inverting stage between Q1/2 and the totem-pole output transistor driver (Q3).
Q1 and Q2 use their emitters as the input. When the emitters are held low, the transistors both conduct, and bypass the base current for Q3 (provided by R1) to ground. Q4 is turned on via current through R2, and Q5 is off because there is no voltage across R3. This condition exists if either input is low, so the output is high. When both Q1 and Q2 emitters are high, Q3 turns on. This forces Q4 to turn off, aided by D3 which ensures that all current through R2 is diverted from the base of Q4. Q5 conducts fully, and the output is low. In common with most logic circuits, there is a comparatively high current drawn during the transition from high to low and vice versa, and this is why bypass capacitors are required for all logic circuits.
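The gate's logical behaviour (ignoring the transistor-level detail) can be sketched in software. A minimal Python model of the NAND function described above (the function name is mine, purely for illustration):

```python
def nand(a: int, b: int) -> int:
    """NAND: the output goes low only when both inputs are high."""
    return 0 if (a == 1 and b == 1) else 1

# Print the truth table described in the text
for a in (0, 1):
    for b in (0, 1):
        print(f"A={a} B={b} Y={nand(a, b)}")
```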
Commercial ICs can have more than two inputs, with 4 inputs being common for AND/ NAND and OR/ NOR gates. Another option is open-collector outputs, which allow TTL to interface with CMOS using a different supply voltage (higher or lower than 5V), or to permit connection to other circuitry. Open collector outputs are generally able to handle more than the normal 5V supply, with up to 30V being fairly typical. There are specialised open collector ICs that are designed for driving small relays, and they include the back-EMF diode required with relays.
Figure 2 - TTL NOR Gate
A NOR gate is somewhat more complex, even though (at least in theory) it's a simpler function. If 'A' or 'B' is high, the output is low. Like the NAND gate, an OR gate is created by adding an inverter between Q3 and Q4 (which are in parallel) and the totem-pole output stage. Operation is very similar to that for the NAND gate, except that Q4 is turned off if either or both 'A' or 'B' is high.
A  B  Y
0  0  1
1  0  0
0  1  0
1  1  0
DIM A, B, Y as BOOLEAN
IF A=1 OR B=1 THEN Y=0 ELSE Y=1
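The BASIC-style pseudocode above can also be written as runnable Python (a sketch of the same truth table, not part of the original article):

```python
def nor(a: int, b: int) -> int:
    """NOR: IF A=1 OR B=1 THEN Y=0 ELSE Y=1 (the pseudocode above)."""
    return 0 if (a == 1 or b == 1) else 1

for a in (0, 1):
    for b in (0, 1):
        print(f"A={a} B={b} Y={nor(a, b)}")
```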
Figure 3 - Inverter (NOT) Gate
The inverter is a simple function, but its importance in logic cannot be overstated. As noted above, AND and OR gates require inversion to get the required function from NAND and NOR gates respectively. So many logic functions rely on inversion that we'd be lost without them. An open collector version is also shown for the inverter (the transistor numbers are kept the same so it's easier to follow).
If you are working with any logic circuitry, it's not uncommon to find that you need a quick way to include a simple OR or AND function (or its inverse). Often, it's easiest to use a few diodes and a resistor, optionally with a transistor. This can be used to decode a simple sequence from the output of a counter or shift register, and can save the hassle of including another IC where you may only need a single gate.
Figure 4 - Diode Logic Examples
In the above (#1), the output will be low if A, B, C or D (or any combination thereof) are low. In other words, it's an AND gate, requiring that all inputs are high to give a high output. The second circuit (#2) is an OR gate, so the output will be high if any input is high. Finally, circuit #3 is a NAND gate. It's the same as #1, but the output is inverted by Q1 (a small signal MOSFET).
These circuits can be intermingled to get some interesting combinations, and there is no practical limit to the number of diodes used. In some cases, it can be convenient to connect the input resistor (3.9k) to the output of another gate, allowing some quite complex logic functions to be created that may require several TTL or CMOS gates. The circuits are all shown with four inputs, but you can use more or fewer, depending on what you wish to achieve. Surprisingly, this technique is still useful (although it's very slow compared to 'real' gates), and was the basis for some of the earliest logic.
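Behaviourally, the three diode circuits reduce to AND, OR and NAND with any number of inputs. A quick Python sketch of the logic they implement (function names are illustrative, not from the article):

```python
def diode_and(*inputs: int) -> int:
    """Circuit #1: any low input pulls the output low through its diode,
    so the output is high only when every input is high."""
    return 1 if all(inputs) else 0

def diode_or(*inputs: int) -> int:
    """Circuit #2: any high input lifts the output through its diode."""
    return 1 if any(inputs) else 0

def diode_nand(*inputs: int) -> int:
    """Circuit #3: circuit #1 followed by the MOSFET inverter (Q1)."""
    return 1 - diode_and(*inputs)

# Four inputs as drawn, but more or fewer work just as well
print(diode_and(1, 1, 1, 1), diode_or(0, 0, 0, 1), diode_nand(1, 1, 1, 1))
```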
It can be put to use anywhere you need a simple way to decode the output from a counter or shift register, or if you happen to need an odd function that isn't readily available with existing gates. As shown, you can't use a bipolar transistor for Q1, because the forward voltage of the diodes will prevent it from turning off properly. You could use a Darlington transistor for Q1; that has a higher saturation voltage, but should still work with TTL.
Some of the basic TTL and CMOS circuits are available with 'Tri-State' outputs. The third state is 'output disconnected' - the output is neither on nor off, but is floating (high impedance). This is used so that different systems can use a common data bus, and only the selected device provides a signal to the bus. This is known as multiplexing, where several different data streams are transmitted over a single wire or PCB trace. The devices that are transmitting and receiving data are active, and other devices that share the bus wait for their turn. This is a very common requirement as it can save a great deal of extra wiring and/ or PCB real estate compared with providing a conductor for every data stream. However, it's also slower because each section has to wait for its turn to transmit and receive data.
Another function is devices with Schmitt trigger inputs. These are used primarily for greater noise immunity, but with CMOS logic, Schmitt trigger inputs are often used to create simple R/C (resistor capacitor) oscillators and timers. They are not precise in either role, but that's often a secondary consideration. Not all logic has to have especially precise timing, and where this is necessary more complex circuitry is required.
One final 'basic' gate that can't be ignored is the 'exclusive OR' (XOR). This gate provides a 'high' output only when the two inputs are different. If both inputs are the same (both high or both low), the output is low; if they differ, the output is high. The truth table is shown below. Note that '!=' means 'NOT EQUAL' - this may vary with different programming languages.
A  B  Y
0  0  0
1  0  1
0  1  1
1  1  0
DIM A, B, Y as BOOLEAN
IF A!=B THEN Y=1 ELSE Y=0
The aim of this is to detect if two logic levels are different from each other. There's also an XNOR gate, and there used to be a gate where the user could select XOR or XNOR operation (74x135, now obsolete). With CMOS, an XOR gate is often used as an edge detector, designed to produce a brief output pulse whenever a logic level changes. It works equally for rising and falling edges.
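In the same pseudocode-to-Python spirit as the earlier gates, the XOR function (and its inverse, the XNOR) can be sketched as:

```python
def xor(a: int, b: int) -> int:
    """XOR: IF A != B THEN Y=1 ELSE Y=0 (the pseudocode above)."""
    return 1 if a != b else 0

def xnor(a: int, b: int) -> int:
    """XNOR is simply the inverse: high when the inputs match."""
    return 1 - xor(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(f"A={a} B={b} XOR={xor(a, b)} XNOR={xnor(a, b)}")
```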
A function available in CMOS is an analogue switch. The 4066 is a quad bilateral switch, and can pass analogue signals that are between the two supply rails. It's not uncommon to see them used with ±7.5V supplies so ground referenced audio signals can be turned on and off. The original version was the 4016, but that had much higher 'on' resistance, which could mean signal distortion from high impedance sources. These switches have interchangeable analogue inputs and outputs, and have a control pin for each switch (as well as the required VDD positive and VSS negative supply pins).
These have no equivalent in any other type of logic - they are unique to CMOS. While it's difficult to classify them as 'audiophile' devices in terms of signal integrity, they have been used in a great deal of audio equipment because they are easily controlled with other logic, although if operated with ±7.5V supplies, level shifting is necessary. The information in the datasheet is very comprehensive, and if this sounds like something you need then they are easy to use and ideal for audio switching. Don't expect especially low noise or distortion (relays are far better in these respects), but they are very low power (like nearly all CMOS) and are not affected by vibration, unlike relays.
There are newer analogue switches that beat the 4066 in almost every respect, but like so many modern devices, many are only available in SMD packages. Where the 4066 has been around for decades, the same can't necessarily be said for the newer devices. One thing to be wary of is the specific type number. The 74HCT4066 is designed for an operating voltage of 5V, not 15V as is normally expected. The 74HC4066 is rated for a maximum supply voltage of 10V. You always need to be careful with any 74HC prefix CMOS ICs, and verify the supply voltage before you apply power.
This is where 'complex' logic starts. Latches and flip-flops are essential building blocks in any logic system, and operate as 'short-term' memory of a past event, or to store data, divide frequencies, etc. For example, in a multiplexed system, a latch of one kind or another is necessary to 'remember' the data that was passed by a sub-system within the circuit. 'Flip-flop' is an old term, but it has managed to live on, regardless of technological progress. It's derived from the fact that the outputs can flip from one state, then flop back again with the appropriate input.
Of these, the simplest is the 'set/ reset' latch. A signal on input 'S' (Set) causes the Q output to go high, and the Q-Bar output (aka Bar-Q or NOT-Q) to go low. A signal on input 'R' (Reset) causes it to revert to the original state. Note that latches and counters generally use the term 'Q' to signify an output, and 'D' to indicate data. Set and Reset (aka Preset and Clear) pins are used to ensure that the latch is in a known state when the system is initialised (powered on or reset). The simplest 'true' latch is the D-Type, which uses a clock signal to load the level present on the 'D' input into the latch.
An example of a set/ reset latch is shown below. It's not intended to follow normal TTL input and output conventions, but is implemented with two transistors. This is a 'level triggered' circuit, which means that it responds to the level of the input, and isn't affected by the rate of change of the signal. The other triggering system is 'edge triggered', where the logic IC responds to a transition of defined polarity (rising or falling). For example, a rising edge triggered flip flop is only triggered on a rising edge (from '0' to '1') and it ignores the falling edge ('1' to '0').
Figure 5 - Set/ Reset Latch (Bistable Flip-Flop)
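The latch's behaviour (ignoring the transistor-level detail) amounts to a tiny state machine. This Python sketch assumes active-high, level-triggered Set and Reset inputs, as described above; it's a behavioural model only, not a circuit simulation:

```python
class SRLatch:
    """Behavioural model of a level-triggered set/reset latch."""

    def __init__(self) -> None:
        self.q = 0                 # start in a known (reset) state

    def step(self, s: int, r: int) -> int:
        if s and not r:
            self.q = 1             # Set: Q goes high
        elif r and not s:
            self.q = 0             # Reset: Q returns low
        # S=R=0: hold the previous state - this is the 'memory'
        # S=R=1: disallowed for a simple SR latch (undefined)
        return self.q

    @property
    def q_bar(self) -> int:
        return 1 - self.q

latch = SRLatch()
print(latch.step(1, 0), latch.step(0, 0), latch.step(0, 1))  # set, hold, reset
```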
The other latch that's been used for many years is the 'master/ slave' J-K flip flop. These are the most flexible of the latches, and are common in MSI (medium scale integration) to create counters. Counters are a special case in all logic, and it's outside the scope of this introductory article to try to cover them in any detail. However, a basic counter is shown below.
D-Type flip-flops are very common, and the value of the signal on the 'D' (data) input doesn't cause any change until the clock transitions (usually from low to high). The value of the 'D' input is then provided at the Q output, and the opposite polarity on the Q-Bar output. If 'D' is a '1', then Q is also a '1' and Q-Bar is '0', and vice versa.
Figure 6 - 4-Bit Asynchronous Counter
The input signal is applied to the Clock (Ck) input of D-Type #1, and is divided by two with each successive flip flop. At the end, the input frequency is divided by 16 (2⁴), so if a 16kHz signal is applied to the clock input, the output is a 1kHz squarewave. This is the binary sequence in a nutshell, and while it may initially seem that large numbers would require a vast number of flip flops, consider that using eight divides by 256, and using 16 divides by 65,536 (64k). A CMOS 4040 is a 12-stage ripple counter (using the same basic architecture as that shown above), and using just two in series will divide by over 16 million.
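The divide-by-two chain is easy to verify in software. The sketch below (a behavioural model, not from the article) toggles each stage on the falling edge of the previous one, which is exactly binary counting:

```python
def ripple_count(stages: int, clock_edges: int) -> int:
    """Clock a chain of divide-by-2 flip-flops and return the
    resulting binary count (Q0 is the least significant bit)."""
    q = [0] * stages
    for _ in range(clock_edges):
        for i in range(stages):
            q[i] ^= 1              # this stage toggles
            if q[i] == 1:          # rising edge: the ripple stops here
                break              # (the next stage toggles on falling edges)
    return sum(bit << i for i, bit in enumerate(q))

print(ripple_count(4, 5))          # counts 5 input edges
print(ripple_count(4, 16))         # a 4-bit counter wraps back to 0
print(16_000 // 2 ** 4)            # 16 kHz in, 1 kHz out after 4 stages
```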
Intermediate division ratios are usually provided by the use of NAND gates to decode the binary sequence to obtain the ratio needed. An example of this can be seen in the Frequency Changer (Figure 4) in the clocks section of the ESP website. To obtain a divide by five function, a CMOS divider is used. When both Q2 AND Q3 are high, the counter is supplied with data of the opposite polarity. There are many different ways these 'odd' division ratios can be created. While the trend these days is to use a PIC to handle reasonably complex division ratios, sometimes standard logic is a better alternative.
Figure 7 - 4-Bit Synchronous Counter
A synchronous counter doesn't suffer from accumulated propagation delays, as does an asynchronous (aka ripple) counter. However, there's a price to be paid, because the logic is more complex. J-K flip flops are used (aka 'master/ slave') and to get the same division, a pair of AND gates is required. Instead of the clock signal being passed along from one to the next, it drives all the flip flops simultaneously. This can be critical in some applications, where the propagation delay may cause invalid logic states (often called glitches). J-K flip flops mostly use a negative reset, so taking the reset pin to ground resets all flip flops to the same state. This isn't always necessary.
On the subject of clocks, a standard quartz crystal clock has a 1 second impulse (i.e. one second for a complete cycle). This is derived from a 15-stage binary counter, using a 32.768kHz crystal, which although it may seem like a very odd number, is designed specifically for the purpose of providing the timing for a quartz clock movement. See Clock Motors & How They Work for details of how these are designed. The waveform shown below applies to both the synchronous and asynchronous counters. The accumulated propagation delay is not visible, so the two look identical.
Figure 8 - 4-Bit Counter Waveforms (10kHz Input)
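As an aside, the quartz-clock arithmetic mentioned above is easy to check: 32.768kHz is exactly 2¹⁵ Hz, so fifteen divide-by-two stages leave a 1Hz output.

```python
crystal_hz = 32_768               # standard watch crystal frequency
stages = 15                       # 15-stage binary divider
print(crystal_hz / 2 ** stages)   # 1.0 - one pulse per second
```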
Another 'interesting' use for binary counters is shown in the Digital White/ Pink Noise Generator project. This uses cheap ICs (4094 CMOS 8-bit shift registers) with feedback (derived from an XOR gate) to provide a pseudo-random binary sequence that resembles white noise. While this can be done with a PIC, that provides only a lesson in programming, and does little for one's understanding of the circuitry.
This has (sneakily) introduced another fundamental logic building block - the shift register. These look a bit like counters, but they pass data from the input to the output in the same pattern as it was received. An old term for them was a 'first-in, first-out' (FIFO) register, and they are commonly used as a buffer, often to 'clean up' a data stream that was received with inaccurate timing. The data is loaded into the buffer, and read out with the exact timing expected by the following circuitry. They come in various configurations, and can be operated as serial in, serial out; serial in, parallel out; parallel in, parallel out or parallel in, serial out. They are also used to shift a digital byte or word left or right by 'n' digits.
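A serial-in, serial-out shift register behaves like a fixed-length FIFO: each clock shifts one new bit in and the oldest bit out. A Python sketch (illustrative only, not modelling any particular IC):

```python
from collections import deque

class ShiftRegister:
    """Serial-in, serial-out shift register of a fixed length."""

    def __init__(self, length: int) -> None:
        self.bits = deque([0] * length, maxlen=length)

    def clock(self, data_in: int) -> int:
        data_out = self.bits[-1]          # the oldest bit shifts out...
        self.bits.appendleft(data_in)     # ...as the new bit shifts in
        return data_out

sr = ShiftRegister(4)
pattern = [1, 0, 1, 1, 0, 0, 0, 0]
print([sr.clock(b) for b in pattern])     # input reappears, delayed 4 clocks
```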
A variation on the theme is called a Johnson or 'twisted ring' counter. These are something of a special case, as the output from each flip flop is the same frequency (determined by the length of the counter). A 5-stage counter will divide by ten, but the pattern of the outputs becomes the defining factor. In the article Sinewave Oscillators - Characteristics, Topologies and Examples (section 8) there's a Johnson/ twisted ring counter shown, and its outputs are summed to produce a crude sinewave. This can be filtered (and/ or more stages used) to improve the distortion characteristics.
Figure 9 - 4-Stage Johnson Counter
The term 'twisted ring' comes from the arrangement of the feedback. The Q-Bar output of the final flip flop is fed back to the 'D' input of the first flip flop, so the bit pattern is shifted right one step for each clock cycle. The output is shown below. It's necessary to apply a reset to ensure that the counter starts with a defined state. D-Type flip flops are edge triggered, so only the '0' to '1' transition advances the cycle. There is also a ring counter (without the twist), and that requires that the first flip flop is set or it will be undefined, and may output zero forever. More than one flip flop can be set at power on, which provides specific sequences.
Figure 10 - 4-Stage Johnson Counter Waveforms (10kHz Input)
The truth table for a Johnson counter is shown below. At the 8th clock cycle it's back to the beginning, and the sequence continues as long as the clock signal is active, and the reset line remains disabled. To get a predictable output sequence, a Johnson counter requires a reset to each flip flop upon power-on. All ring counters are shift registers, with the data shifted from one to the next in order. Shift register ICs are also available with 'shift left' and 'shift right' control inputs.
4-Stage Johnson Counter Truth Table
CK#  Q0  Q1  Q2  Q3
 0    0   0   0   0
 1    1   0   0   0
 2    1   1   0   0
 3    1   1   1   0
 4    1   1   1   1
 5    0   1   1   1
 6    0   0   1   1
 7    0   0   0   1
 8    0   0   0   0
You can see that the waveform from each output is the same, but shifted in time from one to the next. The delay between each transition is one clock cycle. Since the clock used was 10kHz, each waveform is delayed by 100µs referred to the previous output. The positive signal lasts for 400µs, with a full cycle completing in 800µs (1.25kHz).
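The twisted-ring feedback is simple to simulate, and doing so reproduces the sequence in the truth table above. A Python sketch (assuming 4 stages, reset to all zeros):

```python
def johnson_sequence(stages: int, cycles: int):
    """Shift register with the final Q-Bar fed back to the first D input."""
    state = [0] * stages               # reset: all flip-flops cleared
    seq = [tuple(state)]
    for _ in range(cycles):
        feedback = 1 - state[-1]       # Q-Bar of the last stage
        state = [feedback] + state[:-1]
        seq.append(tuple(state))
    return seq

for ck, q in enumerate(johnson_sequence(4, 8)):
    print(ck, q)                       # matches the truth table row by row
```

Note that an n-stage Johnson counter cycles through 2n distinct states, which is why four stages divide the clock by eight.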
In many cases, a logic gate is not as it seems. Designers have long used the idea of inverted logic to obtain the results required using the smallest number of gates. If a NAND gate is used inverted (so inputs are normally high), the result is an inverted logic NOR gate. If In1 or In2 is low, the output is high. Likewise, an inverted NOR gate becomes a NAND gate, where both inputs must be low before the output can go high. This is generally covered using Boolean algebra, which can be used to determine the gates needed for the required truth table.
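This is De Morgan's theorem at work, and it's easy to verify exhaustively. A short Python check (my notation, using 0/1 for logic levels):

```python
def not_(a: int) -> int: return 1 - a
def and_(a: int, b: int) -> int: return a & b
def or_(a: int, b: int) -> int: return a | b

# De Morgan: NOT(a AND b) == (NOT a) OR (NOT b), and
#            NOT(a OR b)  == (NOT a) AND (NOT b)
# so a NAND gate with inverted inputs computes OR, and an
# inverted-input NOR computes AND - the basis of 'negative logic' design.
for a in (0, 1):
    for b in (0, 1):
        assert not_(and_(a, b)) == or_(not_(a), not_(b))
        assert not_(or_(a, b)) == and_(not_(a), not_(b))
print("De Morgan's identities hold for all input combinations")
```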
I don't propose to say too much on this, as it's often used without the designer even realising that negative logic has been used. It can save many gates in a complex circuit, but these are becoming less and less common with the wide usage of microcontrollers (such as PICs and other small processors). Things that used to be done in logic are now easily programmed, with a myriad of devices to choose from. 'Hard' logic is almost a thing of the past, but a surprising number of simple tasks still use basic logic ICs. An equally surprising number of simple tasks that could be done easily with a few gates are now consigned to a PIC, because they are so cheap. If the logic doesn't work as planned, you simply change the code until it does what you want.
Logic devices as we know them have changed radically since they were first used. Having gone from mechanical systems (some of which lasted until the 1970s or so), valves, discrete transistors and then integrated circuits, only a few of the originally available logic ICs are still available. They remain in use for the simple reason that they work, and are extremely reliable. System complexity has increased exponentially since the early days, and Moore's Law (named after Intel co-founder Gordon Moore) states that the number of transistors in ICs doubles roughly every two years, with a corresponding fall in cost. While there are rumblings that Moore's Law is 'dead', it seemingly refuses to lie down.
Most people carry far more computing power in their pocket than was available for the first moon landing craft, and a modern PC (of any 'flavour') is vastly more powerful than machines that used to occupy entire floors of large buildings. Data centres have hundreds or thousands of high-end machines, with storage capacities that are mind boggling. When I first started using computers, the standard machine was 8-bit, had 64k of RAM, typically a Z80 or perhaps 6800 series microprocessor, and booted from a floppy disk (remember them?). A 10MB hard disk was (literally) the size of a domestic washing machine, and if you needed a lot of storage you had to use tape drives that could take several minutes to find the data you were searching for.
Early minicomputers such as the somewhat revered PDP-11 made by DEC (Digital Equipment Corporation) started life in 1970, and quickly became the machine of choice for engineering and science. Meanwhile machines like the Datapoint 2200 (launched in 1970, and the first desktop machine that could (almost) be classified as a PC) captured a great deal of the business sector. The CPU was built entirely from standard logic ICs, and its instruction set (machine code) was the basis of the first microprocessor chip, the Intel 8008. I have rather fond memories of various Datapoint machines, as I worked for the company for about eight years.
Elliott Sound Products - Mains Power Quality
We tend not to think too much about the power that we use for daily activities, and this includes sound systems. I doubt that anyone would be heard to complain that their morning coffee tasted odd because of mains interference or distortion, but there is an entire industry that will try to convince you that without their mains filter, sinewave reconstruction power supply, isolation transformer and/or $5,000 power cables your audio and video systems must be horrible (and your coffee will taste like cat pee as well!).
Mains distortion is commonly cited as something that will cause the soundstage to be contracted/ compacted/ eliminated, and that the distortion will cause a loss of clarity, soften dynamics and mangle the bass 'slam' from your subwoofer. Naturally, we can expect that micro-dynamics will be damaged beyond repair, and the 'air' between instruments will be cloudy and grey with a 35.7% chance of reproductional ineptitude.
Now, if you happen to be in (or near) a commercial or industrial area, there may indeed be various noises that pass down the mains distribution system and cause your system to generate clicks, pops, farts and other noises. If this is the case, you really might need some kind of filter, but if you never hear any of these things (or they are sufficiently infrequent that they cause you no pain) then your power is perfectly fine just as it is.
Those who make and sell this equipment are often guilty of claims that are at best specious, and at worst downright lies. There is usually a grain of truth to the advantages they describe, but often it's what they leave out that's the most important. Don't expect them to tell you that their expensive kit will probably make no difference - expect instead to be told that the mains quality determines how good (or otherwise) your system sounds. In reality it usually makes no difference whatsoever.
One 'interesting' claim I saw ... "Everything we see and hear through our system is really the power from our home's wall socket manipulated to make music through our speakers by our electronics. The quality of that power is critical to the success of any high-end system." While this is superficially true, it ignores the details of the "manipulation" that happens in the electronics. There's nothing subtle about it - the power supply uses brute force to convert the incoming AC into DC, and if the conversion is good enough to remove ripple and noise, the DC will be exactly the same whether it comes from a generator, wall outlet or a very expensive AC power supply. The voltage needs to be the same, and the frequency needs to be close to the design value, plus or minus a few Hertz.
Mostly, the statement is marketing BS, and has nothing to do with reality.
Many of these 'mains reconstruction' devices are basically a high-power amplifier that outputs AC at the designated frequency and voltage, having first rectified and filtered the incoming AC from the mains outlet. The output has low distortion and is regulated, and the claimed benefits cover just about every area of reproduction. According to the makers, you can't afford not to have at least one of these wonders, even (apparently) if your system sounds just fine already.
Strangely, the incoming mains quality doesn't seem to affect their power amplifier, even though it will affect yours - after all, the 'regenerator' is powered from the mains.
You need to be aware that no mains reconstruction amplifier, filter or mains lead will have much effect with many of the common noise sources. If you hear a noise when your fridge switches on or off or when the vacuum cleaner is used, then the noise you hear is probably airborne (radio frequency interference aka RFI) and is not carried by the mains. None of the devices described here have any effect on airborne noise, which can only be fixed by suppressing the noise at the source. That means adding a filter to the device that causes the noise, rather than trying to get rid of it with expensive devices to power your hi-fi system.
There is a new breed of scumbag that's emerged from the primordial slime-pit. They will hold a meter close to your power leads or wall sockets and tell you how high the reading is and how it will ruin your health in ways that you never knew existed. Those brandishing the meters never actually tell you what it's measuring, and don't expect peer reviewed medical evidence to back up the claims. It's quite obvious that the meters detect frequencies above a few hundred Hz, but there is never the slightest word about how they are calibrated or the units being measured.
For all the good they do, they might as well tell you that your mains leads have 300 litres of horse feathers per furlong. Any meter reading is utterly useless without knowing the units, the accepted safe (or 'safe' if you prefer) exposure limits, and at least some idea of what is being measured and why. I've measured the mains at home, and I'm now down to only 27 litres of horse feathers per furlong, so that must be an improvement.
The mains can genuinely have significant high frequency noise along with the (more or less) sinewave that provides the power for your appliances. Some of this noise might be audible through your system, and if so (and if it bothers you) you will need to do something to (try to) fix the problem. Mostly, it's there whether you know about it or not, and it usually causes no noises, ill health or anything else - at least until someone measures it with a silly meter and gives you a scare.
If we look at it dispassionately, anything that affects the waveform of the AC mains can be classified as 'dirty electricity', since noise is simply extra signals added to the mains at various (and often random) frequencies. Distortion is caused when the mains has harmonic frequencies of the base 50/60Hz waveform. Most distortion will be odd harmonics (e.g. 150, 250, 350Hz etc. for 50Hz mains). Even harmonics mean that the waveform is asymmetrical and contains a DC component. This can and does happen, and there is an article on the ESP website that explains how it happens and how to remove any DC offset - see Blocking Mains DC Offset.
Part and parcel of the mains these days is some degree of distortion. Connected to the grid is a vast number of switching and transformer based power supplies, and these only draw current at the peak of the AC waveform. With enough of them, it's inevitable that there will be distortion, and this typically shows up as a sinewave with the peaks flattened, as shown below. At this stage, other noises on the mains are not being considered - only distortion.
Figure 1 - Typical 'Flat-Topped' Mains Waveform
The question is ... does it matter? Quite obviously, if mains waveform distortion made a difference to how an amplifier or preamplifier sounds, it should be eliminated. If you enjoy listening to a pure tone from the mains at 50 or 60Hz, then 4-5% distortion would be a serious problem for you. We need to examine what happens in a power supply to allow us to decide whether distortion on the mains is a problem or not.
The vast majority of power supplies used for home audio equipment (regardless of price) use a traditional 'linear' transformer based power supply. Almost without exception, these draw current at the peak of the mains waveform, and help to create the waveform seen in Figure 1. By implication and in reality, that means that the voltage that appears across the transformer primary will have exactly the same distortion components as shown above, even when presented with a pure sinewave!
Yes, you did read that correctly. Even if you have paid $thousands for a pure sinewave mains 'regenerator' or similar, the voltage across the secondary winding (after the winding resistance has been taken into account) will look just like that shown above. This happens because the mains series resistance and that of the transformer allow the voltage to collapse when current is drawn. Since current is drawn only at the waveform peaks, the peaks of the sinewave are truncated.
The only exception is if your equipment uses a switchmode power supply with active power factor correction (PFC), but these are uncommon in hi-fi systems because they add considerable cost and complexity and aren't warranted (or legally required - yet) for normal home use. In many cases, the mere mention of a switchmode power supply is enough to send dedicated audiophiles running in the opposite direction, because many feel that a high frequency switching power supply can never sound any good. Never mind that a switchmode supply with PFC has extremely good regulation and the DC output is completely free of mains frequency ripple. However, it is true that there will be some high frequency components superimposed on the DC, and these can interfere with the audio unless care is taken during design.
The slightly distorted waveform actually results in a small improvement, with lower ripple and noise than if the rectifier and filter capacitors are fed with a pure sinewave. This happens because the filter capacitors have a tiny bit longer to charge while the mains is at its peak. To demonstrate, the FFT spectrum of the ripple current waveform is shown below. The DC voltage is nominally 35V, and the circuit is shown in Figure 3 with the ripple voltage shown in blue (but not to scale).
Figure 2 - Ripple Voltage Spectrum Of Sine And Flat-Top Mains Input
In reality, the above is a bit silly, because it's looking at signals down to 100nV which can only ever be resolved using a simulator. Anything below a few millivolts is not an issue, and measurement uncertainty makes it almost impossible to measure accurately. However, the trend is clear, and the ever-so-slightly lower noise with the flat-top waveform is obvious. To make it a little clearer, I chose not to include the transformer's series resistance, so the rectifier is supplied directly from the voltage waveform. With the typical transformer and mains resistance included there is so little between them that it's of no real consequence.
Figure 3 - Test Circuit For Sine And Flat-Top Mains Input
As shown in Figure 3, the simulations did include a token 100mΩ of series resistance. The ripple voltage for the two power supplies simulated was 249mV RMS with a pure sinewave input, and 230mV RMS with a flat-top waveform. DC output voltage and current were the same for both, with 32.4V DC at a current of 980mA. I used a 10,000µF filter cap with 10mΩ ESR, and the peak input current with a sinewave was 9.15A, reducing to 6.1A with the flat-topped waveform. This shows another benefit of not eliminating the distortion from the mains - it reduces the peak (and RMS) charging current, although as noted earlier the real differences are smaller than those simulated.
So, ensuring that you have a perfect mains sinewave makes little or no difference, but the pure sinewave is actually slightly worse than the normally distorted mains in all significant respects. From this we can conclude that mains distortion is not a problem, and will not result in more ripple or noise. In fact, both ripple and noise are reduced very slightly if the mains is distorted, as is the current drawn from the mains (peak and RMS). I can guarantee that you didn't see that coming, and nor did I until I ran some representative simulations.
The impedance of the mains is normally quite low, and as a direct result, load regulation is at least fair. At a power outlet at home I measured the impedance at 0.8 ohm. This means that a 2,300W load (10A, the maximum for a standard outlet in Australia) will cause a voltage drop of 8V RMS, and represents a regulation of about 3.5%. While this isn't wonderful, it's generally considered perfectly acceptable and never causes any problems with sensibly designed equipment. However, that's by no means the end of the story on regulation.
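The regulation figure follows directly from Ohm's law, using the measured outlet impedance. A quick check of the arithmetic:

```python
v_nominal = 230.0      # volts RMS, nominal Australian mains
z_mains = 0.8          # ohms, measured at the power outlet
i_load = 10.0          # amps - a 2,300 W load at 230 V

v_drop = i_load * z_mains                  # volts lost across the mains impedance
regulation = 100.0 * v_drop / v_nominal    # as a percentage
print(f"{v_drop:.0f} V drop, {regulation:.1f}% regulation")
```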
The mains voltage can be expected to vary by up to ±10% from the nominal supply voltage (see Note below). So, if the mains is nominally 230V, expect it to vary between 207V and 253V. 120V mains can vary between 108V and 132V. The limits are rarely reached, but your electricity supplier cannot guarantee that you'll always get the exact voltage specified. People who design equipment know all of this, and will nearly always ensure that the equipment they make will function normally across the full voltage range.
Note: the regulations vary from one country to another, so you might find that your supplier 'guarantees' that the voltage may vary by perhaps +10% or -6% (or other similar numbers), so the above may be somewhat pessimistic. There will be exclusions though, and 'brown-out' conditions can happen at any time due to network faults. A brown-out is a condition where the voltage falls (well) below the nominal for an extended period. The voltage may fall by 20% or more.
There will also be losses within your house wiring, but for typical home hi-fi systems these can generally be ignored - especially if you have a dedicated power circuit for your audio-visual equipment (which is a very common approach for 'high-end' systems). If you do use an existing circuit, use one that's not connected to the fridge or anything else that may create electrical interference.
Preamplifiers almost always use regulated supply voltages, and the regulator ICs will usually maintain the voltage within a few millivolts of the design voltage over the full voltage range. Power amplifiers rarely use regulated supplies because they aren't necessary and just add cost and heat to the product (heat because regulators are not very efficient and need substantial heatsinks). Even over the full voltage range (e.g. 207 - 253V RMS), the power difference is only 1.7dB, and that assumes that the amplifier's power supply has perfect regulation!
Needless to say, this isn't the case, but the final error we get with this simplistic approach is quite small, so the figure of 1.7dB is quite reasonable. If your system is operated so close to the limit during critical listening sessions that 1.7dB will be the difference between clean and clipping, then it's well past time that you upgrade to a more powerful amplifier. Remember that the figure of 1.7dB is the total variation, from the full ±10% mains voltage change. A more realistic ±5% variation means that the voltage will change from 219V to 241.5V, a change of only 0.85dB. This is negligible.
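Those dB figures follow directly from the assumption that the amplifier's maximum output voltage tracks the mains voltage, so the level change is 20·log10(Vhigh/Vlow):

```python
import math

# Level change (dB) between two supply extremes, assuming the amplifier's
# available output voltage scales directly with the mains voltage.
def level_change_db(v_hi, v_lo):
    return 20.0 * math.log10(v_hi / v_lo)

print(round(level_change_db(253.0, 207.0), 2))    # full ±10% range -> 1.74 dB
print(round(level_change_db(241.5, 219.0), 2))    # ±5% range -> 0.85 dB
```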
Not that there is anything wrong with a regulated mains supply of course. However, it has to be considered on the basis of cost vs. benefit, and for most people the cost will be far too high for the very small benefit you receive. Remember that any mains regulator device will have losses itself, so not only is the device itself expensive, but it may be very costly to run and may also add a significant heat load to your listening room. In hot weather that means air-conditioning systems will be working harder too, leading to comparatively high operating cost for a generally completely inaudible end result. Of course the heat isn't wasted during colder months, but a mains reconstruction amplifier is a very expensive room heater!
Where the published 'benefits' step over the line is when they try to convince you that a 'BrandX' mains reconstituting unit (or other fancy device) will "increase the audible detail, bass 'slam', micro and macro dynamics*, etc.". This is clearly nonsense, and is right up there with $5,000 mains cables in terms of fraudulent claims. The simple reality is that regulated mains will do none of these things, because your amplifier already has a power supply that was determined to be somewhere between 'perfectly adequate' and 'way over the top', depending on the manufacturer. Adding an external regulator is simply a waste of money if you assume that any of the alleged improvements will make your system sound better.
* The terms 'micro' and 'macro' dynamics are pretty much the exclusive domain of hi-fi writers, and the terms have no significant meaning. The resolution (micro-dynamics if you like) of a hi-fi system is not affected by mains distortion because it runs from DC! The mains distortion is not magically transferred to the DC. Bass performance mainly depends on the amp and its power supply, not on the mains, which will always have better regulation than the power supply.
As for using an external regulator and/or mains reconstruction amplifier (and that's what they are - an amplifier) for TV sets and the like, bear in mind that the vast majority of TV and other video gear use switchmode power supplies, which don't give a rodent's rectum about the incoming mains waveform. As long as the peak voltage is high enough for them to operate, a switchmode supply doesn't care if the input waveform is a sinewave or a square wave.
None of this means that a regulated mains supply isn't desirable. In an ideal world, the power to our houses would be the exact voltage intended, but this will (and can) never be the case in the real world. However, the vast majority of equipment won't care if the voltage changes within normal limits, and the result will normally be completely inaudible. Remember that the equipment manufacturer has already designed the power supply to accommodate normal variations and to minimise noise. A stabilised supply may be a good investment if you normally experience large variations, or if the voltage regularly rises to more than 10% above the nominal value.
The general principles of voltage stabilisers are described below. There are many different types, with many having fairly large steps (perhaps 5V RMS or so, sometimes more). These are probably alright for the odd industrial process, but are best avoided for hi-fi. For the intended purpose, some of the commercial units may be acceptable, but you need to verify that your equipment and the stabiliser will play nicely together. As noted above, there is rarely any need.
With most equipment using a mains transformer, there is already a pretty good filter - the transformer itself. Because of its inductance (primary and leakage), high frequency noise is attenuated automatically, and common mode noise (applied equally to both active and neutral) is largely rejected. Unfortunately, most transformers don't have an electrostatic shield between primary and secondary. When fitted, this will afford excellent protection against noise coupled between the windings via the inter-winding capacitance. This notwithstanding, not very much noise can get past a 10,000uF filter capacitor!
Despite glowing recommendations from deluded users, don't expect any noise filter to make a substantial difference to your system (positive or negative). Unless you have audible noise that is determined to be due to noise on the mains, a filter will not make the system sound 'better'. Internally, your amplifier, preamp, etc. converts the AC to DC, and DC has no 'sound' of its own. The worst that can happen is that a certain amount of noise might contaminate the DC so it becomes noisy. This is actually surprisingly uncommon.
Noise on the mains covers a very wide range of possibilities. Audio frequency noise comes in many forms and has many causes. Some are even deliberate, such as the use of 'ripple control', where a medium-frequency signal (typically from a few hundred Hz up to 2kHz or so) is superimposed on the mains to turn off-peak and other systems on and off remotely. In addition, there are many other causes, ranging from momentary shorts (small animals and tree branches causing wires to touch), lightning, and a myriad of industrial processes.
Because transmission wires are often very long, they also make good antennas and pick up radio frequency signals. In reality, not very much of this ever gets through to your equipment, and it's not usually a problem. Clicks, pops and other noises can be created by switches, small 'universal' motors (as used in vacuum cleaners for example) and refrigerators, the latter being a common source of transient noise, especially for older models. There are countless others of course, and some will be troublesome, others not.
You don't need to regenerate the mains to get rid of noise. There are many filters that will help to clean up the mains, but some noises will defeat all your attempts to get rid of them. This may mean that either the filter doesn't live up to expectations (so return it and get a refund), or in some isolated cases the noise might be coming in via the protective earth lead. Airborne noise from nearby switching (especially motors and inductive loads) will not be reduced by a mains filter.
So-called 'surges' include large spikes created by lightning strikes and much smaller short duration spikes from inductive loads as they disconnect from the mains. It's also possible to get mains voltages that are much higher than the normal range would indicate, and these are invariably the result of a fault condition within the mains distribution system. Very high voltages (greater than nominal voltage +10%) can also be fairly common in rural areas, and may warrant a stabiliser or regulator in some cases.
Lightning is the worst thing that can happen. If it occurs nearby, it will usually cause a great deal of damage. In severe cases, nothing will survive, including the protective devices intended to prevent damage to equipment. The energy in a lightning strike is truly scary. There's an old saying that lightning never strikes the same place twice (not actually true), and I've always maintained that's because the same place isn't there any more.
Lightning notwithstanding, there is sometimes the need for a mains filter and it should have inbuilt protection. It won't save your gear from a catastrophic event (nothing will), but it will eliminate most noise and provide a measure of safety to ensure that most spikes and other disturbances will be absorbed by the filter board rather than your equipment. Having said that, I've run my system for close to 30 years in my current location without any 'protection' other than that included in the equipment itself.
Figure 4 - Typical Mains Filter
An example of a suitable filter is shown above. It includes MOVs (Metal Oxide Varistors) to help protect against transients, plus a common-mode choke and two additional chokes (inductors) to filter noise. All capacitors marked 'CX' are X2-Class, 275V AC types, and those marked 'CY' are Y2-Class electrically safe (and certified as such) types. Filters that include significant capacitance to earth are not legal in most countries, and may cause electrical safety switches (aka RCD, ELCB, GFI, etc.) to trip because of earth current. The inductors will generally be designed to have a relatively low Q ('quality' factor) to minimise the risk of sharp resonances through the circuit.
The first inductor is a common-mode type. These offer minimal impedance to differential signals (the mains itself), but high impedance to common mode noise. L2 and L3 are normal filter chokes and these provide protection against differential noise on the mains. In extreme cases, fitting Y-Class caps in parallel with the X-Class types will help to reduce RF noise because they are ceramic types and have very good performance at high frequencies.
The 1Meg resistor may appear to have no purpose, but it's there to ensure that the X-Class capacitors can discharge when the unit is disconnected. Without it, the caps can retain a significant charge for many hours, and they represent a potential shock hazard. The resistor will discharge them to a safe voltage in under 1 second.
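The discharge time follows the usual RC decay, t = R·C·ln(Vstart/Vend). The sketch below is illustrative only: the 220nF total X-capacitance and the 42V 'safe' threshold are assumed values, not figures from the circuit description:

```python
import math

# Time for a bleeder resistor to discharge X-class caps to a 'safe' level,
# using the simple RC decay t = R * C * ln(Vstart / Vend).
# Assumed for illustration: 220 nF total X-capacitance, 325 V peak (230 V
# mains), and a 42 V 'safe' threshold.
def discharge_time(r_bleed, c_total, v_start, v_safe):
    return r_bleed * c_total * math.log(v_start / v_safe)

t = discharge_time(1e6, 220e-9, 325.0, 42.0)
print(round(t, 2))  # ~0.45 s - comfortably 'under 1 second'
```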
Figure 5 - Internal Photo Of Mains Filter/Power Board
The power distribution board in Figure 5 is fairly typical of these products. There is some fairly basic filtering - nothing as elaborate as shown above. There are quite a number of oversized MOVs (which I quite like), and two thermal fuses are included in case the MOVs get hot due to excessive dissipation. In common with most similar units, it has protected pass-through connections for a phone line and TV antenna, and of course it comes with all the outrageous claims and guarantees that are so common with these products.
You can get some additional basic filtering by using a split ferrite block in a plastic housing, clamped around the mains lead, similar to that shown below. These can be surprisingly effective, and are often found moulded onto the leads for LCD computer monitors (mains and/or video leads). When used, it's most often because the product would not pass EMI tests (e.g. CE, C-Tick, VDE, UL, etc.) without it. This alone tells you that they are effective - for keeping equipment noise out of the connecting cable, for preventing external noise from getting into the gear itself, or both.
Remember that many noises are airborne, and adding a mains filter will have no effect. Airborne noise (which is primarily broad-band RF) can enter the system via a multiplicity of methods, including speaker leads, interconnects (especially non-shielded 'audiophile' types), or even via the mains earth. Sometimes, simply passing speaker leads through a clamp-on ferrite block can help, but elimination can be very difficult and is often not intuitive.
These split ferrite cores can be particularly effective, and where noise or RF is a problem they should be fitted to all speaker leads, as close to the amplifier as possible. In severe cases, you might need to include them on signal leads as well, and you may need to use them at both ends if the RF still manages to get through. They are not always a complete cure of course, but they are cheap and generally work very well. They are usually available in a variety of sizes.
The mains frequency is remarkably accurate in most countries, and will never vary by enough to cause any problems. Short term variations are extremely small, as they must be to ensure that the distribution grid doesn't fail completely. Your home might be supplied from several power stations at once, and the outputs from each can't even drift by a few degrees in phase, let alone by a few Hertz. It's outside the scope of this article to discuss this in detail, but feel free to look it up, or even measure the frequency with an accurate frequency counter to verify it for yourself.
It's very unusual for any mains powered device to care about the frequency - provided it is never lower than the minimum design value for anything that uses a transformer based power supply. For example, transformers designed for 60Hz may overheat and fail if used at 50Hz. See Voltage & Frequency for more info on this topic.
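The reason a 60Hz transformer struggles at 50Hz is that core flux is proportional to volts-per-hertz: the same voltage at a lower frequency drives the core harder. A quick illustrative sketch:

```python
# Core flux scales with volts per hertz (V/f). A ratio above 1.0 means the
# core is driven harder than its design point, risking saturation and
# overheating. Illustrative only - real margins vary between transformers.
def relative_flux(v_applied, f_applied, v_design, f_design):
    return (v_applied / f_applied) / (v_design / f_design)

# A 60 Hz design run at 50 Hz on the same voltage sees 20% more flux
print(round(relative_flux(120, 50, 120, 60), 2))  # 1.2

# The reverse (50 Hz design at 60 Hz) under-drives the core - no problem
print(round(relative_flux(120, 60, 120, 50), 2))  # 0.83
```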
However, the reverse is not true, so equipment designed for 50Hz will work just fine at 60Hz, provided that the voltage can be set to suit the 120V mains. This might be via a voltage selector switch, internal jumper or perhaps an external transformer. Any AC source that claims to make the mains frequency 'more accurate' is a scam, because it's already perfectly acceptable. In addition, even if it were to drift by (say) 0.5Hz, your equipment will still function exactly the same - it makes no difference.
There are several approaches to providing a stabilised mains voltage. Some may appear quite strange, such as a motorised Variac™ (variable voltage auto-transformer) that uses a servo system to physically rotate the Variac's moving contact to adjust the voltage. These can be fairly slow, but can provide almost perfect stability and regulation over the long term. This approach is uncommon, partly because it's not well known in DIY or hi-fi circles, and partly because few people need that degree of stability. See Transformers - The Variac for more information on Variacs in general. They are also expensive.
Figure 6 - Servo Controlled Variac Voltage Regulator
If the incoming mains is low (less than 230V), the Variac moving contact will be above the centre tap, and the voltage is boosted. If the mains is higher than it should be, the wiper will be below the centre tap, reversing the phase to the buck/boost transformer and reducing the voltage to the preset value. A very wide control range is available, but very fast correction isn't possible because of the motor drive. Efficiency is very good, and there's no waveform distortion.
There is another system that's very similar to a motorised Variac, and that uses tap-changing on a transformer that is designed to have a number of taps that are connected either with relays (mechanical or solid state), or again using a motor to operate a sliding contact that changes the tap in use as needed. Voltage taps may be as coarse as 5V steps or finer than 1V, depending on the application and design. The step response of these can be a problem with some equipment, as the voltage may fall to (say) 225V and will suddenly be increased to 230V in one step. Likewise, the voltage may rise to 235V before it's reduced back to 230V. It's unlikely that anyone would consider that to be a positive change in a hi-fi system. Smaller steps mean far more relays, although it's theoretically possible to use a weighted sequence such as 1–2–4–8–16 so that the control has a good range without excessive switching devices.
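The binary-weighted tap idea can be sketched in a few lines: five relays with 1-2-4-8-16 volt windings give 1V resolution over a 0-31V correction range. The function below is a hypothetical illustration of the selection logic, not a description of any particular product:

```python
# Binary-weighted tap selection: choose which taps (weights in volts) to
# engage so their sum equals the requested integer correction voltage.
def select_taps(correction, weights=(1, 2, 4, 8, 16)):
    engaged = []
    for w in reversed(weights):   # try the largest winding first
        if correction >= w:
            engaged.append(w)
            correction -= w
    return engaged

print(select_taps(11))  # [8, 2, 1] - 11 V of correction from only 5 relays
```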
Another technique is called a ferro-resonant transformer. These literally use a mains frequency resonant circuit to saturate the core to a greater or lesser degree and provide very stable voltage and an almost complete rejection of noise. There's a fair bit of information on the Net, and some sites even manage to mention (often only in passing) that the output is commonly a squarewave (more-or-less), and it's not a good idea to use one to supply other transformers because the secondary voltage will be considerably lower than expected or the core may saturate. Sinewave ferro-resonant transformers are also available.
It's also possible to use a magnetic amplifier. Mag-amps (as they are often known) are a rather ancient technology, but they are still used in quite a few areas. I've seen several references to them being used for voltage stabilisation, and they show excellent stability, reasonably fast response (within a few cycles) and very high efficiency. While there is some electronic circuitry involved, it mostly operates at fairly low power levels and should be very reliable. It's probable that a mag-amp based stabiliser will beat almost any other technology for efficiency and low losses in general, but it's inevitable that some distortion will be created in the process. I don't know if that would create a problem or not because my experience with mag-amps is limited (although it's on the agenda to do some tests).
Then there are the electronic versions. These can use a rectifier and filter to produce DC, then a high-power amplifier to reconstruct the AC sinewave. Efficiency is generally rather low, and in some cases a smaller power amp will be used that is designed to only add (or subtract) the amount of AC needed to maintain the preset output voltage. This type of circuit can be extremely accurate and fast acting, and may also reduce mains distortion. However, as noted above, this is not necessarily worthwhile.
The general scheme is shown in highly simplified form below. To reduce power dissipation in the output transistors, the AC is rectified but not smoothed. While this does help, dissipation will be at the maximum when the input voltage is ~24V higher than the desired output voltage (buck mode), when the output devices are carrying the maximum possible current with close to the full rectified AC voltage across them. Dissipation is greatly reduced in boost mode, where the output voltage is higher than the input.
Figure 7 - Simplified Circuit For Electronic Voltage Regulator
The amplifier might operate from around 25V RMS via TR1, and will be able to adjust the mains voltage over the range of at least ±20V using a 1:1 transformer for TR2. If the mains voltage is low, the output from TR2 is simply added to the mains to increase the output voltage. To reduce the voltage when the mains is high, the amplifier inverts the output waveform so it's subtracted from the mains voltage. Worst case dissipation in the amplifier occurs when the incoming mains is either equal to the preset regulated voltage or above it, where the circuit has to reduce the voltage. This arrangement will work for an output of up to 2kVA or so with readily available power MOSFETs or bipolar transistors.
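The add/subtract behaviour described above amounts to clamping the correction to the ±20V that TR2 can deliver. A minimal sketch of the control law (the function name and target voltage are illustrative, not from any particular design):

```python
# Buck/boost correction: TR2's output is added to (or subtracted from) the
# incoming mains, limited to the +/-20 V correction range described above.
def corrected_output(v_mains, v_target=230.0, v_max_corr=20.0):
    correction = max(-v_max_corr, min(v_max_corr, v_target - v_mains))
    return v_mains + correction

print(corrected_output(215.0))  # 230.0 - boosted by 15 V
print(corrected_output(255.0))  # 235.0 - buck limited to the -20 V maximum
```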
The same thing is done by several manufacturers, but using a Class-D amplifier which improves efficiency (up to 96% claimed), but at the expense of complexity. As shown, maximum efficiency will be around 80%, but the normal operating efficiency will be somewhat less depending on the incoming mains voltage. The worst case average dissipation in the output devices can be as high as 500W (simulated with a 1.8kVA output), which is a lot of heat to dispose of.
Some people have added a mains balancing transformer, and again users will maintain that their system sounds 'better' as a result. The idea behind this is that the mains is inherently unbalanced, with the neutral conductor connected to protective earth, often at the switchboard of each dwelling serviced by a distribution transformer. For unknown reasons, many people seem to think (or even think they know) that balanced connections sound 'better'. This is nonsense, unless using a balanced connection also results in less external interference that produces audible noise.
There may be situations where the use of an isolating transformer set up to provide a floating or balanced mains supply may help to reduce noise. There is also the probability that the transformer will degrade the regulation you expect from the mains. If some of your equipment uses a switchmode power supply, the overall noise and distortion experienced by other equipment connected to the same supply may increase. Isolating/balancing transformers aren't a magic bullet that will make your system immune from noise. Any substantial noise reduction is likely to be the result of additional filtering that may be included with the transformer (or indeed by the transformer itself), rather than the result of balancing the mains wiring.
Ultimately, the decision to use a voltage stabiliser, balancing transformer or just a filter is up to the individual. However, claims that using an all-singing all-dancing mains reconstruction device will make your system sound better are either gross exaggerations or completely false. A sinewave input does not make your audio sound better, but anything that reduces or minimises audible mains noise is a worthwhile improvement.
There are many myths around the mains - especially including those that involve very expensive mains cables. Most hi-fi equipment has significant filtering (mostly provided by the transformer itself and the smoothing capacitors), and that usually removes most of the noise that is carried by the mains. Once the AC mains is converted to DC, it is nonsensical to assume that there is any audible difference between DC from highly filtered and regulated power supplies and that from the same supply when it's powered via an expensive mains lead, a complex filter or a mains 'reconstruction' device. The one exception to this is where adding the device reduces audible noise. Audible noise is often very difficult to track down, and if it's bad enough it may require several different approaches used together to make a worthwhile improvement.
All that any of these devices can do to change anything is remove impulsive noises or other audio frequency interference from the mains. If your system doesn't have any noises that come from the mains, then adding expensive and/or complex external systems will do exactly nothing. Sinusoidal mains don't make 'cleaner' DC, and if noise happens to get into the audio path from the safety earth then none of the options will help much - if at all.
Depending on where you are (near an industrial area for example), the safety earth might have some noise. In this case you need an electrician to install a dedicated earth stake that complies with all regulations, rather than pay for costly external gizmos that probably won't help anyway.
In the vast majority of cases, no double-blind testing is ever done by people who claim huge differences, and anyone who insists that the system's sound stage (imaging), midrange clarity or high frequency reproduction is 'better' is almost certainly a victim of self-delusion and/or the experimenter expectancy effect. Both are well known in professional circles (especially medical), and double-blind testing is the only way that anyone can be confident that a device is effective or not.
Finally, it's worth stating again that no mains reconstruction amplifier, filter or mains lead will have much effect with many of the common noise sources. If you hear a noise when your fridge switches on or off or when SWMBO (she who must be obeyed) uses the vacuum cleaner, then it's quite likely that the noise is airborne. Filters, regenerators and stabilisers will have no effect on airborne noise, and the problem can only be fixed by suppressing the noise at the source.
References
1. Ferroresonant Transformers - General Transformer
2. Voltage Stabilisation Techniques - Claude Lyons
3. Magnetic Amplifier Voltage Regulator System, US Patent 3323039 A (1967)
4. Magnetic Amplifiers, another lost technology - US Navy, 1951
5. AC Voltage Stabilizers & Power Conditioners - Ashley-Edison UK
Elliott Sound Products - Electrical Safety Requirements
The requirements for electrical safety are (perhaps surprisingly) fairly consistent world-wide. European standards are now the basis for many others, and most of the definitions are (close to) identical no matter where you are. These definitions are important, because they determine the safety rating for any given piece of equipment. The standards of many countries were fairly lax until around the 1970s, but even after that they were often poorly enforced. A great deal of older equipment is positively (and negatively) dangerous, with some 120V equipment becoming potentially lethal if used at 230V without appropriate safety modifications (see 'death capacitor').
Most DIY people make their own power supplies, but there are also many who rely on external 'plug-pack' supplies (aka 'wall warts') or wall transformers. This is often done to ensure electrical safety, especially by newcomers who are not comfortable with (or qualified for) working with mains wiring. It's not at all uncommon for 'nanny state' regulations to make it difficult to get the parts needed, and (more importantly) it's close to impossible to get a copy of the necessary standards that apply where you live. In most cases, they are only available if you buy the standards documents, and this can get very costly, very quickly.
It can even get confusing if you need (for example) a small (probably switchmode) power supply that's totally isolated from hazardous voltages. This may be to provide power to a small electronic device, or perhaps as an auxiliary (always-on) supply inside equipment to control switching or retain memory settings. The voltage needed depends on the purpose, but will typically be somewhere between 5V and 24V DC. It's not always easy to know whether a particular power supply or other piece of gear is safe, or even legal where you live. Most supplies purchased from reputable suppliers are safe, but ebay is a one-stop-shop for many people, and the goods sold are often poorly described, with little or no safety information. Many items (especially direct imports) do not comply with any standards, and a few have been proved to be lethal during coroner's inquests!
The photo shows the (modified) intestines of a switchmode supply that was removed from a plug-pack ('wall-wart'). It was intended to be used inside another piece of equipment where an external supply was not considered acceptable, and I used an Australian approved plug-pack to get the PCB. The various points of interest are shown on the photo, including the isolation barrier and the slot used to increase creepage distance under the optocoupler, between the mains (hazardous voltage) and the output (extra-low voltage). The outer case (now discarded) had the required Australian approvals moulded into the plastic. Although it is modified, no changes were made that could cause the electrical safety to be compromised. However, to retain safety, it has to be installed in such a manner that it can never become detached, and that even if it did become detached it would still remain safe.
We expect to see standards markings, such as a C-Tick (AS/NZS compliance for Australia & New Zealand, now called RCM - regulatory compliance mark), BS (British Standard), UL/CSA (US, Canada), CE, IEC (Europe), VDE (Germany). Many of the 'far eastern' suppliers do not actually run any tests at all, let alone have tests done in a certified laboratory (as required by all standards bodies). The certifications may well be imprinted on the supply, but that does not mean it is actually compliant. There have been deaths in many countries as a direct result of non-compliant power supplies (especially phone chargers) bought at markets or from on-line auction sites. Some of these will claim compliance, but have never been tested. Not only have they not been tested (or 'classified'), but there are many that will fail (often spectacularly) if tested to any relevant standard.
There are countless examples of fake 'name brand' phone chargers on the Net, and while some might be alright, many are not. There have been cases worldwide where people have died or been injured because of fake (and non-compliant) phone chargers that have failed and placed the full mains voltage on the output. Without exception, these fakes are bought from on-line vendors on auction sites, or from 'pop-up' market stalls where someone has imported them to sell. In Australia, there are often raids performed by compliance officers on market stalls, with non-approved and potentially unsafe products seized and destroyed (and the vendors fined). Be aware that many media (especially social media!) reports and/or claims show only that the writer has no understanding of how any electrical equipment operates, so reports can be (and often are) somewhat nonsensical. Such 'advice' should usually be discarded.
Equipment classes divide electrical appliances and other sub-systems into different classes. These describe the safety arrangements that apply (or not), and in some cases the claimed class may not match reality. Medical classes are (generally) the same as the other classes used for non-medical equipment, but most countries only allow for Class I (using a safety earth), Class II (double or reinforced insulation) and/or Class III (safety extra-low voltage). These are discussed in greater detail below.
While this article is primarily about insulation, equipment classes and requirements for safety isolation, it's also important to understand the implications of frequency. With most modern gear using a switchmode power supply it's not an issue, but it's something that must be understood for transformer based supplies and some electric motors. See Importing Equipment From Overseas ... Effects of Voltage & Frequency on Electronic Equipment. You may also want to read the article Electrocution & How To Avoid It, which also covers some of the info shown here.
Commercially made mains powered equipment that's only a few years old will generally be to a reasonably high standard in terms of electrical safety. In most countries, it is a requirement that mains wiring cannot be accessed by the user without the use of a tool (which can be as simple as a screwdriver). It's common now for security screws to be used to make it harder for anyone to get inside. IMO this is silly, as non-technical people don't want to get in, and technical people will get in regardless. A great deal of new equipment is double insulated, and no earth wire is used. A 2-pin plug, 2-core mains lead and wide-range power supply allow operation world-wide, and only the mains lead has to be changed to suit the importing country's mains outlets.
+ +In this article, I have used the normal Australian terms for mains conductors. These are 'active' (aka phase, line or live), 'neutral' and 'earth' (protective earth, earth ground, ground, etc.). The terms differ world wide (as do the colours used), but it's generally hard to be confused because the terms are fairly self-explanatory. Two terms that are not very sensible are 'grounded conductor' (neutral) and 'grounding conductor' (earth). These terms are sometimes used in the US, but are not used elsewhere.
There are literally countless documents, standards and pieces of legislation that cover electrical products worldwide, and I cannot possibly even try to list them all, nor can I provide country-specific information. There are rules and regulations not only for equipment, but for mains leads and connectors, how they must be marked, and whether or not they specifically require individual type approvals. This varies widely in different countries; in some cases approved test houses must test and certify the product, and in others only a 'declaration of suitability' or similar may be required. In some places it may be illegal (or at least unlawful) to perform mains wiring of any kind unless licensed by the appropriate authority, while in others it's quite alright. It's entirely up to the reader to determine what is or is not permitted where they live - I can't (and won't) even attempt this.
+ +It's also important that the reader understands that this article covers only the electrical safety aspects of an electronic circuit. There are other regulations as well, covering EMC (electromagnetic compatibility), which places defined limits on the radio frequency noise, including radiated and conducted emissions. Radiated emissions are those that can be picked up by a nearby radio receiver, and conducted emissions refers to noise passed back into the electricity grid via the electrical outlet. Linear power supplies (using a conventional mains frequency transformer) and approved switchmode supplies will generally pass both tests, but a switchmode supply you make yourself (not advised) or one purchased from ebay probably will not.
There are additional regulations that cover the risk of fire, and in many cases tests are conducted on plastics and other materials to ensure they self-extinguish once the heat source is removed. These tests may or may not be mandatory, or will be required for some products but not for others. This is a minefield, and again, to get the proper information you need to purchase a copy of the relevant standard. In most cases, you won't even know which document(s) you need to buy, and even trying to find out will result in many hours of frustration.
Electrical appliances using mains voltage must (in most countries) provide at least two levels of protection against electric shock to the user (e.g. double-insulation). This ensures that if one of the protection layers were to fail, there is a backup (the second layer) still in place. Provided all external wiring is up to standard, this makes electrical equipment safe to use. Insulation classes are also subject to temperature limits.
Functional      Insulation between conductive parts which is necessary only for the proper functioning of the equipment.
Basic           Insulation applied to live parts (e.g. the plastic insulated connectors that hold the active and neutral wires in place) to provide basic protection against electric shock.
Supplementary   An independent insulation, in addition to basic insulation, to ensure protection against electric shock in the event of failure of the basic insulation.
Double          Insulation comprising both basic and supplementary insulation.
Reinforced      A single insulation system applied to live parts, which provides a degree of protection against electric shock equivalent to double insulation.
In the electrical appliance manufacturing industry, IEC (International Electrotechnical Commission) protection classes are used to differentiate between the protective-earth connection requirements of devices.
+ +Depending on how the protection is provided, electrical appliances are put into five classes of equipment construction, Class I, II, III, 0 and 01. Of these the most important (and generally the only ones that are relevant for modern equipment) are Class I and II. For historical reasons, Class 0 is also covered. Class III is uncommon, at least as officially specified, but many products could be designated as Class III if supplied with a compliant transformer or power supply.
+ +The temperature rating of insulation is important, and failure is almost certain if it's exceeded. Such failures are rarely instantaneous, and in some cases it may take several years for the materials used to degrade to the point where they are no longer insulators. All insulating materials will break down at elevated temperatures, and the enamels and resins used in power supply transformers and/or PCBs are generally fairly low temperature. The list below shows some of the temperature classes (based on IEC thermal classifications). Letter codes are also used, and can be found on-line easily enough. Those shown are the IEC (60085 Thermal Class) numerical codes, which make a lot more sense than letters (with gaps!).
Class   Max. Temp   Typical Materials
90      90°C        Paper (not impregnated), silk, cotton, vulcanised natural rubber, thermoplastics that soften above 90°C
105     105°C       Organic materials such as cotton, silk, paper, some synthetic fibres
120     120°C       Polyurethane, epoxy resins, polyethylene terephthalate (PET/ Mylar®/ Polyester)
130     130°C       Inorganic materials such as mica, glass fibres, asbestos¹ with high-temperature binders
155     155°C       Class 130 materials with binders stable at the higher temperature
180     180°C       Silicone elastomers, and Class 130 inorganic materials with high-temperature binders
200     200°C       Mica, glass fibres, asbestos¹, Teflon®
240     240°C       Polyimide enamel (Pyre-ML) or Polyimide films (Kapton® and Alconex® GOLD)

¹ Asbestos was common, but due to the damage it causes to lung tissues it is no longer used in any 'normal' application. It might still exist in old equipment, so beware.
The ambient temperature must be considered as well. In all cases with electronic parts, the ambient temperature is that measured in the immediate vicinity of the part, and not the temperature inside the room, building etc. All insulators should be operated at the lowest feasible temperature, but usually not below 0°C unless unavoidable. Most common power supplies and transformers will be rated for no more than 120°C maximum temperature, with switchmode supplies generally lower (105°C is the upper limit set by electrolytic capacitors).
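Because the IEC 60085 classes are numeric, the limit is easy to check programmatically. The Python sketch below transcribes the class table from above and verifies that local ambient plus temperature rise stays within the class limit; the function name and structure are mine, not from any standard.

```python
# IEC 60085 thermal classes from the table above: numerical class = max temp (°C).
THERMAL_CLASS_MAX_C = {90: 90, 105: 105, 120: 120, 130: 130,
                       155: 155, 180: 180, 200: 200, 240: 240}

def within_thermal_class(thermal_class: int, ambient_c: float, rise_c: float) -> bool:
    """True if local ambient plus temperature rise stays within the class limit.

    'Ambient' is the temperature in the immediate vicinity of the part,
    not the room temperature.
    """
    return (ambient_c + rise_c) <= THERMAL_CLASS_MAX_C[thermal_class]

# A Class 120 winding at 50°C local ambient with a 60°C rise is within limits...
print(within_thermal_class(120, 50, 60))   # True  (110°C <= 120°C)
# ...but the same conditions exceed a Class 105 rating.
print(within_thermal_class(105, 50, 60))   # False (110°C > 105°C)
```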
Most insulation failures are the result of age and temperature. A 50 year old transformer may have been built with materials that would no longer be considered safe, but if used within its ratings it can easily last for another 50 years without risk of failure. While short term overloads are (generally) easily tolerated, repeated or prolonged abuse will reduce the life of a transformer. So-called 'hot spotting' (where one part of a winding gets much hotter than the overall/ average) can reduce a transformer's life significantly. Mechanical damage can cause a failure by physically breaking the insulation between windings or from a winding to the core.
+ +Conventional 50/60 Hz transformers are incredibly reliable when used sensibly, and even after 50 years (or more) most can be relied upon to perform as expected and remain safe to use. Modern switchmode supplies have much greater complexity, and the life is not related to the transformer, but the support components (ICs, transistors and especially electrolytic capacitors). Because there are so many more parts, there are many more things that can go wrong. Elevated temperature shortens the life of all parts, and the maximum ambient temperature should be kept as low as possible. This is not always easy.
+ +The insulation requirements extend beyond the transformer and associated parts (where used). Other wiring can degrade or be damaged as well, and the mains wiring in old valve equipment is especially vulnerable. The heat inside the chassis can cause accelerated degradation, particularly where the insulation is vulcanised rubber or low-temperature plastic. This is especially important for original Class 0 equipment (see next section) where the only level of protection is basic insulation with no backup of any kind.
+ + +A brief rundown of some of the equipment classes and applicable standards follows. These are important to understand, as mis-application can result in equipment that is unsafe, with the risks of electric shock, fire or both. The standards applied vary by country, but most use the following definitions and requirements.
Class 0     Electric shock protection afforded by basic insulation only. No longer allowed in most countries for new equipment, but 'legacy'/ vintage gear is commonly Class 0 (especially that of US origin). Unsafe, and should be upgraded to Class I without delay.
Class I     Achieves electric shock protection using basic insulation and protective earth grounding. This requires all conductive parts that could assume a hazardous voltage in the event of basic insulation failure to be connected to the protective earth conductor.
Class II    Provides protection using double or reinforced insulation, and hence no earth connection is required.
Class III   Operates from a SELV (Separated Extra Low Voltage) supply circuit, which means it inherently protects against electric shock, as it is impossible for hazardous voltages to be generated within the equipment.
The markings shown above are not entirely 'universal', but are standard for Australia (& New Zealand) and most of Europe. It is mandatory in most countries to show the Class-II (double-insulated) symbol for equipment so designated, but Class-I gear will usually have no markings - it will be supplied with a 3-core IEC lead (or fixed 3-core lead) that removes any doubt. Note that in most countries, leads with a 3-pin IEC connector must also be fitted with a 3-core lead (active, neutral and earth), with a matching 3-pin plug on the other end. This must also be followed if you make your own lead; an IEC lead without an earth wire is potentially very dangerous, because it can be used with any equipment fitted with a matching IEC receptacle, including gear that must be earthed. The current rating of the cable must match that of the plug and socket (10A for an IEC C13 socket). All connectors and cable should be approved for use in the country where you live.
+ +Understanding the safety standards and the above classes of equipment requires a clear understanding of the circuit definitions, types of insulation and other terminology used in relation to power supplies. There are (according to some sources) sub-standards of the above, such as Class 0 (basic insulation, no provision for a safety earth) but these do not exist in any standards documentation that I've seen, and should not be used.
+ +However, many 120V countries have been using 'Class 0' for decades, where the insulation class is 'basic' (i.e. not reinforced) and no protective earth (ground) is provided by the mains lead or plug. Countless guitar amplifiers, pieces of hi-fi gear and other appliances are still in service that can only be classified as 'Class 0', and while passably safe when used at 120V AC and in dry conditions, such equipment is decidedly unsafe at higher voltages (such as 220-230V mains). The sale of such equipment is now generally unlawful in most countries, but so-called 'grandfather' clauses in regulations may allow this gear to co-exist with Class I and Class II. In general, any Class 0 gear you have should be upgraded to Class I to ensure it doesn't kill anyone (including you).
+ +The idea of 'protection' being afforded by (aging and possibly disintegrating) basic insulation with absolutely no backup (no safety earth or reinforced insulation) is not something to inspire confidence. Such equipment is inherently dangerous, and doubly so if it's been modified for 230V but without adding an earth connection. Various US guitar amp makers went the extra 'mile' to ensure that the danger was as great as possible, by including what has become known as the 'death cap' (no, not the mushroom). This was nearly always just a high voltage (typically 630V DC, and around 39-47nF) capacitor connected to the (unearthed) chassis by a switch. The user could select the switch position that gave the least noise (or perhaps the milder electric shock). This topic has its own section below.
+ +Note that Class 0 products will be prohibited from sale in most countries. No new equipment should use this class, and existing equipment is expected (but unfortunately not mandated) to be upgraded to Class I. I've not mentioned Class 01, but that's also prohibited. Class 01 refers to products that have provision for an external earth (protective or functional), but it's not connected via the mains cable. For example, vintage radios (almost always AM) often had an earth terminal on the chassis, but used a two-wire mains lead. The earth terminal was often used in conjunction with an external antenna to improve reception.
+ + +This is something of a can of worms. World-wide, there are many different standards, and detailed info is mostly available from the standards documentation, which is only ever available if you pay for it. The test voltage is usually DC, but 'hi-pot' (high potential) tests are also done with AC. This is one of the areas where head-scratching and general confusion are the only sensible options. While a product (such as an isolation transformer or DC-DC converter) may claim it has been tested at 1kV, that does not mean that it can be used at that voltage. In some cases the actual recommended voltage might be less than 100V RMS.
+ +Some parts (such as optocouplers) are specifically designed to provide a high isolation voltage. Common devices are rated for 7.5kV AC isolation, which is far more than can actually be used on a normal PCB. These parts are used extensively in switchmode supplies to provide voltage feedback, and they typically can have the full mains voltage across the isolation barrier. It's a great deal harder to maintain high isolation with a wound component (i.e. a transformer), because there may be air pockets between windings, and the windings have to be physically segregated while trying to keep package dimensions to the minimum. Now, two more terms come into play - creepage and clearance. These will be covered later.
Cable/ Equipment Operating Voltage   DC Test Voltage
24 to 50 V                           50 to 100 V DC
50 to 100 V                          100 to 250 V DC
100 to 240 V                         250 to 500 V DC
440 to 550 V                         500 to 1000 V DC
Hi-Pot tests can be destructive. In such a test, the test voltage is increased until the insulation fails, which gives an indication of the dielectric strength of the insulation material. Non-destructive tests are at a lower voltage, and verify that the part or product meets specifications. Test times range from a few seconds to 1 minute or more.
Material                Dielectric Strength
Vacuum (reference)      20 - 40 MV/ metre
Air (sea level)         3.0 MV/ metre
Aluminium oxide         13.4 MV/ metre
Ceramic                 4 - 12 MV/ metre
Kapton                  120 - 230 MV/ metre
Mica                    160 MV/ metre
Polycarbonate           15 - 34 MV/ metre
Polyethylene            50 MV/ metre
Polyester/ Mylar/ PET   16 MV/ metre
Polypropylene           23 - 25 MV/ metre
Polystyrene             25 MV/ metre
Teflon                  60 - 150 MV/ metre
Dielectric strength values are not exact, and it's surprisingly hard to get hold of anything definitive. The above table was taken from the ESP article on capacitors. It's common (but not very useful) to specify dielectric strength in MV/m (megavolts per metre), and that's what is shown in the table. To get something meaningful requires some simple maths. Volts per µm (micrometre) is easy: simply call the value shown 'volts' instead of MV. For example, polyester/ PET has a dielectric strength of 16V/µm, so a 25µm film can withstand 400V. The US generally uses the 'mil' (1/1,000"), which is close enough to 25µm.
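The conversion described above is easily sketched in code. This is just the arithmetic from the text (1 MV/m equals 1 V/µm), not a substitute for proper derating:

```python
# 1 MV/metre is exactly 1 V/µm, so a table value can be used directly.
def withstand_voltage(strength_mv_per_m: float, thickness_um: float) -> float:
    """Approximate withstand voltage of a film: dielectric strength (MV/m,
    i.e. V/µm) multiplied by thickness in µm. Real-world breakdown also
    depends on electrode shape, temperature and test voltage rise-time."""
    return strength_mv_per_m * thickness_um

# Polyester/ PET at 16 MV/m: a 25µm (~1 mil) film withstands about 400V.
print(withstand_voltage(16, 25))   # 400
```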
The voltage depends on many factors, including the thickness of the film, the shape of the electrodes used for the test, and the temperature of the material being tested. The rise-time of the test voltage also affects the result, so test systems have to comply with the relevant standards. ISO/IEC standards specify a material thickness of 1mm for testing.
+ +The most common and best known insulation (dielectric) tester is the Megger ®, which has been used to verify electrical installations for many, many years. For 230V installations, the recommended test voltage is 500V DC, and the insulation resistance of a circuit must exceed 1MΩ. These testers can also be used for components (transformers, isolators, etc.) and are now readily available with multiple test voltages. Of course, the latest ones are digital and use a switching supply to generate the high test voltage.
+ + +Power supply voltages are categorised depending on voltage and type of supply (AC or DC). The vast majority of DIY power supplies for power amps and preamps will include 'Hazardous' voltages (all mains wiring) and 'ELV' (extra-low voltage) for both power amp and preamp supply voltages. Some power amplifiers have supply rails that exceed the ELV ratings (and they can provide an output voltage that also exceeds ELV), but there is no consensus worldwide as to whether this constitutes a hazard or not.
Hazardous Voltage
    Any voltage exceeding 42.4V AC peak or 60V DC without a limited current circuit.

Extra-Low Voltage (ELV)
    A voltage in a secondary circuit not exceeding 42.4V AC peak or 60V DC, the circuit being separated from hazardous voltage by at least basic insulation.

Separated Extra-Low Voltage (SELV)
    A secondary circuit that cannot reach a hazardous voltage between any two accessible parts, or between an accessible part and protective earth, under normal operation or while experiencing a single fault. In the event of a single fault condition (insulation or component failure), the voltage in accessible parts of SELV circuits shall not exceed 42.4V AC peak or 60V DC for longer than 200ms. An absolute limit of 71V AC peak or 120V DC must not be exceeded.
    SELV circuits must be separated from hazardous voltages (e.g. primary circuits) by two levels of protection, which may be provided by double insulation, or basic insulation combined with an earthed conductive barrier.
    SELV secondaries are considered safe for operator access. Circuits fed by SELV power supply outputs do not require extensive safety testing or creepage and clearance evaluations.

Limited Current Circuits
    These circuits may be accessible even though voltages are in excess of SELV requirements. A limited current circuit is designed to ensure that under a fault condition, the current that can be drawn is not hazardous. To qualify for limited current status, the circuit must also meet the same segregation rules as SELV circuits. Limits are detailed as follows:
    - For frequencies below 1kHz, the steady state current drawn shall not exceed 0.7mA peak AC or 2mA DC. For frequencies above 1kHz, the limit of 0.7mA is multiplied by the frequency in kHz, but shall not exceed 70mA.
    - For accessible parts not exceeding 450V AC peak or 450V DC, the maximum circuit capacitance allowed is 0.1µF.
    - For accessible parts not exceeding 1500V AC peak or 1500V DC, the maximum stored charge allowed is 45µC and the available energy shall not be above 350mJ.
The above may look either straightforward or complex depending on your experience. It's probably more complex than it appears, because all of the terminology relies on insulation and equipment classes. ELV isn't at all daunting, and that's what most of us will use for preamps and power amps, along with a great deal of other equipment. It is important to understand that the 'basic' insulation that separates ELV from hazardous voltages must be rated for the worst-case maximum input (hazardous) voltage, with an adequate safety margin to ensure longevity under adverse conditions. It's also important that no component failure can cause a breach of the safety barrier or create a fire hazard.
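As a summary of the limits described above, the following Python sketch classifies a voltage against the ELV thresholds and computes the steady-state AC current limit for a limited current circuit. The function names are mine; the figures (42.4V AC peak, 60V DC, 0.7mA, 70mA) are those quoted in the text.

```python
ELV_AC_PEAK_V = 42.4   # AC peak limit for extra-low voltage
ELV_DC_V = 60.0        # DC limit

def is_hazardous(ac_peak_v: float = 0.0, dc_v: float = 0.0) -> bool:
    """A voltage is hazardous if either ELV limit is exceeded
    (ignoring the limited-current exemption)."""
    return ac_peak_v > ELV_AC_PEAK_V or dc_v > ELV_DC_V

def limited_current_ac_limit_ma(freq_hz: float) -> float:
    """Steady-state AC current limit (mA peak) for a limited current circuit:
    0.7mA below 1kHz, then 0.7mA x f(kHz), capped at 70mA.
    (The DC limit quoted above is a flat 2mA.)"""
    if freq_hz < 1000.0:
        return 0.7
    return min(0.7 * freq_hz / 1000.0, 70.0)

print(is_hazardous(ac_peak_v=325))           # True  (peak of 230V RMS mains)
print(is_hazardous(dc_v=48))                 # False (48V DC is ELV)
print(limited_current_ac_limit_ma(50))       # 0.7
print(limited_current_ac_limit_ma(500_000))  # 70.0 (capped)
```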
+ +The term 'SELV' is claimed to stand for either 'separated extra-low voltage' or 'safety extra-low voltage', depending on the source. SELV (in its true form as defined by the standards) only applies when a fully compliant SELV transformer is used. While an off-the-shelf part may provide extra-low voltage, it usually can't be referred to as 'SELV' unless the transformer is an approved type. This is not possible in most cases, due to cost. The secondary of a SELV transformer is not connected to the mains protective earth - it is intended to be floating.
+ +Limited current circuits are not common. An example is a 'touch' switch that operates only from the mains (no low voltage transformer), and these rely on a tiny current drawn as your finger touches the trigger plate to function. It should be immediately apparent that this type of circuit has to be carefully designed, and that current-limiting components must be totally reliable. They can become open circuit, but never short circuit. Class Y capacitors (preferably 2-3 in series) and high value, high voltage resistors are called for.
+ +Medical applications are not covered here. These add significant restrictions to ensure patient safety, and also require extensive laboratory testing to verify compliance. This is an expensive process, and is not something that most people will experience. There's also no attempt to cover telecommunications requirements. This is another area where many things can change (including definitions) and it is complex and expensive to obtain approvals. While there are many similarities world wide, there are also some significant differences that make this a rather specialised field.
+ + +The Low Voltage Directive (LVD) is a European standard that covers health and safety risks with electrical equipment. Internal voltages are not part of the standard unless they are accessible from outside the enclosure, which would most commonly only be accessible by using a tool - a screwdriver is generally considered a 'tool' for the purposes of much legislation. For most electrical equipment, the health aspects of electromagnetic emissions are also covered by the LVD. The LVD applies for electrical equipment operating with an input or output voltage of between ...
- 50 and 1000 V for alternating current (AC)
- 75 and 1500 V for direct current (DC)
The LVD applies to a wide range of electrical equipment for both consumer and professional usage, such as ...
- Luminaire plugs and socket outlets for domestic use
- Appliance couplers, plugs, outlets
- Cord extension sets (plug + cable + socket outlet, with or without passive components)
- Installation enclosures and conduits
- Travel adaptors
- Household appliances
- Cables
- Power supply units
- Certain components (e.g. fuses or other safety-critical parts)
The EU legislation in this area is important to ensure that health and safety requirements are the same across Europe for products placed on the market. However, many other countries don't apply the same criteria, or they are applied differently. Some of the LVD requirements may be unique to Europe, but most other countries have rules that achieve the same goals. As always, if you need the full scope of the LVD you have to purchase the standards documents.
+ +There is information available on-line, but you're unlikely to find any finer details such as the test methodology, scope of testing, or anything that's actually useful for someone building their own equipment. Adherence to basic safety guidelines will help, but even that can be difficult if you can't find the information anywhere. This is a recurring theme - to ensure compliance you need the detailed knowledge of the requirements, but you can't get that without paying the (usually hefty) price for the standards documentation.
+ + +These are two terms that most people do not understand. This is not surprising, because although they are self-explanatory, the explanations themselves don't mean anything without context. Clearance is the distance, through air, separating hazardous voltage from phase to neutral, earth or any other voltage. The minimum value is typically 5mm, but there is a vast variation depending on pollution categories (not normally applicable inside sealed equipment) and voltage. Using the minimum figure is not sensible for hobbyists, and it's preferable to ensure that the separation is as great as possible.
+ +Creepage is the distance across the surface of insulating material, including printed circuit boards, plastic terminal blocks, or any other material used to separate hazardous voltages from phase to neutral, earth, or any other voltage. Again, 5mm is generally considered 'safe', but that depends on the material itself, pollution categories (again) and the voltage(s) involved. Note that the creepage distance is from the closest edges of PCB copper pads or tracks, and not the pins of the connector or other device. The following drawing shows the difference between creepage and clearance.
In the above, creepage is shown between two transformer windings (only the layer adjacent to the primary/ secondary insulation is shown). The second drawing shows creepage across the PCB and clearance between the wire 'cups' on a barrier type terminal block. Creepage exists on both sides of the board. Where pollution is expected, partially conductive contaminants may bridge the creepage distance, possibly allowing enough current to flow to cause a fire. Be aware that burnt materials (such as PCB resins) can become carbonised (and therefore conductive) if heated beyond their rated maximum temperature. I've seen it happen, and it is a very real phenomenon, so don't scoff.
As with most of the other standards, you will only get those that apply where you live if you pay for the relevant documents. There is information on-line, and some of it has been 'extracted' from standards documentation. Other material you find may or may not be relevant or even accurate, so you need to do the best you can to ensure that creepage and clearance distances are as great as you can make them, without being silly. If at all possible, ensure around 8mm (0.315") for both creepage and clearance. Where space allows, greater distances may be used.
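If you want to sanity-check measurements against the ~8mm figure suggested above, a trivial helper is enough. Note that 8mm is this article's conservative rule of thumb, not a value taken from any particular standard:

```python
RECOMMENDED_MIN_MM = 8.0   # this article's rule of thumb, not a standards figure

def spacing_ok(creepage_mm: float, clearance_mm: float,
               minimum_mm: float = RECOMMENDED_MIN_MM) -> bool:
    """Check measured creepage and clearance against a minimum distance.

    Between the same two points, clearance (straight line through air) can
    never exceed creepage (path along the insulator surface), so obviously
    inconsistent measurements are rejected outright.
    """
    if clearance_mm > creepage_mm:
        raise ValueError("clearance cannot exceed creepage between the same points")
    return creepage_mm >= minimum_mm and clearance_mm >= minimum_mm

print(spacing_ok(creepage_mm=9.5, clearance_mm=8.2))   # True
print(spacing_ok(creepage_mm=6.0, clearance_mm=5.0))   # False - too close
```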
+ +'Officially', the minimum clearance distance depends on the 'overvoltage category', which for 120/230V equipment is usually 4kV. Deciphering some of the info you may find (if you look hard enough) can be difficult, and designing for the minimum is unwise anyway. While you might get away with using the minimum, that doesn't mean that your project would pass lab testing. Items with a 4kV overvoltage category must allow a minimum of 3mm clearance between mains conductors, but IMO that would be less than ideal.
In some cases, switchmode power supply manufacturers place a cut-out slot beneath optoisolators and/ or transformers to increase the creepage distance (see Figure 1). This is potentially useful to avoid a conductive path between mains and low voltage if the PCB material becomes contaminated (for example, if an electrolytic capacitor loses its electrolyte). This is commonly seen in higher quality units, but not so much in 'budget' or non-certified supplies. Open PCB supplies (no casing) are commonly used within other products, and become an integral part of the overall unit; if type approval is required, the PSU is tested along with everything else.
+ +It's important to understand that creepage and clearance distances are not limited to your wiring. Transformers are subject to the same constraints, as are small switchmode supplies, whether 'stand-alone' or sold as wall transformers (AC or DC). In Australia, all wall transformers (aka 'wall-warts') are 'declared articles' (formerly known as 'prescribed articles'), and safety testing is mandatory. That means they must be type-approved, and will be subjected to a barrage of tests (some of which may be destructive) to ensure that there is no single failure that can render the item unsafe. If the possibility of multiple failures is identified, then that will also be tested.
+ +Most countries don't have such a rigorous approach, but all major countries do insist that products bear the appropriate safety standard markings for the country where it's sold. This is the responsibility of the manufacturer or supplier, and government agencies may demand to see the test results (perhaps on a random basis) to ensure compliance. Such demands will be made routinely if there is a reported injury or death attributable to the power supply in question. None of the fake 'name-brand' products will have been tested, and the required safety logos are simply applied to the product to make it appear legitimate.
+ +For the average (or even skilled) user, it can be almost impossible to verify that the product really has been tested, but sometimes you can get a good idea if you can look inside. The use of 3kV ceramic capacitors instead of certified Class Y caps is not uncommon, some have almost laughable creepage and clearance distances, and others may actually look to be alright. However, without proper testing, you have no way of knowing if the insulation in a small switchmode transformer is up to standard, nor can you know if proper creepage distances are maintained between the windings (creepage and clearance do not apply if the transformer has been varnish impregnated). Where a transformer has been impregnated or potted, the standard test is the 'hi-pot' test, with the voltage increased to 4kV or more, depending on the insulation class claimed.
+ + +A part of most safety tests for Class I equipment (incorporating a safety earth conductor to the power outlet) includes verifying that the earth lead is capable of handling a reasonable current, and has a low resistance (typically 100mΩ, or 0.1 ohm). A test lab will use a dedicated tester for this, and PAT testers provide this function as well. The test is normally conducted at 1.5 times the power outlet rating (so 15A for a 10A outlet), with a maximum test voltage of 12V (AC RMS or DC). The maximum current is 25A.
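The pass/fail arithmetic for the earth-bond test described above is straightforward Ohm's law. The sketch below uses the figures quoted in the text (1.5× the outlet rating, 25A and 12V maxima, 100mΩ limit); it is illustrative only, not a substitute for a calibrated tester.

```python
MAX_EARTH_RESISTANCE_OHM = 0.1   # 100 milliohms
MAX_TEST_VOLTAGE_V = 12.0        # AC RMS or DC
MAX_TEST_CURRENT_A = 25.0

def earth_bond_passes(outlet_rating_a: float, measured_drop_v: float) -> bool:
    """Earth-bond check: test at 1.5x the outlet rating (capped at 25A),
    derive resistance from the measured voltage drop (R = V / I), and
    compare against the 100 milliohm limit."""
    test_current = min(1.5 * outlet_rating_a, MAX_TEST_CURRENT_A)
    if measured_drop_v > MAX_TEST_VOLTAGE_V:
        raise ValueError("test voltage exceeds the 12V limit")
    return (measured_drop_v / test_current) <= MAX_EARTH_RESISTANCE_OHM

# A 10A outlet is tested at 15A; a 1.2V drop implies 80 milliohms, which passes.
print(earth_bond_passes(10, 1.2))   # True  (0.08 ohms)
print(earth_bond_passes(10, 2.4))   # False (0.16 ohms)
```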
+ +This is not something that most home constructors will ever verify, but it is obviously important. There's no point including an earth lead that can't handle enough current to open a circuit breaker or blow a fuse. Elsewhere on the ESP site, I've provided a circuit for an 'earth loop breaker', which uses a high-current diode bridge in parallel with a 10 ohm resistor and a 100nF capacitor. Clearly (and as advised in the articles where it's shown), technically this will be unlawful in most countries if it's simply in series with the earth lead. If used, the 'loop breaker' should simply lift the common ground of the internal electronics, with the earth lead firmly connected to chassis (including the frame of the power transformer if it's a 'conventional' (E-I laminations) type). An example of this circuit is shown in the power supply of Project 27.
+ +Even then, if one follows the letter of the regulations, this may still be unlawful, because there will be a 2V drop across the diodes if there is a primary-secondary transformer insulation failure. These are very uncommon, the risk is therefore small, and the diodes will pass the test current easily. However, the measured 'resistance' will be well in excess of the allowed 100mΩ, and the test may be deemed a fail, depending upon the test methodology used. The test methodology specifies that it's carried out between the earth pin on the mains lead, and any earthed (or intended to be earthed) metalwork or user accessible earthed contact points. Provided the input and output connectors are not claimed to be earthed, then the test should pass, but this may depend on the person performing the test.
When something 'unusual' is done (such as an earth loop breaker), there are several possible interpretations, and regulations may not consider such an arrangement to be an 'acceptable' practice. As far as I'm aware, this has not been verified one way or another with any authorised test house, so it's not possible to say with any certainty that it would pass required tests. As already noted, there may be something in the standards documentation that covers it, but I can't afford to purchase endless official standards documents, and nor can prospective constructors.
It used to be that only valve (vacuum tube) amplifiers had high internal voltages, but there are also many transistor amps that have a total supply voltage of well over 150V DC (±75V). This is not necessarily considered dangerous, but it can still give a nasty bite. Valve equipment has HV potentials of up to 700V DC, and occasionally even more. This is most certainly dangerous, and it's essential that the high voltage is properly 'contained' so that no-one can come into contact with it.
It seems likely that (some of) the possible dangers have 'slipped through the cracks' to some extent, since the regulatory bodies probably don't take much notice of niche products. If everything is enclosed in a 'cage' of some kind (or a perforated steel cover protects the valves) then there's no risk to the user, but much of this equipment has no protection. The user is separated from the HV by a very thin and fragile glass envelope, and if that is broken, touching the internal structure of a valve could be fatal.
Children are particularly vulnerable because they have no awareness of the danger. However, there don't appear to be any reported deaths associated with valve amps in general, but this is no reason for complacency. Valve equipment is especially dangerous when you are working on it - I know of no-one who regularly works on valve amps and has never received a shock. Such shocks can be fatal, but are more often just very disconcerting and definitely get the adrenaline pumping.
It's obviously essential to ensure that all wiring is safe, and uses insulation that's designed to withstand the voltage(s) used. All forms of insulation (not just the wiring) need to be adequate, and there don't appear to be any specific regulations that apply to high internal voltages, provided they are inaccessible from outside the chassis. Some valve equipment (guitar amps in particular) uses the minimum possible insulation, but breakdown is rare and few faults can be traced to insulation failure. This doesn't include output transformers or valve bases and sockets, where insulation breakdown is not at all uncommon.
Due to the lack of readily available regulatory information, the only things I can recommend are based on common sense. While some form of protective cover for the valves themselves (especially output valves) is preferable to just having them in 'free air', this is uncommon, despite the fact that they get very hot and can cause serious burns if touched. Most users are aware of the dangers, and it's advisable to ensure that children are warned not to touch any valve equipment that isn't protected.
One thing I advise everyone to ensure is that you do not wear a ring, bracelet (including watch band) or long neck chain when working on valve gear. Rings and bracelets can get caught on parts of the chassis, making it hard or (perish the thought) impossible to withdraw your hand if you get a shock. Neck chains can touch (and/ or short circuit) high voltages, and can be very dangerous. Anyone who claims that you should keep one hand in your pocket to prevent a hand-to-hand shock (so current is passed through your heart) has never fixed anything, and is talking through his/her hat. You can't do anything useful with one hand. However, you must remain vigilant. It's almost certain that you will receive an electric shock at some time if you work on a lot of valve gear, and if you are careful and sensible you'll live to receive another.
Some people recommend the use of an isolation transformer. In a word ... don't! This is a myth that's been around for longer than I have, and it's flawed thinking at its worst. An isolation transformer should be used only if you are working directly with the mains (not the secondary voltages provided by a power transformer), and even then with extreme care. An isolation transformer completely disables your workbench safety switch (you do have one, don't you?), so if you touch the mains and something else in the chassis at the same time, the safety switch won't trip and you may be killed. When working on secondary voltage (e.g. valve amp high tension) circuitry, the isolation transformer does absolutely nothing to make it 'safer'. However, if you don't understand the proper usage of an isolation transformer you may become complacent - complacency and electricity are not compatible with life!
The 'death capacitor' (or Death Cap) was used in many guitar amps and early AM radios, almost always those made in the US or intended for the US market. It's only comparatively recently that global trading has allowed these old guitar amps in particular to 'escape' to 230V countries in reasonable numbers. While the capacitor used was typically rated for 400 or 600V DC, the dielectric would usually withstand 120V AC without failing. That does not apply with 230V AC. Almost all DC capacitors will eventually fail if used across an AC voltage of more than ~250V peak (177V RMS). The reasons are complex and are not covered here, but the cap must be removed regardless of mains voltage.
When used as shown below, this practice is no longer permitted under any regulations in any country on earth, but there is also no specific requirement to remove the cap if found. The sensible technician will always remove the death capacitor and fit a 3-core mains lead with a safety earth and 3-pin plug. The non-sensible technician could find himself/herself at the wrong end of a wrongful death or manslaughter charge if someone dies because this deadly arrangement was left in place. Even when the practice was widespread, it was limited to 120V countries, and as far as I'm aware it would have been unlawful elsewhere because it's so dangerous. It's now explicitly prohibited in most countries.
Consider that a 50 year-old guitar amp has 50 year-old insulation, and unless upgraded, that's something I'd trust almost as far as I can kick a piano. The easiest and least intrusive upgrade is to fit a 3-core mains lead, 3-pin plug, and earth the chassis securely with the green/ yellow (or just green) protective earth conductor. If fitted, the 'death cap' must be removed to ensure compliance with modern safety standards. Many owners of vintage gear don't like making changes, but safety must override all other considerations. Owning a completely original vintage amp that kills you is not something you should aspire to.
When the cap fails, the failure mode is almost always a short circuit, followed by the cap exploding and spreading metallised film everywhere.
Even today, there is argument on the Net as to whether the 'death cap' is a safe practice or not. A great deal has been written, and much of that is either complete nonsense or shows that the author doesn't actually have a clue. It's unfortunate that anyone can post a video and claim to be knowledgeable in the field they discuss, when they actually don't know what they are talking about. There is one (and only one) answer to the question "Should the death cap be removed?", and that is "YES!". There's no room for "Maybe" or "Sometimes" or anything else that implies it may be optional. Many of the people who have 'investigated' the death cap are unqualified, have no idea why it was used, and their opinions don't count.
Because the amps were wired as if they were Class II (but without the additional insulation required), the chassis would normally float at some voltage between zero and perhaps 110V RMS or so. This always had the ability to cause a 'tingle' or even a 'bite' if the musician's lips touched an earthed microphone. The small current could also create an unacceptable hum level - especially with guitars having no internal shielding (no, shielding does not 'ruin' the tone). By switching the 'death cap' so that the chassis was referenced to the neutral via the capacitor, hum (and/ or 'bites') could be reduced dramatically. The death cap acts as a low impedance path for voltages induced into the chassis by stray capacitance (mainly from mains wiring and the power transformer).
The comparatively high value of the death cap means that it creates a capacitive voltage divider, which (capacitively) connects the chassis to neutral (or active/ live if the switch is in the wrong position!). The value of stray capacitance varies widely, depending on the internal wiring layout. Assuming 100pF as shown, the death cap will reduce the 60V RMS to less than 200mV RMS if switched to the neutral. If switched to the live by mistake (or deliberately), you'll get close to 120V on the chassis! The current can exceed 2mA with 120V mains, greatly exceeding the limits for 'current limited' circuits (0.7mA peak). The situation is much worse with 230V mains, and as already noted is likely to be lethal when (not 'if') the capacitor fails.
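The divider action can be sketched numerically. The 100pF stray figure is from the text; the 100nF death-cap value is an assumption of mine for illustration (the actual value is shown only in the figure, and 47nF parts were also common):

```python
import math

# Capacitive divider formed by stray capacitance (mains wiring / transformer
# to chassis) and the 'death cap' to neutral.  100 pF stray is from the text;
# the 100 nF cap value is assumed here for illustration.
def chassis_voltage(v_mains, c_stray, c_cap):
    """Chassis voltage: the two capacitances form an AC voltage divider."""
    return v_mains * c_stray / (c_stray + c_cap)

def cap_current(v_rms, freq_hz, cap_f):
    """RMS current through a capacitor across an AC voltage: I = 2*pi*f*C*V."""
    return v_rms * 2 * math.pi * freq_hz * cap_f

print(round(chassis_voltage(120.0, 100e-12, 100e-9), 2))  # ~0.12 V on chassis
print(round(cap_current(120.0, 60.0, 100e-9) * 1000, 1))  # ~4.5 mA if switched to live
```

With the cap switched to the live side, the full mains voltage drives that same capacitance into anything (or anyone) touching the chassis, which is why the fault current comfortably exceeds the 0.7mA peak limit mentioned above.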
Checking many US 'brand-name' schematics will show that the death cap was very common, and it's probable that most such amps are still in use. People don't discard name brand amps - they are sold, refurbished (perhaps), sold again, and continue to be in use for decades after they are built. While electrocution is (hopefully) fairly unlikely as long as they are used only with 120V mains, the real problems arise when they are on-sold worldwide, with most countries using 230V mains. DC capacitors are dangerous when used with 230V AC, and they will fail at some point. It seems that the practice of using the death cap continued until some time in the 1980s, so there will be plenty of amplifiers with it still fitted in common use. There is one (and only one) way to wire a guitar amp's mains input, and that's Class I, with the chassis earthed via a 3-core mains lead.
The bottom line is that the death capacitor is well named. It's dangerous and unsafe with 120V mains and exceptionally dangerous (and potentially lethal) elsewhere. This wiring is not permitted in new equipment anywhere on this planet (other galaxies might have different rules). Use of a DC capacitor (of any voltage rating) guarantees eventual failure with 230V mains, and the only capacitor allowed in this role (between active or neutral and chassis) is a fully safety certified Class Y capacitor of (usually) no more than 10nF. All 120V countries now have exactly the same requirement - the only capacitor that may be connected between either mains lead (active or neutral) and the chassis or other user-accessible conductive parts is a Class Y component. Capacitors connected between active and neutral (not earth/ ground!) must be either Class X or Class Y. Class X are more common in this role, as larger values (i.e. >10nF) are often used between active and neutral, and many suppliers don't stock Class Y caps above 10nF. The most common value is around 2.2nF (or less); a 2nF cap allows a maximum RMS current of about 145µA at 230V/ 50Hz, or 91µA at 120V/ 60Hz.
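Those leakage figures follow directly from the capacitor's reactance (I = 2πfCV); the currents quoted correspond to a 2nF part, which is my assumption for the worked numbers here:

```python
import math

# RMS current through a Class Y capacitor from active (or neutral) to
# chassis: I = 2*pi*f*C*V.  A 2 nF value is assumed for these figures.
def y_cap_current(v_rms, freq_hz, cap_f):
    return 2 * math.pi * freq_hz * cap_f * v_rms

print(round(y_cap_current(230, 50, 2e-9) * 1e6))  # ~145 uA at 230 V / 50 Hz
print(round(y_cap_current(120, 60, 2e-9) * 1e6))  # ~90 uA at 120 V / 60 Hz
```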
Double Insulation or Class II (Contributed By Phil Allison)
Class II appliances are claimed to possess safety advantages over regular, earthed ones, but this is not always the case with audio and video equipment.
Contrary to expectation, relying on the earth conductor is itself a safety hazard. Achieving improved safety, without reliance on an earth conductor, is WHY Class II construction was developed.
How Class II Appliances Achieve Better Safety
The basic idea behind Class II construction is that exposed metalwork is made to simply float - it connects to nothing and hence is no more hazardous to touch than any other metal object. This applies to the external case as well as to internal wiring that is made accessible to users through connectors and the like. There are numerous design rules that must be complied with when producing a Class II appliance, so that internal current-carrying wires simply cannot come into contact with exposed metalwork that houses the unit or any external connections. Two layers of insulation around live parts is the norm, but extra-thick insulation is also accepted.
Class II construction rules allow for AC supply transformers overheating or even burning out without breaching insulation barriers. User-accessible fuses cannot be relied upon, and are not.
Temperature cut-offs and one-time thermal fuses are commonly used to meet Class II safety requirements when transformers are used. These devices are specified to open the AC supply circuit before the temperature reaches a point where the primary-to-secondary insulation is likely to become damaged.
Correct operation is verified in the lab by progressively overloading sample transformers while monitoring their internal temperatures. Even with a deliberate short on the secondary, failure of the primary-to-secondary or other insulation is not permitted.
Connecting Class II And Earthed Appliances To Each Other
Although prohibited by the rule 'Class II - do not earth', linking Class II and earthed items of audio and video gear is done routinely via the shielding on signal-carrying cables. Though users enjoy a great bonus by eliminating ground loop hum, doing this eliminates all the safety advantages of Class II and allows for a horrific possibility.
A potentially lethal hazard occurs if ever an earthed appliance in such a system becomes live on its chassis or internal ground circuit - the fault condition will then pass the full AC supply voltage onto the exposed metalwork of each and every Class II item in the system.
As shown in Figure 6, this can happen merely because a mis-wired but quite functional supply lead (IEC or hard wired) is used with an AC outlet that has the otherwise harmless error of reversed Active and Neutral.
While the following may seem unlikely, most service techs will have seen similar scenarios with mains leads that have been 'repaired' by unskilled people. Reversed active and neutral are surprisingly common, especially in older houses and venues, or where unskilled people have performed 'upgrades' to existing wiring. Not everyone is capable of following simple colour codes and/ or identifying which lead is which in an installation (compounded by older wiring using different colour codes).
The incorrectly wired plug shown will work more-or-less 'normally' in a correctly wired outlet, but it will trip the safety switch - if one is present. Without a safety switch, it's probable that no-one would ever realise that the lead is mis-wired unless a tester is routinely used to verify that all leads used are wired properly. While this might happen with a touring band, it most certainly will not happen in a private residence, and the fault will go un-noticed until a mis-wired outlet is used. The combination is then deadly.
While I've shown an Australian mains outlet and plug, the same principles apply worldwide. It's nothing to do with the style of the connectors used, only the way they are wired.
When Class II Is Not Safe
There are many situations where Class II items should NOT be used because spillages, rainwater ingress or physical damage are likely. Portable Class II appliances can become a serious hazard if used inside bathrooms. It is simply left to the good sense of users not to use Class II appliances in hazardous conditions.
Guitar amplifiers and mixer/ amplifiers are items that should never be built as Class II. Typical live music environments often involve careless handling of beverages while outdoor performances run the risk of rain soaking the stage and equipment. The chance of a chassis becoming live while a performer holds onto the metal strings of a guitar or the handle of a microphone is way too high.
Changing Class II Appliances To Become Earthed
In general you can replace the two-core lead of a Class II appliance with a three-core one, as long as you also remove all markings that indicated the item was previously Class II. I would not hesitate to modify a Class II mixer/ amp if one was on my bench, as doing so might save someone's life.
FYI: Yamaha sold (and may still sell) Class II audio items including mixer/ amps that had to be changed to become fully earthed, because vocalists were receiving nasty shocks on their lips from microphones. Significant AC voltage was being coupled onto the metalwork and circuit common by capacitive leakage in the internal, Class II power transformer. Performers with amplified guitars were the most affected, as their bodies were well earthed via the steel strings.
My thanks to Phil for his contribution. As he's noted, Class II relies on the common sense of the user in many cases, but unfortunately, common sense is often surprisingly uncommon. It doesn't help when manufacturers (and those who devise the rules) fail to think ahead, and make assumptions that can't be realised in practice. The requirement that Class II equipment must not be earthed is fine in theory, but fails to consider reality. Ideally, an entire system should be Class II throughout, but some products that people use routinely are Class I (many preamps and power amplifiers being cases in point), so using them with a DVD or CD player (usually Class II) is actually breaking the rules. Using optical (TOSLINK) S/PDIF connections is fine, because they use a non-conductive fibre optic 'cable'. However, few DIY preamps have TOSLINK capabilities, and an optical receiver is needed for every Class II source, which also must have an optical output. Somehow, I doubt that will happen any time soon.
One of the most difficult questions that may arise concerns DIY Class II builds. While it's theoretically possible, in general it's not possible to ensure that all requirements are satisfied. You may be able to purchase small transformers with the appropriate safety ratings and an internal thermal fuse, but that alone isn't enough. Ensuring that all design rules are satisfied isn't something that a DIY person can do, largely because the specific rules that may apply are unavailable (standards documents again!). Class II appliances (by definition) should not be earthed, yet this is inevitable because a preamp will be connected to a power amp, and Class II power amplifiers are well beyond the capabilities of most hobbyists. Obtaining a certified double-insulated power transformer will often be well-nigh impossible (very, very few toroidal transformers are Class II), and Class I is the only sensible option.
This makes a Class II preamplifier non-compliant as soon as it's connected to the power amp (or any Class I source), because you have just earthed (grounded if you must) a Class II appliance, which is against the rules. In many respects, the use of Class II for hi-fi equipment is at best naive, and at worst potentially dangerous. It should be apparent that this hasn't been thought through by the 'authorities' who devise these rules, and there are probably very few home built systems (and few commercial systems as well) that are Class II throughout and don't use the mains earth at all.
Because this is a difficult question, there are (and can be) no easy answers. In general, Class I is the easiest to implement, even if the internal electronics aren't directly connected to the chassis. Quite obviously, it's absolutely essential to ensure that active/ live, neutral and earth/ ground are all connected properly. If at all possible, get someone else to double check them for you, as it can be surprisingly easy to overlook a mistake that you made yourself. A visual check is not enough - use a meter to verify that there is conductivity from and to the correct pin, wire, chassis, etc. Make sure that the earth connection to the chassis is done securely (with a 4mm or equivalent metal thread screw, and two nuts - the second is a locknut) so that the connection cannot come loose.
There must be good electrical conductivity between the chassis and any panels, whether removable or not. If necessary, use a wire to join panels to the chassis if the painted or anodised finish can interfere with the conductive path between different parts. This can be a pain to achieve in some cases, because equipment enclosures are often 'general purpose', and the manufacturer and supplier expect that the end user will know what safety precautions are required.
This doesn't mean that you can't achieve Class II insulation for a DIY project, but it is difficult. You may want to consider 'SELV' (see Voltage Classes) as a solution, using an approved wall supply (AC or DC output) to provide the power. That means that your project is as close to 'inherently safe' as you can make it, since all hazardous voltages are confined within the external wall supply, and your electronics (and chassis work) no longer have to comply with any of the safety standards that may otherwise be irksome to apply. This isn't isolated to DIY - many commercial products use the same strategy so they can avoid (some) regulatory barriers to the sale of their products.
It may be imagined that if an approved Class II transformer is used, there's little difference between Class 0 (basic insulation, no earth) and Class II, but the devil is in the details. For any product to be classified as Class II it must use double or reinforced insulation for all internal mains wiring. That means that any wiring to power switches (which must also be approved to Class II standards) must also be double-insulated, so the usual practice of using single-insulated mains cable internally is not acceptable if it is in contact (or may come into contact) with any conductive part of the enclosure. Additional (approved) sleeving is necessary to provide the second layer of insulation required, and that must be used to ensure that there are always two independent insulation barriers between mains and chassis. For the uninitiated, this can prove to be somewhere between difficult and impossible, because you can't get the information to prove that the insulation is up to the required standards.
So, while you may technically be able to satisfy the requirements, there are no test reports to prove that the equipment qualifies as being 'truly' double-insulated. It would be a very brave (or perhaps very foolish) DIY hobbyist (or even DIY 'master') who would adorn the back panel with the double-square symbol that identifies Class II products. I've been building electronic products most of my life, and I certainly wouldn't do it. All mains powered equipment I've ever built is Class I, and I'm quite happy with that.
I came across this as I was making up some short IEC mains leads for my test bench gear. I cut IEC cables to get 2 × 400mm cables, added a standard Oz 3-pin plug to one, and an IEC socket to the other, making two IEC mains leads. When I cut and stripped the one shown in the photo, I couldn't believe my eyes! The printing on the jacket claims 0.5mm² area, which is already too small for 10A (as marked on the plug). When I measured it, it was 0.5mm diameter (give or take), so the area is about 0.196mm², making it suitable for barely 2A. I tested it at 2A, and it became noticeably warm after only a minute or so. I only tested one lead of the three - with the active (live) and neutral both carrying 2A in normal operation it would (and did) get hotter (and faster).
Measurement of the diameter was flawed because the wire is springy, and it wasn't possible to get an accurate reading, so ...
Measuring a single strand showed a diameter of 0.09mm - an area of just 0.006362mm². There are only 10 strands in each conductor, giving a total area of 0.064mm², well below the calculated figure above (0.196mm²). By comparison, a 'real' 10A cable has (typically) 32 strands of 0.19mm copper wire, with a total area of 0.907mm² - close enough to the claimed 1mm². There is undoubtedly some small error in my measurements as I didn't use a micrometer, but a dial caliper. This is my tool of choice as it doesn't use batteries that are always flat when you need it.
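The area calculations are simply strands × π(d/2)²; a quick sketch reproducing the figures above:

```python
import math

# Cross-sectional area of a stranded conductor from strand count and strand
# diameter: area = n * pi * (d/2)^2.  Figures are the measurements above.
def bundle_csa_mm2(strands, strand_dia_mm):
    return strands * math.pi * (strand_dia_mm / 2) ** 2

print(round(bundle_csa_mm2(10, 0.09), 3))  # fake lead: ~0.064 mm^2
print(round(bundle_csa_mm2(32, 0.19), 3))  # real 10 A lead: ~0.907 mm^2
```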
The end-to-end resistance (both mains conductors in circuit) was 3.2Ω, which is scary. A 10A mains lead should not have more than ~30mΩ/ metre (for each conductor), and many will be less than this. All the 'proper' leads I checked have an outside diameter (the sheath) of at least 6.5mm; the dangerous lead is only 5.3mm in diameter. There is nothing about this Chinese lead that meets expectations! In the photo, the wire size is shown as 0.2mm², but that was annotated before I had measured each strand and calculated the true area (about 0.064mm²).
To add insult to injury, the blue wire was the active and the brown(ish) wire was neutral - the opposite of what is required - and active/ neutral swapped places from end to end. It goes without saying that there were no approval numbers on the cable or the connectors. Mains cables require mandatory approval in Australia (along with 'external power supplies' [plug-packs etc.] and a number of appliances), and it's an offence to sell any prescribed/ declared product without approval number(s) printed on (or moulded into) the cable, plug, socket, appliance, etc.
By way of another comparison, I checked the really thin Figure-8 (zip cable) sold as 'speaker wire'. Each strand of that is 0.12mm diameter (0.0113mm²), with 14 strands in each conductor. That's a total area of 0.158mm² - almost 2.5 times the area of the Chinese cable! No-one would use this cable for mains (and it would be illegal to do so), but it's capable of more current, and has thicker insulation on each conductor (2mm diameter).
So, let's tabulate the results so you can see at a glance the differences between the Chinese travesty and a 'real' 10A mains lead.
Characteristic (1 Metre) | Real | Fake
Current Capacity (Claimed) | 10 A | 10 A
Current Capacity (Actual) | 10 A | < 1 A
Conductors (Number / Diameter) | 32 / 0.19 mm | 10 / 0.09 mm
Claimed Cross-Sectional Area (CSA) | 1 mm² | 0.5 mm²
Actual CSA (Measured/ Calculated) | 0.907 mm² | 0.064 mm²
Resistance (2 Conductors In Series) | 46 mΩ | 3.2 Ω
Cable Dissipation At 10 A | 4.6 W | 320 W (Dangerous!)
Outer Diameter | 6.5 mm | 5.3 mm
Conductor Outside Diameter | 2.42 mm | 1.7 mm
Earth Colour (Mandatory) | Green/ Yellow | Olive-green
The results are damning - the Chinese 'Fake' cable is grossly under-rated for its claimed current rating and is positively dangerous. If an unsuspecting user were to use this cable with a high-current appliance, there is a real risk of fire as a result of insulation failure. The earth (ground) lead is insufficient to conduct fault current to ground, because it's the same as the others (grossly under-rated). I've never come across a mains lead this dangerous before - it's a recipe for disaster. I can't even begin to imagine how anyone, anywhere thought that this was appropriate for use with mains current. The cable is fitted with a 3-pin, 10A mains plug and a 10A IEC C13 socket (neither has approval numbers for Australia), and is marked as shown ...
XD · · · · · POWER CABLE P.V.C 3G 0.5mm² (U-2005)
Authorities regularly target 'flea-markets' and other places where unapproved (and sometimes literally lethal) goods are sold. Their job is made just that much harder by the interwebs of course, because people can import directly and sell dodgy products on-line. An eBay account or website can be shut down, but the sellers will just pop up again somewhere else and/ or under a different name. Despite the best efforts of the authorities ('Fair Trading' or similar government institutions), the supply of unapproved products just keeps on giving. I suggest that you also read Dangerous Or Safe? - Plug-Packs (aka 'Wall Warts') Examined to see the scope of the problem.
As a side-note, it's expected that almost all 'audiophool/ high-end' power cables/ cords (or 'chords') sold here in Australia are illegal, as they will not have the required approvals. Most will probably be safe to use, but the claims made for their 'improved sound quality' are fraudulent. I don't know of any hi-fi retailer who's been audited though, let alone fined (and the fines can get very costly!).
Your only real option is to a) understand that very dangerous mains leads exist, and b) know (or learn) what to look for. Unfortunately, there will be countless people who are unaware of the dangers, and the whole idea of the standard (10A) IEC lead is that it is interchangeable. Most 'normal' users will imagine that they are fully interchangeable, and indeed, this is supposed to be the case. Abominations like the one shown can easily cause a fire if subjected to their rated current by an electric jug or kettle - typically up to 2,200W (2.2kW) in Australia for the 10A rating, with a (small) safety margin.
A proper 10A mains cable in Australia (1mm² area) has an equivalent diameter of ~1.13mm (based on a solid wire), and a resistance of about 17mΩ/ metre. Measuring the diameter of a multi-strand cable isn't easy, so my measurements are approximations. At 10A, a 1m (proper) cable will dissipate about 3.4W (assuming current in both mains conductors). The Chinese abomination should have a resistance of about 270mΩ/ metre if it were made from annealed copper (using its actual rather than claimed area), but it's not! I measured a single conductor at 1.6Ω for 1.1 metres, so 1.45Ω/ metre - more than five times what it should be! I have absolutely no idea what material the internal wire is made from, as its resistance is higher than anything I've used other than dedicated resistance wire. It's not magnetic, but it is slightly 'springy' (and difficult to twist together), indicating that it's an alloy of some kind. My best guess is brass (high zinc content ≥30%), based on its resistance and appearance.
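The copper figures can be checked directly from R = ρ/A, using the standard resistivity of annealed copper (the areas are those measured earlier):

```python
# Conductor resistance per metre from resistivity and cross-sectional area:
# R = rho / A.  Resistivity of annealed copper is a standard figure.
RHO_COPPER = 1.72e-8  # ohm.metre

def r_per_metre(area_mm2, rho=RHO_COPPER):
    """Resistance per metre of a conductor of the given cross-section."""
    return rho / (area_mm2 * 1e-6)  # convert mm^2 to m^2

print(round(r_per_metre(1.0) * 1000, 1))    # 1 mm^2 copper: ~17.2 mOhm/metre
print(round(r_per_metre(0.907) * 1000, 1))  # the 'real' lead's conductors: ~19 mOhm/metre
# The fake lead measured ~1450 mOhm/metre - far beyond any copper conductor,
# whatever its area, which is why the material can't be copper.
```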
At 10A, the cable will dissipate 10² × 3.2Ω = 320W. That's not a misprint. It gets noticeably warm with only 2A (12.8W), and power is related to the square of current. My high-current test transformer can't provide enough voltage to force 10A through this rubbish - the end-to-end voltage needed is 32V, and that's how much voltage would be lost across the cable at 10A. It wouldn't last long though, as the cable will almost certainly either fuse or catch on fire (I'm not joking) rather quickly.
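The dissipation figures follow from P = I²R (with the voltage drop across the cable as V = IR), using the measured 3.2Ω end-to-end resistance:

```python
# Dissipation and voltage drop in the faulty lead at various currents, using
# the measured end-to-end resistance of 3.2 ohms (both conductors in series).
def lead_dissipation(current_a, resistance_ohm):
    power_w = current_a ** 2 * resistance_ohm  # P = I^2 * R
    drop_v = current_a * resistance_ohm        # V = I * R, lost across the cable
    return power_w, drop_v

p10, v10 = lead_dissipation(10, 3.2)
print(round(p10), round(v10))  # 320 W dissipated, 32 V dropped at 10 A
p2, v2 = lead_dissipation(2, 3.2)
print(round(p2, 1), round(v2, 1))  # 12.8 W, 6.4 V - noticeably warm even at 2 A
```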
For comparison, I measured a 1m length of 0.75mm² mains cable, and obtained a resistance of 29mΩ/ metre. Although this is a little higher than the 'official' figure (~23mΩ/ metre), that's most likely due to the fact that the IEC connector was included, adding a small extra resistance. Compared to 3.2Ω total resistance for the 'Chinese Menace', it's clearly nothing to be concerned about.
Please be aware that this rubbish not only exists, but can appear anywhere. There's a YouTube video of a cable that is virtually identical, but fitted with a US mains plug. It also had active (live) and neutral swapped, and part of it can be seen to catch on fire with a current of only 6A. I won't provide the link here, as I generally avoid YouTube links as a matter of course. The only thing I tested on the cable in question that was a 'pass' was its insulation strength - at least until it melts at high current.
Some of the resistance values mentioned were determined using the Wire Resistance Calculator, which is a useful tool for verification of cable resistance for various materials. Other resistance figures were measured.
+ + +This is one of several articles on similar topics on the ESP website, and I make no excuses for presenting the information differently in the various articles. It's desperately important that hobbyists (and ideally the general public) understand the risks involved, and are aware of the requirements for electrical safety. At best, nothing will happen if you do something wrong (or non-compliant), but at the other end of the spectrum a poorly conceived idea can lead to serious injury or death.
+ +Electrical safety is far more important than any other factor in your final project, and if you don't know what you are doing the consequences can be dire. It is (IMO) a travesty that standards organisations worldwide charge dearly for a copy of the very information that ensures that constructors know what is required to ensure compliance. It's usually impossible to even obtain a 'summary' that explains the general requirements and/ or principles that apply. This really isn't good enough, but it's been the same for as long as I can remember.
+ +Anyone who is working with mains must have a thorough understanding of the safety (and legal) requirements where they live. In some developing countries regulations are often lax, and may not be enforced by anyone. This doesn't mean that you can do whatever you like, such as build and use Class-0 equipment with no safety precautions other than basic insulation. As an individual who should understand electrical safety, it's up to you to ensure that anything you build or repair is safe. Remember that it's usually not only you that uses the equipment, so your partner or children are also at risk if you don't take due care.
+ +There is also a risk of fire if an electrical appliance (or its mains lead) fails. While this may seem uncommon, it probably happens more often than you might imagine. Fuses must always be the correct rating, and the fuse holder has to be in good condition to ensure proper contact. If there is any doubt about the fuse holder's condition, replace it, and remember to use heat-shrink or other plastic tubing to protect against accidental contact. The fire risk is greatly reduced by proper fusing, but there are some possibilities that could allow a fire to start without blowing the fuse. Of these, a sustained electric arc is not uncommon, and it is more likely where high voltages are used. Regulations worldwide address this risk, and it is covered by the test methods prescribed for electrical products.
+ +Repairers need to be aware that as the last 'qualified' person to work on a piece of equipment, you may be held liable if someone is injured or killed because of a fault. This means that if a customer brings in unsafe gear to be fixed, it's up to the repairer to make it safe before it's returned. The customer may object, and the only safe option is to simply cut off the mains cable and hand it back. I did this a number of times when I was repairing equipment, and while it certainly annoys the (now ex) customer, you are protected against prosecution if you can demonstrate that you disabled the unsafe product. This makes the customer the problem, not you. It may be wise to take a photograph of the cut-off lead as proof should it be needed - this is very easy now (but was less so in the 1970s).
+ +As described in Section 10, you also need to be vigilant when it comes to mains leads. If the cable seems to be thinner and/or more flexible than you're used to, check what's printed on the cable itself, and verify that the CSA (cross-sectional area) meets the minimum required for 10A capacity (all full-sized IEC connectors are designed to be used at up to 10A). If in doubt, measure its resistance, and if it doesn't measure well below 0.1Ω, cut off the connectors to prevent anyone else from using it. No-one wants their house to burn down because of a $2 dodgy mains lead.
+ +Electrical safety is one of those things we tend to take for granted. We don't expect to get an electric shock from anything we use, so it is often not at the forefront as a major consideration. It doesn't matter if you are an inexperienced amateur or someone who's done electrical wiring all your life. Anyone can make a mistake, and thorough testing is always necessary to verify that what you've done is safe to use. Electricity doesn't care one way or another, but it will let you know if you screw up!
+ + +![]() |
Elliott Sound Products | Electronics Maths Functions |
![]() ![]() |
Before calculators and computers, many mathematical functions were performed using operational amplifiers. They got that name because they can perform mathematical operations, such as addition, subtraction and comparison. They are now commonplace, and are generally just called 'op-amps' or 'opamps'. They revolutionised many mathematical computations, as they could come up with an answer very quickly - much faster than people could manage.
Very early 'computations' used mechanical means, but these must be (almost by definition) complex and delicate. Possibly the best known are Babbage's difference engine and his later analytical engine - two distinct machines, neither of which was completed in Babbage's lifetime. Ada Lovelace wrote what is widely regarded as the first published program for the analytical engine. There's a mountain of information on-line, and I don't propose adding even more. However, one can but marvel at the ingenuity and skill of these early pioneers of computing. Most of these early devices were never commercialised, although several well known (but not necessarily still operational) companies started life selling 'adding machines' (sometimes referred to as 'comptometers', although they are a separate class of mechanical calculator). For more info, see Adding Machine (Wikipedia).
Naturally, prior to the introduction of mechanical means, all maths were performed by the normal (human derived) processes of multiplication, division, addition and subtraction. Complex problems required great skill (and a lot of paper). The basis of maths as we know it is ancient, with some quite advanced methods developed to solve 'difficult' equations. Things we now consider to be trivial (e.g. square roots) had mathematicians of old trying to come up with the most elegant solution. It's educational to do a web search to see some of the history behind the maths we use today.
The subject of this article is the calculation of mathematical problems using analogue electronics. The simplest (by far) are addition and subtraction, which can be done very accurately using commonly available parts. More difficult are problems involving multiplication and division, and not only for electronic systems. These continue to be an issue for many people, and it has to be considered that there are people who are 'no good at maths' (often their own claim to avoid situations where they are expected to work out something). Don't expect to see quadratic equations, polynomials or other 'esoteric' maths constructs here - I've kept to the basics, so don't be scared off just yet.
Note that I will always use the term 'maths' (plural) rather than the US convention of 'math' (there really is more than one type). That notwithstanding, I'll only be looking at relatively simple circuitry (and therefore simple equations), and I must stress that the circuits included have all been simulated, but not built and tested. There are some good reasons for this, with the main ones showing up where multiplication and division are involved. Without closely matched transistors, simple log/ antilog amplifiers will be wildly inaccurate.
Many functions use (or used) logs and antilogs, something that I suspect will cause many readers to shudder at the very thought. Fear not, while I do explain logs and antilogs, a complete understanding is not necessary to follow the general reasoning. Until I was able to afford a calculator (in ca. 1969 IIRC), I used log tables for most electronics calculations I performed because it was far easier than long division (in particular). I also used a slide rule (does anyone remember those?). I preferred log tables because I found them to be easier and more accurate.
Of all the functions, square roots were always one of the most troublesome. Early calculators could square easily, simply by multiplying the number by itself (e.g. 12 × 12 = 144). Attempting square roots with the early circuits was much harder. I challenge anyone with a good maths background to work out how to extract a square root from first principles. It seems simple enough on the surface, but when it comes down to the nitty-gritty (i.e. actually extracting the square root) it's likely to fall straight into the 'too hard' basket. It was always easy using log tables - just divide the logarithm of the number by two, then take the antilog. The simple method I often use with a calculator (particularly for other less common roots) is shown below.
For anyone interested, I recommend that you look at Calculate a Square Root by Hand (WikiHow.com). Daunting doesn't even come close when you have 'odd' or 'irrational' numbers (an irrational number cannot be expressed as the ratio of two integers - i.e. a simple fraction such as 1/4 or 5/8). I'm not about to provide a maths lesson here, but I do recommend that the reader looks into some of the concepts. I also won't cover 'complex' numbers (J-notation [j=√-1], aka the 'imaginary' part of a 'real' number).
Cube roots are uncommon for analogue processing systems, which is just as well, because analogue circuits aren't very good at solving this type of problem. Calculators have many functions these days, and when you know how, you can perform most 'irksome' calculations with ease. Raising to a power '^' (may also be shown as x^y or y^x) is one such 'trick' that doesn't seem to be as well publicised as it should be. If it helps, you can take the nth root of a number ('X') with the formula ...
nth root = X ^( 1 / n )
A cube root is therefore ...
³√X = X ^(1/3)
For example ...
³√123 = 123 ^(1/3) = 4.973189833
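The 'raise to a power' trick is easy to verify numerically; the log-table method described earlier - divide the logarithm by n, then take the antilog - gives exactly the same result:

```python
import math

def nth_root(x, n):
    """nth root of x via the identity X^(1/n)."""
    return x ** (1.0 / n)

print(nth_root(123, 3))              # ~4.973189833, matching the worked example
# The log-table method: divide the logarithm by n, then take the antilog
print(10 ** (math.log10(123) / 3))   # same answer, ~4.973189833
```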
Why might you need these? If you know the internal volume of a speaker box, you can get the basic inside dimensions by taking the cube root of the volume in litres. The answer is in decimetres (1 decimetre = 100mm), so multiply by 100 to get millimetres. The final shape is determined by multiplying/ dividing the cube root by a suitable ratio (see Loudspeaker Enclosure Design Guidelines, Section 13, for the details).
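As a sketch of the box-dimension calculation (the 50 litre volume is just an assumed example):

```python
def cube_dimensions_mm(volume_litres):
    """Cube root of a volume in litres gives the side of a cube in decimetres
    (1 litre = 1 cubic decimetre); multiply by 100 for millimetres."""
    return (volume_litres ** (1.0 / 3.0)) * 100.0

print(cube_dimensions_mm(50))   # ~368mm - starting point for a 50 litre box (assumed volume)
```

The result is only the starting cube; a real enclosure would then be stretched and squeezed by suitable ratios while keeping the same volume.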
Consider too that an octave has 12 semitones, logarithmically spaced between (say) A440 and A880. The 12th root of two is 1.059463094, and raising that to the 12th power gives exactly two. You've just re-created the equally tempered musical scale. It's not within the scope of this article, but it is nonetheless something useful to know (well, I think so anyway). This is how the distance between frets is calculated for a guitar.
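A few lines confirm the semitone ratio and show the fret calculation; the 650mm scale length is an assumed (typical classical guitar) value:

```python
SEMITONE = 2 ** (1 / 12)         # 1.059463094...

print(SEMITONE ** 12)            # ~2.0 - twelve semitones make exactly one octave
print(440.0 * SEMITONE)          # ~466.16 Hz - one semitone above A440 (A#4)

# Fret n sits at L - L / 2^(n/12) from the nut (650mm scale length is assumed)
def fret_position(scale_length_mm, n):
    return scale_length_mm - scale_length_mm / (2 ** (n / 12))

print(fret_position(650, 12))    # 325.0 - the 12th fret is at exactly half the scale
```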
Everything shown here can be worked out with a calculator, and for most of the complex circuits that's how I verified that the circuit was behaving as it should. There are some exceptions of course, in particular the calculation of RMS from a non-sinusoidal and/ or asymmetrical waveform. However, if you follow the general idea through, hopefully it will make sense. These are not audio circuits for the most part, but the concepts are used in some audio circuitry. VCAs (voltage controlled amplifiers) are a case in point, especially those with a logarithmic response to the control signal (typically measured in dB/mV).
If you simulate these circuits shown, you may or may not be able to duplicate my results. Simulators from different vendors need different 'tricks' to make them work with odd circuitry (these definitely qualify). I use SIMetrix (Release 6.20d), and others will behave differently. The opamps were supplied with ±15V for all circuits unless otherwise noted, and supply bypass caps are not shown (they are essential for any real circuit).
You can be forgiven for thinking that analogue computing is no longer relevant. However, you'd be wrong, as there's a current resurgence in interest from academia and IC manufacturers. Just as I thought this article was almost complete, I received an industry email with an interesting story and a link to a startup (Mythic) extolling the virtues of analogue processing. It turns out that many of the major IC makers are also looking in the same direction, with an analogue front-end used for its speed, followed by digital processing to get the best accuracy. For example, if you were to read up on successive approximation ADCs you'd find that there can be many processing steps to get the answer. If an approximate answer is provided as the starting point, the number of steps can be dramatically reduced, saving time and reducing power.
An example is The Analog Thing (THAT). The design featured has a collection of the circuits described below, including integrators, summing amps, comparators and multipliers. There are also pots (potentiometers) to provide inputs, a patch panel to configure the processes and a panel meter to display results. There's a hybrid port to allow digital configuration, and 'master/minion' (aka 'master/slave') ports to allow multiple THATs to be daisy-chained for more computing power.
I expect this to be the beginning of a 'new era' of analogue computing, as researchers are looking at using analogue front-ends to AI (artificial intelligence) processors and many other processor intensive applications. Analogue processing can be very fast, while consuming modest power. Things like integration are difficult on a digital processor, but are dead easy with an opamp, a resistor and a capacitor. The same goes for differentiation. An analogue multiplier is blindingly fast, with some designed to operate at 100MHz or more. The same thing done digitally requires significant processing, which increases with the complexity of the numbers - integers are easy, floating-point 64-bit numbers far less so.
We can expect to see many more systems that use a hybrid analogue/ digital architecture in the coming years. The precision of digital isn't always necessary, and the speed of analogue may more than compensate in 'real world' applications. We have come to expect numbers to be accurate to 6 or more decimal places, because that's what we get from calculators. We very rarely need (or use) all those decimal places, and no-one will calculate a particular frequency to more than a couple of decimal places, and usually less.
Some of the examples shown have passed their 'best-before' date, in particular log/ antilog circuits. These were never particularly accurate, and even simulations (which have perfectly matched transistors and exact resistors and capacitors) have errors of more than 1%. It's usually impossible (or close to it) to set up an analogue computer to duplicate a calculation made previously, because of component tolerances, thermal drift and the effects of external noise (for example). However, when used appropriately, this won't matter at all if it allows a complex calculation to be performed to an 'acceptable' accuracy. No-one would expect to be able to calculate the trajectory of an artillery shell to the millimetre (for example), because the atmospheric conditions prevailing will have an effect that simply cannot be calculated (especially wind speed and direction).
The current focus appears to be on improving AI (artificial intelligence) techniques by using analogue processing in conjunction with digital analysis. The aim is to reduce the power needed (in watts) to compute the front-end system's responses to external stimuli (vision in particular), much of which is currently handled by power-hungry GPUs (graphic processing units). These feature massively parallel architecture to perform complex calculations. By using an analogue front-end, it is theoretically possible to reduce consumption from 100W or more to less than 10W.
Somewhat predictably, this is not something I will cover, other than this brief introduction. I suggest that if you are interested, do a web search, as there's a vast amount of information available. It's up to the reader to determine the usefulness of the information found - not all of it is likely to be accurate, and much of what I have seen is in general terms only. Most companies aren't about to reveal their trade secrets.
There are countless applications in electronics where we need to know if an input signal is 'greater than' or 'less than' a reference level. The absolute input level is usually not so important, but if the reference voltage is passed (in either direction), an indication is required. These can be set up to be very precise, and operation is generally assured if the input voltage is greater/ less than the reference voltage by only a few millivolts. Examples include clipping indicators (the signal voltage has exceeded the maximum/ minimum allowed), 'successive approximation' analogue to digital converters (ADCs) as used in many digital multimeters, or battery circuits where we need to stop charging above a preset voltage or disconnect the load if the voltage has fallen below a preset minimum.
Analogue Class-D amplifiers use a comparator to generate the PWM signal, and industrial processes use them for monitoring temperature, pressure, and many other processes that require on-off control (which may be many times per second). They are also used for lamp dimming (leading or trailing edge), heater/ oven temperature control and motor speed control. The device used for these processes is a comparator. There are ICs designed for the purpose (called comparators), but where speed is not a consideration, you can even use an opamp. Almost all comparators use an uncommitted collector output, and a pull-up resistor is required. Low values are faster, but consume more current. High value pull-up resistors are uncommon unless speed is not a requirement.
A 'composite' circuit is called a window comparator. The signal must remain within a specified 'window', defined by two amplitudes. The output is high as long as the signal remains within the upper and lower bounds that define the window. It can be broad (several volts between upper and lower bounds) or narrow - just a few millivolts. There are many projects on the ESP website that use comparators, and the ability to detect when a voltage has crossed the preset threshold is used in (literally) countless circuits in common use, both household and industrial. See Comparators, The Unsung Heroes Of Electronics for an in-depth article on the subject.
The examples show 'greater than' and 'less than' comparators, and a window comparator. The 'less than' function is achieved simply by swapping the inputs, and a window comparator has both 'greater than' and 'less than' functions. The output of the example shown remains high if the input is within the window (1.67V wide with the values shown). To change the window, it's simply a matter of increasing or reducing the value of RW. Note that both comparators in B) use the same output pull-up resistor (R4), and the outputs are simply paralleled. If you were to use opamps for the same function, the outputs would need isolating diodes, and the output level would be less than the main 5V supply voltage (opamps provide no level shift).
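The window comparator's logic is easy to model behaviourally. The bounds below assume an equal three-resistor divider across a 5V reference, which gives a 1.67V window consistent with the description; the actual resistor values are an assumption here:

```python
def window_comparator(vin, v_lower, v_upper):
    """True ('high') only while the input stays inside the window."""
    return v_lower < vin < v_upper

# Assumed: equal three-resistor divider across 5V -> bounds at 1/3 and 2/3 of 5V
LOWER, UPPER = 5 / 3, 2 * 5 / 3               # ~1.67V and ~3.33V (a 1.67V window)
print(window_comparator(2.5, LOWER, UPPER))   # True  - inside the window
print(window_comparator(4.0, LOWER, UPPER))   # False - above the upper bound
print(window_comparator(1.0, LOWER, UPPER))   # False - below the lower bound
```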
While an opamp can be used as a comparator, the reverse is not true. Comparator ICs almost always have an uncommitted 'open collector' output to allow level shifting, so the circuit can be operated at (say) 5V, but have a 12V (or more) output. Comparators have little or no compensation, and cannot be used with negative feedback. They have propagation delays that are much shorter than any opamp, and are designed specifically for the task of comparing, rather than amplifying.
Digital circuits can also use comparison, and it's a feature built into every programming language ever known. Not every process needs an actual measurement - often it's enough to decide whether the input is above or below a threshold. This can happen at any interval that's suitable. For example, a possible water tank overflow (or nearly empty) condition may only need to be tested each half hour (or longer for a large tank), where a dimmer circuit makes the comparison 100 (or 120) times/ second. A Class-D amplifier will make a comparison at anything up to 500,000 times per second.
Where noise is a problem, comparators are often used with positive feedback, arranged to provide hysteresis (a Schmitt trigger). This improves noise immunity, but it reduces the absolute accuracy of the detection threshold. It can still be made to operate at a precise voltage, but everything has to be taken into account (the reference voltage and output supply voltage). Where a particularly accurate detection voltage is required, it may be easier to make the reference voltage adjustable.
Hysteresis is a property of magnetic materials, where it takes more energy to reverse the magnetic poles than to magnetise them in the first place. It's also used with comparators, primarily to provide noise immunity. Several digital ICs (e.g. 74xx14, 4584) offer hysteresis, most commonly referred to as having Schmitt trigger inputs. A common example of mechanical hysteresis is a toggle switch, where the actuator has to be moved beyond the halfway point before the switch will operate.
In the example circuit, I've used an opamp, partly to show how they are used as comparators. With 12V supplies, the opamp's output voltage can be ~±10.5V. The voltage divider formed by R3 and R2 provides positive feedback, and has a division of 10, so the input voltage has to be greater than ±1.05V before the output will change state. The reference voltage is zero, as the inverting input is grounded. The input can have up to ±500mV of noise, but the output will still switch cleanly, without 'false triggering' caused by the noise. However, the switching levels are not centred on zero (the reference voltage) because of the hysteresis. This type of circuit is used when noise immunity is more important than absolute accuracy. The amount of hysteresis is determined by the ratio of R2 and R3. Increasing R3 improves accuracy but reduces noise immunity.
Note that with the arrangement shown, the source must be a low impedance. Any resistance/ impedance in series with the input effectively increases the value of R1, increasing hysteresis. This may mean that the circuit doesn't work with your input signal, which would be annoying. The inputs can be reversed (+in grounded via R2) and the signal applied to the inverting input. This reverses the output, so it will go low with a positive input, and high with a negative input. The trigger thresholds are reduced because the output voltage is divided by 11, so it will trigger at ±954mV.
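The thresholds quoted in the last two paragraphs follow directly from the output swing and the divider ratios:

```python
V_SAT = 10.5   # approximate opamp output swing (volts) with +/-12V supplies

# Non-inverting Schmitt trigger: positive feedback through a 10:1 divider,
# so the input must exceed Vsat/10 before the output changes state.
thresh_noninv = V_SAT / 10
print(thresh_noninv)            # 1.05 -> trips at +/-1.05V

# Inputs reversed: the feedback divider now divides the output by 11.
thresh_inv = V_SAT / 11
print(round(thresh_inv, 3))     # 0.955 -> trips at ~+/-954mV
```

Changing the feedback resistor ratio trades noise immunity against threshold accuracy, exactly as described above.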
Because an opamp was used, the circuit lacks the precision that can be obtained with a comparator. Even the relatively high slew-rate of a TL072 (13V/µs) means that to traverse the total supply voltage of 24V takes 1.5µs, where an LM393 comparator with a 1k pull-up resistor can swing the voltage in about 180ns (almost ten times as fast!). The LM358 is a low-power and economical opamp choice, but it's painfully slow. Rise and fall times will be around 35µs. Not quite enough time to have lunch while waiting.
Addition and subtraction are easy, and are as accurate as the resistors used (with a precision opamp). The basic adder is a common sight in audio, but as it's inverting, U3 is used to return to 'normal' polarity. Voltages add mathematically, so if In3 were -2V (for example), the resulting output is 900mV (((3+4)-(-2)) / 10). These circuits are very common in all types of analogue circuitry. Note that all stages are inverting, with the opamp's positive input grounded.
The 'divide by 10' function is included so that input voltages that add up to more than ~13.5V (the maximum available from the opamp) can be processed without error. The basic adder (U1) can have many inputs, and with the values shown you could have up to ten inputs without creating any significant errors. Unused inputs are ideally left 'floating' (not connected), as this keeps noise to the minimum - provided there are no long wires or PCB traces attached. The final outputs of multiple adders may be presented to a log amplifier (for example) so they can be multiplied or divided as needed by the circuit function.
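Behaviourally, the adder/subtractor chain reduces to a one-line function; this sketch only models the ideal transfer function, not the opamp stages themselves:

```python
def adder_subtractor(in1, in2, in3, scale=0.1):
    """Ideal behaviour of the two-stage adder/subtractor: In1 and In2 are
    summed, In3 is subtracted, and the result is divided by 10. Both opamp
    stages invert, so the output polarity ends up 'normal'."""
    return ((in1 + in2) - in3) * scale

print(adder_subtractor(3, 4, 2))    # 0.5 -> ((3+4)-2)/10
print(adder_subtractor(3, 4, -2))   # 0.9 -> the 900mV example from the text
```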
In Fig. 3.1 I've shown a separate inverter to obtain subtraction, but it can all be done with a single opamp. A differential input opamp stage is commonly used to add the signal voltages together, but cancel (via subtraction) any noise voltage present on the signal lines. It can also perform addition/ subtraction as shown next.
The output is equal to the difference between the voltages at In1 and In2. With the voltages shown, the output is 200mV, because the output is divided by 10. If R3 and R4 are made 100k (or R1, R2 are 10k), there is no division, so the output would be 2V. If both inputs are equal (at any voltage within the opamp's input voltage range) the output is zero. Should the negative input be greater (more positive) than the positive input, the output is negative. Both inputs must be from a low impedance source (ideally less than 100Ω) - opamps can achieve this easily. Any external resistance will cause an error in the output. The circuits in Figs. 3.1 and 3.2 work with AC or DC.
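The single-opamp subtractor can be modelled the same way (an idealised sketch; the 4V and 2V inputs are assumed, to illustrate the 2V difference):

```python
def difference_amp(v_in1, v_in2, gain=0.1):
    """Ideal single-opamp subtractor: output = (In1 - In2) * gain.
    gain = 0.1 represents the divide-by-10 resistor ratio described."""
    return (v_in1 - v_in2) * gain

# Assumed inputs of 4V and 2V, giving a 2V difference
print(difference_amp(4.0, 2.0))           # 0.2 -> 200mV out
print(difference_amp(4.0, 2.0, gain=1))   # 2.0 -> no division (equal resistors)
print(difference_amp(2.0, 4.0))           # -0.2 -> negative when In2 is greater
```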
Some readers will be old enough to remember using log/ antilog tables for multiplication and division. These are now a part of history, but for a long time they made calculations a lot easier before we were spoiled by calculators. Even early calculators didn't provide things like square roots, so a 'successive approximation' technique was adopted to solve these. Now, calculators can perform most operations, including complex numbers (aka 'J notation') that are used in electrical (and electronics) engineering. While this is possible with log tables, it's not something I'd recommend to anyone.
The earliest multipliers and dividers were single quadrant, meaning that all inputs and outputs were unipolar (usually positive). 'Quadrants' are covered below. The logarithmic behaviour of diodes or transistors was exploited in these early circuits, with the one shown in Fig. 4.2 being described (albeit briefly) in the National Semiconductor 'Linear Applications' handbook, published in 1980. There are many versions elsewhere on the Net, but many are highly suspect, and some don't work at all. The three caps (all 1nF) were included to make the simulated circuit stable. Without them it will oscillate, and the 'real thing' will be no different.
Logs are easy. Obtaining a logarithmic response from an amplifier only requires a resistor, an opamp and a transistor. However, the function is not the precise logarithm we expect from a calculator or the like. These circuits work because, with very carefully matched transistors, the log function can be reversed (almost) perfectly by the complementary antilog stage.
Below 50mV input, the combined output is 'undefined', but above that the functions of the log and antilog amps are complementary, so the output is the same as the input. It looks like you should be able to add a voltage divider or perhaps a series resistor to the emitter of Q2 to get division, and you can. Unfortunately, it's highly non-linear and not useful. The circuit only becomes usable when we add more opamps and transistors.
The circuit shown uses log amps for the three inputs, and an antilog amp for the output. When using logs, multiplication is achieved by adding the logarithms, and division is by subtraction. The answer is the antilog of the added (and/ or subtracted) results. The logarithm base (e.g. Log10, Ln [natural log, base 'e'], etc.) is immaterial - the result is the same. The ability to multiply and divide numbers is essential for any analogue computing system. These were used for ballistics calculations (e.g. military applications) and other processes before digital computing existed. It's probable that similar circuitry is still used in some systems, because it's comparatively low-cost, and can be very fast. However, like the Fig. 4.1 circuit, it doesn't work properly if any input is below ~60mV. However, if all inputs have the same voltage (not particularly useful) it will function down to about 10mV on all three inputs.
The transfer function of the complete Fig. 4.2 circuit is ...
Vout = ( Vin1 × Vin2 / Vin3 ) / 10
The log and antilog amps are neither 'natural' logs (base 'e') nor log10. The base is determined by the transistors, which are used as 'enhanced' diodes. While it is possible to use diodes, the dynamic range is severely restricted. In the above, and as simulated, if In1 is 5V, In2 is 3V and In3 is 1V, the output is 1.4937V (it should be 1.5V). Note that In3 must be 1V for simple multiplication, because if In3 were (for example) 0V, that would create a 'divide by zero' error, and the output will try to be infinite. It can't exceed the supply rail of course.
If In3 were made 0.5V, the output still follows the formula almost perfectly, giving an output of 2.967V (it should be 3, an error of 1.1%). To use a value we're all familiar with, if In1 and In2 are 1.414V (In3 at 1V), the output is 200mV (1.414² is 2, as 1.414 is the square root of 2 - √2). By applying the same signal to In1 and In2 with In3 at 1V, the circuit generates the square of the input. 2V input will result in 400mV output (4V/10). It's easy to see why the divide by 10 is included, because squaring any voltage over 3.87V would cause the output to (try to) exceed the opamp supply rails.
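The whole scheme rests on the identity exp(ln a + ln b - ln c) = a × b / c. An idealised model of the three-input log/antilog circuit, including the ÷10 scaling, reproduces the worked examples (the real circuit's transistor matching errors are ignored here):

```python
import math

def log_multiplier_divider(v1, v2, v3):
    """Ideal model: multiply by adding logs, divide by subtracting a log,
    take the antilog, then scale the output by 1/10."""
    product = math.exp(math.log(v1) + math.log(v2) - math.log(v3))
    return product / 10

print(log_multiplier_divider(5, 3, 1))          # ~1.5 (the simulation gave 1.4937V)
print(log_multiplier_divider(1.414, 1.414, 1))  # ~0.2 -> squaring, as 1.414^2 = ~2
```

Note that `math.log(0)` raises an error, which is the software equivalent of the 'divide by zero' problem when In3 is 0V.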
Log/ antilog circuits can use diodes, but they have a limited dynamic range, and one of the functions included above can't be implemented - division. There are many examples, but not all simulate properly, and some don't work at all. You need to select this type of circuit carefully if you wish to analyse it. They are not intuitive, and they all have limitations.
A basic understanding of logarithms is essential in electronics, especially where sound and light are involved. Human senses are logarithmic, as that's essential for us to be able to (for example) hear very quiet sounds, and not be completely overwhelmed by loud sounds. Our hearing has a range from 0dB SPL to around 130dB SPL, a range of about 3.16 million to one. Our other senses are also logarithmic (light, touch, etc.), and this is a huge evolutionary benefit. It allows us to experience an awesome range of sensations without 'overload'. The decibel is the best known of these log progressions, and we encounter it every time we work with audio electronics.
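The 3.16 million figure is just the decibel formula rearranged: a 130dB SPL range corresponds to an amplitude (pressure) ratio of 10^(130/20):

```python
import math

def db_to_ratio(db):
    """Decibels to an amplitude (voltage/pressure) ratio."""
    return 10 ** (db / 20)

def ratio_to_db(ratio):
    """Amplitude ratio to decibels."""
    return 20 * math.log10(ratio)

print(db_to_ratio(130))   # ~3,162,278 -> the ~3.16 million : 1 range of hearing
print(ratio_to_db(2))     # ~6.02dB for a doubling of amplitude
```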
Pitch perception (musical notes) is also a log function, with the Western 'equally tempered scale' being based on the 12th root of 2 (an octave is double or half the starting frequency). An octave is covered by 12 semitones. There are several (many) other scales that follow different rules, but the equally tempered scale (aka 'equal temperament') is one of the best known and widely used for 'Western' music. There's lots of info available that won't be repeated here, nor do I intend to discuss the 'just' scale (which is similar, more 'tuneful', but irrelevant here).
Analogue multipliers are often described by the quadrants they can handle. The simplest is a single quadrant, where all inputs and outputs are a single polarity. A two-quadrant multiplier allows for one input to be of one polarity only, with the other able to be either positive or negative. The output is also bipolar. The most useful are four-quadrant types, where both inputs and the output can be positive or negative.
The convention is that the inputs are designated 'X' and 'Y', and they can be single-ended or (more commonly with ICs) differential. The output is almost always scaled (generally reduced by ×10) so that the output doesn't saturate with high input voltages. Most also provide for an output DC offset. One of the earliest analogue multiplier ICs was the MC1495, a wide band four-quadrant type. The inputs had to be manually trimmed to minimise DC offsets, and the output scale factor could be changed from the default. I first came across these in the mid 1970s, as they were used in the original version of the Electronics World 'Frequency Shifter For 'Howl' Suppression', designed by M. Hartley Jones (see Project 204 for an updated version).
The datasheet for these ICs is very comprehensive, and shows the things it can be made to do. Of these, obtaining the square root remains a problem, but it's not insoluble. Squaring (which includes frequency doubling for AC inputs) is easy. There used to be quite a few analogue multiplier ICs, but the number has shrunk. Today, the AD633 is a 'low cost' version, and the AD834 is a high-speed version (and very expensive). The TI MPY634 is another (also expensive) but it includes some extra circuitry to allow square roots without an external opamp.
Type | Vx | Vy | Vo |
Single Quadrant | Unipolar | Unipolar | Unipolar |
Two Quadrant | Bipolar / Unipolar | Unipolar / Bipolar | Bipolar |
Four Quadrant | Bipolar | Bipolar | Bipolar |
'Simple' circuits like that shown in Fig. 4.1 are single-quadrant. All inputs to that circuit are positive, as is the output. While this works for basic calculations, it's very limiting for many other tasks that use multiplier circuits. As shown above, multiplication is easy, but division is somewhat less so. Many of the early circuits used logs and antilogs to compute the result. They require very carefully matched (and thermally coupled) transistors, but can use surprisingly 'pedestrian' opamps. Most of the simulations I did used TL072 opamps, and the results are 'satisfactory'. Unlike a calculator where the result is accurate to perhaps 10 decimal places, they are rather wildly inaccurate by comparison (but generally within 2% or so).
It could be argued that an opamp gain stage is a multiplier, since the input voltage is multiplied by the gain. However, this is inflexible, as one operand remains fixed. It can be made adjustable with a pot or switched resistors, but it's still recognised as a gain stage, not a multiplier. The same applies to voltage dividers or transformers. A true multiplier does what it sounds like - it multiplies two (or more) values together. The input(s) can be voltage or current, depending on the source transducer and what you are trying to achieve.
Four-quadrant multipliers have been available as ICs since the early 1970s, with the MC1496 balanced modulator/ demodulator and MC1495 wide band four quadrant analogue multiplier being good examples. The original purposes were mainly radio frequency, for tasks such as amplitude modulation and synchronous detection. The original datasheets made no reference to audio frequency applications, but it didn't take long before people discovered that they worked just as well at audio frequencies as RF.
The basis of (almost) all multipliers is the Gilbert Cell, using cross-coupled long-tailed pairs with a variable 'tail' current used to change the gain. Barrie Gilbert is said to have based his invention on an earlier design by Howard Jones (1964) - see Wikipedia for all the details. A greatly simplified version is used in Project 213, a DIY voltage controlled amplifier that uses a 2-quadrant multiplier. It could be argued that it's really a 1½-quadrant, because both of the inputs have to be positive, but the output is bipolar.
The following drawing shows an MC1496 multiplier, configured as a 'typical' modulator. Although RF operation is assumed, the 'carrier' signal can be audio, and either a variable DC voltage or a low-frequency sinewave can be used for modulation. These will provide gain control or amplitude modulation (tremolo) respectively. Predictably, when either input is at zero volts, the output is also zero (any number multiplied by zero gives a zero result).
The 51Ω resistors are intended for RF usage (50Ω is a common RF impedance), and can simply be increased to something more suited to audio. Around 10k will work just fine. Because of the way it works, the audio would be applied to the 'signal' input, and C1/ C2 would have to be increased to around 10µF to provide a low impedance. The modulating frequency might be a 2-15Hz sinewave applied to the 'carrier' input to obtain tremolo for a guitar or other instrument. The modulation input also requires a DC bias, otherwise there would be no audio without the modulation. If it were to be biased to 1V, the audio output without modulation will be the same as the input level. The modulation can be a maximum of ±1V with respect to a modulation input bias of 1V.
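The tremolo action described above is just multiplication: the audio is scaled by (bias + modulation). A small sketch of the idea, using an ideal multiplier with unity scale factor (all values illustrative):

```python
import math

def multiplier_out(signal, control):
    """Ideal two-input multiplier (the usual ÷10 scale factor omitted)."""
    return signal * control

# Gain seen by the audio over one modulation cycle, for various tremolo depths.
bias = 1.0  # 1V DC bias on the modulation input -> unity gain with no modulation
for depth in (0.0, 0.5, 1.0):
    gains = [bias + depth * math.sin(2 * math.pi * t / 100) for t in range(100)]
    print(depth, round(min(gains), 2), round(max(gains), 2))
# depth 1.0 swings the gain between 0 and 2, i.e. 100% modulation
```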
Note that the circuit shown operates as an amplitude modulator with suppressed carrier (radio buffs will understand this). With a 1MHz carrier and 1kHz modulation, the output contains the upper and lower sidebands at 999kHz and 1,001kHz, but the 1MHz carrier is not included (it's suppressed - hence the term). 'Traditional' amplitude modulation can be obtained by swapping the carrier and modulation inputs. A complete description is outside the scope of this article, but I encourage you to research this further if you think it's interesting. I think it is, but my interests extend well beyond audio.
The MC1496 is a four-quadrant multiplier, but the MC1495 would normally be the device of choice. I used the 1496 because its datasheet has the (simplified) internal schematic. This isn't provided for the 1495. These ICs have been obsolete for many years, and the modern equivalent is the AD633. This is a much better IC, and it's laser trimmed during production to minimise problems with DC offsets and to ensure it meets accuracy specifications.
As noted above, Project 204 (frequency shifter) uses a pair of AD633 multipliers, which improved the performance and ease of setup over the original using MC1495 multipliers. While the AD633 is listed as 'low cost', that's a matter of opinion. Personally, I don't consider an AU$30 IC to be 'low cost', but it is true compared to others costing over $100. For the purposes of explanation, the multiplier used in following drawings is 'ideal' (created as a non-linear function in the simulator). The transfer equation is (mostly) unchanged. The exception is Fig. 6.2, which is a reproduction of the Project 213 VCA.
The circuit is a bit of an odd-ball, because it doesn't really fit into the definitions of quadrants. Had the current sink (Q3, Q4) been referred to the negative supply, that would allow it to handle a bipolar input signal, but the control signal remains unipolar. By that definition, it's a 2-quadrant multiplier. I didn't design it like that because it doesn't work as well as the version I published (yes, I tested it), and the published version has the advantage of a control signal that's ground referenced.
Four quadrant multiplier ICs can be used for multiplication, division, squaring and square roots. Division and square roots require an external opamp. The square root circuit is still tricky, because a diode is needed to prevent latch-up that can occur if the input is zero or negative (even by a couple of millivolts). You can't take the square root of a negative number (or zero). Very careful offset control (or an ultra-low offset opamp) is required, or the circuit below can't take the root of any value less than 3mV (the answer is 54.77mV). Whether this is a problem or not depends on the application. It is limiting though, unless the input signal remains above the lower limit at all times.
The multiplier uses almost the same formula as shown above (Vout = Vin1 × Vin2 / 10), but the final divide by 10 is omitted. The diode prevents issues with zero or negative inputs. If an offset is applied (which must be temperature compensated), it's (theoretically) possible to take the square root of 1mV (which is 31.6mV), but expect a significant error at such a low input! The result will be reasonably accurate when the input is greater than 100mV (√100m = 316.2m).
The square root extractor is still capable of working accurately with less than 100mV input, and the lower limit of the circuit shown (as simulated) is 50mV (peak or DC) for passable accuracy. Using a Schottky diode for D1 may help, and the circuit can theoretically measure down to less than 50mV input (√50m = 223m), and it's acceptably accurate down to that level. There doesn't appear to be a sensible way to improve the performance beyond that lower limit. With some messing around, I was able to simulate a square root of 5mV (70.7mV) and get a result of 70.9mV. With this kind of circuit, there's a continual fight between man (me) and machine (the simulator software). Simulators often need to be 'tricked' into doing what they're told. The 'trick' in this case was to include Rin and Cin. These prevented any momentary excursion into negative territory which causes the circuit to latch-up. Zero and negative values are 'illegal' states for a square root extractor.
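The servo action of the square-root circuit (an opamp driving a multiplier until the squared output matches the input) can be mimicked numerically. This is only a toy model under assumptions of my own - the gain, step count and clamp merely stand in for the opamp, settling time and protection diode - but it shows why zero or negative inputs are fatal:

```python
def sqrt_extractor(vin, gain=0.5, steps=2000):
    """Crude model of the multiplier-in-feedback square rooter (inputs < 1V)."""
    if vin <= 0:
        # the real circuit latches up; here we simply refuse the input
        raise ValueError("can't take the square root of zero or a negative value")
    vout = 1.0
    for _ in range(steps):
        error = vin - vout * vout              # opamp input: demand minus square
        vout = max(1e-6, vout + gain * error)  # clamp plays the diode's role
    return vout

print(round(sqrt_extractor(0.003), 4))  # 0.0548 -> the 54.77mV quoted above
print(round(sqrt_extractor(0.1), 4))    # 0.3162 -> 316.2mV, as expected
```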
Bear in mind that these results are simulated, and use an ideal (zero error) multiplier. Should you build one using real parts, expect to be disappointed. You also need to know your simulator pretty well, and know how to trick it into doing things it normally won't do. Square roots are as irksome in hardware as they are anywhere else.
They have been the bane of maths teachers' lives since ... forever. Some of the important properties of square roots are listed online at a number of sites [ 4 ], and I don't propose to go into detail here. I do suggest that you do a web search though - if for no other reason than to see the different approaches and to understand that a square root is (or was) a pain in the bum!
If you think that obtaining the square root of a number looks complex, it is. If you look up 'square root algorithm' in a search engine, the number of pages is impressive, and the methods vary from being complex to very complex. With calculators and computers we tend not to give it a second thought, but the process is quite involved. Irrational numbers can take considerable computing power, regardless of the method used. One technique that seems to be missing almost everywhere is ...
√X = X ^ (1/2) For example (and simplified) ...
√123 = 123 ^ 0.5 = 11.09053651
Alternatively, you could use (base 10) logarithms ...
√123 = 10 ^ (log(123) / 2) = 11.09053651
... and get the same answer.
I know which one is the simplest. It's also easy to remember without having to perform too many mental gyrations. Raising to a power is supported in most major computer programming languages, and it's (probably) fairly efficient, especially when compared to the 'successive approximation' technique. If you had to rely on 'standard' 4-figure log tables (assuming that anyone still knows how to use them), the result is 11.0904. Not exact, but close enough (when squared you get 122.9969). Almost no-one would bother with log tables any more, as most calculators have the √ function and exponentiation (raising to a power). They're even on my phone!
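Both one-liners are trivial to confirm in any language that supports exponentiation. In Python:

```python
import math

x = 123
via_power = x ** 0.5                  # √x = x^(1/2)
via_logs = 10 ** (math.log10(x) / 2)  # or halve the log and take the antilog

print(round(via_power, 8))  # 11.09053651
print(round(via_logs, 8))   # 11.09053651 - same answer, as it must be
```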
Any waveform that is not an almost 'perfect' sinewave will be subjected to potentially large errors if it isn't measured using 'true RMS'. Most low-cost meters use average-responding, RMS calibrated measurements, but the measurement is only accurate if the input is a sinewave. For example, a 1V peak (2V p-p) squarewave will be displayed as its average, RMS calibrated, which is 1.11V - 11% high. With a true RMS meter, it will show as 1V as it should. Some waveforms are much worse, with errors that can exceed -50% (more than 50% low).
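The squarewave example is easy to reproduce. A sketch contrasting what an average-responding (RMS calibrated) meter reports against the true RMS value:

```python
import math

SINE_FORM = math.pi / (2 * math.sqrt(2))  # ≈1.1107: RMS/average ratio of a sinewave

def avg_responding_reading(samples):
    """What a cheap meter shows: rectified average, scaled as if for a sinewave."""
    return SINE_FORM * sum(abs(s) for s in samples) / len(samples)

def true_rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

square = [1.0] * 500 + [-1.0] * 500   # 1V peak (2V p-p) squarewave
print(round(true_rms(square), 3))               # 1.0
print(round(avg_responding_reading(square), 3)) # 1.111 - reads ~11% high
```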
With an RMS converter, the input signal must be squared, and not just full-wave rectified. The average of the latter is 0.6366 of the peak value, whereas the average of the signal squared is 0.5 of the peak. Squaring can follow rectification, but the rectifier is not necessary because the value of (-x)² is the same as x². The process of squaring includes rectification by default.
Since we don't have access to 'ideal' squaring and square root blocks outside of a simulator, we need to be more adventurous. While the circuit shown next still shows ideal multipliers, AD633 ICs will actually work fairly well, provided we're careful to minimise DC offsets. The method shown in Fig. 8.1 is (in the simulator) almost perfect - the result is virtually identical to the measurement taken with the simulator's maths functions that are used to measure the RMS value (amongst other useful things).
Multipliers can be used to convert a waveform to 'true RMS'. RMS stands for 'root mean squared', and is required with any waveform that's not a sinewave to prevent inaccurate readings. The limitation is the square root circuit, which as noted above is less than perfect. The concept is simple in theory - square the input voltage, take the average, and take the square root. For example, a 1V peak sinewave is squared, which provides a signal at twice the input frequency, but unidirectional (the square of a negative value is positive). The average taken at the positive end of C1 is 500mV. If we take the square root of 500mV we get 707mV (close enough), which is the RMS value of a 1V peak sinewave. This works with any waveform, and gives the true RMS voltage.
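The square → average → square-root chain is easy to demonstrate with sampled data. A sketch for the 1V peak sinewave case:

```python
import math

N = 1000
sine = [math.sin(2 * math.pi * i / N) for i in range(N)]  # 1V peak, one full cycle

squared = [v * v for v in sine]  # the first multiplier (squarer) - never negative
mean = sum(squared) / N          # the averaging capacitor (C1)
rms = math.sqrt(mean)            # the square root extractor

print(round(mean, 4))  # 0.5    - average of the squared signal
print(round(rms, 4))   # 0.7071 - the RMS of a 1V peak sinewave
```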
The circuit is conceptual, in that if multiplier ICs are used they must be configured to have unity gain (multiplier 2 in particular) rather than the default divide by 10. As shown, I used the SIMetrix 'Non-Linear Function', configured as an arbitrary source with the formula shown in the boxes. Both are configured to square the input. The output is accurate between 100mV and 2V (peak) input, but at lower voltages the accuracy gets progressively worse as the input voltage is reduced. A 100mV AC input has a mean value (after squaring) of only 5mV. The square root of 5m is 70.7m (70.7mV) but the output is 70.9mV (which is actually pretty good). It gets worse at lower inputs. The opamp must be a precision (ultra-low offset) type (I used an OP07E in the simulation).
Performance is fairly poor compared to an IC such as the AD737. These are described in some detail in AN-012, Peak, RMS And Averaging Circuits. These use a somewhat different principle to obtain the RMS value, one that works down to low levels without losing accuracy (measurement speed and bandwidth are still limited at low input voltages though). An improved version is the AD536, but that comes at a cost (over AU$100 from the suppliers I checked). In some respects, this is all academic when compared to digital sampling measurement systems, where the RMS value can be determined (using digital calculations) almost instantly.
However, if the signal is varying, a digital readout is of no use to man or beast, and an analogue meter movement is a far better option. You need to be sure that you need something like this though, as the cost is significant (especially when you add a power supply, preamp, range switching, etc.). Mostly, we all just use a digital multimeter (preferably true RMS if you need accuracy). If a signal is varying over a fairly wide range (e.g. music) we can only estimate the voltage, and accuracy isn't possible whether we measure true RMS or average.
The waveform above consists of 1 'unit' at 1kHz, 4 units at 2kHz and 2 units at 3kHz (each unit is 333mV peak). The RMS value is 1.08V, but if it's full-wave rectified and the average (mean) taken (RMS calibrated), you'll get a reading of 957mV, an error of -11.4%. The two 'concept' circuits get the right answer, regardless of the apparent 'complexity' of the waveform. When a meter reads average but is calibrated as RMS (very common in cheap meters), any non-sinusoidal waveform will cause problems. With a sinewave, true RMS and average (RMS calibrated) meters give the same reading.
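That waveform is easy to rebuild numerically, and doing so confirms the RMS figure. The relative phases of the three components aren't given, so the rectified-average result below is indicative only (phase doesn't affect true RMS, which is rather the point):

```python
import math

UNIT = 1 / 3   # one 'unit' ≈ 333mV peak
N = 6000       # samples across 1ms - the common period of 1k, 2k and 3kHz

def v(i):
    t = i / N
    return (1 * UNIT * math.sin(2 * math.pi * 1 * t)    # 1 unit at 1kHz
          + 4 * UNIT * math.sin(2 * math.pi * 2 * t)    # 4 units at 2kHz
          + 2 * UNIT * math.sin(2 * math.pi * 3 * t))   # 2 units at 3kHz

samples = [v(i) for i in range(N)]
rms = math.sqrt(sum(s * s for s in samples) / N)
avg_reading = 1.111 * sum(abs(s) for s in samples) / N  # 'RMS calibrated' average

print(round(rms, 2))      # 1.08 - true RMS, independent of phase
print(avg_reading < rms)  # True - the averaging meter reads low
```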
To display RMS (sinewave) with an averaged input, the rectified and averaged input signal needs a gain of 1.111. The sinewave, after the process of full-wave rectification (as opposed to squaring), gives a 636.62mV average output for a 1V peak sinewave, and if that's amplified by a factor of 1.111, the answer is 707mV, which is correct. It only works with a sinewave - other waveforms give wrong answers. Fig. 8.4 shows the difference between rectification and squaring, using the Fig. 8.3 waveform. The rectified average is 862mV, and squaring (then averaging) gives 1.1667V.
The square root of 1.1667V is 1.08V (which is correct), but the rectified average is only 862mV, and after amplifying by 1.111 to get 'RMS' equivalent, the output is 957mV, which is clearly wrong. Unfortunately, these calculations are difficult, and the simplest proof is to use simulation.
A power measurement with DC is easy. Multiply the voltage and current, and voila! 12V at 1A is 12 watts, and there is no ambiguity whatsoever. With AC, it's very different, because the product of voltage and current is VA (volt-amps), and it may or may not be the same as the power. If the load is resistive (a resistor or heating element for example), then VA and watts are the same, but if the load is inductive, capacitive or non-linear, the two are usually very different.
A 'well behaved' reactive load (one with capacitance and/ or inductance) may show that the voltage and current measured gives (say) 100VA, with the power being 80W. The only way you can measure that is with a multiplier. It can be analogue or digital, but it must be able to distinguish the phase angle between the voltage and current. With a resistor, there is no phase angle - current and voltage are perfectly in phase. The 'power factor' of a load that draws 100VA and 80W is 0.8 (unity is ideal).
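The relationship between VA, watts and power factor is a one-line calculation. For the example above:

```python
import math

va = 100.0    # apparent power: RMS volts × RMS amps
watts = 80.0  # real power, as a wattmeter would read it

power_factor = watts / va
print(power_factor)  # 0.8

# For a linear (sinusoidal) reactive load, PF = cos(phase angle):
phase = math.degrees(math.acos(power_factor))
print(round(phase, 1))  # 36.9 - degrees between voltage and current
```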
A 'proper' wattmeter was described in Project 189, an audio wattmeter that shows the real power delivered to a loudspeaker. Both the amplifier and the loudspeaker have to contend with the voltage and current, even when they don't contribute any energy to the motor structure(s). But you can't just measure these two quantities and call it 'power', because it probably isn't. This is a topic that I've covered in some detail in the discussions about power factor (see the Lamps and Energy section on the ESP site).
An analogue multiplier is a simple way to determine the real power. It still uses the voltage and current, but works with any phase displacement between voltage and current or a non-linear load, and provides the power, not VA. The electricity meter at your house only measures power, and that's what you pay for. In the circuit shown next, the output level is 1mV/W, but that's easily changed by adding gain (using one or two opamps). I've used an 'ideal' multiplier, but if you build the circuit with an AD633 it will perform perfectly. I know this because I've done so, and it's a great testing tool.
For the 'real thing' please see the project linked above. This is not a toy, it's a genuine wattmeter that indicates watts, not VA. The circuit above has an inductive load that draws 3.113A at 50V RMS. That's 155.5 VA (voltage multiplied by current), but the wattmeter shows that the power is 97W. The current transformer (CT) converts current to voltage, with a transfer ratio of 100mV/A. R3 is known as the 'burden' resistance, and it's always a low value to prevent core saturation in the CT. R1 and R2 form a 100:1 voltage divider, as a 'real' multiplier IC cannot handle an input of 100V RMS.
The output of the circuit will always show true power, regardless of the frequency, voltage, current, phase angle (between voltage and current) or waveform distortion. In a realistic circuit as described in the project page, there are upper and lower limits to all inputs. The current transformer can't handle frequencies much below ~40Hz (depending on its characteristics), and the multiplier has an upper frequency limit. For the AD633, that's quoted as 1MHz. The accuracy should be better than 5% overall, but it can be adjusted to be more accurate. The display would typically be an analogue meter movement if you're monitoring audio.
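The multiply-then-average principle is easily verified numerically. The sketch below back-calculates the phase angle from the example's 50V / 3.113A / 97W figures, so treat the exact numbers as illustrative rather than taken from the simulation:

```python
import math

N = 10000
VRMS, IRMS = 50.0, 3.113
PHASE = math.acos(97.0 / (VRMS * IRMS))  # phase angle implied by 97W real power

volts = [math.sqrt(2) * VRMS * math.sin(2 * math.pi * k / N) for k in range(N)]
amps = [math.sqrt(2) * IRMS * math.sin(2 * math.pi * k / N - PHASE) for k in range(N)]

apparent = VRMS * IRMS                              # ≈155.7 VA - ignores phase
real = sum(v * i for v, i in zip(volts, amps)) / N  # multiplier + averaging = watts

print(round(apparent, 1), round(real, 1))  # the real power comes out at ≈97W
```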
VA is often referred to as 'apparent power', versus 'true power' (in watts). A reactive load returns some of the current drawn back to the source, be it the household mains or an amplifier. This happens because the voltage and current are not in phase. Non-linear loads (such as a power supply - an example is shown to the left of the wattmeter) don't return anything to the source, but they usually have a poor power factor because the load current is distorted. In this case, the load draws 3.954A, giving 197VA and power of 161W. Without a wattmeter, you cannot determine the power without many tedious calculations.
Of course, you can just buy a digital wattmeter for mains measurements (these generally include the current transformer), and the calculations are performed digitally. See Project 172. These certainly work (I have several), but they aren't as much fun, and of course they don't teach you how the power is determined. They are useful though - that much is undeniable. Don't expect to use one to measure audio though, as the sampling rate is almost certainly far too low to handle anything above ~100Hz with any accuracy.
The final systems I'll look at here are integration and differentiation. These are common mathematical functions that are used to extract an 'interesting' characteristic of a signal. They are also very common in mathematical equations. They are used in calculus, and are (or can be) complementary functions. Differentiation is used to determine the rate of change of a signal, while integration is used to work out the 'area under the curve' - how much charge accumulates over time. This article is not the place for detailed explanations of the mathematical functions, which include algebraic, exponential, logarithmic and trigonometric. Everything you wanted to know can be found on websites that concentrate on mathematical processes - there are many of them, and a search will find a wide range.
In electronics, integration and differentiation are quite common. For example, a differentiator provides the rate-of-change information of a signal, and an integrator provides amplitude and duration info, which may be cumulative. Both are achieved with opamps for precision applications. In the simplest of terms, an active differentiator is a high-pass filter, and an active integrator is a low-pass filter, but they are both more 'radical' than conventional filters. 'Active' implies the use of a gain stage, which is usually an opamp. Both are shown in Fig. 10.1.
The frequency is easily calculated using the standard formula (f = 1/(2π×R×C)), and is 15.9Hz for both circuits (R1=R3=100k, C1=C2=100nF). The integrator has a second defined frequency, set by R2 and C1, and it stops integrating at 1.59Hz. When wired in series, the output is flat down to 1.59Hz (the -3dB frequency). R2 is an unfortunate necessity, as without it the opamp has no DC feedback. With no input, the output will slowly drift to one or the other supply rail.
Integrators and differentiators don't have to use an opamp - a simple RC (resistor/ capacitor) network works, but it's not linear. The charge and discharge curves are exponential because the voltage across the resistor changes. The 'time constant' of an RC network is R×C, at which point the capacitor's voltage has risen (or fallen) by 63.2%. If 10V is applied to a 100nF cap via a 100k resistor (TC=10ms) the voltage will reach 6.32V in 10ms. When the same cap is discharged from 10V via a 100k resistor, its voltage will be 3.68V after one time constant (10ms). The -3dB frequency is calculated from the time constant too (f=1/2πRC). The term 'RC' is the time constant.
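The exponential charge/discharge figures quoted are straightforward to confirm:

```python
import math

R, C = 100e3, 100e-9  # 100k and 100nF
TC = R * C            # time constant: 10ms

def charging(v_applied, t):
    """Cap voltage while charging from 0V through R."""
    return v_applied * (1 - math.exp(-t / TC))

def discharging(v_start, t):
    """Cap voltage while discharging through the same R."""
    return v_start * math.exp(-t / TC)

print(round(charging(10, TC), 2))     # 6.32 - risen 63.2% after one TC
print(round(discharging(10, TC), 2))  # 3.68 - fallen to 36.8% after one TC
```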
Passive integrators and differentiators are commonly used as simple filters, with a slope of 6dB/octave. Even the common coupling capacitor (in conjunction with an amplifier's input impedance) is a basic differentiator if the applied frequency is low enough. The -3dB frequency is calculated from the formula f = 1/2πRC. For most audio circuits, this will be below 20Hz, and often below 2Hz to ensure minimal rolloff at the lowest frequency of interest. We don't think of it as a differentiator, but it is.
Both circuits are normally inverting and they are controlled by the input current, which is supplied to the inverting input (a virtual earth/ ground). The differentiator uses the instantaneous current through the input capacitor to provide an output that's directly proportional to the peak amplitude and rate-of-change, and the output of the integrator is proportional to the amplitude and duration of the signal. The signal current causes the integrator capacitor to charge, and both the amplitude and duration determine the output voltage. If the two circuits are wired in series, the output is (almost) an identical copy of the input. The difference is due to R2 (1MΩ). Rs in the differentiator is included to prevent 'infinite' gain at high frequencies. High frequency response is limited to 7.23kHz with 220Ω as shown.
The input signal was deliberately slow so the transitions are visible. Rise and fall times are 5ms, which I selected so that calculations are within the voltage range that opamps can handle. The integrator uses a 100nF integration cap, and R2 is included so the opamp has DC feedback. This limits the low-frequency response of the circuit. While the signal is at its positive or negative maximum, the input current is limited by R1, and is ±10µA. The output of U1 is the integral of the input current, and the voltage increases/ decreases at a rate of 100V/s (100mV/ms). During the period of one cycle (50ms), the integrator's output swings from +1.1V to -1.1V. Because the rise and fall times are 5ms, the integrator provides a voltage that is proportional to the voltage above or below zero, and accounts for the rise and fall times. The maximum rate-of-change for the integrator is 100V/s with 100nF and 100k (i.e. with the full ±1V input applied).
By their nature, integrators force a constant current through the capacitor, with the current determined by the input resistance and applied voltage. If a 1V DC signal is applied to the input of the integrator, its output will rise/fall at a rate of 100mV/ms, exactly as predicted. It's not normal procedure to apply a steady input voltage or a repetitive waveform to the input of an integrator, as they are intended to be used to determine (and perhaps correct) long-term error voltages. Integrators are used to remove DC offset from critical systems, and they are also used as a 'DC servo' for audio power amplifiers to (all but) eliminate any offset. The use of a servo can ensure an amplifier has less than 1mV of DC offset (see DC Servos - Tips, Traps & Applications for a full description).
An integrator creates a constant current across the capacitor so it charges linearly, as opposed to the exponential curve seen when a cap is charged via a resistor. The current is determined by the input voltage and the value of the input resistor. The voltage across a capacitor can easily be calculated for any capacitance and constant input current. A 1F capacitor will charge by 1V/s with an input current of 1A. This is easily extrapolated to more sensible values, so a 1µF cap will charge by 1V/s with a 1µA input current, 10V/s with 10µA, etc. The formula is simply ...
ΔV/Δt = I / C For the example shown in Fig. 10.1 ... ΔV/Δt = 10µA / 100nF = 100V/s (100mV/ms or 100µV/µs)
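Expressed as code, the ramp-rate calculation for the Fig. 10.1 integrator is simply:

```python
def ramp_rate(vin, r_in, c):
    """Integrator output slew in V/s: the input current I = Vin/R charges C."""
    current = vin / r_in
    return current / c

rate = ramp_rate(1.0, 100e3, 100e-9)  # 1V input, 100k and 100nF
print(round(rate, 3))                 # 100.0 V/s, i.e. 100mV/ms
print(round(rate * 10e-3, 3))         # 1.0 V accumulated over 10ms
```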
During a transition of the 20Hz waveform, the input current to the differentiator is ±40µA, through C2. As the rise/ fall time is 5ms and the capacitance is 100nF, the effective impedance of C2 is 50k (5ms/100nF), so the charge current is 40µA. The formula shown below is preferable to calculating the effective impedance, although both methods work. The voltage across R3 is I×R (40µA×100k=4V). If the rise/fall times were reduced to 1ms, the charge current is increased to 200µA, with an output voltage of ±20V. That's greater than the supply voltage, and the value of R3 must be reduced. In a real circuit, RS is almost always needed so the opamp doesn't have extremely high gain at high frequencies. The value will be between 100Ω and 560Ω in a typical circuit. I used 220Ω, which has a negligible effect at the impedances used. The capacitor current is determined by the voltage change and the transition time (2V and 5ms respectively) ...
I(C) = C × ΔV / Δt (Where Δ means change) For example ... I(C) = 100n × 2 / 5m = 40µA
The output voltage is then determined by the value of the feedback resistor ...
VOut = I(C) × Rf so ... VOut = 40µ × 100k = 4V
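Putting the last two formulas together as code makes the dependence on transition time obvious:

```python
def differentiator_out(c, dv, dt, rf):
    """Output of the Fig. 10.1 differentiator for a linear transition."""
    i_cap = c * dv / dt  # current forced through the input capacitor
    return i_cap * rf    # the opamp develops I × Rf across the feedback resistor

# 100nF, 2V swing in 5ms, 100k feedback resistor:
print(round(differentiator_out(100e-9, 2.0, 5e-3, 100e3), 2))  # 4.0 V
# the same swing in 1ms demands 20V - more than the supply allows:
print(round(differentiator_out(100e-9, 2.0, 1e-3, 100e3), 2))  # 20.0 V
```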
In some cases, integrators are set up with an automatic discharge circuit that resets the voltage to zero when it reaches a preset limit. This forms a very basic analogue to digital converter, where the output frequency is determined by the input voltage (a voltage-frequency converter). This is known as a 'single-slope ADC', which is enhanced to become a 'dual-slope ADC' - these were the basis of most digital multimeters, and are still used. The dual-slope ADC has the advantage that component tolerance is balanced out, and it's therefore more accurate. The number of pulses counted tells you the average input voltage over time, and measurements can be taken over a period of months or even years. The output frequency is directly proportional to the input voltage. The rate-of-change of the cap voltage is determined by ΔV/Δt = I / C, so with 2µA and 100nF, the rate-of-change is 20V/s. That means it takes 200ms to reach the reset trip voltage of 4V.
The output frequency is 5Hz for a -20mV input, and if the input is increased to -40mV, the frequency is 10Hz (the input voltage must be negative for a positive output because the integrator is inverting). It can be scaled to anything you like, provided it's within the frequency range of the opamp. Scaling is done by increasing or reducing either R1 or C1. The level detector is set for 4V, and when the voltage reaches that, the cap is discharged and the cycle repeats. The switch will most commonly be a JFET, but it can be anything that has low leakage and a low 'on' resistance.
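The voltage-to-frequency relationship can be summarised in a few lines. Note that the input resistor value (10k) is inferred from the 20mV → 2µA example; it isn't stated explicitly in the text:

```python
def vf_frequency(vin, r_in=10e3, c=100e-9, v_trip=4.0):
    """Single-slope converter output frequency, ignoring the (brief) reset time."""
    current = abs(vin) / r_in  # integrator input current (input must be negative)
    ramp = current / c         # cap voltage rate-of-change, V/s
    return ramp / v_trip       # resets (cycles) per second

print(round(vf_frequency(-0.020), 2))  # 5.0 Hz for -20mV in
print(round(vf_frequency(-0.040), 2))  # 10.0 Hz - frequency is proportional to Vin
```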
A circuit such as this can be very accurate, but a low-leakage capacitor is a must for long cycle times. I first saw this arrangement used for long-term temperature monitoring at a water storage dam near Sydney, in ca. 1975. For its time, it was a work of art. It should be accurate to within 1% over at least five decades, with the discharge time being the dominant error source. The detector's reference voltage must also be stable, and a high-stability capacitor is essential. PCB leakage is a potential error source with very low input current, and Teflon (PTFE) stand-off terminals may be needed if low input current is provided by the sensor. The opamp must have very low input offset and negligible input current.
As noted above, dual-slope ADCs are common (and still readily available in IC form). I don't propose going into more detail here as it's not really relevant to the general topic, but as always there's a lot of info on-line, including manufacturer datasheets and detailed descriptions. Most new ADCs are ΔΣ (delta-sigma), and integrating ADCs are becoming less popular.
A common application for integrators and differentiators is a 'PID' controller [ 5 ], which uses proportional control (a simple gain stage), the integral (from an integrator) and derivative (from a differentiator) to reach the target value as quickly as possible. These are discussed in some detail in the article Hobby Servos, ESCs And Tachometers (which goes beyond 'typical' hobby circuits).
A PID controller is shown above, and while it includes a motor, it can just as easily be a heater, cooling system, or any other process that requires rapid and stable servo performance. One common usage is 'high end' car cruise-control systems, where very good control is necessary to prevent over-speed (in particular). The proportional section (top) is the primary error amplifier, and it does most of the 'heavy lifting'. Many simple servo systems use nothing else. The differentiator (derivative) applies a voltage that's proportional to the rate-of-change of the feedback signal, and it's used to (briefly) counteract the main proportional control to minimise overshoot. The integrator accumulates and removes long-term errors. These controllers are 'state-of-the-art', although many modern ones are digital (or digitally controlled).
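The three terms are easier to see in a discrete-time sketch. The plant model and gains below are invented for illustration (a simple first-order lag, nothing like a real motor), but the structure - proportional + integral + derivative acting on the error - is the same:

```python
def pid_step(state, error, dt, kp, ki, kd):
    """One update of a textbook PID loop; state carries the integral and last error."""
    integral, prev_error = state
    integral += error * dt                  # accumulates long-term error
    derivative = (error - prev_error) / dt  # rate-of-change of the error
    output = kp * error + ki * integral + kd * derivative
    return (integral, error), output

# Drive a crude first-order 'plant' toward a setpoint of 1.0
position, state, dt = 0.0, (0.0, 0.0), 0.01
for _ in range(2000):
    state, drive = pid_step(state, 1.0 - position, dt, kp=2.0, ki=1.0, kd=0.1)
    position += (drive - position) * dt  # lag model standing in for motor + load
print(round(position, 3))  # settles at ≈1.0 with no steady-state error
```

The integral term is what removes the steady-state error here; with ki set to zero the loop settles short of the setpoint, just as the blue trace in the graphs shows.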
The graphs show what happens when the system is operating as intended (red), without differentiation (green) and without integration (blue). With the differentiator disabled, there is a large overshoot, and a smaller overshoot when only the integrator is disabled. The 'normal' graph shows almost perfect response. The load was simulated to have mass (inductance), inertia (capacitance) and friction (resistance). The signal rise time was set for 500ms, a 'sensible' limit for the simulation. Real life means real values of mass, inertia and friction, and the PID controller's trimpots are used to obtain the optimum settings. The damping effect of the derivative is particularly pronounced.
If the controlled element is not a servo as shown in Fig. 10.3, the sensor will be different from the 'position' pot shown. It can be a tachometer (to control a motor's speed), a thermistor (to control temperature) or a light sensor to enable 'daylight harvesting' for lighting systems. These are fairly new, and they are set up to dim (or turn off) internal lighting when there's sufficient daylight to allow the lamps to be operated at lower power. The energy (and cost) savings for a large warehouse (for example) can be significant. All of these functions are used in modern manufacturing systems.
This is as far as I'm taking the process here, but there are many engineering sites that go into a great deal of detail on the setup and use of PID controllers. The main points to take away from this relate to differentiation and integration. These functions are widely used, often without you realising that they are there. These 'simple' analogue circuits are truly ubiquitous - they are (literally) in so many systems that attempting to list them would be futile.
There's probably a lot here for you to get your head around, so it's best taken a step at a time. True RMS voltage readings aren't easy to grasp at first, and power (vs. VA) is something that causes many people problems. When you have phase shift or distorted waveforms, simple calculations don't work. However, even comparatively simple analogue multiplier (or RMS converter) IC circuits can solve these easily. Understanding how they work isn't essential to be able to use them, but it does fill in the gaps. Understanding the processes helps you improve your overall knowledge - never a bad thing.
None of the material here suggests that analogue techniques are no longer useful. Sometimes, analogue from input to output gives a better (human readable) result. One major problem with analogue computers is that they must be specifically configured for a particular calculation. This is limiting for 'general purpose' applications, but if there is a specific problem to be addressed (and cost isn't an issue), the analogue approach can still be a good solution. It has the advantage of speed, as there is no analogue to digital conversion (nor the inverse), and may be ideal where reconfiguration is never needed. All systems have limitations, and powerful (and compact) as modern computers are, their limits can still be exceeded (application dependent, of course). Digital systems are also more of a 'one-size-fits-all' approach, as the system is configured in software, and not hard-wired.
However, once an analogue system is wired to do what you need, it can't be messed up by a software update, and it should perform as designed for many years. Thermal drift is a potential problem of course, and this may also affect the sensors used (that's an issue with analogue and digital systems). Should you decide to build a dedicated analogue computer, you will have many challenges. This applies if you elect to use a digital system as well, and while the latter can be reconfigured with software, the testing needed to ensure that it never runs off 'into the weeds' can be very time-consuming.
Many of the circuits described here are no longer in common usage, but they remain interesting and provide a background to the development of circuitry as we now know it. The 'old' ways of doing things haven't gone away though - they are just hiding. Most people will never get to play with an analogue multiplier, at least not called by that name. Voltage controlled amplifiers (VCAs) owe their very existence to multipliers, because that's what they are. Most true RMS multimeters use a dedicated RMS converter IC, even those that are microcontroller based. The micro generally only controls the display - it doesn't have the power or processing speed to perform irksome maths functions.
Some things remain difficult with analogue processes (e.g. square roots), and there's not much you can do to change that. As noted above, these are even difficult with a digital system that doesn't have an appropriate algorithm built-in, because they are troublesome, and have been since ancient times. Analogue hardware can only do so much before the whole system is tipped into instability or even lock-up. As always, if there's an alternative to a complex problem, use it.
Most of the functions that used to be done with multipliers (calculating RMS for example) are now performed with ICs dedicated to the purpose (ASICs), such as the AD737 (described as a 'low cost, low power, true RMS-to-DC converter'). Like the 'low cost' multipliers, the term is subjective, as they're not cheap. However, a single IC does almost everything. Simply apply AC to the input, and extract the true RMS value as a DC output. The hardware is specifically designed to avoid troublesome circuitry.
Please be aware that your simulator package may or may not run with all of the ideas posted here. Some will be no problem, while others just won't work. They do work with SIMetrix (with trickery in a few cases), but I haven't tested any of these circuits in any other simulator. I normally avoid circuits and simulations that can't be reproduced by anyone, anywhere, but these are 'special' cases. It's unlikely that anyone will try to build these circuits, and attempting to do so isn't recommended.
Some of the applications where analogue multipliers may still be used include Military Avionics, Missile Guidance Systems, Medical Imaging Displays, Video Mixers, Sonar AGC Processors, Radar Signal Conditioning, Voltage Controlled Amplifiers and Vector Generators. While we tend to think that 'everything is digital' these days, that's not really the case at all. Analogue techniques are far from 'dead', despite the capabilities of modern computers.
The wattmeter described is a very good example. This can be done digitally, but it won't be as responsive as an analogue circuit, and will require custom software. This probably isn't especially difficult, but unless your programming skills are pretty good you're likely to find it far more difficult than you thought. Digital circuits traditionally use a digital display, which is not helpful for a piece of test equipment intended to monitor a dynamic signal.
The mathematical functions of integration and differentiation are easy to describe, implement and simulate in electronics, but they are difficult to calculate, since calculus is required. This is an area of maths that usually causes people to run in the opposite direction, because it's one of the most difficult. PID servo systems are hard to simulate, and in real life they can be difficult to get right. Integration and differentiation are functions that are very common in electronics, although in most cases there are short cuts (formulae that have been worked out for us) for specific applications.
The datasheets for the various devices were a major source of information, but the 1980 edition of 'Linear Applications' (National Semiconductor) solved the final puzzle when looking at simple opamp-based multiplier/ divider circuits. While there are circuits shown on the Interwebs, some are simply wrong, and they don't work as claimed (some not at all). Even a lengthy video I saw that supposedly 'explained' how these circuits function used a flawed circuit that doesn't work. This isn't helpful. Additional references are in-line, with others shown below ...
For some further reading, I suggest analogmuseum.org. This is one of many sites that discuss analogue computers, but most are old (hence the museum). New versions are less well documented, as they will often be subject to patent applications or other impediments to ready access.
Elliott Sound Products | Attenuator Design
Information about the design of multi-step attenuators is very sparse on the Net, but these important circuits are used in voltmeters, ammeters, analogue multimeters and oscilloscopes. There are a couple of examples on the ESP site, with the earliest I published being the Project 16 (P16) audio millivoltmeter. If you do a search for 'attenuator', most of what you'll find is single-stage attenuators used for RF. You may also come across 'stepped attenuators' that are designed to be used as a volume control in some preamplifiers. The simplest attenuator is a pot (potentiometer), but these are uncalibrated and have poor linearity. They are not useful for metering applications.
Finding anything that describes the process of designing a multi-step attenuator is next to impossible. I'm sure that there is something, somewhere, but I was unable to find any formulae or process to determine the values. I obviously know how to do it, since the Project 16 page shows a couple of examples (as do a couple of other projects), but no-one seems to have published anything describing the design process. The intent of this article is to correct this.
Despite what you'll find on the market, an analogue meter remains the best tool for measuring AC voltages, both within the audio range and for RF. It's easy to see changes, which are displayed as a moving pointer rather than a bunch of digits that change seemingly at random, and averaging by eye is easy. You can't do that with a digital readout, and the impression of precision is usually more of a hindrance than anything else. The only thing that comes close is an LED bargraph, often seen on mixers, and rendered in software in audio recording programs such as Audacity.
When looking at varying signal levels, you're usually after a trend, not an absolute value. Measuring dB with a digital meter is possible if the function is included, but most digital multimeters have poor high frequency response, usually rolling off above 1-2kHz. Some manage higher frequencies, but you need to read the datasheet before you buy if that's your goal. I built my Project 16 audio millivoltmeter many, many years ago, and I have a couple of other (analogue) meters that perform much the same function.
Figure 1.1 - Typical 10dB Step Meter Face
The meter face shown is from a photo of my distortion meter. The different scale lengths are obvious, and this allows a direct reading in 10dB steps. The face also shows that the meter is average responding, but doesn't state that it's RMS calibrated. Most audio millivoltmeters are the same, although ideally they would measure true RMS. The 0dB reference is 1V (0dBV).
Because the step ratio is 1 - 3.16 - 10, the meter has two voltage scales. One uses the full deflection of the meter (10mV, 100mV, etc. steps), and the 3mV, 30mV (etc.) steps provide full deflection at 3.16mV, 31.6mV (and so on), so the scale is either truncated, or it's extended slightly as shown above. This ratio is essential if you need 10dB steps. If the two scales were the same length, you'd get just under 0.5dB error as the switch changes range. All meters that show dB have the same arrangement, and without exception that I'm aware of, all provide 10dB steps.
A common scale for analogue multimeters (for DC voltage) is 100mV, 500mV, 2.5V, 10V, 50V, 250V, 1000V (full scale). While this sequence is not covered in the process described below, it can be determined easily using the same process as any other scale. However, most multimeters just use switched resistors in series with the movement, and that's very easy to calculate once you know the meter's resistance and sensitivity. A typical 20kΩ/Volt meter has a sensitivity of 50µA. The meter movement's internal resistance is only relevant for voltages below 10V DC, and it's irrelevant above that (assuming a basic accuracy of ±3% or so). For example, the 10V range requires a total resistance of 200k, and the movement's resistance will be less than 2k.
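As a quick check of those figures, the series 'multiplier' resistance for each DC range is just the full-scale voltage multiplied by the ohms-per-volt rating. A sketch (the 2k movement resistance is an assumed figure for illustration):

```python
# Multiplier resistance for a 20kΩ/V (50µA) movement: total R = V × 20k.
sensitivity = 50e-6      # full-scale current of the movement
r_movement = 2000        # assumed internal resistance (illustrative)

for v_fs in (10, 50, 250, 1000):
    r_total = v_fs / sensitivity          # e.g. 10V range -> 200k total
    r_series = r_total - r_movement       # subtract the movement itself
    print(f"{v_fs:>4}V range: {r_series / 1e3:.0f}k series resistance")
```

As noted above, from the 10V range upwards the movement's resistance is lost in the tolerance of the series resistor and can simply be ignored.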
Throughout this article you'll see references to the E12 and E24 series for resistors. If you don't know the available values, they are shown in Beginners' Guide to Electronics - Part 1 (Basic Passive Components, section 5.0).
The first step is to work out your specifications. These include the input impedance and the voltage steps required. The latter are determined by usage, and for an oscilloscope the de-facto standard is 1-2-5-10 increments. If you're working with audio voltages, then you'd use 1-3.16-10 steps (10dB ranges). The input impedance depends on preference to an extent, but again, the de-facto standard for oscilloscopes is 1MΩ. Another range is 1-5-10 which is good for voltage measurements (and is the standard for most scopes), but is unusable if you need to read in dB.
There's no standard for analogue multimeters, but these aren't the main topic here. The usual way to describe these is to use kΩ per volt, as the sensitivity is determined by the meter movement. A basic meter may be rated for 2kΩ/ volt (using a 500µA meter movement), which means that the impedance is 2k for the 1V setting, or 100kΩ on the 50V range. This caused many problems with measurements of high impedance circuits, because the hapless user would see a different voltage displayed depending on the range switch setting. Voltage ranges were often arbitrary, largely because of switch position limitations and the provision of different measurements (DC volts, AC volts, resistance and milliamps being typical). Modern digital multimeters are usually 10MΩ (or more) input impedance for all ranges.
The first real attempt at making a meter with a constant high impedance input was the VTVM - vacuum tube volt meter. These used a valve stage to buffer and amplify the input. This allowed the input impedance to be much higher than an 'ordinary' meter, and it remained constant regardless of the range selected. FET voltmeters soon followed when 'solid-state' overtook valves as the dominant technology. This meant that there may still be an error (due to the impedance of the measured circuit), but at least it was constant. The input impedance depended on the manufacturer, and was typically between 10MΩ and 20MΩ.
So, the first decision is the input impedance. 1MΩ is always a good start, and that's what will be used for the examples. However, deciding on the impedance without some error margin can (and does) give unobtainable resistor values, so some flexibility is essential. In the first of the examples shown below, I ended up with 1.02MΩ because that gave resistor values that were easily achieved. Having an increase of 20k is not going to create problems, and it's easily corrected with a small gain change in the metering amplifier if needs be.
You may decide on a lower impedance. For example the distortion meter (the meter face is shown in Figure 1) has an input impedance of 100kΩ. This does occasionally cause problems when measuring high impedance circuits, but most 'solid state' gear has low impedances everywhere and the 100k load is of little consequence. The second attenuator example shown is designed for 100k, and this has the advantage that you don't need parallel capacitors provided the stray capacitance can be minimised. More on this later in the article.
Having decided that 1MΩ is a reasonable place to start, we now have to decide on the highest and lowest voltage ranges. The P16 millivoltmeter is designed to cover from 3mV to 30V in 10dB steps. That means that the nominal voltages will be 3mV, 10mV, 30mV, 100mV (etc.). Although the metering amplifier itself isn't covered here, there are several to choose from in the Application Note 002 - Analogue Meter Amplifiers. For a general purpose AC voltmeter, I'll use a range of 3mV to 30V, the same as the P16 circuit. If we want an impedance of 1MΩ the current with full voltage (maximum attenuation) is 30µA (30V / 1MΩ).
The easiest way to determine the resistor network is to start at the top (R1), with the 30V range. The attenuator is calculated backwards, so if we apply the full voltage (31.6V), the output from the 'top' of the attenuator is 31.6V, the next level down is 10V, the next is 3.16V and so on. Essentially, we look at the attenuator with the full voltage applied, and each step is worked out in turn, but in the reverse order.
Since the next voltage is 10V, the voltage difference is 21.6V. The current is 30µA, so the resistor must be 720k - a somewhat 'inconvenient' value (to put it mildly). This isn't a value in any readily available resistor series, so we need to change it. For the sake of this exercise, we'll use 680k, as that's a standard value. The full-range input current is changed, and becomes ...
I = V / R
I = 21.6V / 680k = 31.76470588µA (31.765µA is close enough)
The last resistor in the chain (R9, see drawing below) is expected to provide 3mV output with an input voltage of 31.6V at 31.765µA, so it must be ...
R = V / I
R = 3.16mV / 31.765µA = 99.48Ω
In this case we'd use 100Ω, which is so close it doesn't matter. The error is well below 1%, and can be ignored.
Now we can work out the next resistor in the sequence (R2). We know the current and can easily work out the voltage difference between 10V and 3.16V, as this is the voltage across R2 ...
R = 6.84V / 31.765µA = 215.33kΩ (we'll use 215kΩ, easily made up with 200k + 15k)
After that, it's simple repetition, with each lower range using the same bases (6.8 and 2.15) divided by 10, and the lowest value in this sequence is 215Ω. The same procedure is used for any number of steps. First, work out the approximate input impedance required. Next, determine the first resistor value, and adjust as needed to get a resistance that's possible. Then, re-calculate the input current so that the last resistor in the attenuator can be calculated, and then determine the second resistor value.
For the attenuator we just designed, the maximum voltage is 31.6V, and the current is 31.765µA. The input impedance is therefore ...
R = 31.6 / 31.765µA = 994.81kΩ
This is a little lower than we wanted, so we'll include R0, with a value of 5.6kΩ, giving a nominal input impedance of 1.0004MΩ, which is such a small error from 1MΩ that it's not worth worrying about.
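The whole procedure can be captured in a few lines of code. This is a sketch of the same worked example (680k chosen for R1, 3mV-30V ranges); the variable names are mine, not from the article:

```python
# Attenuator design as described: pick R1, derive the chain current,
# then size each lower resistor from the voltage across it.
taps = [31.6, 10.0, 3.16, 1.0, 0.316, 0.1, 0.0316, 0.01, 0.00316]  # volts

r1 = 680e3                                   # chosen standard value
current = (taps[0] - taps[1]) / r1           # 21.6V across R1 -> 31.765µA
resistors = [r1]
for top, bottom in zip(taps[1:], taps[2:]):
    resistors.append((top - bottom) / current)   # R2 = 6.84V/31.765µA = 215.33k
resistors.append(taps[-1] / current)             # R9 = 3.16mV/31.765µA = 99.48Ω

total = sum(resistors)
print(f"R2 = {resistors[1]/1e3:.2f}k, R9 = {resistors[-1]:.2f} ohms")
print(f"input impedance = {total/1e3:.2f}k")     # 994.81k before adding R0
```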
Figure 2.1 - Basic Attenuator Circuit, 10dB Steps, 1MΩ
The attenuator shown is accurate to better than 0.05dB across the ranges, and doesn't require any 'special' resistor values. The 215Ω, 2.15k, 21.5k and 215k values are each made up from a pair of E24 series resistors (200 + 15, scaled for each decade), and these are readily available from most suppliers worldwide. The standard tolerance will be 1%, but you can select the values to be closer than that if you wish. While the procedure described is somewhat tedious, there's nothing hard about it. If you intend to mess around with stepped attenuators it's worth your while to set up a spreadsheet using OpenOffice or similar.
Rx | Voltage | Difference | Resistance | Closest R | Check |
R1 | 31.6 | 21.6 | 680,004.41 | 680,000 | 21.6 |
R2 | 10.0 | 6.84 | 215,334.73 | 215,000 | 6.82 |
R3 | 3.16 | 2.16 | 68,000.44 | 68,000 | 2.16 |
R4 | 1.00 | 0.684 | 21,533.47 | 21,500 | 0.682 |
R5 | 0.316 | 0.216 | 6,800.04 | 6,800 | 0.216 |
R6 | 0.100 | 0.0684 | 2,153.35 | 2,150 | 0.0682 |
R7 | 0.0316 | 0.0216 | 680.00 | 680 | 0.0216 |
R8 | 0.0100 | 0.00684 | 215.33 | 215 | 0.00682 |
R9 | 0.00316 | 0.00316 | 99.48 | 100 | 0.00317 |
Target Current | 31.7645 | µA | Total R | 994,445 | (31.7765µA) |
In the table, the 'difference' column is the voltage shown on that row minus the voltage on the next row. For example, in the first row, the voltage is 31.6V and the difference is therefore 31.6 - 10, which is 21.6V. The resistance is calculated from the difference voltage and the current, and the 'check' column multiplies the selected resistance by the actual current (determined by the 'Voltage' and 'Total R' values) to verify that the voltage drop across each resistor is within your chosen tolerance. Setting up the spreadsheet isn't difficult once you have a starting point.
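A minimal version of such a spreadsheet, reproducing the 'check' arithmetic with the chosen standard values, might look like this (Python rather than a spreadsheet, but the sums are identical):

```python
# Verify the chosen standard values: the actual current is the full
# input voltage (31.6V) divided by the real total resistance.
chosen = [680000, 215000, 68000, 21500, 6800, 2150, 680, 215, 100]

total = sum(chosen)                  # 994,445 ohms, as in the table
current = 31.6 / total               # 31.7765µA actual chain current
for r in chosen:
    print(f"{r:>7} ohms drops {r * current:.4g} V")
print(f"total = {total} ohms, current = {current * 1e6:.4f} µA")
```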
Most analogue meters are fairly linear, but expecting to obtain better than around 2% accuracy and linearity is unrealistic. Parallax error (looking at the needle from a slight angle) will usually be far greater than 2%, and most analogue meters will have a claimed accuracy and linearity of between 1.5% and 5%. High accuracy meters will always be more expensive than 'ordinary' types.
One issue you'll have with a simple high impedance resistive attenuator is frequency response. As shown, the 10mV range has the highest impedance (just under 240k, but depending on the source impedance), and a mere 10pF of stray capacitance will cause the AC to roll off above 20kHz. The output will be 3dB down at 75kHz, with a loss of 0.3dB at 20kHz. Stray capacitance is inevitable with any attenuator, but the higher the impedance, the greater the problem. The amplifier (or preamplifier/ buffer) also has input capacitance, and protective diodes add some more. The traditional fix is to include a capacitive voltage divider in parallel with the resistive divider. This is shown in P16 (for the high impedance and 2-stage attenuators), and adds another layer of complexity to the final circuit. The derivation of a capacitive (parallel) attenuator is shown further below.
If you decide that an input impedance of 100k is acceptable and you like the ranges shown above, it's simply a matter of dividing each resistor value by 10. You don't need to do anything else unless you want to add or remove a voltage range. By lowering the impedance, stray capacitance has much less effect on the readings, and a capacitive divider may not be needed unless you wish to measure over 20kHz or so.
Figure 3.1 - Basic Attenuator Circuit, 10dB Steps, 100kΩ
As you can see, there's very little difference, other than a ×10 reduction in the value of all resistors. In some cases, you might want to shift the ranges to cover from (say) 10mV to 100V. Despite what you might expect, you can just 're-label' the ranges, and the attenuator doesn't need to be re-calculated if a small error is acceptable. In theory (and using 680k for R1), R2 should be 214.7k instead of 215k. If you use the combination of 180k + 33k you get to 213k (and sub-multiples thereof), which still has a more than acceptable error (within 1% on all ranges).
To account for the error which is consistent across the ranges, the meter face markings need to be altered ever-so-slightly. Unfortunately, creating a meter face isn't easy unless you have access to fairly sophisticated image creation/ editing software. That's outside the scope of this article, so you're on your own with that I'm afraid.
Sometimes, you just need a simple 1-10-100 type attenuator (decade steps 0dB, -20dB, -40dB). These are easy, and often don't even need any calculations unless you need a specific impedance. A circuit is shown below, with two options. If you wanted to use 100Ω for R3, then the others are 900Ω and 9k, or you can use 1k and 10k, so R3 has to be 111.11Ω. 111Ω is close enough, as it's an error of only 0.1%, better than the resistors you'll generally use. Either will work, and it can be scaled as needed. Additional ranges are achieved simply by using a 100k (or 90k) resistor above the existing 'stack', with 1MΩ above that if needed.
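The odd-looking bottom value falls straight out of the divider ratios. A quick sketch, solving for the bottom resistor when the upper pair are fixed at 10k and 1k:

```python
# For exact decade taps, the -40dB output needs R3/(R1+R2+R3) = 1/100,
# which rearranges to R3 = (R1 + R2)/99.
r1, r2 = 10e3, 1e3
r3 = (r1 + r2) / 99
total = r1 + r2 + r3
print(f"R3 = {r3:.2f} ohms")                        # 111.11
print(f"-20dB tap ratio: {(r2 + r3) / total:.4f}")  # 0.1000 exactly
print(f"-40dB tap ratio: {r3 / total:.4f}")         # 0.0100 exactly
```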
Figure 4.1 - Simple Decade Attenuator Circuit
Alternatively, you may want a 2-stage attenuator that has an initial range of 0dB, -30dB and -60dB, followed by a buffer and a second attenuator that provides 0dB, -10dB and -20dB. This is the same arrangement used in the 2-stage attenuator that's shown in P16 (Figure 2A). The maths are easy, and the only hard part is wiring the switch.
A 2-stage attenuator is just two 'simple' attenuators in series, separated by a buffer or gain stage. You need defined steps, typically 10dB for audio meters with a dB scale, or 1-2-5 sequence for oscilloscopes or other applications. The tables below show a 10dB 2-stage attenuator. The voltages are arbitrary, but I assumed 31.6V in each case for consistency.
Figure 5.1 - Basic 2-Stage Attenuator Circuit
Figure 5.1 only shows the switching in its most basic form. In reality, there will be nine switch positions, with inter-wiring of contacts on each switch wafer. The inter-wiring is not shown here, but there's a very good example in the Project 16 page. The buffer stage can be unity-gain or it can add some gain to the signal. If gain is added you need to be careful to ensure that the stage doesn't run out of headroom at any setting.
Rx | Voltage | Difference | Resistance | Closest R | Check |
R1 | 31.6 | 30.6 | 1,530,000 | 1,530,000 | 30.599 |
R2 | 1.0 | 0.9684 | 48,420 | 48,500 | 0.970 |
R3 | 0.0316 | 0.0316 | 1,580 | 1,568 | 0.0314 |
Target Current | 20.00 | µA | Total R | 1,580,068 | (19.999µA) |
In the above, I aimed for an input current of 20µA, giving an impedance of about 1.5MΩ. The values selected can all be created with no more than two series resistors, although there are some E24 series values in the mix. Much as I'd like to be able to avoid using these, it's simply not possible when designing attenuators. I'll leave the determination of the series values to the reader (I'm not doing all the work!).
Without doubt, the hardest part of designing any multi-position attenuator is deciding how much error is permissible. Aiming for 1% is all well and good, but there's no point if the meter movement is only accurate to 5%. 1% is 0.086dB, and 5% is just over 0.42dB, but remember that dB is a relative measurement, and most of the time you'll stay on the same meter range to measure the upper -3dB frequency of an amplifier or filter circuit. It's certainly nice to have better than 0.1dB (1.1%) accuracy, but you also need to consider the complexity of your resistor network(s). If you choose to get as close as possible, then you'll almost certainly use more than two series resistors and increase stray capacitance accordingly.
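The percentage/ dB equivalents quoted above are easy to confirm. A small helper (the function name is mine):

```python
# Convert a resistor/meter tolerance in percent to its dB equivalent.
from math import log10

def pct_to_db(pct):
    return 20 * log10(1 + pct / 100)

print(f"1% = {pct_to_db(1):.3f} dB")                 # 0.086 dB
print(f"5% = {pct_to_db(5):.3f} dB")                 # 0.424 dB
print(f"0.1 dB = {(10**(0.1/20) - 1) * 100:.2f} %")  # about 1.16 %
```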
Rx | Voltage | Difference | Resistance | Closest R | Check |
R1 | 31.6 | 21.6 | 10,253.16 | 10,220 | 21.59 |
R2 | 10.0 | 6.84 | 3,246.83 | 3,240 | 6.84 |
R3 | 3.16 | 3.16 | 1,500 | 1,500 | 3.17 |
Target Current | 2.1066667 | mA | Total R | 14,960 | (2.11229mA) |
The second stage of the 2-stage attenuator is a much lower impedance, because it's driven by the buffer stage. This removes the need for a capacitive attenuator in parallel with the resistor network. For Table 3, I aimed for around 15k total, and adjusted the current to get resistance values that weren't too difficult to obtain with just two series resistors at most.
To remove the effects of stray capacitance, high-impedance attenuators almost always use a capacitive voltage divider in parallel with the resistive section. The design is simple, but implementation is almost always irksome. It's not usually particularly hard with repetitive sequences as shown in Figure 1, because like the resistors, the capacitors also follow the same sequence, but in reverse. The smallest capacitor is always at the top of the attenuator, because it has the highest impedance. The capacitance increases as the attenuator resistors are reduced. Eventually, you reach a point in the circuit where the resistance is less than 1k, and the capacitive divider can be truncated.
Figure 6.1 - Parallel Capacitive Attenuator Circuit
It's common with oscilloscopes (in particular) to make C1 a trimmer capacitor, with an adjustment range sufficient to cover likely variations in manufacture. Determining the capacitance is always a compromise, and it's based on the resistance and a 'suitable' frequency. In the case shown above, I used a frequency of 6.93kHz because it was derived from using 15pF for C1, but you'll generally find it next to impossible for all values to be obtainable without resorting to parallel combinations. With the values shown for C1, C2 and C3, the response is flat to better than 0.01dB from DC to daylight (well, up to a couple of MHz anyway). C3 can be omitted, but C2 needs to be recalculated - it's in parallel with R2 and R3, which are in series. In the simplified version, in theory 10pF of stray capacitance has no effect until you reach 100kHz. However, even a small stray capacitance (as little as 1pF) between the input and -60dB output can wreak havoc. Use the simplified version with care!
The capacitance value is determined by the usual formula ...
C = 1 / ( 2π × R × f ) (Where 'f' is a frequency that gives a sensible value for C3, in this case, 6.93kHz)
The frequency will generally be between 5kHz and 15kHz, as that's where stray capacitance starts to affect the attenuator. Use of a lower frequency means higher capacitor values, but better protection against stray capacitance. Mostly you'll only be able to optimise one series of values, but you can be lucky. The Figure 1 attenuator in the Project 16 article uses a sensible sequence for both resistors and capacitors, which just happened to work out that way when I designed it.
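Applying the formula is a one-liner per section. This sketch assumes the high-impedance resistor values from the two-stage example (1.53M, 48.5k and 1.568k); the pairing of those values with C1-C3 is my illustration:

```python
# C = 1/(2πRf) for each section; f is chosen so C1 lands on a
# 'sensible' value (about 15pF).
from math import pi

f = 6930.0                                        # Hz
caps = {name: 1 / (2 * pi * r * f)
        for name, r in [("C1", 1.53e6), ("C2", 48.5e3), ("C3", 1.568e3)]}
for name, c in caps.items():
    print(f"{name}: {c * 1e12:.1f} pF")           # C1 ~15pF, C2 ~473pF
```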
This also explains something that many would have wondered about, but couldn't find an answer. Most oscilloscopes are quoted to have an input impedance of 1MΩ in parallel with 20pF. The 20pF is a combination of the capacitance in parallel with the attenuator and stray capacitance (which includes the input BNC connector, and may include a short length of coax from the connector to the switch). By maintaining a known resistance and capacitance, ×10 probes can be calibrated for any oscilloscope.
The capacitance values need to be tweaked until you get a 'sensible' value. In this case 15pF is 'sensible', but the others are not. C2 is more troublesome at 473pF (or 458pF). This will generally be obtained using parallel combinations, selecting the caps until you get as close as possible. These will almost certainly be ceramic, and must be NP0/ C0G (thermally stable) types. C1 will ideally be a trimmer capacitor or perhaps a 12pF cap with a parallel 'gimmick' capacitor. These are nothing more than a pair of insulated wires twisted together, with more twist meaning more capacitance. It may sound crude, but these are common in RF circuits and are fairly stable once 'calibrated'. Expect about 0.4pF for each 10mm of twisted wire. A trimmer cap is the most sensible.
Figure 6.2 - Two Very Different Attenuator Circuits
Figure 6.2 shows two completely different attenuators, but both will perform well. The first (Version 1) requires more parts and a two-pole switch, but it has the best flexibility and should be easy to calibrate over a wide frequency range. Both have four steps, 10mV, 100mV, 1V and 10V. The second (Version 2) is similar to others shown here. Both of these were gleaned from the Net with a search for 'oscilloscope input attenuator circuit', and just happened to be close to the top of the image search. They are not 'definitive', but they show the different approaches taken. I modified Version 2 so it also has 4 steps (it originally was only a 3-step attenuator).
More expensive scopes are likely to use the first method, as it has the ability for each range to be adjusted easily. However, with the values shown it's not as accurate as the second circuit (maximum error is 1.3% vs. 0% for Version 2). This error is easily corrected of course, by changing resistor values (particularly R1 and R2, Version 1). In reality there's no point, because oscilloscopes are not 'high-precision' instruments, but are intended for looking at waveforms. A claimed 2% accuracy is normal. It's worth noting that 900k (etc.) is not a standard value, but can be obtained with 680k + 220k, which are both E12 series values.
The design process for both is the same as for 10dB step attenuators. The 1-5-10 sequence is one of the possibilities for an analogue volt or amp meter, and it's designed to ensure the pointer is always above the 10% lower limit of travel. Some meter specifications state that their claimed accuracy is only for the upper 90% of the scale. This scale used to be common for 'high-end' multimeters, sometimes with a '2.5' step included. It's a very usable scale for many measurement applications, but unlike a simple 1-10-100 sequence (as used with most digital meters) you need two separate scales on the meter face. Oscilloscopes generally use the 1-2-5 sequence (which is often 2-5-10).
Figure 7.1 - 1-5-10 Sequence Attenuator
The resistor values are all easily achieved using two series resistors for each range. For example, 400k (plus 40k and 4k) is made using 100 + 300 values (E12 and E24 resistor series), and 50k (plus 5k and 500Ω) can use 110 + 390 values (also E12 and E24 series). The input impedance with the values shown is 500kΩ, but you can multiply all values by two to get 1MΩ impedance (shown in brackets). The increments are exact with no error at all, other than that caused by the resistor tolerance. For AC, you will need to add a capacitive divider using the method described above.
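The tap ratios of such a series string are easy to check numerically. The short sketch below sums a resistor string and computes each tap's division ratio. The values used are an assumption pieced together from the figures quoted above (500kΩ total input impedance); the exact string in Figure 7.1 may differ.

```python
# Series-string attenuator check. Resistor values (top to bottom, in ohms)
# are ASSUMED from the text - 400k, 50k, 40k, 5k, 4k, 500 and 500 - giving
# a 500k total. They are not necessarily the exact Figure 7.1 values.
string = [400_000, 50_000, 40_000, 5_000, 4_000, 500, 500]

total = sum(string)
ratios = []
below = total
for r in string[:-1]:
    below -= r                  # resistance from this tap to ground
    ratios.append(below / total)

print(total)    # 500000 -> 500k input impedance
print(ratios)   # [0.2, 0.1, 0.02, 0.01, 0.002, 0.001]
```

The ratios correspond to division by 5, 10, 50, 100, 500 and 1000: a 1-5-10 sequence repeating each decade, with no error other than resistor tolerance.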
This can be expanded to a 1-2-5-10 sequence, using the technique described above. The design process isn't changed, but of course you need to set up your spreadsheet (or piece of paper) to suit the desired attenuator steps. Any sequence you like is easily achieved, but the calculations can become tedious. Using a spreadsheet takes some of the pain out of the process and lets you make a simple change to the desired full-scale current and everything will be re-calculated for you.
Once you know the techniques for designing multi-step attenuators you can create any sequence you like, but not all will be useful. For audio (or RF) work, 10dB steps are usually preferred because they make the most sense. For other measurements, I'd recommend the 1-5-10 sequence, as only two scales are needed on the meter. You need three if you use a 1-2-5 sequence, requiring more work to create the meter face. A 1-10-100 scale means that you don't need to do anything other than choose a meter that already has the desired scale (100µA is always useful).
The 1-2-5 attenuators used in nearly all oscilloscopes are almost invariably two-stage types. The first is usually a 1-10-100 attenuator, and the second stage is either an attenuator or a variable-gain amplifier, with ranges of ×1, ×2 and ×5. The following assumes the latter arrangement, with the gain of the amplifier switched by the second switching stage. While the amplifier is shown as an opamp (or PGA - programmable gain amplifier), it will generally be a discrete circuit because few opamps have a wide enough bandwidth. For a usable scope, you need at least 50MHz, even for audio. You can (just) get by with 20MHz, but things may be missed. The 1-2-5 sequence used on most scopes is actually 2-5-10, with the most sensitive range being 2mV, but with some others they may start from 10mV. Sensitivity is always stated as 'per division' on the scope's graticule.
Figure 7.2 - 1-2-5 Sequence Attenuator
The switching shown is highly simplified, and both switches will have multiple interconnections to function as a true full-range 1-2-5 attenuator. The switch positions shown indicate an output voltage of 10mV, based on the input voltage of 2mV, divided by one then amplified by five. For the 5mV range, the PGA has a gain of two and an output of 10mV, and for 10mV input there's no gain or attenuation. The input impedance is 1MΩ, and the first divider is frequency compensated with parallel capacitors.
The 1-2-5 sequence is provided by the PGA, with multipliers (1:1, 10:1 and 100:1) provided by the input attenuator. The alternative is to have a fixed gain amplifier, followed by another attenuator with the 1-2-5 sequence. If an attenuator is used, it requires a 1-2.5-5 sequence rather than the expected 1-2-5 arrangement. This may not make much sense at first, so a few simple calculations are in order to verify that this is the case. It's somewhat outside the scope of this article to pursue this to its conclusion, but I suggest that you look at the schematic for a good oscilloscope front-end to see how it can be done. Figure 7.3 is a simplified example.
Figure 7.3 - 2-5-10 Sequence Oscilloscope Attenuator
In the above, with a 2mV input, there's no initial attenuation, and the signal is amplified by five. The second attenuator is also bypassed, and the output is 10mV. With a 5mV input, it's also amplified by five (25mV) then attenuated by 2.5, giving an output of 10mV. With a 10mV input, it's amplified by five (50mV) and attenuated by five to get a 10mV output. The requirements for a 2-5-10 sequence are satisfied. This is a simplified look at the front end of the Tektronix 2215 scope, but the 2215 uses separate attenuators rather than the series string shown above. I encourage the reader to run the calculations for themselves, as it's not immediately obvious that a division of 2.5 is correct to obtain the proper sequence.
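Those calculations can be confirmed in a few lines. This sketch follows the description above (fixed gain of five, then division by 1, 2.5 or 5); it is an arithmetic check only, not a transcription of the Tektronix schematic.

```python
# Verify the 2-5-10 front-end arithmetic: each input range is amplified
# by 5, then attenuated so the output is always 10mV per division.
GAIN = 5.0
ranges_mv = {2: 1.0, 5: 2.5, 10: 5.0}   # full-scale input (mV) -> division after the amp

for vin, div in ranges_mv.items():
    vout = vin * GAIN / div
    print(f"{vin}mV in -> {vout:.0f}mV out")   # 10mV out in every case
```

The seemingly odd 2.5 division on the middle range is exactly what makes the sequence work.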
This isn't something that I'd expect anyone to attempt, because the switching is so complex. The switches are usually proprietary, and are made by (or for) the manufacturer of the oscilloscope. It's very doubtful that you'd be able to buy a switch that even comes close to what's required. The example I looked at uses a 12-position switch, with no less than ten separate sections (Tektronix 2215). Should you try to buy one, failure is almost guaranteed (even as a spare part from Tektronix).
An alternative to a multi-section switch is to use reed relays (or RF relays) for the switching. That means you can have a single-pole, 12-position switch (these are readily available at low cost), and use a diode matrix to switch the required relays for any setting. For the circuit shown above you only need five normally-open relays, three for the input attenuator and two for the gain stage. The diode matrix is another matter of course, and it's not covered in this article. It can also be done using a PIC or similar microcontroller, and this may be preferred as diode matrices are pretty tedious to wire up.
Especially with high impedance circuits, preamplifier protection is a lot harder than it may seem. The traditional use of a pair of diodes connected between the input and the power supplies doesn't work because of the diode capacitance. If the source impedance is more than a few kΩ the diode capacitance will cause the signal to roll off at a frequency determined by the source impedance and diode capacitance. A pair of 1N4148 diodes will have a capacitance of around 2.4pF, and this is more than enough to cause a serious limit to the maximum frequency for any given source impedance.
There are ultra-low capacitance ESD (electrostatic discharge) protection devices from a number of manufacturers, with various values of 'stand-off' voltage, being the voltage they can withstand before conduction. These vary from ultra-low capacitance diodes to TVS (transient voltage suppressor) devices, with the latter available as unidirectional or bidirectional. They are connected from the preamp's input to ground, assuming that the input is DC coupled and ground referenced. Another method is to use an RF transistor (base to collector junction), which must (almost by definition) have low capacitance.
If the source impedance is 500k and you expect to get to 1MHz (-3dB), the maximum allowable capacitance is only 0.32pF. This is another reason for using the parallel capacitive voltage divider for attenuators, as it makes it possible to protect the input JFET. If you rely on the resistor divider alone, it can only work at high frequencies if its impedance is comparatively low. The need for protection has become an industry standard, as most users will connect a test instrument set for a low range to a high voltage at some point. I've certainly done it, and I'm not alone.
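The 0.32pF figure comes straight from the standard RC corner-frequency formula, f = 1 / (2πRC), rearranged for capacitance:

```python
from math import pi

# Maximum shunt capacitance for a given -3dB bandwidth and source impedance.
R = 500e3    # 500k source impedance, as in the text
f3 = 1e6     # desired 1MHz (-3dB) bandwidth

C_max = 1 / (2 * pi * R * f3)
print(f"{C_max * 1e12:.2f} pF")   # ~0.32 pF, as stated above
```

Even a single 'low capacitance' diode can exceed this, which is why the capacitive divider does the real work at high frequencies.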
Figure 8.1 - Preamp Protection Using A TVS Diode
Figure 8.1 shows an example circuit. Rprot is there to limit the current, and is a tradeoff between the possible maximum peak current and the capacitance of the JFET and TVS diode. Ccomp is a low value capacitor that compensates for any rolloff caused by the resistor and the capacitance of the TVS diode. The value isn't critical, but if it's too high there's the likelihood that a sudden voltage spike will pass through and overload the TVS diode (or other protection scheme). With the values shown, you can expect less than 0.05dB frequency error. It's important that the protection device's capacitance is kept to the bare minimum, as it will cause capacitive loading on the attenuator. If it's too high, the attenuator's parallel capacitors may be insufficient to maintain accuracy at high frequencies. Every part of this is a careful balancing act.
With any test gear that has multiple switched ranges, a good habit to get into is to never leave the range switch on the most sensitive setting. It's not often needed for most measurements, and with a higher range the amplifier stage is protected from damaging overloads due to the impedance of the attenuator itself. I built my P16 millivoltmeter well over 30 years ago, and I've never managed to damage the input JFET.
This is a topic that could easily be extended ad infinitum, as there are so many possibilities, but in the interests of everyone's sanity that won't be the case here. Hopefully it's useful though, as it fills a void on the Interwebs, where there is next to no information at all on the design of this type of attenuator.
There are 2-stage attenuator schemes designed for 10dB steps, and 2-5-10 (or 1-2-5) for oscilloscopes as shown in Figures 7.2 and 7.3. These generally don't use a string of resistors, and in some cases there are completely separate attenuators for each range (typically ×1, ×10 and ×100). This may seem like overkill, but it can simplify the circuit, although the switching becomes more complex. This isn't an issue for a manufactured product because the cost is amortised over the number sold, but it's prohibitive for DIY construction because the switches are difficult to get.
Even digital scopes have an analogue front-end, and they use attenuators and (in some cases) switched gain stages to cover the range, usually from 2mV to 10V per division. It used to be standard procedure to provide a service manual for scopes, including parts lists and full circuit diagrams, but with most of the new digital scopes that's no longer the case. That's a shame, because we can all get good ideas by studying what's been done by someone else. Even if it's not immediately useful, you still get ideas that may come in handy sometime.
An unfortunate consequence of the change from analogue to digital is that many people seem to think that the 'old' analogue techniques are no longer necessary. However, this isn't the case at all. Digital meters and oscilloscopes still need an input attenuator, and it will always remain a purely analogue process. Digital ICs can't handle input voltages of more than ±2V or so (assuming a 5V supply and a 2.5V bias), so the external voltage being measured always needs an attenuator if it's expected to measure anything greater than ~1V RMS. Lower voltages can be amplified using DSP (digital signal processing) techniques, but they fall over with very high speeds (50MHz or more) because it's hard to maintain accuracy with the limited bit-depth and high clock speeds. Analogue amplifiers have constraints too (especially for very high frequencies), but they have advanced as well, and are more than capable of handling frequencies up to several GHz.
There are a number of digitally programmable amplifiers available, but most are relatively low frequency. There are some that work up to 10MHz, and a smaller number that extend to higher frequencies. While these are digitally programmable, they are still analogue ICs. For example, the AD8250 can be programmed in 1, 2, 5, 10 steps by two gain-setting pins and/ or a microcontroller, and is specified for up to 10MHz (1MHz with a gain of 10). Obtaining higher frequencies (> 50MHz for general purpose oscilloscopes for example) is not a trivial undertaking. The THS770012 PGA (programmable gain amplifier) can provide from 10dB to 13.7dB gain at up to 200MHz (14-bit ADC). This IC is not digitally programmable, and it's described as a 'Broadband, Fully-Differential, 14-/16-Bit ADC Driver Amplifier'.
There are no references as such, because this topic does not appear to be covered elsewhere. There are several ESP articles that describe many of the basic circuits shown, and these are referenced in-line. These are repeated here for convenience.
Elliott Sound Products | Meters, Multipliers & Shunts
The moving coil meter movement (also known as a galvanometer) was invented by the French physicist and physician Jacques-Arsène d'Arsonval in 1882. It is the basis for all modern meter movements, and the basic design principles remain the same after all this time. The actual construction can differ quite widely, but upon examination it is obvious that there are simply different ways to achieve the same outcome.
Meters are common in audio. They are sometimes used as 'eye candy' to impress - especially on power amplifiers, but they have many real uses as well. Meters are used to display the level from mixing desks, either as a VU (volume unit) or PPM (Peak Programme Meter) display, and while LED meters save space and can be very fast acting, they have neither the coolness of an analogue movement nor the retro appeal. To many people, an analogue movement provides a better sense of what is happening, even though they lack the immediacy of a LED display. In some cases, the two may even be combined to give the best of both worlds.

Meters are also used on power supplies and many other pieces of test equipment, and although it is assumed that digital is more accurate (you can see the exact voltage displayed), this is not always the case. Although digital meters appear accurate, this is often an illusion (read the specifications ... 1% ±1 digit is common, and that last digit can make a big difference sometimes).

In addition, there are some applications where digital is essentially useless. If a voltage (or current) is continually changing, the readout from a digital meter is impossible to interpret accurately. With analogue, you can see peaks and dips, and it is easy to see a trend (or average) just by looking at the pointer. Analogue is far from dead, and to this day I still use many analogue meters on millivolt meters, distortion analysers, power supplies, etc.

Although many of the techniques shown in this article are aimed at analogue applications, they are equally at home with digital meters - DPMs (Digital Panel Meters) are commonly available for about the same price as their analogue counterparts. This makes them very attractive for some applications - especially since good moving coil meter movements are now quite expensive and may be hard to get. Some applications are also shown for DPMs.

There is one thing that has to be pointed out here, largely because there's no other ESP article that covers the topic in detail. People use digital multimeters for just about everything these days, and there is a pitfall that you probably didn't know about. All digital multimeters (including 'True RMS' meters) have a limited upper frequency. They are mainly intended to measure mains and other low frequency waveforms where a true RMS value is needed. However, the limited frequency response means that you will not be able to measure the frequency response of an amplifier above perhaps 1kHz. Some are better, but very few (and I really do mean very few) can measure 20kHz with any confidence.
+ +Even major brand-name meters will almost invariably show a reading that's considerably less than the actual voltage at 10kHz or more. Some high quality bench meters are 'better' but often not by very much. I tested my bench meter (5½ digits), a handheld 'True RMS' meter, and a cheap multimeter that is very ordinary in most respects. The results are shown below.
All readings are in volts.

Frequency    Bench RMS    Handheld RMS    'Ordinary'
20 Hz        4.9500       5.01            4.96
100 Hz       5.0005       5.05            4.94
500 Hz       5.0063       5.05            4.93
1 kHz        5.0064       5.05            4.93
5 kHz        5.0064       4.96            4.99
10 kHz       5.0099       4.75            5.38
20 kHz       5.0155       4.12            6.73
50 kHz       5.0370       0.937           11.23
100 kHz      5.2960       0.233           13.09
The absolute level was confirmed on my oscilloscope at each frequency, and it's apparent that only the bench multimeter can be trusted at anything above 5kHz. However, at 100kHz even that meter read almost 6% high, and at 20Hz the reading was 1% low (which surprised me, but it uses a DC blocking cap on AC volts ranges which probably accounts for the error). The 'ordinary' (i.e. not True RMS) meter went mental above 5kHz, reading high, and showing well over double the actual voltage at 100kHz. The UNI-T RMS meter was within 1% up to 5kHz, but the reading died horribly above that. The hand-held meters I used were simply the first to hand, but the bench meter is my 'go-to' meter for most measurements.
It's quite obvious that you need to verify that your preferred meter doesn't lie to you if you use it for response measurements. This is one of many reasons that the oscilloscope is always my preferred AC measurement device, because despite absolute accuracy being worse than a good meter, it tells you what you need to know, including waveform - something none of the digital multimeters can do. Even some of the best known brands do not specify their AC frequency range, only the accuracy figure. You can probably find it, but it may take some serious searching!

For example, I looked up one of the better known brands, and went through the specifications. Nothing. I downloaded the manual, and finally found the details on page 20 (of 24). AC voltage accuracy is specified as 1% (+3 counts) from 45Hz to 500Hz, and 2% (+3 counts) from 500Hz to 1kHz. Above 1kHz, you're on your own - nothing is specified.

There's surprisingly little on the Net that covers this aspect of digital meters. While many have frequency counters that extend to at least a few MHz, that does not imply that they can accurately measure the voltage at these frequencies. The uninitiated are unlikely to be aware of this limitation because it's not made easy to find in most cases. In general, I suggest that a 'True RMS' meter be used for AC measurements, as there will be significant errors if the waveform is not sinusoidal.
The basic analogue meter movement is the moving coil type. These have been the mainstay of most metering applications for a very long time, but there are others that are common in other industries. Moving iron meters are often used for mains applications (especially in switchboards and the like), and although they are non-linear this is not a limitation for the intended applications. The latter are interesting, but will not be covered because of limited availability and lack of usefulness for audio applications. Another interesting meter uses electrostatics to display the voltage. These are restricted to very high voltage applications and apply virtually no circuit loading. Like the moving iron movements, they are not useful for general workshop use because they are too specialised. A photo of a very ordinary moving coil meter movement is shown in Figure 1.

Figure 1.0.2 shows the essential sections - yes, it is different from Figure 1.0.1. The drawing shows the way that moving coil movements were commonly constructed many years ago, which is somewhat easier to draw than more modern types. The essential parts are labelled so you get an idea of the construction of these meters. Nearly all moving coil meters are low voltage, low current devices, and the multipliers and shunts referred to in the title are used to convert the movement to read higher voltages and currents than it was designed for. This versatility is the reason that moving coil meters have stayed with us for so long. They can be made to read up to thousands of volts (or amps), AC voltage and current (with the addition of rectifiers), audio levels, or anything else where a physical quantity can be converted to an electric current.

The beauty of the analogue scale is that a plant operator (for example) can tell at a glance if the reading is normal, whereas it is necessary to actually read the displayed value of a digital meter. You don't need to read a value on an analogue meter to see if it is normal. Look at the meter on a battery tester - it is simply labelled 'Replace' and 'Good' or similar - the exact value is unimportant, but you still see a linear scale so you can estimate 'Marginal' without even thinking about it.

The moving coil movement uses a coil former of aluminium, around a centre pole and 'immersed' in a strong magnetic field. The coil is most commonly supported by jewelled bearings (although taut-band suspension is a much better arrangement, IMO). The coil is maintained at the zero position by the tension of the hairsprings, and one of these (almost always the top) is made adjustable from outside the meter case. This allows the user to zero the pointer. Current to the coil is carried by the hairsprings.

Taut band suspension uses no bearings, but supports the coil on a tiny flat spring (a flat wire) at each end. The flat spring acts as both suspension and restoring force, as well as providing current to the coil itself. Unfortunately, taut band movements are not very common, possibly because they are sometimes not as mechanically rugged as the traditional jewelled pivot suspension, and are very difficult to repair if the suspension breaks (personal experience!). A major advantage is that they have very low (virtually zero) hysteresis - this is caused in jewelled movements if the pivot sticks slightly because of wear, contamination or damage.

The aluminium former is almost invariably made so that it forms a shorted turn around the centre pole. This provides electrical damping, preventing excessive pointer velocity. There is a lot more to the analogue meter movement than meets the eye, but we shall leave the topic now, so that the usage of these devices can be covered.
All moving coil meters have a rated current for FSD (Full Scale Deflection), and this parameter is of primary importance. The FSD current determines how much load the meter will place on any drive circuitry, or for a voltmeter, how much current it will draw from the voltage source. This may or may not be important, depending on application.

Most meters are readily available with a sensitivity of between 50µA and 1mA FSD. More sensitive meters are available, but the cost goes up with increasing sensitivity. The most sensitive meter I have heard of was used by Sanwa in an analogue multimeter - 2µA FSD, taut band movement!

All meter movements have resistance, because the coil uses many turns of fine wire. The resistance varies from perhaps 200Ω or so (1mA movement) up to around 3.5k for a 50µA movement. These figures can vary quite widely though, depending on the exact technique used by the manufacturer.

Normally, moving coil meter movements are suitable for DC only. Some (such as VU meters for audio) have an internal rectifier so that AC may be measured, but accuracy is generally rather poor, especially with low voltages.

To obtain good AC performance requires the use of external circuitry. The project pages have a design for an AC millivoltmeter, and there is an interesting array of precision rectifier circuits in the application notes section of the ESP site.

Some movements have a mirrored scale, where a band of highly polished metal is just behind the scale itself and visible through a window cut out of the scale. This is used to eliminate parallax errors as you read the meter, and can improve reading accuracy dramatically. When the pointer and its reflection in the mirrored scale are seen as one, the viewer is looking directly at the pointer and there is no parallax error. If you can see the reflection of the pointer then you must be looking at it at an angle.

None of this is useful if the meter is poorly calibrated or non-linear. Moving coil meters can be non-linear if the magnetic path is not adjusted correctly - such adjustments are not recommended for anyone not trained or used to working on very delicate equipment. It also helps if you know exactly what to do, a topic that is well outside the scope of this article.
When a meter is to be used as a voltmeter, a series resistor is used to limit the current to the specified FSD with the maximum applied voltage that you want to measure. This is a very easy calculation to make, since it involves nothing more advanced than Ohm's law.

For example, we want to measure the voltage from a power supply, and have a 1mA meter movement available, with a coil resistance of 200Ω. If the maximum supply voltage is 50V, then the meter should read from 0-50V. The total resistance needed will limit the current through the meter to 1mA with 50V applied, so ...
R total = V / I = 50V / 1mA = 50kΩ
Since the meter has 200Ω resistance, the series resistor will be ...
R mult = 50kΩ - 200Ω = 49,800Ω
This is not a standard value, so will need to be made up using series / parallel resistors. Of course, one can always cheat and use a 47k resistor in series with a 5k pot, thus enabling the meter to be calibrated to a high accuracy. We do need to check the resistor power rating, because it is easy to forget that the multiplier resistor can dissipate a significant power - especially at high voltages. The resistor power is given by ...
P = I² × R = (1mA)² × 49.8k ≈ 50mW
The power dissipation is well within limits for even the lowest power resistor. Be very careful when determining the multiplier resistance for high voltages. Although the power rating may be quite low, the voltage across the resistor may exceed its voltage rating. It is imperative that resistors are not operated above the maximum rated voltage for the particular type of resistor. This specification is not often given, so it is best to assume the worst case, and limit the voltage across any 0.5W resistor to no more than around 150V - less for 0.25W resistors.
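The worked example above condenses into a few lines. This sketch simply repeats the multiplier arithmetic, working in milliamps so the numbers stay exact:

```python
# Multiplier for a 1mA / 200 ohm movement reading 0-50V, as worked
# through above. Currents are in milliamps to keep the arithmetic exact.
V_fs, I_mA, R_meter = 50, 1, 200

R_total = V_fs * 1000 / I_mA    # total resistance for 1mA at 50V
R_mult = R_total - R_meter      # the multiplier itself (non-standard value)
P_mW = I_mA**2 * R_mult / 1000  # dissipation in the multiplier, in mW

print(R_total, R_mult, P_mW)    # 50000.0 49800.0 49.8
```

About 50mW, so any 0.25W resistor is comfortable, provided its voltage rating is respected.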
It is generally preferable to use the most sensitive meter you can get within your price range, so in this case, a 50µA movement would be a far better proposition. Less current is drawn from the measured voltage source, so there is less loading on potentially sensitive circuits. This was always a problem when measuring voltages in valve amplifiers, because typical cheap analogue multimeters often used relatively high current movements, and this loaded the voltage under test giving incorrect readings. Analogue multimeters usually had a rating of 'Ohms/Volt' - the 1mA movement described above uses 50k total resistance to measure up to 50V, so that would be rated at 1kΩ/Volt.

The better multimeters of yesteryear were rated at a minimum of 20kΩ/Volt up to 100kΩ/Volt (the Sanwa meter mentioned above was 500kΩ/Volt!). To obtain even higher measurement impedance, the better equipped workshops and laboratories back then used a VTVM (Vacuum Tube Volt Meter), offering an input impedance of around 10MΩ. These were followed by FET input transistorised units, and finally displaced by digital multimeters. Despite their popularity, digital multimeters are still very bad at some measurements, and are often not as accurate as we tend to think they are.

Using a 50µA movement, the multiplier resistor needs to be ...
R total = V / I = 50V / 50µA = 1MΩ
R mult = 1MΩ - 3,500Ω (meter resistance) = 996,500Ω
... which works out to be 20kΩ/ Volt. Again, this resistance can be made up by series connection of different values, but a 1MΩ resistor is perfectly ok. The error is much smaller than the tolerance of the resistor or the meter movement, at 0.35%. If you need greater accuracy you will need to use a trimpot with a series resistor as described above for the 1mA movement.
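The same arithmetic for the 50µA movement, including the 'ohms per volt' figure, can be sketched as follows (currents in microamps to keep the numbers exact):

```python
# Multiplier for a 50uA / 3,500 ohm movement reading 0-50V, plus the
# 'ohms per volt' sensitivity figure discussed above.
V_fs, I_uA, R_meter = 50, 50, 3500

R_total = V_fs * 1_000_000 / I_uA   # 1M ohms total for 50uA at 50V
R_mult = R_total - R_meter          # 996,500 ohms
ohms_per_volt = 1_000_000 / I_uA    # sensitivity is set by FSD current alone

print(R_mult, ohms_per_volt)        # 996500.0 20000.0
```

Note that the sensitivity (20kΩ/Volt here) depends only on the FSD current, not on the range chosen.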
+ +That's all there is to multipliers - as stated in the beginning of this section, they are very easy to work out.
The situation is a little more complex when calculating a shunt for current measurement. Not so much because the calculations are difficult, but because you will be working with very low resistance values. It is also important to ensure that the meter is connected directly to the shunt - even a small length of wire in series may make readings uselessly inaccurate. The schematic diagram below shows not only the electrical connection, but also the physical connection to the shunt.

In most cases, it is easier to calculate (or measure) the voltage across the meter movement for FSD. If you don't know the resistance, it can be measured with a digital multimeter. The current from most digital multimeters is low enough not to cause damage to the meter, but the pointer may swing rather violently. Connect with reverse polarity to minimise the risk of bending the pointer.

Unless you are measuring low currents (less than 1A or so), the shunt resistance can be worked out using Ohm's law, and will be accurate enough for most purposes. This is covered below.

Assuming a 1mA movement with an internal resistance of 200Ω, as an example we wish to measure 5A. This means that 4.999A must pass through the shunt, with the remaining 1mA passed by the meter movement. The shunt resistance can be found with the following formula ...
Rs = Rm / ( Is / Im )   where Rs is the shunt resistance, Rm is the meter resistance, Is is the shunt current and Im is the meter current
So for our example,
Rs = 200 / ( 5A / 1mA ) = 0.04Ω

If we use only Ohm's law (having determined that there will be 200mV across the movement - 1mA and 200Ω), the shunt can be calculated as ...
Rs = Vm / I   where Rs is the shunt resistance, Vm is the meter voltage at FSD, and I is the current
Rs = 0.2 / 5 = 0.04Ω
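The two methods are easy to compare numerically. This sketch runs the exact ratio formula and the Ohm's law approximation side by side for the 5A example:

```python
# Shunt for a 1mA / 200 ohm movement reading 5A, calculated both ways.
R_meter, I_meter, I_full = 200.0, 1e-3, 5.0

# Exact ratio method: Rs = Rm / (Is / Im), with Is the current in the shunt
Rs_exact = R_meter / ((I_full - I_meter) / I_meter)

# Ohm's law approximation: 200mV across the movement at FSD
V_fsd = R_meter * I_meter
Rs_approx = V_fsd / I_full

print(round(Rs_exact, 5), Rs_approx)   # 0.04001 0.04 - about 0.02% apart
```

The approximation is within 1% whenever the measured current is more than about 100 times the meter current, exactly as stated below.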
This method will work to within 1% accuracy provided the measured maximum current is more than 100 times the meter current. One thing we have to be careful of with shunts is that the voltage 'lost' across them (known as the 'burden') is not excessive. This will reduce the voltage supplied to the load, and can result in significant errors, especially at low currents. For example, if we only need to measure 1mA, we can use the meter directly, but we lose 200mV across the meter. In the case of the 0.04Ω shunt calculated above, we lose ...
V = R × I = 0.04Ω × 5A = 200mV
... exactly the same voltage loss! It's not a great deal, but can be critical in some exacting tests or at very low voltages. 200mV is almost nothing with a 50V supply (0.4%), but is very significant if the applied voltage is only 1V (a full 20% loss). The voltage drop can be reduced slightly by using a more sensitive movement. For a 50µA movement with 3,500Ω resistance, the loss is ...
V = R × I = 3,500Ω × 50µA = 175mV
There's not much of a gain, but there are also not many alternatives. DC current measurement will always lose some voltage, so it is important that the voltmeter is always connected after the ammeter, so that the 'lost' voltage is taken into consideration. Where extremely low voltage drop is important, one must resort to amplification. An opamp can be used to amplify the voltage across a much smaller value shunt, but at the expense of circuit complexity and temperature drift. Digital panel meters are often (but not always) better than analogue movements for current measurements. Note that for AC current measurements, a current transformer is the best solution - see Transformers - Part 2 for more.
+ +The idea of a shunt is all well and good, but where does one obtain an 0.04Ω resistor? It can be made up of a number of wirewound or metal film resistors in parallel, or a dedicated shunt may be available. Obtaining high accuracy at such low resistances is very difficult though, and shunts are generally cut, machined or filed to remove small amounts of metal until the exact value needed is achieved. The shunt must be made from metal having a low temperature coefficient of resistance to prevent the reading being affected by changes in temperature - either ambient, or caused by the load current heating the shunt. Common shunt materials are Constantan (copper-nickel, aka Eureka), manganin (copper, manganese, nickel) and nichrome (nickel-chrome).
There is an easier way to calibrate a shunt, as shown in Figure 5. The voltage drop will be a bit higher than it should be, but you only need a few millivolts extra to be able to use the technique.
Now it is possible to use 2 × 0.1Ω resistors in parallel, giving 0.05Ω. The voltage drop at 5A will be 250mV, but you have the advantage of being able to use standard tolerance resistors, which can represent a significant saving. The power is only 1.25W at full current, so a pair of 5W resistors will barely get warm. The trimpot can be adjusted to give an accurate reading, without having to resort to close tolerance resistors with impossible values. As an example for the above 5A meter, we could use a 100Ω trimpot in series with the meter. The value is not particularly important, but needs to be within a sensible range.
What is 'sensible' in this context? Easy. We already know that the meter needs 200mV for full scale and that we will get 250mV across a 0.05Ω shunt, so we need a resistance that will drop 50mV at 1mA.
R = V / I = 0.05 / 0.001 = 50Ω
Since we are using a pot, it is advisable to centre the wiper under ideal conditions to give maximum adjustment range (to allow for worst case tolerance), so a 100Ω pot is ideal.
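The trimpot sums above can be written out as a short calculation. This is a sketch assuming the 0.05Ω shunt and 1mA / 200Ω movement from the text; the variable names are mine.

```python
# Hedged sketch of the series-trimpot calibration arithmetic (Figure 5).
r_shunt = 0.05      # two 0.1 ohm resistors in parallel
i_full = 5.0        # full-scale load current, A
i_meter = 1e-3      # movement FSD current
v_meter = 0.2       # movement voltage at FSD (1mA x 200 ohms)

v_shunt = i_full * r_shunt                 # 250mV across the shunt at 5A
r_series = (v_shunt - v_meter) / i_meter   # resistance to drop the excess 50mV
print(round(r_series, 1))   # 50.0 ohms -> a 100 ohm trimpot centres nicely
```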
For AC measurements, a current transformer is better than a shunt, as it imposes no restriction on the load current. These are covered in detail in the article Transformers - Part II. The link takes you straight to the section that covers current transformers. They are also discussed (briefly) below.
There is an alternative method for measuring DC (or AC) current with almost no loss at all. ICs are available that use a thick conductor and a fully isolated Hall-effect sensor to measure the magnetic field generated as current passes through the conductor. An example is the Allegro Microsystems ACS770LCB-050B, a bidirectional Hall-effect sensor that can handle up to ±50A, providing ±40mV/A output, centred on the quiescent output voltage of 2.5V. A unidirectional (DC only) version is also available.
With an output voltage of ±2V (referred to 2.5V), the output voltage range is from 500mV to 4.5V over the full range. While these are very useful devices, they are not inexpensive, and require additional electronics to obtain a usable output. If you need to sense low current, then be prepared for a fairly noisy output signal. Some of the noise can be removed with a filter, but that further increases complexity.
The device mentioned is not the only one of its type, but is representative of those you can use. Another is the Honeywell CSLA2CD as described in Project 139. This is a more versatile device (which is also likely to be quieter), but they are not inexpensive, at around AU$40-50 each depending on supplier. Even the Allegro IC costs a bit more than you might expect, at around AU$13.00 each (one off price). There are many other current sensor ICs available, but this is not the place to go into great detail.
You may have seen expanded scale voltmeters used in cars to monitor the battery voltage. Since no-one is interested if the battery measures less than 10V (it's dead flat!), and it should never exceed 15V, a meter that measures from 10V to 15V is nice to have. This is surprisingly easy to do, and although absolute accuracy is not wonderful in a simple application, it is more than acceptable for the purpose.
By using a zener diode, a base reference is established, and the meter only measures between the reference and actual battery voltage. We will use a 1mA movement again (as shown above). This scheme can be adapted for any desired voltage. The voltmeter only needs to measure the voltage drop across the zener feed resistor, which is needed to ensure that an acceptable current flows in the zener diode. The 1mA drawn by the meter is not enough to obtain a stable voltage.
The multiplier is worked out in the same way as before ...

Rtotal = V / I = 5 / 1mA = 5kΩ
Because the multiplier resistance is much smaller than before, we must take the meter resistance of 200Ω into consideration.
Rmult = Rtotal - Rmeter = 5000 - 200 = 4800Ω
A 4.7k resistor will introduce a small error, but a 3.9k resistor in series with a 2k trimpot will allow the meter to be set very accurately. The zener feed resistor value is not critical, but should ensure that the zener current is between 10% and 50% of the maximum for the device (around 10% will usually give the best result). Assuming a 10V 1W zener, the maximum current is ...
Iz max = P / V = 1 / 10 = 0.1A = 100mA (max.)
Using Ohm's law, we get a resistance value of 470Ω for a zener current of about 10mA at 15V. This will fall as the voltage is reduced, and extreme accuracy with a zener diode is not possible. This arrangement should work fine as a 'utility' meter. Depending on the zener diode's characteristics, it can be advantageous to run it at a higher or lower maximum current. If Rz is less than 270Ω the accuracy may suffer. This basic idea has been around for as long as I can remember, and has been used in countless car (or boat, etc.) battery voltage monitors.
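The expanded-scale meter arithmetic above can be checked with a short calculation. This assumes the 10-15V range, 1mA / 200Ω movement and 10V zener from the text; variable names are mine.

```python
# Sketch of the expanded-scale (10-15V) voltmeter sums from the text.
v_span = 15.0 - 10.0    # metered span above the 10V zener reference
i_meter = 1e-3          # 1mA movement
r_meter = 200.0         # coil resistance

r_total = v_span / i_meter       # total multiplier resistance
r_mult = r_total - r_meter       # less the coil resistance
r_z = (15.0 - 10.0) / 10e-3      # zener feed for ~10mA at 15V
print(round(r_total))   # 5000 ohms
print(round(r_mult))    # 4800 ohms (3.9k plus a 2k trimpot in practice)
print(round(r_z))       # 500 -> nearest standard value 470 ohms, as used
```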
If you are fussy (and want it to be accurate, what a nerve!) you can use a voltage reference IC instead of the zener diode. The LM4040-N-10.0 could be used instead (the 10V version). The series resistance (Rz) may need to be changed to limit the current to a bit less than the rated maximum 15mA (390Ω will be fine), and you can expect it to work very well indeed. The calculations don't change, but you must ensure that the reference IC's maximum current is not exceeded. Rmult is not changed if you use the same meter (or you can use a trimpot so it can be adjusted).
My thanks to 'Roger' who wasn't happy with the zener, and tested using the LM4040. This worked much better, giving a very accurate reading.
Note that you can also use a TL431 or equivalent as a reference, but these need to be programmed (with a pair of resistors) to the voltage required. The 'worst case' adjust pin current is 4µA, so the divider won't affect the reading much (if at all). These ICs are probably easier to find than the suggested LM4040, many of which are only available in an SMD package.
DPMs (Digital Panel Meters) are often very attractive, not just for their perceived accuracy, but because they can often be obtained for the same or less than a good analogue meter movement. They also have better linearity than most of the cheap movements, so there are some real benefits. Most are available with a quoted sensitivity of 200mV (199.9mV full scale), so are comparable to analogue meters in terms of voltage drop for current measurement. They have the great advantage of a (typical) 100MΩ input impedance, so voltage loading is extremely low. In addition, they will measure positive and negative voltage or current - centre-zero analogue meters can do this too, but they are hard to find.
Most DPMs are classified as 3½ digit, meaning that they display up to a maximum of 199.9mV. The most significant digit can only be blank or 1, and the other 'half' is used to display a negative sign to indicate that the input is negative with respect to the common or ground terminal. This often means that much of the range is wasted if you want to display a range other than 0-1999. Note that most DPMs do not automatically select the decimal point, and there are extra pins to allow the user to select the position of the decimal point (or to ignore it completely). Analogue meters have no such limitation, because the scale can be calibrated with any units you wish, and covering any range.
Measuring voltage with a DPM is easy - most even come with instructions that show you how to do it. You do need to be careful to ensure that possibly destructive voltages cannot be coupled to the inputs. Like all ICs, the ADC (Analogue to Digital Converter) used is sensitive to excess voltage, and the IC can be destroyed. Although the following circuit uses a ½-wave rectifier, full-wave rectification is better (using a diode bridge). However, this may mean that a simple 'off-line' power supply cannot be used for the meter IC. The voltage divider should be re-calculated if a full wave bridge is used, because the ratio of peak to average is π/2 (about 1.57) - full-wave rectified 230V has an average value of about 207V.
Figure 5.1.1 shows the circuit of a DPM voltmeter I built recently. This is designed to monitor the output from my workshop Variac (variable transformer). To ensure an adequate voltage rating for Rdiv1, 4 × 100k 1W resistors were used in series-parallel, limiting the peak voltage across each to about 163V (the peak of 230V AC is 325V). 1W resistors were not used for their power rating, but for their physically large resistance element, which keeps the voltage gradient across the resistor surface relatively low. Because the Variac can deliver 0-260V, the voltage to the DPM will be 0-26mV, and this is a half-wave rectified signal. The meter averages the applied voltage. Note that the 5V supply must be isolated, because it could have the full mains potential on all terminals if the active (live) and neutral conductors are ever swapped around. This is critically important - the entire circuit (including power supply) must be considered as being at mains potential.
To obtain the (approximate) average value of ½ wave rectified AC, you divide the peak voltage by π (roughly 3.14). Based on this, and for an average signal of 23mV, the average input voltage is close to 104V (325 / π ≈ 103.5), so the voltage divider needs a ratio of ...
Vdiv = Vin / Vout = 104 / 23mV = 4522
For all reasonably high voltages, the division ratio is so high as to cause significant errors even with 1% resistors, and the use of a trimpot to adjust the value is strongly recommended. Since I used 100k for Rdiv1 (because I had 100k/1W resistors handy), the parallel combination of Rdiv2 and VR1 needs to be slightly more than ...
Rdiv2 = Rdiv1 / ( Vdiv - 1 ) ≈ 22Ω (actually 22.12Ω, but all values are approximate because using fixed resistors is not sensible)
50Ω (as used) allows VR1 to be roughly centred, and there is plenty of adjustment range. Needless to say, exactly the same technique can be applied to an analogue meter as well, but you need to allow for the much lower input impedance (perhaps 100Ω rather than 100M for the DPM that I used). As it turns out, with an average voltage of 104V and a resistance of 100k, the current is 1.04mA, so the meter can be driven directly (leaving out Rdiv2 and VR1). You will need to readjust the resistance though, because the (in)accuracy is 4% - much better results can be obtained, but most analogue meter movements will have a greater error than that built-in. A pot is highly recommended because the AC waveform is not very predictable, and large errors may result from waveform distortion. This also applies if the mains is full-wave rectified. The divider network is still usable as shown, but Rdiv2 should be reduced to 39Ω.
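The divider sums above can be checked numerically. This sketch uses the exact half-wave average factor of π, so the results differ very slightly from the rounded figures in the text; the variable names are mine.

```python
import math

# Hedged sketch of the Variac-monitor divider arithmetic (Figure 5.1.1),
# using pi for the half-wave average rather than a rounded factor.
v_rms = 230.0
v_peak = v_rms * math.sqrt(2)      # ~325V peak
v_avg = v_peak / math.pi           # half-wave rectified average
v_dpm = 23e-3                      # desired DPM input at 230V in

v_div = v_avg / v_dpm              # required division ratio
r_div1 = 100e3
r_div2 = r_div1 / (v_div - 1)      # lower leg of the divider
print(round(v_avg, 1))    # ~103.5V average
print(round(v_div))       # ~4502 division ratio
print(round(r_div2, 2))   # ~22.22 ohms -> a 50 ohm pot leg gives plenty of range
```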
For a more conventional application, Figure 5.1.2 shows a basic 0-50V digital meter. The resistor values are fixed in this case. Because of the high input impedance of the DPM, we can use 1M for the upper divider resistor. The division ratio is determined the same way as before ...
Vdiv = Vin / Vout = 50 / 50mV = 1000
Rdiv2 = Rdiv1 / ( Vdiv - 1 ) = 1M / 999 ≈ 1001Ω (Use 1k)
Using a 1k resistor is not an issue, because the resistor tolerance is much greater than the 1Ω difference in the calculated values. The same result can be achieved using 10k and 10Ω (or 100k and 100Ω), but there is not normally any need to aim for very low impedances. You may find that the meter displays 'rubbish' values in the least significant digit - this means that noise is being picked up. Use of a lower impedance divider may reduce that, or you can place a cap (100nF or so) in parallel with RDiv2. If you need the circuit to be particularly accurate, then you will need to use 0.1% resistors or add a pot so it can be adjusted. A pot is a lot cheaper and easier to get than 0.1% resistors, especially if you end up with odd values.
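The two-resistor divider calculation generalises easily. This is a minimal sketch of the arithmetic above; the function name is mine.

```python
# Hedged sketch of the plain two-resistor divider sum (Figure 5.1.2 values).
def divider_lower(r_top, v_in, v_out):
    """Lower resistor for a divider that reduces v_in to v_out."""
    ratio = v_in / v_out
    return r_top / (ratio - 1)

r2 = divider_lower(1e6, 50.0, 50e-3)   # 0-50V in, 50mV to the DPM
print(round(r2))   # 1001 ohms -> use 1k; well inside resistor tolerance
```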
DPMs have a benefit as ammeters, but usually only if you don't need the full scale. Since the typical sensitivity is 200mV, by using only a part of the maximum reading, you can use lower shunt resistances than with analogue movements. You can also use IC current monitors instead of a shunt if preferred (see Section 4.1 for details).
The procedure for calculating the shunt is exactly the same as for an analogue meter, except that there is no meter current. You simply need to calculate the shunt based on the meter voltage for the desired current reading ...
Rs = Vs / I = 50mV / 5A = 0.01Ω
This gives a much lower shunt resistance, because only 50mV is needed at the meter input. The circuit shown will work up to 20A (19.99A to be exact) with the same 0.01Ω shunt resistor. Note that the input is shown on the negative supply, with the +ve input going to the positive supply via the load. If the input and power supply -ve terminals are not at the same potential, then the supply for the meter must be floating - it cannot be grounded. If you wanted to monitor the current in the positive supply lead for example, you need a floating auxiliary supply.
There are AC ammeters available that are supplied with a current transformer. These are often part of a 'combination' module that displays voltage, current and power. Some include cumulative power (kWh) and/or power factor. A simple digital DC meter can be used if a rectifier is added, and it needs to be an active circuit (using opamps) for a good result. Current transformers impose no limit on the current (they only have the resistance of the current-carrying cable), but have a low output - typically 100mV/A (1,000:1 ratio transformer). The secondary must be fitted with a 'burden' resistor (usually 100Ω for small transformers) that converts the output current to a voltage. A 1,000:1 transformer outputs 1mA/A.
You need to use an 'active' rectifier if you expect accurate readings, and perhaps amplification to measure the current properly. The output of the CT (current transformer) is 1mA/A or 100mV/A with the 100Ω burden, so a 5A load gives 500mV average rectified output. Measuring down to less than 100mA is easy with amplification. The nice thing about a CT is that the meter, rectifier and power supply are totally isolated from the mains. This provides far greater safety than a directly connected circuit, and the losses are low. Current transformers are available for currents ranging from ~5A to 500A or more (they tend to become large and expensive for higher current versions).
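The current-transformer scaling described above reduces to one line of arithmetic. This sketch assumes the 1,000:1 ratio and 100Ω burden from the text; the function name is mine.

```python
# Hedged sketch of CT output scaling: secondary current is the load current
# divided by the turns ratio, converted to volts by the burden resistor.
def ct_output_voltage(i_load, turns_ratio=1000.0, r_burden=100.0):
    i_secondary = i_load / turns_ratio    # 1mA per amp for a 1,000:1 CT
    return i_secondary * r_burden

print(ct_output_voltage(5.0))   # 0.5 -> 500mV for a 5A load, as stated
```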
In general, this would have to be considered a silly topic. After all, one can buy a multimeter quite cheaply, and the switching is a nightmare. For specialised applications though, there may be perfectly good reasons for making a multi-range meter. Bear in mind that the circuit shown below does not include protection for the DPM, so if 2kV were applied when the 200mV range was selected, the meter would be destroyed. The attenuator values assume that the input resistance of the DPM is much greater than 10MΩ - preferably by a factor of at least ten!
You need a 2-pole 5-position rotary switch, and the insulation must be sufficient for the maximum voltage. Any protection circuit that you add must not load the external circuit, otherwise the meter may appear as a short circuit to high voltages. As noted, this is basically a silly idea, but it may be useful (even essential) for some applications where a conventional multimeter would be inappropriate. No, I can't think of such a situation either.
Similar comments apply to the ammeter. In this case, the resistors and switch must be capable of handling the current, although this only becomes an issue on the highest current range. Like the multi-range voltmeter, the usefulness of Figure 11 is somewhat dubious, although it would be nice on a laboratory power supply. The ranges can be expanded or moved - for example you may find that ranges from 2mA to 20A suit your needs. Simply reduce all resistance values by a factor of 10, and that's what you have. I don't fancy your chances of getting a rotary switch that can handle 20A though, and that's why almost all meters with a high current range use a separate input connector.
These are needed for all multimeter circuits, as well as dedicated meters that have a number of different ranges. The calculations are based on a number of different requirements, but the thing that's most important is the current drawn by the meter movement. For digital panel meters, this is negligible, but you must know the input impedance/resistance of the meter. Assuming 1MΩ is 'reasonable' as a first guess, but you need to know the actual impedance or the switched attenuator will not be accurate.
We'll use a 50µA moving coil meter as an example, as these provide an input resistance of 20kΩ/volt. Anything less sensitive is not very useful, as it causes loading on the circuit being measured, leading to errors. Cheap multimeters use 500µA movements, resulting in an input resistance of 2kΩ/volt. This terminology may be strange to newcomers, but all it means is that if the voltage range switch is set to 1V, the meter load will be 20k (or 2k). When set for 10V, this will become 200k (or 20k). It's not relevant to digital meters, as most have a constant input resistance of (usually) 10MΩ (or 11MΩ).
Each resistor in the attenuator is determined by the voltage range and meter current. Look at the attenuator shown in Figure 11, and you'll see a progression of values, ranging from 9MΩ down to 1k. This assumes that the input resistance of the DPM is much greater than the total attenuator resistance (10MΩ), which may or may not be the case in reality. The values for a moving coil meter are harder to calculate, as more ranges are required. The most common is a 1-2-5 sequence, as this allows you to select a range where the meter's pointer is within the 20-80% range.
The resistors are all in a series string, and on any given range they limit the meter current to 50µA at the maximum voltage. R9 is always in circuit, and it includes the resistance of the meter's coil. If the coil is 1,200Ω, R9 will be 18.8kΩ (and most likely a fixed resistor in series with a trimpot for calibration). As an example, on the 10V range, R1, R2, R3 and R9 are all in series, so a total resistance of 200k is in series with the meter. 10V divided by 50µA is (not unexpectedly) 200k, so with 10V applied on the 10V range, 50µA flows through the meter and will show '10' on the scale.
Feel free to work out the current for any range with the full voltage applied, and it will always come to 50µA meter current. This is how 99.9% of all analogue meters are wired. The resistances change with the meter's FSD sensitivity, so for a 500µA meter, all resistances will be divided by ten. With other meter types (in particular VTVMs and their 'solid-state' equivalents), the attenuator is designed to provide a voltage to the measuring circuit, be it valves (vacuum tubes), JFETs or based on an opamp. This allows the attenuator to be a higher impedance, with a constant 10MΩ being common.
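The series-string arithmetic described above can be sketched for a few ranges. The per-range totals follow the text (V / 50µA, with R9 absorbing the 1,200Ω coil); the particular range list is illustrative.

```python
# Hedged sketch of a series multiplier string for a 50uA movement.
# Each range's total resistance is V / 50uA; successive differences give
# the individual string resistors, and the lowest-range element (R9 in
# the article's Figure) absorbs the coil resistance.
i_fsd = 50e-6           # full-scale deflection current, A
r_coil = 1200           # meter coil resistance, ohms
ranges = [1, 2, 5, 10]  # volts, lowest range first (illustrative subset)

totals = [round(v / i_fsd) for v in ranges]   # total series R per range
string = [totals[0] - r_coil] + [hi - lo for lo, hi in zip(totals, totals[1:])]
print(totals)   # [20000, 40000, 100000, 200000]
print(string)   # [18800, 20000, 60000, 100000]
```

The first string element (18.8k) matches the R9 value quoted in the text, and the 10V range total is the 200k worked out above.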
The more 'advanced' techniques aren't shown here, as the number of different circuits and calculations would rapidly make this article far too long. You may also have noticed that AC voltage measurements aren't included. These almost always use a separate attenuator, which will use lower resistances, and include a (usually crude) rectifier. The AC ranges on cheap meters measure the average value of the AC, and the meter is calibrated to show RMS. However, the measurement is only accurate with a low-frequency sinewave (such as 50-60Hz mains).
It almost looks like this section is pretty useless, but the final application allows you to do things that no normal multimeter will - measure very low resistances. 'Normal' analogue meters use a voltage source (most often a 1.5V cell) with a series resistance to suit the resistance range. The meter reads the voltage across the external resistor, so the scale is non-linear. This approach works, but not very well, as higher values are all cramped up at the lower end of the scale. Digital multimeters use a constant current, so the voltage across the DUT is directly proportional to its resistance.
There are many reasons one may want to measure very low resistance values. Transformer windings, loudspeaker crossover inductors (assuming you are actually interested in passive crossovers), or perhaps you need to be able to measure current shunts.
For very low resistance values you have two choices - either use a very sensitive voltmeter, or a high measurement current. Both methods have disadvantages. High sensitivity is difficult for DC amplifiers because of drift. Changes in temperature cause opamp offset voltage and current to change, and that affects the readings. While there are methods to (almost) eliminate drift, they are beyond the scope of this article.
High measurement current can cause the device under test (DUT) to heat, and that may (will) affect the resistance. Some things that have low resistance may not even be able to tolerate the kind of current that you may need to be able to measure them. In general, a maximum current of around 1A will allow most low resistance measurements without too many risks, but naturally the current source can be made variable, with switched ranges to provide a wide measurement range.
With a measurement current of 1A you will get a meter that can measure 0.2Ω full scale, so very low resistances can be measured. Needless to say, battery operation is not recommended if you aim to make a resistance meter that will provide 1A or more (although Li-Ion cells can be used). The meter is shown using a 4-wire system (aka Kelvin) so the lead resistance doesn't cause an error.
The use of the 4-wire system is essential for very low resistances. Two wires carry the current to the DUT, and the two measurement leads are then connected as close as possible to the device itself, with a component lead length equal to what will be used when the component is installed. This technique avoids errors caused by lead and connection resistances. While it is possible to null out the lead resistance, connection resistance tends to be variable, and can cause substantial measurement errors. This method is very common for this type of instrument. R1 is used to prevent possible damage to the DPM if it is subjected to an over-voltage condition.
The adjustable current source requires accurate calibration, and will be as good as your construction and choice of components allows. Temperature drift is always a problem with precision circuits like this one, but the circuit as shown will be quite accurate within the normal ambient temperature range. The current setting resistors (those connected to SW1b) need to be as accurate as possible. The zener diode can be replaced with a 3-terminal (adjustable) voltage reference, such as the TL431 or equivalent. These are more stable than a zener diode. The TL431 has a nominal voltage of 2.5V without adjustment, which is fine in this role.
The greatest difficulty is the switch used to select current ranges. Even the smallest amount of resistance will cause large errors. By switching both the resistor and the measurement point (the opamp's inverting input), the error is minimised because the switch resistance does not form part of the measurement circuit. R3 is included to ensure that the current source is switched off as you change ranges.
VR1 is adjusted so there is exactly 1V between the opamp's positive input and the 5V supply. When exactly the same voltage (1V) is developed across any of the current setting resistors, the current through it must be as specified. A tiny error is introduced because the base current of Q1 is added to the total, but this should amount to less than 0.1%. The 5V supply needs to be well regulated, and capable of at least 1.5A without any appreciable change of voltage. If the 0.2Ω range is not needed, you can leave out the 1Ω resistor and simplify the switching accordingly. Q2 can then be changed to a BD140.
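Because the servo holds 1V across the selected current-set resistor, each range resistor is simply 1V divided by the wanted current. The particular range list below is my own illustration, chosen to include the 1Ω / 1A case mentioned in the text.

```python
# Hedged sketch of the current-set resistor values for the low-ohms meter:
# with 1V held across the selected resistor, R = 1V / I for each range.
v_set = 1.0
range_currents = [1.0, 0.1, 0.01, 0.001]   # amps (illustrative ranges)
r_set = [round(v_set / i, 6) for i in range_currents]
print(r_set)   # [1.0, 10.0, 100.0, 1000.0] ohms
```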
Although a zener is not the most ideal voltage reference, they are easy to obtain. Precision voltage reference diodes are available, but they are relatively expensive and only stocked by a few major parts suppliers. The zener is deliberately operated at a relatively high current (about 100mA) so that it will get reasonably hot. This helps to stabilise it against ambient temperature variations, so the circuit will take a few minutes to settle down after power is applied.
This circuit can also be used as a stand-alone low ohms adaptor. It obviously needs the power supplies, but you can use your multimeter to measure the voltage across the DUT. The resistance is read as a voltage (the same way that your meter does it internally), with the appropriate conversion based on the current source setting.
The second section of the low ohm meter circuit can be used in conjunction with an analogue movement if you prefer. You will need to apply your own multiplier to the scale and add any necessary extra resistance for calibration, but it will work just as well. You will have to make your own scale - see conclusion, below.
The metering systems described here should be considered a guideline, rather than usable circuits in their own right. By following the information shown, you will be able to create a meter for almost any measurement for which meters can be used. If AC metering is needed, then I suggest that you look at the various meter circuits in the Projects pages.
Although it may seem unlikely, this article has only covered the basics. Metering is widely used for many different applications, and it is impossible to cover every possibility in a short article. It is hoped that the information proves useful to anyone who has been wondering exactly how to go about adding a meter to their latest power supply project, or who has a real need to measure low resistances.
It should be noted that for AC voltage or current measurements, the addition of a true RMS converter IC is highly recommended. AC measurements that are not RMS are misleading, and cause errors in calculations. This adds another layer of complexity, but it's worth every cent. Suitable examples are shown in Project 140, and while they are fairly expensive ICs, the extra cost is well worthwhile. IMO, any complex waveform AC voltage or current measurement that isn't true RMS is pretty much worthless.
One final point - scales. It is often difficult (or impossible) to get a meter scale that is calibrated with the units you want. The resolution of modern printers is more than acceptable to allow you to create your own scale, which can then be printed. Ink-jet photo printing paper gives an excellent finish, and after you have cut the scale to fit, it can be attached over the existing scale with spray adhesive. Make sure that there is sufficient clearance for the pointer, and avoid 'whiskers' of paper that can cause the pointer to stick. While the meter is dismantled, be careful to ensure that no magnetic materials (iron filings, etc.) are allowed to enter the gap, as these will cause the meter to stick and are a real pain to remove (personal experience - I once worked in an instrument repair lab).
+ +![]() ![]() |
![]() | + + + + + + + |
Elliott Sound Products - Electret Microphones
Of all the microphones ever devised, the electret has taken the #1 position by a significant margin, and in a remarkably short time. First appearing in the 1970s, they are used in the cheapest PC microphones, the vast majority of all new telephones, high quality recording applications and fully certified noise measurement systems. MEMS (micro-electro-mechanical systems) microphones are now starting to make serious inroads, but we can expect electret mics to remain dominant in many fields for some time to come.
No other mic has covered such a wide range of applications or had the same range of prices - from perhaps $1.00 or less right through to $1,000 or more. Once considered the 'poor man's' solution, even very cheap electret capsules can give higher performance than very expensive dynamic microphones. There are limitations of course, but this applies to every microphone type - none is perfect for all applications.
This article looks mainly at the myriad powering schemes that have been used. Quite a few are already described in other ESP pages, but the purpose of this collection is to examine the different schemes to give the user a better idea of the options available. We will also look at the advantages and disadvantages of some of the schemes.
Some people's ideas are very well engineered, while others are incredibly complex for no expected benefit. It is also extremely difficult to determine where some of the ideas first appeared, and who was responsible. This makes it hard to give credit because I wasn't able to determine the original designer in several cases.
Early electret mics used a 'pre-polarised' diaphragm, with a vacuum deposited metallic coating to make the diaphragm conductive. These mics were unreliable, and often lost their pre-polarisation charge. This rendered the mic useless. The current mic capsules are almost exclusively 'back electret' - the diaphragm backing plate is both the second part of the capacitor and holds the electret 'charge'.
These mics are available in a wide range of sizes, and although the most common are omni-directional (pick up sound more or less equally regardless of direction), directional versions are also available. The back electret principle keeps the electret material away from potential contaminants, and the latest capsules have a long life and stable operating conditions. They are so good that they are steadily replacing traditional high voltage DC polarised capacitor microphones in even the most demanding applications.
The primary drawback of electret mics is the internal preamp. The best measurement mics do not use an internal FET preamp, but expect the microphone preamp to have an input impedance of at least 1 Gigaohm, and often more. The electret capsule is connected directly to the preamp using a standardised thread and connection scheme. In most respects, the preamp is identical to that used by a true capacitor mic, except that there is no requirement for a polarising voltage (typically up to 200V).
An external preamp can be configured to handle high signal voltages - typically up to 4V RMS. Most measurement mics are around 50mV/Pascal (i.e. 50mV output at 94dB SPL). The maximum output level is reached at an SPL (sound pressure level) of 132dB.
By contrast, the typical electret capsule we buy from the local electronics supplier has an inbuilt FET, and is intended to be operated from as little as 1.5V from a single dry cell. Since these capsules operate from a low voltage, their ability to handle high SPL is limited in the extreme. Even if the supply voltage is increased, the internal FET limits the ultimate level - usually dramatically.
It doesn't help the beginner that electret capsules have their sensitivity commonly quoted as (for example) -35dB (±4dB) referred to 0dBV at 1 Pascal. This demands that the user calculate the output level to get something sensible. The above specification reduces to ...
V = 1 / antilog ( dB / 20 )
V = 1 / antilog ( 35 / 20 ) = 1 / antilog ( 1.75 )
V = 1 / 56.2 ≈ 0.018V = 18mV @ 1 Pascal
Therefore, a mic with a sensitivity of -35dB referred to 1V/Pascal has an output of 18mV at 1 Pascal or 94dB SPL. With cheap inserts, this varies quite widely though, and the maximum SPL is generally rather limited. Those I've tested are ok up to around 100dB SPL, but after that their distortion rises quickly. Distortion at 114dB SPL is usually too high, so these cheap mics must only be used with comparatively low levels (singers, close mics on a drum kit or right in front of a guitar amp will be badly distorted, for example). The same process is used for any other specification where the reference is 1V/Pascal.
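The conversion above is easily generalised for any sensitivity quoted in dB relative to 1V/Pa. This is a minimal sketch; the function name is mine.

```python
# Hedged sketch of the sensitivity conversion: dB re 1V/Pa to mV/Pa.
# (antilog(x) is simply 10**x.)
def sensitivity_mv_per_pa(db_re_1v_pa):
    return 1000.0 * 10.0 ** (db_re_1v_pa / 20.0)

print(round(sensitivity_mv_per_pa(-35.0), 1))   # 17.8 -> ~18mV at 1 Pascal
```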
The output level of microphones should be rated in millivolts per Pascal (mV/Pa), although there are many variations. Other conventions used include dBm or dBu (referred to 775mV) or dBV (referred to 1V) at 0.1 Pa (this will always be a negative number). The older standards persist in some countries and with some manufacturers. There doesn't seem to be any logical pattern, but it's very annoying to have to convert units all the time.

1 Pascal = 10 micro-Bar = 94dB SPL
0.1 Pascal = 1 micro-Bar = 74dB SPL
1 dyne/cm² = 0.1 Pascal = 1 µbar
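The dB SPL figures above follow directly from the standard 20µPa reference pressure. As a quick check (a small Python sketch, not from the original article):

```python
import math

P_REF = 20e-6  # 0 dB SPL reference pressure: 20 micropascals

def spl_from_pascal(pressure_pa):
    """Sound pressure level (dB SPL) for an RMS pressure given in Pascals."""
    return 20 * math.log10(pressure_pa / P_REF)

print(round(spl_from_pascal(1.0)))  # 94  (1 Pa)
print(round(spl_from_pascal(0.1)))  # 74  (0.1 Pa = 1 µbar)
```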
There are also noise ratings (which vary widely, both in output noise and the way it is specified), output impedance, recommended load impedance, polar response, frequency response, etc. Frequency response claims are meaningless without a graph showing the actual response, and for directional mics this should also indicate the distance of the mic from the sound source. Cheap microphones are particularly bad in this respect, and it is not uncommon to see the frequency response stated as (for example) 50 - 20,000Hz. Because no limits are quoted (such as ±3dB) this is pointless - any microphone will react to that frequency range, but may be -20dB at the frequency extremes, with wide variations in between.
Even cheap electret capsules usually have very good response, but only for omnidirectional types. Cheap directional capsules are a lottery at best and, like all directional mics, have generally poor low frequency response unless used very close to the sound source. In this case, the bass is often heavily accentuated (due to proximity effect).

This section expands on the information provided in Microphones. I do not intend to cover capsules that require an external FET preamp, because these require the constructor to have access to resistors of at least 1 Gigaohm (1,000 megohms), and often more. It also helps to have clean-room facilities, because even a tiny amount of contamination can cause reduced impedance, noise, or even failure to function at all. Mic capsules without inbuilt preamps are also generally at the very top end of the price structure. They are also rather delicate, and all too easy to damage.

Consequently, I will look at the more common types - bear in mind that some of this material is duplicated in the Microphones article or Project 93. This article primarily looks at powering the microphone as a complete system, but there are schemes that appear to present a complete mic system with only the capsule and a few other parts.
Figure 1 - Basic Microphone Capsule Powering
Figure 1 shows two of the most basic possible powering schemes, and these cannot be recommended for any serious use. There are many variants, with some using an inductor to increase the available output. At 1.5V (Version 'A'), the available supply is simply too low, and it really needs to be increased substantially to be useful for anything other than casual amateur recordings. PC sound card microphones (Version 'B') use a similar scheme, except the supply voltage is 5V from the PC supply, and some of these are almost useful for low quality, low level speech recording.
As shown in Figure 1, the standard PC microphone connector is a stereo mini-jack (3.5mm diameter). Earth and shield share the sleeve as always, the signal is on the tip, and DC is applied via the ring. Presumably, the signal and DC were separated to prevent possible problems caused by DC on the mic input circuit, but IMO the whole idea was somewhat misguided from the outset.
Apart from a few simplified examples, this article will concentrate on phantom power (DIN 45596). In all cases, phantom power should be provided at the nominal 48V. There are many pieces of equipment available now that rely on the fact that many phantom powered mics will operate fine at (often much) less than 48 volts. This is an extremely poor practice, because there are also phantom powered mics that will not operate at voltages much less than the nominal value. It is perfectly alright for the P48 voltage to be as low as 43V or as high as 53V, as this is within a tolerance of ~10%.
Traditionally, P48 is delivered to the two signal lines of a balanced connection via 6.8k resistors. You will often see these specified as 6.81k - the extra 10 ohms is immaterial, but implies that the resistors should be close tolerance. It has been claimed (although I don't recall where) that the resistors should be no more than 0.4% tolerance, but it's easy to select them to be much closer than this. I would suggest that 0.1% is more appropriate - this means they should be within about 13 ohms of each other. Closer matching means better common mode rejection, but there is a practical limit imposed by everything else in the signal chain.

Some mics use an internal cell or battery, and do not require phantom power. Most of these are hobbyist mics; they are also unbalanced, and are not suited to professional applications. For those mics that use a 1.5V cell as their power source, as you can imagine the maximum output is extremely limited, and they distort readily even with normal speech at close range.
Figure 2 - Microphone Powering Methods
Figure 2 shows the two main mic powering methods in use. Phantom power (aka P48) is by far the most common, and is recommended for all applications. The alternative T-Power should be avoided as it is incompatible with P48 (although adaptors exist, they may or may not work), and it is all too easy to plug in the wrong mic type and cause damage. As you can probably guess from the tick and cross, I have a pretty strong opinion of the two powering schemes.
Phantom power uses equal voltage on pins 2 and 3 with respect to earth, but T-Power systems use 12V DC between pins 2 and 3. In some systems, pin 2 is +12 volts with respect to pin 3, but there is always a chance that the polarity may be reversed. The DC voltage on these pins is usually earth (ground) referenced, but not always! There are also systems where the DC supply is floating - it's not referenced to earth at all.
In general, I would have to recommend that T-Power be avoided wherever possible. It is capable of providing up to 33mA through the voicecoil of a dynamic mic (P48 power does not put any current through a floating voicecoil or transformer). In addition, T-Power can provide as much as 66mA between the positive lead and earth (limited by the 180 ohm resistors on each signal line). In comparison, P48 is limited to a short circuit current of 14mA, which is only available if both signal leads are shorted to earth. Each lead is limited to a short-circuit current of 7mA (48V / 6.8k).
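These current limits fall straight out of Ohm's law, using the feed resistors quoted above (180 ohms per line for T-Power, 6.8k per line for P48). A quick Python check (illustrative only):

```python
# Worst-case DC fault currents, derived from the feed resistors described above.
t_power_coil_ma = 12 / (180 + 180) * 1000   # T-Power through a floating voicecoil (both 180R in series)
t_power_short_ma = 12 / 180 * 1000          # T-Power, one lead shorted to earth
p48_per_lead_ma = 48 / 6800 * 1000          # P48, one lead shorted to earth
p48_both_leads_ma = 2 * p48_per_lead_ma     # P48, both leads shorted to earth

print(round(t_power_coil_ma))     # 33
print(round(t_power_short_ma))    # 67 (the text rounds this to 66)
print(round(p48_per_lead_ma))     # 7
print(round(p48_both_leads_ma))   # 14
```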
The term 'T-Power' is from the German 'Tonaderspeisung'. This is also known as A-B powering and is covered by the DIN 45595 specification, but in some circles you might hear it called by other names as well (not all are for polite company, especially if you've just killed a mic by using the wrong powering scheme). Unlike phantom power, T-Power may damage dynamic and phantom powered mics (and possibly others as well) not designed for it, and is thankfully becoming less and less common. Predictably, phantom power will very likely damage a T-Powered microphone.

T-Powered mics may still be used with some film sound equipment, and for 'ENG' - electronic news gathering for radio or TV. Sennheiser still makes a range of RF 'condenser' microphones that are available in both T-Power and P48 versions. T-Power systems are their own worst enemy in many respects. Not only is there no strict convention for polarity (a potentially disastrous situation in itself), but in some cases the power supply may be fully floating and doesn't use the shield (earth/ ground/ pin 1) connection at all, while in others the supply is earth referenced. The electronics don't actually care either way, but it's another level of abstraction that gives people something to argue about, with no benefit to either approach. The alternate connection is shown in grey in Figure 2 (the connection shown dotted is not used with a floating supply).

Some of the older 'condenser' (capacitor) microphones had their own special power supply, and used a multi-pin connector for the different voltages. This was especially true of valve (vacuum tube) microphones, whose current demands were well above what phantom power can supply. These power supplies are used in-line with the mic, and typically present a standard XLR output with no voltages present. Many use a transformer to provide full galvanic isolation, thus preventing earth loops.

Finally, many test and measurement mics use a 4mA current loop supply. This is a completely different approach from the other methods, in that it is unbalanced. Despite claims to the contrary, an unbalanced system can be just as quiet and reject just as much noise as a balanced system, although in some extreme cases high frequency interference may cause problems. A complete 4mA microphone system using an electret capsule is described in Project 134. This system typically uses a 24V supply, and a microphone 'conditioner' provides a constant 4mA current to each connected mic.

This is where we actually start to look at the many different schemes that have been used. Remember, this article is devoted to electret mic capsules with an inbuilt FET preamp, so some of the more exotic schemes are not applicable. Most of these are already discussed in Microphones, which explains the different types and has a lot more generalised information.

Many of the published schemes for powering electret capsules via phantom power have tried very hard to ensure that the circuit is symmetrical. While many of these schemes may appear to be perfectly balanced, this may not be the case.
Figure 3 - A Selection Of Microphone Powering Circuits To Be Avoided
The schemes shown above are some of those you may come across on the Net. Unfortunately, after finding this particular set of drawings (which I have redrawn and changed slightly), I couldn't find it again to give credit. While 'C' and 'D' look nice and symmetrical and would probably work well enough, their impedance is too high to allow a reasonably long cable to be used.
'A' and 'B' are (IMO) unusable - while there is a convenient formula shown, there is nothing to indicate where the impedance figure of 492 ohms came from, and it is seriously doubtful that this is real. I was unable to verify the claimed value by calculation or simulation, and it will vary depending on the FET characteristics. Although these circuits appear to be impedance balanced, in reality they are no such thing, and the two upper circuits should be avoided. The other two circuits should be avoided too, because of their excessively high output impedance.

In addition, no measures have been taken to protect the capsule against the high transient voltages created when phantom power is switched on. This is a group of circuits that should never be used. To make matters worse, the mic capsule's case is not at earth potential, and cannot be connected directly to the housing. This increases the likelihood of hum pickup.

Rather predictably, if you need to use an electret capsule with phantom power, I suggest Project 93 - not only because it's my design, but because it is a proven circuit, is well behaved, and works very well. The capsule is earthed to minimise hum, and although it uses impedance balancing only (the signal only appears on one lead), no-one who has built it has had the slightest problem with the design.

There is no benefit to using a fully signal balanced circuit, and once the necessary protection is included they can become quite complex. The important thing for noise rejection is not signal balance, but impedance balance. If the impedance is exactly equal on the two signal wires, then noise rejection will be as good as the receiver can manage.

There is a vast amount of info around about the benefits of balanced systems, but in many cases this has been misconstrued - often to the point where the original reason has been lost completely. For anyone who has not done so, I strongly recommend that you read the article Design of High-Performance Balanced Audio Interfaces, because a proper understanding is important.
As noted in that article, there is no requirement whatsoever that a balanced circuit be symmetrical, or even that signal be present on each conductor. What is important is that the impedance of the two conductors is equal over the full frequency range. I have had countless email questions demonstrating that this point is not understood, with people insisting that "surely the circuitry should be symmetrical". It's totally unnecessary in all respects - especially because of one simple fact: symmetrical circuits aren't symmetrical at all. Just because every NPN transistor has a matching PNP transistor does not constitute symmetry, because the two devices are sufficiently different (due to manufacturing processes) that a perfectly symmetrical circuit is impossible. Nor is it necessary - it may please the eye, but it makes no difference to the sound.

As already described briefly, one of the most critical applications of all often uses unbalanced connections. This is in the area of noise measurement, which is critical not because it really matters, but because there is legislation behind it. I'm not about to launch into a diatribe about the noise measurement industry, but it is important to understand that measurements may be taken to extreme accuracy and the results used in court, yet unbalanced cables are considered perfectly alright. This is easily proved of course, and if balanced connections were found to be superior, they would be used.

Unbalanced connections are regarded as inferior by most professionals, but they are every bit as good as balanced if done correctly. The signal travels along the inner conductor, and this is protected from external noise by the shield. High quality coaxial cable is readily available, and it may have a far better shield than many balanced microphone cables.

Provided the impedance is low and high quality cable is used, almost no microphone needs to have a balanced connection. The balanced line is really based on convention, but it also adds a secondary means of reducing external noise. Because microphones are a floating source (having no secondary connection to other equipment), the balanced connection is overkill. Of course, it does absolutely no harm either, and the vast majority of all professional equipment uses balanced interfaces as a matter of course. Balanced connections are needed for phantom powering because the DC voltage is common mode (present equally on each signal line), and for this alone there's a good case for using all mics in balanced mode.

Balanced lines became common because of the telephone system (which uses unshielded twisted-pair (UTP) cable). While fixed line telephones are considered to be rather 'old hat' these days, the phone network provided a vast amount of technique, nomenclature and convention, much of which has endured in audio even though the need or reason may no longer be apparent. Even the standard 48V phantom voltage is taken directly from the phone system, which has used 48V since phones were first implemented on a large scale.

Completely beside the point, but interesting, is the reason that the phone system uses -48V. The negative phone line (with respect to earth/ ground) is used to prevent corrosion of the phone lines. If the lines were positive with respect to earth, electrolytic action would create oxygen on the phone lines, leading to conversion of the copper wires to copper oxide, which is a (poor) semiconductor, and the wire would eventually be eaten away completely. This was found during research into corrosion by Sir Humphry Davy for the British Navy in 1834. It's called 'cathodic protection' when applied to pipelines, ships, etc.
Figure 4 shows the P93 mic capsule amplifier. This circuit is used by many people worldwide, and has extremely good performance for such a simple amplifier. The transistors are arranged as a Class-A opamp, with the microphone connected to the non-inverting input. Open loop gain is over 60dB, and open loop frequency response is within 1dB from 2Hz to just under 30kHz. It will outperform most electret mic capsules easily.
Figure 4 - ESP P93 Electret Capsule Powering Circuit
Normal operating gain as shown is about 10dB (3 times) but it's easy to have unity gain. Just reduce R8 to 1k (note that R1 may need to be increased to get symmetrical clipping - try ~82k). Frequency response extends from below 8Hz to over 100kHz within less than 0.5dB, and the output voltage can be as high as 2V RMS, with distortion typically below 0.02%. When gain is greater than unity, there is a little more output level available before clipping. The output is pseudo-balanced, which in this case means that it is balanced for impedance, but not signal.
There are other circuits circulating on the Net that are also high performance, but you do need to be careful to make sure the circuit you choose will work as claimed. Many professional mics use comparatively simple circuits, and there are a few 'ready-made' electret mics that are phantom powered. Some will accept 'phantom' powering with voltages of 15V or so, rather than the usual 48V. Several circuits require that the mic capsule is modified to make it 3-wire. While this certainly works with a (genuine) WM61A capsule, it's less certain with substitutes.
Figure 5 - Fully Balanced Electret Capsule Powering Circuit
The circuit above is published in a few places with various changes - this is my version, which is quite different from most of the others. It's based on a circuit that's claimed to be the schematic for a Behringer ECM8000 microphone. I can't comment on that one way or another, because very similar schemes are used by several manufacturers, with some having a JFET front end (rather than the bipolar transistor shown). These are often used with conventional capacitor capsules, and bias the JFET and mic capsule via 1G resistors.

As shown, the circuit has a gain of two, because Q1 is operated as a unity gain 'phase splitter', similar to those used in valve amplifiers. It's quite a good circuit overall (at least as simulated). Note that the positions of Pin 2 and Pin 3 are reversed compared to Figure 4, because of the connection of Q1. I've not built one, and can't comment on its noise performance. Q1 should be a low noise transistor, but how it compares with the P93 circuit shown above is unknown. Many similar circuits show the negative end of C3 connected to earth/ ground, which reduces output and increases noise. Should anyone build the circuit, you are essentially on your own. Feel free to let me know how well (or otherwise) it works in practice.
While electret mics are often thought of as being at the low end of the market, they are now very common for the highest quality measurement mics, and are also common for nature recordings and elsewhere where high sensitivity, relatively low noise and wide response are required. 'True' capacitor (aka 'condenser') mics will usually out-perform most electrets, and for the very lowest noise levels it's almost impossible to beat a large diaphragm capacitor microphone.

However, for the price, nothing else comes close to an electret capsule. Where it was once common to struggle by with a moving coil mic (in cheap sound level meters for example), now an electret is used, which has more output, wider response, and will usually have lower noise. The simple fact that electrets are now common in very expensive sound monitoring and measuring equipment is testament to the fact that they are no longer the 'cheap and cheerful' devices they once were.

It is somewhat regrettable (to put it mildly) that the Panasonic WM61A electret capsule is no longer made, as it was one of the great bargains of all time for its performance. While there are countless on-line vendors claiming that they have WM61A capsules for sale, unfortunately most are substituting whatever they can get in the same form factor (6mm diameter) and claiming it's the real thing. I have a small number of the real thing and quite a few 'fakes', and there is no comparison - especially at very low frequencies. For speech the substitutes are ok, but not for measurements where good LF response is required.

It's unknown if MEMS mics will ever be able to equal a good electret for noise measurement or recording applications. They are certainly getting better all the time, but it may be a challenge to get frequency response from 0.1Hz to 20kHz - something that is easily accomplished for under $100 with an electret capsule. Most that you'll see are limited to a lower frequency of around 100Hz, but some claim 20Hz (typically as much as 20dB down, which isn't exactly inspiring). Many also have a resonant peak at 4-6kHz, and while this is usually fine for voice applications, it's of no use for accurate recordings or noise monitoring.

Electronics is changing all the time, so at some stage in the (probably) not-too-distant future we may see MEMS mics taking a greater share of the market in more demanding roles. In the meantime, electret mics still give by far the best value for money of anything that's currently available.
Elliott Sound Products - Microphone Splitters
Introduction
1 - Impedance
2 - 48V Phantom Power
3 - Passive Splitters
4 - Active Splitters/ DI Boxes
5 - Active Microphone Splitters
6 - Preamp & Attenuator
7 - Signal & Clipping Detectors
8 - Buffers & Transformers
9 - Combining Channels & PFL
Conclusion
References
This article may appear to be part project, and the schematics shown will all work, but the primary purpose is to discuss the various options when an audio signal has to be split to feed the signal to two or more different pieces of gear. Commonly, a signal is taken from an instrument, and sent to a stage amplifier and a mixing console. It can be a direct feed (from the instrument's output) or from a dedicated 'line output' as provided on some instrument amplifiers. Microphone signals also commonly need to be sent to multiple destinations.
Signal splitters are common in recording and news gathering environments, especially where a live broadcast or recording is made of a live performance. There are many other applications as well, and there isn't a single solution to all possible requirements. If microphone levels are being split, it's common to use nothing more than a purpose designed transformer. There is inevitably some signal loss, but that's often preferable to an active solution because the noise penalty is generally much lower. However, passive splitting cannot be used where many destinations are required, because the signal would be greatly attenuated.

The signal from a microphone can vary from as little as a few millivolts up to 1V (RMS), depending on what is being recorded and the microphone being used. A loud singer, or a mic that's (often very) close to drums or instrument amp speakers, can produce much higher levels than you might expect, and losing a few dB of level is no big deal. However, if an orchestra, acoustic instrument or vocal ensemble is recorded with a comparatively distant microphone, the level will be low, and using a passive signal splitter will reduce the overall signal to noise ratio.

One of the most popular mics (for a very long time) is the Shure SM58, which has a typical output level of -54.5dBV/Pa (1.85mV at 94dB SPL). While most dynamic mics are similar, in practice the sensitivity varies over a fairly wide range. Capacitor (aka 'condenser') mics usually have a higher output level, while ribbon mics will generally be somewhat lower. Electret mics are also used sometimes, but many are not suitable for recording high level (loud) instruments.
So-called 'line level' is actually an undefined term - what you need to know is the peak signal level (in dBV or dBu) and the nominal impedance. For professional audio, the reference level is generally around +4dBu, but may be higher or lower in some systems. +4dBu is a voltage of 1.23V, and the 0dBu reference level is 775mV. Levels can also be referred to 1V RMS (dBV), and +4dBV is 1.6V RMS. Most home equipment operates at lower levels, typically around -10dBV (316mV), but again, some equipment will provide either higher or lower levels depending on the whim of the manufacturer.
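For reference, the dBu and dBV figures quoted convert to volts as follows (a small Python sketch illustrating the definitions; it is not part of the original article):

```python
def dbu_to_volts(dbu):
    """dBu is referred to 775 mV (historically, 1 mW into 600 ohms)."""
    return 0.775 * 10 ** (dbu / 20)

def dbv_to_volts(dbv):
    """dBV is referred to 1 V RMS."""
    return 10 ** (dbv / 20)

print(round(dbu_to_volts(4), 2))    # 1.23 V  (+4 dBu, typical pro 'line level')
print(round(dbv_to_volts(-10), 3))  # 0.316 V (-10 dBV, typical consumer level)
print(round(dbv_to_volts(4), 2))    # 1.58 V  (+4 dBV, ~1.6 V as stated above)
```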
It's common to refer to the separate outputs of a splitter as 'sends' - the signal is sent to one or more external destinations, most (if not all) of which are not part of the main installation. These will often be equipment owned and operated by radio or TV stations, or even online streaming services. Since the destination equipment could come from anywhere and be in any state of repair (or disrepair), it's important that a fault that affects one send does not interfere with the signal being sent to other destinations.

In most of the examples shown below, there is a single input for a microphone, a 'main' send (which goes to the system mixing console) and two auxiliary sends that can be used for outside broadcast (OB) equipment, live recording or whatever other function is needed. In some cases you might need to split a single microphone feed to multiple sends (perhaps 24 or more) for press conferences, political speeches, etc. Such a system must be active, and each send must be buffered to ensure that if one news gathering organisation shorts their signal line, it doesn't cut off anyone else's feed.

To get full galvanic isolation (meaning that there is no direct ohmic connection) between the various sends from a splitter, a transformer is the only real option. No transformer model numbers are given in this article, because it depends on where you are, what you can get, and what you can afford. Good trannies are expensive and cheap ones usually have poor performance, but even a cheap transformer will often give better results than an expensive (relatively speaking) active circuit. Active balancing circuits and ICs are available, but none provides the complete isolation that you get with a transformer.

When a microphone signal is split with a passive circuit, its level is always reduced. This is due to the load imposed by two or more mic input stages, each of which reduces the level as it's connected. The best way around this is to have a mic preamp at the splitter, but then there is the issue of gain control. These days it can be controlled via Ethernet with an app on a smartphone, but that adds considerable complexity, and everything must be secured against random (malevolent) punters who may find a way to hack the system. I will not even attempt to describe a system controlled via RS232 (serial), Ethernet, MIDI, WiFi, Bluetooth or any other remote system - most systems are proprietary and the details are inaccessible.
Note Carefully: In the majority of the circuits shown below, opamps are not shown with supply bypass caps. This is for clarity, but in all cases capacitors are required from each supply pin to ground, or between the supply pins. If caps are between supply pins, there should be at least one cap from one or both supply rail(s) to ground. Omission of bypass caps may cause oscillation, especially with fast opamps. However, no opamp is immune from oscillation if not bypassed properly!
To many people involved in audio, impedance is often a deeply misunderstood parameter. It's often quoted as being '600 ohms' for example, but for anything other than telephone systems it's generally arbitrary. The source impedance (microphone or 'line out') should be low, and the input impedance of the receiving equipment should be high. Impedance matching is only used when the lines are very long (from several hundred metres to two or three kilometres), where a matched impedance causes maximum power transfer with no reflections (which can manifest themselves as echoes if there's any additional delay - such as intercontinental phone links). This situation is extremely rare for any audio setup. Even in the telephone system, 600 ohms has been superseded by a 'complex' impedance which (for reasons unknown) varies from one country to the next.
Microphones generally have a stated output impedance of around 200-300 ohms, and the mic preamp should present an impedance that's at least 5 times greater. Most mic preamps have an input impedance of around 4kΩ or more, so they don't load the mic and reduce the output level. For example, if a mic has an output impedance of 300 ohms and you load it with 300 ohms, the signal level is reduced by 6dB because you have created a voltage divider. You then have to increase the preamp gain by 6dB to get the same level, so noise is greater - again by 6dB.
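The loading loss is just a voltage divider formed by the source and load impedances. A minimal Python check of the figures discussed here (illustrative, not from the original article):

```python
import math

def loading_loss_db(source_ohms, load_ohms):
    """Level change (dB, negative = loss) when a source drives a load directly."""
    return 20 * math.log10(load_ohms / (source_ohms + load_ohms))

print(round(loading_loss_db(300, 300), 1))    # -6.0  (matched load halves the voltage)
print(round(loading_loss_db(300, 4000), 2))   # -0.63 (300 ohm mic into a typical 4k preamp)
print(round(loading_loss_db(100, 10000), 2))  # -0.09 (100 ohm preamp output into a 10k load)
```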
The arrangement where the source has a low impedance and the load (receiving device) has a high impedance is known as 'bridging' (the term comes from telephony). Bridging loads are by far the most common in all areas of audio. Preamps may have an output impedance of 100 ohms, and the load will be 10k or more. There is negligible level reduction (less than 0.09dB) and signal to noise is not affected.

The common reference to 600 ohm lines comes (again) from telephony. This used to be the standard nominal impedance of the phone system, and 0dBm is represented by a power of 1mW into a 600 ohm load (775mV). dBu has replaced dBm in most instances now, indicating that we are interested in the voltage, and not the power. The voltage is the same for both, but dBu does not imply an impedance of 600 ohms.

There are some instances where high impedance signal sources (guitar and bass in particular) need the signal to be split so that it goes to both the stage amplifier and the mixing console. In general, this type of splitter (aka 'DI' - direct injection/ input) has a high to very high input impedance that acts as a bridging load. The input impedance can be from 100k to 10MΩ, and the signal to the stage amp is not affected. The DI box sends the signal to the mixer via a transformer or electronic balancing circuit. A transformer ensures complete isolation, but an active balanced driver does not. One solution is shown in the Project 35 page.
Figure 1 - Dynamic Microphone Equivalent Circuit
Capsule: Lc = voicecoil inductance, Rc = voicecoil resistance. Transformer: Lp = primary inductance, Rp = primary resistance, LL = leakage inductance, Rs = secondary resistance
Figure 1 shows the equivalent circuit of a 'typical' dynamic mic. The values will vary depending on the way the mic is made, and some may use a (relatively) high impedance voicecoil rather than a transformer. These are more fragile than low impedance voicecoils and may be more prone to failure. There are as many variations as there are models from the many manufacturers, so the above is merely representative.
With the values given above, the mic's electrical impedance is 300 ohms. As with any electro-mechanical device, the impedance is a combination of electrical and mechanical (acoustic) components, but the above makes no attempt to duplicate anything other than the electrical circuitry. The model is not meant to be especially accurate, but is close enough for you to get an idea of what's involved.

Phantom power is common for many capacitor (aka 'condenser') microphones, and it's also used to power direct injection (DI) boxes used in live sound. However, the current is very limited, because the standard phantom feed is via a pair of 6.81k resistors. This means that the maximum current possible (into a short circuit, so there's no voltage available) is only 14mA. If you need (say) 10V to run the electronics, then the maximum current you can draw is 11mA.
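The 14mA and 11mA figures come from treating the two 6.81k feed resistors as a single 3.405k source resistance, since both signal lines carry the same DC and the resistors are effectively in parallel. A short Python sketch (written for this article) illustrates the budget:

```python
V_PHANTOM = 48.0
R_FEED = 6810.0 / 2  # the two 6.81k feed resistors act in parallel for DC

def available_ma(volts_needed_at_mic):
    """Current (mA) that can be drawn while keeping volts_needed_at_mic at the mic."""
    return (V_PHANTOM - volts_needed_at_mic) / R_FEED * 1000

print(round(available_ma(0), 1))  # 14.1 (dead short - no voltage left for the electronics)
print(round(available_ma(10)))    # 11   (10 V retained at the mic, as in the text)
```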
Figure 2 - Phantom Power Feed System
Not included in the above is the essential protection circuitry needed in the mic preamp to protect it against the 48V supply. Project 96 shows a phantom power supply and protection network that is more-or-less typical, and Project 66 is a dedicated mic preamp that is also typical of a high-quality unit.
It's certainly possible to increase the phantom supply current by using a non-standard feed circuit, but that means that your mixer is now non-standard. Making changes to the standard circuit isn't recommended, so it's necessary to design the powered equipment so that it will be functional with any mixing console or other gear that provides 48V phantom power. If this isn't possible (many early capacitor mics with valve preamps for example) then a separate dedicated power supply has to be used.
Phantom power is very limited, and it's also important that a splitter passes the phantom power through to the microphone (or other equipment). However, it should only pass phantom power from the primary (main) mixer. If the auxiliary sends are connected to equipment that can also provide phantom power, the splitter should be designed to not pass phantom power if it's turned on at an auxiliary destination (anything other than the main mixer), and to ensure that phantom power from the main mixer is not passed through to the auxiliary sends.
Failure to ensure proper phantom power isolation could result in damaged equipment. Only the primary/ main mixer should be able to provide phantom power to the microphone; it should not be accepted from (or passed through to) any of the additional sends from the splitter.
All dynamic microphones can accept phantom power - even though it's not needed. No damage will occur, because the same voltage is applied to each end of the transformer or voicecoil, so no current flows. However, if phantom power isn't needed by the end equipment it should be turned off - always!
If you want a completely passive system, you have the choice of either a transformer or resistive splitter. A transformer system has lower losses and can provide galvanic isolation (no resistive path between separate sends), but is costly. You can get cheap transformers, but they will almost certainly have poor performance. A low cost transformer will typically suffer from one or more (perhaps all) of the following ...
Transformers that can safely be used at levels of +4dBu or above with full range material (extending from 20Hz to 20kHz) are expensive. They aren't without loss either, and you can expect to lose between 3 and 6dB of signal level. This isn't usually a problem when they're operated at 0dBu, but if used with a microphone the loss of level will cause an equal increase of noise, because the mic preamps have to be run with extra gain to make up for the loss. When transformers are used, the primary inductance should be as high as practicable to ensure that there is the minimum possible loading on the microphone at all frequencies.
Resistive splitters are cheap to build, but they don't provide any galvanic isolation between the separate sends (increasing the risk of hum loops), and the insertion loss is greater than a transformer. However, they can never clip or saturate at any level, and distortion is virtually zero. Using a resistive splitter for a mic signal would generally be considered a very bad idea, because the loss through the splitter means that more gain is needed at the mic preamps so noise is increased proportionally.
Figure 3 - Resistive And Transformer Splitters
Examples of both a resistive and transformer splitter are shown above. These are equally suited for 'line' level (+4dBu) or 'mic' level (-40dBu, but highly variable). The resistive splitter wins on cost, but the transformer version is a far better option overall. However, it also comes with a fairly significant cost penalty, and that can be a major disincentive if you have to pay from around $50 to over $100 each for the transformers. As noted above, cheap transformers almost certainly won't be up to professional standards. The same transformer often can't be used with both mic and line levels, and you may need splitters for both types of signal. Note the electrostatic (Faraday) shields on the transformer - each winding should have its own shield as shown, or noise can be coupled capacitively from one winding to the next.
The resistive splitter shown is primarily for interest's sake. I wouldn't use it, and I suggest that you don't either. The losses are such that all sends will be attenuated (compared to using the mic directly), and this increases noise because the mic gain has to be increased to compensate for the signal loss. While a fault on one of the auxiliary sends can't reduce the signal level on the other sends to zero, it will attenuate it even further than normal. Use of phantom power is dubious - for the most part it cannot be recommended because the voltage will be fed to all sends as well as the mic. Capacitor isolation is possible, but then extensive protection needs to be provided on the send lines.
The 'Earth Lift' (aka 'Ground Lift') switches and the RC network values for each are likely to be the subject of much debate. In some cases, total isolation may be the best, but that can only be achieved with the transformer version. The resistor (Re) will typically be anything from 10 ohms up to perhaps 1k or more, and the capacitor (Ce) is generally around 100nF. With the resistive splitter, the earth resistance should be kept to a fairly low value, or hum pickup from the cables is likely. The values will be a compromise in all cases, and may need to be determined by experimentation.
Both splitters shown must be enclosed in a shielded metal box, which should be earthed to the main input-output connection. The straight through (Main) output is the one that goes to the primary mixing console - most often the FOH (front-of-house) mixer for live performances.
The passive transformer based splitter has two major disadvantages ...
In all passive systems (including those using transformers), the extra load on the mic due to it having to feed several mixers is only part of the problem. Cables have capacitance, ranging from around 42pF/ metre (low capacitance types) up to 105pF/ metre for 'normal' shielded mic cable. Since there may be several long cable runs (especially if the signal is simultaneously provided to an OB van), the total capacitance can become very high. This capacitance alters the response of the microphone, so the simple act of plugging an extra cable into a splitter causes the frequency response to change.
If 100 metres of cable is connected to a microphone, the capacitive loading will be between 4.2nF and 10.5nF, depending on the cable used. In a large venue (especially outdoors) there may be a great deal more than 100 metres, and the mic will be affected. The only way around this problem is to use active splitters, which buffer the signal so the mic only 'sees' the cable between its own socket and the splitter.
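As a rough sketch of the effect, the calculation below uses the 300 ohm source impedance from the Figure 1 model (an assumption - rated impedances vary) to estimate the capacitive load and the -3dB corner it forms. Note that this purely resistive model puts the corner well above the audio band; in a real mic the inductive (and mechanical) components of the source impedance interact with the capacitance at much lower frequencies, which is why the response change is audible.

```python
import math

# Capacitive load of a cable run, and the -3dB corner it forms with a
# purely resistive source. 300 ohms is taken from the Figure 1 model.
def cable_load_nf(length_m: float, pf_per_m: float) -> float:
    return length_m * pf_per_m / 1000.0

def corner_khz(r_source_ohms: float, c_nf: float) -> float:
    # f = 1 / (2 * pi * R * C)
    return 1.0 / (2.0 * math.pi * r_source_ohms * c_nf * 1e-9) / 1000.0

for pf_per_m in (42, 105):
    c = cable_load_nf(100, pf_per_m)
    print(f"{pf_per_m} pF/m over 100 m -> {c:.1f} nF, "
          f"-3dB near {corner_khz(300, c):.0f} kHz (resistive model only)")
```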
By definition, an active splitter uses transistors, FETs or opamps to amplify and buffer the signal as needed. This means that it needs a power supply, and this may be from the mains (via a suitable transformer and power supply), batteries or 48V phantom power. Those using phantom power are very convenient, but the available current is low (typically no more than 10mA) so the circuitry must be optimised for fairly low current. Any power supply used has to be safe under all possible conditions, since a microphone is a shock hazard for anyone who plays guitar or bass because the strings of the instrument are always earthed via the musician's amplifier(s).
Figure 4 - Active DI (Direct Injection) Splitter (From Project 35)
The above is a splitter, although it's almost never referred to as such. It passes the original signal through (via the jacks), and sends a buffered signal to the mixer via the XLR connector. The example shown is intended for use with phantom power (48V only). The original project included provision for battery supply as well. While this circuit works as intended, the lack of galvanic isolation means that you can get into trouble with earth/ ground loops in some cases. Mostly it will be fine, but there will always be situations where there is a voltage difference between the stage equipment and the mixer. This is less likely in a studio, but it can still happen.
There's another potential problem as well - you could use a TL072, which has a low supply current, but it is not especially quiet. The OPA2134 shown is a low noise opamp, but it draws far more current than the TL072. Fortunately, it also operates happily with a supply voltage as low as ±2.5V, so it will function from the P48 supply despite the low current available. This kind of trade-off is essential when you have a limited current available. However, with a low supply voltage the dynamic range is limited.
A similar arrangement can be used with a transformer, but if full galvanic isolation is needed then you have to use either an external supply or batteries. When phantom power is used the shield is the DC return, and it can't be disconnected with a 'ground lift' switch. The phantom power feed resistors are also in circuit, so there are several connections that can't be disabled and still allow phantom power.
When a transformer is used, it should be fitted with a Faraday shield between the primary and secondary windings, as shown in Figure 3. This helps to minimise inter-winding capacitive coupling which can otherwise couple noise between the windings. This is particularly important when long leads are used, or when the mic signal is split to create many separate sends.
When the splitter is used with microphones, there are added complications. The signal level can vary from less than a millivolt up to as much as 1V (RMS) depending on the signal source. To get 1V you need an SPL of a bit over 148dB with a 'typical' microphone, but this is comparatively easy to achieve if the mic is placed close to the cone of a guitar amp speaker (around 100mm or less is very common) or when a loud singer insists on trying to swallow the mic (also very common). This is a vast dynamic range, and is normally handled in the mixer by including a preset gain control and/ or a switchable attenuator pad (usually 20dB).
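To put a number on the SPL figure quoted: the sketch below assumes a 'typical' dynamic mic sensitivity of 2mV/Pa (-54dBV/Pa) - an assumed value, not one stated in the text - and uses the fact that 94dB SPL corresponds to 1 Pascal.

```python
import math

# SPL required for a given mic output voltage, assuming 2 mV/Pa
# (-54 dBV/Pa) sensitivity. 94 dB SPL corresponds to 1 Pascal.
SENS_V_PER_PA = 2e-3    # assumed 'typical' dynamic mic sensitivity

def spl_for_output(v_rms: float) -> float:
    pascals = v_rms / SENS_V_PER_PA
    return 94.0 + 20.0 * math.log10(pascals)

print(f"1 V RMS needs about {spl_for_output(1.0):.0f} dB SPL")
print(f"1 mV RMS is about {spl_for_output(1e-3):.0f} dB SPL")
```

With that assumed sensitivity, 1V RMS requires 148dB SPL, which matches the figure in the text; a millivolt-level signal corresponds to normal speech at a modest distance.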
A splitter shouldn't have gain controls as found on a mixer, but it has to be able to handle the full dynamic range without distortion or noise. It is often advantageous to provide gain, and it has to be high enough to keep the signal above the noise floor, but not so high as to risk distortion from overload. The maximum (adjustable) gain is therefore around 100 (40dB), but switched in 10dB increments (0, 10, 20, 30 & 40dB), which allows enough headroom for all but the highest level input signals (above 1V RMS or 0dBV). Where very high levels are provided, a 20dB pad is also useful. There will be some added noise, and this is unavoidable in any active system. Low noise circuitry is essential. Some commercial active splitters include switchable gain/ attenuation, and this is better than having a fixed gain.
You can use a transformer to provide galvanic isolation for one or more outputs, while using phantom power from the 'master' mixer. This would be useful for a live recording or broadcast with separate mixers for each separate system. A system such as that shown below would be used for public speeches or political debates, where multiple news services all require their own feed from the main microphone(s) to ensure the best transmission quality.
Ideally, a mic splitter will accept the signal from the source microphone, and distribute it to each send in such a way that a fault on any one cable will affect only that signal send. The others will continue to function normally. This way, the fault is easier to isolate (since it affects only one output), and in the case of an outside broadcast (OB) or similar situation, a common mic can continue to supply signal to a local PA system and one or more OB vans that either record the signal or send it straight to air (radio or TV).
Figure 5 - Active Multi-Output Transformer-Based Splitter
Figure 5 shows the general scheme, but doesn't show the specifics of the gain, attenuation, peak detector or buffer stages. The peak detector is essential, because most of the time the person setting up the splitter won't know what gain is needed. Without the peak detector, the splitter's gain could be too high or too low and no-one would know until it got to the mixer(s). The PFL (pre-fade-listen) button allows an engineer to monitor the signal from each preamp, and while this helps ensure there is a valid signal, it can be difficult to detect clipping in a noisy environment.
Including the buffers has many advantages, because they ensure total isolation of each auxiliary send. However, it also means that a separate transformer is needed for each send, which can get expensive. The gain stage will ideally have switchable gain/ loss (attenuation), pre-listen facilities (so each signal can be monitored via headphones) and any other features desired. It's not difficult to include a detector circuit that will show if an auxiliary send is shorted or has very low impedance. I don't know of any professional system that includes this feature though.
The main send is the 'master', and is straight through as shown here. It can use one of the other sends of course, but then phantom power has to be included. A low noise preamp is needed, and ideally it will provide enough gain to ensure that all signals are well above the noise floor.
The splitter can isolate all the outputs if desired, and it's not difficult to include a sensing system that detects when phantom power is turned on at the master mixer, and turn on the P48V supply to the mic. If phantom power is supplied from any other mixer it should be ignored - only the master should be able to control the P48V system. Using transformers for each send means that if phantom power is applied when it shouldn't be, it will be ignored (or you could have a detector to trigger a warning light or even a siren if you wanted to).
A single attenuator and preamp is all that's needed for each channel of the splitter. Both have a comparatively high input impedance so the mic isn't loaded excessively, and although open-circuit noise (with no mic plugged in) is compromised, once the mic is connected the preamp in particular will be as quiet as any other mic pre. The circuit is not intended to bring the level up to +4dBu as may be used for 'line level' interconnections, but simply to provide enough gain (or attenuation) to ensure a clean signal to the auxiliary sends.
Note that each preamp shows 100k input resistors, and these are included to minimise switching transients when the attenuator is switched in or out. If the 20dB attenuator pad is not included, these resistors should be reduced to 10k.
Two possibilities are shown below. Project 66 is a proven design that's fairly low cost but provides excellent performance. For this application it needs some minor changes because in its normal form it can't be reduced to unity gain (0dB). The modified version shown can run at unity gain with no problems. Note that the power supply bypassing isn't shown here, but it is essential that it's used.
Both circuits require input stage protection against phantom power, and a suitable circuit is shown in Project 96. Both also require a resistor network between the points G1 and G2. These set the circuit gain, and the networks use a different resistor string for each circuit because of different internal component values. A pot can be used for continuous gain adjustment, but in this application switched gains ensure that the gain can be set with good repeatability.
For Figure 6, with all gain switches open, the gain is close enough to 0dB (it's actually about -0.2dB). The gain setting resistors are Rg1 to Rg4. Rg1 (2.2k) provides gain of 10.3dB, Rg2 (560 ohms) gives a gain of 19.8dB, Rg3 (150 ohms) gives 29.6dB and Rg4 (27 ohms) gives a gain of 39.8dB. These gain values aren't exact, but it doesn't matter because the signal from any microphone varies widely in normal use.
Figure 6 - Mic Preamp Based On Project 66
The second option is to use the INA217 (or the higher cost INA103 which is a little quieter, but needs different gain setting resistors). This is a straightforward circuit, and it looks very simple compared to the modified P66. However, because of the cost of the ICs it will almost certainly cost more (and I don't have PCBs available for the INA217 mic preamp). While it might not look like it, Figures 6 and 7 are functionally almost identical - the internal circuitry of the INA217 performs in exactly the same way as the Project 66 circuit.
Figure 7 - Mic Preamp Based On INA217
In this version, when all switches are open, the gain is 0dB, Rg1 (4.7k) gives 9.8dB, Rg2 (1k1) gives a gain of 20dB, Rg3 (330 ohms) gives 30dB and Rg4 (100 ohms) gives 40dB. There are quite a few instrumentation amplifiers similar to the INA217, and many of them require different values for setting the gain. If you use something different, you may need to re-calculate the gain-setting resistors, and you'll need to see the datasheet to determine the values needed.
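These gain steps can be checked against the INA217's published gain equation, G = 1 + 10k/Rg (from the datasheet). A quick calculation with the Figure 7 resistor values:

```python
import math

# Gain of the INA217 for each switched resistor in Figure 7,
# from the datasheet equation G = 1 + 10k/Rg.
def gain_db(rg_ohms: float) -> float:
    gain = 1.0 + 10_000.0 / rg_ohms
    return 20.0 * math.log10(gain)

for rg in (4700, 1100, 330, 100):
    print(f"Rg = {rg:>4} ohms -> {gain_db(rg):.1f} dB")
```

The results land within about 0.1dB of the nominal 10/20/30/40dB steps, confirming the observation above that exact values don't matter for mic signals.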
Adding an attenuator is a problem with these circuits, because it needs to have a high impedance to prevent loading the source and affecting the signal to the main mixer. Unfortunately, a high impedance attenuator will add noise, but one way of including an attenuator is shown below. It is a comparatively high impedance, but the signal level will be high too, so that may not be an issue. However, it will cause a problem if phantom power is used, because the attenuators would drain the available current, so C1 and C2 are included to prevent that from happening. Ideally, these would be bipolar capacitors, because there may be times when the voltage across them is reversed. However, this is unlikely to ever exceed around 100mV, and polarised caps can withstand that without failure.
The attenuator networks connect to the 100k input resistors of both mic preamps shown above. The four diodes (D1 ... D4) and associated resistors (R5 and R6) protect the preamp inputs from switching transients created when phantom power is turned on or off or when leads are plugged in or removed. This network is required in all cases, or the preamp will be destroyed!
Figure 8 - Switched Attenuator
Most of the ideas shown here become expensive because of the number of circuits, connectors, switches and (in particular) transformers they require. A full system may require 24 or more channels, so unless you have deep pockets, building such a system will be painful. It goes without saying that buying an equivalent system will be even more expensive, so if you plan to put a reasonable PA system together, DIY might be worthwhile.
The benefits of building gear yourself are well known to anyone into DIY. In particular, you can build the system to do exactly what you need, rather than having to accept a commercial system that may be lacking a particular feature that you require, or may have more features than are necessary for your particular application.
With any system that's out of sight of the sound engineer, it's necessary to provide independent clipping indicators. It is too easy for the gain to be set too high so that the remote splitter unit clips, especially if the gain is reasonably high. During the sound check, someone needs to verify that no clipping is evident on any channel. Because it's accepted that the occasional transient may cause clipping, the detector should be set for a lower voltage than might normally be used, so that the clip LED can come on every so often, but the signal will remain free of distortion.
The clipping detector needs to have a fast attack so it can pick up brief transients, and a slow release to ensure that the LED is on for long enough for the operator to see it. It can also be useful to have a 'signal' LED, that comes on to indicate that there is a live signal at the preamp. The threshold is generally arbitrary, but around -40dBV (about 10mV after the preamp) is a fairly sensible level.
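The fast-attack/ slow-release behaviour can be sketched as a simple digital model. The real detector is analogue; the sample rate and time constants below are illustrative assumptions, not values from the circuit.

```python
import math

def envelope(samples, fs=48_000, attack_ms=0.1, release_ms=500.0):
    """Track the peak of |x| with separate attack and release time constants."""
    a_att = math.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = math.exp(-1.0 / (fs * release_ms / 1000.0))
    env, out = 0.0, []
    for x in samples:
        x = abs(x)
        coeff = a_att if x > env else a_rel   # fast attack, slow release
        env = coeff * env + (1.0 - coeff) * x
        out.append(env)
    return out

# A 1ms 5V transient followed by a second of silence: the envelope
# snaps up almost immediately, then decays slowly enough for an LED
# driven from it to remain visible.
sig = [5.0] * 48 + [0.0] * 48_000
env = envelope(sig)
print(round(max(env), 2), round(env[-1], 3))
```

With a 0.5s release, the envelope has only fallen to about 0.68V a full second after the transient - exactly the behaviour needed for the operator to catch a brief overload.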
Figure 9 - Signal Present & Clipping Detectors
The detector input comes from the mic preamp. The opamp should be a TL072 or similar, which ensures a very high input impedance and low DC offset. The circuit shown isn't intended to be a precision detector, but it will reliably pick up the fact that a signal is present, and will indicate clipping if the peak input signal exceeds 4.7V (+13dBV peak). The peak detection threshold can be reduced by reducing the value of R7, which as shown sets the voltage at pin 6 to 4.68V DC. The detectors shown are half-wave and only work on the positive peaks. Full wave detectors can be used, but this adds cost and complexity, and in this application it is unlikely to be of major benefit.
It might be tempting to include compression if the signal exceeds the peak threshold, but this is uncommon and usually a bad idea. Excessive compression is already common, and adding a compressor that is independent of the main mix is not recommended. It's also rather difficult to arrange compression so that it includes the input preamp, and this is where clipping is most likely. It can be done using microprocessor control of course, but that will only add complexity and is one more thing to go wrong.
For some applications, it might be acceptable to use a transformerless balanced output. One place where this may be fine in practice is where the splitter has a send to a foldback mixer, which will usually be on stage and close to the splitters. They may even be in the same rack enclosure, so earth loops are unlikely if the system is set up properly. A suitable balanced output is shown below, and it's based on the circuit shown in Project 87. The main difference is the addition of output capacitors and the zener diodes. These are intended to protect the circuit from the accidental application of 48V phantom power from the connected mixer.
Active balanced outputs do not have galvanic isolation (so there is an ohmic path between all connected equipment), and this can cause havoc with earth/ ground loops. The capacitors might remove the resistive component, but they do nothing to remove the 'implied' earth that's created by all active systems. Each signal output is referenced to the ground bus of the source equipment, and a transformer is the only way they can be truly isolated. There are optical solutions, but they have comparatively poor performance compared with a transformer.
Figure 10 - Transformerless Balanced Output
Using a buffer can make a fairly ordinary transformer behave itself well enough to be usable over the full audio range. It is even possible (but not recommended) to use a negative impedance buffer that effectively counteracts the winding resistance of the transformer. This allows operation to lower frequencies and higher levels than would otherwise be possible, because a transformer driven from a zero ohm source generates zero distortion. See Transformers For Small Signal Audio for a complete description.
However, it's preferable to use a decent transformer to begin with, because it will create fewer problems and just makes your life that bit easier (albeit more expensive). Because transformers have a comparatively limited bandwidth, they also eliminate (or reduce dramatically) any RF interference that may be present. This can be an intractable problem if there isn't complete isolation between the interconnected systems.
Figure 11 - Buffered Transformer Balanced Output
Assuming the use of a normal buffer to drive the transformer, you can use any competent opamp you like. NE5534/2, OPA2134 or LM4562 are all suitable, as are many others. Because of the very low DC resistance of a good transformer, DC coupling is not recommended, and the transformer should always be coupled via a capacitor. As noted in the reference above, the capacitor needs to be larger than you might think, and the final arrangement must be tested thoroughly over the full audio bandwidth to check for anomalies in frequency response, distortion and/or stability.
Resistor R1 is included in both of these circuits so each can be operated and tested in its own right, but if you were to need (say) 24 separate sends you'd be better off using FET input opamps and you can then increase the value of R1 to 1Meg. This reduces the load on the mic preamp, although even as shown (100k) it's not a difficult load to drive. The earth lift arrangement shown may not require the parallel resistor and capacitor, but this is something that must be tested with the transformers you use. As noted earlier, there are no rules here, but around 1k and 100nF should work well in most cases.
In many cases, splitters will be set up so that outputs can be combined, rather than using a dedicated '1-in-many out' configuration. This provides the maximum flexibility, but of course you may have lots of input gain stages that aren't used most (or all) of the time. A common arrangement is to include 'link' push-buttons that allow an input circuit to feed the next set of output sends. If all the link buttons are used, only the first preamp is active, with its output sent to all outputs. These may be on the front panel or rear panel (or both).
PFL (Pre-Fade Listen) can also be useful, so that inputs can be monitored with a pair of headphones to ensure that there really is a signal present, and not just noise picked up by a faulty lead or other source. The 'signal' LED shown above tells you that there's a signal present, but cannot differentiate between usable speech or music and noise. This requires a human to listen to the input(s). PFL has not been shown on the circuits above, except for Figure 5. The headphone amp can be a small power amp IC (such as an LM386) or a buffered opamp.
It should be readily apparent that ESP is not about to try to develop a project along these lines - this article is intended to look at options, problems and solutions, not to provide a complete system. Mic splitters and stage/ recording mixers are large and expensive projects. Project 30 has been available for many years, and a few hardy souls have made use of the info to build systems of varying size and complexity.
If you want to add remote control, you'll probably use relays to switch the gain, and to switch attenuators in and out as needed. This means that you have to use an existing remote control protocol or devise your own. Unless an addressing system is used, it will involve multiple cables. The only real solution to this is to use networked systems, where each splitter has an individual address and commands can be sent via a common signalling system to change the gain, activate or deactivate attenuators and/or phantom power, or even to switch individual sends on and off.
It doesn't take much thought to work out that such a system will become very complex, very quickly. Everything also needs a 'fail safe' setting, so that if communication is lost, the current settings are retained or fall back to a known (and hopefully usable) state. Commercial products exist that range from single transformer based splitters up to complex remote controlled multi-channel units.
While making the essential building blocks of a full-blown stage box with multiple sends is certainly possible, it's rather unlikely that anyone will be tempted to build their own, simply due to the cost involved. A stand-alone mixer is a comparatively undemanding piece of kit to build, but if your splitter is expected to be remote controlled, provide sends to a foldback mixer, FOH mixer, a live recording mixer or perhaps an outside broadcast van at the same time, it has to be bullet-proof. The essential principles are all described here, and the end result will likely be similar to many commercial offerings. Whether there is likely to be a cost saving is another matter entirely.
Yes, I could design a fully featured splitter that would satisfy most applications. No, I'm not about to do so.
Elliott Sound Products - Microphones II
Before anything else is discussed, it's very important to understand that all sound measurements ultimately depend on the location of the microphone in relation to the sound source. Nearby surfaces cause reflections, some surfaces are (at least partially) absorptive, and the relative distances have to be considered in respect of the wavelength(s) of the sound being measured. Moving the mic (or meter) position by just a few metres can change the measured result by anything from a fraction of a dB up to 10dB or more. The relative sizes of any boundaries also have an effect, depending on whether they are larger or smaller than a wavelength at any given frequency.
Anyone who has tried to measure the response of a loudspeaker will have seen serious anomalies in the region of 150-300Hz, where the distances between the mic, floor and ceiling cause reflections that show up as (usually) a huge response dip, which is accompanied by other peaks and dips at various frequencies where the relative distances are related to wavelength. These errors are not subtle, but as humans listening with our ears, the effect is greatly diminished - often to the point where we don't hear the response variations at all. Microphones are dumb, and they don't have our brain's processing power. This is why sound measurements and reality often don't coincide, unless extreme care is taken when the measurement is made. When measuring noise, A-Weighting only ever manages to make a bad situation worse.
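The floor-bounce dip is easy to predict with a mirror-image model: the reflection behaves as if it came from an image source below the floor, and the first cancellation occurs where the path difference between direct and reflected sound equals half a wavelength. The heights and spacing below are illustrative assumptions, not measurements from the text.

```python
import math

C = 343.0   # speed of sound in air, m/s

def floor_bounce_dip_hz(h_src: float, h_mic: float, distance: float) -> float:
    """First cancellation frequency for a source and mic above a floor."""
    direct = math.hypot(distance, h_src - h_mic)
    reflected = math.hypot(distance, h_src + h_mic)  # via image source below floor
    return C / (2.0 * (reflected - direct))          # path difference = lambda/2

# Speaker and mic both 1 m above the floor, 2 m apart:
print(f"First dip near {floor_bounce_dip_hz(1.0, 1.0, 2.0):.0f} Hz")
```

For these (quite typical) dimensions the first dip lands at roughly 207Hz - squarely inside the 150-300Hz trouble region described above.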
On the basis of the above, it's somewhat surprising that the 'authorities' (whoever they may be) will insist on the use of a carefully calibrated microphone, but don't usually ask for a detailed drawing of the measurement position, including reflection and absorption coefficients and sizes of nearby surfaces. Be that as it may, it's expected that any measurement of SPL will be done using meters of a certain standard, and that they will have been calibrated before use. Needless to say, measurements will almost always be A-Weighted, despite the fact that the use of A-Weighting is almost never appropriate because it throws away everything that is likely to be really annoying.
None of the above is intended to imply that calibration is somehow 'unimportant' though. How the mic performs (with or without attached sound level meter) certainly matters, and both level and frequency response should be within predetermined limits to ensure that readings are as accurate as needed, and are reproducible. If two people take a reading from the same location using different meters, it is expected that they should get the same answer (with a small allowance of perhaps 1dB or so). The world of acoustics would be in a very sorry state if it weren't for the standards that exist to ensure that results are as accurate as can reasonably be expected. Other issues that may arise are then the responsibility of the person taking the measurement, not the equipment.
There are many different types of microphone, but in the world of measurement there are only two that comprise the overwhelming majority. The first, and the one with the longest history, is the externally polarised capacitor ('condenser') mic, which dates back to 1916. A DC voltage of up to 200V is used to polarise the capacitance between the diaphragm and back plate. When the diaphragm moves as it's exposed to air pressure variations (aka sound), the capacitance changes and an AC voltage is generated that's an electrical equivalent of the sound.
+ +The other popular mic is the 'pre-polarised' capacitor microphone, more commonly known as an electret. Instead of an external DC supply, a charge is permanently 'embedded' into an insulating material, and this is generally used as the back plate (often referred to as a 'back-electret'). Early electret mics used the diaphragm as the electret material, but this is rarely seen any more. Electret mics come in two versions as well, with consumer versions having an in-built FET impedance converter. Professional electret mics use an external impedance converter as part of the powered preamp, and the mic capsule is screwed onto the preamp.
+ + +The frequency response of a measurement microphone is not as simple as it may seem. The response is determined by the sound field for which the microphone is designed. There are three different ways that a measurement microphone may be calibrated, being free field, diffuse field, and pressure. A free field is defined as a space with no reflections (an anechoic chamber), with the source being measured at 0° incidence to the microphone (i.e. directly facing the diaphragm). A diffuse field is a reverberant acoustic field in which sound has an equal probability of coming from any direction. The diffuse field response for a random incidence microphone is the average of the microphone's response at varying angles of incidence. A pressure microphone is assumed to be flush mounted with the boundary of the acoustic chamber, and unlike the other two calibrations is not itself a part of the sound field.
+ +All microphones can be used in all sound fields, but each sound field registers a different response in the microphone. This is due to the geometry of the mic itself being a part of the sound field. In the case of pressure response, it is assumed that the mic is not 'immersed' in the sound field, so its geometry is not a factor. The different effects don't have any significant influence at frequencies where the size of the microphone's diaphragm is small compared to wavelength, but it becomes important at high frequencies (typically above 10kHz where the wavelength is less than 35mm). This is a topic that could easily occupy an entire article itself, but this is not that article.
+ +However, the smaller the microphone itself (and its housing) the better it will function at high frequencies. While 25mm (1") diameter measurement mics used to be the standard, most are now 12.7mm (1/2"), and some are available down to 6.35mm (1/4"). Because the smaller diaphragms have dimensions that are significantly smaller than the wavelength of sound (even at 20kHz), there is less disturbance of the sound field and HF performance is improved. When protective grilles are added, these also cause some interference, but of course they are necessary to prevent damage to the delicate diaphragm. We have to live with some limitations, and doubly so in a field measurement application where the mic has to be used in all kinds of environments. Not all of these are friendly!
+ +When taking any readings of SPL (sound pressure level), it's important to know that your meter and its microphone are accurate. Professional meters are either Class-1 (the best and most expensive) or Class-2. The latter are more affordable, but those you buy from your local electronics shop are usually neither - they don't have the required accuracy to be classified. They'll certainly give you an indication of the approximate level, but it could be out by a couple of dB and you wouldn't know. In many cases it doesn't matter, because as discussed (briefly) above, the location of the mic can change the response dramatically. There are also apps for smartphones, but most don't have any provision for calibration (even with an external microphone), so are best considered as toys.
+ + +While you can get any microphone or sound level meter (SLM) professionally calibrated, there's no guarantee that it will remain in calibration for any length of time, and in some cases the mic sensitivity may be temperature dependent, or affected by humidity. Rough handling can also affect the mic, and very high quality mics are likely to be rather delicate.
+ +It's important to understand that this article is a generalised view of microphone calibration, and is not intended to describe the different processes used in great detail. For example, mics can be calibrated using a pressure system or acoustic coupler (as described here) or 'free field' in an anechoic chamber. There is also calibration by 'reciprocity', where a reference microphone is used to provide an excitation signal (i.e. it acts as a sound source) for the mic under test. In this case, the diaphragms are close coupled, being separated by the smallest possible distance. In some cases, air is replaced by hydrogen as the coupling medium - especially for high frequency calibration. Another method uses an electrostatic field to excite the diaphragm directly (no sound is produced), but this only works with metallised diaphragms as used with capacitor ('condenser') mics (electret or externally polarised).
+ +This article concentrates on the use of a conventional air-filled acoustic coupler driven by a suitable small transducer (typically a miniature speaker), although the use of small pistons is also discussed. These calibrators are usually fixed frequency, although there are some that offer several frequencies.
+ +The standards that apply vary by country, but IEC 61672-1:2013 is recognised in most places. This defines a wide range of performance criteria that the SLM must meet. These criteria are technically complex and detailed and have tolerances for response at various frequencies. In the current IEC standard there are two levels of tolerance, and these are known as Class 1 and Class 2. The following table provides abridged data for the two classes ...
+ +Frequency | Class 1 (dB) | Class 2 (dB)
1kHz (Reference) | ± 1.1 | ± 1.4
16Hz | +2.5, -4.5 | +5.5, -∞
20Hz | ± 2.5 | ± 3.5
10kHz | +2.6, -3.6 | +5.6, -∞
16kHz | +3.5, -17 | +6, -∞
Depending on where the information comes from, you may find different results. The American National Standards Institute (ANSI) specifies sound level meters as three different Types - 0, 1 and 2. These are roughly equivalent to the Classes defined by the IEC (International Electrotechnical Commission), but there are some subtle differences, which means the classifications do not necessarily translate from the US to Europe and other countries (including the UK, Australia and New Zealand) that use IEC based standards. Another class exists - Class 0 - and these meters are generally considered laboratory grade and are not intended for field work.
+ +Regrettably, most countries mandate the use of an A-Weighting filter (See Project 17 for an example), which was originally intended for use only in quiet locations, but is now inappropriately used for everything. I've been railing against this insane approach for many years, because it completely negates the sound that travels the furthest and causes the most annoyance - bass! Nevertheless (and unfortunately) it exists, and there are a great many noise polluters who will fight tooth and nail to ensure it remains. Why? Because it lets them get away with far more low frequency noise than is good for people's health and wellbeing (something they universally deny outright).
+ +There are other weighting filters used, with C-Weighting being common on all meters that are aimed at professionals. Pity that most don't actually use it. C-Weighting allows for some rolloff below 100Hz and above 10kHz, with a typical response that's roughly 6dB down at 20Hz and 10dB down at 20kHz. Z-Weighting is intended to be flat from 20Hz to 20kHz. There are some others as well, but if you want to know more, I suggest a web search.
+ +The response of all meters with A, C and Z-Weighting is the same at 1kHz, and the most common calibration tone is 1kHz (±0.2%, < 1% THD - total harmonic distortion). The level is usually 1Pa (1 Pascal) which equates to 94dB SPL. Some calibrators also offer higher SPLs, with 114dB being fairly common (10Pa). Laboratory calibrators can generally test over the full frequency range and at various levels in addition to the standard 94dB, and are used to calibrate lab grade microphones which are then in turn used to verify that a calibrator is accurate.
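The Pascal-to-SPL relationship quoted above is a one-line calculation, and it can be verified with a short sketch in Python (using the standard 20µPa reference pressure for 0dB SPL):

```python
import math

P_REF = 20e-6  # reference pressure, 20 µPa (0 dB SPL)

def pa_to_spl(pressure_pa):
    """Convert an RMS pressure in Pascals to dB SPL."""
    return 20 * math.log10(pressure_pa / P_REF)

# 1 Pa is the standard 94 dB calibration level; 10 Pa gives 114 dB
print(round(pa_to_spl(1.0), 1))   # 94.0
print(round(pa_to_spl(10.0), 1))  # 114.0
```

(Strictly, 1Pa works out to 93.98dB SPL, which is universally rounded to 94dB.)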
+ +This is all very convoluted, and it can be extremely difficult (and expensive) to get a calibrator properly calibrated unless you are willing to pay for a lab to perform the work for you. Early calibrators (in particular from Brüel & Kjær in Denmark), used what is called a 'pistonphone', where the reference SPL is generated from a carefully calibrated pair of pistons driven from a motorised cam. Because the displacement of the pistons and the volume of the measurement chamber are tightly controlled, the reference SPL can be calculated (with a barometric offset - and yes, the barometer is supplied in the kit). Pistonphones generally produce a 250Hz signal because it's not possible to obtain perfectly predictable displacement at higher speeds. Several manufacturers now make pistonphones, but they are expensive (even second hand). Because the frequency is 250Hz instead of 1kHz, a correction has to be applied for meters with A-Weighting (the signal will show an SPL of 85.4dB - 8.6dB lower than at 1kHz, as per IEC 61672-1:2013).
+ +Meters also use two different time weightings - 'F' (formerly known as 'fast', with a 125ms integration time) and 'S' ('slow', 1 second integration time). These are also defined in the appropriate standards, and set the rise-time of a reading. Fast response is needed to see the peak level of transients, while slow response is preferred when the background noise is steady. 'I' weighting (Impulse, 35ms) used to be common, but it's no longer defined in the standards and is not used. It may be available on some old SLMs, but isn't provided on any of the new ones.
+ +It's generally accepted that the measurement should be true RMS, although it's usually difficult to find out for certain if this is the case. Certainly, budget meters will be average-reading but 'RMS' calibrated, and this means that the reading will only be accurate when monitoring a sinewave. A sinewave is exactly what a calibrator provides, but most real noise sources present a complex waveform, and the error can be substantial.
+ +There are many other facilities included in professional SLMs, such as long-term average SPL - LEQ, the sound pressure level in dB, equivalent to the total sound energy over a given period of time. It's accepted practice to include the frequency weighting as well, so LAEQ is the long term average, A-Weighted. Others include LAT - the equivalent steady level over a given period of time that contains the same amount of noise energy as the measured fluctuating sound level. Meters may also include band filters (typically octave or 1/3 octave).
You will also see terms such as LA10, the noise level exceeded for 10% of the measurement period (A-weighted, calculated by statistical analysis), and/or LA90, the noise level exceeded for 90% of the measurement period (again A-weighted and determined by statistical analysis). This is a complex area, and the meter needs to be set appropriately for the measurement conditions. + +
As noted above, while meter accuracy is obviously important, many people fail to understand that the position selected for the measurement can make a difference of 10dB or more either way, depending on the surroundings. A measurement taken from in front of a large wall (such as the side of a building) can give very different results depending on the distance from the surface and any openings therein. Unless the terrain information and measurement position is provided, the measurement is virtually useless, and the most accurate SLM in the world won't help one bit.
+ + +To be useful, a calibrator needs to meet several criteria, with frequency and level being especially important. A small variation of level due to temperature changes might be tolerable, but only if it's less than perhaps 0.2dB over the 'normal' temperature range of between 0°C and 40°C. Likewise, the frequency needs to be stable as well over the same range, and it shouldn't vary by more than ±0.2% (±2Hz). The distortion requirement isn't difficult to achieve, as it only needs to be less than 1%.
+ +There are many 'low cost' calibrators available on-line (around AU$150 or so at the time of writing), but they may not be especially accurate as supplied. I have modified quite a few for clients because the speaker's back enclosure was not sealed properly and the speaker could also move slightly, which caused the output to change when the calibrator was changed from horizontal to vertical (or vice versa). With most, the levels weren't right either, so they required a second trimpot so that the level could be independently adjusted for 114dB and 94dB. Unfortunately, some of these also show some level variance with temperature, so they are not at all useful for field work where wide temperature changes can be common. After modifications, they will probably scrape through in terms of specifications for Class 2, but they won't satisfy the criteria for Class 1 calibrators because the level changes too much.
+ +It's surprisingly difficult to make a sinewave oscillator that remains stable with temperature, largely because of the system needed to keep the output level constant. This apparent contradiction is created by the stabilisation system itself. To minimise distortion, the gain must be dynamically varied so that the waveform doesn't clip (distort). If other factors affect the loop gain of the oscillator (such as thermal effects on capacitors or opamps), the stabilisation network will compensate, but the final output can vary by ±0.5dB or more.
+ +Common stabilisation schemes are to use a small lamp (the #327 lamp is commonly suggested - 28V at 40mA), or some form of electronic stabilisation. Electronic methods will use diodes to get a DC feedback signal (to control a JFET for example), so there's already a -2.2mV/°C change (the temperature coefficient of a 'typical' silicon diode) that has to be accounted for. The diode voltage change may not seem like much, but at a voltage of 1V and a temperature range of perhaps ±25°C, the diode alone represents a total error of a little over 0.5dB.
+ +The above doesn't include any other parameters that may change in other components, such as resistors and capacitors. There will also be changes in the bias current of opamps, and their saturation voltage changes with temperature. Making up a sinewave oscillator may not seem like such a big deal - especially when resources like the ESP article on Sinewave Oscillators - Characteristics, Topologies and Examples are available. Unfortunately, getting a stable sinewave is difficult, particularly when it will be subjected to relatively harsh treatment in the field and will have to perform over a much wider range of temperatures and supply voltages than any piece of normal test equipment. A field calibrator also has to run from batteries, and the varying supply voltage as the battery discharges has to be considered.
+ +More than acceptable frequency stability is assured by using a crystal, and then it only requires a digital divider to obtain the 1kHz needed, plus a means of converting the output squarewave into a reasonable sinewave. This means filters, and they can be affected by temperature as well - largely due to the temperature coefficient of the capacitors used. For this reason, it is essential that plastic film (polypropylene (-200ppm/°C) or polyester/PET (+400ppm/°C)) capacitors are used. Use of high-K ceramic caps (very common in SMD styles) is not acceptable, because they have a very high thermal coefficient as well as a significant voltage coefficient. That means the capacitance varies widely depending on the instantaneous voltage present, so distortion can be high as well as having very poor thermal characteristics.
+ +The electromechanical part (the miniature speaker or other transducer) also needs to have stable performance over the normal temperature range. The thermal coefficient of resistance of copper is +0.00386/°C *, so a (measured) 8 ohm voicecoil at 25°C will be 7.228 ohms at 0°C and 8.772 ohms at 50°C. This is a rather large change, and probably came as a surprise. If driven from a constant voltage, the power change is a little over 0.84dB over a 50°C range (±0.42dB referred to 25°C). This effect can be mitigated by feeding the voicecoil from a higher than normal impedance so the resistance change doesn't cause such a significant error. If an 8.2 ohm series resistor is used (for example), the total variation is reduced to below ±0.01dB. The optimum output impedance value for the driving amplifier is equal to the voicecoil resistance.
+ +* Note that the temperature coefficient of resistance of copper is somewhat variable, depending on the reference used. The figure shown is typical of published values. + +
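The voicecoil figures above are easily verified. This sketch (Python) uses the typical copper tempco quoted, and shows why a series resistor roughly equal to the voicecoil resistance makes power delivery almost insensitive to the resistance change - the power transfer curve is flat at its maximum when the two resistances are equal:

```python
import math

TCR_CU = 0.00386   # temperature coefficient of resistance of copper, per °C (typical)
R25 = 8.0          # voicecoil resistance measured at 25°C, ohms

def r_at(t_c):
    """Voicecoil resistance at temperature t_c (°C), linear approximation."""
    return R25 * (1 + TCR_CU * (t_c - 25.0))

def power_db_span(r_series):
    """Total variation (dB) of voicecoil power over 0..50°C, constant drive voltage."""
    powers = [r_at(t) / (r_at(t) + r_series) ** 2 for t in (0.0, 25.0, 50.0)]
    return 10 * math.log10(max(powers) / min(powers))

print(round(r_at(0), 3))              # 7.228 ohms at 0°C
print(round(r_at(50), 3))             # 8.772 ohms at 50°C
print(round(power_db_span(0.0), 2))   # ~0.84 dB span when driven directly
print(round(power_db_span(8.2), 3))   # ~0.017 dB span (about ±0.008 dB) with 8.2 ohms in series
```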
Unless compensated, the resistance change can be a significant source of error over the temperature range typically encountered. Other transducers (piezoelectric for example) are even less stable, so should not be used unless they have undergone extensive testing to ensure thermal stability. It's quite obvious that a calibrator using a copper voicecoil in the speaker cannot be expected to give a consistent result over a wide temperature range unless it's fed from the correct impedance.
+ +Even the atmospheric conditions (temperature, humidity and barometric pressure) make a difference to the measured SPL, and it is vitally important to ensure that any mic inserted into the calibrator doesn't change the internal volume. I was able to find the formula for calculating the change of SPL caused only by the change of volume of the measuring chamber. Unfortunately, it used dynes/cm², an old and outdated measure of pressure (the Pascal is now the standard), and this information is extraordinarily difficult to find anywhere - I must confess that finding the formula at all can probably be put down to pure luck. I've converted the formula to suit current standards ...
+ ++ P = γ × Po × ΔV / V Pa ++ +Where + +
+ γ = (gamma) the ratio of specific heats for the gas in the enclosure. For air at 20°C and at 1 atmosphere, γ = 1.402+ +
+ Po = atmospheric pressure = 101.325 kPa
+ ΔV = the change in volume of the chamber
+ V = the reference volume
+ Reference SPL (0dB SPL) is 20µPa - we'll call this Pref +
An excellent example is described in the references [ 4 ], where the displacement of a Brüel & Kjær (B&K) pistonphone is explained in detail. The internal volume is 19ml, and the pistons change this by 6.28µl. If you want to see the process used to determine the piston displacement, please see the referenced document, as the displacement calculations are not shown here. Applying the formula shown above shows that the volume change created by the pistons causes the peak SPL to be 127dB, so the RMS value is 3dB less. First, we calculate the pressure variation ...
+ ++ P = γ × Po × ΔV / V+ +
+ P = 1.402 × 101.325k × ( 6.28µ / 19m )
+ P = 46.954 Pa +
Calculating the SPL ...
+ ++ SPL (peak) = 20 × log ( P / Pref )+ +
+ SPL (peak) = 20 × log ( 46.954 / 20µ )
+ SPL (peak) = 127.413 dB
+ SPL (RMS) = 127.413 - 3 = 124.4 dB SPL +
Note that the calculated 0.4dB variation is within the specification for a Class 1 instrument (±1.1dB at 1kHz). As should be apparent by now, none of this is trivial, and even seemingly insignificant changes to the reference volume (because a mic goes too far or not far enough into the chamber for example) will affect the accuracy of the calibration. I leave it as an exercise for the reader to calculate the effect of changing the reference volume by ±1ml for example. Naturally, if the chamber is made smaller, the effect is magnified - and vice versa.
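The worked example above can be checked in a few lines. This sketch (Python) wraps the formula in a function, which also makes the suggested reader exercise (varying the reference volume) a one-line change:

```python
import math

GAMMA = 1.402     # ratio of specific heats for air at 20°C, 1 atmosphere
P0 = 101.325e3    # atmospheric pressure, Pa
P_REF = 20e-6     # 0 dB SPL reference pressure, Pa

def pistonphone_spl(delta_v, v):
    """RMS SPL produced by a peak volume change delta_v in a chamber of volume v (m³)."""
    p_peak = GAMMA * P0 * delta_v / v                     # peak pressure, Pa
    return 20 * math.log10((p_peak / math.sqrt(2)) / P_REF)

# B&K pistonphone: 19 ml chamber, 6.28 µl peak piston displacement
print(round(pistonphone_spl(6.28e-9, 19e-6), 1))   # 124.4 dB SPL
```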
+ +However, the chamber's dimensions must remain small compared to the wavelength of the calibration tone, or standing waves may create gross errors. A larger chamber also needs a greater displacement from the transducer - this is all a careful balancing act. The wavelength of a 1kHz tone in air is about 345mm, and ideally all dimensions will be smaller than 1/4 wavelength (86mm). When B&K designed the original pistonphone, the selection of 19ml for the mic chamber was almost certainly the result of some serious calculations and experiment, and it should come as no surprise that many microphone calibrators use a similar volume to this day. There are variations of course, but in general I expect that based on those I've seen, few will be much less than around 10ml. Note too that most calibrators have a small vent in the main (mic) chamber so that the delicate diaphragm isn't damaged by over (or under) pressure as the mic is inserted and removed. The vent has to be small enough to ensure that it doesn't affect the pressure response at the test frequency.
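The quarter-wavelength limit mentioned above is a trivial calculation, shown here for completeness (345m/s is the approximate speed of sound quoted in the text):

```python
C_AIR = 345.0  # speed of sound in air, m/s (approximate, room temperature)

def quarter_wavelength_mm(f_hz):
    """Quarter wavelength in millimetres at frequency f_hz."""
    return C_AIR / f_hz / 4 * 1000

print(round(quarter_wavelength_mm(1000)))  # ~86 mm - upper bound on chamber dimensions at 1 kHz
```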
+ +
Figure 1 - Calibration Chamber Example
The drawing shows the essential mechanical (hardware) parts of a calibrator. There are many possibilities for the transducer, including dynamic microphone capsules operated in reverse, small speakers, headset/ headphone drivers, etc. Most will be no more than about 40mm diameter. The outer casing can be steel, aluminium, plastic, or a combination of materials. The volume of the two chambers can vary somewhat without affecting performance, because calibration will set the SPL to the correct value. As noted earlier, if the front chamber is too small, the calibration level becomes much more dependent upon very consistent microphone insertion depth. If the mic goes in too far, it reduces the size of the chamber and increases the apparent SPL, and vice versa.
+ +There's a small lip at the end of the hole where the mic is inserted, and the microphone must always be pressed in (gently of course) until it can't go any further. This ensures that the effective chamber size remains the same for each mic you use. It doesn't always work out that way though, because some microphones have a protective grille that distances the diaphragm from the end of the receptacle, and others may penetrate the chamber due to their geometry. If the chamber is large enough, these small variations will only have a minor effect. All calibrators make use of an O-ring to seal the mic and provide a reasonably firm attachment so the mic doesn't move during calibration. + +
The electronics consist of a battery powered oscillator, with very (hopefully) stable output level and frequency. Some may use more than one level (94dB and 114dB SPL for example), or have several available frequencies. In multi-frequency calibrators, there may be a separate adjustment for each frequency, because the transducer is unlikely to have flat frequency response. It's also necessary to include a battery voltage monitor that either stops the oscillator or turns off the power LED if the voltage is below the allowable minimum.
+ + +There are many suppliers of mic calibrators, ranging from top-of-the-line Class 1 units from major manufacturers, all the way down to budget versions available from well known on-line auction sites. However, even 'cheap' calibrators are fairly costly, and even more so if you discover that they don't perform well. Consistency is critical, and if you calibrate the same mic twice in a short period and get two different answers, then which one is right? The first? The second? I suggest that neither can be trusted, so either you aren't inserting the microphone properly each time (so you need to perfect the technique and understand that the insertion distance is usually critical) ... or the calibrator is rubbish.
+ +We can discount the option of reciprocity calibrators (where one mic drives another) because the equipment is very expensive, and the setup is critical and time consuming. Few of us can afford a dedicated anechoic chamber (free field) or even a reverberation chamber (diffuse field), so that leaves us with no choice but to use a pressure calibrator, generally at a single frequency with a reference level of 94dB SPL at 1kHz. Determining the frequency response is difficult and expensive, so mostly we rely on a calibration certificate from the manufacturer. Some are generic - the graph shown is typical of that type of mic, but more expensive mics will have an individual graph indicating the serial number of the mic and its tested response.
+ +For the vast majority of users, the only sensible option is a pressure calibrator, where the mic is inserted into a close-fitting receptacle (usually sealed with an o-ring). The required frequency and SPL are generated by an electronic oscillator driving a small moving coil transducer, which may be a miniature loudspeaker or even a dynamic microphone capsule used in 'reverse'. Pressure calibrators are the most common, and are the only ones that are suitable for field work because they are easy to use and compact enough to be carried to the site along with the other equipment.
+ +It's rather unfortunate that calibrators are expensive. Even 'cheap' ones are typically at least AU$150 and those from major measurement mic manufacturers have price tags that are quite scary ($400 to $1,000 or more). Having seen the insides of several (from 'cheap' to expensive), I can only assume that the price is based on the comparatively small number that are made and sold, and they don't quite manage to get much economy of scale. A significant part of the cost will always be the microphone adaptor(s) and the transducer + housing. These are difficult for most people to build because they require machining, which generally means that a lathe is necessary.
+ +This doesn't mean that you can't build a calibrator yourself of course, but the required machining makes it that much harder. You also have to calibrate it to a known standard, and that will almost certainly cause most people grief. It's notable that there are almost zero schematics available - most that come up in a search are not calibrators at all.
+ + +As noted earlier, the high frequency response is affected by the wavelength of sound, and as the diaphragm size starts to become significant compared to wavelength, the response will change. There is usually nothing you can do to improve matters, so for precision work it's essential to have a microphone calibration chart so corrections can be made as needed. Some high-end measurement microphones use TEDS (Transducer Electronic Data Sheet [ 5 ]), which can provide a compatible measuring instrument with details such as type, operation, and attributes of a transducer. This includes model, serial number, sensitivity, operational limits, and other information that is used to tell the measuring instrument what has been connected.
+ +The minimum frequency for any capacitor microphone is a combination of two main factors. The first is the size of the vent or bleeder. All omnidirectional microphones need a vent so that atmospheric pressure equalises on both sides of the diaphragm. Without any form of vent, a normal increase of atmospheric pressure would push the diaphragm in towards the backplate, and a decrease will pull it outwards. In the extreme, the diaphragm may be damaged, but in all cases the change of distance between the diaphragm and backplate will affect the mic's sensitivity. If atmospheric pressure forces the diaphragm closer to the backplate, the effective capacitance is increased and the sensitivity will be increased. The converse also applies of course. Venting is inherent in most directional mics, because the rear vent is relatively large and is part of the process of modifying the directivity.
+ +To circumvent this very real problem, microphones use a tiny vent so that the pressure can equalise, and good low frequency performance requires a very small vent so that the pressure equalisation time is long compared to the period of the lowest frequency of interest. For example, if you need to measure below 1Hz, the equalisation time needs to be at least 10 seconds to prevent premature rolloff.
+ +The other major contributor is the impedance presented by the preamplifier, whether it's included in the capsule or external. Consider that the capacitance may only be a few picofarads for a small mic, so the input impedance of the preamp has to be extremely high. For example, a 6mm diameter mic has a diaphragm area of about 28mm² (28µ m²). If the diaphragm is spaced 50µm from the backplate, the capacitance can be calculated (as an approximation, because there are several assumptions made) ...
+ ++ C = 8.85E-12 kA / t ... where C = capacitance (Farads), k = dielectric constant, A = area (m²) and t = dielectric thickness (m)+ +
+ C = 8.85E-12 × 1.5 × 28µ / 50µ = 7.4pF +
The dielectric constant is a guess, because it's partly the diaphragm material (typically Mylar) and partly air, but the calculated capacitance is not far from what I'd expect for a mic that size. The total resistive load on the capacitance can now be determined for the lowest frequency of interest. So, if we want the microphone to be able to measure down to 1Hz (-3dB frequency), the total resistance (including the FET's gate leakage) needed is ...
+ ++ R = 1 / ( 2π × C × f ) ... where C = capacitance (Farads) and f is the frequency in Hz+ +
+ R = 1 / ( 2π × 7.4p × 1 ) = 21.5G ohms (yes, that's over 21 gigaohms!) +
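Both calculations above can be wrapped up in a short sketch (Python). Note that using the unrounded capacitance gives about 21.4 gigohms rather than the 21.5 gigohms obtained from the rounded 7.4pF figure:

```python
import math

EPS0 = 8.85e-12   # permittivity of free space, F/m
K = 1.5           # assumed effective dielectric constant (Mylar + air mix, a guess as noted)
AREA = 28e-6      # diaphragm area, m² (6 mm diameter -> ~28 mm²)
GAP = 50e-6       # diaphragm-to-backplate spacing, m

def capsule_capacitance():
    """Parallel-plate approximation of the capsule capacitance, Farads."""
    return EPS0 * K * AREA / GAP

def load_for_cutoff(c, f3):
    """Total load resistance for a -3 dB low-frequency corner at f3 Hz."""
    return 1 / (2 * math.pi * c * f3)

c = capsule_capacitance()
print(round(c * 1e12, 1))                        # ~7.4 pF
print(round(load_for_cutoff(c, 1.0) / 1e9, 1))   # ~21.4 gigohms for a 1 Hz corner
```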
It goes without saying that if a lower frequency limit is needed, the resistance has to be even higher. I have some electret mics that have been specified for response down to 0.1Hz - and they have been tested and verified at that frequency. That means the 'load' resistor is probably well in excess of 200G ohms - that's not a resistor, it's an insulator. In many cases, the gate of the internal FET is simply connected to the metallisation on the diaphragm with no resistor at all, and the circuit relies on the minuscule surface leakage resistance of the diaphragm and its insulating support to bias the FET.
+ +Of course, many capacitor microphones are larger than the one calculated above, so have more capacitance and can tolerate a lower resistance without suffering loss of sensitivity at low frequencies. However, it's unrealistic to expect more than 40-50pF in any capacitor mic, and even that requires a comparatively large diaphragm area.
+ + +Noise in electronic circuits is a fact of life, and can't be eliminated. In most cases, it's necessary to ensure circuit impedances are as low as possible. A 200 ohm resistor generates 257nV over a 20kHz bandwidth and at 25°C - see Noise In Audio Amplifiers for a complete description and some worked examples. Using the above example of a 21G ohm resistor, we need to consider current noise rather than voltage noise. A 21G ohm resistor can be expected to generate a little over 125fA (femto amps) in a circuit, or a voltage noise of 2.64mV with a 20kHz bandwidth. For many mics, this noise voltage could easily be greater than the signal level.
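The quoted noise figures follow from the standard Johnson-Nyquist formulae, sketched here in Python (assuming 25°C; the 2.64mV quoted in the text is the same calculation with a slightly different temperature or rounding):

```python
import math

KB = 1.380649e-23   # Boltzmann constant, J/K
T = 298.0           # ~25°C in kelvin
BW = 20e3           # bandwidth, Hz

def voltage_noise(r):
    """Thermal (Johnson) voltage noise of resistance r: sqrt(4kTRB), volts RMS."""
    return math.sqrt(4 * KB * T * r * BW)

def current_noise(r):
    """Thermal current noise of resistance r: sqrt(4kTB/R), amps RMS."""
    return math.sqrt(4 * KB * T * BW / r)

print(round(voltage_noise(200) * 1e9))       # ~257 (nV) for 200 ohms
print(round(current_noise(21e9) * 1e15))     # ~125 (fA) for 21 gigohms
print(round(voltage_noise(21e9) * 1e3, 2))   # ~2.63 (mV) for 21 gigohms
```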
+ +Fortunately, the capacitance of the microphone forms a low pass filter for the noise voltage, effectively shorting it to ground. However, noise at the low frequency end is not attenuated by very much, and it's only reduced by 3dB at the low end corner frequency (1Hz for the example in the previous section). This always means that there is more noise at very low frequencies, and it's made worse by semiconductor flicker (1/f) noise. A microphone can be made less noisy by using a higher load resistance, and discarding the extreme low frequency part of the spectrum by tailoring the size of the bleeder vent. The microphone then performs down to the design frequency, but the capacitance is still able to reduce the noise by a useful margin.
+ +A microphone's self-noise (the noise generated by the mic and its associated preamp if it's a capacitor mic) requires a completely soundproof chamber to be measured, and it is almost always expressed in dBA (A-Weighted). For most mics it's expressed as the total noise that exists due to the mic alone, expressed as 'equivalent input noise'. This is the same as using an ideal noiseless microphone in a room with the same noise level. In general, if you need very low self noise you need a large diaphragm capacitor mic, as they are more likely to be capable of getting below around 10dBA, and with comparatively high sensitivity. This is based on 0dBA being the threshold of hearing, a sound pressure of 20µPa. Low impedance dynamic mics are quieter, but are also less sensitive so need more gain.
+ +In some cases, the mic specifications will provide the SNR (S/N ratio or signal to noise ratio). For example, a mic with a sensitivity of -44dB (referred to 1 Pascal) might quote a S/N ratio of 68dB. This means that its noise is 68dB below the reference level, so in this case the self noise is equivalent to 26dB SPL (94dB - 68dB). The use of an A-Weighting filter artificially improves the apparent S/N ratio by filtering out frequencies above 4kHz and below 1kHz. Depending on the frequencies involved, the use of an A-Weighting filter may provide an apparent 'improvement' of 10dB or more, so the figure can be rather misleading, especially if you need to use the mic to measure low frequencies at low levels.
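The self-noise arithmetic above is easily checked. A minimal sketch, using 94dB SPL as the 1 Pascal reference (the function names are illustrative, not from any library):

```python
def self_noise_spl(snr_db, ref_spl=94.0):
    """Equivalent self-noise in dB SPL from a quoted S/N ratio (re 1Pa = 94dB SPL)."""
    return ref_spl - snr_db

def mic_noise_voltage(sens_dbv, snr_db):
    """Output noise voltage from sensitivity (dBV re 1V at 1Pa) and S/N ratio (dB)."""
    return 10 ** ((sens_dbv - snr_db) / 20)

print(self_noise_spl(68))            # 26.0 dB SPL for the example above
print(mic_noise_voltage(-44, 68))    # ~2.5uV of noise at the mic output
```

A -44dBV mic with a 68dB S/N ratio therefore delivers roughly 2.5µV of noise, a useful figure when comparing it against preamp noise.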
Because they have no electronics, it may be thought that dynamic mics would be quieter than capacitor types, but that's not necessarily true. Ultimately, Brownian motion of air molecules will generate some noise, and as mentioned earlier a 200 ohm resistance has a 20kHz bandwidth noise level of 257nV - it makes no difference whether the resistance is from a metal film resistor or a copper winding. If the mic has a sensitivity of (say) 6mV/Pascal (-44dB), the noise contributed by a 200 ohm voicecoil or transformer winding is -87dB referred to 1 Pascal (unweighted). Equivalent input noise is therefore 7dB SPL (unweighted).

This may sound very good (and it is), but the small signal from a microphone always has to be amplified. A perfect (noiseless) mic preamp will have a wideband noise output of 257µV with a 200 ohm source and 60dB of gain. In reality, there is no such thing as a 'noiseless' preamplifier, and even the quietest will add a couple of dB of noise to that from the mic itself. An 'ideal' mic preamp (zero noise) has an equivalent input noise of -129.6dBu or -131.8dBV (20kHz bandwidth, 200 ohm source). A preamp with an input noise density of 1nV/√Hz has 141nV of input noise at the full 20kHz bandwidth, but it's actually higher than that, because the noise density is generally quoted at 1kHz, and it's worse at low frequencies.
We can add the noise voltages together to get total EIN (equivalent input noise), noting that random noise signals cannot simply be added algebraically ...

Total noise = √( Noise1² + Noise2² )
Total noise = √( 257nV² + 141nV² ) = 293nV (20kHz bandwidth)
If the preamp has a gain of 60dB and we refer the noise to 1V output, that gives us an EIN of -130dBV which is 1.8dB more noise than the ideal case. This figure is rarely found even in the best mic preamps, so if you expect to be able to record a signal at (say) 20dB SPL (200µPascals), you will be competing with the system's background noise. The only option is to use a mic with the highest possible sensitivity, and that will generally mean a large diaphragm capacitor microphone.
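The root-sum-of-squares addition and the resulting EIN figure can be verified with a few lines of Python (a sketch only; the helper names are mine):

```python
import math

def rms_sum(*noise_volts):
    """Uncorrelated noise sources add as the root of the sum of squares."""
    return math.sqrt(sum(v * v for v in noise_volts))

def noise_from_density(density_v_rthz, bandwidth_hz):
    """Total noise from a flat spot density (V/sqrt(Hz)) over a bandwidth."""
    return density_v_rthz * math.sqrt(bandwidth_hz)

source = 257e-9                              # 200 ohm source, 20kHz bandwidth
preamp = noise_from_density(1e-9, 20e3)      # 1nV/sqrt(Hz) preamp -> ~141nV
total = rms_sum(source, preamp)              # ~293nV
ein_dbv = 20 * math.log10(total)             # ~-130.7dBV

print(total, ein_dbv)
```

The result (~293nV, close to -130dBV) matches the worked example, about 1.8dB worse than the theoretical -131.8dBV limit.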
It's difficult to get a definitive answer on the noise created by Brownian motion of air molecules, but it appears to be in the order of -20 to -24dB SPL, which works out to be between 1.25µPa and 2µPa (note that these numbers vary depending on the source, but around -24dB seems to be a popular estimate). It's very doubtful that Brownian motion will ever limit the overall signal to noise ratio of any microphone.
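The conversion between sound pressure and dB SPL used here is straightforward; the only constant needed is the 20µPa reference (0dB SPL). A minimal sketch with illustrative function names:

```python
import math

P_REF = 20e-6  # Pa, threshold of hearing (0dB SPL)

def pascals_to_spl(p_pa):
    """Sound pressure in Pascals to dB SPL (re 20uPa)."""
    return 20 * math.log10(p_pa / P_REF)

def spl_to_pascals(spl_db):
    """dB SPL back to sound pressure in Pascals."""
    return P_REF * 10 ** (spl_db / 20)

print(pascals_to_spl(2e-6))    # -20.0 dB SPL
print(spl_to_pascals(-24))     # ~1.26e-06 Pa
```

This confirms the -20dB / -24dB SPL figures correspond to 2µPa and about 1.26µPa respectively.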
Although nearly all measurements are made using A-weighting, as regular readers will be aware I consider this to be a fool's errand at best. At worst, it can almost be considered a conspiracy, because it allows noise polluters to escape any form of punishment for generating low frequency noise at often intolerable levels. Even at 31.5Hz (well within our normal hearing range), the indicated SPL is reduced by 40dB, so it will barely register. If you listen to sound with 31Hz content, the low frequency content is clearly audible - even at a relatively low SPL. So much for 'international standards'.
Figure 2 - 'A', 'C' And 'Z' Weighting Curves Compared
The above graph shows the accepted weighting curves, with A-Weighting being by far the most common, and equally the least useful. C-Weighting is better, and Z-Weighting (linear from 10Hz to 20kHz ±1.5dB) is the best of all. Few meters can manage Z-Weighting, because getting flat response to 10Hz is difficult (as noted above in the 'Low Frequency Response' section).
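The A-weighting curve shown in the graph can be computed from the standard analogue pole formula defined in IEC 61672. The sketch below implements that published formula (the +2.0dB term normalises the response to 0dB at 1kHz); it reproduces the roughly -40dB attenuation at 31.5Hz mentioned earlier.

```python
import math

def a_weighting_db(f):
    """A-weighting relative response in dB (IEC 61672 analogue pole formula)."""
    f2 = f * f
    ra = (12194.0 ** 2 * f2 * f2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20 * math.log10(ra) + 2.0  # normalise to 0dB at 1kHz

print(round(a_weighting_db(1000), 2))   # 0.0 by definition
print(round(a_weighting_db(31.5), 1))   # about -39.5dB
```

At 31.5Hz the filter removes close to 40dB, which is exactly why an A-weighted meter cannot register deep bass that is plainly audible.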
Note that at 1kHz, all weightings provide the same sensitivity, so a calibrator only needs to be able to produce a 1kHz tone. While multi-frequency calibrators exist, they are used primarily in calibration laboratories, and are not generally suited to field usage.

In the interests of science, I conducted a basic test some time ago. I freely admit the test was rudimentary, but it is easily repeated by anyone who cares to do so. I have no doubt that the results will be similar, although they will probably be more accurate (I have a basic workshop, not an acoustics laboratory). The test was conducted in my workshop, with the radio playing through my normal system. This includes a subwoofer that can reproduce 30Hz quite easily. Using a sound level meter and a parametric equaliser, I was able to boost the very low bass quite easily. Bass was boosted below about 70Hz, and all other frequencies were unaffected. Average SPL was around 60dBA and 70dBC for these tests. This is roughly the level of normal speech at ~1 metre.

When the sound level meter was set to A-weighting (dBA), it registered no discernible increase in sound level when the low frequency range was boosted, even though the deep bass was clearly audibly increased! Setting the meter to C-weighting (close to flat response), a consistent 6dB increase of SPL was easily measured. Both the meter (when set to C-weighting) and my ears easily detected the low frequency boost, yet the meter indicated no change when set for A-weighting. Bear in mind that most music has little recorded bass below 40Hz, and it insists on changing as we listen, so a wideband pink noise source was also tested.

The noise level was adjusted until the meter indicated 60dBA, and when the low bass was increased by about 8dB (the range of my equaliser at these frequencies), no increase was shown on the meter. The increased bass was clearly audible, and I verified this by inviting my wife into the workshop to listen to the test. Initially, she thought the deep rumble came from outside (not sure what she thought may have made the noise), but several tests later it was easy to tell whether the equaliser was in or out of circuit. The difference between the normal (flat) condition and deep bass boost was consistently audible. The meter sat stoically showing a level of about 60dBA regardless of whether boost was applied or not. The deep rumble would be extremely annoying if it were present for any length of time.
Without changing any settings (or the meter placement), I switched to C-weighting. The meter then showed the average level as 68dB, and this increased to about 76dB when boost was applied. So the meter now registered that there was about 8dB more low bass energy, and it was clearly audible as before. Acoustic theory (suitably adjusted to give the desired results) tells us that we can't hear these frequencies well, courts and governments believe the theory, everyone insists on using A-weighting (dBA), and they are quite clearly wrong in any case that involves deep bass. I have complained bitterly about the stupidity of measuring all noises (regardless of SPL) in dBA, and this simple test has proved that my complaints are (and always were) justified.

It is remarkable that such a basic test can demonstrate quite clearly that A-weighting is a fundamentally useless way to quantify low frequency annoyance levels, and I urge anyone who is involved in any kind of acoustic testing to run this same test. It is even more remarkable that no-one involved in acoustics seems to have run tests and published their findings, because this is fundamental to our understanding of the perception of low frequency noise.
Note that this test has been performed by others, who have found exactly the same: low frequency noise is audible, regardless of whether the meter shows it or not!
Measurement microphone calibration is an essential step, but the difficulty will always be actually performing the calibration with a known standard. It's not an issue for organisations who specialise in noise (or sound) measurements, because it's simply part of the cost of doing business. As such, the calibration costs can be amortised across the business, with each client paying a small part of the cost. It's not so easy for hobbyists, because they have to bear the entire cost.
For general work measuring loudspeakers (for example), the absolute accuracy of the mic is immaterial. In 99% of cases it only needs to be able to make comparative tests, with frequency response being far more important than being able to measure SPL within ±0.5dB. As noted at the beginning, huge amplitude errors are common due to mic positioning, and most of these also affect the response. Few of us can afford the space or money for an anechoic chamber, so for the majority of us, speaker listening tests remain the 'gold standard'. Fortunately, most electret mics have remarkably flat response (at least across the frequency ranges needed for most tests/measurements), so the main unknown remains the accuracy of the SPL measured.

In professional acoustics, absolute accuracy is necessary, because without it noise level testing is meaningless. Of course, the use of A-weighting makes many measurements meaningless anyway, regardless of the accuracy of the microphone and measurement system. Be that as it may, if any measurement is to be made that has to survive legal scrutiny, the accuracy of the system is paramount. For this, calibration is essential, and will ideally be carried out before any measurement is taken. For long-term measurements (typically recording either the actual waveform or the measured results), calibration should be carried out both before and after the measurement, with the calibration results recorded along with the measurement data.
Elliott Sound Products - Microphones
Microphones are often poorly understood, and this article seeks to provide some basic details about the various types, how they work, and the interfacing of the microphone with a suitable preamplifier. While it would be tempting to explain mic techniques, proper usage, etc., these are not topics that will be covered. There are several reasons, but the main one is that there are so many possibilities that it is impossible to cover them all.
Instead, the focus will be on the microphone basics - how each type works, along with its advantages and disadvantages. The following brief summary is a warm-up for the real thing - and even though it looks like a lot to cover, there are (and will remain) several omissions. For example, carbon microphones will not be covered because they are no longer used in new equipment, and 'esoteric' microphones (such as the so-called shotgun mic) will not be explained in any detail.

With microphones, the terms Directional, Cardioid, Omni-directional (or just omni), Hyper (or Super) Cardioid, etc. refer to the polar response, but these terms are sometimes loosely applied. The directivity of all microphones is frequency dependent, and becomes spherical (omni-directional) as the frequency decreases. There are exceptions, and these will be looked at as we progress.

The microphone, abbreviated to 'mic' or 'mike', is an essential part of the process of getting our music from the performers to our listening rooms. Mics are also used for sound reinforcement, ensuring we can hear everything at a concert (and also often ensuring that we can hear very little for hours afterwards). Correct microphone selection and placement during recording minimises the amount of equalisation that is needed, because the sound is already the way the producer intends. The choice is enormous, as the brief summary below indicates.
This article is mainly focussed on performance mics, rather than those used for test and measurement. The latter are almost exclusively either 'true' capacitor mics or electret types. Almost all measurement mics are omnidirectional. Directional mics are not used because their response is unpredictable (especially for low frequencies) and SPL (sound pressure level) must include sound coming from all directions. Measurement mics are a complete topic unto themselves, and are only mentioned here in passing.
Although the number of different microphones looks daunting, they are all based on common parameters ... these are directional patterns and transducer types, and almost every microphone made is covered by the two listings below.

The directional characteristics of microphones are defined in the capsule (or capsules, in the case of dual capsule mics). Contrary to what some may claim, any type of microphone can be configured to have any of the listed configuration patterns. The directional characteristics are frequency dependent, and refer to the free field response - placing a microphone very close to any surface changes its directional characteristics, and they become unpredictable because of the almost infinite number of possibilities. Directional microphones are also called 'pressure gradient' mics, because their directional characteristics are created by means of varying pressure to the front and rear of the diaphragm (the pressure gradient).
In the drawings below, the mic position is shown by a dot.
Omni-directional ...
Pick up sound (more or less) equally from any direction. Omni-directional refers to the frequency response being essentially flat, regardless of the direction of the arriving sound waves. Omni mics can often give fewer feedback problems compared to most cardioids, but this is highly dependent on correct usage. Omni mics have minimal proximity effect, and are (generally) better suited to instruments. These mics are not commonly used for live production - partly because of limited understanding.

Measurement microphones are exclusively omnidirectional, with no significant exceptions that I could find. Sometimes they may be arranged in an array to obtain the required directional characteristics, but this is only common for infrasound measurements, as used for detecting volcanic activity or missile launches.
Cardioid ...
The most common directional pattern. These usually have a proximity effect that colours and enhances the bass end of vocals at close range. Different cardioid mics may suit male and female singers. Singers should own their microphones and be skilled in the techniques of using them, in the same way that musicians own their instruments. Cardioid mics are often misused for instruments, typically used in very close proximity to drum skins (among other misuses). Naturally, if this gives you a specific sound that you want, then it is no longer misuse.
Hyper-Cardioid ...
This is an exaggerated version of the cardioid mic, so it is more directional. A side-effect is that a small lobe is created at the rear of the microphone, so these mics must never be 'aimed' so that the rear lobe points towards a floor monitor (for example). Sometimes a distinction is made between 'super' and 'hyper' cardioid microphones, but other descriptions will consider them to be equivalent.
Figure-8 ...
The figure-8 mic picks up sound equally well from the front and back, but rejects sound coming from the sides (as well as top, bottom, etc.). The pattern can be looked at as an extreme form of hyper-cardioid, where the front and rear lobes are equal in amplitude and frequency response. Many dual element microphones combine an omni and figure-8 capsule to allow switchable directivity.
At the heart of every microphone is a transducer - simply a mechanism that converts one form of energy to another. The source (input) energy is sound, and the output is electricity. An electrical waveform is produced, which matches the acoustic input with as little modification as possible. All directional microphones must (by definition) alter the received sound to some extent. It is not possible to modify the directional characteristics without also altering the nature of the sound that is picked up. This is not necessarily 'bad', just different.
Likewise, many cartridge/capsule types (the actual transducer) have their own sound, whether real or imagined. This often influences the choice of microphone type for different tasks - for example, there are mics that are favoured for bass (kick) drums that may be deemed unsuited for anything else. This is not necessarily true, as experimentation can often demonstrate.

Condenser ...

More correctly called capacitor mics, these are generally considered to be the ultimate. They have exceptional detail, and can usually tolerate very high sound levels. Distortion is very low, because the diaphragm movement is so small (comparable to that of the human ear drum). Capacitor mics most commonly use a high DC voltage to polarise the 'plates' of the capacitor sensor, although some use the change in capacitance to modulate a radio frequency oscillator. The frequency modulated 'carrier' is then fed to a detector stage to be converted back to audio. Another form is called MEMS (Micro Electro-Mechanical Systems), which typically uses a charge-pump to provide the polarising voltage.
Capacitor mics (of all types) require power - this may be supplied via the P48 (48V phantom feed) from a mixing desk, or may be an external power supply. Electret and MEMS mics are low voltage (between 1.2 and 5V) and are normally supplied with power by the equipment in which they are installed, or from a single 1.5V cell (common with self-contained electret microphones).
In audio production, probably the most famous of all capacitor mics is the Neumann U47.
Dynamic ...

The dynamic mic uses a mechanism that is very similar to that of a loudspeaker. The majority are robust and can accept extreme sound levels, making them ideally suited for live productions. Most are cardioid, although omni-directional and hyper-cardioid types are also available. Dynamic mics are the most common of all types used in live work, and they are often used for studio recording as well. One of the best known is the venerable Shure SM58.
They are usually very rugged, and can handle more abuse than almost any other type of microphone. The ideal dynamic mic has a low impedance voicecoil, and uses a small transformer to provide the required output level and impedance. High impedance voicecoils are fragile, and can't handle the rough treatment common with live performances.
Electret ...

Also called 'electret condenser' or 'electret capacitor' microphones, these use a permanently 'charged' plastic membrane, so a high voltage polarising supply is not needed (as is the case with 'true' capacitor mics). Most are omni-directional, although cardioid inserts are also made. Like capacitor mics, an impedance conversion stage is essential because of the extremely high intrinsic impedance. While electrets can be used for stage work, they may distort at high sound pressure levels (SPL). Many vocalists are capable of driving electret mics well into distortion. Temperature and humidity (such as from the breath of vocalists) can adversely affect them. Professional electret microphones are excellent for recording.
Electret mics (also known as 'pre-polarised') are now very common for sound level meters and other precision measurements. Many do not use an internal FET, relying on an external preamp to provide the several gigohm input impedance needed to measure low frequencies. The capacitance is very small, often no more than 10pF for a miniature capsule. To get down to 20Hz, the preamp input impedance needs to be around 1G ohm (1,000Meg).
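The low-frequency corner set by the capsule capacitance and the preamp input resistance is the standard first-order high-pass relationship, f = 1 / (2πRC). A quick sketch (illustrative function name only) confirms the 1G ohm figure quoted above:

```python
import math

def lf_corner_hz(r_ohms, c_farads):
    """-3dB low-frequency corner of a capsule capacitance into a load resistance."""
    return 1.0 / (2 * math.pi * r_ohms * c_farads)

# 10pF capsule into a 1G ohm preamp input -> ~15.9Hz corner
print(lf_corner_hz(1e9, 10e-12))
```

With 10pF and 1G ohm the corner lands just below 16Hz, which is why anything much less than a gigohm compromises the 20Hz response.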
Ribbon ...

These are common in recording studios, but less so in live work because they are comparatively fragile. A very thin (usually aluminium) ribbon is suspended in an intense magnetic field, and generates a small current when it is moved by sound. Ribbon mics have extremely low impedance, typically (much) less than 1 ohm. A transformer is used to raise the impedance (and output voltage) to a usable level. Although ribbon mics have an inherent figure-8 pattern, they are also available with cardioid or hyper-cardioid patterns. 'Planar ribbon' mics are a variation on the theme. These use a thin membrane with a planar (flat) coil deposited on the membrane.
Carbon ...
These microphones used to be very common - every old style telephone had one. The carbon mic has one major advantage over every other type - it has gain! Because the microphone element is made up of carbon granules, speech activating the diaphragm will compress and release the granules, changing the resistance significantly. The power needed by these mics is provided by the telephone line. The microphone gain is such that no additional amplification is needed to allow a normal phone call - even over a considerable distance. In the early days of telephony this was essential to the operation of the 'phone network - so much so that without the carbon microphone, the telephone would never have been useful (let alone gained acceptance) in those early days. Cheap and reliable amplification has made them redundant now.
As a short side-note, it is worthwhile mentioning that the telephone system uses a nominal 48V supply (see phantom feed, below). The influence of telephony on electronics as we know it is huge - so much so that the development of the phone system drove many of the inventions that we now take for granted. Have a look at the vast contribution of Bell Laboratories (which used to be an integral part of AT&T). Bell labs invented the transistor - the very cornerstone of every electronic product we use, as well as the electret microphone (plus countless other things we now treat as commonplace).
The output level of microphones should ideally be rated in millivolts per Pascal (mV/Pa), although there are many variations. Other conventions include dBm at 0.1Pa (this will always be a negative number). Most new microphones will be rated in dBV at 1Pa, where 0dB is 1V. For example, a mic may state its sensitivity as -44dBV (the 1Pa reference is sometimes assumed), which translates to 6.31mV at 94dB SPL. Other standards may persist in some countries.
1 Pascal = 10 µbar = 94dB SPL
0.1 Pascal = 1 µbar = 74dB SPL
1 dyne/cm² = 0.1 Pascal = 1 µbar
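Converting between a dBV sensitivity rating and mV/Pa is a simple dB calculation, sketched below (function names are mine, for illustration only):

```python
import math

def dbv_to_mv_per_pa(sens_dbv):
    """Convert sensitivity in dBV (re 1V at 1Pa) to mV/Pa."""
    return 1000 * 10 ** (sens_dbv / 20)

def mv_per_pa_to_dbv(mv_per_pa):
    """Convert sensitivity in mV/Pa back to dBV (re 1V at 1Pa)."""
    return 20 * math.log10(mv_per_pa / 1000)

print(dbv_to_mv_per_pa(-44))    # ~6.31 mV/Pa, as per the example above
print(mv_per_pa_to_dbv(6.31))   # ~-44 dBV
```

The -44dBV example above works out to 6.31mV for a 94dB SPL (1Pa) input, as stated.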
There are also noise ratings (these vary widely, both in output noise and the way it is specified), output impedance, recommended load impedance, polar response, frequency response, etc. Frequency response claims are meaningless without a graph showing the actual response, and for directional mics this should also indicate the distance of the mic from the sound source. Cheap microphones are particularly bad in this respect, and it is not uncommon to see the frequency response stated as (for example) 50 - 20,000Hz. Because no limits are quoted (such as ±3dB) this is pointless - any microphone will react to that frequency range, but may be -20dB at the frequency extremes, with wide variations in between.
A proper graph showing the response at all frequencies will quickly show the actual response, although it is uncommon for any manufacturer of general purpose mics to state the distance between mic and sound source, or the method used to take the measurement.

A polar response graph will also show the directivity at a number of different frequencies. As frequency decreases, the directional pattern commonly approaches omni-directional, although some mics maintain excellent directivity even at very low frequencies (the secret is in the rear chamber).
There are some microphones that appear to be completely different from those described above. This is not the case, as the essential characteristics and transducer types don't change, but more (or different) hardware or electronics are added to give additional functionality.
RF (Transmitting) Mics ...

These are available in many variations, and professional types can be very expensive. A conventional microphone transducer having one of the directional characteristics listed above is connected to a small radio frequency (RF) transmitter, so the mic can be used without the need for cables. The transmitter for professional radio mics requires excellent frequency stability, and receivers are highly specialised to ensure no 'drop-outs' and maintain a good signal to noise ratio (SNR) at all times.
These mics used to require specialist knowledge and experience to use them correctly, but they are now commonplace and few people have issues with them. Many have automatic limiting and compression that has to be managed carefully, because compression limits the dynamic expression of good singers, causing them to sound comparatively flat and lifeless.
PZM™ ...

The Pressure Zone Microphone™ (also known as a boundary mic) is a special application of the electret mic. A miniature electret sensor is mounted a small distance (typically less than 1mm) from (and facing towards) a flat plate. They are often used on floors, walls or tables (for conferencing and the like), but can also be attached to large flat discs or plates. They have exceptional performance, and can effectively reduce reverberation if used carefully.
There are several variations on the basic technique, allowing for a single stereo mic unit, a cheap 'knock-off' made by Radio Shack (Tandy in Australia) called a boundary mic, but lacking the characteristics of a true PZM, and a few others.
Dummy Head ...

The dummy head mic technique yields extraordinary performance, but the recording can only give the full effect when listened to through headphones. Electret mic capsules are either embedded in a true dummy head (wig-carriers can be used ... meet Yorick below), or miniature capsules are worn in the ears of the sound recordist. When played back through headphones, the original sound field is essentially restored, and the listener hears the sound as if s/he were there.
The requirement for headphones has limited the appeal of the technique.
Shotgun ...

Shotgun mics are worthy of a complete article to themselves. Usually fairly long, they have an extreme directional pattern, and typically only pick up sound from the general vicinity directly in front of the mic. There are several ways to make shotgun microphones - techniques include a long 'barrel' with slots designed to create an interference pattern that rejects sound from the side, and multi-element designs with phase and amplitude balance between elements. An old method was to use multiple thin tubes of differing lengths, arranged so that the longest tube is in the centre, with smaller tubes surrounding it.
Some shotgun mics use a combination of methods, as well as careful attention to the mic capsule's rear chamber. These mics are useful for location sound recording (for movies or TV), nature recordings, and anywhere else where very high discrimination is needed.
The following is intended to give you an idea of the basic techniques used to make various microphone elements. These are the basic building blocks, and while some (such as dynamic mics) can be used with no additional circuitry other than a small transformer (not always used), most others require some additional components to be useful.

Because the dynamic mic is one of the most prolific (or so it would appear to the uninitiated), it will be covered first.
The general arrangement of a dynamic mic is shown to the left. A diaphragm is coupled to a voicecoil that is suspended in a strong magnetic field. As the diaphragm (and thence the coil) moves in sympathy with the arriving sound waves, an electric current is generated. In a perfect microphone, the electrical current will be an exact replica of the acoustic signal, but in reality this is never the case.

The element (also known as a capsule) shown has a vented pole-piece (indicated with a *), and this is typically done to create the required directional characteristic. For an omni-directional dynamic microphone, the back would be sealed. However, as with all omnidirectional microphones there must be a small vent to allow air pressure to equalise on both sides of the diaphragm. Without the vent, the diaphragm would be displaced by changes in atmospheric pressure.
As you can see, this is very similar to the construction of a small speaker, and indeed, a speaker will work as a microphone (and a dynamic mic can also make noise). Naturally, the speaker and mic are each optimised for their intended application, and neither works particularly well when its role is reversed. 99% of basic intercom systems use the speaker as a microphone.
Typical dynamic microphones have an impedance of around 150 - 300 ohms, although some are higher or lower than that. While it may seem tempting to match the impedance of the microphone and preamplifier, this is ill advised, as it will reduce the signal level by 6dB, and thus reduce the signal to noise ratio.
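The 6dB figure comes from the voltage divider formed by the source and load impedances. A minimal sketch (illustrative function name) shows the matched case against a typical 'bridging' load of ten times the source impedance:

```python
import math

def loading_loss_db(r_source, r_load):
    """Signal level lost to the source/load voltage divider, in dB."""
    return 20 * math.log10(r_load / (r_source + r_load))

print(loading_loss_db(200, 200))    # ~-6.0dB (matched: half the voltage is lost)
print(loading_loss_db(200, 2000))   # ~-0.83dB (bridging 10:1 load)
```

A 10:1 bridging load loses well under 1dB, which is why mic preamp inputs are normally 1k - 2k ohms rather than 200 ohms.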
A capacitor microphone is much simpler mechanically, but the material quality is critical for good performance. Because the capacitance is so small, the insulation resistance must be very high, as must the impedance of the following stage. It is not uncommon to find well in excess of 1 gigohm input impedance for the impedance conversion stage.

While the capsule shown has damping material, this may not always be the case. The distance between the diaphragm and the rear of the housing can be made small enough so that no ill effects occur within the audio range. Like the omnidirectional dynamic mic, a vent is provided to equalise air pressure.

The backplate must be polarised so the microphone will work. While this may be as low as 48V, this may not be sufficient to allow a worthwhile signal level. Voltages up to 200V will be found in some examples. This places great constraints on the insulation, and means that such mics can be adversely affected by moisture.
In some cases, the microphone capsule may have two diaphragms, each spaced as close as possible to the backplate. This will create a microphone with a figure-8 directional pattern. The one shown is omni-directional - this may come as a surprise because sound coming from the rear of the mic is shielded from the diaphragm by the mic itself, but this only applies at very high frequencies. Many Neumann mics use a dual diaphragm capsule, and switch one diaphragm to change the directional characteristic from cardioid to omni-directional.
The diaphragm of capacitor mics must be conductive, and it is common to use metallised plastic film (Mylar is popular). The metallisation film must be protected from moisture, so may be on the inside of the capsule. In almost all cases, the insert will have a tiny bleed hole to allow the air pressure inside the housing to match that of the outside atmosphere. If this were not provided, the diaphragm to backplate spacing would vary with atmospheric pressure.

Electrically, a capacitor mic can be represented by a signal source in series with a capacitance equal to that of the capsule itself. As noted above, this will be very low. A typical capacitor microphone (such as the Neumann U47) has a capacitance of around 80pF (see References).
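The source impedance such a small capacitance presents is easy to estimate from |Z| = 1 / (2πfC). The sketch below (function name mine) shows why the following stage needs such a high input impedance:

```python
import math

def capsule_impedance_ohms(f_hz, c_farads):
    """Magnitude of a capsule's capacitive source impedance at a given frequency."""
    return 1.0 / (2 * math.pi * f_hz * c_farads)

# An 80pF capsule (the U47 class of mic) ...
print(capsule_impedance_ohms(20, 80e-12))    # ~99.5M ohms at 20Hz
print(capsule_impedance_ohms(1000, 80e-12))  # ~1.99M ohms at 1kHz
```

At 20Hz the 80pF capsule looks like nearly 100M ohms, so anything much below a gigohm of load impedance starts to roll off the bass.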
Historically, these mics have been known as 'condenser' mics - 'condenser' is simply the old term for a capacitor.
+Electret (sometimes referred to as 'ECMs' - electret capacitor microphone) mics work using the same general principles as a traditional capacitor mic. Instead of using a DC polarising voltage, the backplate is an electret material (this is a so-called 'back electret'). This material is a plastic that is subjected to an intense electrical field during processing. This causes the plastic material to retain a charge (more or less) permanently. The electret surface must be metallised to make it conductive. Some electret mics use the diaphragm as the electret element (and use a conventional backplate), and while this works very well, they do not have an indefinite life. As before, the vent is required.
The FET shown is almost always included in the capsule itself for consumer electret mics. This is the impedance converter, and in most cases there is no resistor from the gate to common (ground, mic housing). This is one reason that electret mics can react badly to a sudden loud sound, and may lose sensitivity for a few seconds. The FET gate circuit relies on surface leakage alone to bias the FET correctly.
While I stated earlier that dynamic mics seem to be the most common, they are soundly (pun intended) beaten by electret microphones. All modern telephones use electret mics, including mobiles (aka cell phones), answering machines, computer headsets, and virtually every piece of electronic equipment that needs to hear voice commands, noise, etc. The electret has been the most successful mic capsule ever developed - over 100 million are produced every year! However, MEMS mics (see below) are now starting to take over, and will capture even more of the market in time.
The ribbon mic has a special place in the heart of many a sound engineer. They have an inherent figure-8 pattern, although this is often modified to produce more 'conventional' patterns. Because the impedance of the ribbon is so low, all such microphones use a transformer to raise the impedance and output voltage to more usable levels. The transformer is almost always in the same housing as the microphone element itself.
Ribbon mics are often thought to be fragile, and many of them are, but there are others that are very robust indeed. Because ribbon mics use a relatively large diaphragm (much larger than most other mics), they can be very sensitive to air movement - even at subsonic frequencies.
However, high SPL does not usually bother a ribbon mic in the least. Provided the ribbon remains in the gap, almost nothing will cause a ribbon mic to distort - apart from the aforementioned air movement which must be avoided. Even apparently gentle air movement can distort the ribbon, which then must be replaced. 'Planar' ribbons are used by some manufacturers - a planar ribbon is not a ribbon in the true sense of the term, but uses a metallised coil printed on a thin plastic carrier. These are very rugged according to the literature.
Because of the relatively low output level (even after the transformer), you need a very quiet preamp for ribbons. They have very low self noise, so preamp noise can easily exceed the microphone's own noise.
MEMS (Micro Electro-Mechanical Systems) microphones are now replacing electrets in many applications. They are made using traditional silicon etching processes, where layers of different materials are deposited onto a silicon wafer and the unwanted material is then etched away. This creates a moveable membrane and a fixed backplate over a cavity in the base wafer. The sensor backplate is a stiff perforated structure that allows air to move easily through it, while the membrane is a thin solid structure that flexes in response to the change in air pressure caused by sound waves.
Changes in air pressure created by sound waves cause the thin membrane to flex while the thicker backplate remains stationary as the air moves through its perforations. The movement of the membrane creates a change in the capacitance between the membrane and the backplate, which is translated into an electrical signal by the ASIC (application specific IC). MEMS mics always require power, typically 3.3V at a few hundred microamps.
MEMS mics are rugged, and are almost always made as SMD (surface-mount devices) allowing them to be placed on a PCB along with the other SMD circuitry. While some have good low frequency response, most are tailored for use with speech signals only. They can have an analogue output, although many provide a digital output in the form of pulse-density modulation (PDM), which is easily converted to a 'traditional' digital data stream by a microprocessor.
MEMS mics are available with the sound port at the top or bottom. A bottom port as shown in the drawing provides a reasonably large back-chamber, which improves low frequency response and sensitivity. Top port types mean that the back chamber is very small (just the size of the front chamber in the drawing), generally resulting in reduced sensitivity. The small cavities (chambers) also act as Helmholtz resonators, and can be used to tailor the frequency response, especially at high frequencies where the chamber size becomes significant compared to wavelength. Most MEMS mics are tiny, with a typical package size being only 3 x 4 x 1mm, with some being smaller still. As the package size is reduced, it becomes more difficult to achieve good performance because the back chamber (in particular) is so small.
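The Helmholtz behaviour of the chambers can be estimated with the standard resonator formula. The following Python sketch is purely illustrative (the port and cavity dimensions are invented for the example, not taken from any datasheet, and the usual end-correction to the effective port length is ignored):

```python
import math

C_AIR = 343.0  # speed of sound in air, m/s

def helmholtz_hz(port_area_m2: float, cavity_m3: float, neck_len_m: float) -> float:
    """Resonant frequency of a Helmholtz resonator:
    f = (c / 2*pi) * sqrt(A / (V * L))."""
    return (C_AIR / (2.0 * math.pi)) * math.sqrt(port_area_m2 / (cavity_m3 * neck_len_m))

# Invented, MEMS-scale dimensions: a 0.25mm diameter sound port,
# 0.3mm long, opening into a 1 cubic millimetre chamber.
area = math.pi * (0.125e-3) ** 2
print(f"resonance: {helmholtz_hz(area, 1e-9, 0.3e-3) / 1000:.1f} kHz")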
For those microphones that require power, the most common option is phantom feed (P48). This uses a nominal 48V DC applied to both signal leads via 6.81k resistors. A good example of a P48 powered microphone is described in Project 93, and Project 96 describes a 48V power supply and P48 distribution scheme. For the sake of completeness, Figure 6 shows the general arrangement of a 48V phantom feed system. Although the feed resistors are shown as 6.81k, 6.8k resistors can be used instead. It is recommended that they be matched to within 0.1% so common mode rejection is not compromised.
Figure 6 - 48V Phantom Powering
Although the phantom feed supply voltage has been standardised at 48V, there are many supplies that do not comply, with some operating at 30V or even less. While mics designed for P48 power might work from these lower voltages, many will not. In general, a phantom feed power supply must be able to supply 48V; the accepted voltage range for P48 is between 38V and 52V. A 'new' sub-standard has arisen, called P24 (20V - 26V), but this is (IMHO) a seriously retrograde step, creating potentially disastrous incompatibilities between competing standards.
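The feed-resistor arrangement also sets a hard limit on how much current a P48 mic can draw. A short Ohm's-law sketch in Python (the 5mA example draw is illustrative, not a figure from the article):

```python
def p48_mic_voltage(load_ma: float, feed_r: float = 6810.0, supply: float = 48.0) -> float:
    """Voltage left at the mic's terminals when it draws 'load_ma'
    milliamps of DC. The two feed resistors carry the current in
    parallel, so the effective source resistance is feed_r / 2."""
    return supply - (load_ma / 1000.0) * (feed_r / 2.0)

# The two 6.81k resistors appear as 3.405k to the DC load, so the
# absolute maximum current (into a short circuit) is limited:
print(f"max current: {48.0 / 3405.0 * 1000:.1f} mA")
print(f"at 5mA draw: {p48_mic_voltage(5.0):.1f} V at the mic")
```

This is why valve mics (which need heater current) cannot run from P48, as noted further down: only a few milliamps are available before the voltage at the microphone collapses.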
Some time in the late 1960s, Neumann (of microphone fame) converted its valve (tube) capacitor microphones to solid-state. They decided upon a remote powering system that they called Phantom Power, and this was a trade mark of Neumann. Although other manufacturers originally avoided the trade mark (using terms such as 'simplex' instead), with time the term Phantom Power has become generic. DIN standard 45596 describes the powering of any device that uses the P48 phantom powering scheme.
Because phantom power is a common mode signal (it appears equally on both mic leads), plugging a balanced dynamic mic into a 'live' P48 powered mixer channel will not harm the microphone. The mic may make strange and/or loud and/or rude noises if the internal insulation is degraded (by age, saliva, beer, rum+cola, etc., etc.). In general, it is better to switch off the P48 supply unless it is needed.
Phantom powering is not the only way that power is supplied to microphones. Another standard is called T12 - also known as transverse feed, A-B powering, parallel powering, and occasionally by its full name ... 12V Tonader (it originated in Germany). It is not commonly found outside the film industry, and is totally incompatible with P48 powering. Adaptors can be fabricated, but require a transformer.
The T12 system uses 180 ohm feed resistors and a 12V supply, but the DC is not sent as a common mode signal like phantom feed. Referring to an XLR mic connector, the positive DC is applied on pin 2, negative on pin 3, and earth (ground) on pin 1. However, there is also a reverse version, with positive on pin 3 and negative on pin 2. T12 powering will probably damage dynamic mics that are inadvertently connected while the T12 power is on.
Capacitor microphones using valves (tubes) will almost always require a special outboard power supply, and multi-pin connectors are common. Because of the current needed by the valve heater, the 2 - 4mA available from P48 is completely unsuitable. These power supplies will be specific to the microphone - as far as I know, there is/was no standard adopted by manufacturers, so each will be different.
For live applications, the number of 'open' microphones (i.e. connected and picking up sound) should be kept to a minimum. Unnecessary use of a large number of open mics creates excessive comb filter distortion, which reduces intelligibility and increases feedback problems. There are many recommendations that you may find - you may be advised to minimise the number of different microphones, for example. Exceptions are directional overhead mics for percussion and high velocity dynamic mics for bass drums. Placing any mic too close to an instrument, sound source or surface affects its response. This effect may be good or bad, depending on what you are trying to achieve.
There are many sites on the Net that give some general idea of what microphone to use where, but these are mainly a matter of opinion. Everyone who uses mics has different ideas on optimum placement and type. Some are reasonable, a few are good, and a lot are (IMO) just plain wrong. One thing that is almost never mentioned is that where you place a microphone may change its characteristics.
If a mic is placed very close to a surface (be it a wall, floor, drum skin or singer's face) it will no longer have the directional characteristics you purchased it with. Likewise, holding a mic in such a way that your hand cups the back of the mic ball will change directionality radically and unpredictably.
Something that is not well understood is just how much signal you can get from a microphone. A typical dynamic mic is easily capable of 0.5V RMS (500mV) when held close and singing (or in my case yelling) loudly. This may seem extreme, but look at the specification for the SM58 as an example. 1.85mV at 1 Pascal (94dB SPL), so 185mV results at 134dB SPL - anyone can yell that loud at close range. You will get 500mV at just under 143dB SPL. While this may seem pretty extreme, many vocalists can achieve such levels at close range - good mic technique includes 'pulling back' from the mic when singing loudly, and getting in close for soft passages. This is a vocalist's natural compressor, but many singers don't have any mic technique at all (there seems to be an increasing number that don't have any singing technique either, but that's a different matter).
At these levels, you can completely forget using electret mics, as they will just distort badly. Because their sensitivity is much higher than a typical dynamic mic, the mic may attempt to produce perhaps 3-5V RMS at the same SPL (143dB), and this is not possible with standard electret capsules. This is especially the case if it is powered by a 1.5V battery! Such mics are very common (and very useless for most applications).
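The sensitivity arithmetic above is easy to check, because microphone sensitivity is always quoted at 1 Pascal (94dB SPL) and output voltage scales tenfold for every 20dB increase in SPL. A small Python sketch (not part of the article, using the SM58's published 1.85mV/Pa figure):

```python
import math

def mic_output_mv(sens_mv_per_pa: float, spl_db: float) -> float:
    """Output voltage (mV) at a given SPL, from sensitivity in mV/Pa.
    1 Pa corresponds to 94dB SPL; +20dB SPL = 10x the voltage."""
    return sens_mv_per_pa * 10 ** ((spl_db - 94.0) / 20.0)

def spl_for_output(sens_mv_per_pa: float, out_mv: float) -> float:
    """SPL (dB) needed to produce a given output voltage."""
    return 94.0 + 20.0 * math.log10(out_mv / sens_mv_per_pa)

print(f"{mic_output_mv(1.85, 94):.2f} mV at 94dB SPL")
print(f"500mV requires {spl_for_output(1.85, 500):.1f} dB SPL")
```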
This article has only scratched the surface, but is a good starting place. Although there are a great many variations, the details above cover the majority of microphones in general use.
As an experiment, I was recently forced (i.e. it was something I'd been planning to do for well over a year) to build Yorick (as in "Alas poor Yorick, I knew him well" - Shakespeare). Yorick is a dummy head microphone, and details are available in Project 112 so you can build your own version. Tests are very encouraging, with an amazing ability to locate the sound source.
Yorick - My Dummy Head Microphone System
Please note that any resemblance between 'Yorick' and a certain well-known (and now deceased) American entertainer (who seemed to be rather over-fond of cosmetic surgery) is entirely coincidental.
Although you can purchase a Neumann, Gras or Brüel & Kjær dummy head mic already made, I suspect the price will be a fairly strong deterrent. There are other methods of achieving much the same result, but there is something rather nice about having a 'real' head rather than a plastic or MDF disc with mics on each side (the hair is optional of course). Each capsule uses a P93 mic amplifier board. In my case, I already had a suitable preamp that is multi-purpose, but the P93 mic amp is the easiest way to build the unit.
Elliott Sound Products | Morse Code
The first question that many people will ask is likely to be ' .-- - ..-. ', which is Morse code for 'WTF'. Before you scoff, it should be remembered (or become known to those too young to remember) that Morse code signalled the birth of electronic messaging. The 'electric telegraph' was the first system that allowed people to communicate over long distances, by pressing a key in a sequence of 'dots' and 'dashes' (commonly referred to as 'dit' and 'dah' respectively).
Earlier systems, such as semaphores (flags used in particular patterns and sequences to pass messages), flag signals (e.g. flaghoist), smoke signals, bonfires or drums all suffered from environmental influence. Visual systems were affected by line-of-sight and weather conditions (fog, rain, etc.) and audible methods were limited by the propagation conditions prevailing at the time (wind, atmospheric 'inversion' layers, background noise, etc.). All had limited range, requiring relay stations at regular intervals where the message could be received and re-transmitted. As you would expect, the requirement for re-transmission could easily introduce errors. The word 'telegraph' was coined in 1792 from the Greek, tele, afar, and graphos, a writer (Concise Oxford Dictionary).
Morse code and the Morse telegraph system were by no means the first methods used for telegraphy. Visual and audible systems existed from ancient Greek times and probably long before, and mechanical semaphore telegraph stations were used in France in the late 1700s. Electrical experiments were conducted as early as 1747 [ 11 ], with a telegraph system developed in 1774 using pith balls, 24 conductors and high voltages. There were many other attempts as well, but it would be folly to even try to list them all. The above reference (amongst many of the others cited here) does cover quite a few of these early attempts, as well as a great deal of historical information.
However, the system devised by Morse and his co-workers eventually defeated the other contenders - partly due to its inherent simplicity, and partly due to intense litigation that saw many competing (or even complementary) technologies disallowed as 'infringing' existing patents by (especially) the US Patent Office. The problems seen by many of today's inventors (and corporations) are certainly nothing new. The once dominant Western Union was the largest provider of messaging (and later 'telegram') services, which were initially all based on Morse code, but adopted new technology such as teletype (TTY) and teleprinter networks when they became available.
Up until the end of the last century, Morse code was a requirement for amateur radio operators, many military personnel and a number of other occupations where communication was involved. Although it's no longer used by most people, it still retains an important place, not only in history. SOS (the international distress call ' ... --- ... ') is still recognisable to this day by a great many people, and will invoke the same reactions now that it did over 150 years ago. Early Nokia phones would beep ' ... -- ... ' when a message was received - Morse code for SMS.
Patented by Samuel Morse, Joseph Henry and Alfred Vail in 1836, the telegraph (using Morse code) was first demonstrated to US Congress in 1844, transmitting the message "What hath God wrought" over a wire from Washington to Baltimore. He later experimented with submarine cable telegraphy, which was to become the first intercontinental messaging system. There is considerable conjecture concerning the real 'who-what-when and why', which is covered very well in the first reference [ 1 ]. However, this short article is intended only to provide some basic background, and to show the importance of the early electro-magnetic signalling schemes.
Most of the very early attempts at telegraphy originated in Europe, but with a few notable exceptions, failed to gain acceptance. A system devised by William Cooke and Charles Wheatstone was in use by the British railways in the 1830s, and in 1845 became the first electric telegraph system ever used to catch a murderer [ 15 ]. These telegraph systems used a system of needles which could be deflected left or right, pointing to the desired letters in turn, and were based on an idea first demonstrated by Baron Pawel Schilling (see YouTube video demonstration). The need for multiple wires (up to six) was a significant drawback. Another Wheatstone system used generated pulses to move a pointer to the desired letter of the alphabet on a circular dial (the 'ABC' or 'dial' telegraph). While this was well ahead of Morse code in almost all respects, the equipment would have been far more expensive to produce and maintain, and it failed to gain wide acceptance. Have a look at this YouTube video to see one in action at the Telstra Museum in Sydney (Australia).
Morse code was used first in the USA, but Europe and the rest of the world followed quickly, because of its effectiveness and simplicity. The first European line was set up between Hamburg and Cuxhaven in 1847, and many others followed. Soon the need to link countries across oceans and continents was realised. In 1866 a submarine cable link was established between Britain and the USA, and by 1872 a link to Australia was installed. These are remarkable achievements when you look back on the technology of the time, and it's hard to imagine the working conditions of those who did the hard work (and it would have been very hard work indeed).
The international code used (or what used to be used) is slightly different from the original that was developed by Alfred Vail (commonly called 'Morse Code', although it seems likely that Vail did most of the work), but the essential principles are the same. There are no lower case letters - all transmissions are assumed to be UPPER case. The code also provides numbers (0-9) and a limited number of punctuation marks ( . , : ? ' - / ( ) " @ = ). A European variant also exists to allow a few of the European characters to be transmitted.
So-called 'telegrams' (not the app that's currently widely available) were once quite common. A message was sent via Morse code from one place to another, transcribed and delivered to the recipient by a messenger. Prior to the telephone, this was faster than any other method of communication that had ever been available to the public. In Australia and elsewhere, it provided communication services that were widely used for many years, even after telephone services became common. Not every household had a phone, but a telegram message could be delivered to any address world-wide. Eventually, the teleprinter (or teletype) and telephone made the need for telegrams diminish to the point where they are no longer used. SMS (short message service) and email are now responsible for almost all text traffic.
It's also important to understand the limitations of the early telegraph systems. In particular, the extreme lack of privacy - anyone in a given telegraph office could listen to the message being received, and it would have been foolish indeed to send sensitive information. While there were almost certainly laws to prevent telegraph personnel from passing on information to persons other than the intended recipient, this would not actually prevent them from doing so. Encryption wasn't common as far as I can determine, but it was used by some [ 13 ], although it's also claimed that it was banned by law in some jurisdictions. Steganography, the practice of hiding messages within innocent-looking text, was also used to get around any laws prohibiting encryption [ 14 ].
Like many ESP articles, it is hoped that this will inspire people to do some research, and learn some of the fascinating history behind the development of electrically powered systems - particularly in the area of communications, which was the birth of electronics as we know it. It must be remembered that when the early telegraph experiments were first carried out, knowledge of electricity was almost non-existent for the vast majority of the experimenters and inventors of the time. Strange (to us) 'solutions' came about due to a lack of understanding of the basic principles that we can now learn even in our early school years.
The early telegraph systems were the pioneer industries that effectively created electronics as we know it. The vast majority of all early electronics were devoted to improving our ability to send and receive messages from afar, and even today, a huge amount of consumer electronics is still devoted to the same purpose. The technology has changed, but our thirst for information and the ability to communicate with work colleagues, friends and family is still one of the main driving forces of the electronics industry.
The duration of a dot is considered to be one 'unit', and that of a dash is three 'units'. The space between the components of one character is one 'unit'. The space between characters is three units and between words seven units. To indicate that a mistake has been made and for the receiver to delete the last word, send ........ ('HH' - eight dots). The length of a single 'unit' is usually somewhat variable, especially when a human operator is keying the code. A reasonable unit duration is the time it takes to say "dit", but there appear to be no hard and fast rules, and the duration of a single unit also depends on the transmission speed.
International (ITU) Morse Code A - Z, 1 - 0
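The timing rules described above are simple enough to express in code. This is a small Python illustration (not part of the original article); the Morse table is deliberately partial, just enough for the demonstration:

```python
MORSE = {'S': '...', 'O': '---', 'E': '.', 'T': '-'}  # partial ITU table

DOT, DASH, SYM_GAP, CHAR_GAP, WORD_GAP = 1, 3, 1, 3, 7

def duration_units(text: str) -> int:
    """Total duration in timing units: dot=1, dash=3, gap within a
    character=1, gap between characters=3, gap between words=7."""
    total = 0
    for wi, word in enumerate(text.upper().split()):
        if wi:
            total += WORD_GAP
        for ci, ch in enumerate(word):
            if ci:
                total += CHAR_GAP
            for si, sym in enumerate(MORSE[ch]):
                if si:
                    total += SYM_GAP
                total += DOT if sym == '.' else DASH
    return total

# S(5 units) + gap(3) + O(11) + gap(3) + S(5) = 27 units
print(duration_units("SOS"))  # → 27
```

Multiply the unit count by the chosen unit duration (set by the operator's sending speed) to get real time.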
In the early days (before radio), the code was sent simply as a voltage on the transmission ('telegraph') line. The operator used a key (a specially designed momentary contact switch) to send dots and dashes. Depressing the key sent a voltage down the telegraph wire, and that operated a sounder or paper tape punch at the other end. Early radio systems used what was known as 'CW' (continuous wave) - a (very) broad-band radio signal that was originally provided by a spark-gap transmitter. This was keyed on and off in the same way as a wired telegraph, providing simple on-off ('digital') modulation.
A spark-gap transmitter literally used an electric arc across a pair of electrodes, and the RF generated by the arc was sent to an antenna by a cable. These generated noise that was detected by various (crude by modern standards) detectors in the receiving apparatus. Nearby stations could not transmit at the same time, because the signal was poorly tuned (or not tuned at all) so provided 'blanket' coverage over a wide frequency range. Tuned systems came later, especially after the advent of 'wireless' valves (vacuum tubes) that could amplify weak signals, and the benefits of tuning became apparent. In particular, a tuned system (if properly aligned) was far more sensitive than early broad band systems.
Tuned transmissions occupied a relatively small bandwidth, allowing transmitters in the same locality to operate without interfering with each other or ruining reception. Each transmitter used its own frequency, so a selective receiver could pick up the frequency of the desired signal source. Unlike today, even relatively low power transmitters were large, complex and expensive, so there would never have been more than a few in operation at any one time.
As 'wireless' (as it was known at the time) progressed, it became possible to modulate the carrier with a tone. Before that, tuned (single frequency) CW receiving systems generally used a 'BFO' (beat frequency oscillator) that could be adjusted to be around 500Hz to 1kHz higher or lower than the transmitted signal. When the transmitter was activated, a 500Hz to 1kHz tone could be heard at the receiver. The frequency of the BFO could be adjusted to obtain a signal that was clearly audible, but not annoying to the receiving operator. The first modulation system developed was AM (amplitude modulation), which was used for all early broadcast (voice and music) transmissions.
I still recall being able to tune a 'short wave' receiver across the band and pick up Morse transmissions. At the time, I was not yet a teenager and never bothered to learn Morse code. In hindsight this probably left a gap in my overall education in the world of electronics, but I've never been in a position where it could have been useful, so I'm not overly distressed. A transmitter sending amplitude modulated Morse code as a tone provides reception capabilities that are second to none. The tone can be heard and interpreted at levels well below the noise. It's possible (but would require a very low data rate) to detect a tone that's up to 20dB below the peak noise level. This is demonstrated by the following recording ...
The tone is 12dB below the peak noise level (-6dB). The level of the 550Hz Morse code is -18dB.
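The reason a known-frequency tone survives so far below the noise is that a narrow-band detector (the ear, or a filter) averages the noise away while the tone adds coherently. A small Python sketch (my own illustration, not from the article) recreates the recording's levels - a 550Hz tone at -18dB buried in noise peaking at -6dB - and recovers the tone amplitude with a single-bin DFT correlation:

```python
import math
import random

def tone_level(signal, freq, rate):
    """Single-bin DFT estimate of the amplitude of 'freq' in 'signal'.
    Correlating with sin and cos averages uncorrelated noise towards
    zero while the tone's contribution adds up coherently."""
    n = len(signal)
    c = sum(s * math.cos(2 * math.pi * freq * i / rate) for i, s in enumerate(signal))
    q = sum(s * math.sin(2 * math.pi * freq * i / rate) for i, s in enumerate(signal))
    return 2.0 * math.hypot(c, q) / n

random.seed(1)                  # repeatable demo
rate, n = 8000, 8000            # one second of 'audio'
tone_amp = 10 ** (-18 / 20)     # 550Hz tone at -18dB
noise_amp = 10 ** (-6 / 20)     # noise peaking at -6dB
sig = [tone_amp * math.sin(2 * math.pi * 550 * i / rate)
       + random.uniform(-noise_amp, noise_amp) for i in range(n)]

detected = tone_level(sig, 550, rate)
print(f"estimated tone amplitude: {detected:.3f} (true value {tone_amp:.3f})")
```

The longer the observation (i.e. the lower the data rate), the narrower the effective bandwidth and the deeper below the noise the tone can be pulled out, which is exactly the trade-off described in the text.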
Morse code was also used for line-of-sight communications, often between ships at sea. These most commonly used a continuous lamp (e.g. Aldis lamp), with a shutter mechanism that blocks the light until it's activated by the operator. The flashes of light can transmit Morse code between vessels, enabling communication during periods of 'radio silence' - usually imposed so that enemy vessels were unable to locate a convoy by using radio direction-finding (RDF). The same thing can be done today using a LED or laser lamp, but the transmitted signal would be high speed digital rather than Morse code. A system that exploited this method was used for digital communication between buildings (Datapoint 'LightLink'), which offered infrared optical transmission up to 3km at data rates of 2.5M bits/second (yes, I used to work on them).
Until the early 20th century, the primary source of power for the telegraph was primary (non-rechargeable) batteries, typically based on the chemical principles demonstrated by Alessandro Volta in 1800. It seems that the original Morse telegraph used five Grove cells (zinc + sulphuric acid anode, platinum + nitric acid cathode), each producing 1.9V so the total voltage was 9.5V. However, Grove cells generate nitrogen dioxide (NO2) as they discharge. When used in large numbers (such as at a telegraph station), NO2 can lead to lung disease and other ailments.
Note 1: In case you were wondering, rechargeable batteries could not be used because the telegraph was in constant use well before mains electricity was available anywhere. There was no power source available for charging, and secondary (rechargeable) batteries didn't even exist before 1859, when the lead-acid cell was invented. A web search will provide much interesting history for you to read through.
Note 2: The Grove cell was invented by Welsh polymath William Grove (1811-1896), who is also credited with the invention of the incandescent lamp (pre-Edison), was a pioneer of early photographic processes, and invented the hydrogen fuel cell. [ 17 ]
Note 3: A polymath (from the Greek polymathes, 'having learned much'; Latin: homo universalis, 'universal human') is an individual whose knowledge spans a substantial number of subjects, known to draw on complex bodies of knowledge to solve specific problems. (Wikipedia)
The wiring between telegraph stations was most commonly iron (or probably what today might be called mild steel). Annealed copper wire is too soft, and is unable to support its own weight across the typical distance between poles, and the idea of 'hard drawn' copper wire as is common today had not been discovered at the time. Copper wire is (and was) also a great deal more expensive than iron, but of course it is a far better electrical conductor. There is some information about rust prevention with iron wire. In the very early systems the wire(s) were coated with tar (presumably coal tar) which would have been a most unpleasant task indeed. In later years the wire was galvanised (coated with zinc), but details are rather sketchy. It appears that in Britain, zinc coating (galvanising) was common, but high sulphur levels in the atmosphere (from burning coal for home heating and industry) caused the zinc coating to degrade quickly.
For transmission, the early keys were simply an on-off momentary contact switch. There were countless designs developed, with special emphasis on ergonomics, with style and design intended to try to sell one maker's unit over the competition. A key that requires minimal travel improves sending speed, and if it's comfortable to use the operator won't tire quickly. The term 'RSI' (repetitive strain injury) didn't exist 150 years ago, but the condition itself certainly existed for Morse operators, who may have done little else during the day. Today, it's a simple matter to have a computer translate ordinary text into Morse code and back again, but of course there's no longer any need to do so.
Figure 1 - Transmitter Key Example
The key shown above is a standard key, and the knob is depressed for the duration of a dot or dash. The spring tension is adjustable, as is the stroke - the distance the key must be depressed to make contact. Individual operators would adjust the key to suit their personal style and preferences. Other keying systems were developed as electronic circuitry became capable of simple logic and timing functions. Early keys had an extra contact that allowed the key contacts to be shorted, and this allowed a single wire with an earth/ ground (literally) return to be used for transmission and reception - but not simultaneously.
In the later years of Morse code, a key system that many found to their liking was a set of 'paddles' that operated from side-to-side rather than vertically, known as iambic paddles. One paddle produced a train of dashes when pressed (inwards) and the other a series of dots. If both paddles are operated (squeezed together) the electronics would output an alternating sequence of dots and dashes ( .-.-.- ). Timing of the dots and dashes was/is electronic, and is faster and more precise than purely manual operation of a traditional key.
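The paddle logic is simple enough to sketch. This little Python illustration (mine, not from the article - real keyers also handle element timing and 'memory' of paddle presses, which is omitted here) shows the element sequences a basic iambic keyer produces:

```python
def iambic_elements(dot: bool, dash: bool, count: int) -> str:
    """Element sequence produced while the paddles are held for
    'count' elements. One paddle repeats its element; squeezing
    both alternates dot-dash, as described in the text."""
    if dot and dash:
        return ''.join('.-'[i % 2] for i in range(count))
    if dot:
        return '.' * count
    if dash:
        return '-' * count
    return ''

print(iambic_elements(True, True, 6))   # → .-.-.-
print(iambic_elements(False, True, 3))  # → ---
```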
Reception was an altogether different matter. The primary goal was sensitivity, because power sources of the day were generally unimpressive, and for a long range transmission the wire resistance would be considerable. More sensitive receivers needed less current from the telegraph line, and could offer impressive battery savings and/or longer range. Both were important, because there was no way to amplify the signals, other than by using a repeater - essentially a receiver connected to contacts that could re-transmit the original Morse code with close to a zero error rate. This alone was remarkable!
The earliest receivers were nothing more than a couple of electro-magnets. When a 'dot' or 'dash' was sent from the key operator, the electro-magnet would close for the duration of the signal. This was used to mark a paper strip that was drawn through the system using a clockwork drive.
Figure 2 - Audible Sounder
Another common arrangement was the use of two sets of contacts on the Morse key. When the key wasn't being used, a secondary contact set (placed where the end stop is shown) connected the incoming line and battery supply to the receiver. When the remote key was operated, this would activate the receiver/ recorder unit. Operating the key would close the main contacts, sending power to the remote receiver, via the closed contacts in the remote key. A complete system (albeit greatly simplified) is shown below, including the dual contact key, and a simplistic representation of the receiving system. The paper tape was moved by a clockwork motor.
When the key is at rest, the rear contacts are closed, so the telegraph line connects to the receiver. When the key is operated, it connects the battery to the far end via the line, and to the far end's receiver. This allows two way communication, but only one station can transmit or receive at a time (half-duplex). An end-of-message code was typically used to indicate that the line was clear, so another operator could send a message.
+ +The very first 'sounders' were simply an electromagnet, as shown in Figure 2. In some early British systems, the deflection of a magnetic compass needle was used as a receiver (developed by Charles Wheatstone and others), but one's eyes are poorly suited to decoding visual cues. Our hearing is far more sensitive to short impulses, and can easily distinguish between a dot and a dash, even when sounded from a simple electromechanical sensor.
+ +However, it is far more convenient to have a permanent record of the message, and the receiver (known as the 'register' in the Morse system) used a paper tape to record the message. The tape's clockwork system could be activated remotely (no details have been found as to how this was done), so an operator could activate the receiver from the far end, then send the message. There did not appear to be a way to stop the tape again though, which had to be done by the operator at the receiving station.
+ +
Figure 3 - Complete Simplified Telegraph System (One End)
Some early attempts used pencils, but the stylus was more durable. Later versions used an ink pad or roller. If done today, a thermal transfer paper similar to that used in most labelling systems and cash register receipts would be the easiest and require the least maintenance, but such little luxuries were unknown at the time. Other telegraph systems did attempt to use anything from chemical reactions to primitive (by the standards of today) thermal transfer, but with the general lack of understanding of electricity at the time (and the low speed of both chemical and thermal detectors) they were never implemented in any commercial systems.
As mentioned above, it was possible to include a repeater (relay station) in the telegraph line, allowing for much greater distances than could otherwise be achieved. Although the repeater was common fairly early in the development of the systems, it was usually intended to 'amplify' the weak current from the line (influenced by high resistance) to drive the local register (receiving unit). The ability of the repeater circuit to act as a relay gave the name to the device we still know today as a 'relay'. A small current in the coil can switch a much greater current via the contacts. These early relays were the only equivalent to valves or transistors in the 19th century.

The relay station had its own battery supply, so as the signal was received by the relay coil, the contacts closed and delivered a current from the local battery, allowing the signal to travel much further than would otherwise be possible. Relay stations would require regular maintenance of course, but this was faster and less error prone than having an operator manually re-transmit the message.
Figure 4 - Telegraph Relay
When current passes through the coil, the steel armature is attracted to the electro-magnet's pole piece, closing the contacts. The armature is prevented from making contact with the contact support by means of an insulator. This was at a time when modern insulation materials were not available, and (apparently) ivory was used in some systems. The relay would generally be adjustable so the sensitivity could be controlled. Almost all of these early systems had adjustments for most of their parameters, because the principles weren't well known - even amongst those building the apparatus. Although Ohm's law was understood (to a degree, by some), this was pioneering work, so much of the equipment in use was barely beyond the stage of an experimental prototype.
This is shown fairly clearly when you look at photos of the original equipment. Today we expect to find closed magnetic circuits, where the iron core not only passes through the centre of the coil, but wraps around so the other pole is also close to the armature (the moving piece). It's not always clear, but most of the gear does use a closed magnetic circuit, generally with two coils on a 'U' shaped pole piece. However, some of the equipment of the mid 19th century seems to have used an open magnetic circuit, which means that more ampere-turns are needed for a given pulling power.

The concept of ampere-turns appears to have been almost unknown to many of those involved, and some believed that to get the best magnetic strength, the wire around the electromagnet had to be as large as possible. This led to some impressively large equipment, with decidedly unimpressive performance. However, coils wound with many turns of fine wire were difficult to make, because there were no suitable insulating materials for the wire. Most insulation consisted of cotton thread wound around the wire, usually in two or more layers, with each wound in the opposite direction. DCC (double cotton covered) wire is still available (why? - mainly for restoration of antique gear), but the cotton is often also used with insulating enamel, something unavailable to the pioneers of the telegraph.

You may notice from the drawings and schematics that there is no attempt to counteract the back-EMF generated by the receive coils when the current is interrupted. When these systems were devised, there were few people who really understood the concept of back-EMF, and no components existed to reduce it. Today we'd use a diode, but of course these didn't exist at the time. It was many years after the first equipment was developed before even resistors became available, and when they did, they were hand made using cotton covered resistance wire.
In academic literature of the day [ 11 ] the concept of 'induction' (back-EMF) was known, but wasn't understood to the extent that it is today. Measuring systems were minimal, so people relied on the distance that a spark might jump to evaluate the voltage generated by induction. It appears fairly likely that few of those who built or maintained the telegraph would have even been aware of the science of electro-magnetism outside of their own experiments. Much of the material of the day (ca. 1840) indicates that the study of electricity was in its infancy, and it remained poorly understood (even by the likes of Michael Faraday [ 12 ]).

A quick simulation shows that even a 300m length of 50 ohm coaxial cable will cause the back-EMF to be attenuated to a reasonable degree, so the transmission lines of the day would probably have limited the peak back-EMF voltages to a great deal less than you might expect. This is especially true because of the lossy nature of the transmission systems used, but I could not find any information about back-EMF and its effects during the early days of the telegraph. There are reports of linesmen suffering electric shock, but details are scant. In some cases, it would simply have been the result of the use of relatively high voltages, with systems sometimes operating at 100 to 150V to try to extend the range and counteract the line resistance.

One fascinating quote [ 11 ] highlights the issues faced ... "More damage is often done to the telegraph in a second by a thunder storm, than by all the mischievous acts of malicious persons in a whole year." Lightning arrestors and other protective measures were developed to minimise damage to equipment and operators. This was especially important in America, because violent thunderstorms are far more common there than in Europe, so it's no surprise that many of the lightning protection systems were developed in the US.
Prior to the discovery that gutta-percha (still used for root canal therapy in dentistry) made a good insulator, underwater services weren't possible because sea water is highly conductive. A rigid natural latex produced from the sap of various trees of the genus Palaquium, gutta-percha was commercialised in the mid 1800s. Underwater telegraph cables became possible after British suppliers started producing cables that were immune from attack by marine creatures (plant or animal). The first trans-Atlantic telegraph cable started service in 1858 and used gutta-percha insulation (amongst other protective coverings). This cable subsequently failed due (it's claimed) to high voltages being applied. It's not clear if this was due to inappropriate testing methods or an attempt to improve the transmission speed. Both claims exist, and it appears impossible to determine which is right.

By the beginning of the 20th century, people had a much greater understanding of the behaviour of electrical signals in a long transmission line. Speed improved from a claimed 2 minutes to transmit a single character (0.1 WPM - words per minute) to 8 WPM by 1866 or thereabouts. This was partly due to improved cable construction. By the early 1900s, transmission speeds improved to around 120 WPM as engineers discovered that electrical loading systems (coils, capacitors and resistors) could be applied to ensure that the sending and receiving systems matched the impedance of the cable itself. For more information on this particular topic, see Coaxial Cable.

Even the commonly used single suspended wire with earth return forms a transmission line once it's long enough. At the transmission speeds used at the time, the effects were minimal, but undersea cables had far greater capacitance per unit length than an above-ground system, and the effects of this weren't understood at the time. Today we know that a transmission line terminated with its characteristic impedance is close to flawless even at very high frequencies, but these concepts were unknown at the time. Experimentation was the only tool available.
Once radio (wireless) became mainstream, the growth of the wireless telegraph became an unstoppable force. The very early systems were extremely limited, using spark gap transmitters and receivers consisting of 'coherers'. The coherer was a primitive detector, relying on fine conductive particles in a sealed glass tube aligning themselves (cohering) to provide a low resistance path upon reception of a radio signal. A mechanical means of 'de-cohering' the device was required, typically a small 'clapper' as may be used by an electric bell. When the coherer became low resistance due to a wireless signal being received, this activated a solenoid, which in turn activated an arm that tapped the tube, restoring the non-coherent state of the particles within to await the next signal.

As expected, coherers were slow, and were never a truly satisfactory means of reception. Detection and distinction between dots and dashes of Morse code would have been a specialised skill, by listening to the sound produced by the decoherer as it constantly reset the coherer while a radio signal was present. There is little information available on this particular topic, so we must imagine that the Morse signals would have been heard as bursts of 'noise' from the decoherer resetting the device as the message was received.
Once John Fleming invented/ discovered the electron 'valve' (vacuum tube), detection became easier, but it wasn't until the invention of the first amplifying valve (the Audion) by Lee De Forest in 1906, followed by true (high-vacuum) triodes in 1913, that wireless became really viable. The Audion and high-vacuum triode created a flurry of activity that hasn't abated to this day. By the 1920s, wireless was well understood and broadcasts of popular music and news were becoming common.

Despite this, Morse code was still very much alive, especially for military applications. It was probably possible to operate an AM (amplitude modulated) transmitter in the field in 1918 or thereabouts, but the size and complexity of the equipment needed to transmit and receive the transmissions was such that it would have been impractical to try. This changed when miniature valves first appeared in 1938. Even during WWII, Morse code was widely used, with one of the most notorious encryption schemes ever seen appearing in the late 1930s - the German Enigma system.

Messages were first written, then encoded using the Enigma machine. The coded message was transmitted using 'Morse' code - albeit a modified version that suited the German alphabet. The encoded message made no sense to anyone who intercepted it - even if they had an Enigma machine themselves! If they didn't know which set of rotors was being used, and the initial setting for each individual rotor, the message could not be deciphered. The rotors were set at the beginning of each day to a pattern described in a code book, and each time a key was pressed, the coding changed. Do a web search if you want to know more - it is a fascinating (albeit very complex) topic, and doubly so if you look into the procedures used to break the code. I don't propose to cover this in any greater detail, but one thing that made the job at Bletchley Park easier (this is where Enigma was fully broken and decrypted) was the simple fact that the Enigma was unable to assign a plaintext (not encrypted) character to itself. For example, the letter 'A' could become any letter in the alphabet in the ciphertext - except 'A', and likewise for all other characters. In modern cryptology, this is considered an epic fail.
Other than for its entertainment value for enthusiasts, there is almost no Morse code used any more. For anyone wanting to learn, there are countless websites that have pre-recorded Morse samples that you can practice with, and the chart shown below makes it easy to get started with slow (typically no more than around 5 words per minute) Morse code. The chart should be printed out to make it easier to use.
Figure 5 - Morse Code Learning Aid
The aid shown above is easy to use, and can help you to learn Morse code. To use it, when you hear a dash ("dah") you move to the left and down, another dash means you move left again. A dot means that you move right. As each segment is heard, you simply move left or right, so ' -..- ' takes you left, then right, right again, then left. The letter is 'X'. There are a few slightly different versions of this chart, and I have tried to make this one as clear as possible. The full stop (period) and hyphen have been added, as has the bracket (parenthesis) - there is only one in the code, and it's up to the operator to decide which way it goes.
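The traversal described above is equivalent to a simple table lookup. As a rough sketch (in Python, which is not part of the original article), the standard International Morse alphabet can be used to decode a string of dots and dashes, confirming the '-..-' example:

```python
# International Morse code for A-Z; the printed chart encodes the same
# mapping as a tree (dash = move left, dot = move right).
MORSE = {
    'A': '.-',   'B': '-...', 'C': '-.-.', 'D': '-..',  'E': '.',
    'F': '..-.', 'G': '--.',  'H': '....', 'I': '..',   'J': '.---',
    'K': '-.-',  'L': '.-..', 'M': '--',   'N': '-.',   'O': '---',
    'P': '.--.', 'Q': '--.-', 'R': '.-.',  'S': '...',  'T': '-',
    'U': '..-',  'V': '...-', 'W': '.--',  'X': '-..-', 'Y': '-.--',
    'Z': '--..',
}
DECODE = {code: letter for letter, code in MORSE.items()}

def decode(symbols: str) -> str:
    """Decode space-separated Morse characters, with ' / ' separating words."""
    words = symbols.split(' / ')
    return ' '.join(''.join(DECODE[c] for c in w.split()) for w in words)

print(decode('-..-'))                               # the chart's example: X
print(decode('-- --- .-. ... . / -.-. --- -.. .'))  # MORSE CODE
```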
I've also shown a short example of code, along with the relative spacings of dots, dashes and spaces between characters and words. As noted earlier, a dot is 1 unit, a dash is 3 units, the space between characters is 3 units and between words it's 7 units. The length of a 'unit' depends on the transmission speed.
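These timings can be turned into a small calculator. The sketch below (Python, my own illustration) counts the units in a message; converting units to seconds uses the common 'PARIS' convention of unit = 1.2 / WPM seconds, which is an assumption not stated above:

```python
def element_units(code: str) -> int:
    """Units for one character: dot = 1, dash = 3, 1-unit gap between elements."""
    return sum(1 if s == '.' else 3 for s in code) + (len(code) - 1)

def message_units(morse: str) -> int:
    """Characters separated by ' ', words by ' / '; 3-unit character gaps, 7-unit word gaps."""
    total = 0
    for w, word in enumerate(morse.split(' / ')):
        chars = word.split()
        total += 7 if w else 0                          # gap before each new word
        total += sum(element_units(c) for c in chars)   # the characters themselves
        total += 3 * (len(chars) - 1)                   # gaps between characters
    return total

units = message_units('... --- ...')    # SOS
print(units)                            # 27 units
print(units * 1.2 / 5)                  # 6.48 seconds at 5 WPM (PARIS timing)
```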
There are also a number of 'prosigns' (procedural signals) used. These are mostly two letter codes that are sent without the normal character space, so are transmitted as if they were a single letter. The last two ('C L' and 'B K') may be transmitted as separate letters, with the normal inter-character space (the length of three dots) between them. Some references show them as being sent as a single stream, while others show them as two characters.
Prosign   Code         Meaning
AA        .-.-         New line (carriage-return + line-feed)
AR        .-.-.        New page
AS        .-...        Wait
BT        -...-        New paragraph
CT        -.-.-        Attention (important message)
HH        ........     Error (delete last word)
KN        -.--.        Invite a specific station to transmit
SK        ...-.-       End of transmission
SN        ...-.        Understood (also VE)
SOS       ...---...    International distress message

C L       -.-. .-..    Going off the air (clear)
B K       -... -.-     Break (back to you)
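As the table suggests, a prosign is simply its letters' codes run together with no inter-character space. This is easy to verify mechanically; the snippet below (Python, for illustration only) concatenates the standard letter codes and compares the result with the table entries:

```python
# Standard International Morse codes for the letters used in the prosigns above
MORSE = {'A': '.-', 'B': '-...', 'C': '-.-.', 'H': '....', 'K': '-.-',
         'N': '-.', 'O': '---', 'R': '.-.', 'S': '...', 'T': '-'}

def prosign(letters: str) -> str:
    # Run the letter codes together with no inter-character space
    return ''.join(MORSE[ch] for ch in letters)

# The table entries, for comparison
TABLE = {'AA': '.-.-', 'AR': '.-.-.', 'AS': '.-...', 'BT': '-...-',
         'CT': '-.-.-', 'HH': '........', 'KN': '-.--.', 'SK': '...-.-',
         'SN': '...-.', 'SOS': '...---...'}

for name, code in TABLE.items():
    assert prosign(name) == code
print('all prosigns match the concatenated letter codes')
```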
The recommendation from nearly everyone is that you learn Morse code by sound, and not as a written sequence of dots and dashes. Although there is no longer any requirement for anyone to learn Morse code, there will undoubtedly be those who want to learn just for the fun of it. There are countless applications (even today) where it could be useful, and this is especially true if you happen to think that Armageddon is on its way someday soon.
Almost all messaging is now digital, and this includes the land-line telephone - it's analogue only as far as the local exchange (central office), and from the far-end exchange to the home. Most businesses with more than a couple of phone lines connect to the network digitally, and the conversion to analogue may not take place until it reaches the telephone itself. Many 'cordless' home phones now use DECT (digitally enhanced cordless telecommunications), another digital protocol that has far greater security than earlier analogue cordless phones, and other (often proprietary) digital protocols are used by various manufacturers.
Many countries (including Australia) have deprecated the standard twisted pair telephone line altogether, or re-purposed the last few hundred metres to handle digital traffic only. Phone calls are made using VoIP (voice over internet protocol), which means that for most households, only the final metre of cable (from the broadband modem to the telephone itself) is analogue. Whether this is a good idea depends on many factors, but I know from personal experience that VoIP is grossly inferior to a fixed phone line. The latter still functions if there's a local blackout, but with the 'latest technology' you lose all non-wireless functionality if the power goes out. Some systems have battery backup to get around this problem, but most people have to use the mobile ('cellular') network whether they want to or not.

Mobile ('cell') phones with SMS provide greater connectivity than ever before, and there is no longer any need to use Morse code. However, it's an important part of history, and as such it has to be preserved. There is a good case for museums in particular to utilise some of today's technology to enable simple demonstrations of the technological triumphs of the past, with interactive displays rather than a few pieces of yesterday on a shelf behind glass, doing nothing.

The descriptions above do not include bipolar signalling (positive and negative voltages applied to the telegraph line or cable), nor the many variations of senders and receivers that were in common use. This is a short introduction only, and was never intended to be a complete reference work. The basic sender (key), receiver and relay are fairly detailed because it's necessary to show just how they worked. The 'register' (recording receiver) is included because it was such an integral part of the system.
There is a surprising amount of information on the Net covering Morse code, the various adaptations used for specific countries and the history of telegraphy. I encourage anyone who is interested to do a search, as some of the equipment used is of great historical interest, as are the inevitable arguments (and legal challenges) as to who did what and when. During the early days of electronics (because this really is the beginning of electronics as we know it), there were some epic battles between the various people involved. Some were very well known, but others not so much.

This short article is intended as an introduction, and as a recommendation for others to look into the subject. As with many of the early inventions and discoveries, it's inevitable that if they hadn't been invented by the people we know now, someone else would have done so. In many cases, someone else did invent things that are routinely attributed to others - after all, history is written by the victors in any altercation, but that doesn't make it true. Reference 1 is a long article, but it goes into some detail about the 'disagreements' between the protagonists, and also has a truly impressive list of references. Reference 6 has many photos of early Morse keys, sounders and receivers.

There is little doubt that Morse code and the equipment developed to transport messages signalled the beginning of the 'information age'. Although it's not often acknowledged, the communications industry was responsible for the vast majority of the things that we take for granted today. Once the telephone became popular, phone companies pushed the boundaries of what was possible. The transistor was the result of research at AT&T's Bell Laboratories - after that, electronics became a part of our lives that becomes ever more entrenched as we rely on better, faster and more ubiquitous technology. Communication still rules as one of the primary drivers of our advanced technologies.

It is educational to read the words of the 'ancients' (as it were) from the early days, just to learn how they perceived and understood (or failed to understand) principles that are considered to be basic knowledge by anyone even remotely connected with electrical equipment today. Many of the texts from the 1800s are available as free ebook or PDF downloads thanks to Google's efforts at digitising this material. We have free access to knowledge and research material that was difficult or impossible for most people to get at the time.
As you look into the history of written telecommunications, you find references to Baudot code, patented by Émile Baudot in 1874 (5 bit), which was the precursor of EBCDIC (Extended Binary Coded Decimal Interchange Code - IBM) and ASCII (American Standard Code for Information Interchange). EBCDIC is an 8 bit code, while ASCII is a 7 bit code (almost always stored in 8 bit bytes), and ASCII (or more usually the 'enhanced' version known as UTF-8 - 8-bit Unicode Transformation Format) is still used as the basis for most human-readable text used in computers and on the Net. The term 'baud' for serial communications speed came from Baudot.
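The relationship between these codes is easy to demonstrate. The short snippet below (Python, my own illustration) shows that plain ASCII text encodes byte-for-byte identically in UTF-8 (which is why the older code survives inside the newer one), and compares the sizes of the code spaces:

```python
text = 'MORSE CODE'
ascii_bytes = text.encode('ascii')   # 7-bit ASCII, one byte per character
utf8_bytes = text.encode('utf-8')    # UTF-8 of ASCII text is byte-identical

print(ascii_bytes == utf8_bytes)     # True: ASCII is a subset of UTF-8
print(max(ascii_bytes) < 128)        # True: every code point fits in 7 bits

# Code space sizes: 5-bit Baudot vs 7-bit ASCII
print(2 ** 5, 2 ** 7)                # 32 128
```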
Remember that the readily available knowledge that we now expect at our fingertips had its beginnings in the printing press (invented in 1440), but instant communication came from the electric telegraph, as the first method ever devised by humans to transmit information over thousands of kilometres in just a few minutes. So much has been achieved in such a short time, thanks to the efforts of early pioneers who didn't know a fraction of the information that we expect to find at a moment's notice today. One wonders what they would think of the Internet !
These references are in no particular order, so the first may be referenced towards the end of the article or vice versa. The reference numbers you'll find scattered through the article do point to the specific reference below. Some may not be referenced in the text at all, indicating that they have either simply been used as verification, or snippets of the info have been used in multiple places. I have tried to include all of the main reference material here, but it's also probable that some have been missed. If so, I apologise in advance.

To see some of the truly vast amount of information available on-line, do a search for 'electric telegraph'. This will lead you into some of the basics, but as you widen your search you'll discover just how much you never knew about telegraphy in general. In its day, the telegraph was a far greater leap into the unknown than the internet, because the latter was based on so many discoveries from the past.

Please Note: There are countless references that were used to double-check the validity of many claims made, and to extract a few finer points about the systems and how they worked. Not all have been included above, as the reference list could easily become unwieldy. For those interested, the list above is a good starting point, but it's surprisingly easy to look at ten different sites (and/ or books) and get ten different answers. It's up to the reader to determine what looks as if it might be real and what is obviously (or not so obviously) bogus. Historical information such as this can be notoriously difficult to verify. Much of the very early material was based on conjecture, because the principles of electricity (as we know them today) were still mysterious.
Elliott Sound Products - MOSFET Relays
This article concentrates on MOSFET relays for speaker protection - disconnecting a faulty amplifier from the speakers to minimise damage. However, they are increasingly used in industrial applications due to indefinite life and faster operation than electromechanical relays (EMRs). One thing you won't find is a miniature MOSFET relay that can handle the output of a typical 100W power amplifier. Of those that can handle the voltage and current, most are based on a TRIAC (bidirectional thyristor) or SCRs, and they are completely useless for speaker protection. It's unrealistic to expect a tiny SMD MOSFET relay to be able to handle ±50V or so at 13A or more, and that's what's needed for speaker protection, along with high power industrial processes.
There are many small MOSFET based SSRs available now, but you can have 'high' current of up to perhaps 2A or so, or high voltage (up to 600V), but not both. Most are based on a photovoltaic coupler (essentially a stack of miniature photo-cells), and they are generally fairly slow, although much faster than EMRs, and with no contact bounce. 'On' times vary from around 200μs to 2ms or so, with high-voltage, low-current devices being faster than those designed for low 'on' resistance (RDS on).
If you need to switch a few hundred volts at several amps, you have no choice other than to buy a very expensive commercial product, or build your own. For many applications, you may even want to consider a hybrid relay - a combination of an SSR for switching, and an EMR to carry the load current. These are covered in the article Hybrid Relays, and are ideal for many otherwise difficult loads.

Mains switching solid-state relays (SSRs) for AC have been around almost since the first SCRs and TRIACs became available. However, none of these early devices was suitable for use with audio signals, because of gross distortion around the zero-crossing point of the waveform. They also cannot switch DC, because TRIACs and SCRs rely on the current falling to zero to allow them to turn off. MOSFET based SSRs have existed from around 1984, when a patent was taken out by International Rectifier Corporation for a MOSFET circuit that could handle AC with very low distortion [ 1 ]. It is not known if this is the earliest example, but it's probably close.

There are any number of SSRs available that are suitable for DC, but comparatively few low-distortion types that can handle the high AC voltages and currents that are typical of high power amplifiers. Those commercial devices that might be electrically suitable will most likely do some serious damage to your bank account. There are many that can handle up to around 2.5A at voltages as high as 600V, but comparatively few that can handle the 30-40A or so that is needed for a high power amplifier driving low impedance loads.
While conventional relays can be used, they have a small problem ... when a high power amplifier fails and the output goes DC, there could be 100V or more with a load impedance of perhaps 4 ohms or less. Breaking 100V at 25A DC is a very difficult job for a relay, because the DC allows a substantial arc to be created across the contacts.

This arc is very difficult to stop, and the only way to actually protect the speaker is to earth the normally closed contact so that the arc is connected to the power supply common rail (earth/ ground). The relay will be destroyed, but the speakers will (probably) survive. Most mains rated electro-mechanical relays are limited to around 30V DC, but even with this seemingly low voltage it is still likely that the relay will be damaged if it ever has to protect the speakers.
+ +
Speaker Protection Relay Wiring
The above shows the wiring scheme that must be used to protect the loudspeaker. The earth connection is often neglected in 'protection' circuits shown on the Net, and the end result is that while it may happily pass your basic tests, it will likely fail when you really need it due to the DC arc.
Perhaps due to the known problems with electro-mechanical relays, there seems to be some interest in solid state relays on audio forum sites, but while some of the information is actually quite good, there are many misconceptions and often a failure to understand the things that can go wrong.
Note that the DC detector and control circuit (not shown - see Project 33 for an example) must be connected directly to the amplifier's output. If it's connected after the relay, fault induced DC will only be present when the relay is closed, so your speakers will be subjected to repeated pulses as the relay closes, DC is sensed, and the relay opens again. This process will continue until you switch the amp off. When the detector is connected to the amp output, the relay will never close because the fault condition is detected before the relay attempts to connect the speaker.
This article shows MOSFET relays for speaker protection, but there are countless uses for them in other applications, especially where high DC voltages are present, or for 'arcless' switching of AC power. When used as mains relays, great care is needed with all wiring and MOSFET selection, both for electrical safety and to ensure reliability under adverse operating conditions. A MOSFET relay offers several advantages over a more 'traditional' SSR (solid state relay) using a TRIAC or back-to-back SCRs. The biggest advantage is that you can control the switching speed to minimise EMI (electromagnetic interference), and that with the optimum choice of MOSFETs the voltage drop can be reduced. A TRIAC (or SCR) has a fairly consistent 1V RMS voltage drop, so dissipated power is 1W per amp of controlled current, regardless of the supply voltage. At 10A, a dissipation of 10W is normal. Use of MOSFETs with a low RDS-On means that this can be reduced, especially at lower voltages. A MOSFET relay also has no issues with minimum (holding) current, as do TRIACs and SCRs. You can control milliamps to amps with ease.
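Using the figures above, the difference is easy to quantify. A rough sketch (Python, illustrative only; the 20mΩ RDS-On figure is an assumed example, not a specific device):

```python
def triac_dissipation(i_rms: float) -> float:
    # TRIAC/SCR: roughly 1V RMS drop regardless of current, so P = 1W per amp
    return 1.0 * i_rms

def mosfet_relay_dissipation(i_rms: float, rds_on: float) -> float:
    # AC MOSFET relay uses two devices in series: P = I^2 * 2 * RDS-On
    return i_rms ** 2 * 2 * rds_on

print(triac_dissipation(10.0))                # 10.0 W at 10A
print(mosfet_relay_dissipation(10.0, 0.020))  # 4.0 W for a pair of 20mΩ MOSFETs
```

Note that the TRIAC's loss scales linearly with current, while the MOSFET relay's loss scales with the square of current, so the advantage shrinks as current rises unless lower RDS-On devices (or parallel devices) are used.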
As of December 2019, there's a new option. Section 10 has the info on the latest (and so far the best by a long way) MOSFET driver, specifically intended for MOSFET relays. With the introduction of the Si8751/2, the game has changed. While it's only available in an SMD package, that's not an insurmountable problem - see Project 198 to see the complete design.

Although I have shown IRF540N MOSFETs throughout, this is more a matter of convenience than anything else. While these will be suitable for some lower powered amps, they are not suited to very high current. The claimed RDS-On is acceptable (77mΩ for the 540, 44mΩ for the 540N), but there are much better MOSFETs available now, having RDS-On below 20mΩ. I leave it as an exercise for the reader to select MOSFETs that are suited to the voltage and current available from the amplifier to be switched. There are many to choose from, and it would be rather pointless for me to try to list all those that you may (or may not) be able to get easily where you live. You can use multiple smaller units in parallel, which may work out cheaper. The lower the value of drain-source resistance, the lower the distortion contributed by the circuit, and there's less power dissipated (and therefore less heat generated).

The general idea of an AC SSR is shown in Figure 1.2. Two N-Channel switching MOSFETs are used, with their sources and gates joined. The signal and load are connected to each of the drain terminals - it doesn't matter which is which, because the 'switch' is symmetrical. However, bear in mind that there are two MOSFETs in series, so the effective RDS-On is double that for a single device.
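The effect of the series connection (and of paralleling devices, as suggested earlier) on relay resistance and voltage drop can be sketched as follows (Python, illustrative; the 13A figure is the speaker-protection current mentioned earlier, and the 20mΩ part is a hypothetical example):

```python
def relay_rds_on(rds_on: float, n_parallel: int = 1) -> float:
    # Two MOSFETs in series per relay; paralleling n legs divides the resistance by n
    return 2 * rds_on / n_parallel

# IRF540N (44 mΩ claimed) vs a single and a paralleled pair of 20 mΩ devices
for rds, n in [(0.044, 1), (0.020, 1), (0.020, 2)]:
    r = relay_rds_on(rds, n)
    print(f'{rds * 1000:.0f} mΩ × {n}: relay = {r * 1000:.0f} mΩ, '
          f'drop at 13A = {13 * r:.2f} V')
```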
+ +With no voltage between the gate and source terminals, the MOSFETs are off, so no current flows. Depending on the MOSFETs used, they will conduct fully when the gate-source voltage exceeds around 7 Volts. It is always a good idea to provide 10-12V gate drive to ensure that they always turn on fully. The zener diode you see is to protect the delicate insulation between the gate and MOSFET channel.
+ +The gate insulation is typically rated for a maximum of around ±20V. Even a little bit of stray capacitance or resistance (moisture on the PCB for example) can easily allow the voltage to rise to destructive levels because of the very high impedance, and the zener is mandatory. Even drain-gate capacitance can cause problems if the zener diode isn't included.
+ +While the concept is very simple, in practice there may be quite a lot of additional circuitry needed because the control circuit must generally be completely isolated from the switching MOSFETs. Two complete circuits are needed for stereo, even if they are driven by the same detector. This is because the two pairs of MOSFETs cannot be connected together in any way, other than sharing a common control drive circuit such as a dual optocoupler or miniature double pole relay.
Each MOSFET's voltage should be rated for at least 25% more than the supply rails of the amplifier with no load. This is due to the way the circuit works, and because of the possibility of instantaneous back-EMF from the speaker or crossover coil when the DC fault current is suddenly interrupted. It may be useful to include a MOV (metal oxide varistor) across the SSR switch terminals, or use a capacitor 'snubber' to reduce the likelihood of any destructive voltage spike.
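The 25% rule of thumb is trivial to apply. A minimal sketch (the rail voltages are examples only):

```python
def min_mosfet_voltage(rail_voltage, margin=0.25):
    # Drain-source rating: the (no load) rail voltage plus a 25% safety margin
    return rail_voltage * (1 + margin)

v100 = min_mosfet_voltage(100.0)  # ±100 V rails -> at least 125 V devices
v70 = min_mosfet_voltage(70.0)    # ±70 V rails  -> at least 87.5 V devices
```

Remember that 'no load' rail voltages are always higher than the loaded figure printed on the schematic, so measure before you choose.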
When MOSFETs fail, they almost invariably fail short-circuit (like most semiconductors), and it is conceivable that a failure could go entirely unnoticed until your speaker catches fire. It is essential to make sure that failure is rendered highly unlikely, or that some kind of test process be incorporated (which adds further complexity of course). Quite obviously, a conventional relay can fail too, but they are generally extremely reliable and have no sensitive electronic bits in them. However, expect the contacts to melt if you try to break a high DC fault current - especially with voltages above 30V DC.
Figure 1.2 - Basic MOSFET Relay
What we need to activate the MOSFET relay is a floating DC source. It must be electrically isolated from the amplifier's speaker output (and with high impedance) or it will either be damaged by the amp, or will damage the amp. For simplicity, the DC source is shown as a 9V battery (discussed further below). Then the DC is connected and disconnected as needed to switch the relay on and off (shown above by a switch). The switching function can be implemented in many ways, including miniature relays, opto-isolators (either LED + LDR or LED + photo-transistor), or remotely turning the gate supply on and off by some means. The zener is used to ensure that the voltage is kept below that which may damage the gate's sensitive insulation.
Figure 1.2 shows the general form of a MOSFET relay, using a 9V battery as an example only. While IRF540N MOSFETs are shown, you must use devices that are suitable for the voltage and current to be controlled. This general circuit arrangement will work with millivolt signal voltages, all the way up to 230/120V mains with the right devices.
If you have ±100V supplies, the MOSFETs should be rated for at least 120V, as this provides a comfortable safety margin. You can add resistors in parallel with each MOSFET, which reduces the effects of stray capacitance and ensures that your safety margin is maintained. 100k is a good place to start, but this isn't strictly necessary (especially with the relay circuits shown further below). Excessive voltage is most likely with the 'charge coupled' circuit in Figure 4.1.
Capacitance from the speaker output to earth must also be minimised, or there is a risk that the amplifier may oscillate. Ideally the floating supply should be isolated from any stray capacitance by series resistors. These damp the effect of the capacitance and render it harmless. Where an output coil is fitted to isolate the amp from speaker cables and other stray capacitance, the SSR should be between the coil and speaker terminal - never between the amp and coil.
Note that the MOSFET switch is completely bidirectional, and although it may seem that it must introduce considerable distortion, this is actually not the case. When the switch is closed and the MOSFETs are biased on, the only voltage that appears across the pair is due to their on resistance (RDS-On). With suitable devices, this resistance is very low and reasonably linear. Linearity is not as good when the 'switch' is off, but that's of little consequence. Bear in mind that any series resistance reduces damping factor, so if you happen to think that very high values are essential, you may be disinclined to add a circuit that adds resistance.
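The damping factor penalty is easy to quantify. The sketch below assumes an 8 ohm load, a 10mΩ amplifier output impedance and a pair of 44mΩ IRF540Ns - all illustrative numbers, not measurements:

```python
# Damping factor is simply load impedance over total source impedance.

def damping_factor(z_load, z_source):
    return z_load / z_source

amp_alone = damping_factor(8.0, 0.010)             # 800 with no relay
with_ssr = damping_factor(8.0, 0.010 + 2 * 0.044)  # ~82 with two 44 mΩ devices
```

A drop from 800 to about 82 sounds dramatic on paper, but is inaudible in practice; still, it shows why low RDS-On (or paralleled devices) matters if you care about the number.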
The choice of suitable MOSFETs is huge - so much so that I'll only suggest a couple of types for consideration. A popular and inexpensive part is the IRF540N. It's rated at 33A with a voltage rating of 100V, so it can be used with supply voltages up to about ±70V. Another worth considering is the IRFP240, 200V and 20A. RDS-On is higher than desirable, but 2 or more can be paralleled to reduce that. There are many others, and I leave it to the reader to find a device that suits the purpose and the budget. The total series resistance will be double the RDS-On of each MOSFET. With an amp current of (say) 20A peak, there will be a loss of 1.76V peak (1.25V RMS) across the relay, and a total power dissipation of around 17.6W with a continuous sinewave at that current (far less with typical programme material).
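Cross-checking those figures for a pair of 44mΩ IRF540Ns (note that the dissipation figure assumes a continuous sinewave at the full 20A peak; real programme material averages far less):

```python
import math

r_total = 2 * 0.044        # two 44 mΩ devices in series -> 88 mΩ
i_peak = 20.0
v_peak = i_peak * r_total  # 1.76 V peak across the relay
v_rms = v_peak / math.sqrt(2)                     # ≈ 1.24 V RMS
p_sine = (i_peak / math.sqrt(2)) ** 2 * r_total   # ≈ 17.6 W, continuous sine
```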
Note that while it would be very convenient (and easy) to use a battery as shown above, that would be a really bad idea. Even though the MOSFET gates require minimal current, the battery will eventually discharge (via the resistor, which cannot be omitted) to the point where a significant voltage will appear across the MOSFETs because they are not switched on hard enough, and this will cause severe overheating and gross distortion. As an example, the circuit shown above has a distortion of 0.013% with 28V RMS applied (a 100W/ 8 ohm amp at full power).
Should the MOSFET gate bias voltage be reduced to 5V, the maximum output is dramatically reduced, and distortion becomes excessive at any level above around 50-60W. In addition, the MOSFETs will overheat badly, because normally they only need a very modest heatsink (if any at all). Once there is a significant voltage across them and they are passing current, they will dissipate power.
Having ruled out using a couple of 9V batteries (at least from a purely practical perspective), we have to find an alternative way to provide the voltage needed to switch the MOSFETs on. Switching them off is easy - just take away the voltage. Some of the options are described in the following sections.
Note that if you use a cap across the relay terminals as shown in following examples, there will be a small signal current that will be audible with high sensitivity speakers if the MOSFET relay is used for muting. Provided the 'clamp' diodes are used, the cap can be omitted, or you can use a MOV for protection. If used, the MOV must have an RMS voltage rating that is less than the rated breakdown voltage of the MOSFETs, but greater than the amplifier's RMS output voltage. Given that MOV devices have a rather broad tolerance and are only available with a limited range of voltages, this makes selection rather difficult.
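The MOV selection window can be written down as a simple constraint check. The voltages below are examples (a 100W/8Ω amp produces about 28V RMS; 100V MOSFETs are assumed):

```python
def mov_suitable(mov_vrms, amp_vrms_max, mosfet_rating):
    # The MOV's RMS rating must exceed the amp's maximum RMS output,
    # but stay below the MOSFET breakdown voltage.
    return amp_vrms_max < mov_vrms < mosfet_rating

ok = mov_suitable(50.0, 28.3, 100.0)   # a 50 V RMS MOV fits the window
bad = mov_suitable(25.0, 28.3, 100.0)  # would conduct on programme peaks
```

With MOV tolerances of ±10-20%, check that the worst-case limits still fall inside the window, not just the nominal figure.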
Several manufacturers make photo-voltaic MOSFET drivers that seem ideal (they use an infra-red LED and a bank of photovoltaic diodes or tiny 'solar cells' to generate the gate voltage) [ 2 ]. While they can be obtained fairly cheaply (less than $10 if you find a supplier), a few problems exist. They are ...
The final issue is the one that is likely to cause some grief, because most have an output current that's less than 100µA, with some below 10µA. This means that the MOSFETs cannot be switched quickly (on or off), so peak power dissipation may be unacceptably high during switching. Remember that all MOSFETs have a gate-source capacitance that must be charged and discharged when the MOSFET is switched on and off. Although this must be considered, it is still possible to get switching times in the order of a few milliseconds, and this will generally be considered acceptable.
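A back-of-envelope estimate of the switching time follows directly from total gate charge divided by the available current. The figures assumed below (roughly 70nC total gate charge, typical of an IRF540N-class device, and 50µA from the coupler) are illustrative:

```python
q_gate = 70e-9   # total gate charge per device, coulombs (datasheet typical)
i_pv = 50e-6     # photovoltaic coupler output current, amps
t_switch = 2 * q_gate / i_pv   # two gates in parallel -> ≈ 2.8 ms
```

This matches the "few milliseconds" quoted above; a 10µA coupler would take five times as long, which is where the trouble starts.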
Figure 2.1 - Photo-Voltaic MOSFET Driver
The arrangement shown above is fairly typical of the general scheme, and will work very well provided the optimum photovoltaic optocoupler can be found at a sensible price. Ideally, you will need a photovoltaic opto that can provide at least 50µA or switching times become embarrassingly slow. In some data sheets (and included above) you will see a JFET used to speed up the MOSFET's gate discharge and turn-off time. As shown, turn-off is almost instantaneous.
Note the pair of 'catch' diodes (D2 and D3) that connect to the amp's supply rails. (These diodes are also included in other drawings, as it is very important that they be included.)
This arrangement can also be seen if you have a look at the Vishay VO1263AB data sheet, but they use a P-Channel JFET. It is pretty much mandatory to include the JFET if you choose to use the photovoltaic isolator circuit, unless you use the circuit shown in Figure 8.1. In other circuits you might come across, a high value resistor (~10MΩ) is placed across the isolator's output, but this has a much longer turn-off time (perhaps 100ms or even more, depending on MOSFET gate-source capacitance). This will almost certainly cause excessive peak dissipation in the MOSFETs and lead to failure.
The opto's LED typically needs to be driven with around 10-50mA to work, depending on the device used. This current is easily supplied by the speaker DC detector circuit. Project 33 can do the job easily.
In case you are wondering, the JFET circuit shorts the MOSFET gate to source when there is no current from the opto. When the opto is active (supplying current), a voltage is developed across R2 that biases the JFET off, so it does not draw any current. With 50µA and a 2.2M resistor, the JFET is biased fully off. Because of the wide parameter spread of JFETs and photovoltaic isolators, you may need to experiment with the value of R2 to ensure reliable switching.
It is important to understand that there will be some voltage drop across R2, sufficient to bias the JFET off. This voltage is unavailable to the gates of the MOSFETs, so the already limited voltage from the coupler is reduced a little more. This may prevent the MOSFETs from conducting fully - a highly undesirable outcome. The extra resistance also means that the MOSFETs turn on more slowly than they otherwise would. The difference is not great, but is easily measured.
Using a small transformer is a good solution, and it only requires a simple rectifier and minimal filtering to drive the MOSFET gate. Since the transformer can easily supply 10mA or more, switching times are dramatically reduced. Unfortunately, a transformer coupled circuit also needs a driver circuit to provide a signal at the secondary (or secondaries). This should operate at 50kHz or more to minimise the size of the transformer(s). Not really a problem, but there's more circuitry needed which takes up potentially valuable space.
Figure 3.1 - Transformer Based MOSFET Driver
An example is shown above (just one of a great many possibilities), based in part on the original patent [ 1 ], but somewhat simplified. While it's not overly complex, there are nuisance issues to solve, such as finding a suitable transformer. It will generally be easier to find transformers with a single secondary winding, so two will have to be used for a stereo system. If a single transformer with dual secondaries is used for a stereo amp, the insulation between the primary and secondaries, and between the two secondaries, has to be able to withstand the full amplifier supply voltage. This means that if the amp uses ±60V supplies, the insulation has to be rated for at least 120V. It would be wise to ensure that all inter-winding insulation is rated for a minimum of 500V.
The drive signal typically needs to be a squarewave of no less than 15V peak-peak. The advantage of using a voltage doubler is that there is a small parts saving - the caps that form part of the doubler also smooth the DC. Without the doubler, either the drive voltage has to be increased or the transformers have to be step-up types to obtain enough gate voltage. There are a (small) number of new ICs that integrate the isolated coupler into the IC, but these are fairly new and may not be available yet.
Note that the diodes (D1 and D2) must be high speed types, preferably Schottky or at least 'ultra-fast' types. Normal diodes are far too slow and will cause very high losses in the rectifier - so much so that it may not even work.
The drive oscillator can be almost anything you like, but as noted on the circuit diagram above, you need at least 15V P-P drive voltage, at a frequency of around 50kHz. Current is fairly low at 45mA RMS for a transformer with 500µH primary inductance. Depending on the transformer you use, the current may be somewhat higher. There is no point trying to specify particular cores and formers, as their availability is highly variable - parts I can get here may be unavailable elsewhere and vice-versa.
Figure 3.2 - Typical Oscillator
The oscillator shown above is the simplest possible arrangement using a 555 timer, but is perfect for this application. The output signal is close to a squarewave, and any small variation in duty cycle is handled by the capacitor feeding the transformer. This prevents any DC magnetic flux build-up that may cause the transformer to saturate. There are countless other oscillator designs that will also work, but few that are quite as simple as the one shown.
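For a 555 astable, the standard data-sheet approximation gives the operating frequency. The component values below are assumptions chosen to land near 50kHz, not the values from Figure 3.2:

```python
def astable_frequency(ra, rb, c):
    # Standard 555 astable approximation: f = 1.44 / ((Ra + 2*Rb) * C)
    return 1.44 / ((ra + 2 * rb) * c)

f = astable_frequency(1_000, 6_800, 2e-9)   # 1k, 6k8, 2nF -> ~49 kHz
```

Making Rb much larger than Ra also pushes the duty cycle towards 50%, which helps keep the coupling cap's job easy.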
When the CTRL line is open circuit or high, the oscillator runs, and gate voltage is available to the MOSFET relay which turns on. When the CTRL line is pulled low, the oscillator stops and the MOSFETs also turn off once the gate caps discharge. It is possible to incorporate additional circuitry to ensure the relay turns off very quickly, but in reality anything up to a few milliseconds will be alright in most cases. A simulation tells me that as shown, it will switch off in under 1ms.
I tried an Ethernet transformer, with three windings in series, and having a theoretical inductance of ~250μH. Driving it with a 10V peak squarewave, I obtained an output of 15V DC using the doubler circuit shown next. The turn-on time is about 4μs, and turn-off time was measured at 25μs with a 2k load. This is significantly faster than any other option, and the high drive current will turn on the main MOSFETs more quickly than most other circuits. Even this can be improved, but only at the expense of greater complexity.
Figure 3.3 - Typical Pulse Transformer & Test Rectifier
The values are those I used (they were conveniently to hand when I ran the test). There's plenty of latitude, so you don't need to replicate the exact values I used. The transformer is a 23Z90 Ethernet pulse transformer, but anything similar will do nicely. The drive signal was directly from my function generator. There are many options for the oscillator, including something as basic as a CMOS hex Schmitt trigger (40106 or similar). The transformer is tiny - 11mm long, 6.8mm wide and 5mm high (excluding pins). The transformer will work down to 250kHz, but its low inductance starts to become a problem at lower frequencies. The drive current is about 26mA RMS (roughly 50mA peak), so it's a fairly easy load, even for CMOS ICs. The current can be reduced by increasing the value of R1, but unless C2 and C3 are also reduced, the turn-off time will increase.
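The quoted drive current is dominated by magnetising current, which can be sanity-checked with the sinusoidal (fundamental-only) approximation I = V / (2πfL). Treating the 10V squarewave as 10V RMS, and assuming 250kHz operation with the ~250µH winding mentioned above:

```python
import math

v_rms = 10.0     # a 10 V peak squarewave has a 10 V RMS value
f = 250e3        # assumed drive frequency, Hz
l_pri = 250e-6   # primary inductance, H
i_mag = v_rms / (2 * math.pi * f * l_pri)   # ≈ 25 mA RMS
```

This lands close to the ~26mA measured, so the approximation is good enough for choosing a driver.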
You may also be able to use small 'output' transformers intended for low power transistor amplifiers (portable radios for example). These provide another option - just use the core and bobbin. Remove the existing primary and secondary, and wind new ones by hand. The inter-winding insulation layers can easily be improved to suit the new requirements. We are not overly concerned with efficiency, because the power needed is negligible. Unless you have access to small ferrite cores, this is likely to be the cheapest option.
As an alternative to using discrete circuits, it is also possible to use miniature DC-DC converters to provide both isolation and gate drive. This is a more expensive option, and details are provided below in Section 5. While these will cost a little more than a full DIY approach, they are very easy to implement, needing a minimum of additional parts.
There is the option of doing away with the transformer(s), and simply using low value capacitors to couple a high frequency AC signal to a rectifier circuit that then drives the MOSFET gates [ 3 ]. This arrangement is also known as a 'charge-coupled' circuit, and can use the same oscillator as shown above. Although the data sheet says that a 555 timer can operate at a maximum frequency of around 500kHz, I wouldn't be happy with it running that fast. Up to 250kHz should be fairly safe though.
Figure 4.1 - Capacitive-Coupled MOSFET Driver
Like the transformer solution, an oscillator at 50kHz or more is needed, and the coupling caps are so small that they will pass very little signal in the audio range. At the high switching frequency, the caps are almost a short-circuit, and can fully charge the gate drive circuit within a couple of milliseconds. Current is quite low though, depending on the switching speed and capacitor value. High speed Schottky diodes are essential, as I would normally expect that switching frequencies well in excess of 100kHz be used. The circuit shown above can supply around 300µA, but can still switch MOSFETs on or off in a few milliseconds. The capacitors used should be rated for at least 600V DC.
The primary disadvantage of using capacitive drive is that the amp's output must be limited to no more than perhaps 25kHz or so. Should the amp decide to oscillate, there is a real chance that the capacitive driver circuits will be damaged. This will happen if any amplifier output frequency (intended or accidental) is high enough to pass a signal back from the speaker line to the driver circuit.
In addition, the voltage across C2 increases with increased amp output frequency, because the caps (especially C1) pass some of the signal and this assists the charging process. The only way to avoid this is to use a higher oscillator frequency and a smaller value for C1 and C3. For example, with an oscillator frequency of 500kHz and C1 and C3 reduced to 470pF, an amplifier signal of 25kHz at full power causes no increase in the normal voltage developed across C2. Also note that although the RMS drive current is only around 5mA, the peak value is over 30mA with the values shown.
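The frequency separation can be seen by comparing capacitive reactance at the switching frequency with reactance at the top of the audio band, using the 470pF / 500kHz example above:

```python
import math

def xc(c, f):
    # Capacitive reactance: Xc = 1 / (2*pi*f*C)
    return 1 / (2 * math.pi * f * c)

x_drive = xc(470e-12, 500e3)   # ≈ 677 Ω at the switching frequency
x_audio = xc(470e-12, 25e3)    # ≈ 13.5 kΩ at the top of the audio band
```

The 20:1 reactance ratio is exactly the frequency ratio, which is why raising the oscillator frequency (rather than the cap value) is the right way to get more drive without letting the audio signal back-feed the charge circuit.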
Switching times are passable. While turn-on is quite fast at about 1ms, turn-off is rather lethargic - it takes just over 3ms for the MOSFETs to turn off with the values shown, so dissipation under fault conditions will be rather high. Although turn-off time can be improved by reducing the value of R3, this demands a higher drive signal (either higher voltage or frequency).
For the reasons described above, I do not consider this a usable approach for an audio amplifier. There are too many factors that make it unsuitable. It can be used for switching mains (50/60Hz) without too many problems though. In that case, C1 has to be a Y-Class safety rated capacitor to ensure electrical safety. You will need to experiment to get reliable switching, and it may be necessary to reduce the capacitor values (C1, C3) as well as R3 to ensure that the mains waveform can't provide a charge into C2. Note that R2 can be omitted (replace with a short-circuit).
This is probably one of the least desirable ways to make a MOSFET relay, but with care it can be made to work quite well.
The final option is to use conventional small power supplies that can have their outputs fully floating. A small transformer with dual secondaries is one possibility, but there is a real risk that the insulation between windings will be unable to withstand the output voltage swing from powerful stereo amplifiers. When doing some initial tests, I used a 12V DC switchmode plug-pack (aka 'wall-wart') as the voltage source (it's actually built-in as part of my workbench system, but that's irrelevant).
A small 50/60Hz transformer with dual windings can be used, having a conventional rectifier and filter cap on each output. As with the transformer drive approach, inter-secondary insulation has to be up to the task. This is actually rather unlikely, so it's hard to recommend this approach - it's much safer to use two transformers, but that gets bulky and rather costly. The DC to the MOSFET gates is simply switched using optocouplers with transistor outputs as demonstrated in Figure 6.1.
This kind of approach certainly works, but the cost and space is such that you'd be a lot better off financially by using miniature DC-DC converters. It's hard to recommend the use of separate mains powered transformer supplies as it is somewhat clumsy, physically large and comparatively expensive. However, it does offer a simple and easily implemented solution, with the minimum number of electronic bits to fail.
Commercial DC/DC converters are now readily available for well under AU$10.00, and are pretty much ideal. They have high isolation voltage, and have power ratings as low as 1W. This is as much as you'll ever need to power a pair of MOSFET gates (by a good margin). An example is the Murata MEU1S1212ZC, a 12V to 12V converter. At 6.1 × 8.3mm (width × length) and 8mm high, it is small enough to be easily incorporated into almost anything. There are many examples, but most are a little larger than the Murata unit. With 12V input versions needing less than 20mA (no load) input current, there's no strain on auxiliary power supplies.
Figure 5.1 - DC-DC Converter Gate Bias
This is a fairly elegant solution, and allows for fairly rapid turn-on and turn-off, with a well defined voltage available from the DC-DC converter. A complete relay (one channel only) will cost less than $25 in parts (not including a heatsink for the MOSFETs), but is capable of handling high current, and it can be adapted for any number of other tasks, not just as a speaker relay. The DC-DC converters are small enough to fit almost anywhere, and they typically offer at least 1kV isolation between input and output. However, note that this is usually the test voltage, and the operating voltage is far lower. You absolutely cannot use these for controlling mains voltage unless the isolation working voltage is rated for 250V AC or more.
There is no requirement for power to be available full-time, because if the converter has no power the relay is off by default. Preferably, you'd also include an optocoupler, which is then used to turn the MOSFET relay on (and off). This is the safest way to wire the circuit. This is probably one of the better (and more flexible) solutions, as it can easily be adapted to many different applications. There isn't much info available on how quickly these converters drop their output voltage after input power is removed, but I ran a test using a 2.7k load resistor and the voltage collapses to (near) zero in a little over 10ms. An optocoupler can reduce that to less than 1ms.
This arrangement perhaps doesn't really look like it could work, but that's an illusion. By taking a diode blocked resistive feed from the positive supply, the DC is stored in C1 and because of the high impedances will hold up well even at low frequencies. C1 can't discharge back through the resistor when the amplifier's output voltage swings fully positive because the diode prevents this. Note that there is some modulation of the supply, but this is smoothed by the zener as it simultaneously protects the MOSFET gates.
The zener is absolutely essential in this arrangement (but should always be used anyway), because without it the voltage can rise to the full DC supply - gate destruction is a certainty. R4 is optional but recommended. The cap will charge whether it's there or not, as long as the amplifier or speaker remains connected (the MOSFETs' internal diodes provide the current path).
Figure 6.1 - Supply Rail Gate Bias
As shown, the supply resistor (R2) does provide a small DC offset current to the speaker when the SSR is turned off, but at less than 2mA with 60 volt supplies it can be ignored. This is by far the simplest way to obtain the necessary DC to keep the MOSFETs turned on. To switch them off, you may use an opto-coupler as shown, or even a small relay can be used. Note that R2 and R4 are suitable for supply voltages up to around ±50V. Higher values will be needed if the amp uses higher voltage supply rails. R4 can be connected to the -ve supply instead of earth if desired, but there's little point - the circuit won't work any better by doing so.
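As a hypothetical check of the off-state offset (the actual value of R2 isn't stated here; 33k is an assumed figure chosen to match the quoted numbers):

```python
rail = 60.0            # +60 V supply
r2 = 33_000            # assumed bias feed resistor value (hypothetical)
i_offset = rail / r2   # ≈ 1.8 mA, consistent with 'less than 2 mA'
```

Whatever the real value, scaling R2 (and R4) with the rail voltage keeps both the offset current and the resistor dissipation in check.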
This circuit has the advantage of great simplicity compared to the other methods described. There is no need for an oscillator or transformers, no rectifiers or high speed diodes, and no side issues with high frequencies. Because of the continuous supply and use of an optocoupler, the turn-off time is also very fast. Turn-on speed is determined by the value of R3 and the MOSFET gate-source capacitance. We don't need sub microsecond switching, and in most cases the values shown will be more than acceptable.
It's also worth noting that the circuit doesn't actually have to be powered from the amp's supply rail. Any positive voltage source of 15V or greater is enough to allow the relay to turn on and remain on until the optoisolator turns it off again. It may be helpful if R2 is reduced to suit the lower voltage - about 22k is fine, but you might need to experiment a little. The second 'feed' resistor (R4) should be the same value. You may need to use 1W resistors with high power amps.
+ +++ ++
+
Note, however, that there are contra-indications to this technique. When used as shown and in the 'off' state, there is a small charging current that is rectified by the diode D1 and the MOSFET's intrinsic internal diode. When the cap is used in parallel, this tends to swamp the very small but highly distorted leakage current that flows each time the diodes conduct. While R2 (the bias feed resistor) does reduce the noise, you will hear a low-level distorted signal across the speaker. The capacitor (C2) tends to swamp this to a degree, but that allows even more signal to pass.
None of the above affects the relay's ability to disconnect the speaker if DC is detected, but is something you need to be aware of. The distortion component of the muted signal is especially audible if you choose to use a feed voltage that's less than the full amplifier supply voltage, and you have very sensitive speakers such as horn compression drivers. For this reason, the MOSFET relay is not really suitable as a signal mute - this should be done at the amp's input or from the mixing desk.
BEWARE! - the relay's default state is ON! The external circuitry turns the relay off, but if the supply to the detection circuit is not present before the amplifier supply rails start to rise, DC can be fed to the speaker until such time as the detection circuits function and disconnect the load. This is easily circumvented by some additional circuitry or by leaving the detector permanently powered ... with the proviso that the amp cannot be turned on at all if the relay supply is not present!
For example, you could use P39 (soft start circuit), and use its power supply to power the detector (such as P33 which powers the optocoupler), as well as the soft start. While this adds some complication, a high power amp needs soft start anyway, so it's not necessarily a big deal.
It is possible to design the overall circuit so that the power supply constraints are reduced. Instead of placing the MOSFET relay in series with the amp's speaker output, simply connect it between the speaker common terminal (normally PSU earth/ ground) and the actual PSU earth bus. Now the entire circuit has one terminal that is earth referenced, which reduces the isolation requirements between the separate power supplies. However, when switched off, the centre-tap between the two MOSFETs can easily reach a voltage that still demands good insulation of any floating supply (roughly 1/2 the +ve supply voltage). The diode shown in Figure 6.1 (in series with R2) is not needed in the earth-referenced circuit shown below, because C1 cannot discharge when the amp's output swings positive - the junction between MOSFETs is at (close to) zero volts when the MOSFET relay is on.
This circuit doesn't actually need the optoisolator, and it can be used with a couple of transistors to provide gate voltage. However, if done like that it ideally needs a negative supply as well as the positive supply. 'Off' performance is improved, but that doesn't matter if it's used as a speaker protection relay (allied with a modified version of Project 33 for example). The default state for the relay is ON, so the external circuitry is used to turn it off.
Figure 7.1 - MOSFET Relay Circuit In Earth (Ground) Line
Needless to say, this method cannot be used with BTL (bridge tied load) amps regardless of bias supply type, because each side of the speaker is driven by a separate amplifier, operating 180° out-of-phase with the other. Both speaker terminals are therefore 'live' with the amplified signal, so a fully floating system is definitely required. Even if you do decide to connect your MOSFET relay in the earth end of the speaker (i.e. the speaker return), I still recommend that the power supplies are properly isolated or you may have unforeseen problems (assuming that you use one of the other methods shown, not the one in Figure 6.1). I don't know what they might be, because they are unforeseen.
Another potential issue is the added resistance in the speaker line, and that will reduce 'damping factor' (assuming that you consider it to be important) and output power. Any voltage and current combination that appears across/ through the MOSFETs also causes heating. For these reasons, using MOSFETs with a very low RDS-On is essential. The lower this resistance, the less distortion the circuit contributes as well.
The circuit shown is not perfect, and it will probably let a small 'leakage' current through with negative output voltage. The simulator says about 5mA or so, which is audible (200µW into 8Ω) but is unlikely to cause any issues. Attenuation is 40dB, so extraneous leakage signals will be very quiet. It's possible to improve it, but the circuit described in Section 10 is so much better that pursuing a simplistic approach isn't worthwhile.
It is possible to use a conventional relay in parallel with the MOSFET relay, so that there is no added series resistance. In the case of a fault, the relay must open first, followed by the MOSFET relay. This process will add an inevitable delay, because you must allow sufficient time to allow the electromechanical relay to be fully open before the MOSFET relay opens. The control system will also be far more complex, with more things to go wrong.
Based on simulations and some tests, distortion can be expected to be well below 0.1% unless you don't have enough gate voltage or RDS-On is too high. Remember that there are two sets of RDS-On in series with the speaker, so maintaining a very low figure for each MOSFET is essential. It may be necessary to use two or more MOSFETs in parallel on each side of the switch to keep the insertion loss as low as possible. For a high power amp, even 0.1 ohm represents a significant power loss, and that power is turned into heat in the MOSFETs. Any increase in temperature further increases RDS-On, causing higher losses and more heat. Thermal runaway is possible if the MOSFETs are not sized correctly.
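The RDS-On/temperature feedback loop can be illustrated with a toy fixed-point iteration. The coefficients are rough assumptions (RDS-On roughly doubles between 25°C and 150°C for many MOSFETs, giving a tempco near 0.6%/°C; the thermal resistance is illustrative), not data for any specific device:

```python
def settle(i_rms, rds_25, tc=0.006, r_th=5.0, t_amb=25.0, steps=100):
    # Iterate dissipation -> temperature -> RDS-On until it settles
    # (or runs away, if current and thermal resistance are high enough)
    t = t_amb
    p = 0.0
    for _ in range(steps):
        rds = rds_25 * (1 + tc * (t - 25.0))   # resistance rises with temperature
        p = i_rms ** 2 * rds                   # per-device dissipation
        t = t_amb + p * r_th                   # new junction temperature
    return t, p

t_final, p_final = settle(5.0, 0.044)   # settles near 31 °C / 1.14 W
```

With these numbers the loop converges harmlessly; raise the current or the thermal resistance far enough and each pass through the loop produces a higher temperature than the last, which is thermal runaway in miniature.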
+ + +The solution that you eventually use will be determined by a number of factors, including space, cost and switching speed. As always, there are trade-offs that must be made in any design to get a final circuit that does what's required, but doesn't compromise the internal layout or add excessive cost and complexity.
Photovoltaic optoisolators are a good solution, but the JFET (or the scheme shown in Figure 8.1) is mandatory to ensure fast turn-off times. The greatest obstacles you will face with this technique are cost and availability of suitable photovoltaic devices. There's a wide range available, but some will struggle to provide enough voltage to ensure the lowest possible RDS-On. Others have very limited current - perhaps 20µA or less. These will have rather long turn-on times, but that's not a major limitation - simply mute the amp's input until the MOSFET relay is turned on.
A transformer coupled system that uses squarewave drive needs very little capacitance after the rectifier to get a clean DC switching waveform. This means that turn-on/off times can be quite respectable, without having to resort to using opto-couplers. Once the oscillator is stopped, the relay will switch off within a millisecond or so.
The high frequency transformer drive arrangement is also fairly straightforward, and will probably work out cheaper than using photovoltaic isolators. You can almost certainly make your own transformer quite easily - at 50kHz or so you don't need many turns, so it can be wound by hand. Naturally, the secondaries must be insulated to a standard that suits the amp's output voltage - both from the primary and each other. Another possible source of suitable small transformers is to use those tiny transistor output transformers that most electronics suppliers sell. Although they are extremely basic, some suppliers ask rather silly prices for them (up to $5-6 each). While they are basically rubbish in terms of audio quality, that's the least of your concerns when they are driven with a 50kHz squarewave. Their insulation may not be appropriate for a high power amp though, and this would need to be tested.
As noted earlier, I cannot recommend the charge coupled driver for use in an audio amp. It may be suitable for switching mains ... in which case the coupling caps (C1 and C3) must be Y2-Class (certified) types. All circuitry must have the required creepage and clearance distances for separation of hazardous voltage from your control electronics.
Using separate DC supplies (whether via a mains transformer or miniature DC-DC converters) is expensive and rather clumsy. It's hard to recommend this approach, but it may be required for some less pedestrian uses for a MOSFET relay. Since isolated switchmode DC-DC converters are available with over 1kV isolation, the technique is suitable for switching mains, provided transistor output optoisolators are used for MOSFET control. Be careful with these DC-DC converters though, as some have an isolation test voltage of 1kV, but the allowable working voltage is much lower (40-50V typical).
If you do use floating fixed supplies, you need to decide whether to switch the supply on and off to control your relay, or leave the supply running and use an optocoupler to control the gate voltage. The opto approach has a definite advantage in speed - it's easy to achieve sub-millisecond switching times, but at the expense of additional components. Switching the power supply on and off can result in MOSFET switch-off times of 10s or even 100s of milliseconds, depending on the load resistance.
In terms of ultimate simplicity, the supply rail bias scheme wins hands down for power amplifiers. No oscillators or transformers, and very few parts, so there's not much to go wrong. Standard transistor output optocouplers are cheap and readily available, and the most expensive part of the system is the MOSFETs. This is the scheme that I would probably use, provided that the speaker relay isn't used for muting. It's unlikely that any alternative scheme can come close for overall cost and lack of complexity.
It is important to understand that the supply rail bias scheme does cause a distorted signal to be produced across the speaker when turned off (especially when connected in the earth lead of the speaker, oddly enough). This is of no consequence if the only goal is speaker protection, and it is by far the easiest to implement. If full muting is needed, you will need to use one of the other schemes. The circuit must also introduce a small amount of distortion when turned on, because the diodes still go into and out of conduction as the signal voltage varies. However, the amount of distortion is very low indeed, and is unlikely to be audible at any level. While I have attempted to test for this, I was unable to measure the distortion, but of course that doesn't mean it's not there.
It used to be quite common for power amplifiers to have meters on the front panel to show the power level. These also used diodes driven from the amp's output, and therefore introduced some non-linearity. As far as I'm aware, no-one ever heard the distortion created, and I expect much the same with the circuits shown.
The circuit shown below has none of the limitations of the other schemes, but is comparatively expensive because of the two optocouplers. This is really a 'cost-no-object' approach, having only one limitation - the MOSFET turn-on time. This can only be made faster by having a low impedance supply for the gates, something you can't get with photovoltaic isolators.
Figure 8.1 - Composite MOSFET Relay Using Dual Optocouplers
The arrangement shown above has many things to recommend it. Unlike the circuit shown in Figure 2.1, there is no series resistance, and no JFET that must draw a tiny amount of current (reducing the available gate voltage), with a resistor further limiting the gate charge current. When the photovoltaic optocoupler is active, the transistor output opto is turned off, and all the available voltage from U2 is presented to the gates of the MOSFETs. When the input signal switches from 5V (MOSFETs on) to 0V (MOSFETs off), U1's output transistor shorts the MOSFET gates to the sources, ensuring fast turn-off.
The drive system is shown using 5V, but it can really be any voltage you have handy. It's just a matter of scaling R2 and R3 to get the right current into the opto's LEDs from the supply you have available. You may need to use higher voltage transistors if you want to use a voltage of over 25V or so.
Of course, disconnecting the speaker is not the only option. You can also use MOSFETs to switch off the amplifier's power rails when a fault is detected [ 4 ], however the circuit must latch so that the protection system doesn't cycle. This approach also has the limitation that you can't detect the fault before the speakers are connected, since the amplifier(s) need power to trip the protection circuits, so speakers will thump loudly when the amp is turned on.
The other version that probably has the widest application is that shown in Figure 5.1. It's fairly elegant, and I've used the small DC-DC converters in other (commercial) products I've developed with great success. While it's not the cheapest way to get a good result, it is still fairly cost-effective and it works quickly, which is what you need for a circuit such as this.
Although I have shown the various circuits here as speaker relays, needless to say this is only one of many applications. When switching DC loads, the schemes described here are not needed because the polarity will be known (so only a single MOSFET is needed), and for mains AC it's generally easier to use a TRIAC or a conventional relay.
There are many applications for the MOSFET relay in AC mains circuits though - trailing-edge light dimmers being one. In addition, if used with AC it is possible to use the relay to limit inrush current by turning on the MOSFETs relatively slowly. This may be hard to recommend though, because dissipation will be high and even a small asymmetry can cause an effective DC component that will cause serious problems for motors and transformers.
The number of applications is almost unlimited, but I confess that I can't think of many that haven't been covered already. Low current SSRs can be used for audio signal switching, and they are not particularly expensive. While this method of switching would satisfy most consumer audio requirements, it is probable that most people who love to build hi-fi equipment would frown upon the idea of active switches. CMOS active analogue switches already exist, but it's rare to find them in the audio path of any hi-fi equipment.
Even when switching high power circuits, there is usually no good reason to add the extra complexity. In general it's far easier to use a more conventional approach - traditional relays, TRIAC solid state relays, etc. However, it's also important to know about other techniques that might just prove to be the perfect answer to a problem that appeared to be insoluble.
A recently available MOSFET driver is the Si8751/52 capacitively coupled device, which was released in 2016 (it takes time before new devices are available from distributors). Unfortunately, they are only available in an SMD package, but with a rated working isolation of 630V (and a test voltage of 2.5kV) that provides sufficient isolation for most mains rated applications. Depending on local requirements, the low-side (transmitter circuit powered from a 3.3V to 5V supply) may require a mains protective earth. For speaker relays and other low voltage applications, no special precautions are required. There's been a lot of design work on these to make them as flexible as possible.
Apart from the Si8751 IC itself, mostly you only need a 5V power supply, a couple of resistors and a capacitor. The output MOSFETs will be selected for the voltage and current needed for your application. I've shown an AC MOSFET relay, but the IC is just as capable for DC. Although it's a great deal faster than any of the optocouplers examined here, it's not designed for high speed switching. The datasheet suggests an upper limit of 7.5kHz, but even that may be a little adventurous.
Figure 10.1 - MOSFET Relay Using Si8751 Capacitive Coupler
Turn-on and turn-off times are significantly better than photo-voltaic optocouplers, with typical figures of 42µs (on) and 15µs (off). This makes them an ideal choice for any MOSFET relay, and IMO pretty much renders the other methods obsolete. The only down-side is the fact that only an SMD package is available (SOIC-8). At a bit over AU$2.00 each when I bought them (late 2019), they are economical as well. For backward compatibility with optocouplers, the input of the Si8752 emulates an LED, the idea being that no circuit re-design is needed. The Si8751 uses a logic level input.
There's provision for 'Miller' capacitors, with the idea being that they will prevent the MOSFET(s) from turning on with fast transitions on the applied signal. For audio work (and anywhere else where very rapid voltage transitions are not expected) these can be omitted. R3 (connected from the TT pin to ground) is used to control how much current the circuit draws from the 5V supply. It can be shorted to ground (17mA), use (for example) a 10k resistor (9.5mA) or left open (1.8mA). This determines the switch-on time, with high current giving a faster turn-on.
The typical MOSFET gate 'on' voltage is 13V, but the datasheet does say that it may be as low as 9V. Most 'normal' (i.e. not logic level) MOSFETs will be quite happy with this, but you need to verify that from the MOSFET datasheet. If you want to find out more about these, a web search will provide the datasheet. This is the first IC I've come across that really makes MOSFET relays the 'go to' option for switching anything from mains voltages through to loudspeaker protection.
These ICs also allow the use of an N-Channel (or P-Channel, with gate and source pins swapped) MOSFET for high-side switching, with no requirement for bootstrap capacitors or other components. They aren't suitable for switchmode power supplies though, as they aren't fast enough. They are also not suited for any application that requires linear control - the MOSFET(s) are either on or off. Since they are designed specifically for MOSFET relays, this should come as no surprise. That they outperform anything else available is a given, as none of the other techniques examined in this article even come close.
Figure 10.2 - MOSFET Relay Using Si8752 And Project 198 Board
The above shows my prototype, using the P198 PCB, and using a pair of STW20NK50Z MOSFETs. These are 500V, 20A, 190W devices that I happened to have on hand (removed from a SMPS that had failed). It pretty much goes without saying that it performs exactly as expected, and I have run a few 'definitive' tests, and turn-on and turn-off times are as shown in the Si8751 datasheet. The DC output from the IC measured just under 11V, more than sufficient to fully turn on the MOSFETs.
The MOSFET relay has been tested with an audio amplifier to turn the speaker on and off, and also with a 50W LED floodlight from the 230V mains. It works perfectly in both applications, and is at least reasonably safe with mains voltages due to the generous creepage and clearance distances. With the MOSFETs I used, it should be able to handle a 230V load of up to around 500W (a bit over 2A) without needing heatsinks for the MOSFETs, as they will dissipate about 800mW each. Higher current will require heatsinks to maintain a safe operating temperature. There are many MOSFETs with significantly lower RDS-On that will dissipate less power, especially at lower voltages.
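The 800mW figure quoted above is consistent with an RDS-On of roughly 0.2Ω per device. That value is an assumption in the ballpark of the STW20NK50Z typical figure - check the datasheet for your own devices:

```python
# Dissipation per MOSFET at the quoted 2 A load current (P = I^2 * R).
# 0.2 ohm RDS-On per device is an assumed (datasheet-typical) value.
I_load = 2.0      # load current, amps (roughly a 500 W load at 230 V)
R_ds_on = 0.2     # RDS-On per MOSFET, ohms
P_each = I_load ** 2 * R_ds_on    # watts per MOSFET
print(round(P_each * 1e3))        # milliwatts
```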
For a MOSFET relay project, see Project 198, which shows a complete circuit, based on the test board shown above. It's been tested for mains switching, lamp dimming and loudspeaker switching, and it does exactly what's expected in each case. It's shown using IRF540N MOSFETs, which are suitable for speaker switching, and can be used with the Project 33 loudspeaker protection board. A MOSFET relay is ideal when the DC supply voltage is too high to prevent relay contact arcing.
There are literally thousands of MOSFETs to choose from, and you will need to select devices that can handle the voltage and current you will be switching. For a loudspeaker protection relay, there are several suitable candidates shown in the Project 198 construction details (available to purchasers). You need very low RDS-On for high current, and a voltage rating that will suit your power amps.
Ultimately, it's up to the constructor to decide on the most suitable MOSFET for the intended purpose, and ESP makes no assurances one way or another. Sometimes, you'll have a limited choice and will have to make do with what you can get. The lower the RDS-On the better, and the voltage rating should be around 10-20% higher than the amplifier supply rails.
The datasheet for the Si875x ICs provides no information on just how the Miller clamp circuitry works. The circuitry is integrated, and presumably the manufacturer either imagines that people will know somehow, or they are trying to keep it 'secret'. While I figured it out fairly quickly (once I decided that people might want to know), there are many documents on the Net that describe Miller clamps. They are particularly important with SiC (silicon carbide) MOSFETs due to their different internal structure, but very fast voltage risetimes can cause issues with standard silicon MOSFETs as well. You'll quickly discover that most of the info available on-line is either application notes or academic discussion. You'll be hard-pressed to find many example circuits that show how it's implemented.
All semiconductors have inter-electrode capacitance, and the capacitance between the drain and gate (the Miller capacitance) is the most important. Mostly, this isn't a problem because MOSFETs are usually driven from a very low impedance, but the internal circuitry of the Si875x ICs has considerably higher effective impedance. This allows a fast risetime drain voltage to cause spontaneous conduction of the switching MOSFET. It might only last for a microsecond or so, but it can reach a high current, limited only by the external impedance.
The Si875x datasheet is unclear about the 'gate-off' impedance. While it claims that it's over 1MΩ, this is highly unlikely. It's not something I've been able to verify, but from performance measurements it would appear to be around 22k. Provided the dV/dt (rate of change of voltage vs. time) remains below 10V/µs it's unlikely that there will be any issues. That may not sound very fast, but it's equivalent to a 50V RMS sinewave at 20kHz.
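The equivalence claimed above is easily checked. The peak slew rate of a sinewave is 2πf·Vpeak, so for 50V RMS at 20kHz:

```python
import math

# Peak slew rate of a sinewave: dV/dt(max) = 2 * pi * f * Vpeak.
f = 20e3                           # frequency, Hz
V_rms = 50.0                       # RMS amplitude, volts
V_peak = V_rms * math.sqrt(2)      # ~70.7 V peak
slew = 2 * math.pi * f * V_peak    # volts per second
print(round(slew / 1e6, 1))        # V/us - just under the 10 V/us limit
```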
The parasitic capacitances for a MOSFET are shown shaded. These are (in order of importance) CGD - gate to drain, CGS - gate to source, and CDS - drain to source. Spontaneous conduction is caused mainly by CGD, because the rising voltage on the drain is partially coupled to the gate. If the drain voltage changes quickly enough, it should be apparent that the MOSFET will conduct, but only while the drain voltage is rising.
Figure 12.1 - Active Miller Clamp Demonstration Circuit
In the drawing above, I've only shown one switching MOSFET, Q1. Q2 is the Miller clamp. When the drain voltage risetime is very short (e.g. 10µs or so from zero to maximum) the switching MOSFET's Miller capacitance will cause it to turn on - as the voltage is changing. By using a capacitor to differentiate the critical rate of change and apply it to the clamp MOSFET, the clamp turns on and shunts the parasitic gate current to the source. The switching MOSFET may have the dV/dt current reduced from many amps to only a few milliamps at most.
The Miller clamp is shown as a small-signal MOSFET, but a bipolar transistor can also be used. In a simulated comparison between the 2N7000 MOSFET and a 2N2222 BJT, the difference was negligible.
A simulation of the circuit shown (but with Q2 disconnected) indicates that with a 10µs supply voltage risetime (from zero to 100V on the DC supply), the MOSFET will conduct 6.27A peaks (with current in excess of 1A for 22µs). When the Miller clamp circuit is connected, the peak current is reduced to 15mA with a duration of less than 1µs. Note that this entire process is irrelevant if the supply is steady DC, and it can only happen when the DC is turned on, and its dV/dt is greater than 10V/µs. In 'real-life' circuits this is very unlikely.
During my simulations, I found that a dV/dt of 100V/µs would cause a (simulated) IRF540N to enter spontaneous conduction with a G-S resistance of 330Ω, passing a current of 4.5A during the voltage transition from 0-100V (in 1µs). This was reduced to 65mA with the active Miller clamp in place, using a 22pF capacitor. With a higher G-S resistance, the effect was a great deal worse. In the vast majority of cases, the Miller clamp caps will not be required.
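The mechanism can be estimated with the basic capacitor current relationship i = C·dV/dt. The 250pF gate-drain capacitance below is purely illustrative (Crss varies strongly with drain voltage, so the datasheet curves must be consulted), but it shows why a fast drain transition can lift the gate past a typical threshold:

```python
# Current injected into the gate node by a rising drain voltage:
# i = Cgd * dV/dt. The 250 pF value is hypothetical - Crss varies
# strongly with drain voltage, so consult the datasheet curves.
C_gd = 250e-12        # gate-drain capacitance, farads (illustrative)
dv_dt = 100e6         # 100 V/us, expressed in volts per second
R_gs = 330.0          # gate-source resistance from the text, ohms
i_gate = C_gd * dv_dt             # amps injected into the gate node
V_gate = i_gate * R_gs            # voltage developed across R_gs
print(round(i_gate * 1e3), round(V_gate, 2))
```

With these values, 25mA flows into the gate node, developing over 8V across the 330Ω resistor - well past a typical gate threshold, so the MOSFET conducts for the duration of the transition.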
This is a simplified explanation, so if you wish to get more in-depth coverage of the topic, I suggest a web search. The circuit shown above has been simulated, and the clamp does exactly what it's supposed to do. The demonstration circuit shown in Figure 12.1 reduces the peak switching MOSFET current from over 6A to no more than 15mA (most of which is due to CDS), based on a simulation using the devices (and voltage waveform) shown in the circuit. Despite everything described here, I can't think of any application where any problems will be experienced, but if they do arise the solution is provided in the IC.
While the MOSFET relay has some significant advantages over and above a traditional electro-mechanical relay, these advantages come at a cost. The MOSFET relay will be physically larger than a conventional relay, and the overall circuitry is more complex and costly. Where a relay can be glued into place on the rear panel of an amp, the MOSFET version requires at least one printed circuit board, as well as more wiring. While it will survive a DC fault in the amp (possibly many times over), DC faults are uncommon in well designed amps, used properly, and with good heatsinks.
To obtain complete isolation (full muting), you have to forego the parallel capacitor, and use a MOV and/or 'catch' diodes on the speaker side of the relay. With the cap shown in the above examples, there is a small 'leakage' current via the cap, and if the relay is used to mute the speaker output, a low-level signal is audible with sensitive speakers. Also, note the comments for my simplified 'supply rail powered' version - ignore this at your peril.
Yes, a conventional relay will be a real mess after the DC arc has been shorted to earth, and the relay should always be replaced if the amp has failed with DC on the output. However, the relay is cheap, easily replaced and you never have to worry too much about a multiplicity of electronic parts that can also fail, rendering the amp unserviceable even if there's no other fault. Contact resistance is sufficiently low as to ensure minimal power loss (if any), and distortion should be somewhere between zero and negligible if you have good contact materials and no oxidation. Once driven with power, any oxidation will be burnt away anyway - it is extremely uncommon for anyone to suffer from audible distortion caused by relays.
The MOSFET relay will survive countless DC faults, but this should never happen. All the additional complexity and cost is essentially wasted, with the exception of very high power amplifiers. It is extremely hard to find any relays that can break 100V DC at 25A or more - they exist, but are large and expensive. In such cases, it is worth considering the use of a final level of protection - a high power TRIAC that acts as a 'crowbar'. It protects the speaker by simply shorting the amp's output to earth. The amp has already failed, so additional damage is of little consequence because it will be limited when the fuse blows - which it will do spectacularly.
The important thing is to ensure that an amplifier failure only means that you have to repair the amp - not the speakers to which it's connected. In many cases, the loudspeaker drivers can cost more than the amplifier.
Naturally though, the idea of building your own MOSFET relay should have some appeal, just for the knowledge gained and the experience you'll get, not to mention the fun factor. I leave it to the reader to decide which method to explore and how much fun they should have doing so. The IC described in Section 10 really is a game-changer, and makes MOSFET relays far more usable than any other technique. I have retained the other techniques for posterity, but in reality they are all rendered obsolete with the availability of the Si8751 and Si8752 MOSFET driver ICs.
See Project 198 for a complete description of the MOSFET relay described in Section 10.
This is a space that's evolving, with new options being announced by the major IC manufacturers regularly. TI (Texas Instruments) has a new range of ICs for SSRs as of late 2023, but availability is limited at the time of writing. We can expect new developments as technology improves. It's already (more-or-less) possible to buy ICs that provide the equivalent of the transformer-coupled option shown in Fig. 3.1, but in a tiny SMD package. Most of these are hard to get though - they are listed as 'available', but the major distributors never seem to have them in stock. I have no doubt that this will change!
I also looked at a great many suggestions, websites and application notes - some good, some decidedly otherwise. The references shown above are intended as representative, and the same or similar information can be found elsewhere. A search is always a good place to start, but you need to know just what to look for in any circuit you may find. While some ideas seem ok on the surface, that's because the potential shortfalls haven't been mentioned (or addressed).
+ +![]() | + + + + + + |
Elliott Sound Products - Electric Motors
Motors are very much a part of life, and are used almost everywhere. They range from tiny flea-power types for quartz electric clocks, to CD and DVD players, computer hard disk drives, to large industrial machines that may be rated for 1MW (1,000kW or 1,340HP) or more. They are used to start internal combustion engines, power the electric seats, door locking mechanisms, through to powering the car itself for 'fully electric' and hybrid cars. One of the most common types is still the brushed DC motor, which has been with us for over a century, and shows no sign of going away anytime soon.
The most common AC motor is the 'squirrel cage' asynchronous motor, originally patented by Nikola Tesla in 1888 [ 1 ]. These have been refined consistently over the years, and are the most common motor for light industrial machines, refrigerators, washing machines and other similar tasks. In many areas they are being replaced by electrically commutated motors (commonly referred to as BLDC motors). Another very common motor is the shaded-pole type, and these are found in exhaust fans, small pumps (dishwashers, washing machines, etc.) and in many other places where a more robust motor isn't needed. They are used in many pedestal fans, in particular the cheap 3-speed types.
A shaded pole motor is actually a variation on the squirrel-cage motor, but with greatly reduced size and power. Despite initial appearances, all motors use much the same operating principle, although there are often some subtle (but important) differences. Motors rely on magnetism (or more correctly, electromagnetism), which is either switched or produced by AC input power. All AC induction motors have a synchronous speed, which depends on the AC frequency and the number of poles. A 2-pole motor fed with 50Hz AC has a synchronous speed of 3,000 RPM (3,600 RPM with 60Hz), but the majority of AC motors are not synchronous - they rely on 'slip'. The rotor slows under load, inducing a current into the 'squirrel cage' rotor that generates a magnetic field in the rotor, allowing the motor to produce torque (rotary force).
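The synchronous speeds quoted above follow from the standard formula N = 120f/p, with f the supply frequency in Hz and p the number of poles:

```python
# Synchronous speed of an AC induction motor: N = 120 * f / p
# (f in Hz, p = number of poles, N in RPM).
def sync_rpm(f_hz, poles):
    return 120.0 * f_hz / poles

print(sync_rpm(50, 2))   # 3000 RPM at 50 Hz
print(sync_rpm(60, 2))   # 3600 RPM at 60 Hz
```

An asynchronous (induction) motor runs a few percent below this figure under load - that difference is the 'slip' described above.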
The first commutator DC motor capable of powering a machine was invented by the British scientist William Sturgeon, in 1832. Following the work of Sturgeon, Thomas Davenport built an improved DC motor in America, with the intention of using it for 'practical purposes'. This motor, patented in 1837, rotated at 600 RPM and operated light machine tools and a printing press [ 2 ]. The basic principles haven't changed, but modern motors are far more efficient than these early attempts.

Something that isn't covered here is the interaction of magnetic fields that causes a motor to rotate. This is a deliberate omission, because it's assumed (rightly or otherwise) that the reader already knows the basics of magnetic (and electromagnetic) attraction and repulsion. A rotor has a North and South magnetic pole that changes continuously, and this is attracted to its opposite pole on the stator, and repelled by a like pole. All motors (whether AC or DC) use this principle, and DC motors use a commutator (see below) to ensure that the poles can never reach static equilibrium - therefore the motor is in a constant state of trying to catch up, resulting in rotation. AC motors are much the same, except that (usually) it's the magnetic field in the stator that 'rotates'. This basic understanding is all that's necessary to be able to follow the descriptions here. Ultimately, it's all based on the magnetic rule that ...
Like poles repel, unlike poles attract.
It makes no difference if the magnetic field is due to permanent magnets (ferrite ceramic, AlNiCo [Aluminium, Nickel, Cobalt], NdFeB [Neodymium, Iron, Boron], Samarium–Cobalt, etc.) or electromagnets (coils of wire, with or without an 'iron' core). Motors can use only electromagnets, or may use permanent magnets and electromagnets. You cannot make a motor that uses only permanent magnets, because there's no way to change the polarity of the magnetic field, and therefore no way to generate movement. Contrary to belief in some circles, magnets are not a source of energy, thus rendering all 'perpetual motion' (aka 'overunity') machines into concentrated snake-oil.
Permanent magnets use 'hard' magnetic materials, meaning that once magnetised, very little field strength is lost over time, or due to the influence of external magnetic fields. These materials are specifically designed to have high coercivity - the ability to retain magnetism without becoming demagnetised. Laminated 'iron' (actually silicon steel) is a 'soft' magnetic material. It's easy to magnetise with a coil of wire, but the magnetism is not retained. When the current in the coil stops flowing, the material falls back to (almost) zero magnetic field strength. This is the same type of material as used in transformers.
There's a vast amount of information available about motors. This article is intended as a primer, as it would be impossible to describe every application and variation. There are many specialised motor types that aren't particularly common (homopolar motors are just one example - look it up, because they aren't covered here, and nor are they very useful). I won't discuss 'ball-bearing' motors either - while certainly interesting they appear to have no practical use, and there's some debate over how they (barely) function. Another type not covered is the piezo motor, which uses piezo crystals to create rotary or linear motion. These are highly specialised and are usually very expensive.
While you can be excused for thinking that these motors are 'old hat' and rarely used any more, nothing could be further from the truth. They are still made (and used) in the millions each year, because they are one of the most economical motors around. You can buy them from many specialist suppliers, or find them on eBay. The most common types range from a few hundred milliwatts or so up to 500W (continuous), but there are others that operate at much lower and higher powers. Most are permanent magnet types, so they can be used as a motor or a generator. The brushes are always a cause for some concern, as they wear out from constant friction as they press onto the commutator. Brushes are typically carbon/graphite, often with fine granules of copper to reduce resistance. In 'better' motors, they can be replaced without having to dismantle the motor. Speed control is easy, but speed regulation is not. Without a feedback system, the motor's speed is highly dependent on the applied load. As the supply voltage is reduced, torque is also reduced (though not necessarily in proportion).
These motors are also common in 'linear actuators', used for locking/unlocking car doors, in robotics and industrial processes. The motor shaft (which may include gearing) is attached to a pinion which drives a rack (a flat or linear gear). As the motor rotates, the rack is moved in the desired direction. Most have limited travel (although up to 1 metre isn't uncommon), and there's a requirement to use limit switches to stop the motor at each end of the actuator's travel. Without that, the motor would remain powered but stalled, leading to failure. Current sensing can be used instead of limit switches, and that means power to the motor will be turned off if the rack is obstructed, jammed or overloaded.
Figure 1.1 - DC Brushed Motor Construction
The basics are shown above. The commutator switches the voltage from one winding to the next as the motor spins, ensuring that the rotor's magnetic poles are constantly changing to force the rotor to rotate. The position of the commutator segments in relation to the rotor windings is critical to obtaining the best speed and efficiency. You'll see many drawings that show a 2-pole rotor, but in all but a very few cases, the rotor will have a minimum of three poles. This ensures that it will always turn when voltage is applied. The direction of rotation can be changed simply by reversing the supply polarity. These motors can be designed for extremely high speed, with some rated for 20,000 RPM or more. Precision motors may use an 'ironless' rotor, with the windings being self-supporting. Some also use 'precious metal' brushes and commutators for greater life and lower friction losses.
For hobby motors used for model planes, boats, helicopters etc., you'll often see the speed rating in 'KV' - it stands for thousands of RPM ('K') per volt ('V') with no load. A 2KV motor will run at 10,000 RPM with 5V applied. The standard brushed DC motor is the mainstay of most 'hobby duty' servos (see Hobby Servos, ESCs And Tachometers). The same terminology is often used for 'brushless' DC motors (see next section).
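The KV figure is simple arithmetic, so here's the calculation from the 2KV example above as a quick Python sketch (the function name is mine, not from any library):

```python
def no_load_rpm(kv_thousands, volts):
    # 'KV' here is thousands of RPM per volt with no load,
    # as described in the text
    return kv_thousands * 1000 * volts

rpm = no_load_rpm(2, 5)   # the 2KV motor at 5V: 10,000 RPM
```

Remember this is the *no load* speed - any load (a propeller, for instance) will pull the actual RPM below this figure.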
Figure 1.2 - DC Brushed Motor Stator & Rotor
The stator and rotor are shown separated in the above photo. The ceramic ring inside the housing is the magnet, and you can see the three segments of the rotor. The commutator can also be seen, but not clearly enough to discern the three segments. You can see where the windings are terminated to the commutator. The copper windings are clearly visible. You can also see slots in the stator housing, which allow some movement of the brushes relative to the fixed poles. There is a specific point where the motor is most efficient (minimum current for a specific output torque).
Figure 1.3 - DC Brushed Motor & Brushes
The rotational speed is determined by a number of factors. One of the limiting factors is the strength of the permanent magnets - to obtain higher speed, they need to be weakened - pretty much the opposite of what you'd expect. Using strong magnets means higher torque, but lower RPM. It's very common for DC motors to have an attached gearbox, almost always geared down to reduce revs but increase torque. Many motors are sufficiently powerful that they can destroy the gearbox if the output is loaded excessively. This is especially true with gearboxes using a worm gear for reduction.
All of the early DC motors used field coils rather than permanent magnets. Very strong magnets are available now, but in the early days the only suitable material was AlNiCo (aluminium, nickel, cobalt), which was developed in the 1930s. It was then (and still is) fairly expensive, and that would have made its use in motors uneconomical. Until comparatively recently, most electric trains and trams used field coils, which could be switched (via the controller) to be either in series or parallel. A drawing of a motor using field coils is shown below. I've only shown two field coils, but many of these motors use four-pole stators.
To reverse the direction of rotation, the polarity of either the field winding or the rotor (via the commutator) is reversed (but not both). Some motors are specifically designed to operate most efficiently in one direction, and reversing it can cause severe arcing at the brush/ commutator interface. Reversible motors generally use a compromise for the brush location, so realistically, neither direction is optimum. Sometimes the brushes can be adjusted (as seen in Figures 1.2 and 1.3, where the stator housing is slotted to allow optimum brush location).
Figure 1.4 - DC Brushed Motor With Field Coils
The common car/ truck starter motor uses this construction, with the field coil and rotor (via brushes) wired in series. This class of motor has very high starting torque, because the coils are low resistance, and carry up to 200A when stalled - limited only by the series resistance of the windings, brushes and wiring (including the internal resistance of the battery). This provides enormous magnetic field strength, allowing a small motor to turn over a car engine easily (via the ring-gear attached to the flywheel). If run with no load, as the speed increases, the current falls, reducing the magnetic 'pull'. This can allow the motor to reach dangerous speeds, limited only by friction. At high RPM the motor has little torque, but when first connected to a battery the motor will try to 'escape' from whatever is holding it.
The rate of acceleration and starting torque are both very high, so strong restraints are essential. Never allow a series-wound motor to keep accelerating (which it will with no load), as it's not unknown for a starter motor to reach such a speed that the rotor windings can literally detach from the rotor. This applies to most series wound motors, whose speed is inversely proportional to the load.
Series-wound motors are also used in many household appliances, especially vacuum cleaners and most mains powered power tools (saws, angle grinders, drills, etc.). These use a laminated core for the rotor and stator, and will operate equally well with AC or DC. Motors with field coils are often referred to as 'universal - AC/DC' motors (the band of the same name got the idea from a sewing machine motor - true!).
Figure 1.5 - Typical AC Brushed Motor With Series-Wound Field Coils
A typical AC/DC motor is pictured above. This is rated for 230V AC, but spins quite happily with only 15V DC. The DC resistance is only 16Ω, so the switch-on current could be up to 20A with 230V AC. The motor is only 80mm long (excluding shaft and rubber shock-mount). At 15V, the stall current is just under 1A (as expected from the resistance), and while it can be held stopped with one's fingers (holding the nut at the right-hand end), it still has a surprising amount of torque. It was liberated from an old vacuum cleaner, where the 'clean' air output was directed across the motor for cooling; with fan cooling, it would be capable of around 500W. The small blue 'thing' visible joining the brush mount to the frame is a 1nF Class-Y1 capacitor. There is another for the second brush.
Some other household machines (in particular sewing machines) may use a shunt-wound system, where the rotor and stator windings are in parallel. The parallel/ shunt connection is far less common than series, but exhibits a more constant speed with varying loads. It is often preferred if the speed has to be tightly controlled by the operator, but shunt motors don't have the same starting torque as a series wound motor. The maximum speed is also (more-or-less) fixed, because the stator's field strength remains constant.
A crude (but effective) speed regulator that used to be common was a centrifugal switch, which would open at a defined RPM, and close again when the motor speed fell a little. These were used in kitchen mixers for many years, but electronic speed control has taken over. By using a tachometer, the speed can be held constant with any load up to the motor's rated maximum. This isn't covered here other than as a basic concept (below).
A simple motor controller (DC only) is shown in PWM Dimmer/ Motor Speed Controller, and I have one (with a big MOSFET) to control a 400W motor used to power my mini-lathe. The circuit also works well as a DC dimmer for LED lighting (with direct connection to the LEDs - not those with an integral power supply). DC PWM speed controllers can obtain a feedback voltage by monitoring the motor's back-EMF when the 'power pulse' is turned off. The motor acts as a generator in these short intervals, and the voltage is proportional to RPM.
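The back-EMF feedback idea can be shown as a one-line control loop. This is a minimal sketch only - the motor constant (`rpm_per_volt`) and loop gain here are arbitrary values I've chosen for illustration, not figures from the controller described above:

```python
def update_duty(duty, target_rpm, back_emf_v, rpm_per_volt=500.0, gain=1e-5):
    """One step of a crude proportional speed loop (illustrative only).

    back_emf_v is sampled in the interval when the PWM drive is off;
    with an assumed motor constant (rpm_per_volt), RPM is simply
    proportional to that voltage. A real controller would be tuned
    to the motor and load, and would filter the measurement.
    """
    measured_rpm = back_emf_v * rpm_per_volt
    error = target_rpm - measured_rpm
    # Nudge the duty cycle towards the target, clamped to 0..100%
    return min(1.0, max(0.0, duty + gain * error))

# Motor running slow (2,000 RPM measured vs 3,000 RPM target): duty rises
new_duty = update_duty(0.5, target_rpm=3000, back_emf_v=4.0)
```

Each PWM cycle the duty is nudged up or down, so the speed holds roughly constant as the load varies - which is exactly what an open-loop PWM drive cannot do.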
Because most DC motors are fairly high speed, it's common for them to have a gearbox (integrated or external). These can use 'conventional' gears and pinions, or may be planetary - the output shaft is in-line with the input shaft. This style of gearbox is very common in battery powered tools (especially drills). If you've never come across a planetary gearbox, I recommend a web search. They offer high gear ratios in a very compact unit, with no offset between the input and output shafts as is usually found with 'conventional' gearboxes.
Early electric trains (the big ones) used DC motors, and those used in Sydney from the 1920s up until the 1990s used a 1,500V DC supply, with a full 'set' (a complete 8-car train) demanding up to 1.6MW. These are commonly known as 'Red Rattlers', and they had a remarkably long life before they were finally all retired. Naturally, there's a fair bit of information about these (as well as early electric trains around the world). The 1.5kV DC supply was unusual at the time, as trains in many other countries used 750V DC, so would draw twice the current for the same power. 1,500V DC is now quite common, and is used by all of Sydney's 'heavy' rail cars (as opposed to 'light rail', which is basically a glorified tram).
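The 'twice the current' claim follows directly from I = P / V, and it's worth seeing the actual numbers for a full 1.6MW set:

```python
def supply_current(power_w, volts):
    # I = P / V (DC supply, ignoring distribution losses)
    return power_w / volts

i_1500 = supply_current(1.6e6, 1500)   # full 8-car set on 1,500V DC: ~1,067A
i_750 = supply_current(1.6e6, 750)     # same power on 750V DC: ~2,133A
```

Over a thousand amps either way - and at 750V the overhead wiring and substations would have to carry double that, which is why the higher voltage won out.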
Electric trains are a topic unto themselves, and most people who are interested will no doubt already have done extensive reading on the topic. For those who think that this is 'uninteresting' or even 'boring', if you have a technical mind, the more you read the more info you'll look for. Anything that can deliver up to 1.6MW (for a whole train) requires some pretty serious engineering! A 'starter' is shown at the end of the References section. Go on - you know you want to.
The brushless DC motor is not a DC motor at all. Electronics switch the DC as the rotor turns (it's electronically commutated), so the voltage applied to the stator windings is AC, not DC. In small motors, the electronics are enclosed within the motor housing, and this is most commonly found with 'BLDC' cooling fans (as used in computers, high power amplifiers, power supplies and the like). The layout is a reversal of the permanent magnet DC motor described above: the magnets (where used) are on the rotor, and the field coils are stationary. Most use a Hall-effect sensor to switch the DC from one set of field windings to the next.
There's another motor type that's also called a BLDC motor, but all the electronics are external. These are common for high power applications (for example in electric cars), and the motor is really an AC induction motor, despite the name. If the rotor uses magnets, the motor operates as a permanent magnet synchronous type, with the rotor speed directly related to the AC frequency used. If magnets are not used, the motor operates as a 'conventional' induction motor (see below).
Figure 2.1 - BLDC Floppy Disc Drive Motor
( © 14 September 2007, Sebastian Koppehel (Wikipedia), licensed )
The photo above shows a more-or-less typical small BLDC motor, as used in floppy drives (ancient technology now). The rotor is outside the stator, and is magnetised. There was no information provided about the number of poles for the rotor, but the stator has 12 poles and I'd expect the rotor to have the same. Similar motors are used for hard drives and CD/ DVD players. These motors are capable of very high speed, with high RPM types generally using fewer poles than low-speed types.
It's not particularly helpful that there are several different names used for the same type of motor. You'll see numerous acronyms, including PMM (permanent magnet motor), PMAC (permanent magnet AC), PMSM (permanent magnet synchronous motor) and BPM (brushless permanent magnet). These terms are generally interchangeable, but you always need to check the specifications carefully so the correct controller is selected. The range of controllers is vast, and using the wrong type will almost certainly not work properly (if at all).
Almost all BLDC motors are actually synchronous motors, covered in the next section. As discussed above, many use Hall-effect sensors (or sensing coils in early examples) to detect the rotor position so that the next set of coils can be energised. Strictly speaking these are not synchronous, because the drive frequency follows the rotor rather than the other way around - the sensors determine the frequency and RPM. The maximum RPM depends on the load. These simple motors can be slowed by reducing the supply voltage (making fans quieter), but there's a lower limit. Below that, the motor may run, but can't start from rest.
Figure 2.2 - BLDC Fan Motor
A common BLDC fan motor is shown disassembled above. The blades have been removed. The rotor (left of photo) has four poles (i.e. two north, two south), as does the stator, which uses a laminated 'iron' core and appears to be wound as 2-phase. The windings read 16Ω across the full winding, and 8Ω from each winding to 'common'.
The Hall sensor can just be seen between the two upper poles (at the top of the stator assembly). The controller IC is on the other side of the PCB. Like the floppy drive motor, this is an 'outer-rotor' motor, so the rotor spins and the stator (with the windings and electronics) remains stationary.
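For a sensored fan motor like this, the commutation logic is about as simple as it gets. The sketch below models the single-sensor, 2-phase arrangement described above; the winding names and the boolean sensor reading are my simplification, not taken from any particular controller IC:

```python
def energised_winding(hall_sees_north):
    """Select which of the two windings to drive, from the Hall reading.

    A single Hall sensor between the poles sees alternating north/south
    rotor poles as the rotor turns; the controller simply alternates
    between the two phase windings in step with it.
    """
    return 'A' if hall_sees_north else 'B'

# As the 4-pole rotor turns, the sensor output alternates,
# and so does the energised winding
sequence = [energised_winding(h) for h in (True, False, True, False)]
```

Larger BLDC motors use three sensors and six-step commutation, but the principle - rotor position decides which winding fires next - is the same.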
These used to be very common, with small types used in electric clocks. They were (and still are) very accurate, because the power companies worldwide need to keep the frequency (50 or 60Hz) tightly controlled so that generating capacity can be increased as needed. It's well outside the scope of this article to go into detail, but AC synchronous clocks are extremely accurate over the long term. The power company will generally ensure that the number of cycles of AC produced per day is consistent (4.32 million cycles for 50Hz mains, 5.184 million cycles for 60Hz). The first mains powered synchronous electric clock was developed in 1916 by Henry Warren (see Clock Motors for details), and many others followed as power companies worldwide ensured frequency accuracy.
In general, synchronous motors are not inherently self-starting. Many will operate as an induction motor when power is applied, and will only lock to the incoming frequency when the motor's actual speed is close to the synchronous speed. As a result, most have to be started with relatively light loading, and the load applied only once the motor has synchronised. Once the rotor has 'locked' to the AC input frequency and the load is applied, there is usually an angular offset between the rotor poles and the rotating field. Overloading will 'break' the magnetic bond, and the motor may stop.
Figure 3.1 - Synchronous Clock Motor
An example is shown above. This motor has multiple poles, and spins slowly. The use of 24 poles was fairly common, resulting in a motor that spins at 250 RPM (50Hz). The original Warren Telechron motor was a shaded-pole type, and with only two poles ran at 3,000 RPM (50Hz) or 3,600 RPM (60Hz). The speed of a synchronous motor is determined by ...
RPM = ( f × 60 ) / ( n / 2 )     (where f is mains frequency in Hertz, and n is the number of poles)
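The formula is easy to check against the motors mentioned above - the 24-pole clock motor and the original 2-pole Warren type:

```python
def synchronous_rpm(freq_hz, poles):
    # RPM = (f × 60) / (n / 2) - pairs of poles share one field rotation
    return (freq_hz * 60) / (poles / 2)

clock_rpm = synchronous_rpm(50, 24)    # 24-pole clock motor: 250 RPM
warren_rpm = synchronous_rpm(60, 2)    # 2-pole Warren type at 60Hz: 3,600 RPM
```

Note that the speed depends only on frequency and pole count - supply voltage has no effect, which is exactly why these motors make such good clocks.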
The only way to adapt a 60Hz clock to 50Hz (or vice versa) is to change the gearing - the speed is fixed by the mains frequency. The article Frequency Changer for Low Voltage Synchronous Clocks shows how you can change from 50Hz to 60Hz or vice versa, while maintaining the accuracy of the incoming mains. For what it's worth, I have a couple of synchronous clocks, and their timing is far better than any quartz clock I own. One disadvantage of these simple multi-pole motors is that they may start in either direction, so a simple mechanical pawl is used that 'bounces' the motor to run in the correct direction. Some earlier types used a little knob that had to be spun by the user when the power was applied. These types did not re-start if the mains supply failed.
Very small synchronous motors are used in electromechanical timers (as used for turning lights and other gizmos on and off at pre-determined times). There's a photo of one in the Clock Motors article if you'd like to see an example. Notably, quartz clocks (and watches) use a similar type of motor, and they are truly tiny (especially for watches!). While these share some characteristics of synchronous motors, they operate very differently.
Many years ago, Elac (in Germany) made (vinyl disc) turntables that were unusual, in that they were both high-quality, and had the facility for record changing. Most other 'record changers' of the day used a shaded pole motor (see next section) and were generally mediocre at best. Record-changing with accurate speed required a fairly powerful synchronous motor, and the ones used were known as 'outer-rotor motors', made by Papst (now EBMPapst, which still makes motors, though not this particular type). With the large rotor on the outside, it acted as an effective flywheel. I used one for some years back in the early 1970s, and I have one in my workshop to this day. Many models (especially aircraft) use the same principle for high-speed BLDC motors, where they are commonly known as 'out-runners'. The floppy-disc motor shown in Figure 2.1 is an outer-rotor motor.
Small synchronous motors are very common in 'high-end' turntables to this day. Some are low voltage and use an oscillator (which provides speed changes and allows variable speed). The output is amplified and fed to the motor windings. These always use two windings, with the voltage to one shifted by 90° (quadrature) so that the motor always spins in the right direction. Others run directly from the mains, with one winding fed via a capacitor to get the required phase shift to ensure reliable starting and good torque characteristics. They can use a crystal locked oscillator for accurate fixed speeds (45 and 33⅓ RPM). Most 'direct drive' turntables use a multi-pole synchronous motor that requires no belts or gearing. The motor itself spins at the desired speed, and by careful attention to the waveform they are almost vibration-free. There are several different styles used, some being similar to any other 'outer-rotor' motor, and others using a 'pancake' (flat rotor and stator) design.
Pancake motors don't get a section of their own, because they are no different from more traditional designs in the way they work. The motor shown in Figure 2.1 can be considered a pancake design, as it's very flat (as the name implies). They are available in multiple different formats and sizes, and some are even brushed DC motors. There is a wide range of available power levels, from a couple of watts up to 6kW or more in some cases.
A few readers will know the original Hammond organs, which used 'tone wheels' to generate the notes and their harmonics. These used a synchronous motor, so the instrument was as accurate as the mains frequency. Unlike later (fully electronic but not crystal controlled) oscillators, the Hammond organ was never out of tune, and everyone else in the band had to tune their instruments to the organ. These were made from 1935 until 1975, and are still sought after (and expensive) instruments. The sound is quite distinctive, although it can be matched using modern electronics. Sadly, the synchronous motor is no longer used.
Many industrial processes use synchronous motors, which can range from fractional horsepower types up to several thousand HP (1HP is 746 watts). Once they get above a certain size, many large motors use an electromagnet for the rotor, with DC power applied via slip-rings. These are solid copper rings, insulated from the drive shaft, and power is delivered with brushes. Wear is minimal, because there are no gaps in a slip-ring. I once worked on a 1,000HP (746kW) synchronous motor, helping to ensure that it was properly balanced as it was to be used in a water supply pumping station (24/7 operation). That was one scary machine when it got up to speed, as much of its 'innards' were exposed for all to see (but definitely not touch !).
Figure 3.2 - Westinghouse 'Type C' Synchronous Motor With Direct-Connected Exciter (Wikipedia)
The motor shown above dates from 1917 and is not too dissimilar from the one I worked on (although it was somewhat less ancient). The 'exciter' shown is a DC generator used to magnetise the rotor, and having it directly mounted to the drive shaft means that slip-rings aren't needed. However, the generator requires a commutator to 'rectify' the AC output from the exciter's rotor winding, which means that it's hardly maintenance-free.
An interesting use for synchronous motors is for power factor correction. The motor is (usually) run with no load, and the power factor can be changed by altering the DC excitation current. When the excitation current is lower than 'normal' the motor has a lagging power factor (inductive), and if excitation current is increased past the critical point, the power factor is leading (capacitive). Note that this only works when the mains current is linear, but out of phase (see Power Factor - The Reality (Or What Is Power Factor And Why Is It Important) for information on power factor). Many modern loads draw a non-linear current from the mains, and this cannot be corrected with a synchronous motor (or a capacitor bank).
Shaded pole motors are one of the most common small AC types available. Unlike 'traditional' single-phase induction motors, they don't require any starting system, but they are limited to low power applications. You'll commonly find them in desk, pedestal and exhaust fans, and they are also used as pumping motors for washing machines and dishwashers. Most will be rated for no more than around 50W (0.067HP), although there are a few used at higher power (up to 150W is available, but fairly uncommon). These higher powered versions will often be rated for intermittent use only, unless a cooling fan is attached to the output shaft. These motors are not very efficient, have a low power factor, and don't run happily if loaded when power is applied, due to very low starting torque.
A variation on the standard shaded-pole motor is the shaded-pole synchronous motor. The rotor is magnetised, and will rotate at the AC synchronous speed (3,000 RPM for 50Hz, 3,600 RPM for 60Hz). These were once common for AC electric clocks, with one of the earliest being made by the Warren Clock Company of Ashland, MA (patent #1,283,431 applied for on 21 Aug 1916 and granted 29 Oct 1918). See Clock Motors & How They Work. These synchronous shaded-pole motors have very low torque. Most shaded-pole motors in use today are not synchronous, and are used for fans (desk, ceiling or pedestal). They are gradually being replaced by BLDC motors for 'high end' products (see Section 2 [above] for details).
Synchronous shaded-pole motors were also sometimes used for vinyl turntables. These were used with some of the 'better' record-changers, and were fairly robust. The motor could drop out of synchronous operation during a record change (due to relatively high loading) but would return to synchronous operation once that process was complete. Several manufacturers used these motors in the 1960s and 1970s, but the desire for 'better' speed regulation and the demise of the record-changer spelled the end for them in this role.
Figure 4.1 - Shaded Pole Motor
The 'shaded' poles have a short-circuit ring around them, which forces the flux in the shaded pole to be shifted with respect to the 'main' poles. In the arrangement shown, the motor will spin clockwise. It can be reversed only by removing the bearing plates and installing the rotor the other way around (with the shaft pointing up in the top view). This is a trick worth knowing if you have one of these motors but need it to spin the other way from 'normal'. Like all squirrel-cage motors, the rotor has embedded conductors, which are typically die-cast aluminium.
The efficiency of these motors is low, rarely better than 50%. This is due to power losses in the laminated steel core, additional losses due to the shaded poles, and losses in the rotor itself. They also have very poor power factor, as evidenced by measurements of the motor shown next.
Figure 4.2A - Shaded Pole Motor With Gearbox
Those who know shaded pole motors already will tend to think of them as being (usually) pretty small and wimpy. The photo shown proves that this isn't always the case. I don't recall what it's from, but it has a very substantial core, and is fitted with a 3-stage gearbox to reduce speed and increase torque. Most motors have the power rating and output speed on the nameplate, but that's missing on this one. It only states that it's for 220-240V, 50 or 60Hz. I did manage to track down a datasheet, but that is not as helpful as one would hope. I measured it, and it pulls 1.5A at 230V and dissipates 100W (Power factor is very poor - less than 0.3). Output speed is about 16 RPM, and there was no way I could stop it when hand-held. Output torque is very high!
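The 'very poor' power factor figure comes straight from those measurements: power factor is real power divided by apparent power (volts × amps). A quick check with the figures quoted above:

```python
def power_factor(real_power_w, volts_rms, amps_rms):
    # PF = real power / apparent power (V × I)
    return real_power_w / (volts_rms * amps_rms)

pf = power_factor(100, 230, 1.5)   # the measured figures quoted above
```

That works out to roughly 0.29 - the motor draws 345VA from the mains to do 100W of real work, with the rest circulating as reactive current.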
Figure 4.2B - Shaded Pole Motor End View
From the end, you can (almost) see the rotor, with the shaft and bearing more visible. I know it's not an exciting view, but it's included so the 'real thing' can be compared to the drawing in Figure 4.1. The vast majority of these motors are small and wimpy, so it's at least a bit interesting to see one that's designed for some fairly serious torque. Unfortunately (and to add to its unusual nature), the output shaft has a left-hand thread, and I don't have a nut that will fit it. Now, if I could only recall from whence I got it ...
Of all motors, these are the most common. Shaded pole motors as described above are still induction motors, and the principles are virtually identical, except that shaded pole motors don't need a start winding. Nikola Tesla is credited with the invention of the induction motor, and they have been in use for over a century. These motors are used in countless industrial processes, and are the mainstay of power tools such as drill-presses, bench grinders/ belt sanders, radial-arm saws, band-saws, lathes and many others. Fractional horsepower types are common for small workshops, with ratings between 1/4HP and 1/3HP (180 - 250W). These are almost invariably single-phase, and all single-phase motors require a starting mechanism.
The contacts for the centrifugal switch are normally closed, and as the motor comes up to speed, the weights pull back the actuator and the switch opens. This ensures minimal friction when the motor is running, and prevents wear on the actuator and contact assembly. The contacts are usually only closed for a fraction of a second after power is applied, but this depends on the load. Where a high starting torque is necessary, capacitor-start is preferable to resistance-start systems.
Figure 5.1 - Single-Phase Induction Motor Internals
The general idea is shown above. These are very simple machines, which helps to ensure that they can last for 50 years or more without any attention whatsoever. I know this because I have one that's at least 60 years old, and it still works perfectly. It powers a medium-sized drill-press, and it doesn't get a great deal of use, but that's a very long time for anything to remain serviceable without a single repair! Note that the drawing doesn't try very hard to show the workings of the centrifugal actuator or the switch. These vary widely in design (and longevity), and while not particularly complex, it would be hard to fit it into the drawing. The only way to know how the one you have works (assuming that you have one) is to pull the motor apart and look at the mechanism (or you can look at the photos shown below). Not all induction motors use a fan - those intended for intermittent rating (or for dusty/ explosive conditions) are sealed, and rely on external cooling.
The stator consists of circular laminations, with slots for the windings. The windings are fully insulated from the stator by the winding wire's enamel, plus a secondary layer of heavy duty insulation within each slot. The windings are often held in place with a piece of stiff insulation that clamps them firmly in position. Winding movement may lead to abrasive damage to the enamel insulation, resulting in motor failure. For high-reliability applications, the entire stator may be vacuum impregnated after completion. This ensures maximum reliability, but it also means that the motor cannot be economically repaired.
Both the rotor and stator cores are laminated, because they handle AC. The stator is connected to the AC supply, and AC is induced into the (usually) aluminium conductors that are cast into the rotor. These conductors act as shorted turns, allowing a high magnetic field strength (due to high conductor current) with very little voltage. The rotor turns slightly slower than the mains derived magnetic field, and the speed falls (and magnetic strength increases) with increasing load.
There are two different approaches used for starting single-phase induction motors, and in some cases there's also a third option. Without a start winding, the motor can be started manually, just by spinning the shaft. That this practice is potentially dangerous is without question (especially for a saw or lathe!). If an induction motor is manually started, it will spin in the direction that initiated operation (either forwards or backwards!). If nothing is done, the motor will remain motionless, but will draw a very high current.
Figure 5.2A - Induction Motor Stator Assembly
The stator is shown above, and the windings and stator winding slots can be seen. Also visible is the rear bearing cup and the rear of the centrifugally operated switch. The 'run' windings are at the outside, and are heavier gauge (and slightly darker coloured) than the 'start' winding. The latter is disconnected by the centrifugal switch when the motor reaches about 80% of nominal speed.
Figure 5.2B - Induction Motor Rotor Assembly
The rotor shown above is typical of those used in most small induction motors. The 'stripes' you can see are aluminium conductors which are shorted at each end. This is commonly known as a 'squirrel cage' rotor, because if the laminated steel core is removed it would look like a cage, typical of those used for small animals for exercise. While the above photo shows the rotor 'windings' skewed, this is not always the case. Because the windings are shorted, current induced into them (by transformer action) is very high, and that creates a strong magnetic field.
The rotor has a fan at one end (within the end-bell), with the aluminium conductors and end pieces easily seen. The rotor 'shorting' end-pieces also have fins to provide some additional cooling. These are not always used, but aren't particularly uncommon. The centrifugal actuator is visible on the right-hand end of the shaft.
Figure 5.2C - Induction Motor Centrifugal Switch
The switch itself is very basic. It has no mechanical hysteresis, as this is provided by the actuator shown next. The wiring back to the terminal block is easily seen. The switch is normally closed, and the centrifugal actuator opens it at the designated speed. The actuator is arranged so there is no contact with the switch mechanism after it activates, so there is only a sliding contact as the motor starts and stops. A smear of grease is visible on the circular switch operating ring. This minimises wear during starting and stopping. The photo shows just one of many different configurations that are used, but the operating principles are the same.
Figure 5.2D - Induction Motor Centrifugal Actuator
The centrifugal actuator is a relatively simple affair, and is just one of many variations on the theme. At rest or below the cut-out speed, the weights are as shown in the photo. Once the motor gets up to speed, the weights are thrown outwards (so they are parallel to the pivots), and this retracts the black plastic plunger which disengages the start winding. The weights and springs have to be tailored for the motor's nominal full load speed, in this case 1,400 RPM. The switch will activate at around 1,100 RPM, but it's not a high precision device and there will always be some variation.
The rotor's magnetic field interacts with the stator's magnetic field to cause the rotor to spin - using the start winding for single-phase motors. The start winding can either be resistive (commonly referred to as a 'split-phase' motor as seen in Figure 5.2A) or it can use a capacitor. With capacitor-start motors, some use capacitance only to start, and others have a large start capacitor that's switched out with a centrifugal switch, and a smaller 'run' capacitor. These generally have higher torque than split-phase motors.
Note that 3-phase motors have an inherently rotating magnetic field, so a start winding is not required. Starting current mitigation is essential with very large motors.
As the motor comes up to speed, the flux in the rotor reduces, because it approaches the synchronous speed dictated by the frequency and number of poles. When a load is applied, the rotor slows down, causing more current in the rotor 'windings' and increasing their magnetic field strength. This is known as 'slip', and all asynchronous induction motors use the slip to try to maintain speed. A 4-pole motor at 50Hz has a synchronous speed of 1,500 RPM, but the rotor will typically run at around 1,400 RPM at full load (about 7% slip). Larger motors generally have less slip than small ones [ 4 ]. If the motor is loaded too heavily, it will lose torque rapidly and will draw excessive current.
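The synchronous-speed and slip figures quoted above follow from the standard relationship Ns = 120 × f / p. A minimal sketch (the function names are mine, not from the article):

```python
def synchronous_rpm(freq_hz, poles):
    """Synchronous speed of an AC motor: Ns = 120 * f / p (RPM)."""
    return 120.0 * freq_hz / poles

def slip_percent(freq_hz, poles, actual_rpm):
    """Slip, expressed as a percentage of synchronous speed."""
    ns = synchronous_rpm(freq_hz, poles)
    return 100.0 * (ns - actual_rpm) / ns

# A 4-pole motor on 50Hz: 1,500 RPM synchronous, ~6.7% slip at 1,400 RPM
print(synchronous_rpm(50, 4))     # 1500.0
print(slip_percent(50, 4, 1400))  # ~6.7
```

The same arithmetic gives 1,800 RPM for a 4-pole motor on 60Hz, which is why nominal nameplate speeds differ between 50Hz and 60Hz countries.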
The cheapest (and most cost-effective, due to the vast numbers made) is a 'resistance-start' system (aka 'split-phase'). The main winding is supplemented by a secondary winding with comparatively high resistance. When the motor is started, the two windings are connected to the mains supply. The main winding has a poor power factor at this point, so the winding current lags (is behind) the voltage. The resistive winding has a much better power factor due to its resistance, with voltage and current closer to being in-phase. The interaction of the two creates a rotating magnetic field that causes the rotor to accelerate. At about 80% of the rated RPM, a centrifugal switch disconnects the resistance winding, which would otherwise overheat and cause the motor to 'burn out'. The motor is then able to keep turning by itself - once running, the start winding is no longer needed.

A capacitor-start motor also uses a secondary winding, but it can have a lower resistance. The secondary winding is supplied via a capacitor, which creates a leading power factor (the current occurs before the voltage). (While this may seem unlikely, it is a well proven technique.) Like the resistance winding, the capacitor-fed winding interacts with the main winding to create a rotating magnetic field, and the motor starts.

In capacitor-start motors, a centrifugal switch is again used to disconnect the start winding when the motor is nearly up to speed. Some other motors keep the capacitor in circuit ('capacitor-run' operation), which improves torque. In both cases, the capacitor value is selected to produce a phase difference of 90° (or as close as possible). A few capacitor-start motors use both a fixed (run) capacitor and a start capacitor, to improve both starting and running torque.
Figure 5.3 - Capacitor Start Induction Motor
Small synchronous motors (particularly those used for vinyl turntables) often use two identical windings, with a capacitor connected to one or the other. This allows the motor to be reversed simply by reversing the connections. The drawing below shows how the capacitor can be connected. This also works with asynchronous (induction) motors, but only if the two windings are identical, and is generally limited to relatively low-power motors. Otherwise, direction reversal is provided by reversing the polarity of the start winding or the main winding. (This also applies to split-phase motors with a resistive start winding.)
Figure 5.4 - Reversible Capacitor Start Synchronous/ Asynchronous Motor
In some cases, a current-activated relay is used instead of the centrifugal switch (mainly for smaller motors). The high starting current causes the relay to pull in and connect the start winding; as the current falls when the motor approaches operating speed, there's no longer enough to hold the relay closed, so it disconnects the start winding. These are comparatively uncommon - I know of their existence, but have never come across one. I worked on a lot of motors in my early 20s (now that was a long time ago), but not so many in later years.
Larger motors (typically those of 2 - 3HP (1.5 - 2.2kW) and above) are almost always 'poly-phase' - generally 3-phase types. While a single-phase motor can be up to around 5HP (3.7kW), the start current is too high to allow them to connect to a wall outlet without overload. 3-Phase motors use a higher voltage (400V in Australia and most of Europe, but may be different elsewhere), so require less current for the same power output.

Speed control of single-phase motors is difficult because of the centrifugal switch. If the motor is slowed to the point where the switch engages the start winding, it will almost certainly fail due to overheating. With capacitor-run motors (where the capacitor is permanently connected), the fixed capacitance means that it's less effective at lower speeds. As a result, most single-phase motors use either stepped pulleys or a gearbox to change speeds. Stepped pulleys are very common with small bench drill-presses and the like, and some use an intermediate idler pulley to provide more speed options.

Changing the belt to different pulley sizes is a nuisance, but it works well enough in practice, and the technique is almost as old as the idea of a drill press itself. All modern units use V-belts, which can transmit significant torque if properly tensioned. The same system is used with some small milling machines, along with many other machines where different speeds are required.

In some cases a variable frequency drive (see 3-Phase Speed Control below for details) can be used with single-phase motors that are specifically designed for this application. Most are not, so it's not something that can be applied without a great deal of research. Attempting to use any motor in a configuration for which it was not designed can often lead to unexpected failure. As is to be expected, a single-phase motor with a centrifugal switch cannot be used with any form of speed controller. At low speed, the centrifugal switch will close, engaging the start winding (whether resistive or capacitive). This will lead to rapid overheating and failure.

The only motors that can use variable frequency drive are 'permanent split capacitor' (PSC), shaded-pole and synchronous motors (the latter are very uncommon in all but the most esoteric applications). One of the very few can be found in some vinyl turntables, although the majority of those use a PSC synchronous configuration.
I don't intend to spend much time on 3-phase motors, because the majority of people will never come across one (other than small 3-phase 'BLDC' hobby motors). Very large motors require special care when starting, mainly because they draw an astonishing amount of current when started. Back when I worked on such motors, the most common arrangement was a slip-ring 3-phase motor, usually running from at least 415V (now nominally 400V) 3-phase power, although once over 500HP (373kW, or 300A/ phase) higher voltages were used (such as 1.1kV). These motors used a 'resistance-start' arrangement, where the rotor has windings that are connected to slip-rings [ 3 ]. When power is applied, the slip-rings are connected to very high power resistors (typically made from a specialised cast-iron alloy). As the motor speed increases, the resistance is reduced using high-current contactors (very large relays), until full speed is reached, when the slip-rings are shorted and the motor runs as a 'normal' induction motor.
Figure 7.1 - 3-Phase Voltage Waveform
The 3-phase waveform consists of three sinewaves, with 120° displacement between each. This inherently creates a rotating magnetic field when applied to a 3-phase motor. Direction can be reversed by swapping any two phases. They have been shown as Phase 1, 2 and 3 above, but are also known as 'A, B and C', or by the colours used (which varies between countries). The graphs show three 230V sinewaves at 50Hz, and this needs to be changed to suit other voltages and frequencies. The voltage for each phase with respect to neutral (see Figure 7.2) is 230V RMS, and the voltage between any two phases is (nominally) 400V RMS. You can calculate the 3-phase voltage by multiplying the single-phase voltage by the square root of three (√3).
230 × √3 = 230 × 1.732 ≈ 398V
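That relationship is easy to verify numerically - the voltage between two 230V phases displaced by 120° works out to exactly √3 times the phase voltage (a quick check, not from the article):

```python
import math

V_PHASE = 230.0  # phase-to-neutral voltage, RMS

# RMS voltage between two sinewaves of equal amplitude, 120 degrees apart:
# |V1 - V2| = V * sqrt(2 - 2*cos(120 deg)), which equals V * sqrt(3)
v_line = V_PHASE * math.sqrt(2 - 2 * math.cos(math.radians(120)))

print(round(v_line, 1))                  # 398.4
print(round(V_PHASE * math.sqrt(3), 1))  # 398.4 - the same figure
```

The nominal '400V' figure is simply this value rounded for convenience.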
Small 3-phase motors (up to perhaps 50HP (37kW)) use a start system known as 'star-delta' or 'wye-delta' (Y-Δ). Consider a 40HP motor (30kW), connected to a 415V 3-phase supply. Full load current is 26.7A per phase. Because the starting current of a delta-connected motor is around 6 × the running current, the motor will pull around 160A per phase if connected directly to the supply (known as 'DOL' or direct on-line starting). This is usually quite unacceptable, so for starting, the motor is connected in 'star' (or 'wye'). This reduces the starting line current to one third - about 53A. That's still high, but tolerable in an industrial setting. Once up to speed, the windings are reconfigured with a switch or contactor into the delta pattern, which gives maximum power.

The overall construction of a 3-phase motor is very similar to a single-phase type, except there are three windings, and no centrifugal switch.

While most people seem to think that motors have a very poor power factor (PF), that's only true if they are lightly loaded. At full rated power, you can expect the PF to be at least 0.85. That represents a phase angle of about 32° lagging (due to inductance). Bigger motors are engineered to have a higher power factor, as that reduces the reactive current drawn from the mains supply.
Figure 7.2 - Star (Wye) And Delta (Δ) Motor Windings
The above shows the two winding types. Many people who work with 'BLDC' hobby motors (which aren't restricted to hobbies!) will recognise the winding pattern, and the ESC (electronic speed controller) for these motors outputs a 3-phase AC signal that powers the motor. These motors are almost invariably connected in delta. A neutral isn't provided, and it's not needed.
Variable frequency drives (VFDs) are now very common, and provide a 3-phase output that has both frequency and amplitude control. The motor's speed is varied by changing the frequency, and the amplitude is changed to maintain a reasonably constant flux in the motor windings. As the frequency is reduced, the current would normally rise because the inductance remains constant. At some frequency (not much below the normal operating frequency of 50-60Hz) the stator core will start to saturate, causing a rapid increase of current and failure of the motor, VFD or both. To combat this, the VFD reduces the voltage when the frequency is reduced, and vice versa. There is always a lower and upper limit, and trying to use the motor 'inappropriately' will cause failure. The following relationships are important when using speed control ...
Magnetic field strength (magnetic flux) is proportional to voltage and inversely proportional to frequency (the V/f ratio)
Torque is directly related to magnetic field strength
Power (W) = torque (Nm) × ω (where ω = 2π × RPM / 60)
When a motor's output speed is reduced with gearing or belt drives, the torque is increased in inverse proportion to the speed reduction. For example, if the speed is reduced to half, the torque is doubled, and the power remains constant. This is not the case when a VFD is used. If the speed is reduced to half by reducing the frequency from 50Hz to 25Hz, the torque remains the same, so power is also halved. Variable speed is useful to ensure the machine runs at a speed that's appropriate for the job, but the power varies with the frequency applied.
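The difference between mechanical speed reduction (constant power) and VFD speed reduction (constant torque) can be illustrated with the power formula above. The torque figure here is purely illustrative:

```python
import math

def shaft_power_w(torque_nm, rpm):
    """P = T * omega, where omega = 2*pi*RPM/60 (rad/s)."""
    return torque_nm * 2 * math.pi * rpm / 60

torque = 100.0  # Nm - an arbitrary example value

# Gearing down 2:1 doubles the torque, so shaft power is unchanged:
p_direct = shaft_power_w(torque, 1400)
p_geared = shaft_power_w(torque * 2, 700)

# A VFD at half frequency keeps torque roughly constant, so power halves:
p_vfd = shaft_power_w(torque, 700)

print(round(p_direct), round(p_geared), round(p_vfd))
```

The numbers confirm the point in the text: halving speed with a VFD halves the available power, while halving speed with pulleys or gears does not.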
There are other complications as well, especially if the power frequency is much higher than normal because you want the motor to operate at a speed greater than it was designed for. Voltages may exceed the insulation rating, eddy current losses will be higher than expected, and even bearings can be damaged (either through excessive speed or electrolytic corrosion). VFDs use PWM (pulse width modulation) to create the output waveform, and this usually contains harmonic frequencies that are much higher than the frequency delivered. It's usually recommended that the output shaft should be earthed/ grounded with a specially designed 'grounding ring' to prevent current induced into the rotor from passing through the bearing itself when a VFD is used.

At first glance, using a VFD seems simple enough, but if all precautions aren't followed motor damage is very likely. These issues are well outside the scope of this article, but there is plenty of information (and warnings) from various manufacturers, and elsewhere on the Net. If this is something you are planning to use (or already use), it's worthwhile to read up on bearing damage, as it's a common problem that isn't always addressed properly.
Stepper motors (aka stepping motors) are used in so many things that it would be impossible to list them all. A few examples include computer printers and scanners, 3D printers, CNC (computer numerically controlled) machines of most types, robots (both toy and 'real') and all manner of positioning applications. They are commonly used 'open-loop', with no feedback mechanism. Stepper motors can be used as fast as the design will allow or down to DC, with no change to torque or holding power (when stopped). The same relationship to power applies as it does with variable speed induction motors.

Provided the load is well within the limits of the motor, it can be relied upon to perform exactly the number of rotations (or part thereof) that's programmed into the controller. This is why ink-jet printers (for example) move the print head from one extreme to the other when turned on. The 'home' position is established, and the printer knows exactly how many rotor turns are needed to move the head to the end position. If the print-head is jammed, the limit switch won't be activated after the programmed number of turns, and an error light will come on. Similar tests are performed by scanners and other equipment controlled by stepper motors.

The simplest stepper motor of all is used in common quartz clocks. These are a 'special' case, because the rotor turns 180° with each alternating pulse. The winding is clearly visible in the photo, and the rotor is beneath the small gear seen between the two metal 'arms'. These are carefully shaped to ensure that the rotor always turns in the proper direction.
Figure 9.1 - Quartz Clock Motor
Stepper motors are characterised by type and size. The most common are hybrids, combining variable-reluctance and permanent-magnet construction.

'Proper' stepper motors have a tightly controlled and precise angle between full steps, typically 1.8°. That means the motor requires 200 steps to complete one revolution. Specialised driver ICs provide half-step and 'micro-step' capabilities by controlling the winding current. While a stepper motor is (in theory) a synchronous poly-phase motor, it has the ability to be used at any desired frequency up to the maximum - the upper limit is determined by the coil inductance. With the capability to be locked at any position, it is sufficiently different from a synchronous motor that (IMO) equating the two is folly. The photo below shows the intestines of a NEMA-17 stepper motor.
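The step-angle arithmetic described above is simple enough to sketch (the function name and the microstep settings are mine, purely for illustration):

```python
def steps_per_rev(step_angle_deg=1.8, microsteps=1):
    """Full steps per revolution, optionally multiplied by a driver's microstep factor."""
    return round(360.0 / step_angle_deg) * microsteps

print(steps_per_rev())          # 200 full steps for a typical 1.8 degree motor
print(steps_per_rev(1.8, 16))   # 3200 positions per rev with 1/16 microstepping
print(steps_per_rev(0.9))       # 400 for a 0.9 degree motor
```

Microstepping improves resolution and smoothness, but note that it does not improve absolute accuracy to the same degree - the intermediate positions depend on winding current balance.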
Figure 9.2 - NEMA-17 Stepper Motor Dismantled
The 'teeth' on both the rotor and stator provide the key to operation. When a pair of windings is energised, the rotor will move and lock to that position, and it takes some effort to move it as long as current is applied. By switching current from one set of windings to the next, the motor will rotate by one 'step' (1.8°). If both windings are energised, the motor will move by one half step (0.9°). There are several different ways that a stepper motor can be wired, depending on the motor itself. The most common variants are uni-polar and bi-polar. A bi-polar motor usually has only two pairs of wires, while uni-polar types usually have six (but sometimes five or eight) wires. There are two separate windings in these, each with a centre-tap.
Figure 9.3 - Uni-Polar And Bi-Polar Motor Wiring
As should be expected, a uni-polar motor with a DC voltage on the common (centre-tap) leads only needs the drive wires to be grounded to obtain current flow. A bi-polar motor requires that the polarity to each coil is reversed for alternating pulses (the quartz clock stepper motor is bi-polar). The dismantled motor shown in Figure 9.2 is uni-polar, and the full winding measures 6.2Ω (3.1Ω for each half-winding from the centre-tap). Uni-polar motors can be used as bi-polar by ignoring the centre-tap, but a bi-polar motor cannot be wired for uni-polar operation.
Figure 9.4 - A Selection Of Different Stepper Motors
Owner: Bill Earl, License: Attribution-ShareAlike Creative Commons
As you can see from the above, there are many different types and styles of stepper motors. There are also some very large ones (not shown) - I have one in my workshop that's over 200mm long and 150mm in diameter. If the windings are shorted, it requires a pair of strong 'multigrip' pliers or similar just to move the shaft. This is one of the attractions of stepper motors in general. If the static load is small, just shorting the windings will keep the motor from turning, and even with a fairly high load, a small current can be enough to prevent the motor from turning. The preset position is maintained, without any requirement for a servo system to make it stay where you want it.
Figure 9.5 - Uni-Polar Stepper Motor Logic
Each logic output connects to a high-current switch (e.g. a MOSFET), shorting the relevant winding to ground. This is only one of many ways to drive stepper motors, and it's been shown because it's easily simulated and can be built using cheap CMOS parts (using a 12V supply) driving output MOSFETs. The direction is reversed by pulling the 'Dir' input high. The clock signal is nominally a squarewave, and can be as slow as you like. The devices specified are a 4584 (hex Schmitt inverter), 4070 (quad XOR gate) and 4013 (dual D-type flip-flop). Note that there are two coils energised at any point in time, providing the maximum possible torque.
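The switching pattern produced by that logic can be expressed as a simple table - two of the four winding switches on at any time, advancing one position per clock pulse. This is a sketch of the sequence only, not the CMOS implementation itself:

```python
# Full-step, two-coils-on sequence for a uni-polar motor (windings A, B, A', B').
# Each tuple holds the on/off state of the four low-side switches.
FULL_STEP = [
    (1, 1, 0, 0),
    (0, 1, 1, 0),
    (0, 0, 1, 1),
    (1, 0, 0, 1),
]

def winding_states(step_count, reverse=False):
    """Switch states after 'step_count' clock pulses; 'reverse' flips direction."""
    index = (-step_count if reverse else step_count) % 4
    return FULL_STEP[index]

print(winding_states(0))                # (1, 1, 0, 0)
print(winding_states(5))                # (0, 1, 1, 0) - wraps around every 4 steps
print(winding_states(1, reverse=True))  # (1, 0, 0, 1)
```

Note that exactly two switches are on in every state, matching the 'two coils energised' behaviour described for the Figure 9.5 circuit.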
The maximum clock speed is determined by the winding inductance and available voltage. A common way to get higher than 'normal' speed is to use a higher voltage supply (e.g. 12V for a 5V motor) with current-limiting, so the motor doesn't draw excessive current and overheat. This can be active (using transistors) or passive (using resistors). More advanced control systems are IC based, and provide many advantages over the simple scheme shown, but may only be available in an SMD package, and may require a microcontroller to function.

Paradoxically perhaps, bi-polar motors are simpler than uni-polar types, but are harder to drive. Instead of simple MOSFET switches for each winding-end (four in all) you need eight MOSFETs (or bipolar transistors), and a more complex drive circuit. The switching devices are wired as an H-bridge, and a minimum of four devices is required for each winding. With MOSFETs, they'll usually be N-channel and P-channel types (NPN and PNP for BJTs).
Figure 9.6 - Bi-Polar Stepper Motor H-Bridge
The drawing above is a somewhat unusual H-bridge driver circuit that only requires a pair of drive signals. The required switching is performed by the resistors (R3, R6), which are cross-connected so that when Q1 turns on, it forces Q4 to turn on as well. When Q3 is turned on, that turns on Q2. It's important that both drive signals are never present at the same time, as that would cause all MOSFETs to turn on, shorting the supply. A small 'dead-band' (where all MOSFETs are off) is required, but it only needs to be about 20µs - just long enough to ensure that the first pair of MOSFETs is off before the second pair is energised. This is no different from the arrangement used for Class-D power amplifiers, and all H-bridge circuits have the same requirement for a dead-band. The zener diodes protect the MOSFET gates from voltage spikes that may cause failure. I do not consider them 'optional', although many circuits you'll see don't include them.
With MOSFETs, there is no real requirement for additional diodes, because they are intrinsic to the MOSFET (the internal 'body' diode). These aren't usually especially fast, but stepper motors cannot accept high-speed input frequencies anyway, so it's not usually a problem. External diodes can be added, but are usually only required when the output switches are bipolar transistors.

The circuit is shown with values to suit 12V motors, or a lower voltage motor with a series resistor. Higher voltages are easily accommodated by increasing the value of R3 and R6. For example, if you have a 24V motor, these resistors are increased to 1k, so the upper MOSFETs still get a 12V gate voltage. It can also be used with lower voltages, but the MOSFETs must be low-threshold types, with a gate turn-on voltage suitable for the voltage used. The minimum will normally be around 5V, and suitable MOSFETs are available (although the choice of P-channel devices is limited). The scheme shown is simpler than most, with many expecting four separate control voltages, all of which must be synchronised without overlap that can cause cross-conduction (two MOSFETs in the same 'stack' turned on at the same time).
Figure 9.7 - Alternative Bi-Polar Stepper Motor H-Bridge
A common arrangement is to use the MOSFETs with their gates simply tied together (optionally with gate resistors and diodes) as shown above. This scheme demands that the drive voltage and supply voltage are the same, and it cannot be higher than the maximum gate voltage. This may not be ideal for the application, and while it certainly will work, it lacks any flexibility in the selection of the motor. Admittedly, most stepper motors are designed for low voltage operation, but a circuit that imposes arbitrary limits is ... limiting.

Of course, you can always cheat and use a power H-bridge IC such as the L298 (BJT output switches). This has the switching and basic logic all sorted out for you, but it's not necessarily ideal. It does have logic circuitry that steers the output transistor drive current and ensures that there is minimal 'shoot-through' current (it includes the required dead-band). Unlike MOSFET switches, there are no parallel diodes to protect from voltage spikes, and these must be high-speed types, added externally (eight diodes for a bipolar stepper motor).
Linear motors can be thought of as a 'conventional' motor that's been 'unrolled'. They are surprisingly common, though mostly you won't know they are there [ 5 ]. There are many considerations, not the least of which is maintaining the correct (and generally very small) distance between the stator and the propulsion system. They are common in 'MagLev' (magnetic levitation) systems for trains. See the Wikipedia page referenced for more information.

Like many other more advanced topics, a full discussion of linear motors is (well) outside the scope of this article. They are mentioned here purely because they exist, and are in use in many parts of the world. A web search will provide you with plenty of reading material, but it's not something that most hobbyists will get into. So-called 'rail guns' use a form of linear motor as well, and these have been built by hobbyists and professionals alike. Other than pointing out that they exist, I don't propose to go into more detail here.
A very basic introduction to speed control was provided in Section 6, related to the use of a VFD unit, but with induction and DC motors the speed stability can be highly variable. In some cases, it's (perhaps surprisingly) an advantage if the motor slows under heavy load. When loaded the motor slows, giving the operator more time to negotiate tricky parts - this is especially true of sewing machines! No-one (unless very experienced) wants the machine to sew at high speed over heavy seams, so allowing the motor to slow down helps the user to negotiate problem areas more easily. However, this is certainly not desirable for many other applications.

Synchronous motor speed is determined by only one thing - the AC input frequency. Stepper motor speed is determined by the pulse rate - provided it's lower than the maximum allowable, of course. When DC or induction motors are operated 'open loop' (with no form of feedback), the speed will vary, and dramatically so with DC motors. Feedback involves providing a means of monitoring the actual shaft speed, which can then be compared to the 'reference' setting. This allows the system to automatically adjust the motor power as the load changes.

The monitoring device can be a small motor operated as a generator, or a digital encoder that provides both speed and direction information. Simpler units can monitor the current drawn and make corrections based on how much current the motor is drawing. The latter are not precision speed regulators, but they can suffice in non-demanding applications.

However, accurate speed control/ regulation is not trivial! Because a motor has inertia and momentum (determined by the mass of the rotor and the coupled load), this introduces a mechanical 'filter' into the equation that makes a stable feedback system difficult to design. This is one reason that 'ironless' or 'coreless' motors are popular in high speed applications where the RPM needs to change quickly. With less mechanical inertia, acceleration and deceleration are much faster, and it makes the controller easier to design.
It is beyond the scope of this (introductory) article to go into great detail, but a common approach is a 'PID' (proportional-integral-derivative) controller. These are discussed briefly in the Servos article, and there are countless PID controllers one can buy, and a great deal of info on the interwebs. If a very well regulated speed is needed, a synchronous motor is probably the best option. However, they are rarely suited to accurate positioning tasks, and there is a definite limit to how quickly you can make speed changes (mainly due to motor inertia/ momentum).

Seemingly simple tasks (like maintaining a pre-determined speed regardless of load) are nowhere near as simple as they may appear at first. Dealing with instability in purely electronic circuits is usually fairly straightforward, but the task is a great deal harder when there are mechanical forces working against you. The fact that many machines not only require constant speed, but are also subjected to variable load conditions, makes everything that much harder.
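As a rough illustration of the PID approach, the sketch below closes the loop around a crude first-order motor model. The gains, time-constant and all names are invented for the example, and are not tuned for any real motor:

```python
class PID:
    """Textbook PID controller; the gains here are illustrative only."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Crude motor model: speed lags the drive signal with time-constant tau.
dt, tau = 0.01, 0.5
pid = PID(kp=2.0, ki=5.0, kd=0.0, dt=dt)
speed = 0.0
for _ in range(2000):                   # simulate 20 seconds
    drive = pid.update(1500.0, speed)   # target 1,500 RPM
    speed += (drive - speed) * dt / tau

print(round(speed))  # settles close to the 1,500 RPM setpoint
```

The integral term is what removes the steady-state error as the load changes; a real controller also needs integral wind-up limiting and output clamping, which are omitted here for brevity.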
Motors generally require balancing to prevent vibration while running. A common approach is to drill into parts of the rotor laminations to remove weight from the heavier side(s) of the rotor itself. It's generally close to impossible to wind or build a rotor that is perfectly symmetrical in every respect, and it's usually equally difficult to add weights, as there's always the possibility that they will fall off (or fly off!) or be dislodged during assembly/ disassembly. The degree of balancing needed depends on the size and speed of the motor. Large low-speed motors require careful balancing, as do small high-speed motors. The forces created are proportional to the unbalanced mass and the square of RPM. Double the speed, and the unbalance forces increase by four times.
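The square-law relationship quoted above comes from the centrifugal force formula F = m·r·ω². A quick sketch, with purely illustrative values:

```python
import math

def unbalance_force_n(mass_kg, radius_m, rpm):
    """Centrifugal force from an unbalanced mass: F = m * r * omega^2."""
    omega = 2 * math.pi * rpm / 60  # angular velocity, rad/s
    return mass_kg * radius_m * omega ** 2

# 1 gram of imbalance at a 50mm radius:
f_1500 = unbalance_force_n(0.001, 0.05, 1500)
f_3000 = unbalance_force_n(0.001, 0.05, 3000)

print(round(f_1500, 2))        # ~1.23 N
print(round(f_3000 / f_1500))  # 4 - double the speed, four times the force
```

Even a gram or so of imbalance becomes a significant cyclic load on the bearings at high RPM, which is why small high-speed rotors need balancing just as much as large slow ones.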
A small imbalance may be imperceptible at 1,500 RPM in an 'average' sized motor (fractional horsepower types), but the same magnitude of imbalance will destroy bearings in larger motors or at higher speeds. Where balancing is performed, it should ideally be a dynamic test, where the rotor is spun and computerised equipment pinpoints the exact location where weight(s) have to be added or removed. Less satisfactory (but often acceptable) is a static balance, where the part to be balanced is placed on a perfectly flat pair of knife-edge supports, or fitted with very low friction bearings. If the part always turns so a particular point faces down, then that part is obviously heavier than the rest. When (very gently) rolled on the knife-edges or rotated, a well balanced part should stop at a random position each time. This doesn't mean that the rotor is truly balanced though - there may be situations where there is equal but opposite imbalance laterally (along the length/ width of the part to be balanced).
Dynamic balancing will ensure that all sources of imbalance are identified. Anyone who's seen car wheels and tyres being balanced will know that weights are added to the inside and outside of the rims, to ensure that there are no lateral forces. This can't be identified with static balancing, which is rarely used any more. Some information is available at Motor Repair Best Practices, but there's a lot more general info available if you search for 'Dynamic Balancing' (which isn't specific to electric motors).

Few (if any) hobbyists will have access to precision dynamic balancing equipment, as it tends to be rather expensive. However, it's important that not only the motor's rotor, but anything directly attached to the motor shaft is balanced as well. Failure to balance high-speed (or high rotating mass) loads will lead to bearing failure, or even complete motor destruction should a bearing fail catastrophically! It's unlikely that most hobbyists will ever need to balance a rotor, but if you're dealing with high-speed motors it's something to think about.

A good introduction to static and dynamic balance is shown at Balancing Know-How: Understanding Unbalance (YouTube). If you want (or need) more, there are many other videos on the subject. As always, be careful. Just because someone posts a video, that doesn't mean they know what they're talking about!
This is a primer on electric motors, and as such doesn't attempt to cover every possibility. Motors are used in so many different ways that it would be impossible to list them all, and even more so to examine every control system. There is a great deal of information on-line, and most of it is accurate. Unlike audio, there is very little snake-oil in the motor industry, but some people have used motors to sell fraudulent products ('power savers' are a prime example - they are almost all based on peoples' lack of understanding).

There isn't a great deal to add here, other than to commend the interested reader to do further research on the specific type of motor s/he intends to use. While the motor as a basic 'black-box' machine is very simple, there are subtleties that can easily cause any project that relies on motors to fail. I've only shown a limited number of references below, but there are thousands of articles about motors, covering every type in use. Many include photos and drawings, along with formulae for calculating almost every aspect of their operation.

I deliberately kept the maths in this article to the bare minimum, and there are no phasor diagrams or animations. These are all available if the basic concepts don't seem to help you make sense of the operation and control of any style of motor. While very simple machines, electric motors are (like transformers) far more complex than they appear. Hopefully, this primer will help the reader to appreciate their versatility and give some insight into how they work.
+ + +![]() |
Elliott Sound Products | Multimeters |
One of the first pieces of test equipment bought by anyone interested in electronics is a multimeter. These are also referred to as a 'VOM' - volt, ohm meter (or volt, ohm & milliamp meter). The range is bewildering to the newcomer, but at least 99% of buyers will choose a digital meter. One of the reasons is that they are far more readily available than analogue (with a moving-coil meter movement), and usually a great deal cheaper. Analogue meters also require a bit more skill to operate, as you have to select the scale appropriate for the selected range. You also need to minimise parallax error - looking at the pointer and scale from an angle other than 90°. Better moving-coil meters have a mirrored scale, and the reading is within the rated accuracy when the pointer and its reflection are superimposed. In the days when analogue meters were the only choice, there were many people who could never get to grips with the ranges, scales and multiplication factors needed to obtain a meaningful result.
An auto-ranging digital meter only requires that you select a range (e.g. DC volts, Ohms, etc.) and the readout shows you the value in the correct units. For most measurements you don't need to think about the units, as the display shows the measured value and its units (AC volts, DC volts, etc.). There is an 'implied' accuracy that's often misleading, because the reading may show four or more digits, and users almost always assume that the value displayed is exact. In reality, there's a stated accuracy (typically 1% for low cost meters), but the last digit may be ±2 digits off the true value. When you see the accuracy described as 1% ±2 digits, that means that a voltage of (say) 10.00V could be displayed as anything from 9.88 to 10.12 volts. However, users (and that means all users, even those who know better) tend to take the displayed value as 'gospel'.
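The '1% ±2 digits' arithmetic above can be sketched as follows; the function name and parameters are my own, purely for illustration:

```python
def reading_bounds(true_value, pct_accuracy, digit_error, resolution):
    """Worst-case display range for a meter reading.

    pct_accuracy : stated accuracy in percent (e.g. 1.0 for 1%)
    digit_error  : the '±N digits' term (e.g. 2)
    resolution   : value of one count of the last digit (e.g. 0.01 V)
    """
    err = true_value * pct_accuracy / 100 + digit_error * resolution
    return true_value - err, true_value + err

# 10.00V on a 1% ±2 digit meter with 10mV resolution:
lo, hi = reading_bounds(10.00, 1.0, 2, 0.01)
print(f"{lo:.2f} .. {hi:.2f}")   # 9.88 .. 10.12
```

Note that the '±N digits' term is fixed, so it matters proportionally more near the bottom of a range than near full scale.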
In a well equipped workshop, it's very handy to have both types of meter available. Analogue meters are far better at displaying a changing voltage, current or resistance. Even fairly large cyclic changes are easily averaged by eye. You simply look at the pointer and estimate the midpoint of the maximum and minimum pointer swings to obtain the average. A digital meter will usually just show a reading that changes (at the sampling rate), and it's usually impossible to determine a true average. It's become common for (LCD) digital meters to have a 'bargraph' along with the digital display that supposedly gives you the best of both worlds, but I find them next to useless when taking measurements. Note that some digital meters will show the true average of a cyclic voltage, but not all. Some will display garbage - no use to anyone.
Despite the apparent simplicity of multimeters, many people still have difficulties interpreting the results. This is true of both analogue and digital meters. Analogue types are always harder to interpret, but digital meters have a very high input impedance, and it can sometimes appear that an AC voltage is present, even though the available current may only be a few microamps. This can lead to great frustration, with the user wondering where the voltage is coming from, when it may only be due to capacitive coupling between insulated conductors.
If one has been caught out by this a few times, it can become dangerous. The user assumes that the measurement is false (having been caught out before), then discovers that it was very real! 'Standard' analogue meters are better in this respect, because they have a relatively low impedance. This will load very high impedance 'leakage' paths, but show the true voltage if it's present. Great accuracy isn't important, but getting a definite answer (safe/ unsafe) is important.
WARNING: Always take great care when measuring high voltages (AC or DC). Use only probes and test leads that are rated for the voltage being measured, and do not attempt to measure any voltage that is (or is suspected to be) greater than the meter's maximum voltage rating. The information here is provided in good faith, but does not (and can not) cover every eventuality. Safe work practices are the reader's responsibility, and must be applied at all times. If unsure, always seek professional assistance before risking your life! Never use test leads that show signs of abrasion, damage, or that have been modified or mistreated.
You always need to know what to expect before taking a measurement, otherwise you'll never know if the measured value is alright or not. Your guess doesn't have to be particularly accurate, but it should be based on the component values in the circuit, or (in very few cases these days) the voltages may even be shown on the circuit diagram. In the 'old' days, this was common, and in many cases they even described the type of meter used to take the measurement! Alas, this is no more. In most cases it's expected that a digital meter will be used, but you're usually left to work out what voltage(s) should exist in a circuit. In some cases (such as power amplifiers and other circuits that use DC feedback), the voltages will only ever be 'sensible' when everything is working normally, when measurements aren't really needed.
It's worth pointing out that the death of the analogue multimeter has been greatly exaggerated. There are countless new models still available, with a wide price range. Many vintage instruments are popular with collectors and restorers, but the usefulness of the analogue movement is such that there's still a strong demand. I was actually surprised at the number of new models I found while researching for this article - there are far more than I ever imagined. They don't have the accuracy of a digital meter, but mostly you don't need it anyway, and IMO no digital meter has the 'charm' of a good analogue multimeter. A well-equipped workshop will have both.
An analogue multimeter uses a moving-coil movement, almost always of the D'Arsonval type (see Meters, Multipliers & Shunts for the details). The input impedance is not easy to understand at first, because it's usually quoted as kΩ/V. This figure depends on the range selected, not the displayed value. Most decent analogue meters are 20kΩ/V, meaning that on the 1V range the meter impedance is 20k, so the movement has a sensitivity of 50µA full scale. Very cheap meters can be as low as 1kΩ/V, meaning that the movement is 1mA full scale. These are not recommended, because a) they are cheap (in all respects) and b) some circuitry will be 'upset' by the current drawn by the meter. The more you're willing to pay, the more sensitive the meter movement; the most sensitive I've heard of was made by Sanwa, and was 2µA full scale (500kΩ/V). You also need to be aware of the accuracy, generally 3% for DC and somewhat worse for AC. Scale linearity depends on the quality of the movement, and while it may be adjustable (internally), I don't recommend that you attempt it.
A properly balanced movement will show zero on the scale regardless of the angle of the meter (vertical, horizontal, 45°, etc.). Unless you pay serious money, don't expect this to be the case. Setting up a moving-coil meter movement to be unaffected by angle (balancing the moving parts) is a painstaking process, and unless you're an instrument technician I suggest that you don't fiddle with it. It's far easier to make it worse than better.
To make sense of the kΩ/V specification, you look at each range, which may be 0.5V, 2.5V, 10V, 50V, 250V and 1kV full scale. For each range, you multiply the range by the kΩ rating, not the reading. The impedance for the 0.5V range is therefore 10k, 50k for the 2.5V range, 200k for the 10V range and so on. This means that if you're measuring the voltage of a high-impedance circuit, the reading will change as you change ranges. This is disconcerting for beginners in particular, as there appears to be an inconsistency in the readings. The meter (and its reading) is operating as dictated by Ohm's law, and it's not inconsistent at all, but it does cause problems.
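The per-range impedance is simply the full-scale voltage multiplied by the sensitivity. A minimal sketch, assuming the common 20kΩ/V figure:

```python
def input_impedance_ohms(range_fullscale_v, kohm_per_v=20):
    """Impedance the meter presents on a given DC range (ohms)."""
    return range_fullscale_v * kohm_per_v * 1000

# The ranges mentioned in the text, for a 20k/V movement:
for rng in (0.5, 2.5, 10, 50, 250, 1000):
    print(f"{rng:>6}V range: {input_impedance_ohms(rng) / 1000:,.0f}k")
```

Running this reproduces the figures above: 10k on the 0.5V range, 50k on 2.5V, 200k on 10V, and so on up to 20M on the 1kV range.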
If a 20k/V meter is set to the 10V range (for example), the impedance is 200k. Most circuitry won't care about the load, but it makes a significant difference if you're measuring high impedance circuits, such as the plate voltage in a valve (vacuum tube) preamp or phase splitter circuit. For that you'd typically use the 250V range, which has an impedance of 5MΩ. While that sounds fine, if the plate resistor is (say) 220k, the meter will cause an error. You might expect to read 125V DC, but the meter's loading will reduce that to 122.3V. This may be within the accuracy specification, typically around ±3% for 'mid-range' meters. It isn't a limitation for users who understand that the vast majority of measurements don't need to be exact, but it can still be a nuisance.
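The loading error is an ordinary voltage divider calculation. In this sketch, the effective source resistance (the 220k plate resistor in parallel with an assumed, similar valve plate resistance, giving roughly 110k) is my assumption for illustration only:

```python
def loaded_reading(v_open, r_source, r_meter):
    """Voltage seen once the meter resistance loads the node (Thevenin model)."""
    return v_open * r_meter / (r_source + r_meter)

# 125V node, ~110k effective source resistance, read on the 250V range
# of a 20k/V meter (250 x 20k = 5M):
print(round(loaded_reading(125, 110e3, 5e6), 1))   # 122.3
```

The same function shows why the reading changes with the range selected on a high impedance circuit - only r_meter changes, but the divider ratio changes with it.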
There are two brands of analogue multimeter that are revered. In Britain, Australia, New Zealand (etc.), the Avometer (introduced in 1923) was the 'gold standard', and early models are sought after. Possibly one of the most distinctive meters of all time, the dial surround was the same shape as the dial scale (you'll have to look up a photo, but they are easy to find). They were expensive meters, but they were one of the most common brands found in well-equipped workshops. AVO held worldwide patents for many of the techniques used, and almost every other meter that followed was based on the original design. To this day, no other readily available multimeter uses a current transformer for AC current ranges, along with a full-wave (bridge) rectifier.
Although it came much later (some time during the 1930s), in the US and Canada, Simpson holds a similar position. They were (are) more conventional, but had a well deserved reputation for reliability and performance. From the Japanese makers, Sanwa and Micronta were probably the best known, although there were many others from manufacturers worldwide. The one shown next is one (of a small few) that I have, and it's a good meter for a very sensible price and is currently available (at the time of writing).
Figure 1 - Analogue Multimeter
The multiple scales are probably the thing that causes most people trouble with analogue meters. The ohms scale is reversed, with 0Ω at full scale. There are three different scales for DC volts, with 10, 50 and 250V ranges, and you use the one that corresponds to the selected range (dividing or multiplying by 10 as needed). AC volts has a separate scale for the 10V range, and it's not linear because the internal diode voltage drop affects the reading more at lower voltages. The AC voltage ranges also have an impedance that's usually only 9kΩ/V, so that will impose greater loading on the circuit. In almost all cases, the AC volts ranges are not AC coupled, so if DC is present as well, that will affect the reading.
The meter shown in Fig. 1 has extra functions, namely transistor testing. Mostly, this is next to useless (as is the same function on digital meters), because it's often hard to use, and/or doesn't test the devices at a realistic voltage or current. I have several meters that include transistor tests, but they are never used. Diode tests are another matter; the meter does not show resistance, but displays the forward voltage drop (be aware that the test lead polarities are almost always reversed, so red is negative). Digital meters with a 'diode test' range also show the forward voltage drop, but do not reverse the polarity.
One thing that is often very handy is the lowest ohms range. Unlike a digital meter that is 'auto-calibrated', you can set the ohms range to zero with the test leads in place, thereby eliminating them from the measurement. This lets you measure less than 1Ω (don't expect high accuracy), where a digital meter requires that you subtract the lead resistance from the total. Some digital meters have a 'relative' function that lets you zero the meter with the test leads in place, although it's not always obvious, and most don't have that option. Be aware that some analogue meters have a fairly high current on the low ohms range that may damage some components. The current can be over 150mA - look at the schematic below, in particular R11 (18.5Ω).
Figure 2 - Analogue Multimeter Schematic
The circuit shown is not meant to be representative of any particular meter, but provides the basics. The switching is invariably convoluted, because it's almost always a simple rotary switch on the outside, but it has to make all the right connections for every range internally. The majority now use a pattern etched into a PCB, often with gold plating to eliminate problems due to corrosion. The rotor itself has a set of joined contacts that connect the input and output for each range appropriately. Earlier meters used a rotary switch with four or more separate wafers, made specifically for the manufacturer.
Note that AC voltage measurements are almost always ½ wave rectified, and the meter is calibrated for RMS based on the average value. This leads to significant errors with asymmetrical waveforms, and any input signal that is not a sinewave. It also means that low voltages cannot be measured because of the diode voltage drop. Although most used germanium diodes (with a forward voltage of around 150-300mV, depending on current), that still meant that measuring less than 1V AC would introduce errors. This is why most have a separate scale to 10V AC, and it's compensated for the diode nonlinearity.
Note that the meter (and indeed many digital meters as well) doesn't include an AC current range. This is due to the very basic ½ wave rectifier used, which won't work at low voltages. Depending on the types of measurement you normally expect to make, this may or may not be an issue for you. To get accurate AC current measurements almost always means using a True RMS (digital) meter.
One thing that is common with better units is the use of two batteries for ohms readings. With a 1.5V (or 3V) supply, the maximum resistance that can be measured is limited by the voltage and the meter sensitivity. A 50µA meter can only read up to 30k with 1.5V, or 60k (both full scale) with 3V. Including a 9V battery (in the above it's in series with the 3V battery, giving 12V total) lets you measure up to 180k full scale (padded back to 100k with additional resistors). This allows 1MΩ and above to be measured, but at the lower end of the scale for anything over 1MΩ. In the heyday of 'simple' analogue meters, most resistors were generally only 5% tolerance, so any error was pretty much immaterial. A quirk of these meters is that the terminal voltages are reversed on the ohms ranges. If you're unaware of this, diode readings won't make sense.
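The battery/sensitivity limit above can be sketched as follows. At full-scale deflection (the 0Ω end of the scale), the total series resistance in the circuit must pass the movement's full-scale current, which is what bounds the readable range:

```python
def total_series_ohms(battery_v, movement_fullscale_a):
    """Total circuit resistance at full-scale deflection (0 ohms reading)."""
    return battery_v / movement_fullscale_a

# A 50uA movement, as discussed in the text:
print(round(total_series_ohms(1.5, 50e-6)))   # 30000 (30k with a 1.5V cell)
print(round(total_series_ohms(3.0, 50e-6)))   # 60000 (60k with 3V)
```

Adding the 9V battery raises the available voltage, and hence the usable resistance range, in exactly the same way.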
Prior to the advent of digital meters, the impedance limitation of standard moving-coil multimeters led to the development of the VTVM, typically using a dual triode valve to drive the meter movement. This made it possible to have a constant input impedance of (usually) 10MΩ (or 11MΩ with a 1MΩ probe resistor). This all but eliminated the problem of loading the circuit under test, and the input impedance remains constant on all DC voltage ranges. AC voltage measurements were generally less advanced, and were equally sensitive to waveform distortion.
VTVMs usually used the same rectifier (½ wave) as standard multimeters, as did later FET voltmeters that worked the same way as a VTVM, but with much lower power consumption. These have not disappeared, with several new FET models still available. Original versions regularly command fairly high prices as 'vintage' test gear. The only real advantage was a higher input impedance, and most were no more accurate than 'ordinary' multimeters.
Figure 3 - Basic FET Voltmeter (11MΩ Impedance)
You won't find a new VTVM on sale, but FET voltmeters are available, and I've included a very basic schematic above. As shown, it's intended for DC only, and this arrangement was often sufficient when working with high impedance circuits using valves. Most other measurements could be taken using a normal multimeter, but the vacuum tube or JFET meter would provide negligible circuit loading so that measurements on high impedance circuits would not cause circuit malfunction. In the circuit shown, the two JFETs must be matched, and in good thermal contact with each other. The 'Balance' and 'Cal' pots are internal, but the 'Set Zero' pot was always available on the front panel. JFETs are more stable than valves, but both drift and require adjustment.
Note that the circuit uses JFETs that are no longer available, and quite a few changes would be needed to make it work with the few choices available today. It's certainly not a circuit that I'd recommend that anyone try to build, and it's shown only for its historical significance. There are many similar circuits shown on the Net, with some having a better chance of working than others. In almost all cases, suitable JFETs are no longer available.
If you needed something like that these days, a FET input opamp would be the preferred option. With high gain, excellent linearity, low drift and an input impedance of around 1TΩ (1E12Ω), FET-input opamps make building a high input impedance meter easier than ever. Of course there's far less need for a simple high-voltage DC meter any more, because most circuitry is low impedance and even a very basic analogue multimeter will measure most voltages just fine. However, there are exceptions! The majority of digital meters have a high enough input impedance that very few circuits will be affected. However, the inherent limitation of all digital meters still applies - you can't read fluctuating voltages easily (if at all).
Other 'vintage' valve and FET meters used a switching system similar to that in the passive multimeter, although some only included AC and DC voltage, so the user needed a standard multimeter for measuring ohms. This wasn't an issue at the time, since most people involved in electronics had one (or more) standard multimeters as well. A few VTVMs included a parallel capacitive voltage divider (as shown in Project 16) so that AC voltage measurements extended to beyond 20kHz.
Note that it is possible to have an analogue readout with a True RMS AC measuring capability. This means that it must have internal electronics, and will require power (either battery or mains), and other than a few AC millivoltmeters there are none available that I'm aware of. The True RMS converter will typically be the AD636, AD737 or (perhaps) an LTC1967. These are not inexpensive ICs, so don't expect to find any of them in low-cost meters (analogue or digital). Most have an input sensitivity of 200mV. There's an Application note (AN268) from Analog Devices that describes the use of RMS converter ICs. It's well worth reading to find out how these devices are used.
These days, probably 99.9% of all multimeters sold are digital. I doubt that I need to show a photo of one, but I'll do so anyway. Most people use hand-held meters, but a good bench type multimeter is well worth having if you can justify the cost. I use both, but in the workshop the bench meter is always my first choice. A unit such as that shown below is under AU$400.00, which isn't cheap, but you do get a lot of meter for the money. One thing that (IMO) is absolutely essential is 'True RMS'. This means that the meter will show the actual RMS value of an AC signal, regardless of distortion.
With the averaging measurement system used by budget meters, the reading can be so far off the mark that the measurement is useless. This topic is covered in AN-012, Peak, RMS And Averaging Circuits in the ESP app. notes section. As an example, a symmetrical squarewave will read 11% high, and a pulse-train can measure as much as 90% low. True RMS meters used to command a very high price, but these days a hand-held True RMS meter can be bought for less than AU$50.00. The meter shown below is the one that I use most of the time. It's served me well, as I've had it for many years. It's mains powered, but that's common for bench meters and isn't a problem because they aren't moved around. Readouts that extend to 55,000 counts give good low-level resolution, and True RMS bench meters can be found for less than AU$250.00 at the time of writing. You can pay a great deal more of course, and you need to check the specifications.
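The squarewave and pulse-train figures quoted above fall directly out of the sine form factor (≈1.11) that average-responding meters are calibrated with. A quick sketch:

```python
import math

SINE_FORM_FACTOR = math.pi / (2 * math.sqrt(2))   # ~1.111, RMS/average for a sine

def average_responding_reading(rectified_average):
    """What an average-responding, RMS-calibrated meter displays."""
    return rectified_average * SINE_FORM_FACTOR

# Symmetrical squarewave, 1V peak: rectified average = true RMS = 1V
square = average_responding_reading(1.0)
print(f"squarewave reads {100 * (square - 1.0):.0f}% high")       # ~11% high

# Narrow pulse train, 1V peak, 1% duty cycle: average = 0.01V, true RMS = 0.1V
pulse = average_responding_reading(0.01)
print(f"pulse train reads {100 * (1 - pulse / 0.1):.0f}% low")    # ~89% low
```

A True RMS meter sidesteps all of this by computing the RMS value directly, so the waveform shape (within the meter's crest-factor limits) doesn't matter.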
The basic building block for digital meters is the analogue to digital converter (ADC). The most common arrangement is a dual-slope integrating type [ 4 ]. These are a low-cost but very linear ADC, with the ICL7106/7 3½ digit devices (maximum reading 1.999) being very common, and the maximum voltage that's displayed is 199.9mV. There are other ICs too, but a detailed description of those available is outside the scope of this article. External circuitry is required to allow measurement of voltages above 200mV, AC, current and resistance. There are application notes available should you wish to build your own, but given the low cost of 'standard' 3½ digit multimeters making one is ill-advised. It will cost a great deal more to build than you can pay for one that's available (often less than AU$10.00). 3¾ digit meters (3.999 maximum reading) are available for less than AU$40.00 and there's no way you could build one for less.
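For a sense of scale, the resolution of a 3½ digit converter can be sketched like this (the function name is mine, for illustration):

```python
def lsb_volts(fullscale_v, counts):
    """Value of one count (the last digit) of an integrating ADC readout."""
    return fullscale_v / counts

# ICL7106/7 style: 1999 counts, 199.9mV basic full-scale reading
print(round(lsb_volts(0.1999, 1999) * 1e6, 1), "uV per count")   # 100.0 uV per count
```

Everything above 200mV (higher voltage ranges, current, ohms) is handled by external dividers, shunts and reference resistors scaled back to that basic 199.9mV input.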
Figure 4 - Digital Bench Multimeter
One thing that a lot of digital meters (and almost all analogue meters) are very bad at is measuring high frequencies. Some digital meters include a frequency counter that may extend to 10MHz or more, but voltage measurements may be limited to only a few kHz. 'Ordinary' (not True RMS) meters are generally worse than True RMS types, with some being incapable of measuring anything beyond 2-3kHz. To make matters worse, most don't fully specify the frequency range for AC measurements, and some don't mention it at all! Most True RMS meters extend to at least 10kHz, and often much more. The one shown in Fig. 4 has been tested fairly thoroughly, and is within 2% at 90kHz. That makes it useful for audio frequency measurements, but not much more.
This doesn't mean that average-reading meters are of no use, because not everyone needs to measure AC voltages other than low-frequency sinewaves (or close to, such as the AC mains derived voltages from transformer windings). Digital multimeters are now pretty much all that most people ever use, but there are traps for the unwary. The greatest of these is accuracy. You may wonder how this could possibly qualify as a 'trap', but mistakes are very common, because we believe the digits. If the meter shows the output of a 5V regulator as 5.00V everyone is happy, but that doesn't consider the error that's inherent in all meters and regulators.
That exact 5V measured may actually be anywhere between 4.93V and 5.07V, allowing for 1% ±2 digits accuracy. I've lost count of the number of people who've built a ±15V P05 (or other regulated supply) and said that their meter showed +14.8V and -15.3V, and wondering if this is alright. Anyone used to an analogue meter would simply look at the pointer, see that it's 'close enough' to the required voltage and move on. The implied precision of a digital multimeter has people wondering what's wrong when a 5V supply measures 5.1V (or 4.9V) when there's absolutely nothing amiss.
Nothing in life is perfectly precise. Regulators have small errors, as do the meters used to measure their outputs. All electronic parts have some leeway for supply voltages, and it should be obvious that a small error won't cause any problems. Most opamps will happily operate with +27V and -3V if you want them to (but obviously this is not the case for those with a 5V maximum supply voltage), and logic ICs (including PICs, microcontrollers and CPUs) will all handle voltage variations as shown in the datasheet. Processor ICs (CPUs) are probably the most fussy, as they operate at low voltages (2.7V and/or 3.3V). Even these have leeway, and a 5V supply is expected to be within the range of 4.75V to 5.25V (±5%, [ 5 ]). Unless a multimeter has been damaged, the voltage you read will normally be more accurate than the supply requirements.
If you need to make accurate measurements, a 6-digit (or 5½ digit) meter is worthwhile. You can get more digits, but noise eventually becomes the dominant factor and limits the achievable resolution with AC measurements. DC is usually less restricted, as multiple readings can be averaged. The end result will be as accurate as the meter allows. Most digital meters offer auto-ranging, so you only need to select the quantity to be measured (DC V, AC V, ohms, etc.) and the meter will adjust itself to give the most appropriate display. This is in contrast to analogue meters, where you must select the range. If you were to select (accidentally or otherwise) the 0.5V DC range and try to measure 230V or 120V AC, the meter would almost certainly be destroyed. Many have an internal fuse, but that's often only provided for the 10A range (this applies to both analogue and digital meters).
Low resistance readings often pose a problem unless the meter has a 'relative' function. This lets you short the leads, then select the relative function which resets the reading to zero. Provided you make a good connection to the device under test you'll get a fairly accurate measurement. Don't expect to measure much below an ohm or so with any accuracy though, as for that you need a dedicated 4-wire measuring technique. This technique is described in Project 168 (Low Ohms Meter).
Almost all multimeters use a shunt for measuring current. The manual may or may not indicate the value, but it can be measured if you have another meter. Note that this basic technique only works if the meter is not auto-ranging, and you must be willing (and able) to perform a meaningful test. This is not necessarily easy, as you need a variable power supply that can deliver the current needed.
Connect the meter (in DC current mode) to your power supply, with a series resistor to limit the current. Most meters only measure up to 250mA or so (not including the 10A shunt if provided). Apply a voltage to obtain a low current (around 2.5mA or to suit the lowest range), and measure the voltage across the multimeter. If the voltage measured is 250mV with 2.5mA displayed, the resistance is determined by Ohm's law, and is 100Ω. The same procedure is used for higher ranges (10mA, 25mA and 250mA for example).
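The shunt calculation in this procedure is plain Ohm's law; a minimal sketch using the figures from the text:

```python
def shunt_resistance(v_across_meter_v, i_displayed_a):
    """Ohm's law: the meter's shunt (burden) resistance on the chosen range."""
    return v_across_meter_v / i_displayed_a

# 250mV measured across the meter while it displays 2.5mA:
print(f"{shunt_resistance(0.250, 2.5e-3):.0f} ohms")   # 100 ohms
```

Repeat the same measurement on each range (10mA, 25mA, 250mA, etc.) to build a table of burden resistances for your meter.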
By knowing this, you can work out the voltage drop across the meter when measuring current. If you ignore the voltage drop, you may get readings that don't seem to make sense. The more you know about your test equipment the better, as you're less likely to get readings that appear to be wrong. No-one expects the user to know everything, but there are always things that you need to know to get the best from the meter.
This technique is just as important (if not more so) for digital meters, because we all see the digits and believe the number, even if it's wrong! With an auto-ranging meter, you need to increase the current (from the initial low value) until the range switches, and (with some) the voltage across the meter suddenly drops. Set the current for that range to something that makes an easy calculation, and you'll soon know the voltage dropped across the meter for each range.
I tested my bench meter, and it uses a constant resistance for all current measurements. It extends to 800mA, and shows a voltage drop of 1.28V at 800mA, a burden resistance of 1.6Ω. At low currents this is immaterial, dropping only 16mV at 10mA, but it becomes significant at higher currents. It's never caused me any problems though, because I'm aware of it, even though I hadn't measured it before. The burden may be specified as a mV/mA figure, which in the case of my meter is 1.6mV/mA. Note that this does not include the test leads, and most specifications that describe the burden won't include them either.
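Since mV/mA is numerically the same as ohms, the burden drop at any current is a one-line calculation (the figures here are from my bench meter, as measured above):

```python
def burden_drop_v(burden_mv_per_ma, current_a):
    """Voltage lost across the meter; mV/mA is numerically the same as ohms."""
    return burden_mv_per_ma * current_a

print(round(burden_drop_v(1.6, 0.800), 2))   # 1.28 (volts at 800mA)
print(round(burden_drop_v(1.6, 0.010), 3))   # 0.016 (volts at 10mA)
```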
I tested another (switched range) digital meter, and it measured 100Ω on the 2mA range, 10Ω on the 20mA range and 1Ω for the 200mA range. These correspond to the basic digital meter IC sensitivity, which is typically 200mV full scale. With an analogue meter, the voltage across the shunt(s) must be sufficient to deliver the movement's full-scale current (e.g. 50µA). The unknown quantity is the internal resistance of the meter itself, which is typically 2kΩ, but it can vary from around 1.5kΩ to 5kΩ depending on the way the movement was made (particularly magnetic field strength).
In the analogue meter circuit (Fig. 2), you can see that the current ranges from 2.5mA to 250mA use odd value resistors (R8, R9 and R10). This is due to other resistors that are in series and parallel with the movement, which skews the ranges. As a result, a little more voltage is needed at low current than at high current (there's a total of about 2.49k [R7, R22 and VR1+R25] in parallel with the movement, and 240Ω [R21] in series). I'm not about to perform a full circuit analysis for each range here, but it all works out to better than 1.6% accuracy.
Taking measurements is easy for voltage (AC or DC), and if you have a meter with switched ranges and an unknown voltage, it's a good idea to always select the highest range first, and reduce the range until you have a sensible reading. Whenever you use the milliamps or ohms ranges, make sure that you put the red plug back into the correct socket for measuring voltage as soon as you're done. It's all too easy to damage your meter or the circuit being tested if you try to measure voltage when the test lead is plugged into the current measuring socket. A (very) few meters have mechanical shutters that block access to the current measurement socket unless current measurement has been selected.
Current readings always require that you break the circuit, so the meter is in series with the power supply and the device under test (DUT). This is often a real nuisance, and I will often use a series resistor and measure the voltage across it. The resistance needs to be selected based on the expected current draw of the DUT. For example, if you expect it to draw 100mA and the supply voltage is more than 5V, a 1Ω resistor will show 100mV across it at 100mA. The powered circuit gets 100mV less voltage than intended, but nearly any 5V circuit will function normally with 4.9V. The meter does the same thing, with selected shunt resistors for each range (look at R8, R9 and R10 in Fig. 2). All 'conventional' meters (analogue and digital) use the same arrangement, so when measuring current there is always some of the supply voltage dropped across the meter. This is known as the 'burden' resistance. The meter doesn't measure current directly, but instead does the same as an external resistor. The voltage across the resistor is displayed, but read as current (see previous section).
AC current measurements are subject to the same limitations as AC voltage measurements. Almost all meters that are capable of measuring AC amps/ milliamps are True RMS types, because a simple rectifier has too much voltage loss to allow measurement of AC at low voltages. The meter must be capable of providing an acceptable frequency response, and again, True RMS meters are likely to have better high frequency response than simple ½ wave rectifiers. Unless you run your own tests to verify frequency response, assume that most True RMS meters will be limited to about 5kHz at most, and 'ordinary' meters usually somewhat less. Most low-cost digital meters don't offer AC current ranges, because to do so requires a precision rectifier (although some may perform the rectification using the meter's processor IC if it has that capability).
Insulation testers (often called 'Meggers' after the original insulation testers - the name is a registered trade mark, but has become 'generic') are a special case. Most are designed for one task - measuring insulation resistance. In Australia, the standard test for household mains wiring (with the test performed before the energy supplier will connect the mains) is 500V DC, with a minimum resistance of 1MΩ. Normally, each mains conductor (active [hot] and neutral) will be tested between each other and to mains earth. Some insulation testers also include a high-current earth (ground) test mode, and/ or selectable voltages (usually 500V or 1,000V DC for 230V countries).
These are specialised tools, and are often rather costly. However, verifying that all building wiring is safe is rather more important than the one-off cost of the tester. Another specialised tester is a 'PAT' (portable appliance tester) unit, which is used for testing individual portable appliances, extension and removable mains leads. Most workplaces will have a test regime set up so that all equipment that plugs into a wall outlet is tested regularly. The electrical tests include polarity (active, neutral or earth not swapped), leakage (mains to chassis or earth connection) and earth continuity (the earth connection must not exceed 1Ω). A more rigorous test uses a high current to verify that the earth connection can carry at least 10A without failure.
I won't go into any more detail on these testers, because they are specialised and are not applicable for normal hobbyist workshop duty. Having said that, I do have a 1kV insulation tester that I use to verify that an old transformer I may be thinking of using is still 'safe'. I also sometimes use it to check that transistors are truly isolated from the heatsink (25µm thick [thin?] Kapton will withstand 1kV easily). Mine probably only gets dragged out a few times each year, but if I didn't have it I'd have to buy one, as they are useful. This is especially true as I do a wide variety of tests and experiments, not all of which are audio related.
Dedicated AC millivoltmeters are another useful tool, but they are generally fairly expensive. Most use a moving-coil (analogue) readout, and almost invariably use a 1-3-10 range switch. This is done to get 10dB steps, and the actual ranges are 1-3.16-10 (along with multiples and sub-multiples thereof). Most will measure down to 3mV full scale, and have a frequency response that remains flat up to at least 100kHz. An example of a DIY version is shown in Project 16, and it has a capacitive voltage divider in parallel with the resistive divider. This minimises the effects of stray capacitance that causes serious errors at frequencies above around 10kHz.
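The 1-3-10 range sequence can be verified with a couple of lines of arithmetic (a quick illustrative check, not part of the original article): the '3' position is really √10 ≈ 3.162, so every range step is a voltage ratio of √10, which is exactly 10dB.

```python
import math

# The 1-3-10 markings are really 1, sqrt(10) (~3.162) and 10, so each
# range step is a voltage ratio of sqrt(10) -- exactly 10 dB.
ranges_mV = [1.0, math.sqrt(10), 10.0, 10 * math.sqrt(10), 100.0]
steps_dB = [20 * math.log10(hi / lo)
            for lo, hi in zip(ranges_mV, ranges_mV[1:])]
# every entry in steps_dB is exactly 10.0
```

This is why the meter scale can carry a single dB calibration that stays valid on every range.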
Like insulation testers, these are fairly specialised, and are only useful if you have an audio oscillator (with flat frequency response across the range) so you can perform frequency response tests. They are also at the heart of distortion meters, with the final measurement calibrated in % THD (total harmonic distortion plus noise) - see Project 52 for an example of a distortion meter. The circuit shown needs a millivoltmeter to measure the THD.
I've been using my audio millivoltmeter for around 40 years, and couldn't be without one. There are currently three that get used, two of which are in distortion meters, and the Project 16 version which is stand-alone. The P16 unit is particularly useful with high impedance circuits, as it has a 2MΩ input impedance. Most hobbyists won't need one, but for design work a millivoltmeter is an invaluable tool. The same tests can be done using an oscilloscope (albeit with a few calculations to convert to dB). Although my digital bench meter (Fig. 3) can also measure millivolts, it's very slow, and it's completely useless for many tasks. Reading a steady-state low voltage works well enough, but having to wait for up to 10 seconds for a stable reading is sub-optimal (to put it mildly).
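The 'few calculations' mentioned above amount to one formula. A minimal helper (illustrative; the function name is my own) converts a pair of oscilloscope voltage readings to a level in dB:

```python
import math

def volts_to_db(v, v_ref):
    """Express voltage v relative to v_ref as dB (20*log10 of the ratio)."""
    return 20 * math.log10(v / v_ref)

# A response that has fallen to 0.707 of its mid-band value is -3 dB down,
# and a 10:1 voltage ratio is 20 dB.
```

With this, a scope plus a flat audio oscillator covers most of what a dedicated millivoltmeter does for response measurements.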
Both analogue and digital meters are useful, and while I don't use my analogue meter a great deal, it's often much faster to see that the pointer shows 'about right' than to have to read the digits. Slowly moving voltages or currents are easily visible, and when cyclic it's easy to see the average with an analogue meter, but almost impossible with digital. If you plan on getting yourself an analogue multimeter, you generally should expect to pay at least AU$30.00 or so (very expensive ones are also available). Avoid anything advertised for less than ~$15.00 or so, as you'll almost certainly be bitterly disappointed. Many of the very cheap ones aren't even useful for spare parts - they are complete rubbish and not fit for purpose.
For a digital meter, get (at least) one with True RMS. They are usually fairly inexpensive, and even a cheap unit is handy to have as a spare, or to measure current while you use another to look at voltages. A well-equipped workshop will have several, and it's a very good idea to compare them regularly so you know that they all read the same voltage, current and resistance, within their accuracy specifications. If you find one that reads vastly differently from the others, you know it's out of calibration. Most can't be re-calibrated because the information you need isn't provided, so it may well end up as scrap, or a source of spares (handy if you have a couple that are the same make and model).
You always need to make sure that you've selected the right range for the intended measurement. Many (but by no means all) digital meters have some degree of protection built in, but if you try to measure 230V AC with the meter set for milliamps (AC or DC), expect instantaneous failure. Fuse protection is unlikely to save the meter from destruction, but it reduces the risk of fire or melting the case which may expose live parts. If you're not vigilant, this can be surprisingly easy to do, and I recommend that the meter is set for AC or DC volts (with a high range selected for analogue meters) when it's not being used. Naturally, the probes should be plugged into the 'normal' sockets (if a separate connection is used for milliamps). Make sure that the meter is used in a position where it can't fall. Even 'cushioned' cases won't save an analogue movement if it falls, with the most common failure being that the moving coil assembly 'jumps' out of its jewelled bearings and jams. This can be fixed if you have a good eye and a steady hand, but it's often quite tricky and you're dealing with very small (and delicate) electromechanical parts.
One final point: many (mainly cheap) analogue meters use a small (typically 2mm diameter) pin on the test leads, rather than (now almost always shrouded) 4mm (nominal) banana plugs and sockets. Avoid the small ones if at all possible (or at all costs), because it means that you can't use your other 'general purpose' test leads. I have at least ten (maybe more) leads with banana plugs and alligator clips that are used with a variety of meters, power supplies, loads, etc, and I very rarely use meter probes. This isn't a recommendation, it's just the way I've always worked - clip leads stay where you put them, but I do have an insulated 'probe' that I can attach a clip to when necessary. Most people will use the probes supplied with the meter, and if that works for you then it's all good.
Elliott Sound Products - Muting Circuits For Audio
Providing the ability to mute an audio stream is both very common and in many cases, essential. I've described a remote receiver and transmitter that uses a relay for muting, and of all the methods available that is one of the simplest. The relay is wired so that the normally closed contacts simply short the signal to earth (ground). Until the relay is activated, the signal is muted, and because the contact resistance is typically only a few milliohms, there's no need to add resistance to the circuit that provides the audio signal (typically an opamp).
Most opamps are perfectly happy to have their outputs shorted, but as a matter of course I always include a 100 ohm output resistor to ensure stability if the opamp is connected to a reactive load such as a length of shielded cable. In the many years that I've been designing and building opamp and other 'small signal' circuits, I've never had one fail because its output was shorted.
The down side of using a relay is that muting (and un-muting) the signal is very abrupt, and while the difference between 'hard' and 'soft' muting is audible, most people are perfectly happy with relays for muting. Some people may not like the audible click made by a relay when it opens or closes, while others do. I like it because it provides audible feedback that the circuit is functioning, regardless of whether there's a signal or not. There are several other options for muting, as described below.
Mute circuits are also used at the inputs of many professional power amps. In some cases the signal is muted if the amp gets too hot, and nearly all mute the inputs for 1 - 5 seconds after power on. This is done so that turn-on/ off noises from mixers and other gear are blocked in case the entire system is powered up at once. There are countless applications for muting circuits, and not all are there so you can stop the noise from TV ads.
One thing that is very important is that there must be no DC along with the signal that's being muted. Any single supply source (such as a USB DAC for example) must be capacitively coupled and have a bleed resistor to the signal common (earth/ ground) to ensure that the DC component is removed. Failure to do so may damage your loudspeakers, because the DC offset from some circuits can be quite high. If a mute circuit suddenly removes perhaps 700mV of signal along with 2.5V of DC, the noise will be very loud indeed!
One thing that really surprised me was the number of patents that cover perfectly ordinary muting circuits that have been used for years by any number of manufacturers. In general, these patents aren't likely to be worth the paper they are written on, because they are largely in the 'public domain'. It pretty much goes without saying that these patents have been granted in the US, where the patent system is often considered to be broken. It's hard to argue with this assessment, because there are so many patents that simply don't make any sense.
One form of muting that has been around for a very long time is used in FM receivers and other radio applications. When used with communications and CB ('citizens band') receivers, it's commonly called a 'squelch' circuit, and it's designed to mute the inter-channel noise. If no RF signal is received, you will normally hear white noise because the receiver operates at maximum gain and amplifies external noise as well as internal circuit noise. The muting circuit cuts out the background noise, but is released as soon as an RF signal is received.
Another form is called 'ducking', common in broadcast systems. If the announcer speaks while music is playing, the 'ducking' circuit partially mutes the music by reducing its level to some preset value that can be set with a pot. Some of the circuits below can be used to this end, by adding a variable resistance in series with the muting switch. The control and/ or level reduction circuitry is not described here because each case will be different. Ducking circuits will most commonly use a comparatively slow attack and release time so the effect is not abrupt, and an LED+LDR circuit is the most appropriate.
Of all the methods that can be used, this is my personal favourite. It's very reliable, and automatically mutes the signal when there is no power. To make the signal audible again, a transistor is turned on that energises the relay coil and removes the short. This can be done after a power-on delay, or at the touch of a button (local or remote). With a bit of extra circuitry, the mute can be reapplied at the instant power is turned off. This is provided for in the P05 preamp power supply for example, and it uses a 'loss-of-AC' detector circuit. The most common relay will be a miniature DPDT (double-pole, double-throw) type, and a single relay can mute both channels of a stereo preamp.
Naturally the relay has to be powered for as long as the signal is required. A typical small signal relay might draw around 12mA or so (assuming a 12V coil and a 1k coil resistance). The one pictured below has a coil resistance of 360Ω (12V coil) and will draw 33mA. There is some dissipation, but in real terms it's nothing to worry about with mains powered equipment. For battery operation it's another matter though, as every milliamp matters.
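The coil figures quoted are straight Ohm's law, and it's worth a quick check when selecting a relay for battery operation. A small illustrative sketch (the helper names are my own):

```python
def coil_current_mA(v_coil, r_coil):
    """Relay coil current in mA, from coil voltage (V) and resistance (ohms)."""
    return 1000.0 * v_coil / r_coil

def coil_dissipation_mW(v_coil, r_coil):
    """Power dissipated in the coil, in mW (V^2 / R)."""
    return 1000.0 * v_coil ** 2 / r_coil

# 12 V coil, 1k:   12 mA and 144 mW -- negligible for mains-powered gear.
# 12 V coil, 360R: ~33 mA and 400 mW -- a real cost on batteries.
```

For battery gear, the 360Ω coil costs almost three times the current of the 1k type, which is why relay muting is usually reserved for mains-powered equipment.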
The input tagged 'CTRL' (control) activates the relay and removes the short with the application of a voltage from 5 to 12V. The attenuation of a relay is close to infinite because the contact resistance will be a few milliohms at the most. Because the normally closed (NC) contacts are only used to short the signal to earth, there is zero signal degradation when the contacts are open. A relay also provides close to perfect protection for the output stage of any preamp, so stray static charges and other potentially damaging signals are simply shorted to earth and can do no harm.
The only down side of using a relay is that it usually does short the output of the preceding stage, although that can be solved if you are willing to pass the signal through the normally open contacts when the relay is activated (the output then connects to the 'NO' relay contacts). There are some circuits that may not be happy with a shorted output - discrete opamps and other all-transistor circuits. Project 37 (DoZ Preamp) is one example, but provided the 100 ohm output resistor is included it's unlikely that it will come to any harm with normal signal levels (up to 3V RMS output). The easy way to ensure that it's happy at any level is to increase the output resistor to 560Ω. This is quite low enough for any preamp, and means that a shorted output cannot damage the preamp.
Relays are also ideal when balanced interconnections are used, as the relay contacts can simply short the 'hot' and 'cold' balanced signals together. Alternatively, a double pole relay can short both balanced signals to earth. Note that you absolutely must never short the two signal lines to earth when phantom power is used (either by design or accident). The method used depends on the application and the designer's preference.
Junction FETs (JFETs) can also be used, and like the relay they mute the signal by default. To un-mute the audio, a negative voltage is applied to the gate, turning off the JFET and removing the 'short' it creates. Unlike a relay, JFETs have significant resistance when turned on. The J11x series are often used as muting devices, and while certainly effective, the source impedance has to be higher than with a relay. The typical on-resistance (RDS-on) of a J111 is 30Ω (with 0V between gate and source). The J112 has an on-resistance of 50Ω, and the J113 is 100Ω (the latter is not recommended for muting). I tested a J109 (which is better than the others mentioned, but is now harder to get) with a 1k series resistor, and measured 44dB muting, and that's not good enough, so two JFETs are needed as shown.
Note that JFETs will generally not be appropriate for partial muting (for a 'ducking' circuit for example), because when partially on they have significant distortion, unless the signal level is very low (no more than around 20mV), and/or distortion cancelling is applied. This application is not covered here.
To un-mute the signal, it's only necessary to apply a negative voltage to the gates. There is no current to speak of, and dissipation is negligible. JFETs are ideal for battery powered equipment, but there has to be enough available negative voltage to ensure that the JFET remains fully off ... over the full signal voltage range. If you use a J111 with a 10V peak audio signal, the negative gate voltage must be at least -20V (the 'worst-case' VGS (off) voltage is 10V), and the gate voltage must not allow the JFET to turn on at any part of the input waveform.
Using a JFET to get a 'soft' muting characteristic works well. The JFET will distort the signal as it turns on or off, but if the fade-in and out is fairly fast (about 10ms as shown) the distortion will not be audible. You may be able to use a higher capacitance for a slower mute action, but you'll have to judge the result for yourself. I tested the circuit above (but using a single J109 FET) and the mute/ un-mute function is smooth (no clicks or pops) and no distortion is audible. Measured distortion when the signal is passed normally is the same as my oscillator's residual (0.02% THD).
If a JFET has an on-resistance of 30Ω, the maximum attenuation with a 2.2k source impedance is 37dB. This isn't enough, and you will need to use two JFETs as shown to get a high enough mute ratio. This is at the expense of total source resistance though. With the dual-stage circuit shown above, the mute level will be around -70dB. It is possible to reduce the value of the two resistors (to around 1kΩ), which reduces the attenuation to around 60dB - probably still sufficient for most purposes.
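The attenuation figures above follow from a simple voltage divider between the series resistor and the FET's on-resistance. A quick illustrative check (assumes ideal resistances and ignores inter-stage loading, which is why the real dual-stage figure is a little lower than a straight doubling):

```python
import math

def shunt_mute_dB(r_series, r_on):
    """Attenuation of one series-R / shunt-switch stage, in dB:
    the muted output is r_on / (r_series + r_on) of the input."""
    return 20 * math.log10((r_series + r_on) / r_on)

single = shunt_mute_dB(2200, 30)   # J111 (30 ohm) with 2.2k: ~37 dB
double = 2 * single                # two cascaded stages, loading ignored
```

The same formula explains the 1kΩ variant: halving the series resistors knocks roughly 6dB per stage off the muted figure.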
You can improve the attenuation by applying a small positive signal to the gate, but it should not exceed around +400mV. Any more will pass DC through to the signal line as the (normally reverse-biased) gate diode conducts. In general I would not recommend this, as it adds more parts that have to be calculated for the mute control circuit, and the benefit isn't worth the extra trouble.
There is also the option of using a JFET based optocoupler (the datasheet calls it a 'symmetrical bilateral silicon photo detector') such as the H11F1. These are claimed to have high linearity, but I don't have any to test so can't comment either way. According to the datasheet, low distortion can only be assured at low signal voltages (less than 50mV). They might work as a muting device, but the FET is turned off by default, and turns on when current is applied to the internal LED. This means that the internal FET would need to be in series with the output for mute action when there's no DC present. The on resistance of the FET is 200Ω with a forward current of 16mA through the LED.
Analog Devices used to make ICs called the SSM2402 and SSM2412 that included a three JFET 'T' attenuator and a complete controller circuit for a two channel audio switching and/or muting circuit. They have been discontinued, and there doesn't appear to be a replacement. They were aimed at professional applications such as mixers and broadcast routing, and would be useful parts if still available.
It may seem unlikely, but ordinary bipolar junction transistors can be used for on/off muting. Several manufacturers made transistors that were specially designed for the purpose (such as the Toshiba 2SC2878 (TO-92) or Rohm 2SD2704K (SOT-346 SMD), which appears to still be available), but perhaps surprisingly, 'ordinary' transistors work perfectly well. The purpose designed devices have roughly equal gain when the emitter and collector are reversed (sometimes referred to as 'reverse gain'), while 'normal' transistors are optimised for maximum gain when the emitter and collector are used as intended.
Provided enough base current is available, a standard transistor (such as the BC549 which I tested) works perfectly. The transistor will handle signal levels up to 5V RMS easily, and when turned on the attenuation is very high. One complication with BJTs is that the base must be completely open circuit when the mute signal is absent. Even a high resistance (such as 1Meg) will cause high levels of asymmetrical distortion. The system shown works very well, but alternatively the base of the mute transistors can be driven to a negative voltage when off. The negative voltage (if used) has to be greater than the peak signal voltage and must be less than the base-emitter reverse breakdown voltage (typically around 5V). If this is exceeded the transistor will be damaged. I don't intend to show the circuit using a negative bias voltage as it's not necessary and only adds complication.
Because 'conventional' transistors have low gain when the emitter and collector are reversed, the base current needs to be equal to the peak signal current. For example, if the source voltage is 5V peak and impedance is 1k, the peak signal current is 5mA, so you need to provide at least 5mA base current to ensure complete attenuation. A level shifter is needed, and Q2 provides an open circuit to the base resistors (R7 and R8) when the signal is not muted.
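Since reverse gain can be as low as unity, the base drive must be able to supply the whole peak signal current. A worst-case estimate (illustrative sketch; the function name is my own, and it assumes a reverse gain of 1 as the text describes):

```python
def min_base_mA(v_peak, r_source):
    """Worst-case base drive (mA) for full muting with a reversed-mode
    BJT, assuming reverse gain of 1: base current must equal the peak
    signal current v_peak / r_source."""
    return 1000.0 * v_peak / r_source

# 5 V peak from a 1k source needs at least 5 mA of base drive.
```

Any reverse gain above unity simply gives you margin, so designing for the 5mA worst case is safe.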
I've shown a dual version above, and Q4 appears to be wired backwards. In fact, it is backwards, with the collector used as the emitter. Transistors will still work when connected like that, but with very low gain. Where the forward gain (hFE) may be 300 or more when wired normally, it may only be somewhere between 1 and 10 when reversed (this is device dependent). Some devices may even show gain of less than unity when reversed. A BJT operated as a muting switch works as a normal transistor for only one polarity of the input signal, and is reversed for the other. That's the reason for using a higher base current than you would use otherwise.
The attenuation is better than you might imagine. 60dB is easy to achieve, although there may be a small DC offset when the muting transistors are on. I simulated 0.8mV for the circuit shown, but measured a little more than that during a single transistor bench test - about 2mV, which is nothing to worry about. Distortion with 5V RMS input and with the base open circuit or connected to -5V was the same as my oscillator's residual (0.02%). The on resistance of the BC549 I tested was 3Ω. This isn't as good as the Rohm 2SD2704K transistor (1 ohm with 2mA base current), but is significantly better than the J111 JFET. The residual voltage is distorted, but it's also at around -82dB referred to the input voltage, so can (probably) be ignored.
Project 147 shows a complete stereo muting system based on BJTs. You can also use PNP transistors for muting, but of course the base drive polarity (and drive switching circuit) must be reversed.
Enhancement mode MOSFETs can be used for muting, but two are needed, connected in 'inverse series' to get around the issue of the internal body diode. They can work very well, but the need for two is a bit of a nuisance. The fact that they need a DC gate voltage to be turned on is a disadvantage, made worse because it has to be a floating supply. This means that the overall circuit becomes much more complex, to the point where it's not worth the trouble. A highly simplified version is shown below, using a 9V battery as the power source and an optocoupler to turn the MOSFETs on and off.
While performance should be very good, the complexity of the complete system is such that it can't be recommended. The process is based on a MOSFET 'relay', and there's more info in the MOSFET Relays article. The article concentrates on switching the speaker lead, but smaller MOSFETs can be used for muting. Note that each muting circuit needs its own separate floating DC supply and optocoupler, and if that doesn't convince you that it's a silly idea nothing will.
You can use a photovoltaic optocoupler to drive the MOSFET gates (it would replace the optocoupler and 9V battery shown). That removes some of the nuisance value, but it's still not worth the effort. The reason ... you can get MOSFET output optocouplers that only need a couple of resistors for less than the cost of a photovoltaic isolator.
If you can get hold of KQAH616D (0.1 ohm on resistance) or LCA110 (35Ω on resistance, and apparently discontinued) MOSFET relays, these do everything. There are quite a few listed on supplier websites, so you can choose the type you can get most easily. I've not tested any of them and can't attest to their suitability, but those I've looked at seem ok. The KQAH616D seems to be unobtanium but you might still find them somewhere. It would probably be better to use them in series with the output, so the signal is disconnected by default. Static protection for the output is essential. Another possibility is the TLP222G (Toshiba), another dual MOSFET optocoupler, with a rated maximum MOSFET voltage of 350V and a typical on resistance of 25Ω. I've not used any of these, and can't comment on their suitability for audio.
All IC MOSFET relays require a DC source to power the internal LED so they will conduct. I would expect that most people would prefer that they were not in the signal path, although it's not known at this time whether they create any distortion when used to pass the signal (as opposed to shorting it to earth).
The general scheme using any of the MOSFET optocouplers is shown above, but note that pinouts may be different from those shown for other versions. There is almost no real difference between this and the circuit in Figure 4, except that the need for a floating supply is removed. It's expected that performance will be similar. Although this is potentially a good solution, it requires a supply voltage to mute the signal, and that limits its usefulness.
A single MOSFET can be used if the level is kept well below the intrinsic diode's forward conduction voltage (0.65V). With a level of 250mV RMS, the circuit simulates 0.016% distortion. Attenuation is close to 60dB as shown, which isn't too bad. It can be increased by increasing the value of R1. The signal voltage limitation is a nuisance, but if you're only working with low signal levels that's not a problem. Control voltage breakthrough is low, typically being no more than 20µV.
Light dependent resistors (LDRs) can make an excellent muting circuit, but ideally you need two LED/ LDR optocouplers for each channel because their on resistance is comparatively high. One is used to turn the signal off, with another to short any residual to earth. They are easy to drive and show very low distortion, but the circuit is more complex (and expensive) than a JFET or a BJT circuit. It's possible to get at least 100dB of attenuation, and LDRs have a slow response and very low distortion during the transition. This makes it a 'soft' muting system, where the signal is reduced to nothing over a few hundred milliseconds, and is returned to normal in a similar timeframe.
You can use commercial LED/ LDR units (typically Vactrol™ VTL-5C4 or similar), or you can make your own. Full details on how to build a LED/LDR opto isolator are provided in Project 147. If you make your own, they will not be quite as sensitive as the VTL-5C4, but they work well and are fairly cheap. Make sure that the LDRs you use have a high dark resistance - greater than 500k if possible. This is the only (simple) version that is suitable for partial muting without distortion.
The slow response (fade-in, fade-out) would seem to be necessary, but in reality it's simply a nice touch and certainly not essential. The LED/LDR optocoupler is one of the few methods that doesn't cause distortion as the signal fades in or out, so it can be as slow as you like.
When the 'CTRL' input is high, Q1 conducts, and current flows through R3 and turns on LED1. Q1 also removes base current from Q2 via D1. D2 is included to ensure that Q1 can take all available base current so that Q2 remains off. LED2 is off, LDR2 is high resistance and LDR1 is low resistance. The signal is passed normally. There are many ways that the LEDs can be driven (opamps, TTL inverters, micro-controller, etc.) and the circuit shown is merely representative.
When the 'CTRL' input is low, Q1 can no longer 'steal' the base current for Q2 (supplied via R4), so Q2 conducts and LED2 is on. LDR2 then shunts the signal to ground, and since LED1 is off, LDR1 is high resistance, so the small signal that remains is fully attenuated by LDR2. The diodes are essential, and without them the circuit won't work. Without power, there is some muting because both LEDs are off, and the LDRs will be high resistance. The attenuation depends on the dark resistance of LDR1 and the input impedance of the following stage.
While using diodes to switch a signal on or off may seem unlikely, it can be done, and some early compressor/limiters used diodes as a variable gain element (as seen in Figure 7). You might expect distortion to be high, but that's not necessarily the case. When off, there isn't much distortion, provided there are enough diodes to ensure that the signal peaks don't exceed the diode forward voltage. The signal is attenuated by passing current through the diodes, which lowers their impedance. The main disadvantages of using diodes are that very close forward voltage matching is needed to avoid DC offset when the signal is muted, the circuitry is more complex than for any of the other methods, and the current drain is higher than for most of the other circuits. It's included here only because diode switching is an option that most people have never come across, and it has some interest value (if nothing else).
All things considered, it's very difficult to recommend using diodes because they don't work as well as any of the other mute circuits shown. There is also some risk of distortion for high level signals, and it's very hard to ensure that no sensible signal level will cause the diodes to partially conduct. Attenuation for the circuit shown will be around 35dB, and you can be assured that there will be some DC offset, even if the diodes and zeners are perfectly matched. Even the simulator I use (which by default has perfectly matched components) shows 12mV offset, and about 0.02% THD with a 707mV input signal. These are not good results compared to the alternatives, and the attenuation is barely acceptable.
The circuit shown also has a slow and distorted recovery when the mute signal is removed. It takes around 500ms for the signal to return to normal, and during the 'fade-up' process, there is significant distortion. R6 was added to reduce the recovery time to something tolerable - a lower value can be used, but other changes will be needed to restore an acceptable attenuation.
Another diode circuit is shown above. This is considerably more complex, because the only way to ensure that there is no DC imposed on the output signal is to use a differential amplifier. The diode bridge is driven differentially, so an inverter (U1A) is used at the input. The input level should remain below 400mV peak at all times, or distortion becomes very high. Surprisingly little current is needed to reduce the level, and only 140µA will cause attenuation of over 23dB. As simulated, 50% attenuation is achieved with only 14µA diode current.
Distortion with no attenuation is around 0.14% with an input of 350mV peak (250mV RMS near enough), but with 20dB of attenuation that rises to 0.4%. Worst case (50% attenuation) distortion is over 6%, so while this type of circuit can work reasonably well for muting (albeit with higher distortion than is desirable), it's not usable for linear attenuation (so it's not useful for gain control for example). Like all diode switching systems, this circuit is very limited in the allowed input voltage. The mute attenuation is quite good, at 58dB as shown.
Muting is very fast, but recovery is slower, as the four capacitors must discharge before the signal returns to full level. With the values shown, it takes around 100ms for the signal to return to 90% of the normal level. Full output is reached after a little over 1 second.
Diode switching is very common in radio frequency circuits (transmitter/ receivers in particular). Because RF signal levels are typically fairly low compared to audio levels, distortion isn't generally a problem, and they can switch very quickly if DC offset isn't an issue. This is usually easily dealt with in RF circuits, and very low capacitance values work because the frequency is high (several MHz up to GHz). However, a discussion of RF diode switching circuits is outside the scope of this article.
Devices such as the 4066B CMOS bilateral switch can be used both for signal source selection and muting. They are quite linear, but the peak audio amplitude is limited to around ±7V (5V RMS) because they cannot be operated with a supply voltage above 15V (±7.5V DC). Their on resistance is typically around 80Ω, somewhat higher than desirable for many applications. If you use one to disconnect the signal and another to connect the output to earth (as shown below), the muted signal will be better than 80dB below the normal level.
Q1 is used as a level shifter, because the 4066 operates from ±7.5V (the maximum allowed). If the 'CTRL' input is connected to a voltage of 7.5V or is floating, Q1 is off, and the control signal to U1A and U1B is low, so they are turned off. U1B is configured as an inverter, and when it's off, U1C gets +7.5V (via R4) at its control input so it is turned on, shorting the output. When 'CTRL' is brought low (typically earthed), Q1 turns on, thus turning on U1A and U1B. U1B in turn removes the control input to U1C, which now turns off. The signal is passed normally. The unused switch should have its control input (Pin 12) connected to -7.5V and input/ output pins (10 & 11) connected to GND.
+ +These ICs have extremely high input impedance for the control signal, and quiescent current drain is exceptionally low - around 0.01µA at 25°C. They are static sensitive, and direct connection to the outside world is not recommended unless protective diodes are used. Even so, there is always the likelihood of damage if an un-earthed amplifier is attached without grounding it first.
+ +CMOS switches are common in many audio circuits, but they will degrade the sound quality - whether the degradation is audible or not is another matter. They are very sensitive to static because of their exceptionally high internal impedances. Since they are off by default, the supply voltages must be present for the mute to work at all, which means they cannot suppress any power-on noise that occurs before their supplies are established. This is a limitation with all circuits that do not provide at least a partial short with no power.
+ + +Most digital pots provide a mute function. Some use internal logic to set the 'pot' wiper to zero to mute the signal, and return it to the previous setting to un-mute. There are so many and they are so diverse that it's not possible to show a representative circuit. If you intend to use a digital pot, then you have to work out how to access the various functions. Most need a microcontroller to send the digital codes needed to change volume, mute, etc.
+ +Many digital pots are configured to use 'zero voltage switching', and they only make a change when the signal voltage is close to zero. This avoids the slight click you may hear from a relay, JFET or BJT muting circuit, as these are close to instantaneous. No schematic is shown for a digital pot, because there are too many different types and the application notes or datasheets will have the information needed.
+ + +There is the option of using VCAs (voltage controlled amplifiers/ attenuators) to control the level of multiple channels of an audio system simultaneously. The circuit is shown in Project 141. This is easily muted by shorting the control voltage to the positive supply. The signal will be reduced to zero smoothly, with no distortion, clicks or pops. Unfortunately, this mutes the signal, but not the output, and this may result in the VCA itself making odd noises during power-on and off - this depends on the VCA and opamps used.
+ + +Many IC power amps have provision for muting, and in some cases standby as well. These functions work well, but are really only applicable for integrated amplifiers that use an IC power amp. Most are easy to use, and typically the signal is muted until a voltage is applied to the mute pin. This varies depending on the power amp - TDA7293 and LM3886 both have a mute function, but they work differently. With an LM3886 the mute pin is connected to the -ve supply (usually via a resistor) to un-mute the output, but with the TDA7293 the mute pin is connected to a +ve voltage of 5V or more.
+ +There are many others, including Class-D (switching) amplifiers that provide a mute and/ or standby function. To understand each type requires you to look at the datasheet for the specific IC you intend to use. Being able to mute a power amp for home use isn't actually as useful as it seems, unless the system is an integrated amplifier, with both preamps and power amps in the same enclosure.
+ + +Ducking is a special case of muting. It's traditionally used in shopping centre PA systems, and also in radio broadcasting. The term comes from the background audio 'ducking' (as in reducing level, nothing to do with ducks) when an announcement is made. Unlike muting circuits, ducking does not mute the background audio, but reduces it to a preset level in the presence of an announcement. To implement this properly, the circuit should have a nice, controlled action. Not too fast or too slow, but at a rate that doesn't introduce any noise or distortion. An LED/LDR optocoupler is ideal, because they have characteristics that are almost perfect for this application.
+ +The basic idea is shown above. When speech is detected, the LED turns on and the LDR has a low resistance. The amount of attenuation is set by VRa1, and ranges to almost complete elimination of the background signal (typically around 23dB) with VRa1 at minimum, to just a slight reduction (about 2dB) with the pot at maximum resistance. The range can be increased or decreased by adjusting the resistor values.
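The attenuation range can be estimated as a simple series/shunt divider, with the LDR and VRa1 forming the shunt leg. The resistor values below are assumptions picked to land near the quoted 23dB and 2dB figures; the real values are on the schematic:

```python
import math

def ducked_level_db(r_series, r_shunt):
    """Ducked level of a series-R / shunt-R divider, in dB relative to full level."""
    return 20 * math.log10(r_shunt / (r_series + r_shunt))

R_SERIES = 10e3      # series resistance, ohms (assumed)
R_LDR_ON = 800.0     # LDR resistance with the LED on (assumed)
VRA1_MAX = 33e3      # pot maximum resistance (assumed)

deepest  = ducked_level_db(R_SERIES, R_LDR_ON)              # pot at minimum
lightest = ducked_level_db(R_SERIES, R_LDR_ON + VRA1_MAX)   # pot at maximum
print(f"pot at min: {deepest:.1f} dB, pot at max: {lightest:.1f} dB")
```

As the text notes, changing the series resistor or the pot value shifts both ends of the range.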
+ +This is a simple explanation of the process, and is not intended to show a complete system. The audio detector is a fairly critical part of the circuit, as it needs to be able to detect even quiet speech and handle loud speech equally well. This isn't especially difficult, but the circuitry required is not described here, because the article is more about covering applications and ideas, rather than complete circuits. For a complete ducking circuit, see Signal Detecting Audio Ducking Unit.
+ + +Many sources (e.g. CD/ DVD/ Blu-Ray players, etc.) are not earthed, and they use switchmode power supplies. In all cases, there will be a Y-Class cap from the DC output of the supply back to the rectified incoming mains. This is done so the unit will pass EMI tests, but it also causes the output to float at some AC voltage above earth (anything from 50 to 120V AC). Even a 1nF Y-Class cap can provide more than enough instantaneous current to damage opamp inputs and outputs and other sensitive circuitry.
+ +The standard RCA type connector doesn't help matters, because the centre pin (signal) makes contact before the shield, so circuitry can be subjected to whatever voltage is present at the time, with the steady-state current limited by the capacitor. It's not the steady-state voltage or current that causes the problem, it's the instantaneous current delivered at the instant the centre pin makes contact. Touching the outer shield part of the connector to the chassis before inserting the plug may help, but it's certainly not a reliable way to ensure nothing is damaged. It's far safer to use a clip lead or similar to link the two chassis before plugging anything into the RCA sockets, or ensure that AC mains power is removed from all equipment before making connections.
+ +In a typical SMPS as shown above, C4 (typically Y2 Class) is the one that causes all the trouble, even though it's usually rated at no more than 1nF. It combines with the primary to secondary capacitance of T1 to provide a low current path between the incoming AC and the DC output. Provided the secondary is earthed, only a tiny current flows, but if it's not earthed, momentary contact can cause an instantaneous current of several hundred milliamps. If that only flows in the chassis or circuit common it's unlikely to cause any problems, but when the connection is made via the signal lead the momentary current spike can easily damage sensitive components.
+ +The steady state current is around 50µA, but the peak current is limited only by the total circuit impedance. The peak voltage across C4 can be as much as the AC mains peak (325V for 230V mains), and whether anything is damaged or not depends on the exact instant in time when the connection is made during an AC cycle, and the relative speed of the connection. Metallic contact usually makes one or more fast, low resistance connections as a plug is inserted. The peak current can easily exceed 1A, albeit for a very short period (around 1µs or so). That's all it takes to damage any semiconductor.
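The figures quoted can be reproduced with some basic arithmetic. The loop resistance at the moment of contact is an assumption (contact resistance plus wiring), but the conclusion is insensitive to its exact value:

```python
import math

MAINS_RMS  = 230.0   # volts
MAINS_FREQ = 50.0    # Hz
C_Y        = 1e-9    # 1 nF Y-class capacitor
R_CONTACT  = 250.0   # assumed total loop resistance at the instant of contact

# Steady-state leakage current through the Y cap (set by capacitive reactance)
x_c = 1.0 / (2 * math.pi * MAINS_FREQ * C_Y)
i_steady = MAINS_RMS / x_c                 # ~72 uA upper bound

# Worst case: contact made at the mains peak
v_peak = MAINS_RMS * math.sqrt(2)          # ~325 V
i_peak = v_peak / R_CONTACT                # ~1.3 A
tau = R_CONTACT * C_Y                      # ~250 ns, so the spike is sub-microsecond

print(f"steady state: {i_steady*1e6:.0f} uA, peak: {i_peak:.2f} A, tau: {tau*1e9:.0f} ns")
```

The steady-state figure is harmless; the sub-microsecond ampere-level spike is what kills semiconductor junctions.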
+ +This is something that I've tested extensively, and there is no doubt at all that even a comparatively rugged TO-92 transistor can be degraded or destroyed by a single input pulse from a Y-Class cap set up to simulate real world conditions. It's (apparently) common for muting transistors to be damaged or destroyed, because they connect directly to the output of many products, including media players, game consoles, TV sets and many others. Even a 'smartphone' that's connected to a charger and then to some other gear via the headphone socket poses a risk, because the charger uses the same capacitor 'trick' in order to pass EMI tests.
+ +There are two main types of muting circuits. A common need is to mute the system (or an individual channel of a mixer) by means of a switch. This may be hardware (a physical switch), under software control (common in mixing consoles), or from a remote control. A single relay can be switched by several different circuits, simply by removing power to force muting. This creates a logical 'OR' gate - if 'input 1' or 'input 2' or 'input n' is applied, the relay will be on and the signal is not muted. If multiple circuits can mute the system, care is necessary to ensure that you always know when a mute signal is active, and where it's coming from. It's generally best to have an indicator on each sub-system that can force the mute, rather than a single LED that shows that the system is muted, but without providing any clue as to which circuit is responsible.
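The 'OR' behaviour and the per-source indicator recommendation can be expressed in a few lines of logic. This is only an illustration of the idea, not part of any circuit; the names are hypothetical:

```python
def system_mute(requests):
    """Overall mute is the logical OR of all subsystem mute requests.

    Returns (muted, offenders) so that each source can have its own
    indicator, rather than a single anonymous 'mute' LED.
    """
    offenders = [name for name, active in requests.items() if active]
    return bool(offenders), offenders

muted, who = system_mute({'power-on delay': False, 'remote': True, 'loss-of-AC': False})
print(muted, who)   # True ['remote']
```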
+ +Mostly (at least for hi-fi) this isn't an issue, as you may have a power-on/ off mute, and the same relay controlled from a remote. The power-on mute will clear after a few seconds without you needing to do anything, and the remote mute will be activated/ deactivated as required (this is not included in the circuit shown). Using multiple mute relays on the same audio bus should be avoided if possible, because it makes fault-finding much harder if there's a problem a few years after the gear is built.
+ +Muting systems can be operated with a simple analogue circuit, and/ or may be under the control of a microprocessor. Either way, the system has to know when the mains power is turned on or off, so that muting can be applied or removed as needed. For an entire system that's controlled by a micro of one kind or another, it will probably be programmed to activate the mute circuits before power is turned off (typically with a relay, also controlled by the micro). Most hobbyist systems don't use microcontrollers, because many people want a purely analogue circuit, without having to worry about digital switching noise getting into the audio.
+ +When a piece of AC powered gear uses a mute circuit, it's commonly set up so that there will be a delay after power-on, and a 'loss-of-AC' detector will be used to mute the signal before the power supply filter circuits can discharge. For a variety of reasons, some circuits will generate loud 'bangs' or strange noises as the supply voltage collapses, and it's a very common requirement that the output should be muted as soon as power is disconnected or turned off. Using a double-pole switch for mains 'on' is a really bad idea, as it's potentially dangerous if a switch fault develops, and it does nothing to mute the output if the mains lead is disconnected.
+ +There are many ways that AC detectors can be set up, with both Project 05 and Project 33 incorporating a loss-of-AC detector and mute circuitry. Ideally, the circuit would mute within a single cycle of the AC waveform, but this isn't always desirable or practical. However, any such circuit should respond within 50ms (5 cycles at 50Hz) to be useful. Extreme precision isn't necessary, and fortunately it's quite easy to do. A single circuit can both provide an initial delay (around 1-2 seconds is generally enough), and provide loss-of-AC detection.
+ +While it's shown as a single circuit, the muting and loss-of-AC sections can be separated and used individually. The circuit is deliberately as simple as possible, consistent with it being able to perform well in practice. The AC input will typically come directly from the low-voltage transformer that provides the 12V DC to operate the circuit. As shown it should be a maximum of 20V peak, and it will be half-wave rectified because of the power supply's bridge rectifier. The voltage developed across C1 should be no more than 10V (average).
+ +Initially, C1 is prevented from charging by Q1, which is turned on for about 800ms via R2 and C2. Extend the power-on delay by making C2 a larger value. For example, if you use 47µF, the power-on mute is extended to about 3.5 seconds. Once Q1 turns off (because C2 has charged), the output from C1 can turn on Q2, which turns on Q3. D3 adds hysteresis to make the circuit turn on cleanly and without relay chatter. When the AC signal disappears (because the mains has been interrupted), C1 discharges quickly, and the relay will have power removed within around 50ms. Once the relay releases, the contacts close, shorting the signals from the two input channels (shown as 'Sig L' and 'Sig R' for convenience only). D2 ensures that C2 is discharged when the 12V supply is turned off.
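The delay scales roughly linearly with C2. Assuming the original C2 is 10µF (a guess consistent with the 800ms and ~3.5 second figures; the actual value is only on the schematic), the scaling looks like this:

```python
def power_on_delay(c2_farads, c2_ref=10e-6, delay_ref=0.8):
    """Scale the power-on mute delay linearly with C2.

    The text gives ~800 ms for the original C2 (assumed here to be 10 uF);
    the delay is roughly proportional to the capacitance.
    """
    return delay_ref * c2_farads / c2_ref

print(f"C2 = 10 uF: {power_on_delay(10e-6):.2f} s")
print(f"C2 = 47 uF: {power_on_delay(47e-6):.2f} s")   # close to the quoted 3.5 s
```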
+ +R7 is optional, and should be the same value as the relay coil's DC resistance. It will allow the relay to release faster. This is covered in some detail in the Relays article. Without the resistor (with only the relay's parallel diode), the relay may take up to 6ms to release after power is removed. The resistor improves this to about 2.6ms with the relays I tested. The few milliseconds saved should normally make no difference, but it's a relatively unknown trick that can be included 'because you can'.
+ +As you can see, all of this has been achieved by three transistors and a small handful of other parts. It can (of course) be made far more complex, and it may even be possible to improve its performance a little. The circuit is actually a hybrid, using elements of both Project 33 and Project 05. As shown, it will do exactly what it's meant to do, reliably and for many years.
+ +The output doesn't have to power the relay, and it can be used with any mute system that releases when supplied with a positive voltage. The polarity can be reversed with another transistor if necessary.
+ +The circuit shown is one of many approaches you can take, with the appropriate interface taken from any of the other circuits shown. It's also fairly easy to re-configure any of the circuits shown to perform in the same way. There are too many options, and it would be folly for me to try to show every combination you can use. There is another that's worth including, because it's easily built using a single 4093 CMOS quad Schmitt NAND gate.
+ +It has the advantage of making either polarity available (High = Mute or Low = Mute), and it consists of the IC and a small handful of other parts. It was originally designed to be part of the Project 236 AC millivoltmeter, but it works so well I've added it here. This can be used with any of the circuits shown above, and it's not at all fussy about the supply voltage provided it can never exceed 15V.
+ +The outputs can drive a transistor for relay muting. The base should be driven from the selected output (normally -Mute, U1 Pin 4) via a 10k resistor. Relay wiring is the same as shown in Fig. 1. The 'loss-of-AC' cutout is particularly useful, as it mutes the signal when AC is removed, and prevents noises as the circuitry loses power.
+ + +The ideal mute circuit will attenuate the signal in the absence of power, so the signal is muted by default. It needs an active system to allow the mute to be removed, which will be done after all circuitry has had time to settle after power is applied. It will mute the signal immediately when mains (or battery) power is removed, before any filter caps have had time to discharge to the point where opamps become unhappy. With a typical supply filter cap of 1,000µF prior to the regulator and a current drain of around 100mA, a 15V supply will take about 100ms before the voltage is low enough to cause some opamps to misbehave. This is plenty of time for the mute voltage to be removed so most noises can be suppressed fairly easily.
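The 100ms figure comes straight from i = C·dV/dt. The usable headroom below is an assumption (how far the pre-regulator cap can sag before the opamp supplies fail), but with the stated 1,000µF and 100mA it reproduces the quoted time:

```python
def holdup_time(c_farads, delta_v, i_amps):
    """Time for a filter cap to fall by delta_v at constant load current.

    From i = C * dV/dt, so t = C * delta_v / i.
    """
    return c_farads * delta_v / i_amps

# Assumed: the pre-regulator cap sits above the 15 V rail and has roughly
# 10 V of usable headroom before opamps start to misbehave.
t = holdup_time(1000e-6, 10.0, 0.1)   # 1,000 uF, 10 V headroom, 100 mA
print(f"hold-up time: {t*1000:.0f} ms")
```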
+ +Only the relay and JFET mute circuits satisfy the above criteria. + +
Mute circuits that require a voltage to be present pose additional difficulties, because some means of holding up the supply voltage for the mute circuit has to be provided. A capacitor will usually do nicely, but it needs to be fairly large to ensure that the mute is maintained until all 'disturbances' have ended. It's not hard, but it does add extra parts. Providing a supply voltage before anything else gets power is much harder, so if you have equipment that makes noise at power-on your choices are limited.
+ +It's very hard to beat a relay. The mute/ un-mute action is sudden and creates some transients, but the effect is not usually a problem. It may not have the finesse of a nice soft action as you'll get with a LED/LDR combination, but it does what's needed reliably and with the absolute minimum of additional components. An added benefit is the fact that the mute is active in the absence of power, and the circuitry has to provide a DC voltage to activate the relay and remove the mute. This helps to protect the output stages of your equipment from static damage. The relay is the simplest and most effective of all muting circuits.
+ +Relays are also completely immune from damage caused by plugging in RCA leads. With these, the centre pin makes contact first, and any static charge (or the voltage that can be measured from all 'double insulated' and other un-earthed equipment) can't damage the contacts. The short circuit to earth/ ground provided by the relay also protects the rest of the circuit.
+ +The next best option is JFETs, but their attenuation is not as good as a relay. The JFET is also static sensitive, and protection is needed to prevent static impulses from destroying the JFET. While BJTs actually work surprisingly well, they require more external circuitry than a relay or JFET so are not very good candidates.
+ +There is a lot of very confused thinking on the Web about mute circuits, and what does (and does not) work. Those shown here all work exactly as described, and distortion is generally very low (with the possible exception of the diode circuits). Many people think that using BJTs must cause distortion, but that's only true if they aren't used properly. Perhaps surprisingly, transient distortion (while the signal is being muted or restored) is close to being inaudible provided the transition is fairly fast. If the transition takes less than 100ms, you almost certainly won't notice the distortion from a JFET. However, a BJT produces highly asymmetrical distortion during the transition and that may be audible under some conditions.
+ + +References are few, because there's surprisingly little information on the Net. There are certainly a few circuits (and more than a few forum posts), but finding definitive info is not easy, and this article is intended to bridge that gap. It's quite obvious that many of the comments made in forum posts and elsewhere simply show that the writers don't understand muting circuits at all.
Elliott Sound Products - Audio Myths
For reasons that may initially seem unclear, the world of audio is rife with myths. Some are harmless enough provided the wallet pain isn't an issue, but some are quite malicious and potentially dangerous. Mains cables fall into the last category - many countries have very strict rules about what may be sold as a mains cable, but the vast majority of 'audio' cables have none of the required approvals. Maybe they are safe, maybe not - the rules are in place for good reasons, and based on the cost of some, paying for approval would be a drop in the bucket.
+ +Because audio 'quality' isn't something tangible for most people, it's an area where the unscrupulous can dive in and make their outrageous claims, with little chance that they can ever be disproved. In many cases, it's only rational thinking and measurements that can really determine if something is going to make an audible difference to your system. In reality, everything that's part of the hi-fi will make a difference, and the only argument is whether it's audible or not. The charlatans will zoom in on the fact (and it is a fact) that everything makes a difference, but they conveniently forget to mention that the difference will never be audible to anyone with normal hearing (or in many cases cannot even be measured). Some changes are immeasurable - we know that a difference exists because we can calculate it, but even the best measurement tools are unable to resolve the infinitesimal changes that will be made.
+ +The limits of audibility are somewhat fuzzy - they change from person to person, day to day, and for various reasons. Hearing isn't just about ears - our brain and what we see makes a huge difference to what we hear (or think we hear). This provides a golden opportunity for anyone who is a bit shy of scruples to run rampant. Magic components, rocks, pebbles and holograms, cables that will transform your system ("better than room treatment" I've seen claimed) - the list is seemingly endless. One thing that we do know rather well ... humans cannot easily resolve a level difference of less than 1dB with programme material, yet if the scammers are to be believed, differences of 0.001dB (or even no difference whatsoever) are clearly audible to anyone who doesn't have tin ears. By applying this 'logic', it becomes easy to classify anyone who doesn't hear the 'magic' as being half deaf and not credible as a commentator on the topic. The cult followers will often use this very argument to try to discredit anyone who disagrees with their nonsense.
+ +One of the major problems is that almost zero of these so-called 'improvements' have ever been properly tested. That means a full double-blind test, where no-one knows which component is in circuit at the time of the listening test itself, and the details are revealed and statistically analysed after everyone has finished. Our senses are too easily fooled to allow sighted tests, because there will be preconceptions and (sometimes subconscious) bias towards one test item and against another. This is perfectly normal - we all do it, every day. What is not normal is that these biased views are then claimed as 'fact' and brandished around on the Net.
+ +The result is quite predictable. People with relatively little knowledge look up to reviewers and others who claim to be 'gurus' or (lower-case) gods in the field, and if they tell porkies (lies) most are unable to detect that what is claimed is simply not possible. One only has to look at the number of complete scams that abound for 'energy savers' (as but one example). They are routinely shut down, only to open up again with a different name, but the same old scam. Quack medicos do much the same with 'miracle cure-all' products that may not even contain a single molecule of anything that might have some medicinal value.
+ +The counter-arguments to double blind testing are so trite that they are laughable ... "The extra circuitry in the double-blind switch box (or whatever) adds (or takes away) so much extra detail (or colouration) that it makes the test meaningless." My comment on that ... bollocks! The second argument dragged out regularly is that double-blind testing is 'stressful', and the added stress means that people are unable to discern differences they otherwise might find easy. The arguments (naturally) are intended to deflect attention from their preferred test methods, which are fatally flawed.
+ +There is another counter-argument that's often dragged out, kicking and screaming. You will be told that you must keep an open mind, because some things just work for no apparent reason. They will tell you that 200 years ago, science stated categorically that people would not be able to fly. What they fail to mention is that science has come a very long way indeed since then, and while there are still things to be discovered, they will almost certainly not be anything to do with hi-fi.
+ +Since they have provided what they think is the perfect reason for you not to trust science, it is expected that you should try whatever idiotic 'tweak' is being suggested with a completely open mind. To do otherwise is being closed-minded and not willing to try something new. This occurs in audio probably more than in any other field, and is complete bollocks (again).
+ +If I were to claim that listening whilst having your wedding tackle partially immersed in olive oil improved the sound, would you do it? No, nor would any other sensible person. You wouldn't do it because it makes no sense. There is no connection between your naughty bits and sound quality, and you would rightly dismiss my claim as twaddle. Now, consider that your hearing is directly affected by many emotional triggers, as well as alcohol, many 'illicit substances', whether you had an argument with your partner, etc., etc. Dangling your privates in olive oil might actually make more difference to the perceived sound than you would have imagined, but it's still silly and you wouldn't do it. Why is it different when other equally silly ideas are proposed (such as demagnetising non-magnetic items like CDs or vinyl)? There is no difference - silly ideas are silly ideas regardless of the particular type of silliness.
+ +One of the primary issues with these myths is that they create FUD - fear, uncertainty and doubt. Most people do not have the detailed knowledge needed to be able to determine that the latest craze or tweak is nonsense, implausible or just plain wrong. Reviewers who should be truly impartial are often anything but, and instead of dispelling myths they help propagate them. This is unforgivable in my view.
+ +There are two particular things to which one can easily fall prey - the 'experimenter expectancy (or bias) effect' and the 'placebo effect'. Both are potentially very powerful, and can shape the outcome of a test at the subconscious level. If you 'demagnetise' a nonmagnetic medium and expect to hear a difference, then you probably will. What actually caused the difference will be decided by your brain (at a subconscious level), and you will be left thinking that demagnetising made the difference, when in fact it was 100% imagination. This is why all proper medical tests are double-blind, to guard against these well known phenomena. It is a BIG mistake to think that you are immune - no-one is immune because we don't even know it's happening.
+ +Not all myths are to do with magic components and indefinable qualities that are imparted by this or that tweak or incomprehensible magic act. Many are just plain nonsense, and although covered elsewhere on the ESP site, they will be referenced here too.
+ +I've also come across some fascinatingly deluded articles, including one that explains why the writer is a subjectivist. This person actually believes that what s/he thinks s/he heard is real, and castigates objectivists with all the same tired old bollocks that we've come to expect. This is denial, plain and simple. Some people seem to be completely unaware of the huge traps they set for themselves with sighted tests - they think that normal reality can't possibly apply to them. "If I heard it, then it's real (to me)" is common enough, and in a sense it's also true enough. However it fails to accept that their 'reality' can be completely imaginary.
+ +Something to ponder ... A truly open mind has to be open to the possibility that a new and radical idea, however exciting, may prove to be complete bollocks. [ 7 ].
+ + +Loudspeakers are available today that are capable of truly insane amounts of power - or so one may believe from the manufacturers' literature. There's only one small problem - they can't really handle the claimed power at all. The setup used by the maker to determine the power rating is often not disclosed, and in some cases has little or nothing to do with the way the speaker will be used.
+ +The published figures are usually accurate, but only if the loudspeaker is used in much the same way as when it was tested for power handling. If you use a different box design (perhaps bandpass), then all bets are off, because the cone movement is severely restricted so cooling is reduced dramatically. In some cases the maker will cheat too - if the test bandwidth for a woofer is extended to 20kHz, that makes the power handling figure look a lot better because there's a lot more energy in the pink noise test signal.
+ +This inflates the power handling figure, because the high frequencies are incapable of generating current in the voicecoil due to its inductance. Remember that if the maker can claim an extra 3dB, that means that a 300W speaker is suddenly a 600W speaker in the sales blurb. The fact that much of the applied voltage does not cause a corresponding current to flow seems to be immaterial.
+ +For example, one well known 8 ohm loudspeaker has an impedance of over 16 ohms at 1kHz, rising to over 30 ohms at 3kHz. Needless to say there's also bass resonance, so the actual power is considerably less than that claimed. Power level is based on the use of band-limited pink noise, and is calculated using the RMS voltage and minimum impedance [ 1 ]. Already this gives an overly optimistic result, because there is no requirement to measure voicecoil current nor to use that in the calculation. Note too that the minimum impedance (Zmin) is not the rated impedance - for an 8 ohm driver it's typically around 5.5 to 6 ohms. The test method also states that power handling shall be tested in free air, so close to optimal cooling is available.
+ +Because of the speaker's impedance curve, it's likely that the actual power (as opposed to claimed power) may be reduced by half. This means that a 600W driver only really handles perhaps 300W during the test, and if the speaker has a particularly high impedance at resonance it may well be quite a bit less. If the bandwidth is not limited to the speaker's actual frequency range (e.g. extended to 20kHz for a bass driver), the voltage measured during the test will be far greater than that which can be utilised by the speaker. Again, this makes it appear that the speaker can handle 600W, but in reality the real power level may be closer to 150W.
+ +To give you an idea of how deceptive it is to extend the noise bandwidth, I ran a simulation so I could see for myself. Extending the noise bandwidth from 2kHz to 20kHz will increase the applied voltage by around 2dB with band-limited pink noise, but the power that the driver actually receives is only increased by 0.2dB because the impedance is too high at the upper frequencies where we gained the extra voltage.
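A comparable check can be made with a quick numerical integration: pink noise has a 1/f power spectrum, and the driver is modelled as a resistance in series with the voicecoil inductance. Both driver values below are assumptions for illustration, not the simulation actually used, but the result lands close to the ~2dB / ~0.2dB figures above:

```python
import math

R_VC = 6.0      # voicecoil resistance, ohms (assumed)
L_VC = 1.0e-3   # voicecoil inductance, henries (assumed)

def band_integrals(f_lo, f_hi, n=20000):
    """Integrate pink-noise voltage-squared density (~1/f) and the real power
    it delivers into R + jwL, over f_lo..f_hi (trapezoidal rule, log grid)."""
    v2 = p = 0.0
    log_lo = math.log(f_lo)
    step = (math.log(f_hi) - log_lo) / n
    prev_f = f_lo
    prev_sv = 1.0 / f_lo
    prev_sp = prev_sv * R_VC / (R_VC**2 + (2 * math.pi * f_lo * L_VC)**2)
    for i in range(1, n + 1):
        f = math.exp(log_lo + i * step)
        sv = 1.0 / f                                              # V^2 density
        sp = sv * R_VC / (R_VC**2 + (2 * math.pi * f * L_VC)**2)  # real power density
        v2 += 0.5 * (prev_sv + sv) * (f - prev_f)
        p  += 0.5 * (prev_sp + sp) * (f - prev_f)
        prev_f, prev_sv, prev_sp = f, sv, sp
    return v2, p

v2_narrow, p_narrow = band_integrals(20.0, 2e3)    # band-limited to 2 kHz
v2_wide,   p_wide   = band_integrals(20.0, 20e3)   # extended to 20 kHz

dv_db = 10 * math.log10(v2_wide / v2_narrow)   # rise in applied voltage (squared)
dp_db = 10 * math.log10(p_wide / p_narrow)     # rise in power actually delivered
print(f"voltage: +{dv_db:.2f} dB, delivered power: +{dp_db:.2f} dB")
```

The voltage rises by nearly 2dB, but because the impedance is high where the extra voltage lives, the delivered power barely moves.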
+ +So, real power, based on the applied RMS voltage and RMS current might increase from 300W to 312W, but based on the RMS voltage it appears to increase from 300W to 480W. The apparent power is enhanced yet again by using the minimum speaker impedance rather than the nominal value. Now our 300W driver is rated for 630W based on Zmin of 6.2 ohms. That's impressive - the speaker rating has just been more than doubled by messing with a few numbers, and it didn't cost a cent to develop. Even more impressive, we have complied with the AES standard to the letter and still managed to more than double the claimed power handling without spending a sausage on research.
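The number-juggling above is easy to replicate. Starting from 300W of real power into the nominal 8 ohms, adding 2dB of 'unusable' voltage and then dividing by Zmin gives figures close to those quoted:

```python
REAL_POWER = 300.0   # watts actually delivered (V_rms * I_rms)
NOMINAL_Z  = 8.0     # ohms, rated impedance
Z_MIN      = 6.2     # ohms, minimum impedance allowed by the standard

# Add 2 dB of voltage that the driver cannot turn into current
v_squared = REAL_POWER * NOMINAL_Z * 10 ** (2.0 / 10)

p_apparent_nominal = v_squared / NOMINAL_Z   # ~475 W ('480 W' in round numbers)
p_apparent_zmin    = v_squared / Z_MIN       # ~613 W (close to the quoted 630 W)

print(f"V^2/Znom: {p_apparent_nominal:.0f} W, V^2/Zmin: {p_apparent_zmin:.0f} W")
```

Roughly a doubling of the rating, with no change at all to the driver itself.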
+ +Should the user think that 600W is the real power (after all, it says that in the brochure) and then adds an extra 3dB for headroom, a speaker that can really only cope with 150W is hooked up to a 1.2kW amplifier. The end result is inevitable failure. For more on this topic, see the Speaker Failure Modes article, and also have a look at Loudspeaker Power Handling Vs. Efficiency.
+ +The whole idea of a loudspeaker being able to handle as much power as a bar radiator and not get so hot that it fails is simply silly. Manufacturers will persist with fudged 'power handling' figures for as long as people buy loudspeaker drivers based on power handling rather than efficiency. Always remember that an extra 3dB of efficiency is like getting double the power for nothing.
+ +Just before this article was published, I came across some (dis)information on a site that really should know a lot better. The following is a direct quote ...
    "Many speakers have a 'maximum wattage rating' on the back. Treat this as a 'minimum wattage rating'. You are far more likely to damage a speaker giving it too few watts and trying to play it too loud. High-end amplifier companies make amps with more than 1,000 watts, and you could plug a $50 speaker into it with no problem."
While the last part of the quote is obviously true, what wasn't mentioned anywhere is that the 1kW amplifier would fry the $50 speaker in seconds flat if someone turned up the volume. The same article also claimed that amplifier power ratings are meaningless. While this is true of many cheap HTIAB systems, if a well known (and serious hi-fi / professional) maker states that the amp can deliver 1kW, there is usually little reason to doubt it. Note the old chestnut - small amps kill speakers. They don't (see the articles referenced by the above links). If a small amp driven too loud can kill a speaker, a bigger amp driven too loud will kill it faster.
In addition, there wasn't even the smallest mention of speaker efficiency anywhere in the entire article. Remember, if one set of speakers is 3dB more efficient than another, that's exactly the same as getting double the amp power - free. Assuming of course that the more efficient speakers still sound decent. There's no point getting the most efficient speakers you can if they sound like a cat farting into a milk bottle.

The same author as the above quote also claimed that loudspeaker frequency response figures were meaningless and should be ignored. Yes, this is true for the HTIAB systems that all claim 20Hz to 20kHz (but with no graph or dB limit), but for serious speakers, most makers go to some effort to demonstrate that their speakers really do what is claimed. While fudged figures are not uncommon, to make a blanket claim that all are meaningless is going much too far - even for an article aimed at ordinary consumers.
Rather than reiterate what I've already written on this topic, I suggest the reader looks at The Truth About Cables ... first. Despite almost everything you read about this or that cable 'transforming' your system, it won't. Nada, zip, not a sausage. The hype and BS surrounding pieces of wire is astonishing, and there are crooks all over the world ready and more than willing to relieve you of your money.

Should you be so wealthy that $10,000 is small change or pocket money, then you probably don't care one way or another. However, if you are part of the real world then you should resent the fact that thieves and charlatans are relieving others like you of their hard-earned cash. I really dislike crooks, and in my book anyone who claims that their cable is capable of doing anything more than transporting your audio from point A to point B is both a fraud and a liar.
In essence, that is all a cable ever does, and if it's designed and built properly it will do exactly that ... transport your audio from point A to point B. Nothing more and nothing less. Yes, there are losses - always. These are utterly unimportant for signal leads provided some common sense is used. A 5km signal lead using cheap shielded wire is not sensible, but the vast majority of interconnects are perfectly alright for the job. You don't need to spend more than perhaps $20 or so to get decent signal leads, and those costing $hundreds will not do anything differently - despite all claims.

Speaker leads can be more of a challenge, but that's all about keeping resistance and inductance low. Resistance means that power is lost, and inductance means that high frequencies can be affected. The simple answer is to keep speaker leads as short as possible, and preferably make your own. I've seen speaker leads selling not for $hundreds, but $thousands, and that defies all logic. There isn't a speaker cable made anywhere, by anyone, that is worth that kind of money. In my book, anything over $20 or so for 3 metres of unterminated wire is looking suspiciously like a scam. Terminations can be fairly expensive (particularly those with gold plating to prevent corrosion), but even these shouldn't cost more than perhaps $10-20 a pair.
To make matters worse, some cables (especially those with a low characteristic impedance) can cause amplifiers to oscillate - definitely not something anyone wants. The fix is easy - add a terminator to the far end, using a series network of a 10 ohm resistor and 100nF capacitor, wired across the speaker terminals. Some of the charlatans will offer to charge you serious (additional) money for a terminator, which should be included as a matter of course. IMO, this Zobel network should be included on speakers - it adds almost nothing to the cost, and ensures that cables with excessive capacitance don't harm the amplifier. A Zobel network will not influence the sound, regardless of claims you may hear.
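It's easy to verify that a 10 ohm + 100nF terminator can't affect the audio. A short sketch (component values from the text above; the frequency points are my own choice) shows the series network's impedance magnitude across frequency - it's effectively an open circuit in the audio band, and only starts to look like a 10 ohm damping load well above 100kHz, where amplifier oscillation actually occurs.

```python
import math

R = 10.0        # ohms, series resistor in the Zobel terminator
C = 100e-9      # farads (100nF), series capacitor

def zobel_magnitude(f_hz):
    """Magnitude of the series R-C network's impedance at frequency f."""
    xc = 1 / (2 * math.pi * f_hz * C)          # capacitive reactance
    return math.sqrt(R ** 2 + xc ** 2)

for f in (100, 1_000, 10_000, 100_000, 1_000_000):
    print(f"{f:>9} Hz : {zobel_magnitude(f):8.1f} ohms")
```

At 1kHz the network presents around 1.6k ohms - a negligible load on any amplifier - falling towards 10 ohms only in the RF region.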
In reality of course, all cables make a difference that might be measurable - especially at super-audible frequencies or radio frequencies. Unless you compare bell-wire with 5mm² cable of sensible construction, the difference will rarely (if ever) be audible in a double-blind test. Sensible speaker cables are readily available for a few dollars per metre, and anything else should be treated with suspicion.
These are another matter entirely. Most are a blatant rip-off, that much is predictable, but a great many (probably most) are also likely to be illegal. They often have no fire rating certification, and/or are otherwise ill-advised additions to your system. In Australia for example, it is mandatory that all mains cable designs are tested and certified for safety. In the US, some insurance companies may deny a claim if it's thought that a non-UL-certified cable started a fire. Elsewhere in the world other regulations apply, but the mains cord sharks don't give a toss!
Who cares if little Johnny is electrocuted, as long as the hi-fi system sounds great? Well, I do, and so do the authorities. Anyone who cares so little for your wellbeing that they will tell you (with a straight face) that 1 metre of their 'magic' crap will undo the alleged damage caused by possibly hundreds of kilometres of perfectly ordinary wire simply cannot be believed. Good grief! Many electricity suppliers use aluminium cables for high voltage transmission, and that sounds dreadful (so they say). No-one seems to be concerned about aluminium voicecoils in loudspeakers though. Ditto for ribbon tweeters, which almost all use a thin aluminium ribbon as the drive membrane - as do many ribbon microphones in the recording studio. Strange, that.
The gall and audacity of these sharks to claim that 1 metre of their magic mains cable will make an audible difference! These claims are simply deluded. Read the introduction section again - the only way to be certain is to perform a double-blind test, and anyone who claims otherwise is lying - straight and simple. Good quality connectors that make firm and positive contact are worthwhile, but the rest is horse-feathers.

If you have a problem with mains noise, a quality mains filter might help to reduce interference. You don't need to spend $hundreds to get one.
"Second harmonic distortion is pleasing to the ear" say those who enjoy their little distortion boxes called SET amplifiers. If only that were true, we could all relax, stop worrying about all the intermodulation products that these pointless atrocities produce, and just enjoy the music.
Strangely, every high quality VCA (voltage controlled amplifier/attenuator), and anyone who uses FETs as the active element in limiters and compressors, will include distortion cancelling circuitry. Such circuits only reduce even harmonics (second, fourth, etc.), and cannot reduce odd harmonics at all. We are left with a distortion-cancelled circuit that produces only odd harmonics, because it's difficult to reduce all distortion to zero, and a little odd harmonic distortion is far less intrusive than a lot of even harmonics. The odd harmonics that remain cannot be removed, but cancelling the even harmonics reduces overall distortion to (usually) acceptable levels.
When valve amplifiers were all we had, not one amplifier that had any pretense to quality used single-ended triodes. Most mantel radios of the day used a single-ended pentode output stage, but that was for a simple (AM) radio with no pretense to hi-fi. When more power and/or higher quality was needed, the output stage was invariably push-pull. Push-pull output stages cancel even harmonics, and also allow the full use of the laminated iron transformer core (which is dramatically less effective in a single-ended stage).

The remaining distortion consists predominantly of odd harmonics - there may be a small residual second harmonic content, but quality designs came close to eliminating it altogether. Feedback was used (albeit in moderation because of the output transformer) to reduce distortion as far as practicable. The very best amps of the day (at the end of the valve era) came very close to equalling a decent transistor amp.
So why do we have this myth that second harmonic distortion sounds 'nice'? I wish I knew. It's possible that it was started by a SET fanatic somewhere along the path to nirvana, but other, equally silly, explanations are just as plausible. Regardless of the origin, it's complete nonsense, and distortion of all kinds should be below 0.1% (system wide) to qualify as hi-fi. Less is better, and easy to achieve until you reach the loudspeakers. Speaker distortion is typically at least an order of magnitude greater than that from most competent amplifiers!
So, do we need opamps and power amps with distortion that's virtually immeasurable? No, not at all - however aiming for extremely low distortion doesn't hurt anything provided nothing else is sacrificed to get there. Very low distortion systems almost always have wide bandwidth too - it's comparatively easy to get flat response from 1Hz to 50kHz or more.
In some areas there is a serious prejudice against opamps. I don't have a problem with DIY people wanting to build a discrete opamp - indeed, I even have a PCB for one. This is an excellent way to learn about circuits and how they work, and a discrete opamp can perform very well, within limits. For anyone to claim that traditional (IC) opamps are grossly inferior in some way is just silly - there are opamps available that beat anything you can build, hands down. Still, there is often fierce debate about which opamp sounds better, but almost always with no reference whatsoever to a blind test to prove it one way or another.
A claim that I find interesting is that many or all opamps sound 'better' if biased into Class-A. With few exceptions the reverse is true, because the bias circuit presents additional loading on the opamp's output stage, and that increases distortion. The increase may be quite pronounced in some cases, and it's an idea that doesn't tally with reality.
As you are no doubt aware, reality and fantasy are worlds apart, and people making claims should be willing to back up their story with proof. Sadly, few even attempt to do so. It really doesn't matter if 1 person or 10,000 people think there's a difference - if there's no proof then it has to be likely that there is no difference. Proof is defined here as statistically significant results based on double-blind tests, or measurements that show the difference is measurable and within the limits of audibility as we currently understand them. Don't expect to hear of some miraculous discovery that changes what we know about human hearing - there are exceptions (primarily government standards based, and often quite wrong), but that's not at issue.

Despite claims that there are hearing mechanisms that are not well understood (which may or (more likely) may not be true), it has been established over many, many years that normal people with normal hearing are completely unable to pick most differences that a reviewer might claim are "astounding" (or some other superlative). Nor can most people hear the 'veil' over the high frequencies, or be convinced that the bass has the 'authority' claimed - beware of words that attempt to convey emotions, as they are the emotions of the reviewer, but usually no-one else.
There are things that we hear that are not thought to be audible. Countless people listen to MP3 audio, but most don't seem to have noticed that the stuff we allegedly can't hear is the very stuff that provides the stereo image. Listen to the same track direct from CD and then as an MP3 - the imaging is gone, and you are left with a mostly mono signal with some left and right highlights every so often. Digital radio is the same - for ages I thought my DAB+ digital radio had only a mono output, until one day I heard something that was panned hard left.

The above notwithstanding, most well-engineered music (if you can find any these days) sounds extremely good. On my system, this is despite the fact that I haven't used a single magic component anywhere.
TID (aka TIM) was proposed by Matti Otala in 1972, and the basic concept is 100% true. Unfortunately for the proponents of TIM/TID, it doesn't actually happen with real music in any reasonably competent amplifier (which is almost all modern amps, including IC types). Many have tried to demonstrate its existence with programme material, but to my knowledge no-one has ever managed to succeed. The information supplied in Wikipedia [4] is untrue - no known amplifier shows the problem with normal programme material.
Virtually any amplifier can be made to show the 'problem' - all you need is a low-level high-frequency sinewave superimposed on a fast risetime low-frequency squarewave. If the squarewave's rise and fall times are fast enough, no audio amp ever made (including those with no feedback, valve amps, etc.) is fast enough to prevent some loss of the high frequency sinewave signal. A fast squarewave may easily demand a bandwidth of several MHz - well beyond any realistic expectations for an audio amp. Once the squarewave is filtered so its bandwidth is more in line with a typical full range audio signal, TIM/TID simply disappears.
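To put some numbers on this, a quick sketch compares the slew rate demanded by the worst legitimate audio signal (a full-power 20kHz sinewave) with that of a fast squarewave edge. The 40V peak level and 100ns risetime are my own example figures, not from the text.

```python
import math

def sine_slew_rate(f_hz, v_peak):
    """Peak slew rate (V/s) of a sinewave: 2 * pi * f * Vpeak."""
    return 2 * math.pi * f_hz * v_peak

v_peak = 40.0   # roughly 100W into 8 ohms (assumed example level)

# Worst-case audio: a full-power 20kHz sinewave, converted to V/us
audio_sr = sine_slew_rate(20_000, v_peak) / 1e6

# The classic TIM test: a squarewave swinging 2 * Vpeak in 100ns, in V/us
square_sr = (2 * v_peak) / 100e-9 / 1e6

print(f"full-power 20kHz sine : {audio_sr:6.1f} V/us")
print(f"fast squarewave edge  : {square_sr:6.1f} V/us")
```

The test signal demands over a hundred times the slew rate of any real audio signal, which is exactly why TIM/TID can be provoked on the bench but never shows up with music.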
Of far greater concern is amplifier clipping - even for an instant! All frequencies other than the one that caused the amp to clip are eliminated once the amplifier is no longer operating within its linear range. Despite this, occasional transient clipping is generally considered to be inaudible under most conditions.
The range of things that one can buy that will allegedly improve a hi-fi system is mind-boggling. There are rocks, pebbles, holograms, feet, weights, springs, 'special' capacitors, 'special' lacquers that match the human body's carbon content (I truly wish I could say I made that up, but it's true), audiophool knobs (yes, that's true too - over $200 for a timber knob!), carbon composition resistors, 'demagnetisers', little 'towers' to keep your speaker leads off the floor ... the list goes on and on. Pretty much without exception, these are scams. If someone wants to believe that a rock on top of their speaker makes it sound 'better' then fine - put as many rocks as you like on the speakers.

It's when others insist that the rock is 'special' (and costs hundreds of dollars) that the claims become criminally fraudulent. Other parts that supposedly have magical properties are often simple passive components. While there are certainly differences, if an appropriate part is used in a circuit (rather than something completely unsuited to the job), the likelihood of audibility is usually nil. The same applies to cables of course - they have already been covered in several ESP articles as well as above (albeit briefly).
These are an especially good target for the scammers, because most people don't actually know much about them. They have an air of mystery, so it's easy to claim that polyester caps sound 'bad', damaging the 'air' around instruments for example. When you see the claim, try to get a meaningful explanation of exactly what is meant by 'air'. Don't expect anything that makes sense.
Polypropylene and other mildly exotic dielectrics don't seem to have attracted the wrath of the magic parts proponents, but they are physically large compared to a polyester or Mylar cap of the same value, and don't fit on many PCBs (such as those I sell). There is no credible evidence that any film cap is aurally different from any other. Tests have been run and various distortion products measured and identified, but in all cases the results are extremely difficult to measure because they are below the noise floor of most test equipment. If even purpose-built test equipment can't identify a significant difference, then it is folly to imagine that we can hear it. Consider too that if a capacitor is (much) physically larger than another of the same value, it may act as a relatively large section of unshielded wiring, and may pick up noise - hardly an improvement.
There are certainly differences between plastic film capacitor dielectrics, but nothing that need concern us for audio. If you happen to be making a high resolution sample and hold circuit then the choice is critical, but none of the effects are relevant to dealing with audio signals. I have seen it claimed that ceramic caps shouldn't be used for supply bypass because they somehow affect the audio quality - this is utter nonsense, and anyone who claims this to be true is either a fool or a liar. Multilayer ceramic caps are specifically designed for bypass applications!
Even electrolytic caps are perfectly usable in the signal path of most audio gear, and if large enough their limitations will never cause any problems. There is a long-standing myth that you have to bypass electros with a small film cap, but this is also nonsense. The inductance of any capacitor is simply a function of its physical size, and small caps have low self-inductance. If you are working with RF equipment, then yes - add the bypass cap, otherwise it's optional. It won't hurt anything though, so if it makes you feel better that's fine.
The exception for electrolytic caps is their use in any kind of filter. When the AC voltage across a cap is significant, it is able to distort that signal if there are internal non-linearities. Electrolytic and many ceramic caps certainly have non-linear behaviour (usually both voltage and temperature dependent), but if the voltage across the cap is close to zero, then distortion is also close to zero. Using bipolar electrolytic caps in passive crossover networks is a bad idea, because they degrade with time - especially if the system is pushed hard and the caps carry significant current. Add to this the voltage dependent distortion characteristics and you have a non-linear system that can't be relied on. High 'k' ceramic caps have no place in the audio path, but are perfect for supply bypass (and that's what they are made for).
Capacitors are much maligned by many in the audio field, but I know of no double-blind test where listeners have been able to pick them apart. I specifically exclude multilayer ceramic and electrolytic caps from this, because it's far too easy to make the difference audible by using nonsense circuits that are designed to reveal any flaws, but are not used in normal circuits.

Most capacitors have some distortion, but for almost all film caps it is so far below audibility that you don't need to worry about it. Even electrolytic caps are fine as long as the AC voltage across them is minimal. It stands to reason that if there is no significant voltage across any capacitor, then it can contribute no significant distortion.
Carbon composition resistors are useful for one thing - the rubbish bin! They have no place in audio equipment because they are noisy and unstable with time. As with most of the other scams, I can't imagine how this one started - it just doesn't make any sense. Way back when electronics was in its infancy, carbon comp resistors were the mainstay of low cost resistors, but they are now outdated and stupidly expensive for what is really a pretty crap component. Metal film resistors are far cheaper, are much more stable, and have better tolerance (typically 1%). Carbon film resistors are better than composition types, but not as good as metal film. Audibility for any of the above? Almost zero, provided the noise of each sample is not audible.

Wirewound resistors are used where high power is needed, and contrary to popular belief they generally have a very low inductance compared to their resistance. While it is possible to see a measurable change in performance due to the inductance, it is unlikely that the difference will ever be audible because the inductance is so small. Some 'non inductive' wirewound resistors are just the standard version with a different marking - they don't use a non-inductive winding at all (but they do cost more).
A few years ago, a bunch of lunatics launched a speaker crossover that completely did away with evil capacitors, and used only nice, friendly inductors for the whole network. I managed to obtain the schematic and was able to simulate it, and to say the results were dreadful would be high praise. The results were worse than dreadful - it was an abomination.
What the 'designers' missed completely is that inductors are the worst electrical component of all - inductors have a self-resonant frequency that's much lower than any sensible capacitor, and they are lossy because of the wire resistance. Real inductors only behave as useful inductors over a relatively limited frequency range, and their internal resistance ruins damping factor ... if you happen to think that's an important parameter. Naturally, all resistive losses result in heat (usually inside the speaker cabinet) and wasted power.
Like so many other silly fads, the all-inductor crossover seems to have mercifully passed away, but not before it created much controversy and instilled FUD in some sectors of the hi-fi fraternity.

If you doubt that inductors are as bad as I say they are, run a passive crossover at reasonable power into dummy loads for a while, with a full bandwidth signal. Feel the capacitors - they should be at room temperature. Now feel the inductors - they will be far hotter than the caps. The heat is wasted power, and shows that the inductor also has significant resistance.
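The heat is easy to estimate. As a rough illustration (the 100W level, 8 ohm driver, 0.4 ohm inductor DCR and 0.02 ohm capacitor ESR are assumed typical values, not measurements from the text), compare what each component dissipates when carrying the full driver current:

```python
def series_loss(load_w, load_ohms, esr_ohms):
    """Power dissipated in a series resistance feeding a resistive load."""
    i_rms = (load_w / load_ohms) ** 0.5   # current through the chain
    return i_rms ** 2 * esr_ohms

driver_power = 100.0   # watts delivered to the driver (assumed)
driver_z = 8.0         # ohms (assumed resistive for simplicity)

inductor_loss = series_loss(driver_power, driver_z, 0.4)    # typical coil DCR
capacitor_loss = series_loss(driver_power, driver_z, 0.02)  # typical film ESR

print(f"inductor dissipates : {inductor_loss:.2f} W")
print(f"capacitor dissipates: {capacitor_loss:.2f} W")
```

With these assumed values the inductor wastes around 5W as heat while the capacitor wastes a quarter of a watt - which is why the coils are the hot components in a hard-driven passive crossover.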
There are countless myths that fall into the 'miscellaneous' category, and those shown below are just a sample. These are some of the more popular distortions of reality, but the number continues to grow, with 'new' BS 'products' being introduced all the time. It's clearly impossible to keep track of them all (and who would want to?), but those shown here have been around for a while.

Residual magnetism in component leads causes either distortion or something completely unexplainable (but apparently bad anyway), according to some. This is unmitigated drivel - it doesn't happen in any competently assembled equipment, and is just an idiotic claim designed to separate you from your money. If this were true, then it would be a known distortion mechanism of loudspeakers - they have extremely powerful magnets. So do magnetic phono pickups, guitar pickups, and various others used for electric piano and other instruments. Surprisingly perhaps, there is a magnetic distortion component in loudspeakers, and it could be eliminated entirely by removing the magnet completely. This would naturally mean that the speaker would no longer work, but surely this is a small price to pay.
Even worse than the original claim is the range of products that are allegedly designed to demagnetise the leads in question. These products do not work at all - they can't, because it's impossible to get enough current through the components to do anything even remotely useful. Demagnetisation goes a lot further though ... if you demagnetise a CD or vinyl disc (which both have close to exactly zero magnetic material) they will sound better. It's hard to even waste time on claims like that, because they are so obviously and blatantly false. A non-magnetic material cannot be magnetised (because it's non-magnetic) and therefore, it cannot be de-magnetised because it was never magnetised in the first place.
The magnetism bogey-man seems to be fairly popular at the moment. I find it fascinating that depending on what you read, you will discover that (electro) magnetic fields are either the most efficacious miracle cure-all known, or are evil and will cause your internal organs to collapse into amorphous cancerous jelly (I may have exaggerated the latter claim a wee bit). I even saw an advertisement for a CD (yes, a CD) with 'special' demagnetising tones recorded on it (at least that's what I surmise) that will (astonishingly!) improve "transparency, dynamics, details, soundstage, and all other parametres" (sic). As an Aussie comedian was often heard to say ... "I see it, but I dooon't believe it."
Another one that's guaranteed to get the lunatic fringe-dwellers on their soap-boxes, shouting loudly, is break-in. Special boxes that produce an equally special signal will break in your leads faster than just listening to music ... we are told. This is almost complete bollocks too. There are a few components that do change characteristics over time (such as speakers), but it happens so slowly that we will never actually hear the difference.
Our audio memory is notoriously short, and it is simply impossible to hear a change that takes weeks to occur. What really happens is that we become 'acclimatised' to the sound - there is rarely any significant change at all. This is doubly true of cables - there isn't any reason whatsoever to break in an interconnect or speaker cable, because they don't change enough to create a measurable difference, let alone one that's audible.

With no exceptions that I can think of, electronic equipment only needs to reach normal operating temperature for everything to work as it should. In most cases, the temperature doesn't even matter. After sitting in warehouses and/or on the dealer's shelf for a few months, electrolytic capacitors might need a minute or two to re-form their oxide layer properly. It takes a short while before the leakage falls to its normal value. The sound doesn't change during this process.
The claim is made in countless forum arguments that "sinewaves are too simple to get a useful measurement". It is true that a sinewave is simple - it is a mathematically pure tone, containing exactly zero harmonics. This makes it relatively easy to measure tiny amounts of non-linear distortion in any audio product. Sinewave testing also reveals distortion well before it would be audible in most music. With a pure sinewave, it's possible to hear 0.5% THD (total harmonic distortion) or less, depending on the speaker used to monitor the results and the room acoustics.
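The principle is simple enough to show in a few lines. This sketch (a toy demonstration, not a real instrument) injects 0.5% third harmonic into an otherwise pure sinewave, then recovers that figure by comparing harmonic and fundamental magnitudes - exactly what a THD analyser or FFT does with far more resolution:

```python
import math

N = 4096        # samples in the analysis window
f0 = 8          # fundamental: 8 cycles per window (an exact DFT bin)
third = 0.005   # 0.5% third harmonic added as the 'distortion'

# Build a sinewave with a small amount of third harmonic
x = [math.sin(2 * math.pi * f0 * n / N)
     + third * math.sin(2 * math.pi * 3 * f0 * n / N)
     for n in range(N)]

def bin_mag(signal, k):
    """Magnitude of one DFT bin - enough for harmonics of a bin-exact tone."""
    re = sum(s * math.cos(2 * math.pi * k * n / N) for n, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * k * n / N) for n, s in enumerate(signal))
    return math.hypot(re, im)

fund = bin_mag(x, f0)
harmonics = [bin_mag(x, k * f0) for k in range(2, 6)]
thd = math.sqrt(sum(h * h for h in harmonics)) / fund
print(f"THD = {thd * 100:.2f}%")   # recovers the 0.5% we injected
```

Because the test tone contains exactly zero harmonics to begin with, anything found at the harmonic frequencies must have been added by the device under test - which is the whole point of sinewave testing.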
The reason that sinewaves are used for testing is that the waveform is so clean that any modification to the original signal is easy to measure. Some will protest (often vociferously), but quite frankly, they are wrong. Supposedly 'simple' sinewave testing is still far and away the easiest way to quantify and qualify distortion - the exact nature of the distortion is revealed to anyone who knows how to conduct the tests properly.

An amplifier doesn't actually care if the input signal is a sinewave or something more 'complex'. While the claim that sinewaves are 'simple' is true up to a point, amplifiers don't have any idea what they are amplifying. An instantaneous value of voltage is amplified by the amp's gain, to produce an amplified version of the signal for that moment in time. If a signal happens to change too fast, the amp cannot keep up and some information is lost. If the input signal is too high, the amplifier will clip and some information is lost. Distortion is generated in both cases.
In reality, no normal audio signal can change fast enough to trick any competent amplifier, but this has never stopped the pundits from claiming it happens anyway. The idea of TID has been bandied around for ages (see above for more), yet no-one has ever been able to name an amplifier that suffers from it. It's easy to demonstrate with (perish the thought) test equipment, but the test has to be modified to account for real-world audio signals. This isn't done, but there are those who still claim that the results are relevant. Sorry, but they are not!
There are many people who insist that phase shift is evil, and that it must be eliminated from a system for it to sound any good. The actual complaints vary, but there is general agreement amongst those who think it's bad that it really is bad, in any number of ways. In reality, like many other things, phase shift is a fact of life. It's generally harmless unless the amount of phase shift varies cyclically - this is only ever found in effects pedals used by guitarists and the like, and never happens in any amp or preamp. Phase shift may also create audible artifacts if it's different between the two channels of an amplifier. This is theoretically possible, but extremely unlikely unless the amp/preamp (etc.) has been modified by someone incompetent.
It's easily demonstrated (and used as 'proof' by those who think it matters) that inverting a signal changes the sound. With instruments that produce an asymmetrical waveform (many woodwind and brass instruments, the human voice, etc.), inverting the signal often makes it sound different. The problem then becomes "which is right?". The unexpected answer is "both" and "neither". The first answer is because there simply isn't a 'correct' polarity - no-one knows how many inversions the signal may have been subjected to before you hear it. The answer is also "neither" because no reproduction can ever return the original performance, and to imagine it can is pure folly.
Some generated waveforms (a sawtooth for example) usually sound 'different' depending on their polarity. As far as I'm aware, no-one knows exactly why, but it's a very common phenomenon. There is no 'right' or 'wrong' polarity, because it's an electronically generated waveform. The real test is to listen, then leave the room while someone else either changes or doesn't change the polarity, then return and listen again. In the vast majority of cases, you will be unable to determine whether the polarity was changed or not.
The complete rubbish that you'll find about the alleged superiority of 'valve sound' over evil little transistors is astonishing. In almost all cases the reverse is true - a competent transistor amplifier will usually murder even the best valve amps for overall quality. There are some extremely good valve amps (mostly from the end of the valve era), but none can be recommended any more, because the available valves are now generally well below the quality that was routinely produced by mainstream manufacturers of yesteryear.

The standard explanation for the so-called superiority of valves is that they produce predominantly second harmonic distortion, but this is simply untrue for any competent design. The best of the late-70s valve amps had very low distortion - not as good as a decent transistor amp, but much lower than most of the more recent attempts. As described above, having great gobs of second harmonic distortion is nothing to crow about - it's a good reason for the 'designer' to hang his head in shame though.
Left to their own devices, transistors will also show predominantly second harmonic distortion. The job of the designer is to remove as much of this (and indeed, all types of distortion) as possible, while keeping the final circuit manageable in terms of cost and complexity. There are actually many factors involved, and distortion IMO is not something that should be considered a virtue. I suggest that anyone who wants to look at this more closely should read Amplifier Sound and also look through the valves section.
+ + +There have been countless claims that measurements don't cover all contingencies, or we don't know how to measure certain things, and therefore (all) measurements are pointless and don't give us the full story. This logic then goes on to conclude that since we have rendered measurements useless, we can therefore state with authority that only subjective tests are of benefit. This is, of course, nonsense.
+ +Along similar lines you may hear claims that hearing resolution (of a select group of people) is better than any test instruments, and can pick up the details that measuring instruments cannot. Proponents of this school of 'thought' never consider the experimenter expectancy or placebo effects, and many in this group will claim to be 'immune' to these effects because they have done it for years and know how to avoid the traps. Utter garbage ... no-one is immune from these effects, and to claim immunity is to be in denial of reality.
+ +Measuring instruments have come a very long way over the past 60 years, and they can resolve details that are completely inaudible to anyone - regardless of their 'golden ears'. There are techniques that simply subtract the original signal from the amplified version, so any difference can easily be detected. If the amplifier or preamplifier has any problems, the output from the subtraction circuit will be non-zero and easily identified. This process was first described by Peter Baxandall in 1979 or thereabouts (see 'Null Testing' below).
+ +Measurements don't cover everything though, this much is true. We have no way to measure sound-stage (the apparent placement of instruments in front of and between the speakers), but we don't actually need to. Provided the signal path is clean (minimal distortion), has a flat frequency response across the audio band and both channels have equal phase shift, we know that it's not 'damaging' the signal. If the signal can get through our systems properly and without significant modification, then there is no reason that the soundstage will be better or worse than what was recorded.
+ +This latter point is missed by most reviewers and most of the magic component cult followers. There seems to be an understanding that in order to get 'good' sound, your system needs to be made up from the most inconvenient and expensive parts available. An amplifier that won't burn your fingers isn't worth listening to, and all internal components must be physically much larger than whatever you used before, and at least 5 times the price. The same 'logic' affects everything else. No matter how good your CD player might be, there is always a modification (that uses expensive, large and inconvenient parts) that will make it so much better, and the same applies to everything else in the system.
+ + +There is a measurement technique that shoots down all complaints that "sinewave testing doesn't show what an amplifier does with a complex signal". As mentioned briefly above, the original and amplified signals are summed, with one inverted and scaled so it is exactly equal and opposite the other. The result is a null - one of the easiest things to verify. The smallest difference between input and output is immediately audible (or visible if an oscilloscope is used), and this technique demonstrates that most amplifiers can handle any audio signal that comes along. Remember, the smallest difference between the signals shows up clearly, and null testing can be used with any normal full-range signal.
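The idea can be sketched numerically. This is a minimal illustration of the scale-invert-and-sum principle, not Baxandall's actual circuit: the 'amplifier' model and its small cubic nonlinearity are invented for the example, and in a real null test the subtraction is done with analogue hardware rather than arithmetic.

```python
# Hypothetical sketch of a null test: scale the amplifier's output back to
# unity, subtract it from the input, and whatever remains is error (distortion,
# noise, response deviation). A perfect amplifier leaves an exact null.
import math

def amplifier(sample, gain=20.0):
    # Toy amplifier: gain of 20 with a small cubic nonlinearity (invented).
    return gain * (sample - 0.001 * sample ** 3)

def null_residual(signal, gain=20.0):
    # Output scaled back to unity, inverted, and summed with the original.
    return [s - amplifier(s, gain) / gain for s in signal]

# One cycle of a 1kHz sine at 1V peak, sampled at 48kHz.
sine = [math.sin(2 * math.pi * 1000 * n / 48000) for n in range(48)]
residual = null_residual(sine)
peak = max(abs(r) for r in residual)
print(f"peak residual: {peak * 1000:.3f} mV")  # only the distortion survives
```

With a linear amplifier the residual would be identically zero; here the null exposes the 0.1% nonlinearity directly, without needing to know anything about the test signal.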
There is a version of just this that I called the SIM (sound impairment monitor). The version that I've used several times simply looks at the signals on the input and feedback nodes of the circuit (typically the bases of the input long-tailed-pair). Should the amplifier be unable to handle the rate of change of the input signal, it shows immediately. Should the amplifier even approach clipping or show any non-linearity, again it shows immediately. Noise, power supply ripple, slew rate limiting - all show up very clearly, and are an instant indicator that the amp can't cope.

Tests I've done show that every amp I've tried it with is perfectly happy amplifying any audio signal I can send its way. Likewise, I can push any amplifier with my squarewave generator and see that none can handle the extremely fast rise and fall times. This doesn't mean that every amplifier is flawed, it just means that a squarewave test is inappropriate for determining audio performance. Experienced technicians will use squarewave testing for other purposes though, and it's a very quick and easy way to check tone controls and equalisers (for example).

Subjectivists seem to abhor all measurements, and signal null testing is a measurement. Therefore it cannot be used to prove a point, because it's a measurement and thus is automatically inferior to a sighted listening test.

At one stage, you could buy (for an insane price of course) a wooden knob that would supposedly transform your hi-fi. Yes, you read that correctly - a knob. Not a high quality replacement pot (potentiometer/ volume control), just a knob. I have no idea if anyone fell for this scam, but I expect there would have been a few takers, even at $485 - I kid you not. Needless to say, changing a knob from plastic or metal to wood will make absolutely no difference to the sound, but that obviously didn't disturb the criminals selling and promoting it.

Along similar lines, there is an alligator/crocodile clip that you clip onto leads and such - again, this will supposedly work wonders. This is no ordinary clip though - it's a Quantum Clip, and "is capable of manipulating certain inanimate material into a condition that mimics the quantum state of our living senses". WTF!! What insufferable, unbelievable crap! The purveyors of this garbage belong in prison for fraud. Personally, I don't care whether they believe this shite or not, they are common criminals and nothing more. Everything (and I do mean everything) they sell is nothing short of fraudulent. The cost of the supposedly 'quantum' clip - apparently it's £500 (British Pounds)! This for perhaps 50c in materials.

As for so-called audio review sites and 'independent writers' who support this drivel - anyone who gives anything other than a big thumbs-down to the frauds cannot be trusted to review a soiled baby's nappy (diaper), let alone hi-fi equipment. I'm almost ashamed that I live on the same physical planet.
The normal fuse supplied with your system can't possibly sound any good, but that's easily fixed. Yes, you can buy true 'audiophool' fuses to prevent the inevitable congestion as the current has to flow through that tiny little wire. A bargain at only $60 each (give or take). Mind you, you'd expect to pay that for a 'hi-fi tuning fuse', because it's so much more than just a fuse. It's also a ... ahhh ... hmmm ... no, my mistake, it's just a fuse.
Audiophile power outlets? I'm kidding, right? Sadly, no I'm not, and as if that wasn't bad enough you should see the price - almost US$150 each. Mind you, they do appear to be a cut above the average in terms of build quality (so are so-called 'hospital grade' outlets in the US), but the price is just outrageous. In addition, you can even buy a set of outlet caps (special ones of course) for a mere $99 for a set of four. I'm unsure how they improve the electrical supply, but apparently they stop nasty EMI from sneaking in through the little holes where the plugs go, when no plug is inserted. They claim to be gold plated solid copper - perhaps they short all the pins together? That should make a nice bang.
Blocking the little holes predictably does diddly-squat - EMI doesn't sneak in through the holes - it doesn't need to because it can get to the wiring so easily through most interior walls and anywhere else where wiring is not completely shielded, not to mention the wires out in the street and all the way back to the power station.
But not your ordinary boring room treatment that actually works. No, you don't need to do any of that when you can go off and buy a few little bowls (with wooden stands of course) that will allegedly convert a $200 HTIAB into respectable hi-fi (no, I'm not kidding - a reviewer claimed pretty much exactly that). A movie intro showed "grandiose differences" and the sound became "voluptuous".

Who wouldn't drop what they were doing and rush out to spend around $3k for a collection of little ornaments? You can get the bowls, balls, pebbles (with or without glass jar), rocks and all manner of accessories for less than the cost of a small car! These things actually treat your room, better than any conventional proper room treatment, and if the cat doesn't decide they are actually cat toys you should be in for a real treat ... or perhaps not.
There is nothing at all wrong with using balanced connections, but some take it to extremes. A balanced connection is designed to reduce common-mode noise, whether injected into the cable by nearby power cables or due to earth/ ground loops between separate pieces of equipment. There seems to be a school of thought that balanced connections sound better in some way. If using balanced cables and inputs/outputs removes hum or other noise then yes, the system will sound better. However, in most cases with a hi-fi system it makes little or no difference. There are exceptions, and if you find that you need balanced interconnects to remove hum, then that's exactly what you should use.
I have even had enquiries about using Project 09 in fully balanced mode. In other words, two P09 boards, with one used for the pin 2 (hot) lead of an XLR, and the other for the pin 3 (cold) lead. The opportunities for things to go seriously wrong are many and varied, and every passive part needs to be matched to better than 0.1% or serious CMRR errors will result. In addition, there will be more noise (opamp and resistor thermal noise in particular), and no 'improvement' to sound quality.

Professional/ studio mixers all have balanced inputs and outputs, but all internal circuitry is unbalanced with the possible exception of the mix busses. No-one has ever considered that each and every module within the mixer should be duplicated to maintain the balanced connection right through the mixer. Apart from anything else, the cost would be prohibitive in the extreme.

Balanced connections are used for long mic cables and interconnects between different pieces of equipment. Cable runs in studios are often very long, going to and from patch bays and other gear that might be physically separated by some distance.

For a home hi-fi system, if you cannot hear any loop-induced hum or buzz, there is no reason to use balanced connections. Contrary to what seems to be common belief, a balanced connection does not sound 'better'. Floating (unearthed) signal sources such as microphones don't actually need to be balanced, but they are almost invariably balanced for historical reasons. Many other sources (CD & DVD players, etc.) are floating because they are double-insulated, but are earthed as a matter of course via the interconnects. Again, a balanced connection is only needed if there is a hum problem when the device is connected to a preamp.

There is absolutely no need for speaker signals to be balanced, as the signal is low-impedance, high-level and the speaker is floating with respect to mains earth/ground. Using a BTL amplifier is only worth consideration if you need the power, but not to 'improve' sound quality.

Only very recently I was asked about thermal crosstalk in dual operational amplifiers (opamps). This (amongst other things) is very real, but it has to be understood that limitations such as this are only relevant for precision designs where the opamp circuit has very high gain, and DC offset is critical. Just like capacitor dielectric absorption (aka 'soakage'), there is no need whatsoever to consider this for audio. It's simply not relevant with the relatively low gain and bandwidth needed to transfer an audio signal in typical hi-fi applications.

Where thermal crosstalk and other electrical cross-coupling effects become important is in measurement systems (yes, the very systems that so many audiophools abhor), where very high gain, exceptionally low distortion and wide bandwidth are critical to ensure the measurement is accurate. With most audio circuits, the current and power demands on opamps are very low, and the effects mentioned are completely irrelevant. Despite claims to the contrary, small temperature variations across the die don't produce audible artifacts - they can be measured easily enough sometimes, but don't cause significant non-linearity. Unfortunately, the audiophools will sometimes pick up on very technical articles that they usually don't understand, and extrapolate this to define the reasons for certain opamps sounding 'bad'.
You see, only the most expensive and difficult to get opamps are suitable for audio in their opinion. More pedestrian types are obviously inferior, because ordinary people can get them easily and cheaply - that can't be good. In reality, some of the types that are claimed to be so obviously better in all circumstances may not really be suitable at all. Some will sacrifice noise for incredibly low input current for example, and while this may be an important consideration for scientific or laboratory equipment, it does not translate that it's therefore better for audio.
The same logic applies to many other opamp functions - there is a huge range of specified parameters, and the rules of design indicate that the designer should choose those that are important for the application and ignore those that are irrelevant. What is irrelevant in one design may be highly relevant in another, one of the reasons that there is a mind-boggling number of opamps on the market. While a particular opamp may be ideally suited to precision sample-and-hold applications for example, it does not follow that the same device is suited to a phono preamp or other audio applications.

As noted earlier, there is a belief that some opamps introduce colouration, despite the fact that measured response is ruler-flat and distortion is immeasurable with normal equipment. It's alleged that somehow these measurements miss the subtle effects that stand out like dog's nuts to those blessed with golden ears. A friend claims that he can hear a TL072 in a system instantly - said he heard one in mine, despite the fact that there aren't any. No-one has hearing so good that they can hear the difference between competent opamps, regardless of their claims to the contrary. If test instruments have difficulty detecting differences between opamps, you can rest assured that you will generally not be able to hear anything of interest. Claims that some opamps have better bass than others are just silly - all opamps can give their designed gain down to DC if allowed to do so, and no-one can hear that!
There is one condition to all of the above though - the noise floor of all opamps auditioned has to be well below audibility. Noise is often a clue, and in some cases the noisy part might be preferred as it can appear to have better top end. The noise may add a tiny bit of 'sparkle' that the listener prefers, without necessarily noticing that there is a background hiss (carbon composition resistors, anyone?).
I coined the term "Black Knights" to describe the cult followers - see the Monty Python sketch of the same name and you can work out the explanation for this yourselves (it's from "Monty Python and The Holy Grail"). They are in complete denial - science must be fatally flawed if it disagrees with their listening experiences, and therefore they have a propensity for throwing out the baby with the bathwater. They will never admit that they may have been tricked by the experimenter expectancy or placebo effects - what they think they heard is reality, and anyone who disagrees is just wrong. End of story!
Unfortunately for everyone, these off-the-wall opinions are touted as fact all over the Net, where they are picked up by others who use the fatally flawed arguments as backup for their own (equally fatally flawed) opinions. While we might hope that they would simply run in ever-diminishing circles and disappear up their own exhaust-pipes, they seem to gather mass and keep growing. This isn't helped one bit when formerly credible engineers apparently succumb to Alzheimer's and fall into the dark side.
Once someone starts spouting utter BS about the "audibility of capacitors" they are no longer credible. In many cases apples are cheerfully compared with oranges and the 'comparison' is touted as reality - sometimes with parts stressed beyond their normal working limits. Unfortunately, most don't understand the reality behind these claims, and they gain acceptance as being real. Ultimately, it makes the world a poorer place, because proper investigation is derailed by the nonsense. The same goes for other audio nonsense, from green pens for the edges of CDs to silver cable in interconnects (or even signal transformers!). There is no credible evidence that any of the major or minor 'tweaks' will have any effect at all, let alone transform your system.

It's easy to dismiss most of the nonsense as harmless, but in reality it's no such thing. Countless people are duped into thinking that the rubbish posted is real, and once duped it's likely that they too will fall victim to the placebo effect. After all, no-one likes to admit that they have been conned, so will often (albeit inadvertently) jump onto the bandwagon as well. This perpetuates the belief that this or that tweak, rock, hologram or whatever has some benefit, when in reality it has achieved exactly nothing useful to the buyer.

A lot of the scams are enabled by the simple fact that no-one can actually define what 'perfect sound' really is. Innumerable speaker, headphone and amplifier makers claim to give you just that, but everything you listen to has already been tweaked and messed with in the studio or during mastering. The only way anyone can hear the sound exactly as it was recorded is to be in the studio or mastering suite, listening to it at the same volume and through the same equipment that was used when the tracks were finalised prior to CD, vinyl, SACD or Blu-Ray disc production. This is likely to be somewhat inconvenient, even assuming it's possible.

Over the years there have been countless attempts to convince buyers that someone has finally created the 'perfect' system. It's generally considered by audiophiles that most of the mass-market 'perfect' systems are anything but perfect, and in many cases they are probably right. However, there is no amount of tweaking or modification by adding magic components that will make any difference. I won't name names here, but most readers will be able to guess who has been responsible for some atrocious systems that continue to this day, and their owners are generally perfectly happy. They too have been influenced by advertising and consumer reviews that claim they are getting the best of the best, when the systems are best described as overpriced toys.

We have no defence against this kind of onslaught, and vast numbers of people now think that MP3 encoded music sounds 'good'. They have possibly never heard a decent sound system, and would most likely dislike it if they did because it sounds so different from what they expect. As a direct result, it's now extremely difficult to get decent CDs (of artists you actually like listening to). Most have been compressed so heavily that everything is at the same volume (loud), and they sound like crap. No tweak, cable, rock or 'magic' capacitor can fix that - it's ruined forever.

There are countless places in this article where I could have named names and products, but that would only serve to bring them up in searches when people are looking for some sanity. I flatly refuse to link or provide information that can be used in a search or improve page rankings in search engines, where the 'product' or 'service' is fraudulent.
Elliott Sound Products - Negative Impedance
Negative impedance (or resistance) is a rather odd concept, and it seems unlikely (impossible, even) to most people when it's first mentioned. However, there are some fairly common parts that exhibit negative resistance, albeit only over a limited range of operation. One of these is the humble neon lamp. When voltage is slowly applied, initially there is no conduction. When the voltage reaches around 70-90V (depending on the lamp), it suddenly conducts. There is then a region of negative resistance, where the voltage across the lamp falls, but the current goes up.
By definition, that is negative resistance. With the resistors we all know and love, their characteristic is positive resistance, so as the voltage rises or falls, the current rises or falls in direct proportion to the voltage change. Everyone in electronics knows Ohm's law, and it is (or should be) embedded permanently in one's subconscious for recall at a moment's notice ...

    R = V / I     where (of course) R is resistance, V is voltage, and I is current
There are several articles on the ESP site that look at negative impedance, and they are listed in the references section. There will be further references to parts of these articles throughout this text, because it was the concepts discussed that prompted a separate article to look at negative impedance more closely. There are some rather bizarre aspects to any negative impedance device, and this is especially true when theoretically 'ideal' negative resistances are looked at. Somewhat surprisingly perhaps, a NIC (negative impedance converter) based on opamps can approach the theoretical model, at least at low frequencies where the opamp has maximum gain - typically in excess of 100,000 (100dB).
For the remainder of this article, the term 'NIC' will be mostly used in place of 'negative resistance', 'negative impedance' or 'negative impedance converter'. Naturally, there will be exceptions, depending on the context. While I use the term 'impedance' most of the time, this can often be just simple resistance (no frequency dependency as the word 'impedance' implies).

Note too that there won't be an attempt to cover every different type of NIC, as there are just too many. I will concentrate on those that are interesting (or that I think are interesting), and I'll show as many examples as I can. Where possible, they will also be explained - necessary because negative impedance is not intuitive.

It's fair to say that some of the examples won't have a practical use, at least as shown. However, most are actually potentially worthwhile, and if nothing else they can be great fun to play around with. Simple opamp based NICs are easy to build so you can prove to yourself that negative impedance exists, and you may even see a use for one in your next project. However, this is probably unlikely, but one never knows.
Most of the circuits shown expect to be fed from a low impedance source, which in all cases must be earth (ground) referenced. Opamp power connections are not shown, nor are supply bypass capacitors or pin numbers. Dual supplies (±5-15V) are required unless otherwise noted. There is no guarantee that all circuits are functional as shown.
It's important to understand that any NIC can only ever be conditionally stable, meaning that some combinations may not work as expected unless all operating conditions are satisfied. We are used to opamps that are unconditionally stable, meaning that they will never oscillate or lock-up under normal linear operating conditions. This is provided that all datasheet conditions are met of course, including proper PCB layout, supply bypassing and component values that ensure that all voltages and currents are within specification.
In contrast, a NIC can become unstable or lock-up (e.g. switch to one supply rail or the other) or become 'dysfunctional' for any number of reasons. Component tolerances are usually far more critical than with conventional linear circuits. While a resistor or capacitor tolerance of ±10% will do no more than change the gain or frequency of a conventional circuit, that same tolerance may make the difference between a negative impedance circuit working or producing an epic fail.

In this article, reference to 'audio frequencies' doesn't necessarily mean audio or hi-fi, but simply means a circuit is usable at frequencies from a few Hertz up to perhaps 30kHz or so. Many industrial processes also work within the audio frequency range, but they are not used for speech or music. It's an important distinction, and it applies across many fields of electronics.

Although I've already described it, there are some things that you need to understand about negative impedance. As its name suggests, it can be used to make 'real' resistance simply 'disappear'. For example, if you have a signal generator with zero ohms output impedance, a load that's exactly 100 ohms, and you feed it with an impedance that's exactly -100 ohms, there is no resistance. None at all. This is identical to a short circuit, but the voltage developed across your 100 ohm load will be infinite, as will the current through it. This rather nonsensical situation cannot occur in real life for a variety of reasons.

It is not possible in an opamp (or even a power amp) based circuit, because they will always have a defined supply voltage (which limits the amplitude) and the output can only deliver the current that the device can provide (typically ±25mA or so peak current for an opamp). All real life sources have a finite (positive) impedance and/ or voltage and current limits. Few signal generators have an output impedance of less than 50 ohms, so you'll never normally see anything even approaching infinity.

Negative impedance is fundamentally weird, and a NIC behaves in what may seem to be incomprehensible ways until you examine it closely. If you have a simulation package on your PC, it may be possible to simply tell it that a resistor has a resistance of -100 ohms - an instant negative impedance, and you don't even have to build much of a schematic. I use SIMetrix [ 9 ], which is perfectly happy for you to do that. Other simulators may or may not behave the same way.

The question is how and why your load resistor can be made to 'disappear'? If the +100 ohms (your load) and the -100 ohms (negative resistance) cancel, the whole circuit must be a short circuit. In order for it to appear to be a short circuit to the signal generator (assuming zero ohms impedance), it must draw infinite current. That means an infinite voltage across the -100 ohm resistor, and an infinite (but opposite polarity) voltage across your +100 ohm load. The two cancel, and the signal generator simply sees a short circuit. In the following drawing, the generator has an internal resistance of 50 ohms - this removes the requirement for infinite voltage and current because it limits the current to a sensible (and achievable) level.
Figure 1 - Negative Impedance Concept
Basic laws of physics (Ohm's law in this case) show what must happen in the circuit. The two external resistances cancel perfectly, so the generator sees a short circuit at the output. There's zero voltage, but a current of 20mA flows through R1 and R2, limited by ZGen, the signal generator's internal impedance (1V with 50Ω in series). Therefore, the voltage across R1 (negative resistance) must be -2V, with +2V across R2 (the load). It doesn't matter if the voltages are AC or DC, but of course with AC, the voltage across the negative resistance must have its phase reversed so the two voltages cancel out. The result is a short circuit at the generator terminals, and R2 (the load resistance) has effectively 'disappeared'. Note that 'disappear' is in quotes because it's only an apparent disappearance - the resistance is still there, but its influence is removed by the NIC.
When you make any calculations, all resistances must be considered, including the generator's output impedance. I encourage you to do some sample calculations so that the currents and voltages can be determined for different resistances, as that will help you to understand how it all joins up.

    I = V / R     ('R' is the total resistance, positive and negative, and including the resistance of the generator)

    R = -100Ω (R1) + 100Ω (R2, Load) + 50Ω (ZGen) = 50Ω
    I = 1V / 50Ω = 20mA
    VLoad = R2 × I
    VLoad = 100Ω × 20mA = 2V
While the concept may seem odd, it all works out easily. We'll again assume a voltage of 1V (it doesn't matter if it's AC or DC), and see what happens when the load resistance is reduced to 40 ohms. If you don't run through a few simple calculations it won't make much sense, so it's well worthwhile to spend a few minutes.
    I = V / R     ('R' is the same as above)

    R = -100Ω + 50Ω + 40Ω = -10Ω
    I = 1V / -10Ω = -100mA
    VLoad = R2 × I = 40Ω × -100mA = -4V
This shows how (and why) the polarity reverses (or an AC signal is 180° out of phase) when the negative resistance is greater than the total 'real' resistance. Whether you calculate this or run a simulation, you will get exactly the same results. The same formulae work for any combination of voltage, 'real' resistance and negative resistance. As you can see, nothing more involved than Ohm's law is needed for complete analysis.
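The two worked examples above can be checked with a few lines of code. This is just the series-loop arithmetic from the text, treating the NIC as an ordinary resistor with a negative value; the function name `loop` is mine, not anything from the article.

```python
# Ohm's law around the series loop of Figure 1: generator impedance,
# NIC (negative resistance) and load. A negative total resistance
# reverses the polarity of the current and the load voltage.
def loop(v_gen, z_gen, r_nic, r_load):
    r_total = z_gen + r_nic + r_load   # negative values just add normally
    i = v_gen / r_total                # loop current
    return i, r_load * i               # current, voltage across the load

# Figure 1 case: 1V source, 50 ohm generator, -100 ohm NIC, 100 ohm load.
i, v_load = loop(1.0, 50.0, -100.0, 100.0)
print(i, v_load)    # 20mA, +2V - the load 'sees' twice the source voltage

# Reduce the load to 40 ohms: the total resistance is now -10 ohms,
# so both the current and the load voltage reverse polarity.
i, v_load = loop(1.0, 50.0, -100.0, 40.0)
print(i, v_load)    # -100mA, -4V
```

Nothing more exotic than addition and division is involved, which is the point: once the negative value is accepted, Ohm's law handles the rest.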
Making a simple NIC using an opamp works rather well. One of the first things you will find is that the absolute value of the load vs. the negative impedance is important. As seen above, for a negative resistance/ impedance to work in the manner you expect, the value of the negative impedance must be lower than the actual (positive) impedance. If the negative resistance is (say) -100 ohms, then the total positive resistance will usually be greater than +100 ohms. Once the positive resistance is less than 100 ohms, the negative resistance becomes the dominant part of the equation, and the signal polarity is inverted as shown in the calculation above.

There are a few common devices that exhibit negative impedance - at least over a limited range. A neon lamp is one of the easiest to analyse and experiment with, because the voltage is passably safe (less than 100V), and it's easy to build a simple relaxation oscillator using nothing more than a neon lamp, a resistor and a capacitor. If you try to do the same thing with something that does not have a negative resistance region, all you get is a steady DC voltage. The negative impedance region means that it becomes an oscillator.
Figure 2 - Neon Lamp Relaxation Oscillator
When the voltage is below the neon's strike voltage, no current is drawn. Once the lamp 'strikes' the neon gas ionises, and the negative impedance causes the voltage across the neon to fall as the current through it rises. This is known as the Pearson-Anson effect [ 4 ]. The capacitor will be discharged until the voltage across the neon lamp is insufficient to maintain ionisation, the lamp then extinguishes and the cycle repeats. The resistor must be a fairly high value, or it may be able to provide enough current to maintain ionisation within the lamp, and the circuit won't oscillate. The oscillator circuit shown will stop oscillating when the supply voltage is a little over 260V (a continuous current of around 1.9mA). Changing the voltage also changes the frequency, but the output amplitude is not affected.
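The oscillation rate follows directly from standard RC charging arithmetic: the capacitor charges from the extinguish voltage towards the supply until the strike voltage is reached. A minimal sketch follows - the ~81V strike and ~61V extinguish figures are those quoted for this circuit's waveforms, but the 150V supply, 1MΩ charging resistor and 100nF capacitor are assumed values for illustration only, and the fast discharge through the lamp is ignored.

```python
# RC relaxation oscillator timing: time for the capacitor to charge from
# v_start to v_end while heading asymptotically towards v_supply.
import math

def charge_time(v_supply, v_start, v_end, r, c):
    # Standard RC charging law: t = RC * ln((Vs - Vstart) / (Vs - Vend))
    return r * c * math.log((v_supply - v_start) / (v_supply - v_end))

V_SUPPLY = 150.0                 # assumed supply voltage (above strike voltage)
V_STRIKE, V_EXTINGUISH = 81.0, 61.0   # lamp thresholds from the waveforms
R, C = 1e6, 100e-9               # assumed charging resistor and capacitor

t_charge = charge_time(V_SUPPLY, V_EXTINGUISH, V_STRIKE, R, C)
f = 1 / t_charge                 # discharge is much faster, so it's neglected
print(f"period ~ {t_charge * 1000:.1f} ms, frequency ~ {f:.1f} Hz")
```

Raising the supply voltage shrinks the charge time (the exponential is steeper where the capacitor sits), which is why changing the voltage changes the frequency while the 20V peak-to-peak sawtooth amplitude, set by the strike and extinguish voltages, stays put.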
Figure 3 - Neon Lamp Oscillator Waveforms
As you can see, the waveforms are pretty much what you'd expect. The average voltage across the neon measured 71V, and the neon strikes at about 81V and extinguishes at 61V (the sawtooth output is 20V peak to peak around the 71V average). Peak current is monitored across the 2.2k resistor, and measures 14 volts, so the peak current is 6.4mA with a total duration of only 2ms. The negative resistance of a neon lamp is not great, and there is a significant positive resistance as well. However, if the negative resistance region didn't exist the circuit could not oscillate.
As an aside (since it has little to do with the main topic here), neon lamp oscillators have been used as frequency dividers, and were used in some early electric/ electronic organs. The frequency was set to be a little lower than half the input frequency, and the discharge spike from the previous divider triggered the neon to fire on each second input pulse. This divided the frequency by two (one octave). A typical organ using this type of divider needed a great many neon lamps, and the supply was regulated to ensure stable operation.

All gas discharge lamps exhibit the same general characteristic of negative impedance. While you could also build an oscillator with a full sized fluorescent tube, it would be somewhat unwieldy due to the size of the tube. It would also require a dangerously high voltage to cause ionisation, so it's not a recommended DIY project. In practice, the small neon is really the only gas discharge lamp that's useful for the experimenter.
Another negative impedance device that used to be reasonably common was the tunnel diode. These are now very hard to get, as are Gunn and IMPATT (IMPact ionisation Avalanche Transit-Time) diodes, which also have a negative impedance region. These diodes are used at microwave frequencies, and will not be covered here.

A DIAC is another negative impedance device, and they are common in TRIAC lamp dimmer circuits. An example circuit is shown in Project 159 (Leading Edge Dimmer, Figure 6). DIACs are classified as bidirectional trigger diodes, with a breakdown voltage between ~28V and 45V. These will also oscillate with a parallel cap and series resistor. A DIAC oscillator uses the same basic circuit as a neon oscillator, but at a lower voltage. These devices remain readily available at low cost, but the requirement for them is now somewhat diminished.

Another of the better known negative impedance semiconductors is the unijunction transistor (UJT), such as the 2N4871 (now discontinued). A variant is the PUT (programmable unijunction transistor), such as the 2N6027 and 2N6028 (also discontinued). As you can see, there's a pattern here, with many of the negative impedance devices being no longer available - at least from major suppliers. You can probably get them from eBay or other suppliers, but you might not end up with what you wanted and paid for.

Most of the tasks that used to be performed by UJTs and PUTs are now done with timers such as the 555, or by means of a microcontroller. They were always something of a niche product, and an unkind person might even suggest they were a solution looking for a problem. To some extent that was always true, because their uses were somewhat limited - primarily simple oscillators that didn't need great accuracy or stability. There aren't many applications that can't be done with more 'conventional' parts, and the need for esoteric negative resistance parts is minimal.

The circuits described in this section are all non-linear, and aren't suitable for any kind of signal processing. For that we need to become more adventurous, and look at linear negative impedance circuits. There are several different types, with some being pretty much purely theoretical (i.e. they don't do anything useful) and others being used in advanced circuitry. They remain relatively uncommon, but may be hidden inside ICs designed for high-performance filters (for example).
There are several circuits that can be used to make a basic NIC, and building one with an opamp and a few resistors is quite simple. One characteristic that is shared by all NIC circuits is positive feedback, which has to be tightly controlled or the circuit will oscillate. That means that using a NIC to drive any load that is unpredictable (e.g. anything that can be changed or altered by the user) is unwise. As noted in the ESP article Effects Of Source Impedance on Loudspeakers, using negative impedance for any loudspeaker is probably a bad idea. In theory there appear to be advantages, but in reality this rarely turns out to be the case.

One area where negative impedance really does work is explained in Transformers For Small Signal Audio. That article also shows oscilloscope captures of the waveforms expected in use. Using a NIC to drive an audio transformer means that the primary winding resistance can be (at least partially) cancelled out, allowing higher output levels, lower distortion and improved response at low frequencies. C2 is an absolute requirement, which is unfortunate but unavoidable.

With C2 shorted out, the circuit has extremely high gain at DC and may easily become unstable. With C2 in place, the response rises at very low frequencies. A NIC transformer driver should always be preceded by a high pass filter to remove infrasonic energy. C1 goes a small way towards fixing this problem, but it's not a complete cure. The simple act of starting and/ or stopping a signal creates an infrasonic 'disturbance', and the NIC makes it worse than conventional voltage drive. With the values shown (and a similar transformer), response is less than 1dB down at 10Hz.
Figure 4 - NIC Used To Drive An Audio Transformer
The NIC is based on U1, which can be any normal opamp, a pair of paralleled opamps (for improved drive current), or an opamp with a buffer to allow it to drive a lower impedance. The output impedance is set by R4, and is -50 ohms to suit this transformer. You must determine the correct value for the transformer you want to use, with a value that's a little less than the winding resistance. When the opamp is attempting to remove distortion caused by partial saturation, the output current may be much higher than you anticipate.
Note: Be very careful with this arrangement. It works exactly as claimed, but the negative impedance set by R4 must be less than the primary winding resistance. If you use more negative impedance, the circuit will oscillate at a low frequency, determined (at least in part) by the inductance of the transformer. If you rely on a simulator, you can easily be lulled into a false sense of security.

C2 and R3 are used to ensure that the circuit has unity gain at DC, and without them the DC conditions in the circuit are seriously unpredictable. By using this arrangement, the output impedance is reduced because the transformer's primary resistance is (mostly) cancelled out. This also means that the circuit will 'automatically' pre-distort the input signal to compensate for transformer distortion caused by partial saturation of the magnetic core.

The transformer is wired with the secondary reversed, because the NIC is inverting. You also need to be aware that you 'lose' more than half the available output from the driving opamp, some across the resistor (R4) and some because the opamp needs headroom so it can 'pre-distort' the signal to produce a clean transformer output. This is unlikely to be an issue, because the small, cheap transformers that need this technique most usually can't handle more than around 1V RMS anyway. This limit is at low frequencies, typically 40-50Hz (transformer dependent of course), and is due to core saturation.

Notwithstanding the warnings, and as unlikely as it may seem, negative impedance drive works very well. The reason is superficially complex, but it's actually quite simple. A transformer with no winding resistance and driven by a pure voltage source (i.e. zero ohms) has no distortion. Saturation distortion occurs because the transformer draws high non-linear current as the core starts to saturate, and this distorts the voltage waveform across the primary due to the winding resistance. When a NIC is used to drive the transformer, the winding resistance can be cancelled, so the transformer appears to be driven from an almost 'perfect' voltage source. It is inadvisable to try to cancel all of the winding resistance, because a small variation in the actual resistance will make the system unstable. As seen above, the NIC provides a -50 ohm output impedance, driving a 55 ohm winding. The effective impedance across the primary is therefore only 5 ohms, instead of 55 ohms.

This is stable, but we also need to ensure that extremely high gain is not available at DC, hence the addition of C2 and R3. The resulting 5 ohms of effective winding resistance means that the saturation distortion is almost completely cancelled - at least up to the point where the driving opamp runs out of voltage or current. In addition, the low frequency response is extended, but again, this is restricted by the opamp's output voltage and current limits.
At frequencies well above those that cause saturation, the opamp does not see a very low impedance. It sees the (transformed) impedance presented to the secondary of the transformer, plus the secondary winding resistance. In the above (assuming an ideal, non-saturating transformer), the peak current is developed at around 6Hz, where the NIC is compensating for the inductance of the transformer (2 Henrys as shown). With a 1V input, the maximum current is 10mA, but this is overcome by including a high-pass filter that restricts the response to perhaps 15Hz and above.
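The resistance cancellation described above is simple arithmetic, and can be sketched as follows. The 55Ω winding and -50Ω NIC values are taken from the text, and the stability check reflects the warning that the negative impedance must be smaller in magnitude than the winding resistance:

```python
def effective_winding_resistance(r_winding, z_nic):
    """Effective series resistance seen by the transformer primary when
    driven by a NIC with output impedance z_nic (a negative number)."""
    r_eff = r_winding + z_nic
    if r_eff <= 0:
        raise ValueError("negative impedance exceeds winding resistance - "
                         "the circuit will oscillate at a low frequency")
    return r_eff

print(effective_winding_resistance(55.0, -50.0))  # 5.0 ohms
```

If the NIC were set to -60Ω against the same 55Ω winding, the effective resistance would be negative and the function raises - the code analogue of the low-frequency oscillation warned about above.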
Figure 5 - Saturating Transformer Test Circuit
Real transformers saturate, and most simulators don't do a particularly good job of showing the waveforms you get when a transformer approaches saturation. Figure 5 is an attempt to demonstrate the effect, and it works reasonably well, at least to the extent that it can prove the point. At high frequencies (1kHz and above), distortion is minimal with or without the negative impedance drive. With an input of a little over 3V peak at 44Hz, the distortion when driven from a voltage source is 7.4%, which falls to 0.14% when the NIC is used. With negative impedance drive, the transformer's output voltage is also higher, and low frequency response is extended.
In the article Transformers For Small Signal Audio, there are waveforms captured from a real transformer as it's driven towards saturation. The test circuit above only goes part-way towards simulating saturation, and it doesn't reproduce the actual voltage or current waveforms that exist in a physical transformer. The ability of the NIC to minimise the distortion is just as real though.
There is one very common NIC that's used in several 'explanations' found on the Net. While it certainly does what a NIC should do, it's actually not a particularly useful arrangement in the form shown. The point marked '-Z' shows where the negative impedance is found, and the value is equal to R1. If Rin is made to be 900 ohms, a 1V (peak) input signal (AC or DC) will be inverted, and a voltage of -10V (or 10V AC with inverted polarity) is seen at the opamp's input. The current from the generator is determined by the sum of +900Ω and -1k, i.e. -100 ohms. Therefore, a 1V DC input will pass -10mA ( I = V / R ), and not +1.11mA as would be the case if the input resistor (Rin) were returned to earth/ ground rather than the NIC input.
There are several (often wild) claims made about the circuit, including that you can substitute capacitance or inductance for any or all of the resistors (impedances) shown. This allegedly means that you can make a negative capacitor (an inductor) for example, but don't expect some of the published circuits to actually work with real opamps. This circuit is of minor interest only as an analogue 'building block', and has been included here only because it's so common on the Net.
+ +
Figure 6 - Common NIC Used For Explanations
This NIC is often shown with a voltage input, which is the basis for most explanations. In real life, the input will normally be a current, and applying a voltage (from a low impedance source) doesn't achieve anything you can work with so easily. However, as a first analysis it's helpful to see what happens. Note that the voltage source must have a very low output impedance, or the basic analysis doesn't work.
Let's assume an input voltage of +1V, applied directly to the input of the NIC (resistor Rin shorted). Since the impedance is negative, we expect -1mA to flow from the signal source, not the +1mA we'd get from a 'normal' resistor. The opamp has a gain of two, set by R2 and R3. That means that the opamp's output pin sits at +2V (remember there's +1V input, and the opamp is non-inverting). Therefore, a current of 1mA flows through R1 - back to the voltage source! A meter will show a current flow of -1mA into the NIC, but would show +1mA into a normal resistor.
Now let's assume an input current of 1mA, created from 1V input, passed through a 1k resistor (Rin). The NIC has an impedance of -1k, so the two resistances will cancel. That means that the voltage source that should be supplying 1mA sees a dead short circuit, because the two resistances completely cancel. This assumes that the opamp used for the NIC is capable of infinite current, derived from an infinite supply voltage. Unfortunately, these are hard to come by.
I suggested an input resistance of 900 ohms because that lets you analyse the circuit easily. The process of analysis doesn't need any maths, apart from addition, subtraction and Ohm's law. It's too easy to completely mess up people's understanding by supplying formulae to try to 'simplify' the explanation. It's worth noting that although the circuit shown is a very common example, it's not actually useful in this form. For example, it will not work driving a transformer (as shown above), and with no input connected, it will swing straight to one supply rail (polarity depends on the opamp used).
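The two cases described above (Rin shorted, and Rin = 900Ω) need nothing more than Ohm's law, and can be checked with a couple of lines of arithmetic:

```python
def nic_current(v_in, r_in, z_nic):
    """Current drawn from the source through series resistor r_in into a
    NIC presenting impedance z_nic (a negative value). A negative result
    means current flows back into the source."""
    return v_in / (r_in + z_nic)

# Rin shorted: 1V into the -1k NIC draws -1mA (current flows backwards)
print(f"{nic_current(1.0, 0.0, -1000.0) * 1e3:.1f} mA")   # -1.0 mA

# Rin = 900 ohms: the net resistance is -100 ohms, so 1V drives -10mA
print(f"{nic_current(1.0, 900.0, -1000.0) * 1e3:.1f} mA") # -10.0 mA
```

As Rin approaches 1k the denominator approaches zero, which is the point where a real opamp clips because it can't supply the infinite voltage and current demanded.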
The only way you will understand this circuit is by running simple calculations, or by building one to see what it can do. When an input resistance is used, things can go pear-shaped very quickly if you aren't aware of what's happening. However, building one may be seen as a fool's errand, because its usefulness is so limited. It's important to understand that the negative impedance must never be 'dominant', as that means the circuit has more positive feedback than negative.
The last sentence above is one thing that's never mentioned with this circuit, which is a shame, because it's seriously important. The value of input resistance (Rin) must always be less than the negative resistance (R1). As the values converge, the gain of the circuit climbs rapidly until the opamp clips, because it can't produce the infinite voltage and current needed to completely cancel the external positive resistance. When the positive impedance is greater than the negative resistance, you have a Schmitt trigger (sometimes referred to as a 'regenerative comparator'). This does not happen with a simulated 'ideal' opamp, but on the test bench there is no question as to what works and what does not. Simulating with an opamp model produces the same result as the test bench.
I don't recall where the next circuit came from, and an extensive search failed to find it again. Hence, there is no reference for it. By using negative impedance, the circuit's Q (quality factor) can be much higher than can easily be obtained from a multiple feedback (MFB) bandpass filter. It has the advantage that the opamp doesn't require a large gain-bandwidth product, but it's more complex than the MFB filter. Because it's such a bizarre idea, I ran a bench test to check whether it really works, and the answer is a qualified "yes". More on this below.

The output is at a high impedance, and a follower (U2) is needed to ensure the second filter (Rt2 and Ct2) isn't loaded down, as this would both change the frequency and reduce the filter's Q. There are effectively two separate filters, with Rt1 and Ct1 forming a high pass section, and Rt2 and Ct2 forming the low pass. Without the NIC, this circuit would have a Q of 0.5 (same as a Wien bridge), but the application of negative impedance changes this completely. The combination of a series and parallel RC network is the same as you find in a Wien bridge, but in this version, the NIC is between the two networks.
Figure 7 - Negative Impedance Bandpass Filter
When the ratio of R1 and R2 is exactly 1:2, the negative impedance is equal to the impedance of the two RC networks, and the input sees a short circuit, so the opamp is expected to provide an infinite current. Naturally, this cannot occur, and even 1% resistor tolerance is enough to reduce the Q of the filter dramatically. In theory, a Q of over 1,000 is possible, but the circuit will be unstable and unusable, and it will simply oscillate. This is tempered somewhat by making R1 5.1k, reducing the Q to around 24. This is still a high Q filter.
Note that for stability, R1 must be greater than half the value of R2, assuming Rt1 and Rt2 are identical, and likewise Ct1 and Ct2. This is affected by the respective tolerance of the frequency determining parts, and these can (and do!) reverse the way R1 works. For example, if Rt1 happens to be a little smaller than Rt2, then R1 must be less than R2/2 and vice versa. The tolerances are small, and for high Q there is very little room for error.

While this circuit simulates (and works in my test setup) perfectly, it may not work unless you get everything right. It's also important to realise that very high Q filters can take a long time before the output stabilises. When the output has reached its final amplitude, it's said to be operating under 'steady state' conditions. The time for a signal to reach full amplitude with a high Q resonant filter can be much longer than expected. A filter with a Q of 30 will take about 100ms to reach the steady state maximum. When the signal is stopped, it takes a similar amount of time for the signal to decay back to zero. Very high Q filters are never used in audio, but are fairly common in other applications, such as test and measurement (T&M). Unless you have a specialised application, you will never need this filter.

Because this is such an odd circuit and it probably shouldn't work, I had to put one together to see what it could do. The result is shown below, with a tone burst signal (150 cycles on and 150 cycles off). To be able to obtain very high Q, resistors and capacitors need to be better than 0.1% tolerance, and a very small change in the wrong direction will turn a filter into an oscillator.
Figure 8 - Negative Impedance Bandpass Filter Response
The tone burst response of the NIC based bandpass filter is shown above. It was operated with a Q of just under 40, at a tuned frequency of 158Hz (100nF and 10k, ± component tolerance). No attempt was made to match the components, but I was able to get a Q of 80 (that's very high - it means a bandwidth of just 2Hz for a 158Hz filter). Any attempt to increase the Q further caused it to oscillate. The output is 5.8V peak (4.1V RMS) with an input of 57mV peak (40mV RMS) - a gain of 100 (near enough).
A resistance change (of R1) of only 24 ohms (for a nominal 5k resistor) changed the Q from 40 to 80. From that it's apparent that component sensitivity is very high with high Q. As you can see, with a Q of 40, it takes a little over 250ms (1 division plus a bit) for the signal to build up to the maximum, and the same to fall to zero. This is not a limitation of this particular circuit - it applies to all high Q filters.
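The build-up time follows from the exponential envelope of any resonator: the amplitude rises as 1 - e^(-t/τ), with τ ≈ Q/(π·f₀). A minimal sketch, using the measured Q of 40 at 158Hz (the 95% settling threshold is an assumption, chosen only to illustrate the quoted figure):

```python
import math

def ring_up_time(q, f0, settle_fraction=0.95):
    """Approximate time for a bandpass filter of quality factor q and
    centre frequency f0 to reach settle_fraction of its steady-state
    amplitude (envelope 1 - exp(-t/tau), with tau = q / (pi * f0))."""
    tau = q / (math.pi * f0)
    return -tau * math.log(1.0 - settle_fraction)

print(f"{ring_up_time(40, 158) * 1e3:.0f} ms")  # ~241 ms
```

The result is in good agreement with the 'little over 250ms' seen in the tone burst capture, and the decay obeys the same time constant.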
As an oscillator, you might imagine that it's a fairly simple arrangement that should perform well. However, distortion performance is very ordinary, and when set up for reliable oscillation, expect it to be around 3% THD. This can be improved, but not without a thermistor or other form of gain control element. For a range of oscillators (not including this one), see Sinewave Oscillators - Characteristics, Topologies and Examples.

There are several other rather complex negative impedance circuits that are sometimes used to create very sharp filter slopes. One of these is the 'GIC', covered next.
The GIC (generalised impedance converter) is also known as an FDNR, or frequency dependent negative resistance [ 6 ]. These are probably one of the least common filter topologies, and they are used mainly for specialised requirements. Sometimes I wonder if they are used just so people can show how clever they are (and working out one of these is not for the faint-hearted). So yes, the designers are clever, but it's rare that most users will ever need one. However, since we are looking at negative impedance it would be remiss of me not to mention these circuits. An example is shown below, a low pass filter tuned to 1,020Hz and with a 12dB/ octave rolloff.
R4 changes the circuit's total Q, which can be varied over a small range without substantially affecting the filter frequency. As shown, the filter is Butterworth (maximally flat amplitude). R2 and R3 only need to be the same value, and if both are changed operation is not affected. Perhaps surprisingly, C1 and C2 can also be changed to modify the Q, but both should be the same value. The frequency is largely determined by R1 and C3, but is √2 times the calculated frequency (at least for the example shown here). However, all values are inextricably linked, and the frequency can be changed by scaling capacitor values alone. For example, changing C1, C2 and C3 to 47nF reduces the frequency to 217Hz, but Q is unaffected.
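The capacitor scaling mentioned above is a straight inverse proportion. The quoted 1,020Hz to 217Hz change is consistent with original capacitors of 10nF - an assumption here, since the figure's actual values aren't restated in the text:

```python
def scaled_frequency(f_orig, c_orig, c_new):
    """GIC filter tuning frequency scales inversely with capacitance,
    provided all frequency-determining caps change by the same factor."""
    return f_orig * c_orig / c_new

# Assumed original caps of 10nF, scaled to 47nF
print(f"{scaled_frequency(1020.0, 10e-9, 47e-9):.0f} Hz")  # ~217 Hz
```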
I did warn you that this is a difficult circuit to analyse, and the guidelines above (and that's all they are) may help you towards some fruitful experiments. It may also be enough to scare you away, but I'm hopeful that someone will get something from my meagre efforts.
Figure 9 - Generalised Impedance Converter 2nd Order Filter
The GIC filter doesn't work any better than a Sallen-Key filter at audio frequencies, but needs an additional opamp and several more resistors. It also has a relatively high output impedance, so an output buffer is essential. The original idea was apparently to minimise the 'real world' limitations of opamps, but these days I doubt that there are too many good reasons to use a topology that is anything but intuitive. The impedance conversion is used to make capacitors act like inductors, in much the same way as a gyrator (covered next). The filter shown is (roughly) equivalent to an inductor-capacitor (LC) low pass filter, with the high Q inductor being synthesised by the GIC. This is one area where the GIC excels - making a simulated inductor with a very high Q, and without excessive loading on the opamps. That is something that's hard to do with 'ordinary' gyrators or simulated inductors.
The circuit shown above is for the sake of completeness, and a detailed analysis is not going to happen. If you think that this approach is the solution to your filtering woes, then feel free to look up more info on the Net. There's plenty to be had, but I leave it to the reader to search out if s/he wants to pursue this type of circuit. (Yes, I am faint-hearted when it comes to complex maths - I prefer the simplest solution wherever possible).

One of the main reasons that the GIC topology is used is when opamp bandwidth would otherwise compromise performance. This will usually become a problem when dealing with high frequency filters, where the GIC will (hopefully) provide better performance than more common filters (Sallen-Key, multiple feedback, etc.). These filters are complex though, and a deep understanding is necessary to make sense of what's going on.
The gyrator [ 7 ] is a common circuit, and isn't normally a negative impedance device in the true sense of the term. It's included here because it reverses the effects of reactive elements, so capacitors can be made to act as inductors and vice versa. There is rarely any need to convert inductance to capacitance, but if you really do want a particularly poor capacitor it's easily done. When reversing the action of a capacitor to create an 'inductor', the final circuit possesses all the things about inductors that make them the most flawed electronic part known. Adding a NIC changes things, but at the expense of added complexity which isn't warranted in most circuits.

Despite their shortcomings, even basic gyrators remain a useful tool in the electronic enthusiast's arsenal, because they are not affected by stray magnetic fields, and can easily be adjusted to an exact inductance. This is very hard to achieve with real inductors, which are also prone to saturation (if using a ferrite or iron core), and are usually far more expensive to produce than a simple opamp circuit. There's a complete article that looks at gyrators in general, but here we will only look at one that utilises negative impedance to remove the traditional gyrator limitations (winding resistance in particular).

The basic NIC gyrator is shown below. When fed with a signal, it behaves like an inductor in nearly all respects, except it has almost zero winding resistance. Just like a real inductor, it even provides a back-EMF when a DC input is disconnected, but the amplitude is limited to the opamp's supply voltage.
Figure 10 - NIC Gyrator
Inductance is determined by R1 × R2 × C1, and with the values shown it's 1 Henry. R3-R6 only need to be the same value, and the value used (10k) is a suggestion only. Adding the NIC to the gyrator increases the number of parts used, but performance is greatly improved. A traditional single opamp gyrator is hard pressed to minimise the effective winding resistance, but the NIC removes it almost entirely. The circuit is limited only by the opamp performance, but even fairly pedestrian opamps will perform surprisingly well.
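The inductance formula quoted above is a simple product. A minimal sketch - the 1k, 10k and 100nF values are an assumed combination that yields the stated 1 Henry, since the figure's actual values aren't reproduced in the text:

```python
def gyrator_inductance(r1, r2, c1):
    """Simulated inductance of the NIC gyrator: L = R1 * R2 * C1."""
    return r1 * r2 * c1

# Assumed values: 1k, 10k and 100nF give 1 henry
print(f"{gyrator_inductance(1e3, 10e3, 100e-9):.3f} H")  # 1.000 H
```

Scaling any one of the three parts scales the inductance proportionally, which is why a gyrator can be trimmed to an exact inductance so easily.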
The drawing above is by no means the only version that you'll see. The article on gyrators shows an alternative circuit, and they are often drawn to look like the GIC topology shown above. Not unreasonable, because that's essentially what it is - a GIC or FDNR. By using negative impedance, the otherwise (sometimes) troublesome equivalent of winding resistance can be eliminated. Most of the time, it's not necessary though.
NICs are (or have been) used in a number of seemingly odd applications. Bell Labs devised a technique in the 1940s where negative impedance amplifiers were used on long transmission lines as repeaters. A NIC provides a lower cost solution than a traditional repeater, which requires a pair of 'hybrid' circuits (2 to 4-wire converters and 4 to 2-wire converters - see 2-4 Wire Converters / Hybrids) and two amplifiers. These were refined over the years, and there are many patents on the technique [ 8 ]. These will not be covered here, as the application is too specific to telephony, and isn't likely to be useful for general applications.

Similarly, negative impedance is sometimes used in antenna matching circuits, in order to correct for impedance differences between a transmission line and an antenna or amplifier. It can be very difficult to get any sensible information on some of the applications, because the information is hosted on sites that expect you to pay for it.

Patents can be a good source of information, but the information is usually couched in 'patent speak', which is not always intelligible unless you are a patent attorney. An example is shown below, [ 10 ] from a patent granted in 1958. It is described as a "Negative-Impedance Transistor Oscillator". For its day I have no doubt it was novel, but likely with somewhat limited application. Stability is poor, but it is interesting enough to show here.
Figure 11 - Negative Impedance Transistor Oscillator
The circuit has been simulated (no, I'm not going to bother building one), and it appears to work. The output waveform (across the load resistor) is shown. It oscillates at 2.3kHz with the values shown, which bears no relationship to the tuning components (C2 and R7). According to the patent information, the points marked 'X' are 'short circuit stable' ports, but the amplifier module is unstable if they are left open. The points marked 'Y' are the opposite. The network across the 'X' points was shown in the patent drawings, but the circuit works without it. If you are interested, it's worthwhile re-drawing the circuit. You'll find that it's rather similar to the transistor equivalent of a silicon controlled rectifier (SCR).

In operation, C2 charges from the amplifier, and when a critical (trigger) voltage is reached, the transistors conduct with an effective negative impedance. This discharges C2 very quickly, and the cycle repeats. The transistor cross-coupling ensures that each supplies the other with base current, so the turn-on process is regenerative. Conduction ceases when C2 is discharged, which happens in about 50µs with the values shown.

Based on the simulation I did, the circuit is not particularly stable and its usefulness is somewhat doubtful. It's shown only as an example of early attempts. In its day, transistors were still fairly primitive, and there weren't any of the more advanced devices that came along later. In reality, it might not be genuinely useful for anything - there are plenty of patents for things that are either useless, pointless or both.

When wiring an amplifier, it can be surprisingly easy to create an 'accidental' negative impedance converter. All that's required is to fail to ensure that the ground wiring for the audio inputs is connected to the right place. Most amp PCBs will bring the signal wire and its shield (or separate ground wire if the inputs are unshielded) directly to the PCB. If you connect the input RCA (or any other) connector directly to the chassis, it's possible to introduce a positive feedback component into the overall circuit. Consider the drawing below - a small resistance created by the ground wiring (and/ or the chassis itself) creates a small amount of negative impedance.
Figure 12 - 'Accidental' Negative Impedance Circuit
The connections shown as 'Oops!' may not seem likely, but it's easier than you might think. If the speaker return is connected to the chassis (and not directly to the filter capacitor centre-tap), simply using grounded input connectors can create this very problem. You need to be very careful with the inputs, and bear in mind that some external equipment (a preamp or radio tuner perhaps) may join the input connectors to the mains earth (ground) lead.

With the values shown for R2 and R3, the amp's gain should be 23 (27dB). These are the values used in most ESP designs, and many others as well. If the wiring from input connectors to the amplifier fails to include a ground (usually via the shield) directly to the amp PCB, the 'stray' resistance (shown as 50mΩ) provides a small amount of positive feedback, increasing the gain to just under 27 (28dB) with an 8Ω load. The gain is load impedance dependent, so it will vary along with the impedance of the loudspeaker. In the example shown, the output impedance is -1.1Ω, which may be enough to cause sound quality to suffer.
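The numbers above follow from treating the stray ground resistance as a positive feedback element: the output impedance is approximately -(gain × Rstray), and the gain into a given load rises accordingly. A minimal sketch using the values from the text (the simple formulas are a first-order approximation, not a full amplifier model):

```python
def accidental_nic(gain, r_stray, r_load):
    """Stray ground resistance shared by the speaker return and input
    ground acts as positive feedback. Output impedance is roughly
    -gain * r_stray, and the loaded gain rises as a result."""
    z_out = -gain * r_stray
    loaded_gain = gain * r_load / (r_load + z_out)
    return z_out, loaded_gain

z_out, loaded = accidental_nic(23.0, 0.05, 8.0)
print(f"Zout = {z_out:.2f} ohms, loaded gain = {loaded:.1f}")
```

Note that if the load resistance equals the magnitude of the negative output impedance, the denominator goes to zero and the calculated gain is infinite - the condition a real amplifier responds to by oscillating, running out of gain or failing.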
As discussed in Project 56 - Variable Amplifier Impedance, very few loudspeaker drivers perform well with negative impedance. This is something I've played with many times over the years, and for the most part it never fails to disappoint. Adding negative output impedance 'accidentally' can be surprisingly easy to do, as it may only be a matter of a ground wire connected to the wrong place. We tend to think that 'ground' is something solid and substantial, but wires have resistance and it doesn't take much to create a problem. In case you're wondering, yes, I have seen this, especially in a 'lash-up' to test the functionality of a new design. If you see the output level from an amplifier increase when a load is applied, you have an 'accidental' NIC.

With an output impedance of -1.1Ω, a +1.1Ω load in place of the speaker will (try to) make the amplifier's gain infinite! I can test this easily with a simulator, but any 'real' amplifier will either oscillate, run out of gain or blow up (all three are possible, and probably in that order). Needless to say, I don't recommend this in real life!
<rant> One thing you will find is that the detailed knowledge needed to understand the GIC and other less common (but often quite complex) topologies is often behind 'pay-walls', where you are expected to pay a usually exorbitant amount to get access to the material. In most cases, you are not given anywhere near enough information to know whether the material is relevant or not until you pay for access. IMO this is an abuse of the Net and what it should be for - providing knowledge that you'd find difficult to locate elsewhere. In many cases, organisations are asking full fee payment for material that's over 20 years old, and should be released at no charge. Some readers will know the main offenders - they are otherwise supposedly 'reputable' organisations. Grrrr! </rant>
Now that my rant is over, we can hopefully get something useful from the details above. Negative impedance is not intuitive, and some of the circuits used are difficult to understand. In some cases, the only way that you can verify that the technique works is to build or simulate the circuit, and simulation has been done for all the examples shown. They all appear to work exactly as described, but reality may be different.
You always need to be careful with NICs, because very high AC gain is often accompanied by very high DC gain. An otherwise harmless DC offset of a few millivolts can become several volts if you don't take care to ensure unity gain at DC. This isn't always possible. By its very nature, a negative impedance is intrinsically unstable. Although many claims may be seen for various NIC circuits, not all stand up to scrutiny (i.e. they may appear to work, but only with an ideal opamp). Others quite clearly cannot work at all, despite mathematical 'proof' that they do what's claimed.
Whether anyone needs the techniques described here is another matter. Mostly, the answer will be "no", but if nothing else it can be very educational to experiment. NICs in general are a fairly uncommon class of circuit, partly because there is usually no need for negative impedances, and partly because more traditional techniques are usually more than acceptable to get the results you need.
Using a NIC to drive an audio transformer is one application where there are obvious advantages over the simple opamp drive circuit that is commonly used. Whether it's actually needed is another matter entirely, and it's less complex (and ultimately more 'user friendly') to use a better transformer and be done with it. Negative impedance may be an alternative where cost must be minimised, but careful testing is essential.
Much the same applies to the NIC based bandpass filter. This can provide very high Q at normal audio frequencies with pedestrian opamps. Like all high Q filters it is sensitive to component variations, but it is far simpler than many of the alternative options. These often need esoteric opamps to obtain acceptable results, and are still just as sensitive to component values for the same Q. If you ever need a high (or very high) Q filter, these are definitely worth a closer look.
Elliott Sound Products | Notch Filters
Notch filters are a special kind of circuit. They are used for distortion analysis, but there are many other uses for them. At one stage, 10kHz (later reduced to 9kHz in Australia) notch filters were used in 'high end' AM receivers (something of an oxymoron) to remove any inter-station 'whistle' caused by the AM channel spacing.
They are also used to remove hum, in particular 50Hz/ 60Hz mains hum, but they can be used at any frequency within reason. The idea is to create a filter with high rejection of the unwanted frequency, but not affect adjacent frequencies. Notch filters are also used in communications systems to remove unwanted frequencies, and they have even been used in the old 'POTS' (plain old telephone system) to suppress DTMF (touch-tone) signals from the speech signal. I suspect that this was done to make it harder to place calls without paying, a process that used to be known as 'phone-phreaking'. (Interestingly, this is still a problem, but it's done differently now - just in case you were wondering. No?)
Predictably, it's not possible to remove just one frequency from an audio signal, and there is always some disturbance to other nearby frequencies. It's unrealistic to expect a 50Hz filter (for example) not to affect 40Hz and 60Hz, but if the filter Q ('quality factor') is high enough, these two can be affected by no more than 3dB. The bandwidth would be stated as 40-60Hz (-3dB) with a theoretically infinite rejection of the unwanted frequency.
Infinity is a pretty big (or small) number, but it's not difficult to reject the centre frequency by more than 60dB, and some filters make 100dB fairly simple to achieve. As for the circuits themselves, there are several. Some are very well-known, such as the 'twin-T', which represents the majority of the notch filters you'll see if you perform a search on the Net.
Other variations include the Wien bridge, phase-shift (all-pass), state-variable, and one of the lesser known, the Fliege filter. A pair of 12dB/ octave Sallen-Key filters can also be used, but this isn't practical - other than when implemented as a state-variable filter. Each has its particular advantages and disadvantages, and there are two main criteria - ease of tuning to an exact frequency and notch depth. We can add ease of setting the Q, as that can be important in many applications.
Note: In a number of on-line 'explanations' you will see a pair of 1st order (6dB/ octave) filters summed, allegedly to generate a notch. This will not work, as the phase shift between the two filters is only 90°, and we require a 180° phase shift to obtain a null at the selected frequency. This completely wrong circuit is repeated ad nauseam on the Net. As always, be careful with material found on-line, as much of it is incorrect. To get the required 180° phase shift, the filters must be 2nd order (12dB/ octave).
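The note above is easy to verify numerically. A minimal sketch, assuming ideal transfer functions (and an arbitrary Q of 1 for the 2nd-order case): summing 1st-order high and low pass sections reconstructs the input exactly, while summing 2nd-order sections nulls at the tuned frequency.

```python
import numpy as np

f0 = 50.0                        # target notch frequency, Hz
f = np.array([5.0, 50.0, 500.0]) # well below, at, and well above f0
s = 2j * np.pi * f
w0 = 2 * np.pi * f0

# 1st order (6dB/octave): LP + HP sums to exactly 1 - no notch at all,
# because the sections are only 90° apart
lp1 = 1 / (1 + s / w0)
hp1 = (s / w0) / (1 + s / w0)
print(np.abs(lp1 + hp1))         # 1.0 at every frequency

# 2nd order (12dB/octave), e.g. from a state-variable filter: the
# summed numerator is s^2 + w0^2, which is zero at s = j*w0 (180° apart)
Q = 1.0
den = s**2 + s * w0 / Q + w0**2
lp2 = w0**2 / den
hp2 = s**2 / den
print(np.abs(lp2 + hp2))         # ~1 away from f0, 0 at f0
```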
Another circuit is called a Bainter filter [ 1 ], but it's a complex filter to design despite its apparent simplicity. While a circuit may exist because someone has gone to the trouble to design it, that doesn't automatically mean it's a good idea. From what I could find about the Bainter filter, it appears to fall into the 'don't bother' category.
A 'bridged differentiator' is another topology that can be used, and while it's claimed to be easily tuned, this is something of an illusion, as it's only over a narrow range. It is an interesting circuit, but it's also difficult to tune over a wide range. It's rather impractical for most applications because of this. It also requires 3 perfectly matched capacitors, making it even more impractical. The Bainter and bridged differentiator aren't included here. The Bainter filter has relatively poor rejection and the bridged differentiator is just too irksome to tune properly. The multiple feedback (MFB) filter is covered here, and although it can't achieve a notch depth of much more than 50dB this may be enough for some applications.
With some notch filters, a second opamp is the secret to obtaining a narrow notch, as it provides feedback that tries to force the output to have a flat response. This can't be done because the notch is so deep, so it corrects the response either side of the notch. This technique is used for the twin-T, Wien bridge and phase-shift notch filters. The Q is adjustable as described for each circuit. This works because feedback cannot correct the frequency response if there is (virtually) no gain at a particular frequency. A 60dB notch qualifies as 'virtually no gain'. However, as discussed further below, the 'feedback' may actually be bootstrapping. Some notch filters do use negative feedback though, and it can even be applied to those that normally use the bootstrap circuit.
One final design has to get at least a mention, namely the LC (inductor/ capacitor) series circuit. Unfortunately, the end result isn't useful, as the inductor will almost certainly pick up hum from nearby transformers (or even current-carrying cables), negating the reason you'd build one. Getting a high Q is difficult because of winding resistance. Workable values are 2.7μF and 3.75H. Resonance is at ...
fo = 1 / ( 2π × √( L × C ))
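Plugging in the values quoted above confirms the choice (a quick check, assuming ideal components):

```python
import math

L = 3.75      # henries
C = 2.7e-6    # farads (2.7uF)

# Series LC resonance
fo = 1 / (2 * math.pi * math.sqrt(L * C))
print(f"fo = {fo:.1f} Hz")   # → fo = 50.0 Hz
```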
You could use a gyrator (opamp 'simulated inductor'), but that makes high Q even harder. The notch depth will only ever be mediocre, and neither the physical nor simulated inductor is useful. No further discussion of this option is offered.
Notch filters are also used in conjunction with 'traditional' high and low pass filters to obtain a steeper rolloff. You can get an apparent rolloff of 60dB/ octave or more, but the output signal 'rebounds' beyond the notch. The Cauer or elliptic filter is an example (see Active Filters, Section 7.3 for details). These are a special case of filter, and are uncommon in audio (with the possible exception of anti-aliasing filters for digital systems). The NTM (Neville Thiele Method) crossover uses this type of filter, optimised for phase shift to ensure that the outputs sum flat. This is otherwise very difficult to achieve.
Information on Q determination is somewhat divided for notch filters. Unlike a bandpass filter, a band-stop (notch) filter has a theoretically infinite rejection of the centre frequency (with infinitely small bandwidth), so the Q cannot be determined by the standard method (fo / (fH - fL)). For a bandpass filter, fH and fL are the -3dB frequencies referred to the peak, and fo is the tuned frequency. This doesn't work for a notch filter, and if used it will give an impossibly high Q value. Claiming a Q of 150k might sound impressive, but that's not the way it's measured for notch filters (and yes, this is quite easy to achieve with a good notch).
In most texts, it's stated that notch filter Q is determined by ((fH - fL) / fo), with fH and fL referred to the out-of-band level (typically close to unity gain). This gives an inverted Q (i.e. 1/Q) otherwise called damping. When I refer to the Q in this article, it will be taken as the value determined by this method. It doesn't make a difference as long as the way the Q is determined is disclosed. As the Q is decreased, the distance between the -3dB frequencies either side of the notch decreases, indicating less disturbance to the adjacent frequencies.
While this method is correct, it doesn't provide a number that's intuitive. If you'd rather use the 'inverted' version, it's just the reciprocal of the figure I've used. The fo value should be double-checked - it should correspond to √(fH × fL). From Fig. 1.1 you can see that the bandwidth of the notch (between -3dB frequencies) is ~19Hz, so the Q is 0.38 - this is a good figure, and it's unlikely that there will be any benefit to having a narrower bandwidth. The graph was taken from a simulation of the twin-T notch filter.
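The figures quoted can be checked directly, using the convention adopted in this article (Q = (fH − fL) / fo). The −3dB frequencies below are assumed values consistent with the ~19Hz bandwidth read from Fig. 1.1, not exact simulation results:

```python
import math

# Approximate -3dB frequencies either side of a 50Hz notch
fL, fH = 41.0, 60.0        # assumed, giving the ~19Hz bandwidth quoted
fo = 50.0

q = (fH - fL) / fo
print(f"Q = {q:.2f}")                        # → Q = 0.38, as quoted

# Sanity check: fo should be close to the geometric mean of fH and fL
print(f"geometric mean = {math.sqrt(fH * fL):.1f} Hz")

# The 'bandpass style' figure is just the reciprocal
print(f"1/Q = {1 / q:.2f}")
```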
The green trace shows the response without feedback. As you can see, the response is compromised for over two octaves either side of the notch. For measuring distortion, the feedback needs to be just enough to ensure that there's minimal reduction of the second harmonic (100Hz in this example). If the goal is to remove a troublesome frequency with minimal disturbance to the rest of the spectrum, more feedback/ bootstrapping is needed to get a narrower notch.
The various circuits described here show a tuning resistance of 11.79k (12.05k for 60Hz). It needs to be adjustable, because the tuning capacitors won't be exact, and resistor tolerances also require compensation. The series circuit using 10k, 1k, plus a 2k trimpot will allow enough range for most purposes (the two fixed resistors can be replaced with 11k if you have them to hand). At 50Hz, the notch can be tuned from 45.4Hz to 53.7Hz, assuming exact capacitor values. You could also use a 10k resistor with a 5k trimpot in series, but that will be harder to tune (even with a 25-turn trimpot).
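The tuning range quoted follows from the standard formula. The sketch below assumes 270nF tuning capacitors (the value that gives 11.79k at 50Hz; the capacitor value isn't stated explicitly above):

```python
import math

C = 270e-9                 # assumed tuning capacitor, farads

def fo(R):
    # fo = 1 / (2*pi*R*C)
    return 1 / (2 * math.pi * R * C)

# Fixed 10k + 1k with a 2k trimpot: total resistance 11k .. 13k
print(f"tuning range: {fo(13e3):.1f} Hz .. {fo(11e3):.1f} Hz")  # ~45.3 .. ~53.6

# Nominal resistance for exactly 50Hz
print(f"nominal R = {1 / (2 * math.pi * 50 * C):.0f} ohms")     # ~11.79k
```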
Most of the circuits shown below use the same feedback components, but this does not mean they will have the same Q. Each circuit has an 'intrinsic' Q, which is different with different topologies. The circuits shown were all simulated using 'ideal' opamps, but the difference when a TL072 was used in the simulations was minimal. This will be the case in reality, but only at low frequencies. The twin-T circuit uses the opamps as unity gain buffers, so their response is as good as it can be. Some of the others may require a wide-band opamp for high frequencies (greater than 10kHz).
If you look at the transient response of a notch filter, you'll discover that if stimulated by a single pulse, you will generate the very frequency you're trying to remove. This is a characteristic of all narrow-band filters (peaking or notching), and as the Q is increased, they take longer to settle to steady-state conditions. For this reason, making the filter any narrower than is strictly necessary may cause more problems than it solves. When used with music (for example), there are generally no transients fast enough to cause problems.
It's theoretically possible to have a total bandwidth of less than 1Hz for a 50/ 60Hz filter, but doing that reduces the notch depth, and the tiniest component value mismatch will cause the unwanted frequency to get through. A very narrow bandwidth also means that the frequency to be rejected must be absolutely stable. Should it drift by only 0.01Hz (much better than the AC mains), the rejection is reduced dramatically. A 40dB notch can be reduced to less than 20dB if the frequency drifts by 0.01Hz if the bandwidth is too narrow. It will also show severe ringing, and the effects of that may not be what you hoped for!
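The drift sensitivity can be illustrated with an idealised notch response, H(f) = (fo² − f²) / ((fo² − f²) + j·f·fo/Q), where Q here is the conventional fo/bandwidth figure. The bandwidths below are illustrative values, not taken from a specific circuit, but they show how a too-narrow notch loses most of its rejection for a tiny 0.01Hz drift:

```python
import math

def rejection_db(fo, bw, offset):
    # Rejection of an ideal (infinitely deep) notch at fo + offset
    f = fo + offset
    q = fo / bw                      # conventional (bandpass-style) Q
    num = fo**2 - f**2
    den = complex(fo**2 - f**2, f * fo / q)
    return 20 * math.log10(abs(num / den))

for bw in (1.0, 0.2):
    db = rejection_db(50.0, bw, 0.01)
    print(f"bandwidth {bw}Hz: {db:.1f} dB at +0.01Hz drift")
# The 1Hz-wide notch still manages ~34dB; narrow it to 0.2Hz and
# the same 0.01Hz drift leaves only ~20dB of rejection.
```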
For 'simple' filters such as the Twin-T and Wien bridge, the feedback system is not what you expect. It would be better described as a form of bootstrap circuit, as the 'feedback' is positive, not negative. The way it works is not immediately obvious, because there is no gain-stage involved as you'd normally expect with a 'proper' negative feedback circuit. The two opamps are used as buffers, with the lower one (in the drawings) having a gain of slightly less than unity (typically around 0.91 or so).
When the input signal is at the notch frequency, (next to) no signal gets through, so the lower buffer can't affect the notch because its output level is very low. As the input signal shifts away from the notch frequency, the buffer 'bootstraps' the filter network, so it has close to the same voltage at its input and at the feedback point. If the same voltage is present at both points, no current will flow and the notch becomes irrelevant. This is most easily seen with the Twin-T filter.
This 'bootstrap' mechanism can only work when the filter can pass some signal - when it's not at the notch frequency. The proof of this is looking at the input impedance. At the notch frequency, the impedance is ~8.37kΩ both with and without 'feedback'. When the bootstrapping is applied, the input impedance below and above the notch frequency rises. At 10Hz the impedance is 153k, and at 250Hz it's 30k. Without the feedback, the range is from 18k to 3.6k (10Hz and 250Hz respectively). This increase of input impedance is exactly what we expect from a bootstrap circuit. The impedance at the notch frequency is the same because at that point there is no bootstrap action. For more information on just how this works, see Bootstrap Circuits - A look At Those In Use.
In many descriptions I've seen, it's claimed that the feedback is negative, but that's clearly impossible if the signal isn't inverted. A few on-line articles do get it right though. You actually can use negative feedback to achieve much the same goal, but it's possible that there will be issues with stability. The 'bootstrap' arrangement is simple and stable, and it's by far the most common approach.
Some other circuits do use negative feedback, notably the phase-shift/ all-pass design. The state-variable filter tunes the high & low pass filters for high Q, which means that they have high gain if you need a high-Q notch. The version shown has a gain of 8.3dB (×2.6) with the values given, so there's a significant loss of headroom and high levels will cause clipping. The MFB bandpass-derived notch doesn't have that problem, and the Fliege filter has only modest gain (about 2dB).
Of course, it's not critical that you understand the exact feedback mechanism of each filter type, but if it helps you to understand how it functions that's never a bad thing. One thing I will not do here is go into details of poles and zeros, nor will I discuss radians/second or other 'high-level' maths functions. These are traditionally used for 'proper' mathematical descriptions, but with few exceptions they won't improve your understanding - often the 'true' mathematical methods only serve to create FUD (fear, uncertainty and doubt). The information shown here is more than enough to allow you to design a notch filter for any desired frequency and Q, without stress.
This is probably the best-known of all notch filters. It has a sibling called the bridged-T, but that won't be covered because it usually has both limited notch depth and fairly poor Q. Both can be improved, but there's nothing even remotely intuitive about it.
The twin-T can be made as a completely passive circuit with centre frequency rejection of more than 60dB, and in that respect it's unique. Unfortunately, it's a very broad filter without some electronic assistance. A 50Hz passive filter will have -3dB frequencies of 13Hz and 183Hz, and that's a lot of your audio signal to lose. Fortunately, adding an opamp and a couple of resistors allows the Q to be adjusted without affecting the notch frequency or depth.
The frequency is determined by the traditional formula ...
fo = 1 / ( 2π × R × C )
Because we're dealing with a filter that can reduce the centre frequency by 60dB or more, the values are critical. Without exception, adjustment will be required to tune the filter to the desired frequency and ensure maximum notch depth. The twin-T is a good, reliable and fairly simple circuit to set up, and it's still an excellent choice. The horizontal part of the 'T' is made up of equal-value resistors and capacitors, with the vertical sections using ½Rt and 2Ct.
A practical example is shown above. Resistor and capacitor values are assumed to be exact, but in reality there will be trimpots used in series with one 'Rt' value and the '½Rt' value, as shown in Fig 1.1. ½Ct is treated the same way to get 2Ct. The filter uses a dual opamp to buffer the output and provide the bootstrap signal, which reduces the bandwidth but has little or no effect on the notch depth. The degree of bootstrapping can be varied by changing the value of R2 or R3.
The twin-T has been the notch filter of choice for many distortion meters, both commercial and home-made. The 2Ct and 1/2Rt values are most easily made by using the same values as for 'Rt' and 'Ct', with two in parallel for both. The source impedance isn't critical, so it will work even with a comparatively high source impedance. A common approach was to use a 10k pot to set the level at the input.
R2 and R3 set the feedback ratio, and therefore the Q. If R2 is made smaller, Q is decreased (less effect on adjacent frequencies) and vice versa. A practical minimum value for R2 is 100Ω, but I wouldn't recommend anything smaller than 390Ω. With 1k and 10k, the -3dB bandwidth is 18.3Hz.
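The quoted bandwidth can be reproduced by brute-force nodal analysis of the bootstrapped twin-T. This is a sketch, assuming ideal buffers, exact component values and a 270nF tuning capacitor (an assumption consistent with the 11.79k resistance quoted earlier); the shunt legs of the twin-T return to the R2/R3 divider output rather than ground:

```python
import numpy as np

f0, C = 50.0, 270e-9                 # assumed tuning capacitor
R = 1 / (2 * np.pi * f0 * C)         # ~11.79k for 50Hz
k = 10e3 / (10e3 + 1e3)              # bootstrap fraction, R3/(R2+R3)

def response(f):
    s = 2j * np.pi * f
    # Unknowns: VA (R-arm junction), VB (C-arm junction), Vo. Vin = 1.
    # The shunt legs (2C from VA, R/2 from VB) return to k*Vo.
    A = np.array([
        [-2 / R - 2 * s * C, 0,                  1 / R + 2 * s * C * k],
        [0,                  -2 * s * C - 2 / R, s * C + 2 * k / R    ],
        [1 / R,              s * C,              -1 / R - s * C       ]])
    b = np.array([-1 / R, -s * C, 0])
    return abs(np.linalg.solve(A, b)[2])   # |Vo/Vin|

f = np.arange(10.0, 200.0, 0.05)
mag = np.array([response(x) for x in f])
inside = f[mag < 2**-0.5]                  # frequencies below the -3dB line
print(f"|H(50Hz)| = {response(50.0):.2e}")
print(f"-3dB band: {inside[0]:.1f} .. {inside[-1]:.1f} Hz "
      f"(bandwidth ~{inside[-1] - inside[0]:.1f} Hz)")
# The bandwidth comes out close to the 18.3Hz quoted for 1k/10k.
```

Changing k (i.e. the R2/R3 ratio) and re-running shows the Q varying while the notch depth stays put, exactly as described above.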
There's an alternative solution for the twin-T, which makes it asymmetrical [ 3 ]. While this is claimed to improve the performance, it also makes the circuit very difficult to tune, so it cannot be recommended as a workable solution. There's also a simpler circuit called a bridged-T, but these generally have poor rejection. They are sometimes used as a 'contour' control in some guitar amplifiers, but their performance as a 'true' notch filter is inadequate.
By nature, the Wien bridge has fairly low Q and a poor notch depth, and it requires a dual opamp to function well as a notch filter. It's one that I have used, and while it requires some trickery to function properly, when it's set up it has excellent performance. There are only two frequency-selection networks, with one being a series R/C circuit, and the other a parallel R/C circuit. By using feedback, the notch depth can easily exceed 70dB, and the bandwidth can be adjusted easily. Adjusting the Q changes the gain, but this isn't always a problem.
The series R/C network feeds the inverting input of U1, and the parallel network is in the feedback path. The network of R3 and R4 is the key to getting a good notch depth. The input voltage is divided by 3 (R3/R4+1) and fed to the non-inverting input of U1. A Wien bridge has an insertion loss of 3, and this lets U1 subtract the phase-shifted signal from the input, providing a notch depth of well over 60dB without feedback.
When feedback is added the Q can be changed via R3 and R4. As shown (1k, 10k) the -3dB bandwidth is 23Hz. R4 can be reduced for higher Q, but anything less than 390Ω will make the circuit too hard to tune. The Wien bridge feedback system is neither 'true' feedback nor bootstrapping, hence the label 'Bootstrap/ FB'. Whether it's one or the other depends on how you look at it. It's possible to re-configure a Wien bridge so it does use 'pure' bootstrapping, but the circuit complexity is greater.
This is one that you don't see very often because it uses more opamps than the twin-T or Wien bridge. The filter itself uses 3 opamps, and it must be driven from a fixed source impedance (another opamp). The filter uses two all-pass (phase-shift) networks and a summing amplifier. Feedback is applied directly to the input, and it's irksome to make it variable because that will cause a gain variation. This may not be an issue with some applications.
Despite using fairly 'ordinary' opamps, the notch depth can be over 100dB with feedback. This arrangement has been used in a few distortion meters where measurements down to 0.01% (full scale) were required, and it's one of the few that can manage that degree of attenuation of the selected frequency. The feedback really is feedback in this circuit.
There are two frequency-selection networks, both identical. It's possible to only tune one of them to get a good notch, making it unique. The only requirement for a perfect notch is for a total of 180° phase displacement through the two phase-shift networks (nominally 90° each), but if one has a little less and the other a little more than 90°, the result is the same.
U1 is a buffer to ensure a consistent source impedance, and feedback is from the summing amp (U4) back to the input. The 10k resistors can be a different value - the only requirement is that they are all identical (small differences can be corrected by tuning).
Although it's not a common circuit, it has better performance than most of the others. If it's used to measure distortion, the opamps all must contribute the least amount of their own distortion possible, which means premium devices. This makes it a more costly option than the twin-T or Wien bridge filters.
With the values shown, the -3dB bandwidth is 23Hz. Increase the value of R7 to decrease the bandwidth. With 100k, it's reduced to only 12.4Hz. A more realistic value is 51k, giving a bandwidth of 21.5Hz.
This is an uncommon filter topology, but it has good performance. As is common, there are two networks that determine the frequency, and the Q is changed by varying RQ1 and RQ2 (which must be identical). 1% tolerance resistors are good enough (without selection) only if you are willing to accept a shallower notch. A Q of 0.233 (more than acceptable) is obtained by making RQ1 and RQ2 10 times the value of the tuning resistors. The -3dB bandwidth will be only 11.7Hz, and a more realistic Q will be obtained by making the two RQ resistors 56k as shown (20.5Hz bandwidth).
The selection of the Q is somewhat inconvenient, and making the filter fully variable is difficult. Neither the frequency nor the Q can be varied 'on-the-fly' because no dual-gang pot will have good enough tracking. For a fixed-frequency notch filter this isn't a limitation.
The ratio between R1 and R2 is just as critical as that between RQ1 and RQ2. To get the optimum notch depth, both resistor pairs could use a trimpot to allow adjustment to get the best null. Both resistor sets will also change the frequency (albeit slightly for small adjustments), making the filter somewhat less attractive than other solutions.
It's an interesting filter with good performance, but it's impractical if different frequencies are required. The main tuning resistors (with trimpots) will generally allow enough variation to accommodate small errors with R1/R2 and RQ1/RQ2. If not, it's the only circuit that would use four trimpots, increasing the cost and difficulty of setup.
With the values shown, the -3dB bandwidth is 21Hz near enough. It's changed by varying the ratio between both RQ and Rt resistors. If the two RQ resistors are reduced, the bandwidth is greater, and vice versa.
The state-variable topology is one of the most flexible of all common filters. To get a notch, it's just a matter of summing the high and low pass outputs, and the Q can be adjusted by changing RQ. The value of 1.5k as shown gives a Q of 0.38 (-3dB at 41.5 and 61Hz). One unfortunate aspect of this design is that the high and low pass filters have significant gain (about 8.3dB, or ×2.6) and this reduces the headroom, meaning that you can't get as much overall level through the filter because it will clip. The same applies if the filters are standard Sallen-Key types (which saves one opamp but adds 2 caps). Using Sallen-Key filters isn't recommended and is not shown.
An input opamp is essential, because R2 controls the gain and Q, and the filter must be fed from a low source impedance. While 5 opamps seems a lot for a 'simple' notch filter, they aren't expensive and the performance is very good. However, when compared to the others shown it wouldn't be my first choice.
I mentioned at the beginning that you can use a pair of 12dB/ octave filters (high and low pass), and that's what the state-variable version is. At the centre frequency, the two signals are 180° out-of-phase, so the signal is cancelled. By increasing the Q of each filter, the disturbance near the notch frequency is minimised, but at the expense of reduced headroom.
The MFB filter is included here, but only as an example. The MFB topology invariably demands 'odd' resistor values, but it's usually possible to rationalise them to the point where the frequency can be fine-tuned with only one trimpot. This will affect both gain and Q, but depending on how close you can get the other resistors to the target values, the variations won't cause major deviations from the desired response. One disadvantage of an MFB filter derived notch is the notch depth. Unlike the others shown, the typical rejection you can expect will be around 30dB. This will often be enough for suppression of an interfering tone, but it can't be used to measure distortion (for example). If you get all values exact, you can get up to 50dB or so, but that requires very odd resistor values. Note that there is no 'Rt' value, because frequency, gain and Q are all determined by R1, R2 and R3.
The circuit is deceptively simple, but the devil is in the details. The 'really odd' values shown for R1, R2 and R3 are the ideal calculated resistances, with a 'rationalised' value shown first. R2 must be a trimpot so the frequency can be adjusted. If desired, you can make R5 (say) 8.2k with a series 2k trimpot to adjust the gain of the summing amplifier and therefore the notch depth. I don't intend to show the frequency, gain and Q calculations here, but the easiest way to design the filter is to use the calculator program I wrote many years ago. This is available on the software page ... mfb-filter.exe. Your operating system, browser and/ or antivirus will no doubt complain, but the file is safe (I check this to ensure that no malware has been 'inserted'). It requires the VB6 runtime library (this should be included with Windows 10/ 11).
The gain needs to be set for unity, and the frequency and Q determined by your requirements. These filters are used in the 8-band subwoofer EQ project, hence the development of the program. The ultimate notch depth is highly dependent on the filter's gain, and even a tiny variation either way (referred to unity) will degrade the ultimate attenuation. As a utilitarian filter, this is probably of no great concern - you don't build an MFB notch filter for high attenuation (you may not build one at all).
The Cauer (aka elliptical or Zolotarev) filter is a special case, where a traditional high or low pass filter is followed by a notch filter (or vice versa - the notch can come first in some designs). The advantage is a much steeper initial slope, and complete rejection of a small range of frequencies. The Cauer topology is sometimes used as an anti-aliasing filter, and can become very complex. Only the basics are shown here, as the design process is very involved if you need very high rejection of out-of-band frequencies.
A 1kHz, 18dB/ octave filter is based on U1, which has gain to allow equal-value components in the filter network. The notch filter is a twin-T with minimal bootstrapping, because very narrow bandwidth is not required (and is not desirable). The twin-T filter is tuned to a nominal frequency of 2.84kHz. The simple voltage divider at the output is to obtain overall unity gain (the filter has a gain of 2.2).
The initial rolloff is much faster than 18dB/ octave (it's about 25dB/ octave), and there is virtually no output at the notch frequency. The response then 'rebounds', reaching about -43dB at 4kHz, after which it falls, ultimately at 18dB/ octave. The final rolloff is always the same as the low-pass (or high-pass) filter, as the notch has no effect outside a range of about 2 octaves either side of the notch frequency. Multiple notch filters at selected frequency intervals can be used after the initial high/ low-pass filter, suppressing the out-of-band frequencies even more. An initial rolloff of >50dB/ octave is fairly easy to achieve.
'True' elliptic/ Cauer filters are very complex to design, and this is a highly simplified example. It's included here to illustrate that notch filters can be combined with high/ low pass filters to get a faster rolloff, but it's not intended to cover the design in any detail. There's a lot of information on-line, but it's not for the faint-hearted, as the equations will be considered somewhat daunting by most readers. It's notable that most specialised filter design software does not include elliptical filters. Elliptical filters are used in passive format (inductors, capacitors and transmission lines) for radio frequency applications, typically to remove troublesome frequencies that may overload sensitive RF front-end circuits.
In some cases, a notch filter may be able to remove an unwanted peak from a transducer (e.g. a loudspeaker). The notch depth will usually be fairly shallow, usually no more than 6dB or so. Several of the circuits shown can be adapted without affecting the frequency, but in many, reducing the notch depth will change the frequency. Ideally, the two will be separate functions, allowing you to select the desired frequency, then apply the appropriate notch depth and bandwidth with as little interaction as possible.
Unfortunately, none of the circuits shown allow fully independent adjustment of frequency, notch depth and Q. In fact, this is very hard to achieve with any filter, but some make it (at least a little) easier than others. In most cases, you're probably better off using a gyrator notch filter. These aren't useful to get deep notches, but they are economical and easy to build. They aren't covered here, as there's a complete description in the article Gyrator Filters. Note that it doesn't cover this specific application, but you should find enough info to let you set up an equaliser for a troublesome response anomaly.
Response correction is always tricky, and there will always be some element of compromise involved. There's little point trying to provide a complete treatise on the topic, because every case will be different, and there is no 'one size fits all' solution. There is one possible exception - a parametric equaliser. Using a state-variable filter with appropriate support circuitry, a parametric EQ can have variable Q (bandwidth), boost or cut, and variable frequency. Each can be adjusted individually without interaction, but the circuitry needed is fairly extensive. Consider that any transducer that requires a very sharp filter to obtain flat response is probably flawed and is not fit for the purpose.
The filters described here are not intended for response correction - they have one job, to remove a specific frequency as completely as possible. With a minimum rejection of 60dB (many are capable of much more), this goal is achieved pretty well.
It's unrealistic to expect a Q of much less than 0.4, indicating -3dB frequencies of around 41Hz and 61Hz. That means there's a band of about 20Hz where the signal is reduced or missing completely. A notch depth of better than 60dB is fairly easy to achieve with (almost) any notch filter topology, but the phase-shift/ all-pass version can achieve better than -100dB, at the expense of four high-quality opamps. If you just need a good notch and aren't too concerned by a bit of distortion, even a pair of TL072 opamps will be fine.
In almost all notch filters, there will be two resistors that require adjustment, and the notch depth at the desired frequency requires setting first one trimpot for minimum signal, then the other. This is an iterative process, and you may need to adjust both trimpots several times to get the tuning 'just right'. In most of the filters, the tuning resistances are indicated by Rt, and they will be a fixed resistor and trimpot in series. The MFB filter is the odd one out, as R1, R2 and R3 are all responsible for setting the frequency, gain and Q. Making R2 and R5 adjustable allows the filter and notch depth to be adjusted.
Bear in mind that if the goal is to remove mains hum, the frequency may average 50 or 60Hz, but the frequency may vary by 0.1Hz during the course of the day. A variation of 0.1Hz is enough to reduce a -80dB notch to perhaps -40dB, and there's no easy way that this can be corrected. Adding an 'auto-tuning' circuit will compensate in real time, but that adds a great deal of additional circuitry, using a phase-sensitive servo system controlling LDRs (light dependent resistors) to fine-tune the system in real time.
Somewhat predictably, I'm not going to provide the circuitry needed to achieve this, as it's not an especially straightforward process. For a system that carries a normal audio signal (not a single tone), the addition of 'auto-tune' is made that much more difficult. All auto-tuning filters used in distortion analysers only have a single frequency (and its harmonics) to deal with, making the circuitry 'simpler'. It's still complex though, and adds a lot of additional circuitry.
The choice of capacitors is critical. For low values, C0G/ NP0 ceramic are ideal, with film caps used for higher values (anything 1nF or more). Multilayer ceramic caps are unusable, because their temperature-sensitive characteristics mean that the notch frequency will be unstable. It is often a good idea to select the caps for the closest match you can get, as this will make sure that the tuning is predictable. Polypropylene caps have the lowest thermal drift, but unless extreme temperature variations are likely, polyester (aka Mylar) caps will usually be stable enough.
Where possible, all resistors should be metal film, and in most cases a tolerance of 1% is the minimum acceptable. If pots or trimpots are used, they should be the lowest reasonable value, with the majority of the resistance being fixed values. Where the optimum value is 11.79kΩ, this would be made up using 10k plus 1k plus a 2k trimpot. This will also work for 60Hz (with 220nF tuning caps).
For general usage, it's hard to go past the TL072 opamp. It's low-cost, has an extremely high input impedance, and the bandwidth is acceptable for use at any sensible frequency (i.e. less than 30kHz). The output impedance and drive capability are both alright, as there's rarely a need to load the output with less than 2kΩ. Most of the circuits will also work with other work-horse opamps, such as the 4558 or similar. Note that opamps won't affect the frequency to any significant degree (a fraction of 1Hz at most), and the performance of most notch filters is determined only by the resistance and capacitance of the tuning networks.
All the filters are shown designed for 50Hz, but 60Hz is not difficult. As noted above, for most of the filters the frequency is determined by ...
fo = 1 / ( 2π × R × C )
This can be re-arranged to determine R or C as required.
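The rearrangement can be sketched in a few lines of Python. The 220nF value for 60Hz is as stated above; the 270nF value for 50Hz is my assumption, back-calculated from the 11.79kΩ figure mentioned earlier.

```python
import math

def fo(r, c):
    """Notch frequency for the tuning network: fo = 1 / (2*pi*R*C)."""
    return 1 / (2 * math.pi * r * c)

def r_for(f, c):
    """Rearranged for R: R = 1 / (2*pi*fo*C)."""
    return 1 / (2 * math.pi * f * c)

# 50Hz with (assumed) 270nF caps -> ~11.79k, i.e. 10k + 1k + 2k trimpot
print(round(r_for(50, 270e-9)))   # 11789
# 60Hz with 220nF caps -> ~12.06k, covered by the same resistor string
print(round(r_for(60, 220e-9)))   # 12057
```

Note that both targets fall inside the 11k-13k range of the 10k + 1k + 2k-trimpot combination, which is why the same string serves for both frequencies.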
I've concentrated on notch filters to remove mains hum, which is still one of the major uses. Eliminating mains hum can be critical for medical systems (e.g. electrocardiography [ECG]), and in some cases there may be other frequencies that interfere with the wanted signals. These can also be removed with one or more notch filters. With some electromechanical systems, a servo can become 'confused' if there's an unwanted resonance within the mechanical drive. If this is the case, a notch filter can remove the unwanted resonance, improving the performance of the servo. This is a topic unto itself, and won't be covered further here.
With seismometers and vibration sensors, notch filters can remove noise or unwanted frequencies to isolate earthquake or vibration signals. These measurements are usually very sensitive, and interfering signals can render a measurement almost useless if they aren't removed. Of course, a notch isn't the only solution, and in some cases the system will simply use a low-pass filter set to allow the wanted frequencies through, while blocking anything out of the normal range.
Notch filters have also been used (with mixed results) to help reduce feedback in public address systems. Feedback occurs at frequencies where signals from the speaker get back to the microphone, and they are usually a few narrow-band frequencies that cause most of the troubles. A notch can help, but there are better alternatives, including room treatment, optimum mic (and speaker) positioning, and good mic technique by those using the system.
For feedback suppression, 'automatic' tunable notch filters have been applied by a number of manufacturers. These determine the frequency of the feedback loop, and apply a notch at that frequency. In real installations, there will be any number of unstable frequencies, and it's not possible to eliminate them all. However, these systems can work well if set up properly. Acoustic feedback is always a moving target, and as one dominant frequency/ phase combination is corrected, another will occur. It might be very close to the original/ first feedback frequency, or it may be separated by an octave or more. There's a practical limit to the number of notch filters that can be applied before the sound quality is adversely affected. The Behringer 'feedback destroyer' is probably the best known implementation of this technique.
Notch filters are a special kind of filter, and in theory (and in practice - within measurement limitations) they have an infinite rejection of the selected frequency. Greater than 60dB (1:1,000) rejection is easy, so a 1V signal at the tuned frequency is reduced to 1mV, but even 100dB can be achieved (1V reduced to 10μV). The frequency to be rejected must not drift, as a variation from 50Hz to 50.1Hz (0.2%) will cause the notch depth to be reduced to around -40dB.
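The sensitivity to drift can be demonstrated with an ideal second-order notch response. The Q of 2.5 used here is my assumption (it corresponds roughly to the 20Hz-wide affected band described earlier), not a figure from any particular circuit, but the result matches the -40dB figure quoted for a 0.1Hz drift.

```python
import math

def notch_db(f, f0, q):
    """Magnitude (dB) of an ideal 2nd-order notch:
    H(s) = (s^2 + w0^2) / (s^2 + (w0/Q)*s + w0^2), evaluated at s = j*2*pi*f."""
    w, w0 = 2 * math.pi * f, 2 * math.pi * f0
    num = complex(w0 * w0 - w * w, 0)
    den = complex(w0 * w0 - w * w, w * w0 / q)
    return 20 * math.log10(abs(num / den))

# A 0.1 Hz drift collapses a theoretically perfect notch to around -40 dB
print(round(notch_db(50.1, 50.0, 2.5), 1))   # -40.0
```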
The most common use for notch filters used to be for distortion analysis, where the signal is rejected, leaving only distortion and noise as the residual output. This was the method of choice until quite recently, and it's still used because it's easy and fast. You won't get all the 'bells and whistles' that come with advanced DSP (digital signal processing), but you do get to see the distortion residual on a scope, and that can tell you everything you need to know about the nature of the distortion (and/or noise) once you're used to doing it.
If there's a troublesome frequency that interferes, be it high or low frequency, a notch filter can remove it, while removing no more than around 1/2 octave from the wanted signal. That's usually a problem for bass if you have to get rid of 50/ 60Hz hum, but it's less of a concern if the frequency is above 5kHz. Back in the days when people tried to build 'hi-fi' AM receivers, a notch filter was essential because the intermediate frequency (IF) amplifier had a much wider bandwidth than normal. Most AM sets have a bandwidth of perhaps 3kHz (often less), so the 'whistle' caused by another transmitter with only 9 or 10kHz carrier spacing is attenuated and isn't a problem. Expecting high quality audio from AM is generally unwise - it may have been acceptable 100 years ago, but not now.
While I've concentrated on analogue solutions, many of the newer products that utilise notch filters (such as the 'feedback destroyer' mentioned above) use DSP to define the filter parameters. This is not something that will be covered here, as it requires dedicated hardware and software, and it's well outside the scope of this article.
1 Active Notch Filter (Bainter)
2 Bainter and 'bridged differentiator' circuits
3 Band stop filters with real operational amplifier, January 2003 (Jirà Kolár and Josef Puncochár)
4 LTC1059 Universal Switched Capacitor Filter (Analog Devices)
5 Feedback Cancellation (Stanford)
Elliott Sound Products - NTM™ Crossovers
The Neville Thiele Method™ (or simply NTM™) crossover network has been described in AES papers and elsewhere, but there is scant information available to most of us about exactly what it is and how it works. There have been claims that it is anything from 48dB/octave to 100dB/octave, and that it may replace the Linkwitz-Riley crossover for general use.
What is not generally available is a description of the filter type, or any information about how it works. This article hopes to rectify that, and de-mystify the hype that inevitably builds up when something new and exciting is first introduced, but with little supporting data to allow an informed choice.
I must confess that I was mightily perplexed when I saw my first NTM crossover - it was claimed to be 100dB/octave, the published frequency response I saw first showed 24dB/octave, and the circuit had far too few opamps and caps to approach any conventional filter greater than 24dB/octave. There was no opportunity to try to analyse the circuit or run any tests, since it didn't belong to me, wasn't in my workshop, and I only had the opportunity to have a brief look inside the case.
It was only when I saw the real frequency response graph of an NTM crossover that the penny dropped, and I recognised that it was probably an elliptical filter.
NTM™ and Neville Thiele Method™ are trademarks of Precision Audio Pty Ltd (2 Seismic Drv, Rowville VIC 3178, Australia).
For further information, licensing, etc., please contact Precision Audio Pty Ltd.
The descriptions that one usually finds describe a filter network and a notch filter. These are combined to produce a filter with a greatly accelerated rolloff slope. Part of the description from a brochure by BSS audio [1] states the following ...

    A Neville Thiele Method™ Crossover Filter (NTM™) is a new type of electrical/acoustical filter offering significant performance advantages over all previous crossover filter types in audio applications.
The article continues with a description of 'how it works' - while not actually giving any figures whatsoever - just the diagram referred to ...
    The NTM crossover uses a unique notched response to achieve a very steep roll-off rate outside the pass-band. The 4th order Thiele crossover amplitude response looks like the diagram overleaf. You will see that notches in the responses speed-up the rate of roll-off. Beyond the notch, the response rises again, but remains respectably attenuated.

I object (a bit) to the term 'unique' in the above. The filter type is described in 'The Active Filter Cookbook' [2], and is commonly known as an elliptical or Cauer filter. There is no denying that this filter type is rather obscure (not too many will have heard of it), but it is neither new nor unique. It is, however, an extremely clever application of an old technique, with some lateral thinking and necessary adaptation to maintain a flat summed response. Its use in a crossover network is certainly new, and the application is unique (if not the filter type itself).
The frequency response is shown below, and this is an almost perfect match to that shown in the BSS brochure. The frequency and amplitude scales are different (from the BSS graph), but the response is virtually identical. For comparison, the response of a Linkwitz-Riley filter is also shown. Not shown is the summed response, which is completely flat for both filters.
Figure 1 - NTM and L-R Crossovers Compared
The NTM crossover response is shown in green (high pass) and red (low pass), while the L-R equivalent is in a yellowish colour (high pass) and blue (low pass). It is undeniable that at one octave either side of the 1kHz crossover frequency shown, the response is better than 60dB down from the nominal output level. Looking at the L-R filter by comparison, at the same frequency it is only 30dB down. Unfortunately, the response of an elliptical filter rises again after the notch - again readily visible.
About ½ an octave above and below each notch, response is back up to about -36dB (equal to the L-R), and beyond that performance is inferior to the Linkwitz-Riley implementation. Ultimate rolloff for a fourth order elliptical filter is 12dB/octave. None of this is shabby by any means, but it does show that some of the descriptions used in advertising literature are rather misleading once the full details are known.
Figure 2 - NTM and L-R Crossover Phase Response
There is not much phase difference between the two - they are (for all intents and purposes) the same in this respect. The red graph is the NTM filter, and the L-R is shown in green. Both filters have a 360 degree shift across the band, with the NTM filter being very slightly worse in this respect than the L-R filter.
In real terms, the difference is marginal only, and should not be audible. Despite the apparently radical phase shift, both drivers remain in phase with both filter types, and the phase shift in itself is normally inaudible (despite some claims to the contrary). While there are circumstances that can make phase shift audible, such a discussion is outside the scope of this article.
Please note that the diagram shown below is taken directly from my simulation. It is not (and does not purport to be) the actual circuit of an NTM filter, but I strongly suspect that it will be rather similar. I have not seen the actual circuit, nor has it been traced from an actual working NTM crossover, so the 'real thing' could possibly be completely different.
The general principle of an elliptical filter is (or should be) pretty well known in engineering circles where filters are used extensively, and it consists of a conventional second order filter, followed by a second order state-variable filter.
The high pass and low pass outputs of each state-variable filter are summed to give the response shown above. The values shown are those used in the simulation. The hardest part of implementing a filter such as this is component 'sensitivity' - the requirement for close tolerance is increased compared to (say) a more conventional Linkwitz-Riley crossover network.
Figure 3 - Elliptical Filter Crossover Network Schematic
As you can see, the capacitor values are non-standard, but that was done to obtain a nominal 1kHz crossover frequency. This was selected because it is a de facto standard when showing general filter responses. It is not a useful crossover frequency for real use. Phase response is shown above, and is almost identical for both filters (NTM and L-R), and the ripple in the summed outputs is less than 0.2dB.
The circuit itself is relatively conventional for this filter type, but there are some important variations and points that need some explanation. The first stage (around U1) is what is known as an 'equal component value' Sallen-Key filter. Unlike the standard circuit such as that shown in P09, the Q of the circuit is determined by the gain of the opamp, rather than the filter component values. This allows the circuit to use the same capacitor values as the next stage.
The second filter is a state variable type, using U2, 3 and 4. The filter frequency is set by R10, R11, C3 and C4, and the Q is adjusted by R6. The Q needs to be somewhat higher than for a standard filter, because the summing amplifier (U5) adds a selected amount of (out of phase) high pass to the low pass filter (and vice versa). This creates the notch, and R12 determines the notch frequency.
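The way a notch appears when a proportion of out-of-phase high-pass output is summed with the low-pass can be illustrated with ideal second-order responses. This is a sketch of the principle only - the values (Q of 2, summing ratio k of 0.25) are arbitrary illustrations and are not taken from the circuit shown.

```python
import math

def summed_mag(f, f0, q, k):
    """|LP + k*HP| for an ideal 2nd-order state-variable section.
    At s = j*w, the HP numerator (s^2) is negative-real, i.e. out of phase
    with the LP numerator (w0^2), so the sum has a transmission zero
    (a notch) at f0 / sqrt(k)."""
    s = complex(0.0, 2 * math.pi * f)
    w0 = 2 * math.pi * f0
    den = s * s + (w0 / q) * s + w0 * w0
    lp = (w0 * w0) / den
    hp = (s * s) / den
    return abs(lp + k * hp)

# With k = 0.25, the notch lands at f0/sqrt(0.25) = 2 kHz for a 1 kHz section
print(summed_mag(2000, 1000, 2.0, 0.25))   # ~0 (the notch)
print(round(summed_mag(1000, 1000, 2.0, 0.25), 2))   # 1.5 at f0 itself
```

Changing k moves the notch relative to the corner frequency, which is why the amount of high-pass signal summed in effectively sets the notch position.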
For such a complex filter, it is remarkably tolerant to component variations, but predictably less so than a fourth order L-R filter of the type used for Project 09.
In case you were wondering, the circuit shown will work, but the frequency determining components will have to be changed to get the frequencies needed rather than the 1kHz shown. In answer to the question (which will get asked at some stage) "How do I change the frequency?", the answer is that you will have to figure that out for yourself. This is not (and is definitely not intended to be) a construction project ... it is an explanation of how the circuit works. No more, no less.
This article has hopefully removed some of the mystery behind the NTM crossover network, and shows what can be achieved using some lateral thinking. The rise in amplitude beyond the notch is rather unfortunate, but at -35dB represents an effective power that is over 3,000 times (3,162 to be exact) less than the maximum applied.
This means that for an applied 100W input power to a loudspeaker driver (via the crossover of course), the worst case out-of-band power level (> 1 octave) is only 31mW. This is further attenuated at a rate of 12dB/octave. At three octaves from the crossover frequency (125Hz and 8kHz), the out-of-band power level is down by 48dB - 1.58mW for 100W input. This is insignificant.
By comparison, the L-R crossover is at -24dB one octave from crossover frequency (400mW), at two octaves it is at -48dB (as above - 1.58mW), and at three octaves it has an output of -72dB (6.3μW) - again, assuming 100W input power to the loudspeaker drivers. For all practical purposes, this is also insignificant.
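These figures are straightforward to verify with a simple dB-to-power conversion (nothing here is specific to either crossover type):

```python
def power_after(p_in_watts, atten_db):
    """Power remaining after a given attenuation: P = Pin * 10^(-dB/10)."""
    return p_in_watts * 10 ** (-atten_db / 10)

print(power_after(100, 35))   # ~0.0316 W (31.6 mW) - NTM worst case beyond 1 octave
print(power_after(100, 24))   # ~0.398 W (400 mW) - L-R at one octave
print(power_after(100, 48))   # ~0.00158 W (1.58 mW) - L-R at two octaves
print(power_after(100, 72))   # ~6.3e-06 W (6.3 uW) - L-R at three octaves
```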
The end result is that loudspeaker drivers can be pushed closer to their limits, because the out-of-band power is reduced. There is usually no good reason to push any driver that hard in a domestic system, but it can result in a useful improvement for high powered professional applications.
The greatest benefit is obtained at between ½ octave to 1 octave either side of the crossover frequency, with an improvement of around 10dB at the ½ octave frequency, increasing dramatically to the 1 octave point. This represents a significant improvement, but only where drivers are being pushed to their limits. In a domestic system, all drivers will (or should) generally have sufficient 'spare' bandwidth to be able to cope with the out-of-band power levels with no stress whatsoever.
Overall, the circuit is very impressive though - not so much because of the cunning application of elliptical filters, but more because of a complete re-think about the way such filters are normally designed and tuned. The Neville Thiele Method™ certainly delivers a very worthwhile improvement in overall crossover network performance.
Please note that the NTM crossover network is patented, so commercial use of the information presented here will infringe patent rights and may result in a law suit or other potentially expensive unpleasantness.
Also, as pointed out above, the circuit shown is not taken from any literature, service manual, physical crossover or anywhere else. It is my interpretation of a circuit that will achieve the same result as an NTM crossover produced by a licensed manufacturer.
Based on extensive searches, it would appear that this is the first published circuit (for general viewing) that achieves the results claimed for the NTM crossover network. This page has been seen by Precision Audio, and a couple of minor changes made at their request. I have provided them with an undertaking that I will not (under any circumstances) [my emphasis] provide tuning formulae or any additional information that would allow patent infringement.
Note that the patent for this network has expired in Australia, but is still active in the US and elsewhere worldwide (expected expiry date on or around 22 March 2021). However, to retain good faith it will not be published as a project, although I may add some more information here after the worldwide patents have expired. The schematic shown above is not the same as that used in the patent documents, and performance is not quite as good. However, it is still a workable circuit and performs as shown in the graphs.
Elliott Sound Products - Opamps - A Short History
Opamps (or op-amps/ operational amplifiers) are the most common components in any modern analogue circuit. This includes audio of course, and the opamp has displaced most discrete transistor circuits in (nearly) all common applications. These devices are covered fairly extensively in a number of articles on the ESP site, but this article is intended to describe the evolution of opamps, which means a bit of history. Where possible, the list is chronological, but the lines are very blurry around the dates that many of the ICs were introduced. Very early opamps are easy, as there was little or no competition at the beginning of the 'opamp revolution'. As development progressed the range and number of different types expanded almost exponentially. The number of new devices has diminished of late, mainly because there are already so many, and performance is approaching the theoretical limits.
Where appropriate, links to specific articles that have more detail will be shown in-line. Many of the things we take for granted in modern circuitry would be a great deal harder if we didn't have access to opamps, and the choice of available devices is a testament to their continued popularity.
The choices are extraordinary. A search for 'op amp' on Mouser (purely as an example) shows 8,496 devices in their catalogue. This is reduced to 6,728 devices if we look only at opamps that are normally stocked, reduced to 5,708 if we filter for opamps that are in stock. Texas Instruments lists 2,422 datasheets for opamps, and TI is just one of many manufacturers. Most new types are only available in SMD packages, and we are starting to see fewer through-hole parts in distributor catalogues.
The number of devices to keep as inventory is crazy, although many have the same base type number (e.g. TL072) but with different suffixes. These can indicate a 'better' part (selected parameters) or a different case style. For example a TL072P is PDIP (plastic package, dual in-line pins, through hole), while a TL072D is SOIC (small outline IC, SMD). There are several others as well, and different manufacturers may use suffixes that are different from those used by the original maker.
There are a number of parameters that are important, but just how important depends on your application. An LM358 will satisfy your need for a cheap opamp that includes the negative supply in its inputs and output, but it's not 'rail-to-rail' (that means that inputs and outputs can extend to the positive and negative supplies). The LM358 is a very low current device (around 500μA for a dual opamp), but it's noisy, has high distortion (especially crossover distortion) and it's slow. If used for audio, the results will be disappointing.
This doesn't make it useless, in fact it's a very handy opamp. So much so that the PDIP version seemed to go out of production a few years ago, but it was quickly reinstated. I've used it in a number of projects (not for audio though), and I wouldn't be alone in being seriously miffed if I couldn't get them. Quite obviously, many major manufacturers felt the same way, so it returned. In truth, there aren't many opamps that can match some of its unique properties (especially its low cost), and it's very useful for basic signal processing.
The venerable TL072 (or its 'twin' the LF353) isn't wonderful either. They are fairly noisy (but 'quiet' for early generation FET-input devices), but they are used in their thousands in commercial products ranging from guitar amplifiers, general-purpose audio circuits, instrumentation (a 1TΩ input impedance can be rather useful) and countless other circuits.
One of the very first opamp ICs that was 'affordable' was the μA709. This was an improved version of the μA702, which had comparatively high distortion due to an un-biased output stage. The 709 had no internal compensation, so three circuit nodes were pinned to allow the designer to optimise the stability of the device for the required task. After Bob Widlar (the designer) left Fairchild and joined National Semiconductor he came up with the LM101/301, a greatly improved opamp that took the design to a new level.
Meanwhile, Fairchild developed what is quite possibly the most popular opamp ever made - the μA741 (later released by others as the LM741). Slow, noisy and pretty ordinary distortion figures didn't deter anyone, and it's still in use. A dual version is the LM/MC1458 - it's just as basic as the 741, but there are two in a single package.
Note that in some articles elsewhere, you might see the MC/RC4558 listed as a dual 'equivalent' to the 741. It's not and never was. It's a fairly competent dual opamp that has been the mainstay of guitar effects pedals for decades, and it has reasonably good specifications. It's quieter than a TL072, and almost dirt-cheap. No need to wonder why it's popular.
A little-known opamp was the μA739. These were used in the famous (or infamous) Crown DC300A power amplifier, and I also used them in a very early state-variable crossover network I designed in the 1970s (as near as I can recall). These were unusual, as they used a Class-A output stage, and an external resistor was needed from the output to the -ve supply. The package contained dual uncompensated opamps that required external parts to set the stability criteria. It was claimed that they were tolerant of a shorted output, but I can say based on experience that this was untrue. It was the easiest opamp to 'blow up' I ever used.
Compared to some of the latest opamps, everything mentioned so far is very basic, but that should never detract from getting an opamp to do just what you need it to do. In the early days, even modest opamps were expensive, but the performance available now is simply astonishing. In some cases you pay dearly for that, sometimes you get a pleasant surprise.
The K2-W valve opamp was state-of-the-art when it was made (starting in ca. 1951), being the first 'general-purpose' opamp. Others came before it, but none was as easy to use or so small. Using a pair of 12AX7 valves, it featured the common elements we associate with opamps today. A differential input stage was followed by a VAS (voltage amplifier stage) and a cathode-follower output. R7 applies positive feedback to increase the gain of V2A (the VAS), raising the open-loop gain from around 4,000 to 20,000 V/V. Without it, the gain would be too low to allow the closed-loop gain to be set accurately with external resistors. The opamp used an octal relay base which provided easy connection to the opamp proper. Positively huge by modern standards, it was a ground-breaker in its day. It used ±300V supplies (plus the 6.3V heater supply).
It's interesting that the heaters were all operated in parallel, rather than the lower current series-parallel connection (using 12.6V). There's a vast amount of information on the K2-W, and it has been extensively analysed by countless luminaries in circuit design. I don't intend to add to this, but if you want to know more, simply search for 'Philbrick K2-W'. Note that some circuits you'll see use slightly different resistor values, but the end result is much the same.
One thing that I will not do here is debate the sound of the various devices. Nor will I claim that they all sound the same, because there are quite a few that will most certainly announce their presence with high noise and distortion, poor high frequency performance, etc. One thing I will state categorically is that to accuse any opamp of 'poor bass' is self-delusion. By their very design, opamps have the best possible performance at DC, because that's where they have their maximum open-loop gain, and therefore the most feedback and highest linearity.
DC isn't 'bass', but in the range from 16Hz to ~100Hz, the differences between opamps are so small as to be considered negligible. However, there are some opamps (from the best to the 'worst') that may have excessive low-frequency (1/f) noise. Whether it ever becomes audible is debatable, but it is a possibility. Frequency response within the same range is determined by external factors, not the opamp.
If I never hear someone complaining that XXX opamps have no 'slam' or 'punch' or are 'slow' in the bass region, it will be too soon. Get used to it - there is no difference, and simple logic says that this must be so. Changing opamps for 'better bass' is no different from gold-plating your letterbox in the hope of getting nicer letters.
The article Opamp Frequency Vs Gain has some useful info that you can use to compare opamps, but only a limited number can be fitted into a short article, so don't be offended if your favourite is missing.
Noise is often a major consideration, and there are two types - voltage noise and current noise. Voltage noise is dominant in low impedance circuits - up to 100k or so, and above that, current noise becomes the deciding factor. Using a low voltage noise opamp in a circuit with a 10MΩ input impedance would not be sensible, so you need a device with low current noise. That almost invariably means an opamp with JFET inputs (some CMOS opamps might have low enough noise, but most do not).
Where someone believes that they do hear a difference with full-range material, the reasons need to be investigated. We can measure levels (of harmonics or other 'disturbances') that cannot be heard, and if there are notable differences they will show up in measurements. This is not just frequency response or THD, but intermodulation distortion as well. Transient response should never be an issue, since an opamp using ±15V supplies reproducing 30kHz at 5V RMS only requires a slew rate of 1.32V/μs (although at least 5V/μs would be advised). Any competent opamp can manage that, even though no 'small-signal' audio will ever require it.
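The slew-rate figure follows directly from SR = 2π·f·Vpeak, and is easy to check (the small difference from the 1.32V/μs quoted above is just rounding):

```python
import math

def min_slew_rate(freq_hz, v_rms):
    """Minimum slew rate (V/us) for an undistorted sine: SR = 2*pi*f*Vpeak."""
    v_peak = v_rms * math.sqrt(2)
    return 2 * math.pi * freq_hz * v_peak / 1e6

print(round(min_slew_rate(30e3, 5.0), 2))   # 1.33 V/us
```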
+ +In some cases opamps are expected to drive low impedances. 600Ω is often quoted, but opamps driving internal circuits may have to deliver more current than expected. This will increase distortion, and may cause premature clipping within the circuit. Many equalisers require fairly high current internally, and you may see an unexpected opamp used where a more common type might seem more sensible. This is all part of the design process, ensuring that unintended problems don't appear in the final product.
Opamps can be used to amplify voltage, current or both. In reality, almost all circuits do both, so in that sense they are power amplifiers (power being the product of voltage and current). Most of the time, we don't draw much current from the output, and it's limited to a few milliamps at most. However, the current is available whether we use it or not. Some opamps can deliver ±20mA without distorting, but most cannot - even if the specifications claim otherwise. For example, a TL07x opamp can allegedly deliver ±16mA into a 600Ω load (based on datasheet graphs). This may be true, but expect the distortion to be much higher than it will be at more realistic currents (±5mA or less).
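The peak current demand is easily estimated from the voltage swing and the load. A simple sketch:

```python
import math

def peak_load_current_ma(v_rms, load_ohms):
    """Peak current (mA) needed to drive a sine of v_rms volts RMS
    into a resistive load: Ipeak = Vpeak / R."""
    return v_rms * math.sqrt(2) / load_ohms * 1e3

# 5V RMS into 600 ohms needs nearly 12mA peak - more than many
# common opamps can deliver cleanly.
print(round(peak_load_current_ma(5.0, 600), 1))  # 11.8
```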
Voltage amplifiers predominate, and they can be non-inverting (the most common) or inverting. Strictly speaking, an inverting amplifier is a current amplifier by default, but we don't often see it that way. However, it's still true, since the output voltage is determined by the input current. A voltage-to-current converter is almost always used at the input - it's called a resistor.
Think of a 1k resistor. If it has 1V across it, it passes 1mA, regardless of the voltage being AC, DC, RMS or peak. That's the voltage-to-current converter right there. The opamp then operates as a current-to-voltage converter (aka a transimpedance amplifier), a term that often creates fear and loathing in the uninitiated, but there's nothing complex about the basic idea. It does become complex if your input current is only a few microamps (or less), but the principle is not changed.
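The whole inverting stage can be summarised in a couple of lines. The 10k feedback value below is just an example:

```python
def inverting_output(v_in, r_in, r_feedback):
    """Ideal inverting stage: the input resistor converts voltage to
    current (I = Vin/Rin), and feedback forces that current through
    Rf, giving Vout = -I * Rf = -Vin * Rf/Rin."""
    i_in = v_in / r_in          # voltage-to-current conversion
    return -i_in * r_feedback   # current-to-voltage (transimpedance)

# 1V into a 1k input resistor is 1mA; with a 10k feedback resistor
# the output is -10V (a gain of -10).
print(inverting_output(1.0, 1e3, 10e3))  # -10.0
```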
It doesn't even matter if the opamp has JFET inputs (normally considered extremely high impedance), because the inverting input is maintained at zero volts by feedback (a dual supply is assumed). In an inverting gain stage, the non-inverting input is grounded, and the inverting input is a virtual ground/ earth. If the non-inverting input is at zero volts (earth/ ground), then the inverting input has to be at the same voltage (see my 'rules' of opamps below). Opamps rely on feedback to function, as without it the gain is so high (and the frequency response so limited) that they would be no use to anyone.
There is a component that looks like an opamp, but it isn't. It's called a comparator, and these are designed to be operated open-loop. Applying feedback will result in oscillation, and there is no facility to apply compensation. These are covered in depth in the article Comparators, The Unsung Heroes Of Electronics. They are not discussed further here, but you do need to be aware of the differences. This is doubly true because the basic schematic symbol is the same for opamps and comparators.
Many years ago I determined what I called my 'two rules of opamps'. Provided any (conventional) opamp is operated within its linear range, the feedback works to keep both inputs at the same voltage. There will be small deviations caused by input offset, but the principle is unchanged. If the feedback cannot achieve this, the output takes the polarity of the more positive input. If the inverting input is at a higher (more positive) voltage than the non-inverting input, the output will be negative (or zero volts for single supply circuits). Naturally, the converse also applies. If you understand these basic rules, opamps will not cause any brain-pain.
The two rules are therefore ...
1. In linear mode, the feedback works to keep both inputs at the same voltage, and ...
2. If this is not possible, the output takes the polarity of the more positive input.
There is no (working) opamp circuit where one or the other of these rules does not apply. If you find a significant difference (more than a few millivolts) there is a wiring error or the opamp is faulty. Note that 'more positive' applies even if both inputs are negative. For example, -1V is more positive than -2V. These 'rules' always apply, but they are limited to voltage feedback types (the vast majority of all opamps in use). Current feedback (CFB) opamps are sometimes different, but with many the 'rules' still apply. These are a special case, and are covered in the article Current Feedback vs. Voltage Feedback.
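The two rules fall straight out of a behavioural model: a huge gain multiplying the input difference, clamped at the supply rails. This is only a sketch (a real opamp adds offset, bandwidth and slew limits):

```python
def ideal_opamp_output(v_noninv, v_inv, v_pos=15.0, v_neg=-15.0,
                       open_loop_gain=1e5):
    """Behavioural sketch of the two 'rules': output is the input
    difference times a very large gain, clamped at the supply rails."""
    v_out = open_loop_gain * (v_noninv - v_inv)
    return max(v_neg, min(v_pos, v_out))

# Rule 2: the inverting input is more positive, so the output
# swings as far negative as it can.
print(ideal_opamp_output(0.0, 1.0))  # -15.0

# Rule 1 (voltage follower): with the output tied back to the
# inverting input, solving vout = G*(vin - vout) gives
# vout = vin*G/(1+G), so both inputs sit at (almost) the same voltage.
v_in, g = 2.0, 1e5
v_out = v_in * g / (1 + g)
print(round(v_out, 4))  # 2.0 (within about 20 µV of the input)
```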
The earliest opamps were valve (vacuum tube) based, and were rather limited. These are discussed extensively on-line, but one of the more popular versions was the K2-W, made by George A Philbrick Researches (GAP/R), which used a pair of 12AX7 valves, a differential input stage and cathode follower output. These were not used for audio, as they were too expensive and it was far easier to use conventional circuits.
ICs that we now consider to be 'true' opamps began in 1964. Prior to that, most amplification was done with fully discrete circuits, including valve designs. Most were intended for AC only, because DC amplification was difficult. It was done when necessary by using early 'chopper' amplifiers that (pretty much literally) chopped the DC to produce an AC voltage (a squarewave), and that was amplified. If necessary this was converted back to DC after amplification. Chopper opamps still exist, and are often referred to as 'zero drift' opamps.
The creation of opamps as we know them changed everything. Any number of (audio) people claimed they were 'horrible' compared to discrete transistor or valve designs, but reality (and pragmatism) quickly saw opamps used for most amplifying tasks that would otherwise be needlessly complex. This prejudice continues, and there is any number of people who will relieve you of (lots of) cash for discrete designs that few (if any) people will pick in a double-blind test.
Please be aware that the circuit diagrams are believed to be accurate, but may contain errors or be slightly different from the actual circuit. There are many opamps missing, as I only included those for which a schematic could be found. I've resisted the urge to try to explain how each one works. Some will be easy to follow (and simulate), others not. Most of the common circuit 'blocks' are seen in the diagrams, such as long-tailed pairs, current mirrors, Darlington and Sziklai pairs. Resistors are usually kept to the minimum, as they are comparatively difficult to fabricate on silicon. Fabrication of capacitors is also difficult, even with low values.
The four important building blocks are shown above. The long-tailed pair (LTP - Q1, Q2) uses a current sink (same as a current source) in the emitter circuit. This uses a reference based on D1, D2, with Q3 set for 1mA (650mV forward voltage for transistors and diodes). The collector load for the LTP is a current mirror, which ensures that Q1 and Q2 draw the same current. The VAS (voltage amplifier stage) converts the signal from current-mode to voltage-mode, and is followed by an output stage (typically dual emitter-followers). These circuits are used extensively in all opamp designs (including discrete). The one shown has a gain of about 2,200 (66dB), but this can be increased dramatically by using a current source/ sink as the load for Q6 (replacing R3). A very rough simulation shows an open-loop gain (no feedback) of 17,000 (almost 84dB) with a 3mA current sink for Q6. ±12V supplies are assumed.
These circuit blocks can be seen in all of the drawings below, although sometimes they can be hard to identify. I didn't include an output stage protection circuit in Fig. 3.0 to keep it as simple as possible. Every stage that's added makes it harder to keep the circuit stable (free from high-frequency oscillation), because each adds some phase shift. Even the simple circuit shown above will oscillate when a 3mA current sink is added to replace R3 (4k resistor). While the simulator claims it's stable, I don't believe that for an instant.
The μA702 was the first opamp to be released, although it was so expensive that it was probably only bought by the military. With only 9 transistors (all NPN), its performance was mediocre by modern standards, but at the time it was a minor miracle. Released in 1964, it was the first monolithic opamp IC (meaning everything on a single 'chip' of silicon).
The μA702 is a monolithic DC amplifier, constructed using the Fairchild Planar Epitaxial process. It is intended for use as an operational amplifier in analog computers, as a precision instrumentation amplifier, or in other applications requiring a feedback amplifier useful from DC to 30MHz.
The μA702 had very limited open-loop gain (around 2,500 or 68dB), but the IC process meant that it could outperform 'equivalent' discrete circuits. This is a characteristic of all IC opamps because the transistors are thermally matched, and this minimises offset drift with temperature. There are some very clever tricks used in the IC to allow the use of all NPN transistors. It's unusual, in that it included a ground pin, something that most opamps have not used since. The output stage is Class-A, using a resistor from the emitter of the lone output transistor. A bit of additional gain is obtained by applying positive feedback into the emitter of Q9, via R10 and R11. The emitter resistor (R6) is coupled to the junction, and it's a positive feedback circuit that adds some gain. The positive feedback must be kept below unity to prevent oscillation.
The μA709 followed in 1965, and was an immediate success. With much higher gain and better performance overall, it was also comparatively cheap. The output stage is unbiased, so crossover distortion would be inevitable at low levels. Feedback can't remove it, because the stage has zero gain when both transistors are off. No gain means no feedback. This IC had PNP transistors, which made internal level-shifting far easier than with all NPN devices.
The IC fabrication process means that PNP transistors are rather poor compared to their NPN equivalents, but using clever design techniques meant that the effects were mitigated - at least to a degree. This has always been an issue with linear circuits, and even today the PNP transistors in an IC aren't as good as the NPN devices. All manufacturers have found ways to get around this limitation, and it should not be a concern with any opamp that you can buy.
These circuits are general-purpose operational amplifiers, each having high-impedance differential inputs and a low-impedance output. Component matching, inherent with silicon monolithic circuit-fabrication techniques, produces an amplifier with low-drift and low-offset characteristics. Provisions are incorporated within the circuit whereby external components may be used to compensate the amplifier for stable operation under various feedback or load conditions. These amplifiers are particularly useful for applications requiring transfer or generation of linear or nonlinear functions.
The μA709A circuit features improved offset characteristics, reduced input-current requirements, and lower power dissipation when compared to the μA709 circuit. In addition, maximum values of the average temperature coefficients of offset voltage and current are specified for the μA709A.
There's no doubt that this was a very good opamp for the day. The unbiased output stage is a pity, but there are plenty of applications where this is not a major limitation. You can see that the two output transistors (Q10 and Q13) have their bases tied together, so the drive signal has to overcome the 0.7V base-emitter voltages before the output responds. The 'dead zone' created causes crossover distortion if used for audio frequency AC, but this is only an issue with 'true' audio signals. The cost of these early IC opamps was such that no one considered their use in audio circuitry, as discrete designs of the day were 'good enough' and budget-friendly. Indeed, by comparison, a discrete 2-transistor Class-A preamplifier would outperform a 709 easily.
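The 'dead zone' is easy to model: nothing comes out until the drive exceeds the base-emitter voltage in either direction. A crude sketch (ignoring the transistors' exponential turn-on):

```python
import math

def unbiased_output_stage(v_drive, v_be=0.7):
    """Crude model of an unbiased complementary output stage: neither
    transistor conducts until the drive overcomes its base-emitter
    voltage, leaving a 'dead zone' around zero volts."""
    if v_drive > v_be:
        return v_drive - v_be
    if v_drive < -v_be:
        return v_drive + v_be
    return 0.0

# A 1V peak sine loses everything within ±0.7V of zero - gross
# distortion at low levels, far less significant at large swings.
wave = [unbiased_output_stage(math.sin(2 * math.pi * i / 16)) for i in range(16)]
print([round(v, 2) for v in wave])
```

In a real amplifier, feedback reduces (but cannot eliminate) the dead zone, exactly as described above.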
The μA741 is quite possibly the most popular opamp of all time. When it was released in 1968, everyone loved the fact that no external parts were needed for stability (no external compensation capacitor), and it was stable at unity gain. This made it ideal as a voltage follower (buffer) or general-purpose amplifier, and they were used by almost everyone (including for audio). It was common to see them with a pair of low-noise transistors (usually as a long-tailed pair) at the input to get lower noise for RIAA (phono) and microphone preamps.
It's almost certain that you won't find anyone old enough to have seen the introduction of the 741 who didn't use them. They are still popular, despite their many shortcomings compared to modern opamps, but often a designer just wants something that will work, with no fuss, and no need to be too particular about supply bypassing, PCB layout or anything else that may cause problems in operation. If you need a dual version, the MC/RC1458 is ideal - very similar specs overall, and almost no likelihood of malfunction even with breadboard or Veroboard.
The μA741 is a general-purpose operational amplifier featuring offset-voltage null capability. The high common-mode input voltage range and the absence of latch-up make the amplifier ideal for voltage-follower applications. The device is short-circuit protected and the internal frequency compensation ensures stability without external components. A low value potentiometer may be connected between the offset null inputs to null out the offset voltage as shown in Figure 2.
The μA741C is characterized for operation from 0°C to 70°C. The μA741I is characterized for operation from –40°C to 85°C. The μA741M is characterized for operation over the full military temperature range of –55°C to 125°C.
The μA741 uses a biased output stage, with a now conventional bias servo based on Q18, Q19 and R10. There's also much more use made of current mirrors, both to increase gain and reduce non-linearities (distortion). We also see some of the first transistors with dual collectors/ emitters. These are easily fabricated in an IC.
Later versions of the μA741 were different from early designs. Performance was (pretty much) unchanged, but as fabrication techniques improved, designs could be improved with virtually no cost penalty. The internal changes are not always obvious, but all manufacturers have a disclaimer on datasheets that says that they reserve the right "to make changes to their products or to discontinue any product or service without notice", and advise customers to obtain the latest version of relevant information to verify that information being relied on is current and complete.
These opamps dominated the market for a time. While never as popular as the μA741, they were much faster. Noise was about the same or slightly better (it wasn't mentioned in the 741 datasheet). Being externally compensated meant that the designer had to work out the optimum compensation capacitor for the desired performance, and it meant that an extra component was required. Hardly something to complain about, especially since it made the opamp more versatile.
The LM301 didn't get a great deal of traction in audio applications, but it was used (actually the LM301A) in the Quad 405 series of power amplifiers. This (amongst other things) created something of a stir (to put it mildly) in the audio fraternity. Many people thought that using an opamp in a power amplifier was sacrilege, and the 'current dumping' technique used caused even more fuss. It was even claimed that it couldn't possibly work, even though it was quite obvious that it did!
The LM101A series are general purpose operational amplifiers which feature improved performance over industry standards like the LM709. Advanced processing techniques make possible an order of magnitude reduction in input currents, and a redesign of the biasing circuitry reduces the temperature drift of input current. Improved specifications include:

- Offset voltage 3 mV maximum over temperature (LM101A/LM201A)
- Input current 100 nA maximum over temperature (LM101A/LM201A)
- Offset current 20 nA maximum over temperature (LM101A/LM201A)
- Guaranteed drift characteristics
- Offsets guaranteed over entire common mode and supply voltage ranges
- Slew rate of 10V/μs as a summing amplifier

This amplifier offers many features which make its application nearly foolproof: overload protection on the input and output, no latch-up when the common mode range is exceeded, and freedom from oscillations and compensation with a single 30 pF capacitor. It has advantages over internally compensated amplifiers in that the frequency compensation can be tailored to the particular application. For example, in low frequency circuits it can be overcompensated for increased stability margin. Or the compensation can be optimized to give more than a factor of ten improvement in high frequency performance for most applications.

In addition, the device provides better accuracy and lower noise in high impedance circuitry. The low input currents also make it particularly well suited for long interval integrators or timers, sample and hold circuits and low frequency waveform generators. Further, replacing circuits where matched transistor pairs buffer the inputs of conventional IC op amps, it can give lower offset voltage and drift at a lower cost.

The LM101A is guaranteed over a temperature range of -55°C to +125°C, the LM201A from -25°C to +85°C, and the LM301A from 0°C to +70°C.
National Semiconductor released the LM101 (and its lower spec LM301) in 1968. These were a vast improvement on many of the earlier Fairchild designs, and National Semiconductor was founded by former Fairchild employees. This is now a great deal harder, because most companies include 'non-compete' clauses in employment contracts to prevent this from happening (Intel came about by similar skulduggery). Bob Widlar moved to National and took his considerable design expertise with him, but that didn't stop Fairchild from releasing the 741 and pretty much taking over the market.
The LM318 was almost a quantum leap over earlier opamps. It was released by National Semiconductor in 1971. With up to 15MHz bandwidth (small signal) and a 50V/μs slew rate, their speed was unmatched at the time. There are dire warnings about the danger of not applying proper bypassing techniques. The effects may not be immediately audible or visible on a scope, but internal oscillation could cause degraded performance.
The LM118 series are precision high speed operational amplifiers designed for applications requiring wide bandwidth and high slew rate. They feature a factor of ten increase in speed over general purpose devices without sacrificing DC performance.
The LM118 series has internal unity gain frequency compensation. This considerably simplifies its application since no external components are necessary for operation. However, unlike most internally compensated amplifiers, external frequency compensation may be added for optimum performance. For inverting applications, feedforward compensation will boost the slew rate to over 150V/μs and almost double the bandwidth. Overcompensation can be used with the amplifier for greater stability when maximum bandwidth is not needed.
Further, a single capacitor can be added to reduce the 0.1% settling time to under 1μs. The high speed and fast settling time of these op amps make them useful in A/D converters, oscillators, active filters, sample and hold circuits, or general purpose amplifiers. These devices are easy to apply and offer an order of magnitude better AC performance than industry standards such as the LM709. The LM218 is identical to the LM118 except that the LM218 has its performance specified over a -25°C to +85°C temperature range. The LM318 is specified from 0°C to +70°C.
An interesting limitation is that when used as a buffer (voltage follower), the inverting input must not be connected directly to the output (unlike almost all other opamps). The minimum resistance between these two pins is 5k, which may be bypassed with a small capacitance (the datasheet suggests 5pF). The IC is internally compensated, but feedforward compensation can be used to increase open-loop bandwidth and increase the slew-rate to 150V/μs.
These devices were designed by Signetics (ultimately bought by Philips), and were aimed at audio. Released in 1979, they quickly cemented their place in audio circuits, and were the mainstay of almost every mixing console made since their release, and up until comparatively recently. They have high supply current, but were the first opamps that were designed to drive a 600Ω load (a common requirement at the time). With very low noise and distortion, they weren't surpassed until National Semiconductor (now part of Texas Instruments) released the LM4562. This point may be argued, but it's a view held by many audio designers. Like the LM318, proper bypassing is absolutely essential to ensure that performance isn't compromised.
For most audio projects, the NE5532 (dual) is still an excellent choice. There are 'better' opamps to be sure, but in 99.9% of cases the difference will be inaudible. The IC is now available from multiple manufacturers, and while some people claim that different makers' ICs sound 'different', this is (generally) not backed up by measurements.
Note that the component numbering is mine - the available schematics don't show designators, and most resistor values are also not included. Depending on the datasheet, you may see minor differences when a circuit diagram is included (they were not disclosed for many years). The one 'failing' of the NE5532 is that it has mediocre DC offset performance, but this is not (or should not be) an issue with any audio circuit, as DC should be blocked by a capacitor as a matter of course.
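Blocking DC with a capacitor simply forms a high-pass filter with the next stage's input impedance, and the corner can be placed well below the audio band. The 10μF/22k values below are typical examples rather than anything prescriptive:

```python
import math

def highpass_corner_hz(r_ohms, c_farads):
    """-3dB corner frequency of the high-pass filter formed by a DC
    blocking capacitor and the following stage's input impedance."""
    return 1.0 / (2 * math.pi * r_ohms * c_farads)

# A 10µF cap into a 22k input impedance rolls off below ~0.7Hz, so any
# opamp DC offset is blocked without touching the audio band.
print(round(highpass_corner_hz(22e3, 10e-6), 2))  # 0.72
```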
Datasheet Description

The NE5534, NE5534A, SE5534, and SE5534A are monolithic high-performance operational amplifiers combining excellent DC and AC characteristics. Some of the features include very low noise, high output drive capability, high unity gain and maximum-output-swing bandwidths, low distortion, and high slew rate.
These operational amplifiers are internally compensated for a gain equal to or greater than three. Optimization of the frequency response for various applications can be obtained by use of an external compensation capacitor between COMP and COMP/BAL. The devices feature input protection diodes, output short-circuit protection, and offset-voltage nulling capability.
For the NE5534A, a maximum limit is specified for equivalent input noise voltage.
The NE5534 and NE5534A are characterized for operation from 0°C to 70°C. The SE5534 and SE5534A are characterized for operation over the full military temperature range of –55°C to 125°C.
Many people consider the TL07x series of opamps to be 'inferior', and don't consider them to be worthy of hi-fi. In general this is untrue, provided you use them within their limitations. Probably the most annoying 'feature' is a polarity reversal if the input common mode range is exceeded. This is difficult to provoke within a self-contained circuit, but a TL07x that interfaces with the outside world is at some risk. If either input is brought close to the negative supply voltage (VEE), the output may change polarity - you expect it to be low, but it suddenly swings to close to the positive supply rail (VCC). Note that the offset null facility is only available on the TL071. The series was introduced in ca. 1978.
The TL07x series has been popular for many years, and they are still common in audio gear, guitar amps, etc. The polarity inversion is so well-known that a lot of datasheets (especially for FET input opamps) proclaim that they are "free from polarity inversion if the common mode range is exceeded". This problem is often used as a reason not to use TL07x opamps, but it rarely causes any issues. If it does occur, the sound is most unpleasant, but I don't think I've ever had a problem in any 'real' circuit. As with the NE5534, the component numbers are mine - they aren't shown in the datasheet.
Datasheet Description

The JFET-input operational amplifiers in the TL07x series are designed as low-noise versions of the TL08x series amplifiers with low input bias and offset currents and fast slew rate. The low harmonic distortion and low noise make the TL07x series ideally suited for high-fidelity and audio preamplifier applications. Each amplifier features JFET inputs (for high input impedance) coupled with bipolar output stages integrated on a single monolithic chip.
The C-suffix devices are characterized for operation from 0°C to 70°C. The I-suffix devices are characterized for operation from –40°C to 85°C. The M-suffix devices are characterized for operation over the full military temperature range of –55°C to 125°C.
The input impedance of these opamps is quoted as 1TΩ (one tera-ohm, or 1,000GΩ), but this is a theoretical value that's almost impossible to achieve in practice. Printed circuit board leakage (along with leakage across the package itself) will dominate, even if the input is 'guarded' (a PCB layout technique that bootstraps the input section with a ring of copper around input circuitry). I've used a technique I call 'sky-hooking' - all input circuitry (including the input pin) is connected in mid-air, with no input pin connections to the PCB.
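The reason board leakage dominates is simple parallel-resistance arithmetic. The 10GΩ leakage figure below is purely illustrative:

```python
def parallel(*resistances):
    """Equivalent resistance of resistances in parallel."""
    return 1.0 / sum(1.0 / r for r in resistances)

# The quoted 1 TΩ input impedance in parallel with a (purely
# illustrative) 10 GΩ of board/package leakage: the leakage wins
# almost completely.
print(round(parallel(1e12, 10e9) / 1e9, 1))  # 9.9 (GΩ - the 1 TΩ barely matters)
```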
The TL08x series is virtually identical to the TL07x (supposedly marginally better). The LF355/6 is generally considered 'equivalent' to a TL071. Many of the specs are almost identical, but these opamps weren't available in dual or quad versions. Usage seems to be very low - I've seen almost no circuits of commercial equipment that used them. It appears that they may be unavailable now.
The OP07 is made by Analog Devices, and is classified as a precision opamp with ultra-low DC offset. Without nulling, offset is internally trimmed to be within ±75μV, and that can be reduced by using the offset null pins. It's not especially quiet (~10nV/√Hz), but for a bipolar transistor input opamp it has a higher than 'typical' input impedance. This is a good opamp to use when very low offset is important.
Considering the DC accuracy and its other specs, it's very reasonably priced from most distributors. It's not suitable for driving low-impedance loads (around 1kΩ is the lower limit), but that's rarely an issue for instrumentation applications. I've specified the OP07 in at least one project, but they are widely used in commercial/ industrial designs.
The schematic is simplified, in that it shows current sources as a symbol rather than the complete circuit. Unfortunately, the current passed by each isn't stated anywhere. The gain is stated in V/mV (not uncommon), and it works out to 200,000 (106dB) open loop (minimum). In most cases it's much higher - there's a graph in the datasheet that shows a gain of 114dB at 25°C.
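Converting a V/mV figure to a plain ratio and decibels is straightforward:

```python
import math

def gain_db(v_per_mv):
    """Convert an open-loop gain quoted in V/mV to a plain ratio
    and to decibels (20*log10 of the voltage ratio)."""
    ratio = v_per_mv * 1000
    return ratio, 20 * math.log10(ratio)

ratio, db = gain_db(200)      # the OP07's minimum open-loop gain
print(ratio, round(db, 1))    # 200000 106.0
```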
Datasheet Description

The OP07 has very low input offset voltage (75 µV maximum for OP07E) that is obtained by trimming at the wafer stage. These low offset voltages generally eliminate any need for external nulling. The OP07 also features low input bias current (±4 nA for the OP07E) and high open-loop gain (200 V/mV for the OP07E). The low offset and high open-loop gain make the OP07 particularly useful for high gain instrumentation applications.
The wide input voltage range of ±13 V minimum combined with a high CMRR of 106 dB (OP07E) and high input impedance provide high accuracy in the non-inverting circuit configuration. Excellent linearity and gain accuracy can be maintained even at high closed-loop gains. Stability of offsets and gain with time or variations in temperature is excellent. The accuracy and stability of the OP07, even at high gain, combined with the freedom from external nulling have made the OP07 an industry standard for instrumentation applications.
The OP07 is available in two standard performance grades. The OP07E is specified for operation over the 0°C to 70°C range, and the OP07C is specified over the -40°C to +85°C temperature range.
You won't find the OP07 in audio circuits, although I'm sure that someone would have tried them. There's no reason it wouldn't perform well, but most audio designers tend to stay with opamps that are at least claimed to be suitable for audio. It would be useful for a DC servo (in power amplifiers) with its low DC offset, but having bipolar transistor inputs means that the impedance is likely to be a bit too low, meaning that a high value integrating capacitor is needed (see DC Servos - Tips, Traps & Applications for details).
The input transistors have bias-current compensation, so the input current drawn from the external circuitry is greatly reduced. Unfortunately, the extra transistors (Q5, Q6, Q7, Q8) add some noise, so you can't expect to use it for low-noise circuitry.
Released (by Analog Devices) in around 1990, this is an RF (radio frequency) opamp, but is more commonly 'restricted' to high-speed video applications. Uncompensated, the gain-bandwidth product is 750MHz, but the usable bandwidth is 'only' 120MHz. The features (from the datasheet) are as follows ...
- High speed: 120 MHz bandwidth (gain = -1), 230 V/μs slew rate, 90 ns settling time to 0.1%
- Ideal for video applications: 0.02% differential gain, 0.04° differential phase
- Low noise: 1.7 nV/√Hz input voltage noise, 1.5 pA/√Hz input current noise
- Excellent DC precision: 1 mV maximum input offset voltage (over temperature), 0.3 μV/°C input offset drift
- Flexible operation: specified for ±5 V to ±15 V operation, ±3 V output swing into a 150Ω load, external compensation for gains 1 to 20, 5 mA supply current
The schematic is simplified, and the datasheet says the IC has 46 transistors. The current sources/ sinks will all use transistors, and for IC fabrication, diodes are almost always 'diode-connected' transistors (base and collector joined).
Datasheet Description

The AD829 is a low noise (1.7 nV/√Hz), high speed op amp with custom compensation that provides the user with gains of 1 to 20 while maintaining a bandwidth >50 MHz. Its 0.04° differential phase and 0.02% differential gain performance at 3.58 MHz and 4.43 MHz, driving reverse-terminated 50Ω or 75Ω cables makes it ideally suited for professional video applications. The AD829 achieves its 230 V/μs uncompensated slew rate and 750 MHz gain bandwidth while requiring only 5 mA of current from power supplies.
The AD829 is still in production (well over 30 years at the time of writing), but as you'd expect for such a high-spec part, it's not inexpensive. It's available in several packages, from DIP to SMD (including LLCC). This isn't an opamp that I'd suggest for audio, although if properly compensated I'm sure it would do a fine job. It's too expensive, and doesn't really offer any significant advantages over more common audio opamps. It can drive a 150Ω load, but with greatly reduced voltage swing. Distortion is low, but it doesn't compare to an LM4562 (for example). The internal compensation is sufficient with a noise gain of 20 or more, but for lower gain external compensation is required.
This is an unusual opamp, in that it has low supply current but can drive 600Ω loads. It claims to have no crossover distortion despite the very low current. I could find no details on when these were introduced, but anecdotal evidence indicates that they have been around for quite some time. The DIP package is now obsolete, but SMD versions are still available at low cost.
Features:

- 600Ω Output Drive Capability
- Large Output Voltage Swing
- Low Offset Voltage: 0.15 mV (Mean)
- Low T.C. of Input Offset Voltage: 2.0μV/°C
- Low Total Harmonic Distortion: 0.0024% (@ 1.0 kHz w/600Ω Load)
- High Gain Bandwidth: 5.0 MHz
- High Slew Rate: 2.0 V/μs
- Dual Supply Operation: ±2.0 V to ±18 V
- ESD Clamps on the Inputs Increase Ruggedness without Affecting Device Performance
The MC33178/9 series is a family of high quality monolithic amplifiers employing Bipolar technology with innovative high performance concepts for quality audio and data signal processing applications. This device family incorporates the use of high frequency PNP input transistors to produce amplifiers exhibiting low input offset voltage, noise and distortion. In addition, the amplifier provides high output current drive capability while consuming only 420μA of drain current per amplifier. The NPN output stage used exhibits no deadband crossover distortion, large output voltage swing, excellent phase and gain margins, low open-loop high frequency output impedance, symmetrical source and sink AC frequency performance.
This rather unusual opamp uses a boosted output stage to combine a high output current with a supply current lower than similar bipolar input opamps. Its 60° phase margin and 15dB gain margin ensure stability with up to 1000pF (1nF) of load capacitance. The ability to drive a minimum 600Ω load makes it particularly suitable for telecom applications. Operation is from ±2V to ±18V, meaning that it can be operated from a single 5V supply.
There's no reason not to use it for audio, but there's also no compelling reason to include it in a modern design. There are many other opamps that out-perform it in nearly all respects, but expect higher supply current. The combination of very low current and the ability to drive low-impedance loads makes it unique.
The CA3130 is a BiMOS (bipolar/ MOSFET) opamp that can be useful in a number of circuits. This is not a 'hi-fi' device, but it is ideal for many simple instrumentation circuits. It's pretty noisy (no noise figure is even quoted for the 3130), but it will be perfectly alright for reasonable signal levels.
The 3130 is uncompensated, and a capacitor is needed between the 'Compensation' pins. For a unity gain buffer you need around 56pF, but this can be reduced if the circuit is operated with gain. Because the input impedance is so high, best results will be obtained when the source impedance is 100k or more, as current noise is claimed to be quite low. It's not specified though.
Datasheet Description
15MHz, BiMOS Operational Amplifier with MOSFET Input/CMOS Output
CA3130A and CA3130 are op amps that combine the advantage of both CMOS and bipolar transistors. Gate-protected P-Channel MOSFET (PMOS) transistors are used in the input circuit to provide very-high-input impedance, very-low-input current, and exceptional speed performance. The use of PMOS transistors in the input stage results in common-mode input-voltage capability down to 0.5V below the negative-supply terminal, an important attribute in single-supply applications.
A CMOS transistor-pair, capable of swinging the output voltage to within 10mV of either supply-voltage terminal (at very high values of load impedance), is employed as the output circuit. The CA3130 Series circuits operate at supply voltages ranging from 5V to 16V (±2.5V to ±8V). They can be phase compensated with a single external capacitor, and have terminals for adjustment of offset voltage for applications requiring offset-null capability. Terminal provisions are also made to permit strobing of the output stage.
Just because an opamp has a similar number doesn't mean that it's related in any way to another. You would think that the CA3130 and CA3140 were related, but they are very different devices. The CA3140 is classified as a BiMOS opamp, and uses MOSFETs for the input with most of the internal circuitry using BJTs. The noise is quoted as 40nV/√Hz (1kHz), and while you might expect to see a current noise figure quoted, it's not in the datasheet (at least not in the one I have).
The input impedance is claimed to be 1.5TΩ, something that will be very hard to verify on the workbench. This is a good opamp to use where noise isn't a major issue, and I recommended it in the Project 154 PC oscilloscope adapter. It's not especially cheap, but it will work with low supply voltages, down to 4V single supply.
Datasheet Description
The CA3140A and CA3140 are integrated circuit operational amplifiers that combine the advantages of high voltage PMOS transistors with high voltage bipolar transistors on a single monolithic chip.
The CA3140A and CA3140 BiMOS operational amplifiers feature gate protected MOSFET (PMOS) transistors in the input circuit to provide very high input impedance, very low input current, and high speed performance. The CA3140A and CA3140 operate at supply voltage from 4V to 36V (either single or dual supply). These operational amplifiers are internally phase compensated to achieve stable operation in unity gain follower operation, and additionally, have access terminal for a supplementary external capacitor if additional frequency roll-off is desired. Terminals are also provided for use in applications requiring input offset voltage nulling. The use of PMOS field effect transistors in the input stage results in common mode input voltage capability down to 0.5V below the negative supply terminal, an important attribute for single supply applications. The output stage uses bipolar transistors and includes built-in protection against damage from load terminal short circuiting to either supply rail or to ground.
The devices described here are just a very small sample of what's available. I've only included one CMOS opamp, but left out OTAs (operational transconductance amplifiers) or other ICs that are/ were specialised.
One that stands out is the Intersil (formerly Harris Semiconductor, now Renesas) HA2539. Rated for up to 600MHz and with a 600V/μs slew rate, this is an outstanding component. I doubt that anyone used it for audio, simply because no traditional audio application requires that kind of speed. The 'lesser' HA2620/ 2625 (only 100MHz bandwidth and 35V/μs slew rate) were used in some high-end distortion meters, but were otherwise limited to esoteric applications. These would have included laboratory equipment, military and aerospace. These devices are now obsolete. The closest equivalent available now is probably the TI (Texas Instruments) LM6172 - 100MHz, 3kV/μs slew rate (yes, really!). The ceramic package will set you back a small fortune, but the SMD package is surprisingly low-cost (under AU$10.00 each).
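For perspective, the slew rate a signal actually demands is easy to work out: SR = 2πfV, where V is the peak voltage. A quick sketch in Python (the function name is mine, purely for illustration) shows why hundreds of V/μs are never needed for audio:

```python
import math

def required_slew_rate(freq_hz, v_peak):
    """Minimum slew rate (in V/us) needed to reproduce a sinewave of the
    given frequency and peak voltage without slew-rate limiting."""
    return 2 * math.pi * freq_hz * v_peak / 1e6  # convert V/s to V/us

# A 'worst case' audio signal: 20 kHz at 10 V peak
print(f"{required_slew_rate(20e3, 10):.2f} V/us")  # ~1.26 V/us
```

Even a full-level 20kHz sinewave needs only about 1.26V/μs, which is why slew rates in the hundreds of V/μs matter for video and RF work, not audio.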
The HA2539 used special fabrication techniques that allowed PNP transistors to have similar performance to NPN, something that hadn't been achieved before. As with any fast opamp, bypassing was critical, and likewise PCB layout. I included it here because it represented the state-of-the-art when it was made. Unfortunately, I've not been able to determine when it was introduced. The datasheet I have is dated 2003, probably not very long before it was retired.
Datasheet Description
The Intersil HA-2539 represents the ultimate in high slew rate, wideband, monolithic operational amplifiers. It has been designed and constructed with the Intersil High Frequency Bipolar Dielectric Isolation process and features dynamic parameters never before available from a truly differential device. With a 600V/µs slew rate and a 600MHz gain bandwidth product, the HA-2539 is ideally suited for use in video and RF amplifier designs, in closed loop gains of 10 or greater.
Full ±10V swing coupled with outstanding AC parameters and complemented by high open loop gain makes the devices useful in high speed data acquisition systems.
The LM358 is not suitable for audio, but it's very useful for basic signal processing and other tasks. It is possible to force the output stage into Class-A by adding a resistor from the output to the negative supply, but it's still slow and rather noisy. Its biggest advantage is that it's close to impossible to contrive a board layout that will cause it to oscillate, and it doesn't care if there's no bypass capacitor for the supply rails.
It's also unusual in that the input common mode range includes ground for a single-supply circuit (or the negative supply if a ±V supply is used), so it can amplify a signal that falls to zero. It is a very low-current opamp, drawing 500μA (typical) at any supply up to ~20V or so. This makes it ideal for battery-powered circuits. I've described a circuit using an LM358 that's designed to disconnect a rechargeable battery if its voltage falls below a preset minimum. The low current is handy for this kind of application.
Datasheet Description
The LM158 series consists of two independent, high gain, internally frequency compensated operational amplifiers which were designed specifically to operate from a single power supply over a wide range of voltages. Operation from split power supplies is also possible and the low power supply current drain is independent of the magnitude of the power supply voltage.
Application areas include transducer amplifiers, DC gain blocks and all the conventional op amp circuits which now can be more easily implemented in single power supply systems. For example, the LM158 series can be directly operated off of the standard +5V power supply voltage which is used in digital systems and will easily provide the required interface electronics without requiring the additional ±15V power supplies.
Unique Characteristics
- In the linear mode the input common-mode voltage range includes ground and the output voltage can also swing to ground, even though operated from only a single power supply voltage.
- The unity gain cross frequency is temperature compensated.
- The input bias current is also temperature compensated.

Advantages
- Two internally compensated op amps
- Eliminates need for dual supplies
- Allows direct sensing near GND and VOUT also goes to GND
- Compatible with all forms of logic
- Power drain suitable for battery operation
- Pin-out same as LM1558/LM1458 dual op amp
The claim that the output can swing to ground is only partially true. It can get to within around 50-100mV of ground easily enough, but only if there's nothing in the external load to pull the output high. However, if the output is driving the base of an NPN transistor, only a limiting resistor is needed, where other opamps must have a voltage divider because their outputs usually can't go much below 1.5-2V above ground.
One thing that's definitely worthwhile is the LM358 datasheet. There are some excellent application circuits, and most will work with any opamp. These range from a VCO (voltage controlled oscillator) through all the usual circuits (buffers, inverters, etc.), lamp drivers, active filters, current sources, oscillators, and many more. In this respect, it's almost an opamp design guide disguised as a datasheet.
We are starting to see many opamps built using the CMOS (complementary metal oxide semiconductor) technology. This has taken over for most logic and processor applications, and it was inevitable that CMOS linear circuits would be used. They have some unique advantages, but are generally noisy compared to BJTs or even JFETs. Note that this is a very different technology from BiMOS (e.g. CA3130/40), and uses the same manufacturing techniques as CMOS logic. Some early CMOS logic ICs could be used in linear mode, but performance was poor.
The schematic is not intended to represent any particular device, but to show the basics of the internal circuit. In most cases, internal circuits aren't shown in datasheets. Many CMOS opamps are designed for low voltage operation, typically 5V. Almost all are SMD, with some having user-hostile packages (e.g. LLCC - leadless chip carrier or QFN - quad flat no leads). These are extremely difficult to work with using standard PCB assembly techniques.
The range is increasing all the time, but most remain marginal for audio. Depending on the manufacturer, you might get to see distortion performance and a noise specification, but expect to be underwhelmed if they are compared to 'traditional' BJT opamps. A noise figure of 57nV/√Hz is woeful, but some are much better than that. Many will state that they are RRIO, meaning that both input and output can swing to (or very near) the supply rails. The majority are intended for single supply operation, but a ±2.5V supply is perfectly alright.
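To put a noise density figure into perspective, it can be integrated over the audio bandwidth (this sketch assumes the density is flat, which is optimistic below the 1/f corner). The 2.7nV/√Hz figure used for comparison is the LM4562's datasheet voltage noise density:

```python
import math

def total_noise_uv(density_nv_rthz, bw_hz):
    """Total RMS noise (in uV) from a flat noise density in nV/sqrt(Hz),
    integrated over a bandwidth in Hz."""
    return density_nv_rthz * math.sqrt(bw_hz) / 1000

# 57 nV/sqrt(Hz) CMOS part vs. 2.7 nV/sqrt(Hz) LM4562, 20 kHz bandwidth
print(f"{total_noise_uv(57, 20e3):.2f} uV")   # ~8.06 uV
print(f"{total_noise_uv(2.7, 20e3):.2f} uV")  # ~0.38 uV
```

Over 20kHz, the difference between 57nV/√Hz and 2.7nV/√Hz is roughly 8μV versus 0.4μV of input-referred noise - more than 26dB.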
For a topic such as this, there is no real 'conclusion', because new devices keep being developed all the time. This alone makes the idea that 'analogue is dead' look rather silly, because no maker or supplier will keep producing or selling stuff that no one wants. Linear circuitry is needed in countless applications apart from audio, and the demands for higher performance are never-ending. Everyone wants the 'ideal' opamp, with infinite input impedance, infinite gain and bandwidth, and an output impedance of zero ohms. While this ideal doesn't exist, there are opamps that come remarkably close.
Of course there are limitations, and one that's always been a problem is stability (freedom from oscillation). This has always been a compromise, and that's not likely to be changed any time soon. The fact is that all active electronic devices have propagation delays, and these add up to create a frequency where the negative feedback becomes positive, so the opamp oscillates. The compensation is designed to reduce the open-loop gain to less than unity before any phase shift within the IC causes a polarity inversion.
Unfortunately, frequency compensation means that as the frequency is increased, the open-loop gain is decreased, so there's less feedback and distortion will rise. It's all a careful balancing act, but with all competent opamps available now it's not an issue. Many of the latest opamps have so much open-loop gain and so little intrinsic distortion that it becomes very difficult to even measure it. When the THD of an opamp is quoted as 0.00003% (unity gain, 600Ω load, LM4562 opamp), you can be confident that the distortion contributed by the opamp is so low that it will defy most attempts to measure it.
As always though, the choice of opamp depends on what you're using it for. If you need a transimpedance amplifier to convert the tiny current from a photo-diode into a useful voltage, you are looking at perhaps sub-picoamp input currents, and extraordinarily high impedance. An LM4562 would be a very poor choice indeed, because it's not suited to the task at hand. A FET input opamp, selected for very low current noise, would be the device of choice, even though it may look much worse on paper.
There are some truly awesome opamps available now. They generally come with higher prices than we're used to, but if you need the best opamp you can get (especially for instrumentation applications where a couple of PPM [parts per million] accuracy is required), then there is an opamp that will do the job. Once you get into this league, the choice of passive parts can have a significant effect, as can PCB layout. This is obviously outside the scope of this article, but it's now almost too easy. 1% resistors used to be uncommon and expensive, but today many people use nothing else for many circuits.
Consider that a 16-bit audio signal (0-5V) has a resolution of 76μV, and a 32-bit processing system has a theoretical resolution of 1.16nV over the same range. This can't be achieved in reality, and the best you can hope for is a resolution of around 24-bits (300nV resolution, 5V). Ultimately, any design ends up being limited by the laws of physics, with thermal noise being the ultimate limiting factor. For example, an ideal (noise-free) amplifier with a bandwidth of 20kHz will have an input noise of -131.8dBV with a 200Ω source. That's a voltage of 256.6nV, just from the resistor! If the (noiseless) opamp has a gain of 10 (20dB), the output noise is at -111.8dBV. This cannot be improved, but it can be made a lot worse if you choose the wrong amplifier or passive component values.
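The thermal noise figures above follow directly from the Johnson noise formula, √(4kTRB). A minimal sketch (Python, assuming a temperature of 25°C / 298K):

```python
import math

K_BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K

def thermal_noise_v(r_ohms, bw_hz, temp_k=298.0):
    """RMS thermal (Johnson) noise voltage of a resistor: sqrt(4kTRB)."""
    return math.sqrt(4 * K_BOLTZMANN * temp_k * r_ohms * bw_hz)

def dbv(volts):
    """Convert an RMS voltage to dBV (0 dBV = 1 V RMS)."""
    return 20 * math.log10(volts)

vn = thermal_noise_v(200, 20e3)      # 200 ohm source, 20 kHz bandwidth
print(f"{vn * 1e9:.1f} nV")          # ~256.6 nV
print(f"{dbv(vn):.1f} dBV")          # ~-131.8 dBV
print(f"{dbv(vn * 10):.1f} dBV")     # after a noiseless gain of 10: ~-111.8 dBV
```

The same function shows why low source impedances are preferred in low-noise designs: halving the resistance reduces the noise voltage by √2 (3dB).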
Datasheets for the various opamps described were the main source of information for this article. In a few cases I had to search for internal circuits, many of which are available from a number of sites. The 'datasheet descriptions' are copied from the datasheet for the device discussed, but some may have been updated since original publication. The descriptions are from the datasheets I have, some of which are fairly old now. The following links provided a lot of background info, and are recommended reading if you want to know more.
Elliott Sound Products - Oscilloscopes
One of the most useful pieces of test equipment for anyone involved in design, repair or even hobbyist interests is an oscilloscope. Modern digital sampling scopes are available for surprisingly little, but they have facilities that were unheard of only a few years ago. It's often thought that only professionals need an oscilloscope, but that's not the case at all. There are some things that simply can't be resolved any other way.
For many applications, an 'olde worlde' analogue CRO (cathode ray oscilloscope) is actually a better choice, but it's now getting hard to find them new, and a second hand one may or may not be worth what you pay. It's no use to anyone if it's faulty, because most repairs will require the use of ... an oscilloscope. For anyone who needs to see what waveforms look like, nothing else will do. When I'm working on something (whether a repair or a new design), the oscilloscope is one of the first instruments to be turned on, and it's rare to find a situation where the scope doesn't tell you what you need to know about something in far greater detail than you can ever get with a multimeter.
Oscilloscopes were first developed in the early 20th century [ 1 ], and were refined over the years to become one of the most popular pieces of test gear ever developed. Many different manufacturers have made oscilloscopes (which used to be known as a 'CRO' in Australia & Britain, or a 'scope' in the US), and the designs have been refined to the point where they are now available as small handheld units that exceed the performance of full sized bench (or trolley) units from 40-odd years ago. Trolleys were quite common many years ago, because decent oscilloscopes were very expensive, so a large workshop may only have had one or two oscilloscopes that were wheeled from one workbench to another as needed.
It's hard to go past the paper published by Tektronix [ 2 ] for a good overview and a great deal of in-depth material as well. Although it (predictably) shows Tektronix scopes throughout the paper, the principles are mostly unchanged with other instruments. While much of the material is intended for the more advanced user, there's also a lot of good, basic information that will help your understanding.
The oscilloscope is one of the few pieces of test equipment that has a familiar look and feel, regardless of the maker. While some parts of the front panel will be laid out slightly differently, there is never enough variation to flummox anyone who is even passably familiar with scopes and how to use them. The layout and control conventions used are logical and sensible - there is no need to change something that works close to perfectly with almost every instrument made.
This is not to say that all scopes adhere to the conventions. The Philips PM3382A (a combination analogue/ digital scope shown below) is one that doesn't use the traditional rotary controls for the vertical or horizontal controls, but uses up/down buttons instead. It's quite functional, but nowhere near as easy to use as the rotary switches used on nearly all other instruments. However, the controls are in the usual places, and rotary controls are still used for vertical positioning.
You will usually see the axes of an oscilloscope referred to as 'X' (horizontal) and 'Y' (vertical). The timebase feeds the X-axis and causes the beam (or spot) to traverse the screen linearly from left to right. The Y-axis handles the signal, and this deflects the spot up and down in sympathy with the signal itself. As a result, the wave shape is shown on the screen, and if it's well within the scope's bandwidth, should be an accurate representation of the incoming signal. This occurs regardless of the complexity (or otherwise) of the waveform if the instrument is calibrated.
Some analogue scopes have an additional axis - 'Z'. This allows the intensity of the spot to be varied as it traverses the screen. With some external electronics, an analogue scope with a Z-axis input can display an almost perfect monochrome TV picture - probably better than a TV set, because the linearity is better. Most digital scopes don't have this feature, although intensity modulation is available on some more advanced scopes that have (fairly recently) become available.
This article is not all about how to use an oscilloscope, as that information is in the user guide and is specific to the scope brand and model. There are usage guidelines and some useful hints and tips that may be missing elsewhere. The main aim is to provide info on the basic functions and help readers to understand what oscilloscopes are used for, and why. It may seem obvious, but there's a lot more to any scope than simply looking at a waveform.
Here are some example photos of oscilloscopes. These should not be considered an endorsement or otherwise of the brands and models shown - they are simply representative of several units with some reasonable age differences to give some perspective.
Figure 1 - Dick Smith Q1803 (Single Channel, Analogue)
This is the most basic type of oscilloscope, and has a rather limited 10MHz bandwidth. This is (just) enough for audio, but is of little use for anything much faster. It has a single channel, with a range from 5mV/ division up to 5V/ division. Higher voltages require an attenuator probe. While this type of CRO was 'cheap' in its day, compared to what you get today for not much more, it was not a bargain. This scope was bought on special, and often goes with me if I have to travel somewhere (and run tests on 'stuff'), as it's fairly small and adequate for the purposes it gets used for (which doesn't happen very often these days).
Figure 2 - Rigol DS1052E (Dual Channel, Digital)
The Rigol was extremely popular a few years ago, and was one of the first affordable digital scopes with a colour display. There was a period when they were sold directly from China that made them far cheaper than anything else at the time, and they were snapped up as a bargain all over the world. It features FFT, can display peak, average and RMS values for the input waveform, and has many other useful features. The use of a single set of controls for both channels is really annoying, and although it can save the displayed waveform to a USB flash drive, it requires several menu options and button presses. Traces from this scope are shown in many ESP articles.
Figure 3 - Siglent DS1052DL (Dual Channel, Digital)
Having a wider screen and simpler waveform save functions makes this an easy scope to use, but its FFT capabilities are not as good as the Rigol shown above. The wide screen (18 divisions) is nice, but not essential. Having separate controls for each input is much more convenient than a single set as seen above. An entire suite of measurements can be brought up simply by pressing the 'Measure' button. This scope is available with several different brand names - Siglent is (apparently) the original manufacturer, but rebrands the scopes for other suppliers. The scope is shown displaying a 1kHz sinewave of about 280mV peak (198mV RMS). The timebase is set to 250µs/ division.
Figure 4 - Philips PM3382A (4-Channel, Analogue / Digital)
This scope is an odd-ball in many respects. It's quite capable, but has a very limited sampling frequency (200MS/s) and can only be used to 100MHz in analogue mode. Note the use of up/down buttons for input sensitivity and timebase - a major departure from convention. It's a 4-channel scope, and this can be useful, although 2 channels is enough for most work. The display is on a CRT (cathode ray tube) for both analogue and digital modes. It has a very good FFT function (in digital mode) that's sharper than either of the digital scopes shown above. This scope also features a movable cursor and the position/ frequency and amplitude are shown as it's moved across the display.
Although it has 4 channels, the 3rd and 4th channels only have two sensitivity settings ... 100mV/ division and 500mV/ division. All channels can be used simultaneously, but only one can be used as the trigger source. The extra channels would mainly be used to look at digital signals because of their limited voltage ranges. It has an inbuilt auto-calibration feature that seems to be very comprehensive (it should be, as it takes 4 minutes to complete).
There are also USB oscilloscopes that are the genuine article, meaning they are true scopes and not just a modified sound card. Most are available at up to 100MHz bandwidth, and quite a few include an arbitrary waveform generator that can create (or re-create) almost any waveform desired. They use a PC for control and display, so (at least in theory) they are lower cost than complete instruments. This is not necessarily the case though, as comparing prices indicates that you usually get more for your money with a 'traditional' digital oscilloscope.
The range of functions is often greater though, because the PC can be used to provide more processing power than you get with a stand-alone instrument. Because all control is from the PC, the 'look and feel' is different from a normal scope, and functions are accessed using the keyboard or mouse instead of rotary controls and push buttons. You can't expect that they will all be fairly similar in terms of layout, because the entire user interface is software controlled. Some can be frustrating to use due to non-standard controls, and sometimes decidedly non-intuitive methods to modify the functions.
Scopes are available to handle input signals from a few millivolts to hundreds of volts (usually with HV attenuator probes), and most (all 'modern' units less than 50 years old) can handle signals down to DC. The most important specification (and the one that has the greatest influence on price) is the bandwidth. 10MHz used to be common for 'hobbyist' scopes, many were around 20MHz, and serious test gear extended to 50 - 100MHz or more. Now, it's common to find scopes that can handle over 1GHz, albeit at considerable cost.
Less than 20MHz is not worth the effort for most tasks, and >50MHz scopes are now both readily available and reasonably priced. Few people will need 1GHz or more, but if you are involved in RF (radio frequency) or high speed digital work the extra bandwidth is essential because so much of the RF spectrum is now above 100MHz, as are digital communication systems. The bandwidth refers to the frequency where the sensitivity has fallen by 3dB, so a trace that would occupy exactly 7 divisions at a low frequency will show 5 divisions at the maximum (-3dB) frequency ... and this is a hint as to how you can use a scope to measure the -3dB frequency of the equipment under test. If the level and screen position of the trace is set for 7 divisions at a mid frequency, the -3dB frequency is the point where the waveform occupies 5 divisions. (That's actually -2.92dB if you are picky, but it's usually close enough.)
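The 7-to-5 division trick is just a ratio expressed in decibels, and it's easy to verify (Python; the function name is mine, for illustration):

```python
import math

def level_change_db(divs, ref_divs):
    """Level change in dB when a trace that occupied ref_divs graticule
    divisions now occupies divs divisions (same volts/div setting)."""
    return 20 * math.log10(divs / ref_divs)

print(f"{level_change_db(5, 7):.2f} dB")  # ~-2.92 dB, close enough to -3 dB
```

Any pair of divisions works the same way - for example, a drop from 8 divisions to 4 is exactly -6.02dB.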
Above the rated maximum frequency, the signal doesn't 'magically' disappear, but it rolls off at ~6dB/ octave above the -3dB frequency. A 50MHz scope can usually still display a 100MHz signal, but not at the correct amplitude or waveform (which is modified by the frequency rolloff).
The conventions of oscilloscope controls are well established, and manufacturers use the same nomenclature and a familiar layout that has been in general use since the 1950s. Many instruments have additional controls, and some will be menu driven (for digital types). Scopes generally look complex to the uninitiated, but they are very logically laid out and it shouldn't take even a novice long to be able to display basic waveforms.
The exact control set depends on the instrument, but most oscilloscopes have a familiar 'look and feel' to the controls (Figure 4 above is a notable & rare exception). The speed of the vertical amplifier of the instrument is referred to as its bandwidth, and there is a timebase control that sets the sweep speed. This even applies to digital scopes that don't have a sweep as such, because the data are presented on a digital display.
Oscilloscopes are available with 2 to 4 signal channels, although there are (or were) some budget units that are single channel as seen in Figure 1 above. There are also units with more than 4 channels, but they are primarily 'logic analysers' rather than conventional oscilloscopes. Combined systems are also available, with 2 or 4 analogue channels and 16 or more logic channels. The difference is that an analogue input has variable sensitivity, from perhaps 2mV/ division up to 50V/ division (or more with specialised probes). The digital channels are typically designed for a maximum input level of 5V and have no (or minimal) variable gain.
The calibrated controls on oscilloscopes were standardised many years ago to use a 1-2-5 sequence. Vertical (signal) and horizontal (timebase) controls follow this sequence across their range. For example, you may have a vertical sensitivity control that follows the sequence of 5mV, 10mV, 20mV, 50mV, 100mV, 200mV ... (etc.). The timebase may have a sweep time of 10µs, 20µs, 50µs, 100µs, 200µs per division ... (etc.). Voltages and times are always per division, with each division being the size of each graticule marking. Most 'standard' scopes have 10-12 horizontal divisions and 8 vertical divisions, but newer 'wide screen' types have more horizontal divisions (18 for the Siglent shown above).
Note that all figures for sweep speed and sensitivity are per division, and not full screen. This has been standard for many years, and it's important that this is understood. By measuring the number of divisions (or part thereof) and using the settings details, you can measure the actual voltage and/ or periodic time of the waveform. For example, if a waveform occupies exactly 5 vertical divisions at 20mV/ div, the peak to peak voltage is 100mV (50mV peak, or 35.4mV RMS). If the waveform completes a full cycle in 5 divisions at 100µs/ div, the period is 500µs and the frequency is 1/period = 2kHz.
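The worked example above can be expressed as a small helper (Python; the function is illustrative, and the RMS figure assumes a sinewave):

```python
def scope_reading(v_divs, volts_per_div, t_divs, time_per_div):
    """Convert graticule divisions into peak-to-peak voltage, period and
    frequency. RMS is derived from the peak value, valid for sinewaves only."""
    vpp = v_divs * volts_per_div       # peak-to-peak voltage
    vpk = vpp / 2                      # peak (assumes a symmetrical waveform)
    vrms = vpk / 2 ** 0.5              # RMS for a sinewave
    period = t_divs * time_per_div     # time for one complete cycle
    freq = 1 / period
    return vpp, vpk, vrms, period, freq

# 5 vertical divisions at 20 mV/div; one cycle in 5 divisions at 100 us/div
vpp, vpk, vrms, period, freq = scope_reading(5, 20e-3, 5, 100e-6)
print(f"{vpp * 1e3:.0f} mV p-p, {vrms * 1e3:.1f} mV RMS, {freq:.0f} Hz")
```

Note that the RMS conversion only holds for sinewaves; for a square wave the RMS value equals the peak value, and other waveforms have their own crest factors.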
All oscilloscopes have a range of standardised controls. These are found on everything from the most basic hobbyist scopes right through to the most expensive lab equipment, and there are few exceptions. There used to be some extremely basic scopes that didn't have a calibrated vertical amplifier or timebase, but these are next to useless for any serious measurements. Very early scopes (from the 1930s) often lacked calibration, but were the only way that radar systems could be examined in any real detail. Calibration is now standard for even the most basic types. The standard features/ controls are ...
Most of the above are pretty much self-explanatory, but triggering is something that catches some people out. It's often necessary to set the triggering system to only act on a rising (or falling) part of the waveform, and in some cases the trigger level needs to be placed on a specific part of the waveform to get a stable display. Trigger systems may also offer TV (so the scope can lock onto a TV composite video waveform), mains (the local mains frequency), or LF / HF reject to stop the timebase from triggering on low or high frequency signals. Many scopes also feature a 'hold-off' control that inhibits triggering for an adjustable time after each sweep, which helps to obtain a stable display on complex waveforms.
+ +Depending on the oscilloscope, there will also be a number of additional controls. For digital scopes, the list is potentially enormous, so only the most common are described next. With digital scopes, the menu system(s) can be used to access everything from automated self-test routines to saving the displayed waveform to a flash drive or sending it to a printer. Some of the features may be used only rarely, and to assist the user, help screens are available for many (perhaps all) of the features offered.
The most common additional controls are shown below. As noted, some are only found on analogue (CRT based) oscilloscopes, and others are normally only found on digital scopes. There are also hybrid scopes - they aren't common; they have a CRT for the display, but allow either digital or analogue operation, depending on the requirements for the measurements being taken. These have all but disappeared, but were very expensive when they became available about 20 years ago (at the time of writing).
Most digital scopes are largely menu driven for functions that are over and above those shown above. One very useful additional feature is FFT (Fast Fourier Transform), allowing the user to see information in the frequency domain - an oscilloscope is intended to display time related information (the time domain). Before the advent of low-cost digital scopes, the only way to work in the frequency domain was to use a spectrum analyser - these are expensive, even today. Although the FFT function is useful, it doesn't replace a spectrum analyser for precision RF work, because the scope is not optimised for frequency domain measurements.
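To illustrate what the FFT function does, here is a minimal sketch using a naive DFT (real scopes use an optimised FFT, but the result is the same): time-domain samples are converted into frequency-domain amplitudes, and the dominant frequency bin falls where you'd expect.

```python
import cmath, math

def dft_magnitudes(samples, sample_rate):
    """Naive DFT (O(n^2)) - a real scope uses an FFT, but the result is the
    same: the amplitude of each frequency bin up to Nyquist."""
    n = len(samples)
    bins = []
    for k in range(n // 2):
        s = sum(samples[i] * cmath.exp(-2j * math.pi * k * i / n)
                for i in range(n))
        bins.append((k * sample_rate / n, abs(s) * 2 / n))
    return bins  # list of (frequency_Hz, amplitude)

# A 1kHz sinewave sampled at 8kHz: the 1kHz bin dominates the spectrum
sr = 8000
wave = [math.sin(2 * math.pi * 1000 * i / sr) for i in range(64)]
peak_freq = max(dft_magnitudes(wave, sr), key=lambda b: b[1])[0]
# peak_freq is 1000.0
```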
+ +Other common functions include the ability to invert one channel, sum (add) channels, or use the scope in 'XY' mode, where the timebase is switched out and the second channel is used to provide horizontal deflection. Many scopes also have provision for an external timebase. It's common that some of the features will never be used by many users, but the cost to include them is small, and if they are omitted there will be buyers who'll simply look elsewhere ... even if they won't use the feature!
+ +One feature that was common on analogue scopes was 'delayed sweep'. I've worked with many experienced service techs over the years who never figured out how to use delayed sweep (or why they would want to), but it was an extremely useful feature if you happened to be looking at waveforms with fast rise times - especially if the waveform was complex. By highlighting a small range of the horizontal display, it could be expanded by means of a second (much faster) timebase that only worked over a portion of the waveform.
+ +The delayed sweep isn't needed with a digital scope, because the capture can be stopped and the entire trace expanded so that extreme detail can be seen. Note that this only applies when the sampling rate is fast enough to capture the high speed event(s) you wish to examine.
+ + +The basic specifications that you will see may be along the following lines for a fairly typical oscilloscope ...
Parameter               Value
Vertical channels       2
Vertical sensitivity    2mV/div - 10V/div
Maximum input voltage   400V peak
Bandwidth               60MHz
Rise time               <7ns
Resolution              8 bit
Sampling rate           1GS/s (1,000M samples/second) - digital only
Timebase                10ns/div - 5s/div
Input impedance         1MΩ ±2% || 13pF ±3pF
Trigger source          Ch 1, Ch 2, External, TV, Line (mains frequency)
This looks comprehensive, but it isn't really enough for a potential buyer to see whether the scope will suit his/her needs. There are many other facilities that are usually available with digital scopes, some of which are very useful, and others less so. One critical part of any oscilloscope is its triggering ability. Triggering is used to synchronise the sweep to the waveform being measured so the trace is stable. Any scope that can't trigger reliably on common waveforms is next to useless, but fortunately there are very few that fall short. Better units will have had a great deal more time spent on development of the triggering circuitry to ensure a stable display with complex waveforms.
+ +Many scopes feature an external trigger (aka synchronising or 'sync') input. This can be very useful when trying to look at a signal that's buried in noise, or if there are regular (non-harmonic) interruptions to the signal. For example, if one is using a tone-burst generator, the use of external trigger is almost essential, with the timebase triggered from the tone burst generator's sync output. It's provided for exactly this purpose.
External triggering is also very handy if you are looking at the distortion residual from an amplifier. The scope is triggered from the signal generator, so harmonics, noise, and other disturbances don't cause false triggering, which would make the residual waveform very difficult to see clearly.
+ +The most common input impedance for vertical channels (signal) is 1MΩ in parallel with some small capacitance, typically between 15-25pF. The capacitance is mainly 'incidental', in that it's not primarily a physical capacitor, but is due to the natural capacitance of input BNC connectors, attenuators and amplifiers (plus the 'stray' capacitance of wiring or PCB traces). However, in some cases a small capacitor is added, because oscilloscopes are expected to have an input capacitance that falls within the range of 15-25pF. Too much or too little would mean that 3rd party attenuator probes would not equalise properly (this is covered in more detail below).
+ +In some cases, an optional 50Ω input impedance is provided, specifically for RF applications where 50 ohms is a very common impedance. This allows the scope to act as a terminating load so that input cables don't cause frequency response errors. See the article on Coaxial Cables for more on this subject if you are interested.
It's important to understand that an oscilloscope needs a wider bandwidth than expected if you wish to view pulse waveforms. A 50MHz scope will give a passable display for 10MHz pulse or rectangular waveforms where it can display up to the 5th harmonic (just), but at 50MHz it will show a waveform very different from that actually supplied - and you may be blissfully unaware of this. You may see this referred to as the "five times rule", and it even applies to sinewaves if an accurate amplitude measurement is required. Even with a 10MHz input sinewave, the level will be 0.2dB down with a 50MHz scope.
+ +
Figure 5 - 10MHz Waveform With 50MHz Oscilloscope
The above shows how a rectangular or pulse waveform will be distorted. There is clear evidence that the bandwidth isn't wide enough, and consequently the risetime isn't fast enough for the waveform to be displayed properly. The risetime of the input signal is well under 1ns, but the scope displays it as 7ns - that's a big discrepancy, even when the five times rule is used. For pulse waveforms, the scope needs to be at least 10 times faster than the highest frequency to be measured, and even then it will still distort the waveform.
+ +It's not until the scope is around 50 times faster than the waveform that it can display a reasonably accurate pulse waveform if the rise and fall times are particularly fast. It should now be obvious just how limited a low speed scope really is, and why I suggested earlier that a 10MHz scope is only just adequate for audio. Note that rise and fall times are always measured between 10% and 90% of the peak amplitude.
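The bandwidth and rise-time figures above are linked by the usual approximation tr ≈ 0.35 / bandwidth, and the amplitude error follows from a single-pole rolloff. A sketch (assuming a simple single-pole response, which real front ends only approximate):

```python
import math

def scope_rise_time(bandwidth_hz):
    """Approximate 10-90% rise time of a scope: tr ~ 0.35 / bandwidth."""
    return 0.35 / bandwidth_hz

def amplitude_error_db(f_signal, bandwidth_hz):
    """Level error for a sinewave at f_signal, assuming a single-pole
    rolloff with the -3dB point at the rated bandwidth."""
    ratio = f_signal / bandwidth_hz
    return 20 * math.log10(1 / math.sqrt(1 + ratio ** 2))

tr = scope_rise_time(50e6)            # 7ns for a 50MHz scope, as in the text
err = amplitude_error_db(10e6, 50e6)  # about -0.17dB (the '0.2dB down' figure)
```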
The basics of an oscilloscope are much the same whether it's analogue or digital. The internal workings are very different of course, but the end result is the same. The first thing in the signal chain is the vertical attenuator(s) and amplifier(s). Amplification is needed when the signal to be measured is too small to either deflect the beam or get a decent digital representation of the signal, and attenuation is necessary to allow higher voltages to be displayed. The vertical amplifiers and attenuators determine the bandwidth, and will usually have a response extending from DC to at least 10MHz, more commonly 50 to 100MHz, and up to several GHz for very high speed scopes. Early valve oscilloscopes were often AC only because it's difficult to make a DC coupled valve stage, especially one with high gain and low drift (time and temperature).
+ +Oscilloscopes (with the exception of some 'pseudo scopes' that are little better than a PC sound card) use BNC connectors for all inputs, and have done so since the late 1950s or early 1960s. See Connectors in the article about coaxial cables for more. Earlier scopes often used type 'N' connectors, but these were replaced when the BNC became available, as it's a much smaller connector but with no sacrifice in reliability. Scopes invariably use 50Ω BNC connectors, even though they have a 1MΩ input impedance. However, as noted elsewhere, some scopes offer a 50Ω input impedance as an option (usually switchable).
+ +The gain is switched using a 1-2-5 sequence, but most scopes also allow a fine adjustment which is uncalibrated. Some will display an indicator (such as 'UNCAL') to warn the user that the display is no longer calibrated, so accurate voltage readings aren't possible. This can be useful for some measurements where the absolute value is unimportant, but a relative reading gives you the information you need.
Most modern oscilloscopes provide at least two input channels (dual trace). This was also common with better analogue scopes, but other than a very few highly specialised units, there is actually only one electron beam. There are two options for dual trace analogue scopes - 'chopped' and 'alternate'. When the beam is chopped, it's divided into very small segments (the switching frequency is typically around 250kHz) as the timebase causes the spot to traverse the screen. One set of 'segments' is used to display the data from input #1, and the other set handles input #2. The beam is blanked as it switches from one to the other, so it looks like there are two completely independent traces. Each 'trace' can be repositioned on the screen without affecting the other.
+ +This trick only works at relatively low frequencies though, because as the timebase speed is increased, there's a finite limit to how quickly the beam can be moved from one trace to the other. That's where the 'alternate' setting comes in. One complete left to right sweep is for Channel #1, and the next is for channel #2. Again, the user sees two independent traces. If the timebase speed is reduced too far, you can see that the traces are indeed alternate. One trace is drawn across the screen, and when that completes, the second channel is displayed.
+ +There is no requirement for 'chopped' and 'alternate' modes for a digital scope because the signals are multiplexed by the ADC and any number of traces can be drawn. However, most low cost scopes (as well as some professional models) cannot provide the full claimed sample rate on both (or all) channels at once. The effective sampling rate is halved when two channels are in use, and halved again if 4 channels are active.
+ +
Figure 6 - Siglent Scope Showing Measurements
Some of the measurement capabilities of a digital scope are shown above. You can see the measurement panel on the right, and it shows that the waveform is 1.68V peak-peak, 320mV RMS, the minimum voltage detected (not useful in this case), and the period and frequency of the waveform. These are also not useful because it's a fragment of speech captured from the radio, but the scope tried to make sense of it anyway. The period of 5.8ms corresponds to the frequency displayed (172.4Hz). Note that the normal on-screen frequency measurement is quite different from that in the measurement panel. This is a clear indication that the reading can't be trusted (they are normally the same).
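The measurement panel's readings can be emulated from the captured sample buffer. A minimal sketch (my own illustration, not any scope's firmware): period is estimated from rising mid-level crossings, so - like the scope with the speech fragment above - it only behaves for clean repetitive signals.

```python
import math

def measure(samples, sample_rate):
    """Emulate a DSO measurement panel: Vpp, RMS and frequency from samples."""
    v_pp = max(samples) - min(samples)
    v_rms = math.sqrt(sum(v * v for v in samples) / len(samples))
    # Estimate the period from rising crossings of the mid level
    mid = (max(samples) + min(samples)) / 2
    crossings = [i for i in range(1, len(samples))
                 if samples[i - 1] < mid <= samples[i]]
    if len(crossings) < 2:
        freq = None          # can't determine - the scope just tries anyway
    else:
        period = (crossings[-1] - crossings[0]) / (len(crossings) - 1) / sample_rate
        freq = 1 / period
    return v_pp, v_rms, freq

# A clean 1kHz sinewave sampled at 100kS/s
sr = 100_000
wave = [math.sin(2 * math.pi * 1000 * i / sr) for i in range(1000)]
v_pp, v_rms, freq = measure(wave, sr)
# v_pp ~ 2.0, v_rms ~ 0.707, freq ~ 1000Hz
```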
+ +All (other than old valve based) oscilloscopes have provision to set the inputs for AC or DC coupling, with most also providing a ground setting. The latter isn't useful for looking at waveforms (because the input stage of the amplifier is grounded), but it can let you 'find' the trace if it's sent off-scale because the input voltage is too great. Note that the input BNC connector pin is not grounded, as that would create a short at the device being tested. Some (analogue) scopes even have a 'beam finder' that reduces the X and Y sensitivities to some low value that lets you see where the beam has been deflected to - it's not always obvious! When AC coupled, the low frequency response is usually between 1.5Hz and 7Hz (-3dB frequency), so measurements below 20Hz will have a significant amplitude error.
+ +Most digital scopes have an 'auto-set' feature - press the button, and the scope will set the gain and timebase to display the waveform so it nicely fills the screen. This can be especially useful for beginners, because using an oscilloscope is as much an art as a science. Someone who knows his/her instrument well will be able to set it up to get the exact display desired in moments, and an observer won't have time to see what was done because it all happens so quickly. A beginner or infrequent user may take several minutes to achieve the same result, but perhaps not knowing exactly why controls are set the way they end up.
+ +It's important to understand that an analogue oscilloscope does not use the same system as a TV or computer monitor CRT. These draw the image by continuously scanning the screen from the top left to the bottom right, and images are drawn by modulating the intensity of the electron beam. An oscilloscope draws one line from left to right, and that line is deflected vertically by the input signal.
+ + +Because the internals of a digital scope are pretty much inscrutable, it makes more sense to examine the way an analogue scope works. A digital scope is designed to emulate the analogue functions, but most of the work is done by one or more ASICs (application specific ICs) and/ or microcontrollers and/ or microprocessors, and functions are controlled by software.
+ +In contrast, analogue scopes all work in a similar way, and are fairly traditional in terms of circuitry. There are as many different circuits as there are oscilloscopes, but the basic ideas have been with us since the days of valves (vacuum tubes). Transistors and ICs made scopes far more reliable, provided higher speed, and made it easy to add functions that would have been too complex with valves.
+ +
Figure 7 - Block Diagram Of Basic Oscilloscope
A simplified block diagram is shown above, reduced to the minimum for ease of understanding. The various blocks are shown below, starting from the output - the cathode ray tube. Power supplies are not included, and a typical scope may have 5 or 6 separate voltages, ranging from high voltage supplies (2kV to 8kV or more, positive and/ or negative), medium voltage supplies (200V or so), plus the voltages for analogue sections (perhaps ±8 to 15V), and often a +5V supply for logic ICs that are commonly found in triggering circuits and beam switching (for dual beam scopes).
+ +The above only shows a single vertical (Y) channel, but analogue scopes can have from 2 to 4 channels. Adding channels makes the overall system much more complex, because there has to be provision to use the same electron beam to display two (or more) channels, and the trigger circuitry has to be able to be switched from one channel to the other. Most scopes only allow a single trigger source. Digital scopes may have additional inputs (16 is common) for logic analysis. When provided, these usually have a limited voltage range with minimal controls. Further discussion of logic analysers is outside the scope of this article.
Normally, we'd start at the input, but in this case it's easier to start from the output - the CRT itself. The CRT is a (large) vacuum tube, and is vaguely similar to the tubes used in TV sets. However, deflection is not magnetic as it is (was) with TV, but is electrostatic. An electron beam is generated by the electron 'gun' at the far end of the tube, passes through accelerating and focusing electrodes, and then through the gaps between the deflection plates. A negative voltage on a deflection plate repels the electrons in the beam, and a positive voltage attracts them. This allows the circuits to deflect the beam up and down, left and right.
+ +
Figure 8 - Cathode Ray Tube Basics
The essential parts of a CRT are shown above. Focussing, astigmatism and other elements are traditionally shown as grids (as found in a normal valve), but in reality they are often specially shaped plates or sub-assemblies. Most voltages are not shown because they vary widely depending on the CRT itself, and voltages on the various elements are variable to adjust the characteristics of the spot. Oscilloscope tubes are much longer than a TV tube (with the same faceplate size) because they use electrostatic deflection, which is less effective than magnetic deflection, but much more linear. The long tube also means that the distance from the centre to the extremities of the tube face changes very little, ensuring good focus at all points on the tube face.
+ +In many of the better analogue scopes, the graticule is etched into the inside of the glass faceplate so there is no parallax error. The phosphor coating on the inside of the faceplate fluoresces when struck by the electron beam, and this provides the visible trace. The trace intensity is varied by changing the beam current. By modulating the grid that controls beam current, the trace can be turned off during the retrace (when the spot returns to the left from the right of the screen). Intensity modulation can also be applied by the Z-axis if provided.
+ +The power supply for a CRT based scope requires multiple voltages. The acceleration potential (negative) is applied to the cathode, although some tubes include an additional acceleration electrode that carries a high positive voltage. This varies, but 8kV or so is common on many Tektronix scopes. Another feature that you will see on most analogue scopes is a 'trace rotation' control. The earth's magnetic field affects the trace, and the rotation control allows it to be repositioned so it's perfectly in line with the graticule. Some cheap scopes (such as the one shown in Figure 1) rotate the tube itself. Better CRT scopes have a magnetic screen around the tube, generally MuMetal, although it's likely to be thin steel in cheaper versions. This minimises interference to the beam's deflection from nearby transformers in other equipment.
The graticule is often greatly underestimated as a tool for measurements. It is there so that the essential characteristics of a waveform can be determined. Since each vertical division corresponds to a known voltage and each horizontal division is a known time period, the periodic time of a waveform can be determined easily, and frequency is simply 1/time. For example, a waveform that completes a cycle in 1.2ms has a frequency of 833.3Hz. When read from an oscilloscope, voltages are commonly stated as peak-to-peak, because that's what is most easily measured from the graticule. This is often all you need, and in some cases is exactly what you need. To see that an opamp preamp (for example) can provide 25V p-p indicates that it can drive any amplifier known, but if the output swings to (say) +13V and -2V at the onset of clipping, this should immediately raise an alarm - something is clearly not right!
Figure 9 - Oscilloscope Graticule
The above is not from an actual scope - it's a composite image put together to show everything clearly. You will notice that the centre lines (both vertical and horizontal) have additional 'tick' marks at 0.2 division intervals. These make it easier to estimate the voltage or time being measured. There are also 0, 10, 90 and 100% indicators that are used to measure the rise or fall time of a pulse waveform. The peak values are aligned at the 0 and 100% points, and the risetime is measured between 10% and 90%. Not all scopes have these indicators. They aren't usually provided on digital scopes because they can measure the rise and fall times for you - albeit usually only by using the menus.
+ +In the above drawing, you would adjust the horizontal position control so the 10% point on the waveform was aligned with the central vertical line (as shown in light green). The rise time can then be measured by reading the time, based on the timebase setting (e.g. 5µs/ div) and the number of horizontal divisions needed for the rising edge to go from 10% to 90%. In the case shown, it's a little over 0.8 of one division, so we can estimate around 4.2µs risetime. Not an exact science, but greater accuracy is easily obtained by increasing the timebase speed to 2µs/ div. As noted, digital scopes can measure the rise and fall times accurately, by accessing the appropriate menu(s).
+ +The waveform completes a full cycle in 4 divisions (20µs in this example), so the frequency is 1/20µs = 50kHz. The amplitude is 5 divisions, so you can work out the voltage by referring to the vertical sensitivity. If it's 0.5V/ div, the amplitude is 2.5V peak-to-peak. Note that because the waveform is not a sinewave, you can't easily determine the RMS voltage (it's just over 2V RMS), although most digital scopes can calculate that for you. A single display tells you more about the waveform than a barrage of other test instruments.
+ +The scope graticule is the key to taking measurements. Because most scopes don't qualify as 'high precision' instruments, and measurements are based on what you can see on the screen, expecting better than 1-2% accuracy is unwise, and some scopes don't offer that degree of accuracy anyway. However, this in no way detracts from the usefulness of the measurement. You don't use a scope for its precision, you use it because it shows you what the waveform looks like, while still providing the details of the amplitude and speed of the viewed signal. This is what an oscilloscope is for, and no other instrument can do that.
The next stages of interest are the horizontal (X) and vertical (Y) amplifiers. These generally have a fairly large voltage swing, which depends on the tube itself. The static (spot centred on the screen) voltage will normally be somewhere between 50V and 200V, and the peak to peak amplitude will typically range from ±50V up to ±150V or more. As one plate from either axis is made more positive, the other is made more negative by the same amount. The deflection amplifiers need to have a low output impedance and be capable of high peak current, because the deflection plates represent a capacitive load. This becomes more critical with wide bandwidth scopes, because they have to deflect the beam faster.
+ +
Figure 10 - Deflection Amplifier Example (Y Amplifier Shown)
The above shows a (highly) simplified Y-axis amplifier [ 3 ], and that for the X-axis will be similar, but will use higher voltages. The screen is wider than it is high, so more voltage is needed to get the extra voltage swing and deflection. The requirements are not easy to achieve. The amplifier needs high linearity, extremely good high frequency response, and must be able to drive the capacitive load of the deflection plates. HF correction circuits are shown as an example - in reality they are more complex. Both the input and output of the amplifier are balanced, although the circuit will convert an unbalanced input to a balanced output due to its design. The stage gain is about 50, and you may notice that the circuit does not use global feedback, so the transistors must be matched to get stable performance. There is a 500 ohm thermistor to correct for thermal variations.
+ +There is one rather important part in the above, simply marked 'Delay Line'. This is used to delay the signal display for just long enough to ensure that the user can see the 'event' that triggered the sweep. Without the delay, the sweep will start but the leading edge (for example) of the pulse (or other signal) that caused the trigger could never be displayed. The delay line is typically a length of coaxial cable, coiled up and stashed within the chassis. The normal delay time is around 100ns - very short, but long enough to ensure that the edge 'event' that initiated the sweep can be observed. A delay line is only used on the vertical axis.
+ + +The timebase and trigger circuits are shown as a block diagram. In a 'real' oscilloscope, the circuitry is surprisingly complex, because the sync circuits are such a key part of how it works, and that makes it hard to figure out what is going on. The basic concept is straightforward - we only need a linear ramp, a reset circuit, and a blanking output that turns off the electron beam during the retrace period. However, the overall operation of the timebase is complicated by the sync circuits which form such an integral part of the total.
+ +
Figure 11 - Triggering & Horizontal Timebase
A basic linear voltage ramp is easy to achieve by any number of means, but a scope requires the sync input to be able to produce a stable display. There is always a 'hold-off' period in the ramp waveform, during which it can accept a sync pulse to start the sweep. If no pulse is received or it is only received during the sweep period (where it is rejected), the trace will 'free run', meaning that the display is not stable unless the input signal is at the 'right' frequency. With a free-running display, it will only be stable when the input is at an exact multiple of the timebase period. Failure to trigger is often seen as a waveform moving across the screen - in either direction.
+ +In the above, the trigger pulse is produced at the positive-going transition. The free-running and triggered timebase waveforms are shown, and the hold-off period is essential to allow the trace to be synchronised. The sweep can only be initiated while the electron beam is at the extreme left of the screen, and it's usual for the beam to be blanked (cut off) during the retrace and hold-off periods. The beam is turned on when a sweep is initiated, and in the example shown, the screen will show 2½ cycles of the input waveform.
+ +A faster sweep speed will show fewer cycles (or perhaps only a part of the waveform), and a slower sweep speed will show more. Trigger pulses that occur (or may occur) during a sweep are either ignored or suppressed. From the above, you can also see why the delay line is used in the vertical amplifier. It always takes a finite time for the trigger pulse to initiate the sweep, and by delaying the displayed waveform by a small amount, the start of the waveform can be seen. This is especially important when looking at pulse waveforms.
+ +The hold-off period is important, and in many scopes it can be increased from the normal period (which depends on the manufacturer and their philosophy). A hold-off control is provided on many scopes to allow a longer period where the timebase waits for a valid trigger event, and this can improve the trace stability with difficult waveforms. Tone burst signals can be especially difficult unless external triggering is used, but adjusting the hold-off period can often make a big difference.
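The trigger-plus-hold-off behaviour described above can be sketched in a few lines: accept a rising-edge crossing of the trigger level, then ignore all further crossings until the hold-off period has expired. This is illustrative only - real trigger circuits are analogue and far more subtle.

```python
def find_triggers(samples, level, holdoff):
    """Return sample indices where a rising edge crosses 'level', rejecting
    any crossing within 'holdoff' samples of the previous accepted trigger."""
    triggers = []
    armed_at = 0                      # index at which the trigger re-arms
    for i in range(1, len(samples)):
        if i >= armed_at and samples[i - 1] < level <= samples[i]:
            triggers.append(i)
            armed_at = i + holdoff    # hold off until well after this sweep
    return triggers

# A bursty signal with edges at 10, 12, 50 and 52: a hold-off of 20 samples
# rejects the 'echo' edges, so every sweep starts at the same point
sig = [0.0] * 100
for j in (10, 12, 50, 52):
    sig[j] = 1.0
trig = find_triggers(sig, 0.5, 20)
# trig == [10, 50]
```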
+ +The repetition rate of the sweep circuits depends on the speed of the oscilloscope. A scope that has a bandwidth of 100MHz (for example) will be expected to show no more than one cycle of the waveform (per division) at the maximum frequency. That means that the sweep rate may be up to 10MHz for a 100MHz scope with 10 horizontal divisions. That is achieved with a time scale of 10ns/ division. The actual sweep waveform may be at some lower frequency, as determined by the triggering and hold-off circuits.
+ +Yes, this is complex, and it's not easy to describe in simple terms. For many years, the triggering circuits were one of the main differentiators between the major scope manufacturers, and they are no less important today.
+ + +A simple vertical amplifier is shown next. This is not meant to represent any known scope, but it has elements taken from a couple of different circuits. The input switching allows the input to be set for AC, DC or grounded. It's important that only the input to the scope's internal circuit is grounded, or it would cause a short on the equipment being tested. The first attenuator is high impedance, and consists of a DC attenuator (using the resistors) and an AC attenuator (using the capacitors). When both are combined, the capacitive divider ensures flat response up to many MHz, which would not be the case if only resistors were used. The same approach is shown in Project 16 (Audio Millivoltmeter). The capacitors prevent unavoidable stray capacitance from reducing the frequency response. In many cases, the smaller values are either trimmer caps, or a fixed cap with a trimmer in parallel.
+ +
Figure 12 - Vertical Input Amplifier & Attenuators
The two attenuators shown are actually joined together on a single rotary switch, so that they follow the 1-2-5 sequence from the highest to the lowest sensitivity. The attenuator switching is usually fairly complex, because it's expected to give seamless operation over the full range. The type of gain stage depends on the scope maker, and it may be an integrated video amplifier IC as shown, or fully discrete. There may be a single gain stage, or it can be split across two or more separate stages.
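The compensation requirement for the attenuators described above is that the RC products match (R1·C1 = R2·C2): when they do, the resistive and capacitive dividers give the same ratio at all frequencies. A sketch with hypothetical values (not taken from any particular scope):

```python
def compensation_cap(r1, r2, c2):
    """C1 (across R1) that satisfies R1*C1 == R2*C2, making the divider
    ratio frequency independent."""
    return r2 * c2 / r1

# Hypothetical 10:1 divider: R1 = 900k, R2 = 100k, 50pF across R2
c1 = compensation_cap(900e3, 100e3, 50e-12)   # ~5.6pF - a small trimmer value
ratio = 100e3 / (900e3 + 100e3)               # resistive ratio, 0.1 (10:1)
```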
+ +The main gain stage is shown as an integrated video amplifier, as these typically have a bandwidth of up to 200MHz (50MHz with flat response is more likely), and are (or were) a simpler and cheaper alternative to a dedicated discrete design. Performance would be adequate for a low cost scope, but the major manufacturers are far more likely to use a discrete design because all parameters can be optimised. The extra cost is easily justified due to the higher price commanded by 'brand name' equipment.
+ +![]() | Note: Most dual-trace scopes have the provision to invert one channel, then add the channel signals (to obtain a difference trace). This can be + useful to see if something changes the signal in any way, as there will only be a residual waveform is there is a difference. However, you can't increase the gain of + the two channels to get greater resolution, because the scope's vertical amplifiers will clip. The resulting waveform is created by the scope, and not the external circuits. + |
Noise has to be considered, but nearly all scopes show some visible noise on their most sensitive ranges. This is a surprisingly difficult exercise, because scopes have a very wide bandwidth, yet are expected to display signals of only a few millivolts. They are also required to have a high input impedance (1MΩ), and it's extremely difficult to have high gain, high impedance and low noise combined. Ultimately, the thermal noise from the input resistors will dominate at the most sensitive setting, and that can't be eliminated without breaking the laws of physics.
As an example, the noise from a 1MΩ resistance over a 50MHz bandwidth and room temperature (25°C) is about 0.91mV (907µV), and that's with no amplification at all. See Noise In Audio Amplifiers for details on how this can be calculated for any resistance, bandwidth or temperature.
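The figure quoted can be checked directly with the Johnson noise formula v = √(4kTRB); a sketch:

```python
import math

def thermal_noise_volts(resistance, bandwidth, temp_c=25.0):
    """RMS thermal (Johnson) noise voltage: v = sqrt(4 * k * T * R * B)."""
    k = 1.380649e-23              # Boltzmann constant (J/K)
    t_kelvin = temp_c + 273.15
    return math.sqrt(4 * k * t_kelvin * resistance * bandwidth)

v_noise = thermal_noise_volts(1e6, 50e6)
# ~907uV for 1 megohm over a 50MHz bandwidth at 25 deg C, matching the text
```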
+ + +Digital storage scopes (DSOs) introduce a second parameter that's just as important as bandwidth - sampling rate. If the signal is not sampled enough times in each display cycle, there are insufficient data points to be able to recreate the waveform on the screen. Unlike audio where the sampling rate only needs to be just over double the highest frequency, a DSO needs as many samples as possible or the waveform will not be displayed properly.
Real-time sampling rates for modern digital scopes range from 500MS/s up to several GS/s - i.e. from 500 million samples per second to several billion samples per second. In many cases, the maker will specify an 'effective' sampling rate that's much greater than the actual (real-time) value. This can only work with repetitive signals; by capturing several complete screens' worth of data, the waveform can be reconstructed so it's an accurate representation of the original. Glitches or other transient events may either be missed, or represented inaccurately.
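How many samples actually land on screen is simple arithmetic, and shows why sample rate matters as much as bandwidth. A sketch (division counts and decimation behaviour vary by model):

```python
def samples_per_division(sample_rate, secs_per_div):
    """Number of real-time samples that land in one horizontal division."""
    return sample_rate * secs_per_div

# A 1GS/s scope at 10ns/div captures only ~10 points per division -
# barely enough to draw a 100MHz signal at one cycle per division.
fast = samples_per_division(1e9, 10e-9)
# At 1ms/div the same converter produces a million points per division,
# which the scope decimates for display.
slow = samples_per_division(1e9, 1e-3)
```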
The inner workings of DSOs are (for the most part) completely obscure. Even if you have a complete schematic (unlikely), it won't tell you a great deal because nearly everything is done using high speed ADCs (analogue to digital converters), ASICs (application specific ICs), FPGAs (field programmable gate arrays) and microprocessors. The analogue input circuits (channels and trigger) will use some traditional techniques to provide the 1MΩ input impedance and over-voltage protection, but they may or may not provide any gain, leaving that to the ADCs. Many of the switching functions are performed by relays, because they (like everything else) are controlled by software.
The inscrutable nature of the circuitry means that not even a block diagram is particularly helpful. As noted earlier, the software (or firmware) is designed to emulate the 'look and feel' of an analogue scope. What appear to be conventional pots (for trace positioning etc.) are often rotary encoders, and their outputs are handled by the digital electronics so that the function performed is as expected. In some cases, a single rotary control can be used for multiple purposes.
The control directly above the CH1, CH2 (etc.) buttons on the Rigol is a case in point. It's used to alter the brightness, but is also used to scroll through menu selections and adjust the cursors - the manual calls it a 'Multi Function Knob'. It's used to set the frequency of the internal high and low pass filters as well (these can be very useful, and are common in digital scopes). The Siglent scope has a similar control, which is called a 'universal knob' (this seems a bit adventurous - it is limited to the scope, not the 'universe' as such.)
Most of the push-buttons are not latching types, because the microprocessor detects when a button is pressed and makes the appropriate decision (active/ inactive). The button states may be displayed on the screen, or back-illuminated with a LED. The rotary controls for input sensitivity and timebase are rotary encoders with detents, so they feel like switches. There are usually no settings shown on the front panel - they are shown on the screen instead. In some cases, the scope will 'beep' at you if you keep turning a control once the setting has reached its maximum or minimum limit. Usefully, most of the settings are retained in non-volatile memory, so when you next turn on the scope, it will be set the way you left it.
So, while you may have hoped for a block diagram here, there isn't one because there's simply no point. Even the Rigol and Siglent service manuals don't include a block diagram (let alone schematics), because the PCBs are made using SMD (surface mount device) parts almost exclusively, and the intention is that if a board fails it will be replaced, not repaired. This is common with most digital scopes (as well as many other digital devices), because repair requires access to the specific parts (which may be proprietary) and SMD rework facilities. This is beyond most hobbyists and even many (most?) professional organisations.
The facilities provided on most digital scopes are extensive. They all have the ability to save the waveform as an image (.BMP - bitmap is common) or data (CSV - comma separated values) file, and waveforms can be stored internally to generate pass/ fail tests. Most can also be connected to a PC which can control the scope via USB, they have internal digital filters so interfering signals can be removed (or enhanced), and usually have cursors that can be placed on any part of a waveform to measure instantaneous values. The list isn't quite endless, but it's extensive, and naturally varies with the brand and model of scope. The user guide or manual needs to be read thoroughly if you are to get to know the full scope of what's offered.
Although scope probes are (or appear to be) simple, they cause more problems than almost any other aspect of oscilloscope usage. In some cases you can use a simple x1 probe (straight through), but these can cause serious issues for the device under test (DUT). Even the lowest capacitance cable will add at least 100pF to the 20pF input capacitance of the scope, and because the lead is coaxial it can act as an unterminated transmission line at very high frequencies. This is why some scopes provide a 50Ω input for RF work.
The capacitive load imposed by a simple x1 probe can cause some circuits to oscillate, stop RF oscillators from working, or otherwise cause circuit malfunctions or general misbehaviour. Not the least of these is severely reduced frequency response in high impedance circuits, caused by the cable capacitance. In short, x1 probes are unsuitable for most standard measurements, where the frequency is greater than a few kHz and/ or the impedance is more than 10kΩ or so.
For this reason, the x10 probe is one of the most common and popular oscilloscope accessories. In many cases they are indispensable. The drawing below shows the essential parts.
Figure 13 - x10 Attenuator Probe Details
It's all deceptively simple, but all x10 probes include a miniature trimmer capacitor that's intended to be adjusted by the user to ensure correct high frequency operation. All scopes have a 'Probe Adjust' signal available on the front panel. The output is a fast risetime squarewave, usually between 1kHz and 2kHz, and providing around 1-2V peak-to-peak. The x10 probe is connected to the terminals (signal and ground), and the trimmer cap carefully adjusted until the leading edge of the waveform is displayed correctly (as shown in red).
Failure to adjust the probe at regular intervals will result in a display that is inaccurate at high frequencies and/ or impedances. There are also x100 probes, some of which are intended for higher voltages than standard or x10 probes. The limiting factors are the dielectric strength of the trimmer cap and the voltage withstand of the 9Meg (or 99Meg for a x100 probe) resistor.
There are some x10 probes that have the compensation cap in a 'pod' at the oscilloscope end of the lead. These are much less common, but they may have an advantage for probes designed for higher than normal voltages. A cap is still used across the 9Meg resistor, but being a fixed value it's smaller than a trimmer and can use a high voltage dielectric. The process of compensation is the same, but the cap across the probe's resistor must be a little larger than normal. That means that the probe itself is always over-compensated, and the response is pulled back into line by adding more capacitance at the scope end of the cable.
The topic of probes is the subject of complete articles, some of which can be found online. There is also some disagreement about the terms 'under-compensated' and 'over-compensated', with some material using the terms in the opposite way to that shown above. The terms don't matter, provided you understand the concept and compensate your probe(s) properly. It's also worth pointing out that the act of compensation ensures that the frequency response is linear up to the limits of your scope. Although it's done with a relatively low frequency squarewave, the response extends to many MHz. When the squarewave is reproduced perfectly, your probe will be flat to 100MHz or more.
While it is not immediately apparent, the circuit shown above has a turnover frequency of around 1-2kHz. Beyond 10kHz, the effect of a mis-adjusted compensation cap is that all higher frequencies are either boosted or attenuated by around 2-2.5dB. This is probably counter-intuitive, but if you examine the capacitive divider created by Cc (compensation cap) and the combined capacitance of the scope and cable, you get a simple voltage divider that is frequency independent. Compensation simply ensures that the resistive and capacitive voltage dividers are perfectly matched. This is why all oscilloscopes use a probe adjustment frequency of 1-2kHz.
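The matching condition can be sketched numerically. The values below are assumptions drawn from the text (9Meg probe resistor, 1MΩ scope input, and roughly 120pF of cable plus scope capacitance), not a specification for any particular probe:

```python
def compensation_cap_pf(r_probe_ohms=9e6, r_scope_ohms=1e6, c_load_pf=120.0):
    """Flat response requires the two dividers to match: Rp * Cc = Rs * Cload.
    Returns the required compensation capacitance in pF."""
    return r_scope_ohms * c_load_pf / r_probe_ohms

def divider_ratio(r_probe_ohms=9e6, r_scope_ohms=1e6):
    """Resistive (DC) attenuation of the probe - 1/10 for a x10 probe."""
    return r_scope_ohms / (r_probe_ohms + r_scope_ohms)

# With the assumed 120 pF load, the trimmer ends up near 13.3 pF,
# which is why x10 probe trimmers are small-value parts.
cc = compensation_cap_pf()
```

When the two products are equal, the attenuation is 10:1 at every frequency, which is exactly what the squarewave adjustment procedure verifies.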
Standard x10 probes (by far the most useful for general work) range from around $20 for basic (and marginal quality) types that you'll find on ebay and the like, up to $10,000 or more for high speed name brand types (and no, that is not a misprint). When you buy a scope, it will usually come with probes, and the cost of them has to be considered when you are comparing different products. Few of us need (or can afford) to spend $10k for a single probe, but it's unwise to imagine that you'll get high quality and durability if you only pay $20. The middle ground ($30-$75) should get you something reasonable, but you must verify that you aren't simply paying 3 times the price for a cheap Chinese version that you can get elsewhere for $20.
There is a wide range of specialised probes available, but some come with very scary price tags. Current probes are a case in point, with the cheapest being around $250, and ranging up to $5,000 and more. If you need to monitor current at mains frequencies (which is very useful), then it is far cheaper to build a current monitor such as those shown in Project 139 or Project 139a. Similar techniques can be used for higher frequencies, but it's generally not as convenient as a dedicated current probe. As always, you have to balance the need against the cost.
Another useful (but expensive) probe type is a differential probe. These are isolated, and don't need the standard ground clip. Some are designed for very high voltage use (allowing isolation to mains voltage standards). This is a case where the need really must exist, because the cheapest is around $380 and most are a great deal more (over $6k is not unusual). The cost is dependent on the speed, isolation voltage and accuracy, and this is not something that you buy on price, because your life may depend on it. Most differential probes are battery operated because they are active (using ICs, transistors, optocouplers, etc.). Nearly all other probes are passive, and do not require power.
In some respects, this topic is now a moot point. A quick search will reveal that almost no-one makes new analogue scopes, so the main way you'll get one is to buy it second hand. There is a small number of new models available (at the time of writing), but most are more expensive than much faster digital models, so it's hard to justify the extra expense. The days of cheap, basic analogue scopes are well past, and second hand is always a risk - especially if you aren't able to make repairs as needed. The chances of getting a replacement CRT are probably close to zero, other than for very expensive 'name brand' models.
Despite the many advantages of digital instruments, they have one major trap for the unwary - sampling. If you have a scope set for the wrong timebase (sweep speed), an analogue CRO will just show a mess, but a digital scope can show a waveform that looks as if it may be what you expect. The only trouble is that if the signal frequency is greater than half the sampling rate (the Nyquist frequency), you get a phenomenon called aliasing, and the waveform you see is not the real thing - it's been created because the timebase setting is wrong.
Figure 14 - 900kHz Waveform Showing Aliasing
The above used a 900kHz input, but with the timebase set for 50ms/ division. The display should be a solid block of colour, but it's not - it shows a sinewave. One frequency readout says 899.999kHz (which is correct), but the measurement panel claims the frequency is 7.10Hz (which is quite obviously incorrect). This is a clear display of the problem, but it isn't something that you would normally do by accident unless you deliberately set up the scope incorrectly (as I did for the waveform shown). However, if you are taking measurements of RF circuits that incorporate lower frequencies (such as audio), you can easily be tricked unless you are careful.
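The apparent (aliased) frequency is easy to predict when the signal and sample rates are known. The scope's effective sample rate at a 50ms/ division setting isn't stated above, so the sketch below uses round numbers purely for illustration:

```python
def alias_frequency(f_signal_hz, f_sample_hz):
    """Apparent frequency after sampling: the signal 'folds back' to the
    distance from the nearest integer multiple of the sample rate
    (result is between 0 and fs/2)."""
    n = round(f_signal_hz / f_sample_hz)
    return abs(f_signal_hz - n * f_sample_hz)

# A 1 kHz signal sampled at only 900 S/s shows up as a convincing
# (but entirely false) 100 Hz 'waveform'.
apparent = alias_frequency(1000.0, 900.0)

# Signals below the Nyquist frequency are unaffected.
ok = alias_frequency(400.0, 1000.0)
```

Any signal close to a multiple of the sample rate aliases to a very low frequency, which is how an RF signal can masquerade as 'audio' on the screen.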
An analogue scope does not have this problem - it will generally show either a jumbled waveform that can't be synchronised, or a fairly solid 'block' of signal with no discernible detail. What you see depends greatly on the waveform itself. Analogue scopes also show a variable intensity depending on the speed of the electron beam which creates a moving spot on the screen. If the spot is moving very quickly, its intensity is reduced because it doesn't have enough time to excite the phosphors on the screen itself. For example, a very fast squarewave will often show horizontal dashed lines, separated by the amplitude, but with an almost invisible rise and fall time.
Should the timebase be too fast, you may see an almost horizontal line which is just the part of the waveform that can be displayed. With an analogue scope it will probably be very faint, but most digital scopes do not have variable intensity, so show everything that can be displayed at the same intensity. This is not always desirable, but at the time of writing, variable intensity digital scopes are much more expensive than their fixed intensity brethren. These are often referred to as DPOs (digital phosphor oscilloscopes), as opposed to DSOs (digital storage oscilloscopes) [ 5 ]. You will also see references to 'mixed signal' oscilloscopes, which usually combine 2 full function scope channels and 16 digital (logic analyser) channels.
As a general rule, analogue scopes are easier to use and faster to set up for a waveform than digital types. Part of this is due to the fact that there is no aliasing, but they have a simpler control-set, with fewer options. Many service techs prefer analogue because they can have a visible trace with only a few adjustments, and the trace is unambiguous. This is especially true when you consider all of the menu driven options on digital scopes. Most require the menu system to change from AC to DC coupling, or even to change the trigger polarity. These are all front panel controls on analogue scopes because they don't normally have a menu - all controls are instantly available.
While the options are limited, for general work they provide everything needed for most servicing tasks, and the 'bells and whistles' of digital scopes are not usually needed. There are exceptions of course, but they are (perhaps surprisingly) few and far between. Many of the 'old school' service techs are so used to using their analogue scopes that they find digital versions somewhat tedious or even annoying for many standard tasks.
One of the most useful things about a digital scope is the ability to capture a single event, then stop. This allows waveforms or transients to be captured (for example), and examined in detail at your leisure, or posted as images in a web page as done here. While analogue storage scopes were not uncommon, they were very expensive, and the storage didn't last forever. The storage function was done within the CRT itself, and allowed the trace to be maintained (stored) for several minutes. Polaroid cameras were often added (with customised hoods to attach to the scope itself) for permanent storage. A digital scope can retain the waveform for as long as you like - many even allow the trace to be saved in internal non-volatile memory so it can be kept forever (or for as long as you have the scope.)
One other function of digital scopes also deserves a mention - averaging. This allows you to take a reading of a noisy signal, and by means of the averaging function, the noise (which is random) will disappear or be greatly reduced. This makes it possible to measure a waveform that may otherwise be buried in noise. You almost always need to use external triggering to be able to obtain a stable display when the averaging function is used.
It is hoped that this article has helped shed a little light on the workings and use of oscilloscopes. Despite the misgivings of many hobbyists, a scope is (IMO) an indispensable piece of test gear. There is nothing else that can tell you as much about how a circuit is functioning, or what it's doing wrong. A scope certainly doesn't take away the need for more traditional test gear, and you still need your multimeters, signal generator and other tools. Modern digital scopes can eliminate the need for a frequency counter unless extreme precision is required, and the ability to provide true RMS AC voltage measurements is also very handy.
Whether you get a stand alone instrument or a USB scope that's driven from a PC depends on your needs and budget. There are a few things that USB scopes do very well, but they are usually harder to 'drive' than a conventional instrument. Since decent versions (having at least 50MHz bandwidth) are usually just as expensive as (or more expensive than) a stand-alone scope, it may be hard to justify the extra cost unless you need the PC's processing power for analysis.
Before you commit to any of the available offerings, it's a good idea to do a search for the brand(s) and model(s) you are considering. People the world over have contributed reviews and forum posts that may alert you to any issues that various scopes may have. Be careful though, because not all reviews are by people who know what they are talking about.
Regardless of the scope you have (or intend to get), don't forget the probes. I often use simple coax and alligator clips when working with low frequency/ low impedance circuits, but you really need to have a set of x10 probes. Some are available with a x1/ x10 switch, but in general a fixed x10 probe is a better option. Forgetting to set the switch or make note of whether it's set for x1 or x10 can really confuse your measurements, and the x1 setting can annoy some circuits which misbehave due to the capacitive loading. Digital scopes let you specify that you are using a x10 probe, so all measurements are scaled to the correct voltage range.
Don't expect that you can buy a scope and instantly make sense of it and what it does. It takes time to acquaint yourself with the controls and to understand the waveforms you are looking at. Never imagine that it's not necessary to read the manual (even if you have used scopes all your life), because there are functions available now that were unheard of only a few years ago. You have to be prepared to look at different waveforms and work out what you can do with them using the scope's inbuilt maths (aka 'math') functions - these provide capabilities that can be extremely useful, even for fairly basic measurements.
I got my first oscilloscope (or CRO as we knew them at the time) when I first started to become serious about electronics at around 17 or 18 years old. I have had one ever since, and have literally never been more than 5 minutes away from one if I needed it. Most of the projects shown on the ESP site would have been much harder to perfect without a scope, and some would simply not have been possible at all. In all cases, the use of an oscilloscope gives you information that you cannot get any other way.
Elliott Sound Products | Voltage Protection
Many electronic circuits are fairly low-cost, and the failure of a regulator may cause the supply voltage to increase to the point where some damage is experienced. A few opamps and capacitors might fail, but there's no damage that will cost the user a small fortune to fix. Others are very sensitive (and expensive), and they will be damaged or destroyed if the supply voltage increases even slightly. Logic circuits are among those at risk, with 5V logic ICs pretty much guaranteed to fail if the voltage exceeds 7V (their absolute maximum voltage rating). There are numerous ICs available that are designed specifically for the job, but like so many specialised ICs made now, there may be no replacement available only a couple of years after the product is manufactured.
This 'planned obsolescence' has become a major problem with many consumer goods, and industrial products aren't immune either. It's now common that any modern product will be almost exclusively based on SMD parts, and many cannot be repaired economically, if at all. There are specialist repairers who can fix SMD boards, but only if they can get the parts. This makes it all the more important to ensure that a power supply failure doesn't fry the main PCB(s).
Fortunately, it's uncommon for switchmode power supplies (SMPS) to fail with the output going high. It can happen (and I've seen it), and it can cause stress or failure of other parts. It can be caused by electrolytic capacitor failure, and the output may turn on and off, but with the 'on' period uncontrolled. Another failure mechanism is that the optocoupler used for feedback fails, resulting in a higher than intended output voltage. In a few cases, over-voltage protection is provided on peripheral boards to protect against SMPS failure, but all too often it's not included.
It's important to understand that there are two main classes for over-voltage protection. One (and that described here) is for electronic assemblies that rely on a well-regulated DC power supply, and the other describes mitigation for mains over-voltage conditions caused by supply network disturbances or lightning. Another class of device protects electrical gear against mains under- or over-voltage, and an example of this type of circuit is shown in Project 138. Protection against lightning (in particular) is much harder, because the energy available can be very high, and virtually nothing will protect equipment against a direct (or close by) lightning strike.
A comment I've made before is the answer to the question "Why doesn't lightning strike the same place twice", with my answer being that "The same place isn't there any more!" This isn't strictly true of course, but I used to have a large tree next door to my home that was hit by lightning, and it was literally blown in half. (For what it's worth, it survived - at least until the block of land was sold and the tree was removed.) As for the saying itself ... it's a myth. Lightning does strike the same place many times, particularly if the structure is designed for the task (a lightning rod, for example) and/ or isn't destroyed.
If you have a system that uses microprocessors, ASICs (application specific ICs), FPGAs (field programmable gate arrays) or other expensive circuitry, over-voltage protection should not be an afterthought. All too often it's left to the power supply to always provide the right voltage, with sufficient current to ensure proper operation. If your circuitry draws a few amps at 5V (or other voltage as appropriate), then the supply should always be capable of supplying more current than the circuitry draws. A power supply that's on the edge is working hard all the time, and is more likely to fail than one that's over-engineered for reliability and long life.
However, any power supply can fail, and the results can be catastrophic if the failure mode means that the voltage increases beyond the maximum allowed for the ICs. Most analogue audio systems can't tolerate excessive voltages either, but the devices used in most gear are relatively inexpensive, and failures are uncommon. Even if a linear regulator IC does fail, the ICs can be replaced fairly cheaply. This is not the case when costly DSP (digital signal processing) devices or other expensive semiconductors are used though, so over-voltage protection is still a consideration. Doubly so if the supply is a switchmode type, as failure is somewhat more likely than a simple (well designed) linear supply.
Although generally considered 'brutal', the best over-voltage prevention device is a crowbar circuit. It's so-called because it's the electrical equivalent of dropping a crowbar across the supply terminals, with no consideration for any subsequent damage to the power supply. The supply has already failed (hence the over-voltage condition), so a short-circuit is the safest option to protect your circuitry. In some cases you may need to take additional precautions to ensure that the (very) sudden absence of supply voltage doesn't cause additional damage. A published amplifier design from many years ago used a crowbar circuit to protect a power amplifier from overload, but due to a design error in the amplifier, when the crowbar operated, the amp failed as well. This was not the desired outcome!
Under-voltage protection is less common, but there are applications where it can be very important. An example (and the one used in Section 6) is a motor, which cannot start under load if the voltage is too low. This can lead to failure in some cases. Under-voltage conditions can also cause circuits to misbehave, and while it usually doesn't cause any damage, it may still have an undesirable outcome.
You often hear people claim that a 'voltage surge' caused some kind of damage to equipment. The term is over-used and generally meaningless, because it fails to specify anything tangible. There are two different types of overvoltage, ESD (electrostatic discharge) and a condition where the voltage exceeds the nominal by some (excessive) percentage. ESD is very high voltage, but usually doesn't supply much energy. ESD is often responsible for damage to MOSFET and CMOS circuits, and is almost always the result of poor handling procedures by the assembler. It's counteracted in a production environment by the use of anti-static wrist bands, conductive flooring materials and the use of conductive foam (or carrier tubes, etc.) for susceptible parts. For test procedures, there's a 'human body model', where the human body is modelled by a 100pF capacitor and a 1,500Ω series resistance. During testing, the capacitor is fully charged to 2kV, 4kV, 6kV or 8kV, depending on the test procedure being used. The charged capacitor is discharged through the resistor to the DUT (device under test).
Figure 1.1 - Human Body Model
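Using the 100pF / 1,500Ω human body model described above, the peak current and available energy are easy to estimate; 8kV is the highest of the test levels mentioned in the text:

```python
def hbm_peak_current(v_charge_v, r_series_ohms=1500.0):
    """Worst-case initial discharge current into a short: I = V / R."""
    return v_charge_v / r_series_ohms

def hbm_stored_energy(v_charge_v, c_body_f=100e-12):
    """Energy stored in the body capacitance: E = C * V^2 / 2."""
    return 0.5 * c_body_f * v_charge_v ** 2

# At 8 kV: peak current ~5.3 A, stored energy 3.2 mJ.
i_peak = hbm_peak_current(8000.0)
energy = hbm_stored_energy(8000.0)

# RC time constant of the discharge - only 150 ns, which is why ESD is a
# very fast, low-energy event rather than a 'surge'.
tau_s = 1500.0 * 100e-12
```

The high peak current and very short duration are exactly why ESD damages gate oxides without leaving any visible trace.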
Static discharges can occur when equipment is in use. Not always because of static discharge per se, but often when a switchmode power supply is used to provide power to a circuit. Most SMPS are 'floating', and are not earthed/ grounded, and are classified as 'double insulated' (Class-II [IEC]). An internal capacitor (Class-Y1) bridges the insulation barrier, and is used to minimise EMI. The output of these supplies usually has an AC voltage present at the output, typically at around 90V with 230V mains, or 45V with 120V mains. This is highly variable though, and it can be more or less depending on the supply. If an input stage is connected to this voltage before the earth/ ground connection is made, it's surprisingly easy to damage the input device. High input impedance circuits are more susceptible than those with low impedances (not surprisingly).
I suppose one could call this a 'voltage surge', but it's a specific condition that is easily modelled and tested. It's not a 'surge', but a very short voltage 'spike'. The term 'surge' implies something that changes relatively slowly (a couple of milliseconds is 'slow' in electronics). In reality, surges are very uncommon. The AC mains is subject to long and short-term variations, but to qualify as a true surge it would have to be well over the nominal maximum (> 15 to 20% or so) and last for at least a few cycles.
Any manufactured product will (should) be able to handle the full mains variation of ±10% from nominal. Many SMPS can function normally with anything from 90V to 260V AC, 50/ 60Hz. What happens if the regulation fails depends on the supply, and some may produce an output that's much higher than it's rated for. A 12V supply may provide 20V or more if the regulation fails, and if your equipment can't handle that safely then it's probably going to be damaged. This condition can't be called a 'surge' either, as it's a constant excessive voltage that's present when the supply is powered. Some might have an overvoltage protection circuit built-in, but don't count on it! I've examined countless SMPS and have yet to see one with any (robust) form of protection. However, most failures result in no output.
Another area where overvoltage conditions are common is in automotive applications. The most common issue you'll see referred to is a 'load dump'. This occurs when a high-current load is disconnected, and the alternator's output can rise to a voltage that's far greater than the nominal 12V (or 24V for most trucks). Based on the standard (ISO-16750-2), a 12V system is tested with 10 pulses in 10 minutes, with a voltage of 101V in series with a resistor of between 0.5Ω and 4Ω. The clamping device will usually be a TVS diode, selected to be able to handle the power, and the peak voltage is usually clamped to around 35V. This is still much higher than the nominal 12V (usually up to 14.4V when the battery is charging), and it's expected that circuitry intended for automotive use will be able to handle at least 40V 'events'. The automotive environment is hostile, and electronics that can't handle the voltages, heat and vibration are not long for this world.
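From the ISO 16750-2 figures quoted above (101V pulse, 0.5Ω to 4Ω source impedance, TVS clamping at around 35V), the worst-case stress on the clamping device can be estimated. This is a back-of-envelope sketch only; real load-dump pulses have a defined rise time and decay:

```python
def clamp_peak_current(v_pulse_v=101.0, v_clamp_v=35.0, r_source_ohms=0.5):
    """Peak current forced through the TVS: (Vpulse - Vclamp) / Rsource."""
    return (v_pulse_v - v_clamp_v) / r_source_ohms

def clamp_peak_power(v_pulse_v=101.0, v_clamp_v=35.0, r_source_ohms=0.5):
    """Instantaneous peak dissipation in the TVS diode."""
    return v_clamp_v * clamp_peak_current(v_pulse_v, v_clamp_v, r_source_ohms)

# Worst case (0.5 ohm source): 132 A peak, ~4.6 kW instantaneous in the TVS.
# With the 4 ohm source the figures fall to 16.5 A and ~580 W.
```

This is why automotive TVS diodes are physically large parts with peak-pulse-power ratings in the kW region, even though the average power is small.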
The two most common single components for (transient) overvoltage protection are MOVs and TVS diodes. MOVs are bidirectional/ bipolar, and TVS diodes can be either bidirectional or unidirectional (unipolar). MOVs are most commonly seen across the AC mains input, and can suppress mains transients caused by network faults, (distant) lightning, etc. However, note that a nearby lightning strike is perfectly capable of destroying any form of protection.
In some cases (less common today) gas arrestors are used. These are hermetically sealed, with a pair of electrodes in an inert gas. They are capable of very high discharge current, and are often used in telecommunications and (less common perhaps) antenna installations. Gas discharge tubes are available in a fairly limited number of voltage ratings, and usually the minimum voltage is around 75V. I don't intend to cover these here, as it's a rather niche market and they're not common in consumer electronics.
A very simple overvoltage detector uses nothing more than a zener diode and perhaps a transistor and/or optocoupler to provide a 'fault' signal that tells the SMPS to shut down. This simplified approach has many disadvantages, because the supply will turn on again after the power is cycled ("Turn it off and back on again" is a standard 'troubleshooting' technique for electronic equipment). A common approach is the crowbar protection system, which uses an SCR (silicon controlled rectifier, aka thyristor) to short the supply if it goes above a preset threshold voltage. The risk of fire (or further damage) is mitigated by using a fuse. When the SCR is triggered, it will attempt to draw a very high current, and hopefully the supply can provide enough current to blow the fuse.
There are examples of this technique on the Net, and it's as close as you can get to being foolproof. There are others as well, often using MOSFETs to switch off the supply if it's out of range (too high or too low). While the ICs designed for this purpose will work as intended, they rely on comparatively fragile switching devices (MOSFETs vs. an SCR). SCRs such as the BT151 or C122D (which I tested) are not powerhouses (12A, 8A [respectively] rated current in a TO-220 package), but they can handle 200A or 120A for 10ms. Very few power supplies will be able to manage that much current, although an electrolytic capacitor can provide that easily into a short circuit. However, there may not be enough stored energy to blow a fuse. There are many suitable SCR types, with some costing less than AU$1.00 each.
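Whether the fuse actually clears depends on the fault current the supply (and its output capacitors) can deliver. A rough check uses the fuse's published I²t rating; all the numbers below are illustrative assumptions, not values from the text or any particular fuse datasheet:

```python
def fuse_clear_time(i2t_rating_a2s, fault_current_a):
    """Approximate clearing time from the fuse's I^2*t rating. Only
    meaningful well above rated current (the fast-clearing region)."""
    return i2t_rating_a2s / fault_current_a ** 2

def cap_stored_energy(c_farads, v_volts):
    """Energy available from the supply's output capacitor: E = C*V^2/2."""
    return 0.5 * c_farads * v_volts ** 2

# Hypothetical example: a fuse with I^2*t = 12.5 A^2.s and a supply able to
# push 10 A through the crowbar SCR clears in about 125 ms.
t_clear = fuse_clear_time(12.5, 10.0)

# A 1,000 uF capacitor at 12 V stores only 72 mJ - it gives a large current
# spike, but on its own may not carry enough energy to blow the fuse.
e_cap = cap_stored_energy(1000e-6, 12.0)
```

This is why the supply itself must be able to sustain enough fault current: the capacitor's spike triggers nothing useful if the fuse needs sustained current to open.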
Naturally, there are other methods suggested that are (at best) ill-conceived, and while some might provide a small level of protection, they are anything but foolproof. Simple pre-regulators and other similar methods lack precision, and may also be far too slow to protect sensitive parts. You may also see electromechanical relays suggested, but they are not fast enough to protect anything. Even a fast relay will take at least 2ms to activate (most take longer), and that simply isn't fast enough. Zener diode protection schemes (of which there are many) are pretty much a waste of space, and cannot be recommended unless your requirements are very relaxed. A high-power zener diode will likely cost as much as an SCR based crowbar system, but it can never protect as well.
The biggest problem with a zener 'protection' scheme is power dissipation. If a 1A, 5V power supply is used for a microcontroller project, should it fail 'high voltage' due to a fault in the feedback path, it may try to output 7-8V at a minimum of 1A. A 5.1V zener diode would conduct, and it will dissipate at least 5.1W, but probably more. We'll assume a 5W zener, carrying 1A, and the dynamic resistance (from the 1N53 series zener datasheet) is 1.5Ω. The zener voltage will actually be closer to 6.6V under these conditions, so any thoughts of real protection are imaginary. Zener diodes can be 'boosted' with an external power transistor, but it's still a bad idea. The details for the 'boosted zener' are shown in the ESP application note 'AN-007', but to be effective any zener 'protection' scheme needs a limiting resistor, which reduces the voltage available to your circuit and dissipates power.
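The arithmetic behind the 6.6V figure is worth making explicit. A minimal sketch, using only the values quoted above (5.1V nominal zener, 1.5Ω dynamic resistance from the 1N53-series datasheet, 1A fault current):

```python
# Effective zener voltage and dissipation for a 5.1V zener passing 1A.
V_NOM = 5.1      # nominal zener voltage (V)
R_DYN = 1.5      # dynamic resistance (ohms), 1N53-series datasheet figure
I_FAULT = 1.0    # fault current forced through the zener (A)

v_actual = V_NOM + I_FAULT * R_DYN   # voltage actually seen by the 'protected' load
p_zener = v_actual * I_FAULT         # power dissipated in the zener

print(f"Actual clamp voltage: {v_actual:.1f} V")   # 6.6 V - well above 5.1V
print(f"Zener dissipation:    {p_zener:.1f} W")    # 6.6 W - a '5W' part won't last
```

The 'protected' rail sits at 6.6V rather than 5.1V, and the zener is dissipating more than its rating, so it will fail before long - exactly the behaviour the text describes.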
Detection methods used involve either a simple comparator (over-voltage detection only) or a window comparator, which provides an output only when the monitored voltage is within the valid 'window'. If it's above or below the window thresholds, the detector output is in the 'invalid' state (which can be high or low, depending on the way the circuit is configured). For information on these often ignored components, see 'Comparators, The Unsung Heroes Of Electronics' (an ESP article).
It's very common to see TVS diodes used for ESD (electrostatic discharge) and/or 'surge' protection. It's very important to understand the difference between these 'events', and to be aware of the characteristics of TVS diodes. Like all components, they cannot handle infinite power, and the maximum current rating is dependent on the duration. A short (< 10µs) pulse (ESD) is very different from a longer 'surge', which is often shown as current vs. the number of AC cycles or rectified half-cycles. Waveforms are defined in IEC61643-123 (10/1000µs), and some datasheets also provide a specification referenced to IEC 61000-4-5 (8/20µs).
TVS diodes are not exact. They are far more predictable than MOVs (metal oxide varistors), but they both have internal resistance that determines the maximum voltage above the 'clamp voltage' shown in the datasheet for a given current. A nominal 6.8V TVS can vary between 6.45 and 7.14V at the 10mA test current, and may have a rated 'stand-off' voltage of around 5.80V (the maximum continuous voltage applied to the diode). All maxima have to be derated for elevated temperature and/or longer surge times. For example, a (nominal) 6.8V TVS such as the 1N6267A can handle a peak current of 143A, but the voltage at that current is 10.5V. This indicates an internal dynamic resistance of just under 26mΩ.
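The dynamic resistance quoted above can be derived directly from the datasheet clamp figures. A sketch using the 1N6267A numbers from the text:

```python
# Dynamic resistance of a (nominal) 6.8V TVS (1N6267A), estimated from the
# datasheet clamp voltage at the peak rated pulse current.
V_BREAKDOWN = 6.8   # breakdown voltage at the 10mA test current (V)
V_CLAMP = 10.5      # clamp voltage at peak pulse current (V)
I_PEAK = 143.0      # peak pulse current (A)

r_dyn = (V_CLAMP - V_BREAKDOWN) / I_PEAK   # ohms
print(f"Dynamic resistance: {r_dyn * 1000:.1f} mΩ")   # ~25.9 mΩ
```

The same calculation applies to any TVS: the voltage rise above breakdown at high current is set by this internal resistance, so the 'clamp' voltage at a real fault current is always higher than the nominal rating.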
If you intend to use a TVS diode for protection, you must verify performance from the datasheet, and ensure that you don't exceed any of its ratings. In the case of a regulator failure the TVS diode may be considered sacrificial - if the PSU develops an over-voltage fault, the TVS diode will fail (almost always) short-circuit. The maximum long-term current has to be limited in some way, such as a fuse, PTC thermistor (e.g. PolySwitch or equivalent) or an electronic fuse (see Electronic Fuses).
Almost without exception, the first 'line of defence' is a regulator. It can be an IC type as shown, or it may be discrete. In some cases, there may be two regulators in series, with one to provide (for example) 12V and another to power 5V devices that are part of the same circuit. Mostly, this works out well enough, but the bit that's missing is circuitry to detect if the regulator fails. This isn't common, but it certainly does happen. One cause is not including an adequate heatsink, so the regulator runs hot. The other is to have an input voltage that's too close to the maximum allowable for the IC used. The 78xx series regulators are rated for a maximum input voltage of 35V, and if your input voltage is close to that with the nominal mains voltage (230V or 120V AC), a mains increase of 10% will result in an input voltage of over 38V. The regulator might survive, but it also might not. The failure mode for most semiconductors is short-circuit, so instead of 5V output, it becomes 38V!
Figure 3.1 - Simple Regulator Circuit With TVS Diode(s)
For most applications, it's unlikely that one would rely on a single regulator device to obtain a low output voltage from a 35V supply, and there is usually a secondary low voltage supply provided for the regulated low voltages. However, if cost is the only consideration (and/ or the constructor reads the datasheet and thinks s/he can get away with it), then it's quite possible. The problem is that if (when?) the IC fails, so does all circuitry that relies on the regulated voltage(s). The recommended input voltage is up to 25V. The input TVS (TVS1) would typically be rated for at least 20% more than the maximum expected unregulated input voltage, and the output TVS (TVS2) rated for no more than 10% above the required output voltage.
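The 20%/10% selection rules above can be expressed as a simple sizing calculation. The example voltages (25V unregulated input, 5V output) are taken from the text; the function name is mine:

```python
# TVS rating targets for the Figure 3.1 arrangement, using the rules of
# thumb from the text: input TVS at least 20% above the maximum expected
# unregulated input, output TVS no more than 10% above the output voltage.
def tvs_targets(v_in_max, v_out):
    """Return (minimum TVS1 rating, maximum TVS2 rating)."""
    return v_in_max * 1.2, v_out * 1.1

v_tvs1, v_tvs2 = tvs_targets(v_in_max=25.0, v_out=5.0)
print(f"TVS1 (input):  rate for at least {v_tvs1:.1f} V")      # 30.0 V
print(f"TVS2 (output): rate for no more than {v_tvs2:.1f} V")  # 5.5 V
```

The input TVS must never conduct in normal operation (hence the generous margin), while the output TVS must conduct before downstream parts are damaged (hence the tight margin).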
The use of a 'PolySwitch®' [ 2 ] or a fuse means that the main power supply is not subjected to a permanent overload if either TVS diode conducts heavily. Neither is especially fast, but a Polyswitch will reset when power is removed. A fuse is permanently open after a fault, and (if it's internal) it won't be replaced until the fault has been identified and (hopefully) fixed. Either protective device has to be rated for the normal current drawn by the downstream circuitry. Be careful if you use a PolySwitch, because they are sensitive to the temperature inside the equipment. The current ratings shown in the datasheet are at 25°C. All circuits shown below with a fuse can use a PolySwitch if preferred.
Even if the input voltage to the IC regulators is within acceptable limits, that does not guarantee that no failures will occur. The simple reality is that semiconductors can and do fail, and if you have expensive circuitry 'downstream', a regulator failure ensures that many other ICs will also fail. Even if they appear to have survived, it's probable that there will be degradation and performance will be impacted. The TVS diode (whether unidirectional [unipolar] or bidirectional [bipolar]) must be selected to suit the regulator's output voltage. Low-voltage TVS diodes are mostly unidirectional and SMD, so the choices are somewhat limited. As noted above, a TVS diode is not a precision part, so relying solely on a TVS for protection may be unwise. Limited long-term power dissipation means that a fault will almost certainly cause a TVS diode to fail - hopefully short-circuit.
The only way to ensure that downstream parts are not damaged is to employ additional circuitry to detect an over-voltage, and remove the supply voltage before it causes damage. In the examples that follow, I've shown only positive circuits with the exception of Fig. 4.3, but the same principles can be used for negative supplies as well. When a circuit uses dual supplies, it's usually a good idea to ensure that both supplies are removed simultaneously. This adds quite a bit to the circuit, and a dual protected supply isn't shown in this article.
Note that there is no negative version of an SCR, so the circuit has to be 'tricked' to allow an SCR to work with a negative supply voltage. I leave this as an exercise for the reader, but it's not particularly difficult to do. An SCR is triggered with a gate voltage that's positive with respect to the cathode. You can also use a TRIAC, which is bidirectional and will work with a negative supply.
A shunt regulator is almost fail-safe. Should the input voltage rise above the expected value, the zener diode (or transistor assisted zener) conducts harder. If the dissipation exceeds the maximum allowable, the zener and/or transistor will fail (short-circuit), protecting the powered electronics. If R1 is a wirewound type, you may be able to set it up so that if it overheats enough, the solder will melt and a spring (or gravity) will take it out of circuit. Unfortunately, these regulators are very inefficient and have maximum semiconductor current at minimum load. Provided the input voltage doesn't change, the resistor dissipation is constant.
Figure 4.1 - Shunt Regulator
Shunt regulators used to be quite common, and they're still used in many circuits where 'perfect' regulation isn't needed. Dissipation is not a problem for low-current applications, but if you need a lot of current (or it varies widely) a shunt regulator is not the way to go. However, it is generally a fail-safe option, and that alone makes it useful. If the input voltage climbs to 50V (rather unlikely, but may be possible with some circuits), the resistor dissipation will increase to over 20W, and a 5W wirewound resistor will de-solder itself. All you need to add is a spring (I leave the details to the reader), and it becomes a home-made thermal switch.
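The dissipation claim is easy to verify. The component values below are illustrative assumptions (the Figure 4.1 values aren't reproduced here), chosen so the series resistor runs comfortably within a 5W rating at the normal input voltage:

```python
# Series resistor dissipation in a shunt regulator as the input voltage
# rises.  Values are assumptions for illustration: 15V shunt voltage,
# 27 ohm series resistor (not necessarily the Figure 4.1 values).
V_SHUNT = 15.0   # zener / shunt voltage (V) - assumption
R1 = 27.0        # series resistor (ohms) - assumption

def r1_dissipation(v_in):
    i = (v_in - V_SHUNT) / R1   # all supply current flows through R1
    return i * i * R1           # P = I^2 * R

print(f"At 24V in: {r1_dissipation(24.0):.1f} W")   # ~3 W, fine for a 5W part
print(f"At 50V in: {r1_dissipation(50.0):.1f} W")   # ~45 W, the resistor de-solders
```

With these (assumed) values, a rise from 24V to 50V input multiplies the resistor's dissipation roughly fifteen-fold, which is why the 'de-soldering thermal switch' idea works at all.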
A zener can be used to activate a 'proper' protection scheme (using an SCR crowbar), but it's not a precision approach. Zener diodes always have some tolerance, and it's typically ±20%, although you can get 10% or 5% versions as well. A highly simplified circuit such as that shown next will work, but can never be precise, even with a close-tolerance zener diode. The issue with an over-simplified design is that there's no way to account for thermal effects (hot semiconductors conduct at a lower voltage than when cold), and there's no sensible way to make it adjustable. As shown, the circuit is designed for use with a 5V supply, with the circuit drawing no more than around 100-200mA. At higher current, the fuse will have measurable resistance (the voltage drop of a 1A fuse at rated current is typically about 200mV). More information about fuse characteristics is available in the article 'How to Apply Circuit Protective Devices'.
In the following circuit, a TRIAC is shown as an alternative, with the BT139 being able to handle a 10ms pulse of 145A. If you have positive and negative supplies, TRIACs can be used, since they allow you to use the same circuit topology for both polarities. Note that MT1 and MT2 are not interchangeable. The trigger voltage must be applied between the gate and MT1, but the gate and MT2 voltage can be positive or negative with respect to MT1. For optimum triggering, the polarity of the gate and MT2 should be the same. The BT139 is only a suggestion, as it can handle up to 600V and it's inexpensive (less than AU$2.00 from some suppliers). The TRIAC can be used in the other circuits shown as well, but I've not included it to keep the circuits simple.
Figure 4.2 - Simplified 5V Protection Circuit
In theory, the circuit shown above will trip if the input voltage exceeds about 5.7V. The SCR will turn on, and the fuse will blow and/ or the supply's output will be shorted. However, temperature will play a big part here, because of the SCR's gate voltage. At 25°C, it will conduct with a gate voltage of about 1V, but this falls to around 800mV at 50°C. If the SCR were to get hot (because it's next to a high power resistor for example), then the circuit will trip with 5.5V input - assuming the zener voltage is exactly 4.7V and the SCR is 'typical'. There are too many assumptions and not enough certainty for this to be considered a precision approach. However, it's a lot better than nothing.
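The temperature sensitivity described above is easy to quantify. A minimal sketch using the figures from the text (4.7V zener, SCR gate trigger voltage falling from ~1V at 25°C to ~800mV at 50°C):

```python
# Trip voltage of the simple zener + SCR crowbar (Figure 4.2): the crowbar
# fires once the rail exceeds the zener voltage plus the SCR's gate
# trigger voltage, and the latter falls as the SCR gets hotter.
V_ZENER = 4.7   # nominal zener voltage (V); +/-5% or worse in practice

def trip_voltage(v_gate):
    return V_ZENER + v_gate

print(f"Trip at 25°C (Vgt ≈ 1.0V): {trip_voltage(1.0):.1f} V")   # 5.7 V
print(f"Trip at 50°C (Vgt ≈ 0.8V): {trip_voltage(0.8):.1f} V")   # 5.5 V
```

A 200mV shift in trip point from temperature alone (before zener tolerance is even considered) is why this circuit can never be called precise.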
Figure 4.3 - Simplified ±5V Protection Circuit (Dual Polarity)
A dual polarity version is shown above, using either SCRs or TRIACs as switches. TRIACs allow the circuit to be fully symmetrical, but there's no particular advantage. The two switches (positive and negative) are identical with dual SCRs, maintaining the same polarities. It may look a bit weird, but the function isn't changed. Be warned that all of the simplified circuits only shut down the faulty supply, so if the positive voltage causes its circuit to operate, the negative supply will continue to work normally. This can cause circuits to misbehave (large DC offsets for example), so it's not a panacea. Shutting down both supplies (regardless of which one fails) is preferable, but harder to achieve.
Ideally we need something that can be varied to a precise trip voltage. It can then be tested, adjusted and verified (using a variable lab supply) before it's put to use. Needless to say, the solution becomes more complex, but it only needs cheap parts (certainly cheaper than the circuit being protected) and can be built as a small module, ready to be installed anywhere that you'd like to protect a sensitive circuit. The circuit itself needs to be flexible enough that it can be used with different supply voltages, but that becomes difficult with some ICs that use a 3.3V (and some even lower) supply. To protect these, you're almost certainly going to need a dedicated IC, or a more complex circuit with a separate supply.
Figure 4.4 - Adjustable Protection Circuit
Now we have a circuit that can be set to a precise voltage, using the TL431 adjustable voltage reference. It doesn't rely on any semiconductor junction variations, whether from unit to unit or with temperature. It can be adjusted from 3.7V up to 15.1V as shown, but the range can be modified by changing the value of R2. Increasing the value means it will respond to lower voltages and vice versa. This general idea is not at all new - it's been around in various forms for many years.
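The TL431's trip point follows the usual voltage-divider relationship: the device conducts when its REF pin reaches the internal 2.495V reference. A sketch of the calculation (the divider names Ra/Rb are generic illustrations, not necessarily the Figure 4.4 designators, and the trigger transistor's base-emitter drop adds a little on top, as discussed further below):

```python
# Trip voltage for a TL431-based detector.  The TL431 conducts when its
# REF pin (fed from a divider across the supply) reaches ~2.495V.
# Ra/Rb are generic names, not the Figure 4.4 values.
V_REF = 2.495   # TL431 internal reference voltage (V)

def trip_voltage(ra, rb):
    """Supply voltage at which REF reaches V_REF (Ra on top, Rb on the bottom)."""
    return V_REF * (1 + ra / rb)

print(f"Ra=10k, Rb=10k: trips at {trip_voltage(10e3, 10e3):.2f} V")  # ~4.99 V
print(f"Ra=47k, Rb=10k: trips at {trip_voltage(47e3, 10e3):.2f} V")  # ~14.22 V
```

Because the threshold depends only on the reference and a resistor ratio (with a trimpot in the divider for adjustment), it's free of the junction-voltage and temperature uncertainties that plague the simple zener version.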
Many other schemes can be found if you search, but many are poorly thought out and have potentially fatal flaws. The circuit shown in Figure 4.4 can be made more complex by using a comparator, which may provide a theoretical advantage, but no actual improvement. The circuit has to be fast-acting, even though over-voltage faults are usually not especially fast. The more parts that are used, the longer it takes (typically measured in microseconds) for the protection circuit to react. An SCR is very fast - once triggered, the transition to full conduction takes almost no time at all. The BT151 (for example) has a turn-on current rise of 50A/µs, meaning that once triggered, the current will be 50A after just 1µs (assuming that the supply can even deliver that much current). Reality is different of course, but I measured a C122D SCR, and it took only 5µs to reduce 7V to 0.5V at low current (SCRs tend to get faster as current is increased).
Figure 4.5 - Regulator With Over-Voltage Protection
In the Figure 4.5 circuit, if the regulator (U1) should fail, the protection circuit will operate and remove the input supply. The diode across the regulator is to protect it against reverse voltage during testing, but I suggest that the diode always be used. The protection circuitry should be adjusted and tested with the SCR connected to the supply via a resistor (choose a value that will provide about ½A), and once it's been tested and verified, the resistor is replaced by a link. The circuit will do nothing until there's a fault and it then short-circuits the incoming supply. This may never happen of course, but if the regulator fails and you have no protection, it could be a very expensive failure. There's no need to worry about the voltage drop across the fuse, so a lower value can be used if the protected circuitry doesn't draw much current. The same arrangement can be used for any current (and any regulator) with the only change being the voltage setting.
In most cases, the protection circuit needs to operate as quickly as possible. Depending on the circuitry being protected, there may be a possibility of very narrow 'spike' voltages that could conceivably trigger the protection circuit (false triggering). If this is a possibility then the circuit may need to be slowed down, or add a TVS and/or a big capacitor (about 1,000µF) in parallel with the output. If needed, the response time can be increased by adding a small capacitor between the base and collector of Q1. With the values shown, a 1nF cap between base and collector will add a 1.3µs delay. Increase the value to increase the delay (e.g. 10nF gives a delay of ~4.5µs). It's important to have as little delay as possible, as this provides maximum protection.
Most protection circuits will be used with relatively low voltages, and they will almost always be regulated. Using over-voltage protection with an unregulated supply is generally not necessary. By nature, unregulated supplies vary their output voltage depending on the load current and mains voltage. Since the mains can change by ±10% (and sometimes more), any protection scheme has to consider that, and any circuit that uses an unregulated supply will (or should) be designed to handle normal variations without failure. This is a topic unto itself, and is not relevant here.
Any solution using a variable voltage reference needs to account for the emitter-base junction of the trigger transistor. The reference voltage for the TL431 is 2.5V, and one must allow 700mV for the emitter-base voltage of the transistor. That means that the minimum voltage that can be detected is 3.2V, but it would not be prudent to try to use that. There's simply insufficient 'headroom' and no safety margin. This is where dedicated ICs come into their own, as they are designed to work with all common supply voltages.
Where voltages and currents are appropriate, you may be able to use a 'Polyswitch™' PTC thermistor in place of the fuse. These will provide protection, but don't need to be replaced if the SCR is turned on by an over-voltage. This can be handy, but you're relying on it acting every time power is turned off and on again. A blown fuse is a sure indicator that something is seriously wrong, but it's of limited use if the faulty power supply can't provide enough current to ensure that the fuse opens. Given that this is circuitry that may never operate for the life of the equipment, keeping the cost as low as possible is advisable. You also need to consider the internal resistance of PTC thermistors, which can be up to double that of a similarly rated fuse.
There are countless ICs designed to provide protection for sensitive electronics. These are often referred to as 'supervisory' circuits, because they can monitor several voltages and provide a 'power good' signal when all monitored supplies are within the limits defined by external resistors or internal software. Many don't have the ability to activate a crowbar protection system, although it can be cobbled together with some devices. Others have no provision to actually do something proactive if the supplies are out-of-bounds, other than provide a signal to the power supply to turn off. A supply with a fault may not be able to do so, and there can still be enough stored energy (in filter/ storage capacitors) to cause damage.
Because there are so many different ICs designed for power monitoring, it's not sensible to even try to cover them all, so this section is largely 'commentary' to advise the reader of the existence of such devices. The search and selection depends on too many criteria that are specific to an application. However (and purely as an example), I've shown a circuit for reverse polarity, under-voltage and over-voltage below. This is based on the LTC4365 datasheet, and in this instance it's intended for automotive applications.
To be able to use N-Channel MOSFETs as the switching devices, the LTC4365 uses an internal charge-pump supply to drive the gates, with up to 9.8V available at 20µA. With an operating range from 2.5V to 34V and a protection range of -40V to +60V, it's designed to cover a very wide range of potential uses. Naturally, it's only available in SMD packages (two different packages are made, but they're not pin-compatible). This is why pin numbers aren't shown in the drawing.
Figure 5.1 - Automotive Under/ Over/ Reverse Voltage Protection Using An LTC4365
The datasheet has many examples of different circuits, and if you wish to know more then download it from the link [ 5 ] below. The parameters are programmed by using resistors, and the arrangement shown (with two MOSFETs) is only needed if reverse polarity protection is required. A single MOSFET is enough to provide simple under and over-voltage protection. There's always a problem with this approach, because the required resistors will often be inconvenient (i.e. unobtainable) values, so you will usually end up using series or parallel combinations to get the resistance needed. Of course you can also use trimpots, but they will take time to set properly in a production item, and also give the end-user something to fiddle with if so inclined. This rarely ends well.
As already noted, this device is one of a great many, and its suitability has to be verified for the needs of the designer. There can still be situations where the circuit malfunctions, or one (or both) MOSFETs become shorted due to an overload or high-energy transient of either polarity. All protective circuitry involves compromise, and building something that can handle all unexpected events is difficult, in part because some 'events' are unexpected, and no one would normally anticipate them. Unfortunately, life is full of 'unexpected events', as the recent COVID-19 pandemic has demonstrated only too well.
In some cases, the designers have already thought of things that the user may not have considered. For example, just a 300mm length of wire has a parasitic inductance of about 300nH, and that can cause ringing with fast transients. In such cases, use a TVS diode or other fast-acting means of damping transient ringing. Ringing can cause under- and over-voltages that are too short to activate the protection circuit, but they can still cause damage!
Figure 5.2 - MAX6495 Overvoltage Protection With Regulator
The MAX6499 IC operates almost identically to the LTC4365, but doesn't have reverse polarity protection circuitry. There's another IC in the same 'family' that does though - the MAX6496, which uses an N-Channel MOSFET for overvoltage and a P-Channel MOSFET for reverse polarity protection. For many applications (e.g. those that are permanently wired internally), reverse polarity protection isn't needed, so the circuit is simplified. The basic application circuit is very similar to that shown above. It's easier to program with the resistors, but it's only available in a TDFN package that's hard to work with. The MAX64xx devices can operate at up to 72V.
The MAX6495 can monitor the output of a regulator, either a DC-DC converter or linear. If the voltage at the 'OVSET' pin exceeds 1.24V the IC turns off the MOSFET. However, the circuit shown is an over-simplification, because it will turn on and off repeatedly if the regulator fails. To be useful, you'd need to incorporate a latch so that once triggered, it cannot restart. Note that the circuit shown is adapted directly from the datasheet, which doesn't offer a suggestion as to how to prevent it from turning on and off for as long as the fault continues. It is claimed that the IC will enter a 'linear' mode to maintain the output at the OVSET level, but this can only protect against transient events, and long-term operation will cause the MOSFET to overheat and probably fail.
In addition to specialised devices, many Class-D amplifier ICs also include under- and over-voltage detection. This prevents erratic operation at low voltages, and protects the IC and output MOSFETs from over-voltages that Class-D amps can develop due to a phenomenon known as 'bus pumping'. (An explanation of bus pumping is outside the scope of this article.)
I imagine that some readers will wonder why anyone would bother to detect under-voltage conditions. It's tempting to think that if the voltage is too low, nothing 'bad' can happen. Unfortunately, this isn't the case at all, and even some otherwise well-behaved circuits can malfunction if the voltage is too low. One example (and it's directly related to audio) is opamps. There are several common opamps that misbehave quite badly if the voltage is less than their rated minimum voltage. The TL07x series is an example, where they either make 'odd' noises as the voltage falls below the threshold (which is around ±4V but it varies) or show very high output offset voltages that can cause loud 'thumps' through the speaker (via a power amplifier of course). The Project 05 power supply was designed to include a muting circuit for this very reason.
Other devices that can (and do) misbehave include switchmode power supplies. The controller IC almost always includes a facility to detect an under-voltage condition and prevent the supply from functioning. There is one IC that I know of that does not include this - the XL6009 boost converter IC, which is supposedly an 'equivalent' to the LM2577. The latter has in-built under-voltage protection, where the XL6009 does not. As a result, at low input voltage the boosted output voltage is uncontrolled, and can reach 40V with an input voltage of 3V. The datasheet claims that it has under-voltage protection, but it doesn't work. There are undoubtedly other examples, but the ones mentioned are those I have experienced first-hand.
In general, under-voltage cutouts are used anywhere that a circuit might malfunction or misbehave once the supply voltage falls below a minimum, determined by the circuitry involved. It's usually much less of a problem than over-voltage, as it's unlikely to cause any damage to the circuitry. The switchmode boost converter referred to above is a rare exception. Most designers don't bother unless they know that the circuits will do something 'bad'. Most circuits just stop working if the voltage is too low.
An exception to the 'they just stop working' idea is an electric motor. Whether AC or DC (including many 'brushless' DC motors), if these are powered under load when the supply voltage is too low, they may not be able to start, and that causes very high current flow with no cooling (many motors rely on an internal fan to force air past the windings). With these, an under-voltage protection circuit should be considered essential if there's a likelihood that the supply voltage may fall to a level that's insufficient for the motor to run normally. It's not a common problem, but it certainly exists, and can cause expensive damage.
One is faced with a conundrum with any under-voltage cutoff system. To be able to function, the cutoff circuitry must be able to work at the lowest likely voltage, but be able to handle the 'normal' full voltage equally well. The circuit doesn't have to be functional with zero volts input for obvious reasons, but (depending on the nature of the load) it may need to work with less than 5V input. It should also cause no voltage drop of its own, as that would reduce the voltage to your circuitry (or motor) all the time, and it may be subject to high dissipation when powering the load.
This is a place where a relay can be useful, as they have low contact resistance and dissipate very little power. However, there's a trap! Let's assume that you use a 5V relay with a 10A contact rating, and suitable for up to 30V DC switching. There are countless candidates, and most will be very similar to each other so I'm using 'generalised' data. A 5V coil relay will typically activate with around 3.5V across the coil, so if your circuit operates from 5V, the relay alone will prevent power being delivered to the circuit unless there is at least 3.5V available.
However (and here's the trap), the relay will continue to provide power until the coil voltage falls below the 'drop-out' voltage, which can be as low as 500mV. A 12V relay will pick up at around 8.5V, but won't release until the coil voltage falls below 1.2V. The drop-out voltage is far too low if the supply voltage falls after the relay has engaged. An example might be an automotive application, where the battery voltage is sufficient to allow the relay to activate, but falls as soon as any significant current is drawn (an almost flat battery or a high-resistance battery connection will do just that). The relay may not release under these conditions, so additional circuitry is essential to force the relay to release if the voltage is lower than your device can tolerate.
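The trap described above is essentially unwanted hysteresis, and a tiny state model makes it obvious. The pick-up and drop-out voltages are the 'generalised' 12V-relay figures from the text:

```python
# Relay contact state with pick-up / drop-out hysteresis, using the
# generalised 12V-relay figures from the text.
V_PICKUP = 8.5    # coil voltage needed to engage (V)
V_DROPOUT = 1.2   # coil voltage below which it finally releases (V)

def relay_state(v_coil, was_on):
    """True if the contacts are closed, given the previous state."""
    if not was_on:
        return v_coil >= V_PICKUP   # must reach pick-up voltage to engage
    return v_coil > V_DROPOUT       # once engaged, holds in until drop-out

state = relay_state(12.0, was_on=False)   # battery healthy: relay engages
state = relay_state(5.0, was_on=state)    # battery sags to 5V: still engaged!
print(state)   # True - the relay alone gives no under-voltage protection
```

Unassisted, the relay keeps delivering power from 12V all the way down to 1.2V, which is why the extra circuitry in Figure 6.1 is needed to force it to release at a sensible threshold.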
Figure 6.1 - Automotive Under-Voltage Protection Using A Relay
Figure 6.1 shows one way this can be done. The input voltage must be greater than 10.6V (nominal) for the relay to activate, and if it falls below 10.6V at any time Q1 turns off and so does the relay. This circuit is the simplest way to achieve the result, but it has a built-in flaw! If you attempt to power your circuit (let's assume a motor) and that causes the voltage to fall, the relay will drop out. With no current drawn from the battery, the voltage will rise again, and the relay will re-engage. The connected load will cause the voltage to fall again, and the cycle will continue.
C1 provides a small delay to prevent the relay from behaving like a buzzer, but a better solution would be to use a low-voltage opamp or comparator to provide hysteresis and a timed 'lockout' period (at least one second). All problems have a solution, but it's not always obvious, and a seemingly trivial exercise can become complex very quickly. There is always a cost-benefit formula to be satisfied, and this is especially true for commercial products. For example, no car maker will include anything that's not strictly necessary, so don't expect to find circuits such as the above for each motor in your car. It would be 'nice' if they did so, but most modern vehicles have many motors, and the cost would be prohibitive. In most cases, there's no real benefit either, but your specific application may be very different.
This is especially true if a motor is turned on remotely, where there's no one around to verify that it's working. Systems using microcontrollers or similar should have the necessary protection built into the code, with a routine to verify that the motor's supply is 'good', and/ or to monitor abnormal operation.
There's another class of under-voltage detection and disconnection circuit, namely battery protection. Whether it's a single cell (e.g. Li-Ion) or a complete battery (a collection of cells in series, parallel or series-parallel, any chemistry), most battery types will be damaged if discharged below a specific voltage. This varies with different battery chemistries, and for Li-Ion it's about 3V/ cell, or 1.8V/ cell for lead-acid cells (open-circuit voltage). Ni-MH (nickel-metal hydride) cells should not be discharged below 1.0-1.1V per cell. Recommendations vary, so you must do your own research.
There are countless ICs for protection and balance-charging for Li-Ion cells and batteries, and some include under-voltage (or over-discharge) protection. It's always tricky with batteries, because the under-voltage protection circuitry will consume some power, and that can cause the battery to be discharged even after the load is disconnected. Project 184 shows how I got around that limitation, by disconnecting both the load and the under-voltage cutoff circuit from the battery if the voltage falls below the minimum. The circuit is turned on by the act of connecting the battery. An ultra-low power version of the circuit is shown below. It draws only ~700µA in use (excluding the load). The LM285LP-2-5 regulator IC will regulate down to 10µA cathode current.
Figure 7.1 - Battery Cutoff Circuit (From Project 184)
There are many requirements for this kind of circuit. Nearly all battery chemistries are 'upset' by over-discharge, so a means for prevention is essential. Any piece of test equipment or other gear that uses a rechargeable battery should incorporate an under-voltage cutoff to prevent battery damage. It needs to be designed so that the protection circuit itself doesn't cause further discharge, either by disconnecting itself, or by using ultra-low power consumption electronics. The allowable 'parasitic' discharge depends on the application, the battery size (in amp-hours) and the likelihood (or otherwise) of the battery being left discharged for a long period. There is no 'one size fits all' solution. As is to be expected, the Net has countless examples, but not all are satisfactory. There are quite a few options that allow an entire cutoff circuit to work with a supply current of less than 200µA, but it does require some trick circuitry to work with such a low current. The circuit shown above is a good option, but an even lower-power single opamp would reduce current even further.
CMOS opamps are potentially a good choice, but most are rated for a maximum supply voltage of 5.5V (5V recommended). This means that the opamp's supply has to be regulated as well, which complicates the design. Very few applications require that the under-voltage cutout circuit should draw less than 1mA, unless the connected circuitry is also very low-power. A current draw of even 10mA for the cutout is of little consequence if the circuit draws 100mA or more. 10mA would be silly if the connected circuit only draws 2mA, so the design has to be adapted to the application.
Keeping protection circuit operating current to the minimum has several advantages. One is that the circuit itself doesn't draw a current that reduces battery life, and the other is to ensure that the circuit doesn't continue to discharge the battery after a forced disconnection of the load. This is particularly important for equipment that may sit around for extended periods without being used, as even 100µA will eventually discharge a battery or cell to zero volts. It may take a long time, but when combined with self-discharge (a common 'feature' of most battery types) it will eventually cause over-discharge damage.
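To put the parasitic-drain figure in perspective, here is a rough calculation of how long a given standby current takes to consume a battery's full rated capacity. The 2000mAh capacity is a hypothetical example value, and self-discharge is ignored (so real life will be worse).

```python
def hours_to_discharge(capacity_mah: float, standby_ua: float) -> float:
    """Hours for a standby current (in microamps) to consume the rated capacity."""
    return capacity_mah / (standby_ua / 1000.0)

# 100uA drain on a (hypothetical) 2000mAh battery:
hours = hours_to_discharge(2000, 100)
print(hours)              # 20000.0 hours
print(hours / 24 / 365)   # roughly 2.3 years -- self-discharge will shorten this
```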
Great care is necessary if the application is 'mission critical' (a battery-powered drone or other aircraft for example), and you usually have to accept the possibility of over-discharge to prevent your aircraft from falling from the sky when the voltage falls below the threshold. A damaged battery isn't cheap to replace, but it's a great deal cheaper than replacing the entire aircraft and its payload. In such cases, it's better to have a signal that warns you that the battery voltage is low so remedial action can be taken before the battery is damaged or the aircraft crashes. Further discussion of this is outside the scope of this article.
It's worth adding a short section on MOVs (aka voltage dependent resistors or just varistors), as they are commonly used in SMPS (switchmode power supplies) and for mains 'surge' protection in power boards and the like. A MOV can conduct a prodigious amount of current for a short time, depending on the device used. 1kA (1,000A) or more is easy, but the duration has to be very short (20µs or less). Every time a MOV conducts, a small amount of the working material (typically zinc oxide) is damaged, and eventually the MOV will fail. The most common end-of-life failure mode is short-circuit, and the MOV will (literally) explode. In some cases a MOV will be paired with a thermal fuse that opens if the device gets hot - the precursor to complete failure.
There are MOVs with an internal thermal fuse, and some have an extra terminal to connect an indicator that shows that protection is still provided. The same thing can be done with an external thermal fuse, but it needs to be in close thermal contact with the MOV(s). While MOVs with an integrated thermal fuse are certainly a good idea, you must also consider replacement at some time in the future. Should the selected part become unavailable, your protection circuit can't be repaired. An external thermal fuse should be rated for no more than 150°C.
Figure 8.1 - MOV Overvoltage Protection With Indicator
The general idea is shown above. The thermal fuse would be mounted between the two MOVs, and in good thermal contact with both. The indicator should be a high-brightness LED, as the available current is less than 1mA with 230V mains. R1 must be rated for the full voltage, but in most cases you'd use two resistors in series to ensure safety. As long as the MOVs are intact and the thermal fuse hasn't opened, the LED will be 'on', indicating that all's well. The diode in parallel with the LED protects it against reverse voltage.
The MOVs, main fuse and thermal fuse all need to be selected for the mains voltage in use (230V or 120V AC), and everything has to be in an enclosure that prevents accidental contact. MOV selection is almost an art form, because not all manufacturers have the same recommendations for the voltage rating needed for the mains voltage. It's fairly common to use 275V AC rating for 230V mains, and around 150V for 120V mains. If in doubt, consult the datasheets, as these recommendations vary widely. Selection has been simplified (somewhat) recently, and you can often use a MOV that's rated for the mains supply voltage in use.
Specifying a voltage that's too low will cause premature failure of the MOV, so it's usually better to use one that has a higher voltage than will ever be experienced in normal use. For example, although Australian mains power is nominally 230V, it can (and does) occasionally exceed 260V. The same thing happens everywhere, and any MOV expected to clamp a sustained voltage that exceeds its rating will fail prematurely.
Note that you may see references to MOVs being used between active (live), neutral and earth/ground. In most countries this will not be permitted, and they should only be connected between active and neutral.
This article is a brief look at the world of 'supervisory' and protection circuits, designed to protect electronics against (predominantly) over-voltage conditions. This isn't an area that attracts too many readers, which is a shame, because there's a great deal to be learned from datasheets and other literature on the topic. One thing that's guaranteed in electronics is that there's always something new (even if it's only new to you) to be found, and by knowing about these techniques you are less likely to be left wondering what to do if you encounter a problem with a design.
As with any circuit, the implementation will determine whether it works or not. With IC designs, there are many tests necessary to ensure that the reference voltages are set correctly, and you must also consider component tolerances. No component is exact - IC internal reference voltages, resistor values and in some cases even PCB track resistance can affect the design, and everything has to be considered. This isn't quite so critical for a simple crowbar over-voltage protection circuit, but IC designs can be very fussy (look at some of the resistor values in Figure 5.1 as an example).
Make sure that tracks are sized appropriately if a crowbar circuit is used. I'm sure that most constructors would rather replace a fuse than have to repair a track that's been blown off the board. This can happen with very high currents; repairs can be difficult, and they always look messy afterwards.
Crowbar circuits are very robust and operate quickly - generally within a couple of microseconds. This can cause problems with some circuitry, so it's essential that you understand the circuit being protected, and add any necessary additional protective measures to ensure that a very sudden removal of power doesn't cause any problems. Be especially careful with 'downstream' regulators (e.g. from 5V to 3.3V), as they may not have an 'anti-parallel' diode as shown in Figure 3.1. The purpose of the diode is to ensure that no voltage (greater than 650mV) can exist at the output of a regulator when the input voltage is suddenly removed (or shorted out). Under normal operating conditions the diode is not usually necessary, but it becomes essential if a crowbar circuit is introduced.
Not every circuit needs protection, and the need has to be determined based on the allowable supply voltage range, the cost of the protected circuitry and the cost of protection. It would make no sense to use a $10 IC to protect a $5 circuit, and nor would it make sense to try to protect a $1,000 circuit with a 10 cent fuse. Power supplies (especially 'old school' linear [mains transformer based] types) are remarkably reliable, provided the regulator IC is provided with a heatsink as needed and everything is rated appropriately. Even most switchmode supplies are surprisingly reliable, but they won't last for 50 years or more (common for linear supplies). Given the expected life of many modern systems, many people seem to have accepted that a life span of 5 years is alright (I disagree, but that's another story).
Elliott Sound Products - Phantom Power
Anyone involved with audio will know about phantom power, also sometimes known as P48, because the source voltage is (or is supposed to be) 48V DC. The term 'phantom' comes from the lack of any on-board power supply (such as a battery), and the power is delivered seemingly by magic. Of course there's no magic involved, just engineering. The P48 standard was pioneered by Neumann (a very well known German microphone manufacturer), and was developed during the 1960s. Rumour has it that Neumann was told by the Norwegian Broadcasting Corporation (NBC) that their new microphone at the time must use 48V phantom power [ 1 ].
48V DC is very common, as it forms the power supply for the entire telephone system, and it was always provided by a large battery bank using 24 lead-acid cells in series. However, lest you imagine that phantom power was initially derived from the telephone battery bank, it wasn't. The telephone system operates with a negative supply, with the positive grounded. This was done to minimise corrosion should water get into the system (oxygen forms on the anode, and that would eat the wiring away in no time). Presumably the NBC had no such issues and used the more 'conventional' negative ground.
During the 1960s, great progress was made with 'solid-state' electronics, and providing a 48V DC supply in a mixing console or broadcast studio became simple and relatively inexpensive. P48 has been in constant use since it was introduced. It's now standardised in DIN 45596, and is used worldwide. There are alternatives as well, being P12 (12V DC) and P24 (24V DC), but the original is (IMO) still the best. Phantom power input circuits are always balanced, but the powered equipment may use unbalanced outputs, or just a simple impedance balancing arrangement. All work equally well in practice.
It's commonly accepted that microphones should always use a balanced connection, however this is not necessarily the case in practice. An unbalanced connection usually works just fine, because microphones are a floating source (there is no ground reference in 99.9% of cases). However, no studio or live production team will ever use unbalanced connections (i.e. a simple coaxial cable, having an inner conductor and shield) because professional microphones are always wired in balanced mode. Having two types of XLR leads (balanced for equipment interconnects and unbalanced for microphones) would be a logistical nightmare, and would almost guarantee that the wrong cable would be used. Standardising the cables saves a great deal of grief all round.
48V is not the only voltage supplied, and there are three variations. P48 is by far the most common, and is specified to provide +48V with a maximum power of 240mW (14mA short-circuit current). The other standards are less common, with 24V and 12V being 'sanctioned' variants (IEC 61938:2018). As always with standards, you have to pay to get the documentation.
Phantom Power | Voltage (no load) | Imax (Shorted) | Rfeed (Ohms) | Working Current |
P12 | 12V ±1V | 35mA | 680 || 680 | 17mA |
P24 | 24V ±4V | 40mA | 1.2k || 1.2k | 20mA |
P48 | 48V ±4V | 14mA | 6.8k || 6.8k | 10mA |
The three 'official' standards are shown above. One problem is that with voltages below 48V, the resistance of the feed network is lower, making it harder for the electronics to drive the load. While more current is available at the lower voltages, this may not be useful, since it's generally expected that something designed to operate at (say) 12V should not be damaged (and should still work normally) with the more conventional and widespread 48V phantom supply. It's inevitable that something designed to work only with P12 phantom power will eventually be plugged into gear delivering P48. If it doesn't work that's one thing, but you'd be quite rightly very annoyed if it killed the circuitry. Note that I added the 'Working Current', as it's not specified in the standards. It's also highly variable in practice, but the values shown are those that I'd recommend.
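The short-circuit currents in the table follow directly from the open-circuit voltage and the parallel combination of the two feed resistors. A quick sanity check, using the values from the table:

```python
# (no-load voltage, single feed resistor in ohms) for each standard, from the table
STANDARDS = {"P12": (12.0, 680.0), "P24": (24.0, 1200.0), "P48": (48.0, 6800.0)}

for name, (volts, r_feed) in STANDARDS.items():
    r_parallel = r_feed / 2.0            # two equal feed resistors in parallel
    i_short_ma = volts / r_parallel * 1000.0
    print(f"{name}: Rfeed||Rfeed = {r_parallel:.0f} ohms, Ishort = {i_short_ma:.1f}mA")
# P12 -> 340 ohms, ~35mA;  P24 -> 600 ohms, 40mA;  P48 -> 3400 ohms, ~14mA
```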
If a product is designed for P48, it may not work (or will work very poorly) if supplied with P24 or (worse still) P12. Ideally, standards are just that - standards. To have three different 'standards' that all use the same principle and the same connector is mad. If alternatives are offered, they should use a different connector unless the product can function normally with any of the supply voltages (and feed resistances). This is uncommon, but it is possible.
Any P48 system has to be considered as a whole. Each part relies on the others for its function, and the overview shown below covers the essential ingredients. Firstly, a power supply is required to produce the +48V DC used by the end devices, which are usually microphones, but may also be DI (direct injection) boxes allowing on-stage instruments to connect directly to the mixing console. The microphone preamp will invariably have adjustable gain so that the level of each signal can be set in relation to the others (this is one of the sound engineer's tasks).
Figure 1.1 - Phantom Powering Circuit Overview
The mic cable is self explanatory, but is also responsible for a great many faults (mic cables have a very hard life). Pin 1 of an XLR connector is always ground, and connects to the shield of the cable, and always at both ends. Pin 2 is considered 'hot', and with unbalanced (or impedance balanced) circuits, the signal appears on Pin 2. Pin 3 is 'cold', and in some cases it may be grounded by equipment. For 'true' balanced connections, Pin 3 carries a signal that is an inverted copy of that on Pin 2. The effective signal level is doubled by this technique.
Because Figure 1.1 is simplified to the extreme, each of the functions will be covered in detail below, with the exception of the mic preamp. There are countless variations, but one of the most important functions (the mic preamp protection circuit) is covered. Even with this, there are many variations, but the one shown is adapted from an ESP project and is known to work very well. It provides a level of protection that ensures mic preamps will survive most abuse (but not all - some faults can destroy everything in the signal path).
Because DC is provided to both signal conductors, and through equal-value resistors, the DC appears as a common-mode signal and it doesn't affect the audio. If the feed resistor tolerance isn't good enough, common-mode rejection is compromised, so hum and buzz (the most common noises injected into balanced cables) are not attenuated as well as they should be. The same applies to the microphone or DI box - it must also apply equal loading to both signal lines. Contrary to belief, the signal does not have to be present on both wires, and 'pseudo-balanced' electronics are common in microphones, even with high-end models. What matters is impedance, not the voltage. If the impedance for both signal lines is not the same, hum and other noises will be picked up by the cable.
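The effect of feed-resistor mismatch on noise rejection can be estimated with a simple model. This is a sketch only: it assumes each input leg is loaded by an equal impedance to ground (the 1.5k figure is a hypothetical example, not from the text) and places the two 1% resistors at opposite tolerance extremes (worst case).

```python
import math

def cm_to_diff_db(r1: float, r2: float, z_leg: float) -> float:
    """Common-mode to differential conversion (dB) for two feed resistors
    r1 and r2 driving equal leg impedances z_leg to ground."""
    a1 = z_leg / (z_leg + r1)   # common-mode attenuation, leg 1
    a2 = z_leg / (z_leg + r2)   # common-mode attenuation, leg 2
    return 20.0 * math.log10(abs(a1 - a2))

# 6.8k 1% feed resistors at opposite extremes, assumed 1.5k per leg:
print(cm_to_diff_db(6800 * 1.01, 6800 * 0.99, 1500))  # roughly -51dB
```

With perfectly matched resistors the conversion is zero (infinite rejection), so the feed-resistor tolerance sets a hard ceiling on the achievable common-mode rejection.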
Nothing is without limitations or compromises, and P48 is no different. It would not be sensible to allow external gear to draw as much current as it likes, as a faulty lead or equipment could cause serious damage. Some early mixers used centre-tapped transformers to provide the DC, but the centre-tap was never connected directly to the phantom power supply. Instead, it was supplied via a resistor to limit the current under fault conditions. Feeding the current to the centre-tap ensures that the transformer is not subjected to any DC magnetic field, as the two windings cancel the flux. If it were otherwise, the transformer core could saturate, causing gross distortion.
Electronically balanced microphone preamps don't use a transformer, and due to the cost of even 'average' mic transformers, most equipment uses a differential preamp, direct-coupled to the mic connector. The standard feed resistance is 6.81k, selected for two reasons. Firstly, specifying 6.81k demands that the resistors are 1% tolerance - you cannot buy 6.81k resistors with 5% tolerance. The other reason is pure compromise. Lower values would load the microphone, reducing its output level and increasing noise, and higher values would be unable to supply enough current to power any useful electronics.
Most equipment now just uses 6.8k resistors as they are readily available with 1% tolerance, something that was not the case in the 1960s or 70s. The two resistors are effectively in parallel for DC, so the total limiting resistance is 3.4k, which allows a total short-circuit current of 14mA. If the internal circuitry of the microphone (or other phantom powered gear) requires 10V for normal operation, then the maximum current available is 11mA. The available current for any operating voltage is easily calculated. A design current of up to 10mA is usually safe, and that gives the remote circuitry a maximum supply voltage of 14V.
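The arithmetic in the paragraph above generalises easily; a small sketch using the 3.4k effective DC feed resistance:

```python
R_PARALLEL = 6800 / 2  # two 6.8k feed resistors, effectively in parallel for DC

def available_ma(supply_v: float, circuit_v: float) -> float:
    """Maximum current (mA) available to remote circuitry operating at circuit_v."""
    return (supply_v - circuit_v) / R_PARALLEL * 1000.0

def voltage_at_load(supply_v: float, load_ma: float) -> float:
    """Supply voltage seen by the remote circuit at a given load current."""
    return supply_v - (load_ma / 1000.0) * R_PARALLEL

print(available_ma(48, 10))      # ~11.2mA for a circuit needing 10V
print(voltage_at_load(48, 10))   # 14.0V at a 10mA design current
```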
The rather miserly current provided means that circuitry has to be low-power to ensure it can operate from the available voltage and current. This means that the designer has to be fairly clever to ensure minimal current drain along with good performance, sufficient to suit the application. High impedance circuits don't draw much current, but they are noisy due to resistor thermal noise and other noise from semiconductors etc. Low impedance circuits draw more current and are quieter, but they may not have enough voltage to handle high signal levels without clipping.
The available power with P48 equipment may seem to be limiting, but it's usually not a problem. You have to live with the minimal current available, and the remote circuitry doesn't have to drive low impedance loads. Most mic preamps have an input impedance of at least 3kΩ, and the extra loading by the 6.8k resistor (on each signal conductor) means that the overall impedance is high enough to be easily driven with relatively simple circuits. Contrary to belief in some circles, microphones should never be terminated with a value equal to their output impedance.
An often serious limitation is that circuitry using P48 phantom power cannot be ground isolated. This is of no consequence for microphones, as they are a 'floating' signal source (not electrically connected to anything else). With DI boxes and the like, there is ground continuity between the mixer and instrument amplifier, and this often leads to issues with hum. It's sometimes possible to reduce the hum to a low level by incorporating a 10Ω to 100Ω resistor in series with the shield at the remote end. This should be bypassed with a 100nF capacitor to ground RF (radio frequency) interference. If included, a switch should be added so the resistor can be shorted out if it's not needed.
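The 100nF bypass capacitor works because its corner frequency with the ground-lift resistor sits well above the audio band: hum isolation is retained while RF is shunted to the shield. A quick calculation of that corner, assuming a simple parallel RC:

```python
import math

def corner_frequency_hz(r_ohms: float, c_farads: float) -> float:
    """-3dB corner of a parallel RC; the cap shorts out the resistor above this."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# 100nF across the suggested 10-ohm to 100-ohm ground-lift resistor:
for r in (10, 100):
    print(f"{r} ohms: {corner_frequency_hz(r, 100e-9) / 1e3:.0f}kHz")
# 10 ohms -> ~159kHz, 100 ohms -> ~16kHz: both far above the audio band
```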
There are several ESP projects that are designed to provide phantom power, with one of the most popular being Project 96. The drawing below shows the general idea for a microphone input. Although it's possible to provide phantom power via a TRS (tip, ring, sleeve) ¼" stereo phone jack, this is not recommended. Many things (such as guitars) can be plugged into a ¼" (6.35mm) jack socket, most of which are not designed to handle any DC at all. While damage is unlikely in most cases, it is still possible, so P48 is normally only made available via a female XLR socket, designed to accept a balanced microphone lead.
Figure 3.1 - Phantom Powering Circuit Including Preamp Protection Diodes
Not all phantom supplies include the protection diodes, but I consider them to be absolutely essential. You'll note that my circuit uses zener diodes, which can handle very high instantaneous peak current, and clamp the worst-case voltage to around ±11V. If you are using equipment that's powered via USB (5V), it would be advisable to reduce the zener diode voltage to 3.9V to protect the mic preamp which (probably) runs from a 5V supply.
There are countless different ways to provide the 48V DC used to power the microphones (or other P48 equipment). The circuit shown in Project 96 is a well proven design, and a modified version is shown here. Many USB powered mic preamps for use with a PC use a small switchmode supply. A 12V to 48V version is described in Project 193, which is capable of up to 100mA (roughly 10 microphones). USB versions are more limited, because they have to boost from 5V without exceeding the USB current limit. This usually means one or two mics at most, because a standard USB port can only supply 100mA unless it negotiates a higher current (up to 500mA) via software.
It's important to note that the performance of any regulator depends on the transformer. For example, if you use a voltage-doubler to provide the 'raw' DC, the transformer has to supply a minimum of twice the output VA. 48V at 200mA is 9.6 watts, so the transformer must be rated for at least 19VA, but there's a lot to be gained by using a 30VA transformer. The peak current is a great deal higher than you might expect, and that reduces the unregulated DC voltage. With the two regulator circuits shown next, the peak current may be as high as 1.7A with an output of 200mA. An under-powered transformer will cause the unregulated voltage to fall, and you may get ripple at the 48V output.
Figure 3.2 - Phantom Power Regulator
The regulator shown can supply over 200mA easily, and is simple to build. It needs a 25V AC input, which is fed to a voltage-doubler (D1, D2, C1, C2). A bridge rectifier would be a bit better (and dispenses with the voltage doubler), but then you need a 50V AC transformer winding. An 'odd' voltage is not otherwise a problem in a mixing console, because that will have a dedicated power supply that can provide all voltages needed at whatever current the circuits demand. Most mixing consoles will probably not be able to use phantom power on all channels at once, but that's rarely a problem. The circuit shown is easily modified to provide more current if it's needed. The regulator circuit needs an input voltage of at least 56V DC to ensure reliable operation and low noise. The output noise will usually be less than 50µV RMS (primarily residual 100/120Hz hum). It can be reduced by increasing the value of C4, but that shouldn't be necessary.
The transistors shown are examples, and virtually any device with similar specifications can be substituted. You might need to change the value of C6 (220pF) if the circuit shows signs of instability (radio-frequency oscillation). A larger value reduces the transient response and might allow more high-frequency noise to get through. An added LC (inductor-capacitor) filter can be added if you wish, but it should not be necessary.
IC regulators are available that can handle the voltage (standard 3-terminal regulators such as the LM317 cannot!). While using an IC may be superficially simpler than the discrete design shown, there's always the problem of sourcing the parts, and being able to find a replacement in 10 years time if the IC fails. A discrete circuit can always be repaired, especially if it uses common parts throughout. Many modern products are not made to be serviced, so if (when) they fail, the only option is to replace the entire unit. A disadvantage of the IC approach (at least with this particular type) is that it requires an output current of at least 15mA to ensure regulation, and that's why the 3.3k resistor (R3) has to be rated for 1W.
Figure 3.3 - IC Based Phantom Power Regulator
The TL783 IC is rated for up to 125V input, with an output current of up to 700mA. Interestingly, the datasheet doesn't mention the maximum power dissipation, but it would probably be unwise to exceed 20W, even with a good heatsink. The output voltage is determined by ...
VOut ≅ 1.27 × ( R3 / R2 + 1 )
Which works out to be ...
VOut ≅ 1.27 × ( 3.3k / 82 + 1 ) ≅ 52.4V (just outside the P48 specification)
The output voltage will usually be different from the calculated value because the internal voltage reference (nominally 1.25V, but shown as 1.27V in the datasheet) can vary between 1.2V and 1.3V. This means that the output voltage will actually be somewhere between 49.5V and 53.6V with the values shown. This won't cause any problems in normal use. In a most unusual state of affairs, there is no mention of the minimum input-output differential in the datasheet, but it would appear that it should be greater than 25V (meaning an unregulated supply of at least 75V). With a 25V input-output differential, dissipation with 700mA output will be 17.5W, which will be alright with a good heatsink.
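The spread quoted above is easy to verify with the output-voltage formula, sweeping the reference across its quoted limits (the small adjust-pin current is ignored here):

```python
def tl783_vout(v_ref: float, r3: float, r2: float) -> float:
    """TL783 output voltage from the datasheet formula, adjust-pin current ignored."""
    return v_ref * (r3 / r2 + 1.0)

# Reference spread quoted in the text: 1.2V to 1.3V, nominal 1.27V
for v_ref in (1.20, 1.27, 1.30):
    print(f"Vref = {v_ref}V -> Vout = {tl783_vout(v_ref, 3300, 82):.1f}V")
# 1.20V -> 49.5V, 1.27V -> 52.4V, 1.30V -> 53.6V
```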
Small switchmode power supplies are often used to boost the voltage from (say) 12V to 48V. These usually need a lot of filtering, because the supplies themselves are noisy. The noise is outside the audio band (typically 50kHz or more), but it can still interfere with the audio by producing intermodulation artifacts. Without exception, USB audio interfaces use a switchmode booster, but they have to boost from only 5V. Most will require at least 500mA to be available from the host USB port, or it's not possible to get enough current for the P48 feed and the internal circuitry.
It's to be expected that most P48 supplies made today will use a switchmode converter (aka switchmode power supply or SMPS). There are countless ICs that do everything - the regulation and main switching MOSFET are all part of the IC itself, requiring a minimum of external parts. For DIY, getting a suitable inductor may be difficult, but otherwise it's a good solution provided the output is well filtered. Miniature DC-DC SMPS modules are available from a number of manufacturers, but if you expect to boost 5V to 48V the input current of a 1W converter will be about 250mA for 21mA output (only just enough for two P48 supplies). Issues with replacement parts are also a consideration, and the ICs available today may not be compatible with newer versions. This means that if the supply fails after 5 years it may need to be completely re-engineered.
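The input-current figure quoted for a 1W 5V-to-48V module is consistent with a converter efficiency of roughly 80%. The 80% figure is my assumption for illustration, not a datasheet value; a quick check:

```python
def input_current_ma(v_in: float, v_out: float, i_out_ma: float, efficiency: float) -> float:
    """Input current a boost converter draws for a given output, at a given efficiency."""
    p_out = v_out * i_out_ma / 1000.0          # output power in watts
    return p_out / efficiency / v_in * 1000.0  # input current in mA

# 5V in, 48V out at 21mA, assuming ~80% efficiency:
print(input_current_ma(5.0, 48.0, 21.0, 0.80))  # roughly 252mA
```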
Figure 3.4 - IC Based Switchmode Phantom Power Regulator
The circuit shown above is taken from Project 193 - Obtaining a +48 Phantom Supply From 12V. This circuit has been fully tested, and everything you need to know is in the project article. It can power up to ten P48 microphones easily, but additional filtering is recommended to ensure low noise. This is shown in the project article, and it's easily built on Veroboard.
A few brave souls have used a Cockcroft-Walton voltage multiplier (see Rectifiers, Section 7) to obtain the P48 supply, but this is (IMO) a rather poor way to obtain the required voltage. Even with a full 12V peak-peak input, you need a 7-stage multiplier to reach just 42V when loaded (two microphones). You need a 9-stage multiplier to obtain 48V with a total load of 20mA, and a zener regulator is needed to keep the voltage stable. Voltage multipliers work well with very low current, but aren't suitable for more than a couple of milliamps. There's no doubt that this arrangement can be made to work, but the input current is fairly high from the switching circuit, requiring at least 250mA from a 12V supply. A small switchmode boost supply such as that shown in Figure 3.4 is a much better alternative, and can supply the same load with an input current of less than 100mA.
At the microphone (or DI box) end, there are many different ways to get the required DC and power the circuitry. I can only provide a few examples, because each design will be different. It's probably fair to say that there are as many different implementations as there are manufacturers, and I don't intend to even try to cover them all. There are two primary challenges at the signal source end of the cable, being able to extract the DC supply for the electronics, and superimposing the audio onto the DC supply. Neither is difficult, but it requires some ingenuity. The mic preamp in each circuit is simplified, and has internal DC blocking and overvoltage protection as shown in Figure 3.1.
In each case here, I've assumed an electret mic capsule, but 'true' capacitor (aka condenser) mics are also phantom powered. They generally use a very simple switching boost converter to provide the capsule polarising voltage - typically from 24V up to around 100V DC. Some other mics use an RF oscillator and demodulator to detect the change of capacitance in the capsule. These techniques aren't possible or necessary with an electret capsule.
Figure 4.1 - Phantom Powered Microphone Preamp #1
The Figure 4.1 circuit is fully balanced, and the emitter followers (Q2, Q3) are used to buffer the signal and modulate the phantom supply. The supply is taken from the collectors of these transistors. It's up to the mic preamp's interface circuit to supply P48V and isolate the preamp's inputs from the phantom supply. The circuit draws surprisingly little current, and its operating voltage is around 19V. In some cases, this will be regulated with a zener diode for particularly sensitive applications. It's shown with an electret mic capsule, but the signal source can be any type (e.g. guitar, bass, keyboard). If it's not built as a microphone, R1 and R12 would be omitted. The circuit can then be used as a DI (direct injection) box, allowing instruments to connect directly to the mixing desk. Note that the input impedance is too low for an electric guitar or bass, and an input buffer would be needed to increase the impedance to at least 100k.
Figure 4.2 - Phantom Powered Microphone Preamp #2
The second circuit is adapted from an ESP project (Project 93, Recording and Measurement Microphone). Unlike the previous circuit, this circuit uses impedance balancing, with the impedance set by the two 100Ω resistors and their series 100µF capacitors. From the perspective of a mic preamp, this is almost identical to a 'true' balanced circuit, unlikely as that may seem at first. The DC is derived from the two 2.2k resistors (R10, R11), and the circuit is designed to be able to drive the low resistance without overload.
Figure 4.3 - Phantom Powered Opamp Preamp (With Protection Diodes)
Note that these are intended as examples only - the idea is to show two different ways to utilise the phantom power to derive a power supply for the electronics. Providing P48V is simplicity itself, needing only a power supply and a pair of resistors. Obtaining DC for the remote-end circuitry is another matter altogether, as the three examples shown indicate. Those shown are by no means the only circuits used, but they are representative of the techniques that can be used.
If the remote circuit uses opamps, their outputs need to be protected from possible damage if a 'hot' cable (with P48V turned on) is plugged into the unit (D1 and D2). The cable has capacitance, and without a load it charges to the full 48V, and there is little or no current limiting. The transient impulse is very short, but can easily cause damage, forcing the opamp's output to +48V before it has a supply voltage. All P48V DC 'take-off' circuits take some time before the supply is available to the circuit. It's usually only a few milliseconds, but sensitive circuitry can be damaged in microseconds!
Most DI Boxes are battery powered, because this allows the ground connection to be broken (commonly called 'ground lift') to prevent hum caused by circulating ground currents. This isn't always convenient, and several DI boxes have the option of battery or phantom power. See Project 35 for a couple of other examples. One is completely passive, and uses a transformer. This is always a good solution, but decent transformers are expensive, and the input impedance is usually fairly low making them unsuitable for use with an instrument with no amplifier.
Figure 4.4 - Phantom Powered DI Box
A phantom powered DI (direct injection) box has several limitations, with the common ground connection being the biggest problem. In the design shown above, the 10Ω resistor (R12, in parallel with C6) may be enough to prevent hum, but it also may not. While it's shown as optional, in most cases it will be necessary. If you wish, you can add a switch to short out the network if the unit proves to be hum-free.
The current drain is quite low, and the zener has been increased to 24V to allow an input level of up to 2V RMS. It can take a bit more, but distortion will become a problem with anything over 2.2V RMS. Input impedance is 25k, which is too low for direct connection to a guitar, but is fine for the line output from an amplifier or keyboard. The overall gain of the circuit is 1.9, and you can use a level pot at the input if higher input levels are expected.
It is possible to use batteries for phantom power. Five standard 9V batteries in series gives a voltage of 45V (nominal), which is within the P48 specifications. New batteries will measure about 10V each, giving 50V which is also within the allowable limits. A phantom powered mic will typically draw a maximum of 10mA in total, so a series string of five should last for at least 25 hours of continuous use, more if the mic draws less current. 'Standard' 9V alkaline batteries have a capacity of around 580mAh, so a 10mA discharge works out to 58 hours of operation. However, the battery voltage will be down to about 7.5V (each) if fully discharged, so you can't operate for as long as you might have thought.
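As a sanity check on these numbers, here's a minimal sketch, assuming a constant 10mA load and the nominal 580mAh capacity quoted above. Real capacity falls at higher drain, and the pack is effectively unusable once each cell reaches about 7.5V, so treat the run time as an upper bound:

```python
# Rough battery-life estimate for a 5 x 9V phantom supply (figures from the text).
CELLS = 5
V_NOMINAL = 9.0          # volts per fresh cell (new cells measure nearer 10V)
V_END_OF_LIFE = 7.5      # volts per cell when effectively flat
CAPACITY_MAH = 580.0     # typical 9V alkaline capacity
LOAD_MA = 10.0           # maximum P48 mic current

pack_voltage_new = CELLS * V_NOMINAL        # 45V nominal - within P48 spec
pack_voltage_flat = CELLS * V_END_OF_LIFE   # 37.5V - below the allowable window
hours_theoretical = CAPACITY_MAH / LOAD_MA  # 58 hours if every mAh were usable

print(f"Nominal pack voltage : {pack_voltage_new:.1f} V")
print(f"End-of-life voltage  : {pack_voltage_flat:.1f} V")
print(f"Theoretical run time : {hours_theoretical:.0f} hours")
```

The gap between the theoretical 58 hours and the quoted 'at least 25 hours' is the usable capacity lost because the pack voltage falls out of specification well before the cells are chemically exhausted.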
Of course, you can use more batteries and then regulate the voltage, but the regulator would need to be very low power. Something like the Figure 3.3 IC based circuit is completely unsuitable, because it draws 15mA by itself. By comparison, the Figure 3.2 circuit draws around 6mA (no load), which is better, but too high if you were using batteries. It can easily be redesigned to use far less current, but that's outside the scope of this article.
Typically, a battery operated regulator would use 8 × 9V batteries (72V nominal, 60V with 7.5V [end of life] for each battery). Ideally, it would draw no more than 1mA or so with no load. Extreme filtering isn't needed because batteries are fairly quiet (they are not noise free), but a filter is easily added. Battery operation is normally a last resort, and if required it will end up being much cheaper to use a rechargeable Li-Ion battery pack and a switchmode boost converter. The initial cost is higher, but the saving on 9V batteries will add up fairly quickly if battery power is needed regularly.
There's another 'phantom' powering scheme that was developed in the 1960s, called 'T-Power' (aka Tonader, Tonaderspeisung, A-B powering, or parallel powering) [ 3 ]. This is completely incompatible with P48 powering, and dynamic mics are likely to be damaged if inadvertently used with T-power. The source is 12V DC, and the DC is provided between the two signal leads. The general scheme is shown below, but since T-Powering is considered obsolete I won't cover it in great detail.
Figure 6.1 - T-Powering Circuit
In many respects, this method is something of a dog's dinner. A miswired cable could reverse the polarity of the DC, and because the DC is between the two signal lines, it may damage dynamic or ribbon microphones. It can only supply 33mA into a short-circuit, but that can still be enough to cause problems with sensitive microphones. None of this is helped by the fact that some T-Power mic connectors were wired in reverse to suit older Nagra tape recorders which used positive earth/ ground (almost certainly using germanium transistors).
A typical T-Powered microphone may still work (more-or-less) normally if the cable shield is open-circuit, potentially leading to a recording that is found to be unusable when played back in a studio. The ability to keep working with an open shield is 'admirable' in a way, but it can't be considered desirable.
Mics used with computer sound-cards do not use phantom power in any traditional sense. The tip of the 3.5mm (nominally ⅛") TRS (tip, ring & sleeve) mini-jack plug is the signal from the electret mic capsule, and the ring is connected to the 5V supply by a resistor to power the capsule itself. The tip and ring are usually shorted at the microphone end. This is the most basic of all connections, and it powers only the mic capsule. There are no other electronics involved in almost all cases.
The wiring and operation of these very basic interfaces aren't covered here, as they are irrelevant to the topic. If you want to know more, I suggest a web search, or you can read the Electret Microphones - Powering & Uses article on the ESP website. This article also covers many of the topics here, but with less detail. Some computer mic sockets are stereo, with the tip being 'Left', the ring is 'Right' and the sleeve is ground. There is often no way to know for sure how your computer mic interface is wired without taking measurements. It might be in the manual, but I wouldn't count on it.
The mic input can often double as a 'line' level input, so you'll need to get into the software that controls the 'mic in' and 'line out' connections to set it up the way you need it to be. This also happens with many external interfaces - until you get the settings right in the sound controller, you won't get the results you need. This setup is outside the scope of this article.
If you are designing your own phantom-feed mic preamp, you may be tempted to increase the available output current to suit a particular piece of gear you wish to power. In a word, "DON'T". There is a risk that doing so may damage other gear, and it becomes non-standard. There are good reasons for keeping to the standard resistances and voltages, because custom 'solutions' are not compatible with commercially available (and ubiquitous) equipment.
Although it's generally accepted that dynamic microphones will function normally if P48 is applied (usually by accident), it should be disabled. There's always a remote possibility that leakage paths within the mic may cause nasty noises between the voicecoil and mic housing, and it serves no purpose. Some microphones are very sensitive to external voltages, particularly ribbon mics. If they use a transformer to step up the voltage they might be alright, but good practice demands that phantom power is used only when it's needed.
Operators and installers should make it a habit to ensure that mics are not connected with P48 turned on. When a mic is connected to a 'hot' mic lead, large transients can be created that place both the microphone and the mic preamp at risk [ 2 ]. A long cable with the conductors (and mic preamp coupling capacitors) charged to 48V can deliver a considerable peak current, which may be more than enough to damage the mic, mic preamp, or both.
Although you may see dissent elsewhere, if a P48 circuit shorts the two signal lines to ground, the worst-case current is 14mA (7mA for each signal conductor), which will cause the 6.8k resistors to dissipate less than 400mW. The resistors will get quite hot, and it might be sufficient to affect their tolerance, but there's little or no evidence to indicate that it's actually a problem. Normal dissipation when supplying 5mA on each signal lead (10mA total) dissipates 170mW in each resistor, and leaves 14V available for the remote electronics. This is generally considered 'optimum', and many lower-current mic capsule amplifiers use a zener diode to limit the internal working voltage.
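The dissipation figures above follow from a few lines of arithmetic; the resistor value and currents used below are the standard P48 figures quoted in the text:

```python
# Worst-case and normal dissipation in the two 6.8k P48 feed resistors.
V_P48 = 48.0       # phantom supply voltage
R_FEED = 6800.0    # standard feed resistor value, ohms

# Both signal lines shorted to ground: each resistor sees the full 48V.
i_short = V_P48 / R_FEED        # ~7mA per conductor (~14mA total)
p_short = V_P48 ** 2 / R_FEED   # ~339mW per resistor - under 400mW

# Normal operation: 5mA through each resistor (10mA total to the mic).
i_normal = 0.005
p_normal = i_normal ** 2 * R_FEED         # 170mW in each resistor
v_remaining = V_P48 - i_normal * R_FEED   # 14V left for the remote electronics

print(f"Short circuit : {i_short * 1000:.2f}mA, {p_short * 1000:.0f}mW per resistor")
print(f"Normal (10mA) : {p_normal * 1000:.0f}mW per resistor, {v_remaining:.0f}V at the mic")
```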
While T-Powering is rare, you need to be aware of it, especially if working in the film industry. While most systems now will use P48, there's still a possibility that you'll come across it. One would hope that such mics would use a Tuchel/ DIN connector to ensure they couldn't be connected to P48, but many T-Powered mics used XLR, which is most unwise.
Phantom feed, and 48V in particular, has been with us since 1966. It initially replaced separate power supplies for microphones, and has proven itself to be invaluable for minimising stage and studio clutter. It provides a simple, safe and convenient way to power remote electronics. Microphones remain the most common P48 powered devices, but DI boxes and even piezo pickups for various instruments can be powered just as easily.
Although some manufacturers have dallied with P24 (and even fewer with P12), P48 remains the dominant phantom power scheme. Even manufacturers of USB microphone interfaces for PC sound recording have realised that if they are going to provide phantom power, it has to be 48V, because many microphones simply don't work with lower voltages. It requires little extra effort to provide the full 48V supply, and it means that compatibility with popular professional microphones is assured.
There are a few things that users need to adjust to (such as making all connections before turning on the P48 supply), but even if you forget, no harm will come to equipment that's been designed properly. Yes, you'll get very loud noises through the monitors if the fader happens to be up, but that's a lesson quickly learned. P48 is so common now that few people involved with audio will be unaware of it, even if they don't know how it works.
This article is fairly comprehensive, and there's also a vast amount of additional information available on-line. However, finding it isn't always easy, and not all writers manage to get their facts straight. Finding info that's technically accurate can be a minefield, as anyone can write on-line articles, and not everyone gets it right. Fortunately, P48 has been around for long enough that most of the info you find will be fairly close to reality, but there are still some misconceptions and/ or errors.
Elliott Sound Products - PA Systems
In the context of this article, sound reinforcement systems are referred to as 'PA' (public address, or sound reinforcement/ high power reproduction). PA systems are the life-blood of bands, DJs and many other performers. Regrettably, there is very little useful information about the systems themselves, or how they should be configured. One of the most common types these days is commonly referred to as a 'stick-box' or 'PA on a stick' ... boxes that sit on top of stands. These come in active (aka self-powered) and passive versions, and can use speakers ranging from 200mm (8") to 380mm (15") diameter, usually with a compression driver and horn for the top end.
While these are easy to carry around and can make a lot of noise, they are generally a serious compromise. The latest offerings use plastic enclosures, and while these are much lighter than a plywood or MDF box (only because the walls are so thin), they are often plagued by panel resonances because there is little or no effective internal bracing. However, when fitted out with a high sensitivity loudspeaker and a decent amplifier, they are often all that's needed for smaller venues. Adding a sub is necessary if a really solid bottom end is needed, because few of the boxes can perform well down to 40Hz, and many struggle to get to 60Hz. Those that do get to 40Hz often do so only by applying bass boost (a peaking filter is common, similar to a single band of a graphic equaliser). This eats up available amplifier headroom, so the maximum output is never available.
By taking away the low frequency component, the amplifier is capable of a lot more output, because the bass boost circuit is no longer active. This relieves the headroom constraints in the amp, so crossing over to a sub at around 100Hz or so is well worthwhile.
Larger systems are also a compromise, but for different reasons. They are rarely self-powered, so require external amplification and active crossovers. One of the biggest problems facing those who use these systems is speaker failure, but the reasons are not well understood, and 'solutions' are often dangerously wrong.
Much of this comes about because few operators understand the reality behind the specifications of loudspeakers, and the concept of efficiency vs. power handling is almost never discussed. This is aided and abetted by marketing (dis)information, which is far more likely to cause confusion than impart any real information. The perception is that the only thing that matters is Watts. More Watts, better, lots more Watts, better still. When speaker system makers state 'output power' and give a figure in Watts, this is complete nonsense - I'd accept it if the values were around 10W (acoustic), but they actually refer to the input power, which determines the acoustic power only by taking the driver's efficiency into account. Making things a lot worse is the fact that many people who design some of the rubbish that's available now know little about the design of loudspeakers, less about designing reliable amplifiers, and less still about how their 'creations' are used.
The art of speaker (or system) design is knowing that there must be compromise, and knowing which compromise makes an audible difference and in what direction. The next stage is to assemble all the compromises in the one enclosure - if you get the mix right, you have a speaker, otherwise a large paperweight. Science assists the art - people made good speakers before the science existed, and continue to make bad speakers today - despite the science.
Let's look at a simple example. Two systems are set up side-by-side. One has an overall sensitivity of 100dB/W/m, meaning that an input of 1W will give an SPL (sound pressure level) of 100dB at 1 metre distance. Power rating is 100W maximum continuous average for this first example.
The second system is rated at 90dB/W/m sensitivity, but has a power rating of 1,000W - also continuous average. The question is ... which will be louder?
In theory, both will be exactly the same - they will be capable of 120dB SPL at 1 metre with full rated power. In reality, the 100W box will be somewhere between 3 and 6dB louder than the 1kW system, because at such a low average power there will be little power compression in the loudspeaker(s). All loudspeakers have to dissipate nearly every Watt from the amplifier as heat, which means that the voicecoil gets very hot indeed at sustained high power. For those that can (allegedly) handle 1kW, this is the same amount of heat as you'll get from a 1,000W radiant heater. As voicecoil temperature goes up, so does the resistance of the voicecoil itself, which increases the impedance, which in turn reduces the power obtained from the amplifier.
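The arithmetic behind the 'both the same' answer is simply the sensitivity plus ten times the log of the power; a quick sketch:

```python
import math

def max_spl(sensitivity_db, power_w):
    """Maximum SPL at 1m, ignoring power compression: sensitivity + 10*log10(P)."""
    return sensitivity_db + 10 * math.log10(power_w)

print(max_spl(100, 100))    # 100dB/W/m with 100W   -> 120.0
print(max_spl(90, 1000))    # 90dB/W/m with 1,000W  -> 120.0
```

Each doubling of power buys only 3dB, so a tenfold power increase is needed to make up a 10dB sensitivity deficit - before power compression is even considered.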
If only 3dB is lost to power compression, the amp power needs to be doubled (2kW) to restore the balance, but this makes the voicecoil get even hotter. This vicious circle continues until either the speaker fails or the amps have no more to give ... often both. A 90dB/W/m loudspeaker has an overall efficiency of about 0.62%, so with 1kW going in, only 6.2W comes out as sound - the remaining 993.8W is converted to heat. The 100W speaker (100dB/W/m) has an efficiency of roughly 6.2%, so 6.2W emerges as sound, and only 93.8W is wasted as heat. It is far easier to remove 94W of heat from a loudspeaker than it is to remove 994W - this should be immediately obvious. In both cases, the acoustic power is 6.2W, but it will only take a short time before the 1kW system shows power compression and reduces its output. Power compression figures for high powered loudspeakers range from around 4dB (very good) to as much as 7dB or perhaps more - this is not at all good.
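The efficiency figures can be estimated from sensitivity using the common rule of thumb that about 112dB/W/m corresponds to 100% efficiency (radiating into half space). The exact reference value varies slightly between texts, so the results below round to the 0.62%/6.2% figures rather than matching them to the last digit:

```python
import math

def efficiency_pct(sensitivity_db, ref_db=112.0):
    # ref_db is the rule-of-thumb sensitivity for 100% efficiency (half space)
    return 100 * 10 ** ((sensitivity_db - ref_db) / 10)

for sens, power in ((90, 1000), (100, 100)):
    eff = efficiency_pct(sens)
    acoustic = power * eff / 100
    print(f"{sens}dB/W/m driven with {power}W: {acoustic:.1f}W of sound, "
          f"{power - acoustic:.1f}W of heat")
```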
In all cases, it's very common that the amplifier will be specified to deliver about twice as much power as the speaker can handle. The reason for this seems to be largely historical, but it is false reasoning in many cases. It is normal to describe the extra power as 'headroom' - some extra power to cope with transients so the amp won't clip. For many, many years it has been 'common knowledge' that speakers are damaged when amplifiers clip (distort). The conclusion is that if amps are big enough, they won't clip, and loudspeakers won't be damaged. No-one seems to have noticed that guitar amps clip much or most of the time, but the speakers usually don't fail. Sensible design for a guitar amp dictates that the speaker should be able to handle double the amp's rated power!
The next piece of common wisdom is that the use of a compressor/limiter will prevent the amp from clipping, and therefore will stop loudspeakers from blowing up. When both of these 'solutions' are applied simultaneously, this must surely fix the problem once and for all. Vast numbers of speaker drivers are destroyed because of this line of thinking. It's not (and never was) clipping that destroys speakers, it's sustained high power. An amplifier that's in full clipping (squarewave output) delivers twice as much power as a sinewave of the same peak-to-peak voltage, and that power doesn't change much regardless of the dynamics of the signal. It sounds seriously awful, and speakers blow up.
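The 'twice as much power' claim is easily verified: for the same peak swing, a squarewave's RMS voltage equals its peak, while a sinewave's RMS is the peak divided by √2. The 40V/8Ω values below are illustrative only:

```python
import math

V_PEAK = 40.0    # illustrative amplifier output swing, volts peak
R_LOAD = 8.0     # ohms

p_sine = (V_PEAK / math.sqrt(2)) ** 2 / R_LOAD   # sinewave: ~100W
p_square = V_PEAK ** 2 / R_LOAD                  # full squarewave: exactly double

print(f"Sine: {p_sine:.0f}W, square: {p_square:.0f}W")
```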
With lesser degrees of clipping, the average power is still much higher than would normally be the case. Use of a compressor/limiter is only helpful if the attack time is short and the decay time long (at least a couple of seconds). When set up the way they are most commonly used, the compressor/limiter will simply maintain a (very) low peak to average ratio, and thus increase the average power delivered to the loudspeaker.
Contrary to popular belief, the speaker driver doesn't actually care if the sound is distorted or clean. Damage is caused because of the high average power, and if a limiter is used with an amplifier that has 3dB of headroom, it's probable that with some (most likely already heavily compressed) programme material, the speakers could be expected to handle close to the amp's full rated power for extended periods. There won't be any (amplifier) distortion, but the sustained high voicecoil dissipation and/or excessive cone excursion means that the speaker driver will have a short life.
While not commonly talked about, there are many additional complications created by power compression. Since the voicecoil gets hot with sustained power, this increases its resistance. This is understood, and it is this increase that reduces the power delivered to the loudspeaker and the subsequent SPL that emerges from the noisy end. The bit that no-one wants to talk about is the simple fact that the same resistance increase also changes the loudspeaker's parameters!
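The link between voicecoil temperature and power compression can be sketched using copper's temperature coefficient (about 0.39%/°C). The 6Ω cold resistance is an illustrative value, and the model treats the amp as a voltage source and the driver as a pure resistance, so it's a trend indicator rather than an exact prediction:

```python
import math

ALPHA_CU = 0.00393   # copper temperature coefficient, per °C
R_COLD = 6.0         # illustrative DC resistance of an '8 ohm' driver at 25°C

def r_hot(delta_t):
    """Voicecoil resistance after a temperature rise of delta_t °C."""
    return R_COLD * (1 + ALPHA_CU * delta_t)

for dt in (50, 100, 150, 250):
    # With a voltage-source amp, lost electrical power is 10*log10(Rhot/Rcold)
    compression_db = 10 * math.log10(r_hot(dt) / R_COLD)
    print(f"Rise of {dt:3d}°C: R = {r_hot(dt):.2f} ohm, ~{compression_db:.2f}dB compression")
```

Note that a rise of roughly 250°C (well within reach of a hard-driven voicecoil) is enough to approximately double the resistance, which is the 3dB compression case discussed above.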
In particular, the speaker Qts is modified, meaning that the carefully aligned enclosure no longer works properly. The enclosure tuning can be further modified if the air inside heats up, and this can happen easily with many boxes, because there's either no airflow at all (sealed box) or the vents are incapable of creating a change of air within the enclosure. To put this into perspective, we know that 3dB of power compression is considered a very good result for modern high powered loudspeakers. This means that for all intents and purposes, the impedance has doubled.
Consider any passive crossover network. If done properly, it is designed for the actual measured impedance of the connected drivers. Now we have a conundrum - should we design the crossover network for a voicecoil temperature of 25°C, 80°C, higher, lower? It doesn't matter - at any voicecoil temperature other than that for which the crossover was designed, it is wrong! Bugger! In this respect, you can't win - so I suggest that any PA system intended for high levels must use active crossovers for all drivers. To do otherwise is asking for trouble.
This is especially important for horn compression drivers. A crossover network that's designed to be -3dB at 2kHz with an 8 ohm voicecoil will reduce the crossover frequency to 1.6kHz if the impedance doubles, and the compression driver will also be subject to an additional peak of 1.8dB at ~2.2kHz because the filter is no longer correct. This increases diaphragm displacement and can lead to failures. Likewise, the crossover loading is changed when the midbass driver's impedance doubles, so the crossover alignment is now completely wrong. Where the compression driver is in the same box as the midbass drivers, it can't be expected to remain cool - even if there is no power at all delivered to its voicecoil!
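To see the mechanism, consider the simplest second-order high-pass (series capacitor, shunt inductor, terminated by the driver's resistance), designed here as a Butterworth section at 2kHz into 8Ω. This idealised network is an assumption for illustration and won't reproduce the exact figures above (a real network and driver differ), but it shows the trend: doubling the load resistance doubles the filter Q, shifting the -3dB point and creating a response peak near the crossover frequency:

```python
import math

F0 = 2000.0                    # design crossover frequency, Hz
R_DESIGN = 8.0                 # cold voicecoil resistance, ohms
Q_DESIGN = 1 / math.sqrt(2)    # Butterworth (maximally flat)

# For H(s) = s²LC / (s²LC + sL/R + 1): w0 = 1/sqrt(LC) and Q = R*sqrt(C/L)
w0 = 2 * math.pi * F0
L = R_DESIGN / (Q_DESIGN * w0)
C = 1 / (w0 ** 2 * L)

def gain_db(f, r_load):
    s = 1j * 2 * math.pi * f
    h = (s ** 2 * L * C) / (s ** 2 * L * C + s * L / r_load + 1)
    return 20 * math.log10(abs(h))

# With the design load the response is maximally flat; with a 'hot' voicecoil
# (resistance doubled to 16 ohms) Q doubles and a peak appears near crossover.
peak_hot = max(gain_db(f, 16.0) for f in range(500, 8000, 10))
print(f"Peak with doubled impedance: ~{peak_hot:.1f}dB")
```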
The above should be enough to make anyone think, but it gets a lot worse. In the next section, cone excursion is discussed. Cone excursion is determined by the box tuning and the power delivered to the loudspeaker. As the voicecoil heats up, its resistance goes up as discussed. When modelling the enclosure/speaker combination, the tuning frequency (based on internal dimensions and vent characteristics) is determined to get the optimum performance from the speaker. Again, should the box be modelled at 25°C, 80°C, higher, lower? Yet again, it doesn't matter, because at any temperature other than that for which it was designed, it will be wrong! Double-bugger!
The loudspeaker/enclosure combination modelled in the following section has issues - as will almost any design you can come up with. When the voicecoil temperature and resistance go up, the cone excursion also increases, so even less power can be delivered to the speaker before cone excursion reaches the danger zone. For the JBL 2241H, the tuning changes from being fairly flat down to about 36Hz (-3dB) to having a 4dB peak at 60Hz. If the operator applies EQ to make up for the loss of extreme bass, the speaker will be destroyed unless it is done with extreme care, and with full knowledge of the driver characteristics. We all know how often that will happen - exactly zero times for the life of the system.
None of the above considers the velocity of sound in air or air mass, both of which change with temperature and both of which directly affect the tuning of a loudspeaker enclosure. If the air inside the box is at a temperature of anything other than the design value, the tuning will be wrong, and cone loading may be found to be quite different from that expected. This can easily lead to greatly increased cone excursion and possible loudspeaker damage. There are so many variables that it will be impossible to even try to compensate. While it's theoretically possible to have computer monitoring of all parameters and apply compensation exactly as required, this would be a massively expensive undertaking and is unlikely to happen in the near future.
I mentioned 'stick boxes' earlier, and these are a prime case in point. Many dump a significant amount of the power amplifier's wasted heat into the cabinet, which simply accelerates the cascade of problems described here. The only saving factor is that almost all of these systems overstate the amp power, so the loudspeaker will hopefully have a reasonable safety margin - provided that the speaker's power handling hasn't also been overstated. Certainly, many of the more popular designs are surprisingly reliable - even when used by DJs, some of whom are renowned for their ability to convert sound equipment into scrap. Many other popular designs will simply blow up if pushed hard for extended periods, but there is no way to know which is which. Models change regularly, and a known reliable box today could easily become your worst nightmare tomorrow.
In short, it is worth considering a few major points for any loudspeaker/enclosure design ...
Needless to say, this covers only a fraction of the important considerations for a PA system expected to do anything more than amplify speech to a comfortable sound level. There are a great many other things that demand attention from the designer. Only by optimising a system for the desired parameters will a satisfactory result be achieved, and this invariably means that compromises must be made.
It must be understood that there is no such thing as a 'no-compromise' system - regardless of any marketing claim, they don't exist. The art of system design is knowing the difference between the really important and that which no-one will notice.
A (big) trap is cone excursion. Few people pay a great deal of attention to this, perhaps assuming that the equipment manufacturer will have taken steps to ensure that the cone travel remains within safe limits. From what I've seen of the available powered loudspeaker systems, the manufacturer has actually done nothing at all to limit excursion, and in some cases has incorporated bass boost circuits that ensure that linear travel will be exceeded. This causes greatly increased distortion, and exceeding the linear travel also increases voicecoil dissipation and reduces the cooling effect of the gap. Once the voicecoil has left the magnetic gap, instantaneous efficiency falls to zero, so every single Watt applied in excess of that needed is converted to heat. Without the proximity of the magnetic pole pieces, the voicecoil's temperature rise can be almost instantaneous.
In the case of separate enclosures, there is a minefield of traps for the unwary. There is a lot of information that is simply not published - not anywhere, by anyone. A perfect example of this is one of the more popular 460mm (18") bass drivers - the JBL 2241. These are very impressive drivers in all respects, and for a bass transducer they have a very respectable efficiency. JBL provides frequency response data with the 2241 installed in a 280 litre enclosure, tuned to 30Hz, and this appears to be a good alignment. Since the reference enclosure is described rather well, giving the port dimensions and box capacity, it's not unreasonable to assume that this will work well for normal PA duty for bands or DJ work. XMAX is a healthy 7.6mm, and XDAMAGE is claimed to be 40mm peak-to-peak (20mm peak). Power compression at rated power is 4.3dB - this means that the actual efficiency has fallen from 98dB/W/m to 93.7dB/W/m. To attempt to obtain the maximum SPL expected would require that amp power be increased by around 6dB (because there will be additional power compression at the higher power) - does it sound sensible to hammer a 600W driver with 2,400W?
There is just a tiny little problem, and it's not mentioned in the data sheet. If the 2241 is driven to its rated average power (600W) with an amp having a mere 3dB of headroom (1,200W), the loudspeaker will die. Cone excursion at 45Hz will be a very unhealthy 14.4mm peak (28.8mm p-p) - well in excess of the rated XMAX. At that frequency, the maximum instantaneous power that keeps the driver within its rated XMAX is ... 280W. You can imagine the damage that can be caused by a 1,200W amplifier driven to the onset of clipping. If this causes you some concern, then what happens below the 30Hz tuning frequency should really make you think ...
At a frequency of 24Hz, the damage limit of 40mm p-p is reached, and while this is not especially common with live music, it's easily achieved with recorded music - especially electronic music common for dance music and the like. Even with live music, it is necessary to prevent very low frequency 'transients' from getting to the speakers. These transients can be caused by switching on/off the phantom feed for a microphone, during setup for a variety of reasons, or simply by a bass player damping the strings with the palm of his/her hand. This can create signals as low as a few Hz, and a direct injection from the bass to the mixing console ensures that there will be plenty of level at 10-25Hz.
Figure 1 - JBL 2241H Cone Excursion at 1,200W Input Power
Figure 1 shows the cone excursion (taken from WinISD Pro) of a 2241H in the suggested 280 litre vented enclosure, tuned to 30Hz and driven with a 1,200W amplifier at full power. Quite obviously, something must be done to prevent anything below 30Hz from getting to the speaker, and for this reason, all vented loudspeaker enclosures used for PA work should have active high pass filters prior to the amplifier to prevent excessive excursion. In addition, to prevent driver failure, it is essential to know the power limits for each driver in the system, and ensure that amplifiers are sized accordingly. For the 2241, this indicates a maximum power of around 500W for each speaker - while this is still capable of exceeding the rated XMAX, it remains safely below the damage limit at all times.
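The amplifier-sizing argument can be checked with simple scaling: to a first (small-signal) approximation, cone excursion is proportional to drive voltage, i.e. to the square root of power. This ignores suspension non-linearity and thermal effects, so treat it as a ballpark only; the excursion and limit figures are those quoted in the text:

```python
import math

X_AT_1200W = 14.4    # mm peak excursion at 45Hz with 1,200W (from the text)
P_REF = 1200.0       # reference amp power, watts
XMAX = 7.6           # mm, rated linear limit
X_DAMAGE = 20.0      # mm peak (40mm p-p), mechanical damage limit

def excursion_mm(power_w):
    # Small-signal approximation: excursion scales with the square root of power
    return X_AT_1200W * math.sqrt(power_w / P_REF)

x_500 = excursion_mm(500)
print(f"500W amp: ~{x_500:.1f}mm peak (Xmax {XMAX}mm, damage limit {X_DAMAGE}mm)")
```

With a 500W amplifier the predicted excursion still exceeds Xmax (so distortion rises), but it stays comfortably below the 20mm damage limit, which is the basis of the recommendation above.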
Use of a high pass filter (such as the ESP P99 subsonic filter) is not only highly recommended, it should be considered mandatory. For this driver/enclosure combination, a 30Hz filter is perfect, and will eliminate excess cone excursion at very low frequencies - from any source. This is especially important if the system is likely to be used for DJ duties - turntable rumble and very low frequency feedback through the turntable suspension can generate awesome amounts of low frequency energy. However, even if no DJ will ever use the system, the filter should still be considered absolutely essential.
There's a further benefit too. By removing those frequencies that can't be reproduced at any useful SPL, the system will be a lot cleaner, with significantly less distortion. When the voicecoil leaves the magnetic gap, it cannot respond to any electrical signal until it returns. Without the magnetic field, the speaker essentially clips - just like an amplifier that's overdriven. Since very low frequencies aren't pushing the coil out of the gap at regular intervals, overall sound quality and sensitivity are improved and it will often be possible to get the same overall SPL with lower distortion and less power.
While I used the JBL 2241 as an example, the same principles apply to all loudspeakers. I used that one because I already had the details, and had modelled it previously. I also know that repairs to these drivers are common, and though few speakers are actually 'burnt out' due to overheated voicecoils, voicecoil and suspension damage are fairly common. Many other (cheaper) drivers are discarded when they fail, because recone kits aren't available and/or repairs are expensive. Experience with both construction and modelling using other speaker drivers tells me that a great many (probably most) of the drivers available today will show an almost identical trend. The actual figures will be different, but the principle is unchanged. One PA hire operator that I know of destroyed something like thirty 18" drivers - a very expensive exercise, to put it mildly. The combination of excessive 'headroom', highly compressed programme material and no high pass filters pretty much guarantees this result.
Note Carefully: For reasons that remain totally obscure, for some time JBL reversed the polarity of their drivers. One expects that a positive voltage on the red (or +) terminal will cause the cone to move outwards, but JBL reversed this so positive on the black terminal causes the cone to move outwards. Incorrectly phased drivers in the same physical enclosure will be damaged very easily, because there is no loading on the cone. Newer drivers are phased correctly.
Even drivers in separate boxes can be damaged if they are wired out of phase, especially if placed near each other. In this case, the bass output will be much lower than expected, so the operator will increase the power, likely to the point where the amp and speakers will be pushed to (or beyond) their limits. That is exactly what happened to the hire operator mentioned above. The power amps used had a bridging push button that switched half the subs out of phase. No-one noticed, so the amps were driven to the max, and one by one the subs failed until they were all defunct.
Loudspeaker manufacturers have all followed similar philosophies over the years. JBL (and before that, Altec) has always been a leader, with many of the smaller companies adopting the same general ideas. Users want to use more powerful amplifiers, so speaker makers produce loudspeakers that can handle more power, but usually at the expense of outright efficiency. Because no-one wants to have to transport really large enclosures, speaker makers modify the design to allow good (or at least acceptable) bass response in a smaller cabinet. Again, efficiency suffers because the cone must be made heavier to allow reasonable bass response in a small cabinet, so more power is needed.
About the only saving grace for subwoofers is that their impedance is much higher than nominal over much of their operating range. So, while the speaker may be connected to a 1kW amplifier, the actual average power is somewhat less than the maximum. If the impedance increases by a factor of 5, then power at that frequency is reduced by the same ratio. A 1kW (8 ohms) amp will deliver 200W into 40 ohms at speaker resonance. This is real and important, but don't assume that the reduced power at resonance reduces the cone excursion - the graph shown in Figure 1 refers to amp power, but in reality it's just based on the voltage from the amp.
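The arithmetic here is simple enough to sketch. A minimal calculation, assuming the amplifier behaves as a constant-voltage source (the function name and the 1kW/8 ohm figures are just the example from the text):

```python
def power_into(z_ohms, rated_power_w=1000.0, rated_load_ohms=8.0):
    """Power delivered into z_ohms at the amp's maximum output voltage.

    An amplifier is (approximately) a voltage source, so P = V^2 / Z.
    """
    v_rms = (rated_power_w * rated_load_ohms) ** 0.5  # ~89.4 V RMS for 1 kW / 8 ohms
    return v_rms ** 2 / z_ohms

print(power_into(8))    # 1000 W at the nominal impedance
print(power_into(40))   # 200 W at a 40 ohm resonance peak
```

The same voltage that produces 1kW into 8 ohms only manages 200W into the 40 ohm impedance peak at resonance, exactly as stated above.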
While many makers seem to have broken the laws of physics if their advertising material is to be believed, this hasn't actually happened. What we have now are relatively small boxes that can produce a lot of SPL, but at a cost that many of the old-timers consider unacceptable (and yes, I'm one of them). No-one would ever claim that the PA systems used in the 1970s and even up 'till late last century were all 'high fidelity', but there were many systems that would cheerfully annihilate the vast majority of those around today. Some still exist, but because they are physically large and often time-consuming to set up, no-one is really interested any more. Systems used to have power ratings of around 1-2kW in total, but the use of horn loaded enclosures and efficient drivers meant that they were (extremely) loud, and if used properly very clean and punchy. Power demands were relatively modest, but it was always easy to get more than enough SPL in all but the largest venues with the systems of 'old'. Transport was difficult though, because of the size of the enclosures - which were also very heavy.
Today, power is cheap and transport is expensive, so small and light is preferred by most. A complete system that will fit into the back of a station wagon or ute is a better proposition for most operators than a system that fills the back of a reasonable size pantech truck. It is still a major compromise though, and the same level of performance cannot be expected from the smaller systems common now, regardless of claimed power. Remember that power compression becomes a major problem with any loudspeaker driven from an amp rated at more than ~200W (40V RMS into an 8 ohm load).
Now, I must warn readers at this point. Much of the remaining material is not especially complimentary of systems, both old and new. It is important to point out that what I've written is mostly fact, but also includes opinions. Having worked in the industry for many years, it's impossible not to have opinions, some of which will inevitably be biased. I shall leave it to the reader to figure out which bits are biased. Just about everything written can be easily proved, but most information from a manufacturer's website can be considered to be only what it is ... information from a manufacturer's website. Despite many attempts to convince you otherwise, very little factual info will be found on these sites - there are exceptions, but it's not always easy to pick which is fact and which is fiction. One clue can be found - if the information seems to be telling you things you'd rather not know (things that you are surprised that a manufacturer would admit to), then it's probably real. It must be noted that I'm not trying to sell anything to anyone (well, apart from my subsonic filter board), whereas that is the primary purpose of any website run by a manufacturer or distributor.
During research for this article, I came across some interesting material from one of the manufacturers of popular PA equipment. Mackie published an article 'Will The Real Maximum SPL Please Stand Up?'. According to the article, a common method of calculating the maximum SPL from systems is too bizarre to believe. If a loudspeaker's sensitivity is (say) 97dB/W/m and it is powered by a 200W amplifier (23dB above 1W), the maximum is 97dB + 23dB = 120dB SPL. For some utterly incomprehensible reason, it's common to add an additional 3dB to account for the 'crest factor' of a sinewave. What? This is complete bullshit! A sinewave has a crest factor of 3dB alright, but you can't just add that on to the SPL figure because it makes the specification look better (and that is all it does). The crest factor of a sinewave has absolutely no relevance to anything. I'm in complete disbelief that anyone would be so bloody-minded as to try to pretend that this is in any way real. At least Mackie takes the trouble to do a band limited pink noise test to determine SPL, rather than a nonsense calculation.
In reality, for any kind of meaningful calculation, you need to subtract about 1dB for each 100W of claimed power handling - if the manufacturer's data doesn't give a power compression figure. On that basis, the speaker and amp referred to would manage not 123dB as 'calculated' using the nonsense explained above, but around 118dB SPL (at 1 metre). This is much more realistic. However, some claims are quite obviously made up, and have no basis in reality whatsoever. Like PMPO, some figures are simply invented to impress, but are completely meaningless.
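The more realistic estimate can be sketched as follows. The ~1dB per 100W allowance is the rule of thumb from the text; the helper function itself is just an illustration, not a standard formula:

```python
import math

def spl_estimate(sensitivity_db, amp_power_w, compression_db_per_100w=1.0):
    """Naive sensitivity + power gain, less a rough power-compression
    allowance (~1 dB per 100 W of power handling, per the rule of thumb)."""
    gain_db = 10 * math.log10(amp_power_w)                 # dB above 1 W
    compression = compression_db_per_100w * amp_power_w / 100.0
    return sensitivity_db + gain_db - compression

# 97 dB/W/m speaker on a 200 W amplifier:
print(round(spl_estimate(97, 200)))   # ~118 dB SPL at 1 m, not the 'marketing' 123 dB
```

Note the deliberate omission of the bogus 3dB sinewave 'crest factor' - it has no place in the calculation.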
What you will not hear often but is nonetheless quite true, is that line arrays are all about sight-lines. Anything that gets in the way of the concert-goer's view of the stage cannot be tolerated, because it means some seats will have to be sold cheaply, or (horror of horrors!) not at all. Concerts 'in the round' are now popular too, and all pretense at stereo is gone. The whole system is mono, and everyone gets sound that is hopefully 'acceptable' (translation - "generally awful, but not so much so that people will complain"). I can no longer go to concerts, because the sound is so poor (and/or loud!) that I find the experience thoroughly hateful.
Almost all large concerts now are using line arrays for the PA system. These have become very popular, and even very small ones are available for smaller venues. While I know that many people will disagree, I consider the line array to be an unmitigated disaster in most cases. Those that I've heard all sound (often radically) different from each other, but they all share one thing - they generally sound bloody awful. Coupled with bizarre thinking about how they should be set up in the first place, the only ones I've heard so far that sounded even passable were in relatively small clusters (4 per side), and were situated high above the stage area. Contrast this with the glowing comments you may see elsewhere - a lot of people think that the line array is the best thing since sliced bread, and will wax lyrical about how they have solved all PA problems.
Bollocks! While there is certainly some real science involved (a much touted 'advantage' over earlier systems), for the most part the science has not made a system that's nice to listen to. Line arrays are fairly quick to set up, and may be much faster than the horn loaded systems they replaced. They are supplied with flying mounts, and even a large system can be hoisted up into position in a few hours. They are much smaller than the older systems, so are easier and cheaper to transport. They might even sound better than a horn system in some highly reverberant venues, but mostly they don't. They are comparatively inefficient, so it's not unusual for even a mid-sized system to be rated at 20kW or more. It is not at all uncommon for the 20kW of amps to be driven into clipping, making the relatively high distortion levels even higher.
Because everything is running at the limits, comprehensive monitoring is needed or loudspeakers will fail at an alarming rate. Many amplifiers now have remote monitoring facilities for power, temperature, speaker load, etc. for just this purpose. Some of the descriptions of, and explanations for, line arrays that I've read are just total rubbish - they are wrong in almost all significant respects. Manufacturers' literature is often no better - there is no science in having a marketing executive write 'brilliant' sales copy. They want to sell the product, and don't give a rodent's rectum about the facts. While this may be less of a problem for very expensive professional equipment where prospective owners may want to test the claims before buying, gear that's affordable for bands to use is subject to little science and lots of hype. Something that's always worth remembering is ...
Wavelength = speed of sound / frequency   (λ = c / f, or λ = 345 / f with c in metres per second)
Almost all line array systems require specialised equalisation, high slope crossovers (typically 24-48dB/octave) and some method of speaker monitoring to prevent overdriving the speaker drivers. This adds considerable complexity, and it's no longer possible for a few old-time 'roadies' (road managers) to set up the system. Some have dedicated software or spreadsheets to calculate the power distribution and array shape for a given venue. While this is definitely real science, it doesn't seem to have produced better sound for the most part.
The biggest single problem is that the effective line length varies with frequency. At 10kHz, even a couple of cabinets will easily exceed a line length of 10 wavelengths (at which point the line can conceivably be considered 'infinite'). At 1kHz, wavelength is 345mm, so the line must be at least 3.45 metres high to be considered close to an infinite line. At 100Hz, we need a line 34.5 metres long for the same effect, but needless to say this is usually out of the question. Tapered or shaded high frequency drivers can be used to restrict the effective or apparent length of the HF line as frequency increases. While this will reduce lobing and may prevent some of the problems, it is unlikely that the compression drivers used could keep up with the rest of the system.
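These line-length figures fall straight out of the wavelength formula given earlier. A quick check, using c = 345m/s and the 10-wavelength criterion from the text:

```python
C = 345.0  # speed of sound in m/s, as used throughout the article

def min_line_length(freq_hz, wavelengths=10):
    """Line length needed to look approximately 'infinite' (~10 wavelengths)
    at the given frequency."""
    return wavelengths * C / freq_hz

for f in (10_000, 1_000, 100):
    print(f"{f:>6} Hz: {min_line_length(f):.2f} m")
# 10 kHz needs only 0.35 m, 1 kHz needs 3.45 m, and 100 Hz needs 34.5 m
```

The required length grows tenfold for every decade drop in frequency, which is exactly why no practical array behaves as a true line across the whole audio band.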
For reasons most obscure, most designers claim that line arrays should face straight out, with no toe-in. This creates a hole in the middle of the venue where even a small head movement causes a most unpleasant phase effect, and also precludes any possibility of good stereo imaging. Because the treble 'line' is (very) long compared to wavelength, it delivers a lot more energy at middle distances than the midrange and bass, and tends to tear your ears off - a really hard, metallic sound that is utterly unrealistic in every respect. I would equate the sound 'quality' of most that I've heard with a cat farting into a milk bottle. In general, loudspeaker systems should always be pointing towards the middle of the listening space, preferably to a point about 1/3 of the room length. This depends on the room, and assessment has to be made on a case by case basis.
When speakers are angled so they point towards the front-middle of the auditorium, this is referred to as toe-in. By doing so, reflections from side walls are reduced, which in turn can help reduce room echo and reverberation. There's no fixed rule, but ideally, the centre lines of each speaker stack (or array) should intersect well before the middle of the auditorium, but even lesser amounts of toe-in invariably sound better than having the stacks/line arrays facing straight forwards. If you really want to mess up the sound and ensure that it's truly horrible, splay the speaker stacks as shown below.
Figure 2 - Correct and Incorrect Speaker Positioning
The simple trick of using toe-in has been used for hi-fi from the earliest days of stereo, and I can't think of any serious listener I know who would use a system set up with the boxes facing straight out, and none would tolerate the speakers being splayed. Regardless of anything that may be claimed, the vast majority of speaker systems should always have toe-in. This isn't a new problem - people have been setting up PA systems without toe-in for decades, and for decades the systems have suffered from the awful 'hole in the middle' syndrome. It's not uncommon to see large line arrays set up with two columns facing straight out, and another pair splayed to cover the side areas. In a sense this is fair - everyone attending hears bad sound, so no-one is disadvantaged more than anyone else.
Unfortunately, there are a great many people around today who have never heard a decent sound system. Home systems that consist of little cubes and one-note 'subwoofers', MP3 players (with a cheap docking station perhaps) and generally pathetic live music systems have raised an entire generation of people who seem to think that what they have been listening to is 'good'. Anything that sounds different is likely to be considered 'bad' - and that probably includes dynamic range. Almost nothing on CD or live is free of massive amounts of compression - everything is compressed to within an inch of its life, and everything is the same volume. Real bass (below 40Hz), good stereo imaging and overall clarity are generally missing from all the music sources and playback systems that are available for reasonable prices, and even some big-name (and comparatively expensive) systems are woeful. It's pretty hard for anyone to realise that a PA system sounds like pox if that's all one has ever listened to.
If the reader is slowly getting the impression that I don't like line arrays, then I have managed to get the point across. In the early days of PA (during the 1960s), the line array was all many of us had - typically 4 x 300mm drivers in a column enclosure. These boxes were almost always simply called 'columns' - the term 'line array' is much more recent. These columns didn't suffer from the same ills as the new versions - they had problems of their own though. It was not uncommon to find twin-cone drivers, with what was sometimes called a 'whizzer' cone - a small additional cone directly attached to the voicecoil former that made a reasonable effort to reproduce the high frequencies. While there were some issues with this approach, they were generally used only for the vocals, and managed to do a reasonable job at the time. Some makers (such as WEM in the UK) traditionally used a small horn as well as the cone drivers - this fixed some problems and created others. In some cases, a second set of column speakers was used, and was occasionally powered by a separate mixer/amplifier for drums and guitar. Most column speakers were rated at about 100-200W, and used 4 x 25W (or 4 x 50W) speakers, and amplifier powers of 100-200W were as much as could be economically obtained at the time.
One issue that line arrays have 'solved' is the ability to deliver acceptable sound at a consistent level to every seat in an auditorium. It's not about sound quality, it's about economics, sight-lines (sell more seats), setup and tear-down time, and making sure that everyone can hear the band. This minimises complaints (which are costly), and gives the best possible return to the promoters. That sound quality is no longer really considered is demonstrated simply. Look at the performance spaces that are being used now. The goal is to get as many seats as possible into the auditorium, including seats where one's view of the performance is insufferably bad. Stuck against a wall with a view of one end of the stage is not a way to see the performance, and it's quite obvious that the PA cannot possibly create a realistic stereo image if you only have one side of the speaker stack to listen to. I choose not to purchase tickets if that's the only option left - there's no point, and I'd rather spend the money on CDs.
Interestingly, the vast majority of comments about high quality sound come from the manufacturers, distributors and PA companies who have invested in line arrays, but very little from anyone else. Everyone I've spoken to thinks about as highly of the line arrays they've heard as I do - they are basically pretty awful. While they perform pretty much as claimed, this in no way should be taken to imply that quality is part of the equation. Line array makers like to point out how their products are superior to 'conventional' (i.e. horn loaded) systems because of their directionality, but they fail to mention that the horn systems were often just as directional. They also may allude to the lobing problems of conventional PA systems, but completely fail to point out that the line array has its own lobing problems.
Figure 2a - Lobing Created By Unequal Distance From Listener To Drivers
You can't place two drivers handling frequencies up to 1kHz or so 500mm apart and not have lobing issues. At an angle that causes the distance between the drivers to be the same as one half wavelength, there is a deep null in the frequency response. Lobing is dependent on the distance between the drivers and the frequency, and causes a succession of peaks and nulls as the listener moves in front of the speaker system. Using toe-in can often help enormously, because when the listener is in a null zone for (say) the left stack/column/line, it is very unlikely that s/he will also be in a null for the same frequency from the right stack. Lobing of this nature is a fact of life, and we've always had issues whenever the sound source width or height approaches ½ wavelength at any frequency.
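The angle of that first null follows from simple trigonometry: the path difference between two sources spaced d apart is d·sin(θ), and a deep null appears where that equals half a wavelength. A sketch, treating the two drivers as point sources (an idealisation) and using the 500mm/1kHz example above:

```python
import math

C = 345.0  # speed of sound, m/s

def first_null_angle(spacing_m, freq_hz):
    """Off-axis angle (degrees) where the path difference between two
    drivers equals half a wavelength, giving the first deep null.
    Returns None if the spacing is under half a wavelength (no null)."""
    half_wavelength_ratio = C / freq_hz / (2 * spacing_m)
    if half_wavelength_ratio > 1:
        return None
    return math.degrees(math.asin(half_wavelength_ratio))

# Two drivers 500 mm apart at 1 kHz:
print(first_null_angle(0.5, 1000))   # ~20 degrees off axis
```

Closer spacing pushes the first null further off axis (or eliminates it entirely once the spacing is under half a wavelength), which is why driver spacing matters so much near the crossover frequency.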
Some line arrays alleviate this problem to some degree by having the drivers as close to the HF horn as possible, sometimes angled inwards to form a simple horn as shown in the inset of Figure 2a. This is much better, and means that lobing is minimised. There is only a limited range of angle where both drivers are audible - however, the HF horn still needs to be crossed over at a relatively low frequency for this to be fully effective. Half a wavelength at 1kHz is only 170mm - it becomes readily apparent that there is lots of scope for problems. Because most systems use a fairly short diffraction horn, the ability to cross to the horn at a low enough frequency to prevent lobing is generally limited. Note that lobing is not limited to the horizontal plane - it also affects the vertical plane - lobing occurs whenever there is a path length difference between the listener and all audible drivers. Even if the high frequency line were absolutely continuous and could present a perfectly cylindrical wavefront, lobing will (and does) still happen with the horns as well. Some arrays manage this reasonably well, others don't.
There is almost an infinite variety of line array box layouts from almost as many manufacturers. Being the 'flavour of the month', any PA operator who doesn't use them is likely to miss out on work, because almost everyone has been convinced that this is the only way to go.
This is patently untrue and misleading, but it's extremely difficult for any operator to convince a client to use his/her ears and choose the system based on merit. It's even harder to convince a promoter, since the main thing that drives their agenda is the financial return. Most neither know nor care which PA sounds the best (or even better than something else), they are going to allocate the job based on a basic specification and price. A system that takes longer to set up (or horror of horrors - blocks the view of some punters so certain seats can't be sold) will never get a look-in, regardless of how good it might sound.
I've not had the opportunity to mix a live band through a line array, but I suspect that it would be possible to get a good sound from an average size array. If sound quality is the goal, then other factors must be sacrificed - this is the ever present rule of compromise. Sound quality can almost certainly be optimised, but at the expense of comparable SPL at every seat. Some seats simply cannot be used if sound quality is the target, because they are too far off to either side of the stage. It is essentially impossible to provide genuine high quality audio for any listener who is not between the speaker stacks.
Much was made about lobing in the above, but as noted, it happens with all PA systems that use more than one loudspeaker. The simple fact is that only a point source is free of lobing, and this cannot be achieved with any driver that exists. What is a point source? This is a theoretical sound source that is very small compared to any wavelength within the audio range. All sound is radiated from this one point in space. In the physical world, the point source radiator does not exist.
If the system consists of a single stack on each side of the stage, it is almost possible to avoid lobing, by crossing over each driver before its dimensions become greater than ¼ wavelength. This is easy enough for low frequencies, but becomes progressively harder as frequency increases.
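The quarter-wavelength rule is easily turned into numbers. A rough illustration (the driver sizes below are arbitrary examples, and the effective radiating size of a real driver is somewhat smaller than its frame diameter):

```python
C = 345.0  # speed of sound, m/s

def max_crossover_freq(driver_size_m):
    """Highest frequency at which a driver of the given effective size
    remains smaller than a quarter wavelength (f = c / 4d)."""
    return C / (4 * driver_size_m)

print(round(max_crossover_freq(0.38)))   # 380 mm driver: ~227 Hz
print(round(max_crossover_freq(0.10)))   # 100 mm driver: ~860 Hz
```

A 380mm driver is already a quarter wavelength across by a little over 200Hz, which shows why the rule is easy to satisfy in the bass but rapidly becomes impractical as frequency rises.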
High frequency horns suffer from lobing, because they are invariably larger than one wavelength at anything above a few kHz. Since a single stack of single drivers can never produce enough SPL for even a medium sized indoor gig, it is necessary to use more speakers. As soon as additional drivers are introduced, lobing becomes an issue - there is no answer, unless there is only one member of the audience, and that member is nailed to the floor. I cheerfully accept that this is not generally in anyone's interests (especially the poor bugger nailed to the floor), so we have to accept that ...
As already pointed out, to some extent lobing can be mitigated by using toe-in for the PA stacks, and this is necessary (and works) regardless of the type of system. We also have to accept that audience areas will inevitably spread beyond the stage width, so only a relatively small number of punters (patrons) will hear optimum sound quality. Likewise, it is these same punters who have the optimum view, and it is unrealistic to expect otherwise. Big video screens give vision to those who can't see the stage properly, but there isn't much that can cure the audio problems.
Many large concerts use delay stacks - separate PA systems further back into the audience areas that have the audio signal delayed to match the distance from the main PA. For example, if these stacks are 100m from the main PA, the signal is delayed by 290ms so the sound from the main PA and the delay stacks arrive at the same time - to a listener who is further back from the stage than the delay stacks (say 120 metres from the main PA). It's important that delay stacks have very little radiation from the rear of the boxes (which are towards the stage), and if subs are used these really do need to be directional. While folded horns are probably the best, active pattern control by means of additional drivers driven out of phase will be the method of choice these days.
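The delay itself is nothing more than distance divided by the speed of sound. A minimal sketch using the figures from the example (real systems also add small trims by ear or by measurement):

```python
C = 345.0  # speed of sound, m/s

def delay_ms(distance_m):
    """Electrical delay needed so a delay stack distance_m from the main PA
    stays time-aligned with it for listeners behind the delay stack."""
    return 1000.0 * distance_m / C

print(round(delay_ms(100)))   # ~290 ms for a stack 100 m from the main PA
print(round(delay_ms(10)))    # ~29 ms for a 10 m path difference
```

The second figure shows how little path difference (10m) is needed to produce the ~30ms misalignment discussed next.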
As a mental exercise, it's worth thinking about the effect where a listener is slightly off axis to both the main and delay stacks, but can hear both clearly. What will be the effect if the time difference between the sound arriving from each is around 30ms (that's a path length difference of only 10 metres)? If you've not heard it, the effect is best described as 'interesting' - not quite an echo, extremely poor articulation, and very odd frequency response.
The simple reality is that it is unrealistic to expect that everyone in a venue will hear perfect sound, unless every punter is handed a pair of headphones as they come in. While attractive from a 'perfect sound' perspective, it is not an idea that's likely to find favour for very large (or any) concerts. Once we look at the problem from a realistic perspective, the line array becomes a little more attractive, however it is still critically important that the effective line length vs. frequency remains relatively constant. The biggest problem (referred to at the beginning of the Line Array section) is still at that critical distance from the array where the sound is decidedly 'top heavy' and tries to rip your ears off. This happens because the line array is very long compared to wavelength at high frequencies, but is progressively shorter as frequency is reduced.
To some extent, the standard 'J' curve that's applied to line arrays will ensure that this effect is minimised, so those who are relatively close to the system hear (at most) perhaps two sets of boxes in the array, and the number of audible boxes increases as one moves further away. Whether or not this actually solves the problem is up to the skill of the operators and those who install the system.
In essence, there is no ideal system - every approach has problems, and it's up to the sound engineers and promoters to determine the ideal system for each venue. Present thinking is that line arrays should be used for everything, but this is simplistic and unrealistic. Many venues would be served far better by a traditional horn system.
The column speaker was popular for quite some time. Larger ones like those described above were the most common for live performances, but for popular groups with lots of screaming fans, it was inevitable that no-one actually heard the band because the systems were unable to achieve the SPL needed to drown out the screaming. This was highlighted in 1964 during the Beatles tour - Sydney Stadium could accommodate 12,000 fans, but no-one had a PA system that was big enough. The entire PA system consisted of a couple of column speakers and a mixer/amplifier that was probably no more than 120W or so. These systems were the mainstay of nearly all bands and even some major tours all over the world until the late 60s and early 70s, when things started to change. It's worth noting that column speakers are still used sometimes, especially where only speech reinforcement is needed. Churches commonly use column speakers for sound reinforcement, because they are reasonably directional.
It was well known by the 1950s that columns cause lobing and interference patterns in the vertical plane, so to minimise this some columns were 'tapered'. Not in the physical sense, but electrically. The centre speaker would be full range, and each driver above and below was driven via a low pass filter, continued as necessary depending on how many drivers were used. This made the column appear to have the same acoustic length (in wavelengths) for the frequency range of interest. Doing the same with line arrays might solve some of the inherent problems, but the high frequency horns and drivers are almost certainly far too wimpy for just one or two of them to be anywhere near loud enough to be heard.
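A frequency-tapered (or 'shaded') column of this kind can be sketched as follows. The driver spacing and the one-wavelength target are assumptions chosen purely for illustration, not values from any particular design:

```python
C = 345.0                  # speed of sound, m/s
SPACING_M = 0.35           # centre-to-centre driver spacing (assumed)
TARGET_WAVELENGTHS = 1.0   # keep the active line about one wavelength long (assumed)

def lowpass_cutoff(pair_index):
    """Low pass cutoff for the pair of drivers pair_index positions out from
    the centre (pair 0 is the full-range centre driver, which gets no filter).
    Each pair is rolled off above the frequency at which the line out to that
    pair would exceed the target length in wavelengths."""
    if pair_index == 0:
        return None  # centre driver runs full range
    active_length_m = 2 * pair_index * SPACING_M
    return TARGET_WAVELENGTHS * C / active_length_m

for i in range(4):
    print(i, lowpass_cutoff(i))
# Pair 1 is rolled off above ~493 Hz, pair 2 above ~246 Hz, pair 3 above ~164 Hz
```

The outermost drivers only contribute at the lowest frequencies, so the column's acoustic length (measured in wavelengths) stays roughly constant over the band of interest.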
Interestingly, there was very little cross-discipline activity in the early days of sound reinforcement for 'rock' bands. Extremely powerful audio amplifiers were in common use for AM radio broadcast, but none was ever used to drive loudspeakers (other than for testing, research or fun). No loudspeaker existed that could handle more than a few Watts, so maximising efficiency was very important. Until the late 1960s, none of the cabinet designs used for movie theatre sound reproduction were even considered for live sound. Each of the various audio fields tended to keep strictly to itself, so no-one learned much from anyone else. At that time, most amplifiers were valve (vacuum tube) based, and the maximum power that could be obtained from a portable system was about 200W. If more power was needed, slave power amps were sometimes used to drive more speaker cabinets. While more powerful amps existed, they were fairly uncommon.
Many concerts in the late 1960s used multiple columns powered by multiple amplifiers. This created a distributed source that did very interesting things to the sound, depending on where you were standing in relation to the speaker arrays. None of this even approached high fidelity, but the excitement of a live band or concert usually made up for the poor sound quality. With popular groups, the fans would still easily overpower the PA system anyway, so sound quality wasn't much of an issue. In this respect, little has changed.
5.1 - Horn Systems
The new systems that emerged in the 70s typically used fully horn-loaded designs. W-bins were adapted from movie theatre designs for the bottom end - the 'adaptation' was to make them small enough to move around, but that reduced their bass response. The W-bin was a folded horn bass speaker, and was much loved by a lot of bands because of the great chest compression it produced with the kick drum. Most struggled to reproduce anything below around 70Hz at significant sound levels, but this was better (louder) than anyone had heard before. There were various horn loaded boxes used for midrange, but like the folded horn W-bin, most were adapted from (mainly) Altec Voice of the Theater™ designs, as well as various JBL and RCA theatre systems. In many ways this was unfortunate, because the old theatre systems were engineered mainly for speech clarity in a theatre, and were based on the most efficient loudspeakers available. Power (and audio bandwidth) was severely limited, so the loudspeaker had to make up for the lack of electrical power. Fidelity was usually pretty woeful in theatres at that time (and in many cases still is), but if the audience could understand the dialogue then that was the best that could be hoped for.
Many quite large concerts (such as Woodstock in 1969) used remarkably little power. The Woodstock system is said to have used 10 x McIntosh MI-350 mono valve amps - a total of 3,500W. There is some disagreement on this, and surprisingly little real information. Many of the early (even quite large) systems only used about 1500W in total, and this was generally far more than could be obtained economically from any valve amps that were available at the time. Relatively large transistor amps became common in the late 60s (the Crown DC300/DC300A was rated at 150W into 8 ohms, but gave closer to 200W). These were only possible after various semiconductor manufacturers had perfected power transistors that were capable of handling significant voltage and current. It wasn't until around 1970 that these new devices became available at prices that mere mortals could afford, but once their price came down to something tolerable, things changed forever. Before this, the only (cheap) readily available high power transistor was the venerable 2N3055. In the early 70s, I built guitar amps and (mainly column) PA systems, and the only high-power transistor I could get at the time was the Solitron 97SE113 - now long gone, but not forgotten. The release of the Crown DC300 power amp in 1967 (closely followed by the DC300A, Phase Linear 200, 400 & 700 and a few others) signalled a new opportunity - plentiful power.
There were issues faced by these early systems that were not understood by many of those who put systems together. The theatre systems were engineered for particular drivers, but few people ever made changes to the design to suit the more powerful loudspeaker drivers (which behaved very differently) that became essential to get the SPL needed. As a result, many of the horn enclosures used simply didn't work properly, but they made up for any lack of fidelity by being far louder than anything that had come before. Boxes such as the Altec A6 or A7, or the JBL 4560 were stacked side-by-side with radial, sectoral or multicell horns on top. No-one seemed to notice that this arrangement caused huge phase and frequency response anomalies. Compression drivers were often used in pairs on a 'Y' throat adaptor to get more SPL (which usually didn't work at all well). Even today, many people don't seem to realise that a compression driver on a horn can only achieve an undistorted SPL that is based on the peak pressure at the throat. Small throats are necessary for good high frequency reproduction, but will have problems if driven too hard. A 25mm (1") throat horn simply cannot go loud enough before serious distortion if it is expected to keep up with perhaps a pair of high efficiency 380mm horn loaded midbass drivers. The reason is largely that air has a non-linear relationship between pressure and volume, so adiabatic compression and rarefaction can only approximate a linear function over a very limited range. A larger throat allows higher SPL, but reduced extreme high frequency response - one of many compromises.
For what it's worth (and because you'll find very little on the Net), the maximum acoustic power into the throat depends on several factors. First is the relationship of actual frequency to the horn's cutoff frequency. As the ratio of f/fc (frequency divided by cutoff frequency) increases, so does distortion for a given acoustic power per unit of throat area. A sensible upper limit for throat acoustic power is around 6-10mW/mm², meaning that a 25mm (1") throat should not be subjected to more than 3-5W. A 50mm throat can take 4 times that power, or 12-20W acoustic (see graph [ 1 ]). The amount of acoustic power that can be accommodated decreases as frequency increases. For horns intended for operation from (say) 800Hz and above, the normal rolloff of amplitude with frequency (as found in most music) means that the acoustic power also falls with increasing frequency.
If the conversion efficiency of a compression driver is (say) 25%, this means there is absolutely no point supplying more than 20W (electrical) to a compression driver on a 25mm throat, or 80W for a 50mm throat, allowing for a sensible distortion limit of 2%. Past a certain limit (which varies with frequency vs. horn cutoff), supplying more power creates no increase in SPL, but simply creates more and more distortion. The maximum power must be reduced as frequency increases. CD horns require HF boost, so can easily be pushed much too hard at high frequencies, resulting in greatly increased distortion.

Quite obviously, any horn that has a small throat must have limited power capability, and providing amplifiers that are (much) larger than needed for 'headroom' is a completely pointless exercise. It is both convenient and accurate to consider the effect as 'air overload'.

According to a technical note from JBL [ 2 ], the situation is actually worse than the graph shows. A 200Hz horn at 10kHz can readily generate 48% second harmonic distortion, with as little as 2.5W (electrical) input - a mere 0.75 acoustic Watts. As noted in references 1 and 2, this information was first determined in 1954, but over time seems to have been lost. As you can see, I'm determined that this will not happen.
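The arithmetic above is easy to check for yourself. This little sketch uses the figures quoted in the text - a throat limit of around 6-10mW/mm² and an assumed 25% conversion efficiency - so the function name and default values are mine, not from any standard.

```python
import math

def max_driver_power(throat_diameter_mm, power_density_mw_mm2=10.0, efficiency=0.25):
    """Maximum sensible electrical input for a compression driver, based
    on an acoustic power-density limit at the horn throat.

    throat_diameter_mm   -- throat diameter in millimetres
    power_density_mw_mm2 -- acoustic limit, ~6-10 mW/mm^2 (upper figure used)
    efficiency           -- assumed driver conversion efficiency (25%)
    """
    area_mm2 = math.pi * (throat_diameter_mm / 2) ** 2
    acoustic_w = area_mm2 * power_density_mw_mm2 / 1000.0   # mW -> W
    return acoustic_w / efficiency                          # electrical watts

# A 25mm (1") throat: ~491 mm^2 of throat area, ~4.9W acoustic, ~20W electrical.
print(round(max_driver_power(25), 1))   # ~19.6
# A 50mm (2") throat has four times the area, so ~80W electrical.
print(round(max_driver_power(50), 1))   # ~78.5
```

Anything beyond these figures just buys distortion, not SPL - which is the whole point of the 'air overload' argument.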
With the top end being handled by horns with compression drivers, the next question was "which horn?". The well-heeled would often use multicellular horns while others used either sectoral or radial horns. In some cases you'd see a combination of different sized (and different types) of horns, which may or may not have been crossed over at different frequencies, and may or may not have been compatible. All worked well enough, but the multicell horns still have a place in the hearts of the many who used them - there is just something almost magical about the multicell that no other horn design can match. A web search reveals that there is great confusion in some quarters - some seem to think that a sectoral horn is multicell - no it isn't - they are quite different. The biradial horn (sometimes known as the 'bum horn' because it looked rather like someone's backside) saw little use for live sound. This was an attempt at what became known as CD or constant directivity horns. While a good idea in theory, they usually don't load the driver properly. This means that the compression driver must be de-rated or it will fail due to over-excursion. As noted above, CD horns require high frequency boost to maintain flat response, and this can lead to excessive distortion at the higher frequencies.
These high frequency horns actually came in many different forms. Sectoral, radial, multicell, diffraction, horns with acoustic lenses, 'bullet' tweeters, ring radiators - the list is long and diversity is great. Materials varied too. Aluminium castings were common, but many radial horns were timber, and later came moulded fibreglass and various other plastic materials. Every manufacturer claimed to make a 'better' horn than the competition. In some cases the difference was clearly audible at typical showroom demonstration levels, but at 120dB SPL, no-one could tell the difference.
In many cases, people used to refer to horns used with compression drivers as being either 'long' or 'short' throw. The theory was that long horns had greater directivity, so could reach the back of an auditorium whereas a short horn could not. This was mainly nonsense, because the directivity is determined largely by the shape of the mouth, and the length (and mouth area) determine the lowest frequency at which the horn can provide proper diaphragm loading. Still, it was a myth that was almost impossible to get rid of at the time, and it still persists. The horn arrangement that gave the best control of directivity was always the multicell, but they were always the most expensive of the many different types. Essentially, one can classify almost any horn as 'long throw', because they have controlled directivity. For good dispersion at close range, a horn acoustic lens (preferable - mostly made by JBL) or diffraction horn (not so preferable IMO) is a better proposition. Line arrays generally use diffraction horns.
Figure 3 - Altec Multicellular Horn
Some systems used JBL slot, bullet or ring radiators for the extreme top end, because the 2" throat compression drivers don't offer much above 8kHz or so, due to diaphragm break-up at higher frequencies and high distortion caused by excess power in the throat. Many people felt that the extreme top end was unnecessary, because at the kind of SPL one gets at a live performance, one's ears can no longer hear the last octave anyway.
While the horn loaded cone loudspeakers have almost vanished these days, 2" compression drivers and exponential horns are still fairly common. They are even used in some of the more powerful plastic 'stick box' enclosures, although most use 1" compression drivers. The horn is simply moulded into the enclosure, and while economical, they lack proper bracing and damping, and most are too short to use down to 500-800Hz as used to be common. Because almost no-one uses horn loaded midbass boxes such as the A7 or 4560, a significant loss of efficiency is experienced, which requires more amplifier power and all the ills that I referred to earlier. Many other horn loaded midrange boxes were used too - dual 12" horn boxes were common, and these typically worked down to around 200Hz or so. There's some useful background info on horns and drivers at the LenardAudio website. Indeed John Burnett (from Lenard) and I used to make horn loaded PA systems, using fibreglass enclosures (and horn flares) for the midrange boxes. The bottom end was handled by folded horns, which were ideally used in groups of four to get sufficient mouth area.
Larger systems in the 70s were generally fully horn loaded. The top end was handled as described above, and the bass and midrange were handled by a variety of different systems. In most cases, each had its loyal followers and often equally vocal detractors. Very few of these systems were actually designed for the loudspeaker that was to be installed in them, so performance often varied widely, even though from the outside they looked like any other of their ilk. Some of the boxes could be described as midbass - intended to handle both bass and midrange, but in many cases doing a poor job of both.
Because the enclosures were modifications of designs that were used for movie theatre systems, they often did not perform as expected when used with a high power (perhaps 150W or so) loudspeaker. In most cases, any deficiency was simply ignored. The boxes had been built, speakers installed, and the system went into service - warts and all.
Figure 4 - Altec A7 With Sectoral Horn
Figure 5 - JBL 4560 Midrange Box
The typical enclosures used for midbass varied. There was the Altec A7, and another commonly known as the 'Roy' box, both horn loaded. Another popular enclosure of the day was the JBL 4560 - a single horn loaded 15" driver in a (kind of) vented enclosure. I must have seen plenty of Roy boxes, but unfortunately can't recall any details - a Web search indicates that they used 2 x 12" drivers and may have used a conical flare, but information is scarce. There were a lot of other designs as well, many of which were obscure even then, and most have passed into history now. Many of these were variations on the ones listed, and there were plenty of people making horn systems at the time.
Bass was most commonly handled by W-bins. These were made by several major manufacturers (Altec, RCA, etc.), but were quickly copied. The typical speaker complement was a pair of 15" or 18" drivers. Very few actually reproduced 40Hz, because the flare length and mouth size are simply prohibitive at that frequency. However, when used with two per side (or more) they usually managed to deliver very high levels at around 70Hz or so - just right for the kick drum.
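To see why 40Hz is 'prohibitive', consider the classic rule of thumb that a horn's mouth circumference should be at least one wavelength at its cutoff frequency (the kr = 1 criterion, assuming half-space loading - the rule, not the text, is the source of the formula here):

```python
import math

C_SOUND = 343.0  # speed of sound in air, m/s (approx., at room temperature)

def horn_mouth_area(fc_hz):
    """Required mouth area (m^2), using the rule of thumb that the mouth
    circumference should be at least one wavelength at the cutoff
    frequency (kr = 1, half-space loading assumed)."""
    wavelength = C_SOUND / fc_hz
    radius = wavelength / (2 * math.pi)
    return math.pi * radius ** 2

# A true 40Hz horn needs a mouth of nearly six square metres:
print(round(horn_mouth_area(40), 2))   # ~5.85
# At 70Hz (kick-drum territory) it's under 2 square metres:
print(round(horn_mouth_area(70), 2))   # ~1.91
```

Stacking bins in groups effectively shares the mouth area between them, which is exactly why W-bins were used two (or four) per side.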
Figure 6 - General Layout of a W-Bin
Figure 7 - Cerwin-Vega Folded Horn
Nearly all folded horn boxes used straight sections, with the average expansion being (more or less) exponential. These boxes were big, very heavy, and difficult to move around - although they were still much smaller than those used in large movie theatres! Those who've never heard them in action would find the experience jaw-dropping. A huge amount of power just isn't needed when you have an enclosure that boosts the efficiency to around 106dB/W/m with the right drivers installed. Coupling a portable transistor radio to one of these horns would have SWMBO and the neighbours yelling at you to turn it down in short order ... from as little as 250mW of input (and I know this from personal experience). Folded horns weren't all of the 'W' shape though - a great many bass horns used a single flare, as shown in Figure 7.
As with HF and midrange horns, there was a very diverse array of designs. Some worked very well, and some were only marginally better than a direct radiating loudspeaker. In nearly all cases though, the speaker had better protection from mechanical damage caused by over excursion than any direct radiating design. The majority of the systems I used, built, or helped design/build were exceptionally reliable. Although amplifier power was very modest by today's standards, these systems were all easily capable of exceeding the maximum SPL allowed in most venues (some of which used a 'traffic light' SPL cutout - if the red lamp was on for more than 10 seconds, the stage power was cut!). In the majority of bands' PA systems of the 70s, it was almost unheard of that any loudspeaker would be driven much beyond 200W, yet these same systems were considerably louder than anything available now with the same power rating.
If you are contemplating using a bass horn (of any design), the use of a high pass filter should still be considered mandatory. While the rear compression chamber of folded horns restricts cone movement below the cutoff frequency, there is still wasted power and more excursion than may be desirable. Midbass horns (such as the Altec A6/7 or JBL 4560 designs) absolutely require the filter, as the cone is unloaded at low frequencies, so cone excursion can easily reach dangerous extremes.
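A protective high pass filter can be as simple as one biquad section. This is a minimal sketch (not from the article) using the well-known RBJ Audio EQ Cookbook formulas, with a Butterworth Q and an illustrative 500Hz corner to suit a midbass horn; a real system would likely use a higher-order filter.

```python
import math

def highpass_biquad(fc, fs, q=0.7071):
    """Second-order high-pass coefficients (RBJ Audio EQ Cookbook).
    Returns (b, a) normalised so a[0] == 1."""
    w0 = 2 * math.pi * fc / fs
    alpha = math.sin(w0) / (2 * q)
    cosw = math.cos(w0)
    b = [(1 + cosw) / 2, -(1 + cosw), (1 + cosw) / 2]
    a = [1 + alpha, -2 * cosw, 1 - alpha]
    return [x / a[0] for x in b], [x / a[0] for x in a]

def magnitude_db(b, a, f, fs):
    """Gain of the biquad at frequency f, evaluating H(z) on the unit circle."""
    w = 2 * math.pi * f / fs
    z = complex(math.cos(w), math.sin(w))
    num = b[0] + b[1] / z + b[2] / z ** 2
    den = a[0] + a[1] / z + a[2] / z ** 2
    return 20 * math.log10(abs(num / den))

# 500Hz protective high pass for a midbass horn, 48kHz sample rate:
b, a = highpass_biquad(500, 48000)
print(round(magnitude_db(b, a, 500, 48000), 1))   # -3.0 at the corner
print(round(magnitude_db(b, a, 50, 48000), 1))    # ~-40 well below cutoff
```

Even this modest 12dB/octave slope keeps damaging sub-cutoff energy out of the unloaded cone; steeper slopes (24dB/octave) are better still.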
It's also worth noting that a folded horn presents a relatively benign load to the driving amplifier. This is good, because it means that the amp's internal protection circuitry is unlikely to operate. Many amplifiers, both old and new, famous and infamous, have hyperactive protection circuits (examples are some Yamaha and Phase Linear amps, Bose, etc.). When these operate the audible result is usually very nasty indeed - much worse than clipping distortion - see VI Limiters in Amplifiers for more. An impedance that remains well above the nominal rating over most of the range means that the amp has an easy time, reducing wasted heat in both the amp and loudspeakers. In contrast, many vented direct radiating systems have a much lower overall impedance, and the load seen by the amplifier is far more difficult to drive. Any amplifier with a marginal protection circuit may cause spikes on the audio waveform - often well before clipping.
Figure 8 - Impedance Curve of W-Bin
The graph above shows the impedance curve of a 2 x 15" Etone (an Australian speaker manufacturer) W-bin, measured some 20 years ago (it's been converted from a hand-drawn image). While the nominal impedance is 4 ohms, the actual impedance is at least 6 ohms for the normal frequency range of these boxes, and over 8 ohms for the area where a large proportion of the energy is needed. While users may have thought they were using a 500W amp (for example), in reality the power would have been considerably less than 250W peak, with an average of perhaps 80W or so.
At the time of writing this article, I had a powered sub with two satellite boxes at home (long since sold as of 2016). The sub used an 18" driver, with a 900W amplifier. The satellites each had their own 300W amp. The system was pretty loud and sounded quite good, but I know from past experience that the same drivers, same total power, but with horn loading throughout would have been much, much louder and would sound better. The compression drivers and HF horns would need a serious upgrade though, or they wouldn't match the midrange and bass. Including the better directivity of the horns, I'd guess at another 10dB with horn loading. To get the sub-satellite system to the same SPL would therefore have needed another 10dB of power - from 1,500W to 15kW, which would (of course) simply blow the speaker drivers almost instantly. That's a seriously big difference. I do admit (however reluctantly) that the horn loaded system would not fit into the back of a family station wagon or even a large SUV, but the difference in efficiency is astonishing. I've used many horn loaded systems in large venues with a lot less than 1,500W, but the sub-satellite system would only ever be suitable for small pub bands.
The sub-satellite system had some interesting specifications. Maximum SPL for the sub at 1m was claimed to be 137.81dB at 10% THD, and the speaker driver is rated at 101dB/W/m. 900W is 29.5dB above 1W, but strangely, 101dB + 29.5dB is only 130.5dB - almost 7dB shy of the claim. If the 900W amp were pushed to full clipping (producing a squarewave output), it gains another 3dB, but that's just a tad more than 10% distortion (43.5% in fact). Where did the extra SPL come from? It can only be magic, because it certainly can't be explained with maths or science. Strangely, the satellite boxes were rated correctly, although there was no compensation for power compression.
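Checking such claims is a one-line calculation that anyone can do before buying (the function is just the standard sensitivity-plus-power formula, ignoring power compression):

```python
import math

def max_spl(sensitivity_db, power_w):
    """Theoretical peak SPL at 1m from sensitivity (dB/W/m) and amplifier
    power, ignoring power compression and directivity changes."""
    return sensitivity_db + 10 * math.log10(power_w)

# 101dB/W/m driver with a 900W amplifier:
print(round(max_spl(101, 900), 1))                            # 130.5dB
# Even driving the amp to a full square wave only doubles the power:
print(round(max_spl(101, 1800) - max_spl(101, 900), 1))       # +3.0dB
```

Either way, the claimed 137.81dB is unreachable - the arithmetic simply doesn't support it.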
Some major tour suppliers have even devised cardioid (directional) subs - something that a decent sized array of horn loaded subs did automatically. To cancel the sound from the rear of the box, additional drivers are mounted and driven in anti-phase from the main speakers. This makes the overall system even less efficient, because the power fed to the rear speakers is completely wasted - it contributes nothing to the SPL in front of the box, but only cancels the bass as heard from the rear. While very clever and undoubtedly scientific, the power needed to achieve a realistic SPL in a large venue is simply staggering. One I looked at claims 2,250W for a single subwoofer box. A decent sized venue might need somewhere between 4 and 10 of them, so would have between 9kW and 22.5kW of power - just for the subs.
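A minimal model shows both the rear cancellation and the inefficiency. This sketch assumes ideal point sources in the far field, with a rear source that is polarity-inverted and electronically delayed by the acoustic travel time across the box; the 0.6m spacing and 60Hz test frequency are illustrative values of mine, not from any product.

```python
import math
import cmath

C = 343.0  # speed of sound, m/s

def cardioid_response(f_hz, spacing_m=0.6):
    """On-axis pressure magnitudes in front of and behind a two-source
    cardioid sub. The rear source is inverted and delayed by spacing/c.
    Ideal point sources, far field, equal drive assumed."""
    w = 2 * math.pi * f_hz
    t = spacing_m / C                           # electronic delay on rear source
    # Front listener: rear source is a box-depth further away AND delayed.
    front = abs(1 - cmath.exp(-1j * w * (2 * t)))
    # Rear listener: the acoustic and electronic delays coincide exactly,
    # so the inverted rear source cancels the front source completely.
    rear = abs(cmath.exp(-1j * w * t) - cmath.exp(-1j * w * t))
    return front, rear

front, rear = cardioid_response(60)
print(round(front, 2), round(rear, 2))   # ~1.23 in front, 0.0 behind
```

Note the cost: two drivers (twice the power) produce only ~1.23 times the front pressure of a single driver, instead of the 2 times that coherent summing would give - the rear driver's power really is spent on cancellation, not output.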
Perhaps surprisingly, there are still a few manufacturers of horn loaded systems - including bass bins. Some small operators have designed and built their own, and a (small) few concert PA suppliers also have high efficiency horn loaded systems, not just for the bottom end, but also for midrange. The top end is still almost exclusively horn loaded, with horns and compression drivers available from many suppliers.
Unfortunately, it's impossible for any major manufacturer to rely on their horn loaded systems to make worthwhile profits, so line arrays form a large part of their offerings. We can also expect to see many more bandpass subwoofers being used. There are already quite a few available, and while these can be extremely efficient, there is something of an art to designing them properly. A high pass filter is essential with any bandpass or normal vented box, because the cone will be completely unloaded at very low frequencies. Bandpass subs (like vented enclosures) can also present a difficult load for the amplifier, so an amp that has well designed protection circuits is essential.
There are several on-line sellers of speaker box plans, with a large proportion of those being horn loaded. I don't know how well the various designs work, but I would expect fairly respectable performance and much higher efficiencies than plastic 'stick-boxes' or line array systems. The usage of these systems is unknown, but I'd expect them to be popular with home builders and budding musicians. They are certainly not mainstream, and it's unlikely that fully horn loaded speakers will ever return to their former glory. I'd like to be proved wrong of course, but that's unlikely.
One new 'trend' that is extremely unwelcome is the proliferation of switchmode power supplies (SMPS), Class-D amplifiers, SMD components and custom ICs that cannot be replaced by anything that one can buy. It is sometimes possible to make repairs where the fault is a failed output MOSFET or some other part that uses through-hole mounting to the board, but there is a great deal of equipment where repairs are simply not possible. This can be due to SMD part failure, and that is often accompanied by wholesale destruction of the PCB, including tracks that are literally blown off the board.
This isn't helped when a complete 2kW/channel power amplifier (for example) uses one single (large) PCB for everything - the power supply, power amps, input circuitry and/or DSP (digital signal processing). Even if a replacement PCB is available, the only thing that gets re-used is the chassis and (maybe) the connectors. Better than nothing, but once the supply of spare boards runs out, the entire amp is scrap.
It used to be expected that instruments, amplifiers and PA systems could be repaired if something failed, but we are now seeing a great deal of gear that simply cannot be fixed when it fails. You might be able to get a replacement PCB if the gear is less than 5 years old, but otherwise it's likely that the entire unit will have to be scrapped. This is made even harder when manufacturers flatly refuse to provide service information (some will even threaten prosecution if you dismantle the product). This is an untenable situation, and causes vast amounts of 'e-waste'. Powered speaker boxes aren't immune - if the amp fails and can't be repaired or replaced, as often as not the whole system becomes junk.
Horns work. Simple as that. Yes, they are large and hard to move around, but in terms of 'bang for the buck' and reliability, nothing else comes close. Because of the horn loading, speaker cone excursion is minimised, so extreme XMAX drivers are not needed. Cooling is better because the voicecoil remains in the gap, and because much less power is needed, there's not as much heat to get rid of. There is still the issue of frequency response lobing when more than one horn is used side-by-side, but even that problem is easily solved, and total power requirements can be lower again.
The Grateful Dead did it years ago with their 'wall of sound' system ... each set of speakers is effectively an independent line array PA system (but not the same kind of line array that is used now). With a completely separate PA for each instrument there is almost zero interaction, and while there is some lobing from each system, it's spread out across multiple PA systems and is far less objectionable. One PA was used for the vocals, another for the drum kit, another for lead guitar, one for rhythm guitar, one for keyboards, etc. By separating each instrument, the overall mix and balance is easily changed, and outrageous SPL can be achieved with relatively modest power amplifiers. See The Wall Of Sound for the history and photos of this system. It is most regrettable that no-one has utilised this concept since, as it is a technique that could make a lot of current systems sound a great deal better than they do now.
Just as biamping a system can achieve close to 4 times the apparent amp power (see Biamping - Not Quite Magic (But Close) for more), splitting the PA does the same, but better. The drum PA can be optimised for drums, the vocal and guitar PAs don't need any subs, the keyboard PA can share its subs with bass guitar - the possibilities are endless. All too easy with the mixers that are available now, but it has always been possible. Unfortunately, the Grateful Dead was the only band to make full use of this arrangement to my knowledge, and they did it mainly from necessity - for the most part, big PA systems just didn't exist at the time.
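The 'almost 4 times' figure isn't marketing - it falls out of simple waveform arithmetic. When bass and treble share one amplifier, their peaks can coincide, and the amp must pass the sum without clipping. This sketch (frequencies chosen purely for illustration) shows the peak of the combined signal versus either part alone:

```python
import math

# Two equal-amplitude components sharing one full-range amplifier,
# e.g. a bass note at 50Hz and programme at 1250Hz (illustrative values).
N = 48000  # one second at 48kHz
lo = [math.sin(2 * math.pi * 50 * i / N) for i in range(N)]
hi = [math.sin(2 * math.pi * 1250 * i / N) for i in range(N)]

peak_sum = max(abs(a + b) for a, b in zip(lo, hi))         # single amp sees this
peak_each = max(max(abs(x) for x in lo), max(abs(x) for x in hi))

# The combined signal peaks at twice the voltage of either component,
# which is four times the power - headroom a biamped (or split) system
# doesn't need, because each amp only ever sees its own band.
print(round(peak_sum, 2))                       # ~2.0
print(round((peak_sum / peak_each) ** 2, 1))    # ~4.0 (power ratio)
```

A split PA takes the same idea further: every instrument's peaks land on its own amplifiers, so none of them has to carry headroom for the whole band at once.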
If a single large PA system runs out of amp headroom and clips, everything is distorted. If separate PAs are used and one distorts, it may not even be noticed unless the distortion is gross and long term. The odd transient that gets clipped isn't audible, but when the entire band depends on a single PA system, then you will need plenty of headroom. With low efficiency direct radiating speakers instead of horns, speaker damage is inevitable unless everything is carefully monitored at all times. This tends not to happen, except at major concerts where the added cost can be justified. Just for the record - line arrays do not (and cannot) address this. They are comparatively inefficient, but are designed to (hopefully) survive the insane power that people expect to pump into them. I don't see this as progress!
Of course, one needs to look at the SPL that's actually required. While it's not uncommon for systems to register a fairly consistent 110dB SPL in typical venues, one must ask if this is really necessary. At 110dB, the recommended exposure time is around 2 minutes in any 24 hour period, after which permanent hearing damage is probable. Even at a rather subdued 100dB SPL, the limit is around 15 minutes! I'm not suggesting that PA systems be run at 90dB - part of the experience of a concert is the volume level and the feel of the bass. To some extent, we (unfortunately) must accept that some hearing loss is almost inevitable, but the excitement factor is easily created without running the PA flat out all night.
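Those exposure figures are consistent with a NIOSH-style criterion - 85dB(A) for 8 hours, halving the allowance for every 3dB increase. That criterion is my assumed basis for the numbers quoted above; the calculation itself is trivial:

```python
def safe_exposure_minutes(spl_db, ref_db=85.0, ref_hours=8.0, exchange_db=3.0):
    """Permissible daily exposure under a NIOSH-style criterion:
    85dB(A) for 8 hours, halved for every 3dB increase (assumed basis
    for the figures quoted in the text)."""
    return ref_hours * 60 / 2 ** ((spl_db - ref_db) / exchange_db)

print(round(safe_exposure_minutes(100)))      # 15 minutes at 100dB
print(round(safe_exposure_minutes(110), 1))   # ~1.5 minutes at 110dB
```

The steepness of that curve is the real message: every 3dB shaved off the average level doubles the audience's safe listening time.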
One of the tricks I used to use when mixing live sound was to turn the master faders down when the band played quietly. The quieter the playing, the lower the faders ... people would actually stop trying to talk and listen! Since I made it my business to know the music, I knew exactly when the crescendo was due. The faders were snapped up to (almost) the maximum, and a very common comment heard from the punters was "That's the loudest f...ing PA I've ever heard!!". It wasn't (the whole system was about 1,200W), but by having dynamics it sounded as if it was much more powerful. It also adds greatly to the music ... loud bits and soft bits are as essential to the sound as the use of different notes. No-one would want to listen to a band that played and sang at only one note for the entire night, so why should people have to listen to the same volume for the entire gig?
In the late 1960s and early '70s, the mixers used were usually incredibly primitive. A typical mixer may have had 8 channels, all rotary pots (including the faders), pretty bare-bones EQ, and little if anything by way of channel inserts or effects sends (no-one needed effects sends because there were few effects units available), other than a tape echo and (maybe) a graphic equaliser a little later on. The mix was commonly done from the side of the stage, and foldback was unheard of except for a very few larger systems. Once the need became apparent, large format mixers were made by major manufacturers, various sound companies and individuals. 24 channels were usually enough, and most bands got perfectly good results with 12 or 16 channels. Effects racks started to develop once it was apparent that this new 'fad' wasn't going away and effects units became available and affordable. By the mid 70s, one would expect to find an active crossover, a compressor/limiter, and a tape echo machine along with a few other semi-random effects units. Many bands' systems also included domestic equipment - especially things like audio cassette players.
Mixers also became much more capable. By the early 80s, mixers were readily available that were not much different from what we see today. The old style 'PA head' that was used with its column speakers was reinvented as the 'powered mixer' in the late 70s. Where the column amp might have had 4 channels with bass, treble and volume (but little else other than a master volume), the powered mixer was usually a reasonably competent mixer with all the expected frills, that just happened to have a stereo power amp built in. These are (still) mainly used for smaller venues, because it has been difficult to get much power from amps that would fit into the available space. Now that Class-D (switching) amplifiers are becoming more common, far more power can be packed into a small space than was possible before.
While I'm sure that there is a great deal more info available than I've got here, it's largely academic. While there have been great strides in technology, the humble analogue mixer was already very good by around 1980, and subsequent additions have just provided more functionality (especially effects in later mixers) rather than make any quantum leaps in sound quality or performance. Analogue circuitry has really only made a few baby-steps in the last 20 years, and most of the improvements are close to the limits of audibility. At over 100dB SPL, there is no audible difference at all. Some of the early mixing consoles are actually sought after today for their 'sound', especially things like old Neve and SSL mixers - some of which almost have a cult following.
Of course, we now have digital mixing desks. These can make life a lot easier once set up, because they offer full automation. The usefulness of automation depends on the musicians, the programme material and the skill of the operator. Regardless of claims though, don't expect the sound quality to be any better than a decent analogue desk. One thing you do get is far more flexible signal routing. This can make it a simple matter to split the various sources into separate PA groups, eliminating many of the problems of having everything handled by one big system. Unfortunately, I don't know of anyone who's doing that. Pity, because that's one area where huge gains can be made, and the final mix can be cleaner and more dynamic (and with less power) than is generally possible otherwise.
As noted above, I either designed or helped design and build PA systems, guitar amps, bass amps and the like. A few of the projects from the 1970s are shown here, along with some info about each. These designs were all in production at the time, and some were used as the basis for a successful hire business. Unfortunately, there are no longer any photos of what I consider to be one of the best PA systems available at the time. I designed and built it in the early 1970s, and used it with a number of different bands. It used particularly high efficiency speakers, was 2-way horn-loaded, and managed to blitz every other system it was ever set up beside. There are many things I'd do differently now, but that's always the way (commonly known as '20-20 hindsight').
There were several other systems we made at the time as well, but photos seem to be long gone. At the time, we operated under the name 'Burnett-Elliott', being John Burnett (Lenard amps) and myself. I toured with a band using one of the concert PA systems, and the combination of sound quality and great music went down well everywhere. Like all systems, the concert PA had some interesting quirks, but they were relatively benign, and the mixer had more than enough equalisation available to iron out the wrinkles.
I'm not silly enough to try to predict what will be next. There are a few people in professional sound who (like me) dislike line arrays and hanker for the PA systems of old, but with a bit more applied science. However, it is very doubtful that we'll see a resurgence of horn-loaded speakers, simply because of their size and weight. There are a few around - new designs with fully horn-loaded drivers still exist and are being manufactured, but these seem to be limited to midrange and the top end. Other than the products from a few experimenters, there are few horn loaded bass cabinets any more. Ample power and bass drivers with huge excursions mean that bass can be delivered by much smaller cabinets than before, but with the ever-present risk of driver failure. This is unlikely to change.
There is a growing trend to use microprocessors (or at least microcontrollers), DSP based systems, and surface-mount components in audio equipment, and these systems are generally impossible to service by traditional means. If a circuit board develops a fault, then the entire board is replaced, and when spare boards are no longer available, you throw the equipment away. This is already happening, but expect it to get worse. While this is a little off-topic I suppose, it is an important consideration - especially for pro-audio gear that's expected to last a long time. In addition, Class-D (PWM) amplifiers are now becoming mainstream. These are capable of extraordinary power outputs with very little heat - the limiting factor is the mains outlets!
Since it's unlikely that buyers will start selecting speaker drivers based primarily on efficiency, we can expect that ever more powerful amplifiers will be unleashed on poor unsuspecting loudspeakers, that loudspeaker manufacturers will desperately try to satisfy the buyers' lust for power (most buyers will continue to ignore efficiency just as they do now), and that we'll see more of the same for some time.
We already have loudspeakers that are 'protected' by means of internal series filament lamps [ 5 ], and these can provide us with at least 10dB of power compression - perhaps more. The punters are happy though, because "this 160mm speaker can handle 175W". Few seem to have noticed that after around 25W, it doesn't get any louder, but if they looked inside they'd see the light.
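The mechanism is just a voltage divider whose upper leg grows with drive: a filament lamp's resistance rises steeply as it heats. The 8 ohm voicecoil and the 1 ohm (cold) / 16 ohm (hot) lamp resistances below are my illustrative values, not measurements of any particular speaker, but they show how the compression arises:

```python
import math

def lamp_loss_db(r_speaker, r_lamp):
    """Level lost to a series protection lamp, treating the amplifier as
    a voltage source: the lamp and voicecoil form a simple divider."""
    return 20 * math.log10(r_speaker / (r_speaker + r_lamp))

# A filament lamp's resistance rises sharply as it heats up, so the
# divider loss grows with drive level - that IS the power compression.
print(round(lamp_loss_db(8, 1), 1))    # ~-1.0dB when cold (quiet passages)
print(round(lamp_loss_db(8, 16), 1))   # ~-9.5dB when hot (driven hard)
```

With these (assumed) values, roughly 8.5dB of the rated level simply never reaches the voicecoil at full drive - which lines up with a '175W' speaker that stops getting louder past about 25W (10 x log10(175/25) ≈ 8.5dB).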
One thing I'd really like to do is take the limiters off some systems, and jam them up the sound engineer's backside. Often, everything is compressed to within an inch of its life, so a solo acoustic guitar is just as loud as the band at full tilt. NO! Music is not like that. It has (or should have) loud bits, soft bits and everything in between. The same is done with CDs and broadcast FM (forget DAB - that's often worse than MP3). Compression is even worse when the system is still driven into distortion!
It's difficult to make any absolute conclusions with such a disparate range of topics, but there are some things that are very obvious. One of these is the myth of power handling and the general inattention paid to cone excursion. These two have seen the demise of countless loudspeaker drivers over the years, and will undoubtedly continue to do so. At the very least, all tuned boxes and horn systems require the use of a high pass filter to remove programme content below the lowest frequency that can be handled by the loudspeaker/enclosure combination. Where amplifier 'headroom' is provided (by using bigger amps than needed), even greater attention must be paid to ensuring that voicecoil dissipation and cone excursion are kept within safe limits at all times.
Using peak limiting is perfectly alright, provided the limiters are set up to maintain at least some dynamic range. This means a fast attack and relatively slow decay - preferably a few seconds if possible. This maintains an acceptable peak-to-average ratio, makes the music sound more alive, and gives loudspeaker drivers some hope of long-term survival.
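To put the fast-attack/slow-release idea in concrete terms, here is a minimal digital peak limiter sketch. The threshold and time constants are illustrative assumptions, not recommendations: the envelope follower attacks in about a millisecond to catch peaks, but releases over a couple of seconds, so the peak-to-average ratio of the programme is largely preserved.

```python
import math

def peak_limiter(samples, fs, threshold=0.5, attack_ms=1.0, release_s=2.0):
    """Fast-attack, slow-release peak limiter (illustrative values).

    A rectified envelope follower tracks the signal level; whenever the
    envelope exceeds the threshold, gain is reduced to hold the output
    at the threshold. The slow release means gain recovers gently.
    """
    attack = math.exp(-1.0 / (fs * attack_ms / 1000.0))
    release = math.exp(-1.0 / (fs * release_s))
    env = 0.0
    out = []
    for x in samples:
        rect = abs(x)
        # Fast time constant on the way up, slow on the way down
        coeff = attack if rect > env else release
        env = coeff * env + (1.0 - coeff) * rect
        gain = min(1.0, threshold / env) if env > 0.0 else 1.0
        out.append(x * gain)
    return out

# A sustained full-scale signal settles at the 0.5 threshold, while a
# low-level signal passes through completely untouched.
loud = peak_limiter([1.0] * 4800, 48000.0)
quiet = peak_limiter([0.1] * 4800, 48000.0)
```

Note that without a lookahead delay the very first samples of a loud transient briefly overshoot the threshold; real limiters add a short delay line (or a clipper) to catch this.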
Issues like lobing will forever be a problem with high power sound systems. Since there is no way to generate the sound power needed with single drivers, multiple drivers are simply a fact of life. With multiple drivers comes lobing (no extra charge). The effects can never be eliminated, but they can be minimised by careful speaker placement, or by splitting the system so parts of it are used for separate sections (e.g. instruments and vocals).
High distortion is easily produced in the throat of a horn with a compression driver. There is only one answer to this, and that's to keep the power levels low, and use multiple drivers and horns to achieve the required result. It is also necessary to select the optimum system based on your needs, and this can involve a great deal of research. So much of the data you find is either erroneous or simply leaves out the very information you need to make an informed choice. Without knowledge, you are at the mercy of every snake-oil merchant in the business.
It's important for anyone choosing a system to avoid deciding on something based on its (apparent) popularity elsewhere. Elsewhere does not have the same venues that you do, and apparent popularity is just that - apparent. Anyone can write glowing testimonials and place them on their website. Unless you can speak to the actual people who wrote the testimonials, they are meaningless. Also, be wary of people who post in newsgroups and forum sites. While they often seem to be unbiased, you'll find that some have a vested interest in a particular brand, but may 'forget' to disclose this.
It's undoubtedly been noticed that I have a preference for the highest possible efficiency in a system. I know that power is cheap, and that there are drivers that seem to be able to take the claimed power. This doesn't change the fact that power compression is a very real and easily demonstrated problem. Only by keeping the power as low as practicable can you avoid the worst effects of power compression, and the side-issues that are created when drivers (and the air inside the cabinet) are allowed to become hot.
Needless to say, I don't recommend that any high power system be run with passive crossovers. Apart from the fact that they introduce their own losses, passive crossovers also mean that once the amp clips, the entire audio spectrum is contaminated. The ability to manage the signal level in each frequency band can only be achieved sensibly when active crossovers are used, and this gives the skilled operator a system that is louder and cleaner than will ever be possible with passive crossovers - with the same total amplifier power ratings. When passive crossovers are used, you need a lot of extra headroom because of the full bandwidth signal, but you must then restrict the average power to suit the speaker power ratings.
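The extra headroom needed by a full-range amplifier can be shown with simple arithmetic. When bass and midrange share one amplifier, their voltage peaks add, so the single amp's peak power requirement is far more than the sum of what two band-limited amplifiers would need. The following rough sketch uses arbitrary example amplitudes and frequencies:

```python
import math

fs = 48000
t = [i / fs for i in range(fs)]  # one second of sample times

bass = [1.0 * math.sin(2 * math.pi * 50 * x) for x in t]   # bass band
mid = [0.5 * math.sin(2 * math.pi * 997 * x) for x in t]   # mid band

def peak(sig):
    return max(abs(s) for s in sig)

# One full-range amp must reproduce the sum, so voltage peaks add,
# and peak power goes as voltage squared
full_range = peak([b + m for b, m in zip(bass, mid)]) ** 2

# With an active crossover, each amp only ever sees its own band
biamped = peak(bass) ** 2 + peak(mid) ** 2

print(full_range)  # approaches (1.0 + 0.5)^2 = 2.25
print(biamped)     # 1.0^2 + 0.5^2 = 1.25
```

In this example the single amplifier needs nearly 80% more peak power capability for the same driver levels, which is the headroom penalty referred to above.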
Yes, active crossovers require more amplifiers and possibly more cables, but that's why you can get 4-pole Speakon connectors almost anywhere. Remember that horn compression drivers don't need (and can't use) anything above perhaps 100W (allowing for headroom), so amp power requirements are minimal. The small extra bother is well worth the improvement in sound quality.
This is very hard. There were countless sites that I looked at, and while a few had some useful information, many had virtually nothing that was even close to reality. While it would be nice to have been able to put together the history of PA systems, there is remarkably little factual material available.
So, the WWW as a whole may be considered the secondary reference, with the rest coming from accumulated knowledge, memory and the few links shown below. There are obvious references to JBL, Altec (Lansing), RCA, Cerwin-Vega and other manufacturers, and some of the photos are adapted from their websites or other sources. Any claim of breach of copyright cannot be entertained, since I only used photos that are effectively in the public domain, as they are published on many different websites.

There is some additional information about horns on the Lenard Audio site, and a lot of additional info about PA systems and the like.
Acknowledgments
My thanks to Phil Allison, Les Acres and John Burnett for proof reading, suggestions and additional information.
Elliott Sound Products: Patents 101
Greetings. My name is Ian Millar. I'm a semi-retired 'Registered Patent Attorney' working from home (occasionally) in the northern Sydney suburb of Cheltenham - just down the road from where our friend Mr Elliott resides. My CV is now offline; it used to be linked from here, though not as a blatant attempt at self-promotion - I really don't want any new work - it's tedious. I also have a DIY audio blog, where a few of the ESP projects are quietly hiding.
I have been a long-time reader of Rod's audio pages and have built a number of the ESP projects over the years for my own use, and this has provided enormous personal satisfaction. I have learned a great deal from Rod and am eternally grateful for his generous assistance when I've occasionally bitten off a little more than I could chew, so to speak (like trying to put too many projects that weren't designed to go together in the same box with multiple power supplies!).
Rod has asked me to put down a few words about patents and how they might (or might not) be relevant to DIY audio enthusiasts, and in particular readers considering or having built any of the excellent projects published on the ESP site. There are many other places to read detailed analyses of the patent system generally, and up-to-date case law is available by subscription to (quite boring) specialist journals. Instead, this will be pretty basic and I will try to keep it as relevant to DIY audio enthusiasts as I can. I'll break it up into topics of relevance, some of which may seem a little generic but are necessary for a basic understanding. I'm probably pretty good at the limited scope of what I do, but not very good at presenting 'courses', so don't expect to pass any exams by reading this.
What I hope to achieve here then is a concise overview as seen from my own perspective. Hopefully, after reading for just a few minutes, a lay person can go away with a basic understanding of something that they never really considered before.
By the nature and context of this article it really must be incomplete. If questions arise or glaring omissions are pointed out, I can always update the text later. So I'll type this 'off the cuff' without resorting to anything but my acquired knowledge, with a view to keeping it as non-technical as possible.
The disclaimer bit: OK - I accept no responsibility for omissions or errors. Please make no decisions on the basis of what you read here, but make your own enquiries. I'm sure Rod will add his own disclaimer too.
ESP has published this material solely as basic information to describe patents, and the processes involved. No part of this material is to be taken as legal advice. Patent law where you live may be slightly different from that described here. Information is provided in good faith, although there may be errors, omissions, local variations or other circumstances that may affect the accuracy of the material for you. No decision should be made without consulting a Patent Attorney.
Well, it's not a Trade Mark (which is usually an indication of a brand) and it's not Copyright. It's not a Registered Design (covering the appearance of an article) either.
A patent is a limited-term monopoly awarded by the government as an incentive to innovation and the advancement of technology. At the end of the term of a patent the invention becomes a gift to the public. Like most things in a commercial world the patent system is driven by basic human greed and self-interest. Some disagree with the concept of monopolies while others (at least try to) profit from it.
A patent is a form of Intellectual Property. It is an asset which can be sold like any other asset or licensed to others. A patent can depreciate like a rusty car as it approaches the end of its usefulness.
A patent in Australia is a statutory monopoly for any useful and inventive 'manner of new manufacture', which might range from electrical and mechanical devices and methods of manufacture and apparatus use, to pharmaceuticals and DNA sequences.
Patents are generally not available for mere working directions, business schemes, rules for playing games, or instructions for operating known machinery. In my opinion software is really nothing but a list of instructions for operating known computers, but in recent years applications have been passed for software-related inventions with a 'leave it for the Courts to decide' approach.
A patent application for (say) a method of under-biasing a power amplifier output valve by turning an existing trimmer pot slightly to the right then slightly more to the left ought to be refused. That may sound stupid (it was supposed to), but many years ago Rolls Royce was refused a patent application for a method of flying a jet aeroplane more quietly over built-up areas simply by throttling down a bit! Brilliant eh?
The layout of copper tracks on a PCB or silicon wafer is generally not patentable subject matter either. There could be exceptions in unusual circumstances where factors beyond the mere track layout were involved - say if the board had to be non-planar or something special - but patent protection in such cases would be unlikely to be limited to a specific track layout. In Australia there is separate (non-patent) legislation protecting circuit layouts.
Human beings are not patentable, although I have seen an Australian patent for a radioactive glowing 'pig'.
Patents are unlike Copyright, in which it must be proved that actual copying has taken place for infringement to be found. Technically, you cannot infringe copyright by independently creating a work that happens to be similar to one in which copyright subsists.
The scope of monopoly afforded by a patent is defined by a list of numbered paragraphs called 'Claims' which define an alleged invention. I say 'alleged' because claims can fall over in Court when shown up as defining non-inventions. Claims are long, oddly punctuated and formatted sentences at the end of a specification, which includes drawings and a description of a 'preferred embodiment' of the invention. Actually a good claim (for the patentee) is short, but they are difficult to achieve. I once drafted a two-line independent claim and saw it through to grant. I don't know whether that ever took a tumble later on though. Most are more like the example that follows. Some 'smarty pants' Patent Attorneys invent words and apply the word 'said' instead of 'the' excessively in an attempt to impress the easily impressed. One particular disgrace is the addition of 'ingly' to words such as 'seal' or 'press'. For example "… wherein said plug sealingly engages with said aperture" instead of simply "… wherein the plug seals the aperture".
An 'independent' patent claim for (say) an isobaric loudspeaker (when that was a new thing) might have read something like this:
1. A loudspeaker comprising: …

Actually the last two limitations are probably unnecessary, but I included them anyway. Usually there are further claims which depend from a claim like the above one and serve a back-up role. A 'dependent' claim for the isobaric speaker might have read something like this:
2. The loudspeaker of Claim 1, wherein the first and second transducers are substantially coaxial and both face in the same direction.
Dependent claims have within their scope every limitation of the claim(s) from which they depend and sit quietly just in case their precedent claim(s) is/are found to be invalid by a Court.
At the time of writing (May 2010) there were several recent in-force US patents and patent applications for isobaric speaker systems lodged with the USPTO (US Patent & Trademark Office), the most recent being an application filed in 2006. Based on a (very) quick read of these, it seems highly unlikely that any would stand up if taken to court. The vast majority of all claims are based on things that loudspeaker box manufacturers would seem to do regularly in the normal course of building an enclosure.
There are many patents in the audio industry that appear to be intended to do nothing more than make it appear to the uninitiated that the manufacturer is 'clever', and can do things that others cannot. Many of these patents might have managed to get past the examiners for whatever reason, but they would fall over in an instant if challenged.
The claims are directed at a hypothetical person skilled in the art of loudspeaker construction and need not spell out everything. For example, Claim 1 above omits any mention of the apertures across which the transducers would typically be mounted, and whether the transducers are connected in series or parallel, but these details are not crucial. The claim is intended to define the invention and not the minutiae of construction. The claims include no explanation of how the loudspeaker really works or what its benefits might be. The independent claim(s) just define what is (hopefully) 'covered'.
Assuming claim validity, unauthorised exploitation of a loudspeaker as defined by any one of the independent claims during the term of the patent will infringe. There is a principle called 'exhaustion of rights' by which it is OK to sell legitimate patented goods which are second-hand, because the patentee already made his profit from those. This is subject to certain cross-border provisions which come into play in circumstances where a patent for the same invention is owned by or licensed to another party in another country.
An independent claim can be broken down into 'essential integers' or more simply 'features', and it is the combination of these features in which the monopoly resides. For example an independent claim could be broken down as ...
1. An invention that has:
    Feature A;
    Feature B;
    Feature C; and
    Feature D;
    wherein Features A, B, C and/or D can or do mutually interact in a new and inventive manner.
... and I'll include a dependent claim for reference later ...
2. The invention of Claim 1, further having Feature E which is attached to Feature A in some special way.
Features A to D might all be off-the-shelf items, but if the combination was new and inventive at the date of filing the patent application, the owner is entitled to monopolise it.
There is no world-wide patent for anything. Whilst there is an International Patent Application process (called a Patent Cooperation Treaty application), it is only a means for filing a bundle of national applications at one place and time. The end result is that national patents are granted in some or all of the places designated when the PCT Application was filed.
Each patent is enforceable only under the law of the country in which it is ultimately granted. An Australian patent cannot be infringed by the sale in the USA of a US or Chinese-manufactured knock-off. The manufacture of a knock-off in China is not of itself an infringement of the Australian patent either. The Australian patent is infringed when such goods arrive in Australia (there are temporarily visiting vessel exclusions to this). The importer infringes. To stop the Chinese factory a PRC patent application ought to have been filed.
Twenty years is now the maximum term of a standard patent in most if not all countries. Some countries allow an extension of term for pharmaceutical patents because it takes some years for government agencies to approve new medicines before they can be sold - rendering the first years of the monopoly useless.
Renewal fees must be paid at regular intervals throughout the term of a patent. If these are not paid the patent lapses. After expiration everything that was disclosed in the document becomes free for the public to exploit (subject to possible restoration of patents that lapsed because a renewal fee was unpaid accidentally).
Patent Offices (that's where you file patent applications) are not interested in your potential to infringe earlier patents. Their primary job is to filter out public burdens in the form of applications for known inventions and allow only technically enforceable patents for new inventions.
Standard patent applications are examined for 'novelty' (newness) and 'inventive step' (non-obviousness) in most jurisdictions. There are lesser patents called 'Innovation Patents', 'Utility Models' or 'Short-term Patents' and the like in some countries, in which a tick-a-box approval process fast-tracks a 'grant'. Post-grant examination is necessary in some countries before rights in these can be enforced.
For standard patent applications, searches are performed by examiners (public servants at the Patent Office) who are usually graduate engineers, scientists and people studying to become patent attorneys (for some reason). Examiners scrutinise applications for possible rejection. Some are heavy-handed, but most are reasonable. Applicants can't select a particular Examiner.
Applicants usually start by claiming broadly - say by claiming just the combination of Features A, B and C in an independent claim, knowing that Feature D is really the crux of the invention. They temporarily place Feature D in a dependent claim (say Claim 2) with high hopes of it slipping past an examiner. The Examiner searches for and finds older documents (usually patent documents) and will reject the broad independent claim if he considers it to be 'anticipated' by any one of them. The cited document of relevance could be a hundred years old or quite recent. It could even be owned by the same applicant. The applicant is then allowed to amend the text of the independent claim to include Feature D (and delete Claim 2, as you can't have two claims of the same scope). Feature D cannot be plucked from thin air, mind you. It must be a feature as disclosed in the specification as filed. New subject matter cannot be added after filing unless a further application is filed.
Where an Examiner cannot find a direct knock-out, but does find a document that anticipates A+B+C, he might reject the claim anyway on the basis that the addition of Feature D seems obvious. Usually such rejections can be argued successfully, either directly with the Examiner or via a process of appeal.
So let's say that, by convincing argument and the benefit of any doubt being decided in the applicant's favour, the case is accepted for grant with Feature D included in Claim 1 (as shown in the above example). Feature D is now a limitation of the scope of monopoly afforded by the patent. The patent is therefore of lesser commercial value than the applicant had initially hoped.
Quick Summary:
 - Applications are examined for novelty and inventive step against whatever prior art the Examiner can find.
 - Applicants usually start with claims that are broader than the invention deserves.
 - Rejected claims are narrowed by amendment (no new subject matter may be added after filing).
 - The claims as granted are often narrower, and of less commercial value, than the applicant originally hoped.
It should perhaps be emphasised that not all granted patents are valid. Examiners are not expected to (and don't) have a high level of personal knowledge of the current 'state of the art' in every field of invention likely to come across their desks. They must rely heavily on what prior art they can find in a search when attempting to reject an application in an unfamiliar field. It stands to reason then that applications are regularly accepted 'wrongly' for dubious inventions. Some countries have 'opposition' provisions whereby interested parties can oppose grant on the basis of what they can demonstrate to be old, but still many applications slip through the system, resulting in invalid patents being granted.
Patent applications are generally published 18 months after their 'priority date'. This is either the filing date of the application, or that of an earlier application to which the application is linked. This means that if you were to search the public records today by subject matter for applications filed in the last 18 months, you generally wouldn't find them.
A concept that applicants and novice patent owners have trouble understanding is that they can be granted a patent for their own invention and infringe an existing patent at the same time. This arises where there is an earlier in-force patent (less than 20 years old), whether cited by the Examiner or overlooked.
Let's say the earlier patent (I'll call it 'the reference') was just 10 years old and remains in-force. Remember, if the Patent Office cited the reference during examination, it didn't care about the reference's renewal status or what it claimed. It merely considered its disclosure as though it were a published journal article, to ensure that our claim included something extra.
Let's also say that the reference has a claim to the combination of Features A+B+C. Our patentee was entitled to a patent for A+B+C+D because Feature D was not disclosed by the reference, but he cannot exploit his own invention in Australia for the next 10 years without answering to the owner of the reference. Adding Feature D does not get our applicant off the hook. He must use A+B+C to exploit his invention.
This common situation can be resolved by cross-licensing, i.e. our patentee pays a royalty to the owner or licensee of the first patent and takes it from there. The next inventor will come along with Feature F sooner or later too, so filing a patent application is like stepping onto a ladder and slowly climbing up as the topmost patentee gets off (when his patent expires) and someone else steps onto the bottom rung with the latest addition to the original invention. A truly brilliant invention is a rare thing indeed. These tend to avoid the ladder game. In 22 years in the profession I have seen only a handful of these.
Australia has a patent novelty standard known as 'Absolute Novelty'. Subject to certain 12-month filing grace provisions, this means that the validity of a patent claim can be contested on the basis of 'prior art' in the form of printed public disclosure or use by anyone (including the patentee) anywhere in the world before the 'priority date' of the claim. The priority date is either the Australian filing date, or a foreign filing date for the same invention filed by the applicant up to 12 months earlier and from which 'Convention priority' is claimed under an international convention. Other jurisdictions have a standard known as 'Relative Novelty' where 'use' outside the region is ignored, and very few if any countries still apply a 'Local Novelty' test.
Prior secret use of an invention by parties other than the patentee is irrelevant to patent validity. But where the defendant used (secretly or otherwise) an invention immediately prior to the patent filing date, this can be a valid ground of non-infringement. "Why should we stop doing what we've been doing all along just because this guy came along with a patent?" would seem a valid rhetorical question.
Patentable inventions are supposed to involve some (at least small) level of non-obviousness, and patents can fall over in Court for want of it. Novelty is one thing, but if Feature D is an obvious addition, then the patent claim ought to fall. Some Patent Offices (the USA and the European Patent Office in particular) are becoming very tough with this requirement. The Australian examination threshold is somewhat lower (but under review), and the current practice is not to uphold obviousness rejections where there is any element of doubt. What might be obvious to a genius with the benefit of perfect hindsight might not be obvious to the hypothetical skilled, albeit unimaginative, addressee of the patent.
This hypothetical person is taken to be an ordinary skilled artisan in the field of the invention (say a loudspeaker manufacturer in the above example), and an obviousness assessment is to be made by him in the light of the common general knowledge of his peers before the patent application was filed. Patent Examiners are not in a position to establish what is commonly known in the field of every application that they examine, and there are insufficient public resources at their disposal to gather the necessary number of expert witness declarations to establish it. I underline 'unimaginative' as it's a good knock-out for expert witness credibility.
An 'expert witness' is often called upon in patent matters to support a case of obviousness against a patentee, but the expert must have no imagination by which he might be capable of invention himself. If you are capable of invention, then your opinion as to obviousness must be tainted by it. A quick search of patent records can sometimes reveal that the expert witness giving evidence against you was once nominated as an inventor on a patent application. "Hey, that guy was an inventor for Patent 6480100 in 1979. He must be imaginative". Goodbye expert evidence!
I've lost count of the number of times over the years that I've set people straight on a strange misconception that permeates society for some reason. It's like the many myths that abound within the audiophile fraternity - like the one that says that left and right interconnect cables must be the same length or the perceived sound will be delayed at one side, or that cryogenically treated RCA sockets sound better. It's the "Don't they just have to change it by 10 percent?" myth. Sometimes it's 20 percent or some other arbitrary percentage.
There is no provision in the legislation or common law for altering an invention by any 'percentage' to avoid patent infringement. It's ludicrous. Percentage of what, anyway?
If you take every essential feature that is claimed in any independent claim of the patent (Features A+B+C+D in the above example) with the defined interrelationship (and E, F and G if you like), you generally infringe the claim. There are provisos relating to non-essential features and 'mechanical equivalents' of these, but I won't go into them here because in my opinion non-essential features have no place in an independent claim anyway.
A patent in Australia affords the patentee the exclusive right to exploit the invention and to authorise others to do so during the patent term. Unauthorised supply of the whole patented product is of course an infringement. There are also 'contributory infringement' provisions. For example, the unauthorised supply of a key component (say Feature B) of a patented product will infringe if that component has only one reasonable use (being an infringing use). Even the supply of a non-key component can infringe if it can be proved that the supplier had 'reason to believe' that the component would be put to an infringing use. There are also inducement provisions whereby any instructions by the supplier to use a product in an infringing way will constitute infringement by the supplier.
Infringement can be by end-use and by supply.
Each country has its own provisions which are somewhat similar in most regards to these.
The defendant in an infringement action usually files a counterclaim that one or more of the Claims is invalid, say on the basis of knock-out prior art that has come to light. The whole patent doesn't fall just because one claim is found to be invalid, however. It could be that only 2 of 10 claims are infringed, and those two claims are shown to be invalid.
In Australia, apart from injunctions to cease trading in the goods and the right of seizure, a successful patentee/litigant has the choice of an award of 'Damages' or an 'Account of Profits'. In the USA, treble damages are available where blatant infringement is shown to be wilful.
These awards are generally calculated on the basis of infringing activity which occurred after the date of publication of the patent application, and awards are calculated retrospectively to that date. For this reason, patent applicants aware of a potentially infringing activity in the market place can request immediate publication and expedited handling of their application. In countries with opposition provisions they must then sit patiently and refrain from making any threats against the offender until a patent is granted. Infringement proceedings cannot be instituted on the basis of a pending application, and any threats would be nothing more than an invitation to oppose grant.
The public is generally deemed to be aware of every patent. There are 'Innocent Infringement' provisions that relieve the defendant of these remedies if they can show that they were not aware of the patent in suit, but these are quashed if the patented products were marked to indicate a patent number (for example) and were sold in quantity in Australia before the infringement took place. That's why it is important for patentees to mark their goods with a patent number.
So what about the ESP PCBs and the detailed directions that Rod so generously provides?
For the sake of illustration, there is an in-force Australian patent for the NTM™ (Neville Thiele Method) crossover, a schematic for which is published on the ESP website. If ESP were to publish detailed construction information and sell or offer to sell PCBs in Australia for the project, this would make ESP liable for patent infringement under the contributory infringement provisions.
This is a perfect example of Ian's '10% myth' described above. It doesn't matter if the NTM™ circuit published on the ESP site is the same as that normally used, or is a completely different topology. The patent describes the method of achieving the result, and is not concerned with the specific mechanism or component values. It's immaterial if the circuit I describe is different from or identical to that used by licensees; if it achieves the same end result then it infringes the patent. For example, changing a few resistor values (say 10% of them) has no effect whatsoever - the patent is infringed no matter how many resistor values are changed.
In Australia we are all deemed to know about the patent if the patentees or their licensees sold a number of their own crossovers with patent numbers marked on their packaging.
Indeed, Rod advised me that a challenge from the patent licensees was issued when the NTM schematic was published on the ESP site, but they were appeased when promised that the article was the end of the published information, that no further information would be given (tuning formulae etc.) and specifically that PCBs would never be made available.
With few exceptions, for the published ESP projects for which PCBs are supplied, the PCBs are the sole physical item included in the sale. If there were patents in force with independent claims covering the finished projects, the sale of the PCBs could constitute contributory infringement and/or infringement by inducement.
My advice to Rod (if I thought he needed it, which I don't) would simply be to ensure that each of his published projects for which he sells PCBs is based on publications that are over 20 years old (so that they could knock out any in-force patent for the project); or, if based on more recent work, is done with the written consent of the original designers; or, if based on his own original work, is sold only after doing some relevant patent searching, in case someone else has been similarly machinating and likes filing patent applications.

But What About Me? I'm just a DIY end-user. They won't sue me, will they?
We all go out and buy the resistors, capacitors and opamps etc. ourselves. The suppliers of those legitimate non-key components don't know what we plan to do with them, so they are off the hook. The suppliers of any fake components have their own answering to do. But when we finish soldering it all together (in Australia) we could infringe patents under the 'use' provisions, whether we purchased a PCB from ESP or not.
Technically the patentees could sue us, but they'd have to be pretty crazy. It would be thrown out of Court as a complete waste of time, and I do not know of any reported case in which an individual one-off end user has been sued for patent infringement. Litigation expenses are enormous. Hundreds of thousands (and even millions) of dollars are spent in patent litigation matters. 'Costs' are awarded to the successful litigant, but these barely scratch the surface of the actual legal expenses. So the likelihood of being subject to litigation over the purchase of a $25 PCB plus components to populate it (say another $80) is vanishingly small.
Unless you are out there selling your completed wares or using your finished project in a public manner, a patentee will not even know about you. Even if they did, their awards as either damages or an account of profits would be meagre. There would be no sale profits as you didn't sell anything. Even if you used your project back-stage in a concert or in the mixing of a musical recording or something, it is seriously doubtful that any profits from ticket sales or CD sales could be attributed to your having used the project somewhere in the background. So that leaves damages, which would be based on the item that you 'ought to' have purchased from the patentee if you hadn't decided to be a DIY type, and they'd have a tough time proving that a DIY type was ever going to spend more than 100 bucks on a commercial crossover anyway.
Well, although not originally designed so, the patent system has evolved into an 'elite sport' to be played out in the Courts only by those with very deep pockets and enormous commercial interests. Backyard inventors would be ill-advised to file patent applications with a view to self-funded enforcement of their potential rights. What most applicants hope for is to attract the interest of the large players, with a view to selling or licensing their patent rights to them.
Neither Rod nor I am aware of any patents which might cover any of the projects for which PCBs are currently supplied by ESP. If any reader knows of a commercial product similar to an ESP project that is marked with a patent number, please contact Rod with the information.
Well, that's Patents 101 (more or less). I hope it was easy reading, a little informative and not too boring.
All the best, Ian
Firstly, my thanks to Ian for putting this together. The patent system is mysterious to most people, the language is arcane, and the audacity of some people to patent products that are clearly a minor variation of common knowledge is mind-boggling. One of my articles on the use of current drive for loudspeakers was cited as prior art in a patent (which was granted), and the patented product does nothing markedly different from the current drive system I described. Some digital signal processing was added, but it appeared to be no different from analogue filtering that was done in the past by others.
There is no doubt that the 'elite sport' that Ian referred to is in full play. If in any doubt, read the claims and counterclaims of major microprocessor manufacturers and personal computer operating system vendors (I think you can guess to whom I refer). Multi-million dollar court battles have become a spectator sport ... at least for the IT observers. The only thing we can be sure of is that we, as customers, will be the ones who pay the price of these battles, with inflated processor and/or operating system prices.
Useful patent references are ...
+ Google Patent Search (US only at the time of writing).
+ Esp@cenet - Worldwide Patent Search, European based.
+ IP Australia
+ British Library Patents Information
+ UK Patent Office
+ Canadian Patent Office
+ European Patent Office
+ United States Patent and Trademark Office
Elliott Sound Products | Phase Shift Delay Networks |
Given that analogue systems (by definition) use analogue processes, it's an interesting exercise to look at the arrangements we can use to achieve time delays using only analogue circuitry. It's easily added if there's a DSP (digital signal processor) in there somewhere, but the ICs that used to be available to give short delays (generally less than 500µs) have disappeared, so we need to see how it can be done. The traditional method has always been an all-pass filter, which doesn't affect amplitude, but does affect phase. More importantly, they can be used to add group delay, which is what we're after.
Group delay refers to a process where a group of frequencies (a frequency range) is delayed by a predetermined amount, almost always to account for a tweeter being closer to the listener than the midrange driver. To be exact, it's not the diaphragm or the voicecoil, but the 'acoustic centre', which is a lot harder to pin down accurately. Mostly it will require measurement, since (for reasons that I've never understood) this information is lacking in every loudspeaker driver datasheet I've seen.
This omission is a real pain, because measuring the acoustic centre of a driver isn't a simple task. It can be estimated using a number of 'rules of thumb', but ultimately it comes down to measurement [ 1 ]. How you do that depends on what equipment you have available. As a first approximation, the acoustic centre of a driver can be considered the point where the cone is attached to the voicecoil, but there will be differences when it's measured. I've tried for some time to come up with a simple, foolproof method, but thus far without success.
For the examples shown here, the difference between acoustic centres is 25mm, which amounts to a time delay of 73µs (based on the velocity of sound being 343m/s). This changes with temperature and humidity, and you only need to settle on a suitable average value. The phase shift network used to create the delay can be second, third or fourth order, meaning that each section needs to provide a group delay of 36.5µs, 24.3µs or 18.25µs. Don't worry if this doesn't make sense just yet - all will be revealed as we progress.
When designing delay networks, attempting 'perfection' isn't helpful, as a variation of 10% either way usually makes little difference. The acoustic centre of a driver is not necessarily the same for all frequencies, and most of the time a small variation is of no consequence. Few drivers can maintain a response within ±2dB across their range, and if the delay network can keep the theoretical response within that range then that will generally be acceptable.
This article is essentially an addendum to Phase, Time and Distortion in Loudspeakers, which was written way back in 2001. It's taken me a while to provide the essential details, which are only hinted at in the original. However, the two articles are complementary, as this fills in the blanks in the original, and that provides background information in more detail.
Please note that in all the circuits shown here, I have not included a 100Ω resistor in the final output. This is required to prevent opamp oscillation when connected to coaxial cables, which (like all cables) have inductance and capacitance. This often causes opamps to oscillate, and especially if they have wide bandwidth. If you use any of these circuits, the resistor must be included on the output of the final opamp. Likewise, power supplies and opamp bypass capacitors haven't been included either. As should be apparent, the opamps won't work without power, and will oscillate without bypass caps.
To see how driver misalignment occurs, the drawing below shows an example. The precise location of the acoustic centres of the drivers depends on their construction, and their behaviour is not always predictable. It can vary with frequency, and the only way to determine if there is a problem or not is by measurement. One technique is to use an offset baffle, so the tweeter is mounted further back than the midrange. This can work, but it makes the cabinet harder to build and means there's a greater vertical distance between the drivers. The 'stepped' baffle can also create response problems due to diffraction. Some designers use a sloped baffle, but that means that you are listening off-axis.
Figure 0.1 - Driver Offset Causing Tweeter Signal Delay
A flat baffle means that the tweeter is mounted further forward than the midrange driver, so its sound will almost certainly arrive at the listening position first. In this article I've assumed a distance of 25mm between the acoustic centres, but this is only an example. A great deal depends on the drivers themselves, with some tweeters having a relatively deep recess (a partial waveguide), and others less so. Likewise, some midrange drivers are fairly shallow, while others are much deeper. As noted above, it would be helpful if driver manufacturers provided data on the acoustic centre, but they don't.
It's usually obvious with most drivers that the voicecoils will not be aligned. The voicecoil gap is not a reliable indicator of the acoustic centre, but knowing that they are misaligned is usually a pretty good indicator that some remedial action may be needed. Just how much depends on the crossover network and the response deviations of the drivers themselves. There's probably little point trying to correct a 2dB dip (due to delay) if the midrange driver has a +2dB peak at the crossover frequency.
All crossover networks have group delay, but it's the same for the high and low pass sections, assuming symmetrical slopes. This applies whether the crossover is active or passive, as it's a simple function of physics. The problem of 'time-alignment' becomes apparent when the acoustic centres of the midrange and tweeter drivers are not at the same distance from the listener, and this is almost always the case. It's also highly driver-dependent, as some drivers have their acoustic centre further forwards (or backwards) than others. As noted above, accurate determination of the acoustic centre is not trivial, and it's not a simple 'fixed position', as it can change with frequency.
The effects of a time offset become worse as the filter order is reduced. This is almost certainly the opposite of what you would have thought, but the graphs below show the reality. These are all done using electrical summing, which is the worst case - acoustical summing is never quite as dramatic, but the general trend is the same. I ran simulations with 6, 12 and 24dB/ octave filters, all with the same crossover frequency (2.5kHz) and with an appropriate (phase shift network derived) delay (25mm or 73µs) applied to the midrange. The tweeter delay has to be greater than calculated for the 6dB and 12dB crossovers, because they have more overlap across the crossover frequency range. If the delay remained constant to the full 20kHz with zero 'droop' there would be no need to increase it, but phase shift delay circuits have an upper frequency limit.
Figure 1.1 - 6dB/ Octave Crossover
In each case, the red trace shows the uncorrected response, with no delay. The delay is set for slightly different periods to obtain improved response with the two lower-order crossovers. For example, with the 6dB crossover, the delay needed to be increased to 108µs to get the response shown, and it could still use some work. Still, the maximum deviation is reduced to ±2dB, and few drivers will match that. However, it would be rather pointless to build an active 6dB/ octave crossover network because not many drivers will perform well, other than at low levels (less than 20W or so), and it's easier to use a simple series passive network. See 6dB/ Octave Passive Crossovers for the details.
Figure 1.2 - 12dB/ Octave Crossover
This same delay was also used for the 12dB crossover, giving a far more respectable result than using 73µs. The 'wobbles' in the corrected response are no greater than 0.25dB at their worst, and that will be better than most drivers can manage across their passband. When cabinet and speaker surround diffraction are included, the result will almost certainly be a great deal worse. Overall, this is not a bad compromise for people who (for whatever reason) don't like higher order crossover networks.
Figure 1.3 - 24dB/ Octave Crossover
With the 24dB crossover, the delay was set for the expected 73µs, and the result is as close to perfect as you'll get. Given that this is electrical summing, there will be other obstacles to obtaining anything near as good when driver response and diffraction effects are considered. As seen in Figure 1.3, a 73µs delay between the midrange and tweeter causes a dip of just over 2dB, which will (probably) be audible, but other factors may also affect the response.
There are many attempts to optimise crossover networks, and some people believe that asymmetrical crossovers are 'better'. While it is possible to arrange for the group delay of each section to be different (typically delaying the tweeter output), at the crossover frequency, even a perfectly aligned asymmetrical network has almost no differential group delay. If you thought that this was a way to create an acoustic offset, mostly you'd be mistaken.
There is never a requirement to apply any correction between the woofer and midrange in a 3-way system. Because the frequency is much lower (my preference is for no higher than 300Hz), the wavelength is such that even a comparatively large offset will have little effect. For example, with a crossover at 250Hz, even an offset of 100mm (291µs) causes a dip of only 0.32dB. This will never be audible as driver response will always have deviations far greater than that. It will cause more issues at higher frequencies, but it will rarely be audible even under ideal listening conditions. 100mm of offset would be unusually high, unless the woofer is particularly large.
Siegfried Linkwitz claimed that "active crossover circuits that do not include phase correction circuitry are only marginally useable" [ 2 ]. Personally, I disagree, for the simple reason that the effects (particularly with a 24dB/ octave filter) are likely to still give a far better result overall than a carefully engineered passive crossover. The latter are notoriously difficult to get right if you aim for 24dB/ octave, and the component sensitivity is high. The 'components' include the drivers themselves, because the voicecoils will change resistance with temperature, and it's almost impossible to correct for that.
Mostly, a good active crossover will beat almost any passive competitor hands down. Adding delay only makes it better, but even without it the results are almost always better than even the most carefully designed passive design. It's pretty much guaranteed that the vast majority of listeners would never pick a 2dB dip at 3.2kHz in anything other than an anechoic chamber, and most wouldn't hear it even there. Small dips are generally considered 'benign', in that they rarely detract from any programme material.
If the midrange and tweeter are not vertically aligned, you'll have issues with directionality at the crossover frequency. The effective combined wavefront will move horizontally (or diagonally) as the signal passes through the crossover region. It used to be common to see drivers mounted with horizontal displacement, but few designers do that any more (other than in 'cheap and cheerful' systems). Predictably, these are not the topic here.
If one driver is closer to the listener than another, the sound from the second driver is delayed. It would be foolish to do so, but imagine that the midrange driver is 1m back from the tweeter. The sound from the midrange driver will reach you 2.9ms later than that from the tweeter, and this will be very audible. In all designs, the actual delay will be much less, and it's based on the acoustic centres of the drivers and their physical position on the baffle. The determining factor is the velocity/ speed of sound in air, taken to be between 343 and 345 metres/ second. Small variations due to air temperature can be ignored because the changes are very small, and attempting to compensate would not be worth the effort.
Many designs use stepped baffles to align the acoustic centres of the drivers, but this comes with caveats. A stepped baffle may create diffraction that can make the cure worse than the disease. The alternative is to delay the output from the tweeter, so that the signals arrive at the listening position with exactly equal delays. This is only important at higher frequencies, where the wavelength is short enough to make the delay cause audible problems. This obviously requires some maths.
λ = c / f (Where λ is wavelength (metres), c is velocity in m/s, and f is frequency)
From the above, it's obvious that the wavelength at 343Hz is one metre, and at 3,430Hz the wavelength is 100mm. An offset is generally considered 'significant' at ¼ wavelength, which is 25mm at 3.43kHz. If a midrange and tweeter are separated by 25mm or more horizontally (the tweeter's acoustic centre in front of that for the midrange), this qualifies as significant at 3.43kHz. In reality, there will be audible issues at lower frequencies, and for the sake of the exercise here I'm going to assume a crossover frequency of 2.5kHz (24dB/ octave, Linkwitz-Riley).
Calculating the delay for a given distance is essentially a rearrangement of the formula for wavelength. Since sound travels at 343m/s, it stands to reason that it will travel 1 metre in 2.9ms. From this we can use the following formula to determine how long it takes for sound to travel the distance between the midrange and tweeter.
Delay = 1 / ( c / distance )
If the acoustic centres are offset by 25mm, the delay is therefore 72.88µs (73µs is close enough). If the acoustic centre offset is (say) 35mm, the delay becomes 102µs. While there are differences caused by temperature, they are insignificant for these calculations. In case you were wondering, no, I will not include formulae using feet, furlongs or fortnights.
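The two formulae above are easy to sanity-check with a few lines of code. This is only a sketch using the numbers quoted in the text (c = 343m/s, 25mm and 35mm offsets); the function names are mine, not part of any published design.

```python
# Wavelength and time-of-flight for an acoustic-centre offset.
# c = 343 m/s is the article's assumed speed of sound in air.

C = 343.0  # m/s; varies slightly with temperature and humidity

def wavelength(freq_hz: float) -> float:
    """Wavelength in metres: lambda = c / f."""
    return C / freq_hz

def offset_delay(offset_m: float) -> float:
    """Delay in seconds for a given driver offset: t = distance / c."""
    return offset_m / C

print(wavelength(343.0))          # 1 m at 343 Hz
print(wavelength(3430.0))         # 0.1 m (100 mm) at 3.43 kHz
print(offset_delay(0.025) * 1e6)  # ~72.9 us for a 25 mm offset
print(offset_delay(0.035) * 1e6)  # ~102 us for a 35 mm offset
```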
The ideal delay is (naturally enough) a real delay-line, but apart from a few (reasonably) high-resolution digital delay ICs that used to be available, this was never really feasible. Even the ones you could get had limited fidelity, but there are no suitable (single IC) devices available any more. Many people have used DSP (digital signal processing) to create both the crossover network and the delay, but there are quite a few who have since purchased Project 09 PCBs and gone back to analogue, because they were not entirely happy with the results. There's no doubt that very good results can be obtained, but you have to pay serious money to get true hi-fi performance.
Converting phase to delay at any given frequency (and vice versa) isn't difficult. The formulae below are for a specific frequency, but for time-alignment we need group delay ...
Delay = Phase° / f / 360
Phase = 360 × Delay × f
For example, a phase shift of 90° at 2.5kHz provides a delay of 100µs, and a delay of 250µs at 2.5kHz requires a phase shift of 225°. These two formulae are useful, but not when designing all-pass filters intended to time-align loudspeaker drivers. They are included for reference, and are very handy to know when you need to make conversions.
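The two conversion formulae can be expressed directly in code; this is just the arithmetic from the text, with my own (hypothetical) function names, reproducing the worked examples above.

```python
# Phase <-> delay conversion at a single frequency (formulae from the text).

def phase_to_delay(phase_deg: float, freq_hz: float) -> float:
    """Delay (seconds) equivalent to a given phase shift at one frequency."""
    return phase_deg / freq_hz / 360.0

def delay_to_phase(delay_s: float, freq_hz: float) -> float:
    """Phase shift (degrees) corresponding to a given delay at one frequency."""
    return 360.0 * delay_s * freq_hz

print(phase_to_delay(90.0, 2500.0) * 1e6)  # 100 us (90 degrees at 2.5 kHz)
print(delay_to_phase(250e-6, 2500.0))      # 225 degrees (250 us at 2.5 kHz)
```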
An all-pass filter shifts the phase of the signal, but more importantly it has group delay. All frequencies below the nominal 3dB frequency of the filter are delayed, and this remains satisfactorily consistent up to about one fifth of the 3dB frequency. If the 3dB point is (say) 13kHz, the group delay will be almost perfectly flat up to 2.6kHz. It's often considered that 3rd order all-pass networks (requiring three opamps) are likely to provide the optimum response with most systems, however using a 4th order delay offers some advantages. The topology of the networks isn't important, but a cascade of 1st order networks is by far the easiest to configure. However, if you need flat group delay to at least 10kHz, the simple approach is not optimal.
Figure 3.1 - 1st Order Phase Shift Network
Figure 3.1 shows the basic 1st order all-pass filter, which is the basis for most of those that follow. As noted above, in the majority of systems you'll need to use a 3rd order network, because it's usually impossible to get enough group delay with a high enough upper 3dB frequency with less. With the values shown, the 3dB frequency corresponds to 90° of phase shift, and this is at 12.9kHz. Note that adding sections does not affect the 90° phase shift frequency, but it does increase the overall group delay.
f90 = 1 / ( 2π × Rp × Cp )
f90 = 1 / ( 2π × 2.2k × 5.6nF ) = 12.9kHz
At this frequency, the group delay is half of that obtained at lower frequencies. It's important that the phase shift and group delay are not significantly affected at the crossover frequency, because that makes the end result far less predictable. Low frequency group delay is equal to twice the time constant of the resistor and capacitor, so ...
Group Delay = Rp × Cp × 2
Group Delay = 2.2k × 5.6nF × 2 = 24.64µs
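Both calculations can be bundled into a small helper, using the Figure 3.1 values (Rp = 2.2k, Cp = 5.6nF) from the text. The function names are mine; this simply evaluates the two formulae above.

```python
import math

# First-order all-pass section: 90-degree frequency and low-frequency
# group delay, for the Figure 3.1 example values (2.2k, 5.6 nF).

def f90(rp_ohms: float, cp_farads: float) -> float:
    """Frequency at which the section gives 90 degrees of phase shift."""
    return 1.0 / (2.0 * math.pi * rp_ohms * cp_farads)

def lf_group_delay(rp_ohms: float, cp_farads: float) -> float:
    """Low-frequency group delay of one section: 2 * R * C (seconds)."""
    return 2.0 * rp_ohms * cp_farads

print(f90(2200.0, 5.6e-9))                   # ~12.9 kHz
print(lf_group_delay(2200.0, 5.6e-9) * 1e6)  # ~24.6 us per section
```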
Adding another section identical to the above creates a 2nd order network, and the only thing of importance (group delay) is doubled. If a third section is added, the group delay is 74µs - three times that of a single network. At 2.5kHz the group delay is a bit less - 71µs. This is less than desired, but may be ok, depending on the crossover frequency and slope.
Figure 3.2 - Group Delay, 3rd Order Network
Figure 3.2 shows the group delay obtained with three identical networks, all using the Figure 3.1 circuit. The delay obtained is flat from zero to 1kHz, and is 10% down (66.5µs) at 4.32kHz. As shown above (notably Figure 1.3), this still provides a summed electrical response that's almost completely flat. This can sometimes be improved by adding a fourth network, but it's usually not necessary. Ideally, the delay would remain the same up to at least twice the crossover frequency, but this means a shorter time constant for each delay circuit, and the subsequent increase in the number needed.
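The curve in Figure 3.2 can be reproduced analytically. Each first-order all-pass section H(s) = (1 - sRC)/(1 + sRC) has a group delay of 2RC/(1 + (ωRC)²), a standard result not spelled out in the article, so a cascade of N identical sections simply multiplies that by N. The sketch below uses the Figure 3.1 values and confirms the quoted figures (flat at ~74µs, about 10% down near 4.32kHz).

```python
import math

# Group delay vs frequency for N identical first-order all-pass sections.
# Each section contributes 2RC / (1 + (2*pi*f*R*C)^2) seconds of delay.

R, C, N = 2200.0, 5.6e-9, 3  # Figure 3.1 values, three cascaded sections

def group_delay(freq_hz: float) -> float:
    wrc = 2.0 * math.pi * freq_hz * R * C
    return N * 2.0 * R * C / (1.0 + wrc * wrc)

print(group_delay(100.0) * 1e6)   # ~73.9 us (flat low-frequency value)
print(group_delay(4320.0) * 1e6)  # ~66.5 us (about 10% down, as in Fig 3.2)
```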
Figure 3.3 - Delay at 1kHz, 3rd Order Network
Talking about group delay may not mean a great deal as shown in Figure 3.2, so the signal waveform is shown above. The red trace shows the input to the network, and the green trace is the output. You can see that the output is delayed by 74µs. This is what you'll see on an oscilloscope, since they don't have the facility to show group delay. If you have a digital scope, you can set the cursors to the peak of the input and output waveforms, and measure the delay that way. You can measure the delay at any frequency, and below 1kHz it remains constant. Above 1kHz, the amount of delay reduces with increasing frequency. Without an oscilloscope (or a simulator), it's very difficult to detect the delay.
Simply using a series of identical networks may seem like a rather pedestrian way to achieve our goal. A second-order network would appear (at least on the surface) to be more 'elegant', but it uses nearly the same number of parts (more if the inverter is included), and has higher component sensitivity [ 3 ]. It's not as easy as simply working out the 90° frequency (to ensure it's well away from the crossover frequency) and determining the group delay with a simple formula. Once we have to use odd-value precision components, the task becomes tedious and error-prone. A second order network is also inverting, where a pair of cascaded first order networks is not (assuming that you use the version shown here, having a resistor feed and a capacitor to ground).
As most regular readers will know by now, if there's a simple and a complex way to achieve the same goal, I will always opt for the simpler approach - provided it doesn't compromise performance. This is a case in point. While you can certainly use a second order phase shift network followed by a first order network to obtain an overall 3rd order network, there is no point if it makes the design more sensitive to component values, and/ or requires the use of odd value resistors (the caps don't change). Both use the same number of opamps, resistors and capacitors, so there's no saving.
Both networks shown here have close to 73µs delay, and without any changes to Figure 4.2 they have identical performance. The difference is that the Figure 4.2 circuit can be improved slightly, to provide the same group delay (give or take a couple of microseconds), but with improved flatness with increasing frequency. Some may find this appealing.
Figure 4.1 - 2 x 1st Order Networks
While the above circuit works perfectly, it may be seen by some as 'old school'. There is an alternative shown below, but despite the requirement for an inverter to maintain the signal polarity, the performance of the two is identical. You'll also note that it requires odd value resistors in the feedback networks, which detract from its simplicity. The Figure 4.1 circuit uses identical sections, and all resistor values can be the same. The phase determining resistors are shown as 2.2k, but they may need to be changed to get the required phase shift and group delay. With the values given, the group delay is 49µs.
Figure 4.2 - 2nd Order Network
This is a second-order phase shift network, implemented with a multiple feedback (MFB) bandpass filter followed by an adder (subtractor if you prefer). The filter Q is 0.5, and the tuning frequency is 7.2kHz with the values given. Its performance is the same as the Figure 4.1 network, but to ensure there's no signal inversion, the final inverter stage is required. Without the inverter, the signal polarity is reversed (180°), which will usually be inconvenient. There is no immediately apparent advantage in using the Figure 4.2 circuit for a second order delay, and the requirement for the inverter makes it even less attractive.
Alternative delays can be achieved by scaling Rp1 and Rp2 and Cp1 and Cp2. The group delay of the two versions is identical when the same values are used for Rp and Cp. The responses are shown below.
Figure 4.3 - Group Delay, Figure 4.1 - Red, Figure 4.2 - Green
The green trace is not visible, because it lies directly below the red trace - they are perfectly aligned. There's a lot to be said for circuitry that remains benign, and where it is easy to calculate the values. MFB filters are useful, but they can be difficult to work with, and doubly so when they are combined with other circuitry as shown here. There is no apparent advantage to the more complex network, and indeed, the opposite is true.
Note that I have elected not to provide the design formulae for the second order network. If it's something you'd like to play with yourself, see Reference 3 which has everything you need and more. The same applies to the Figure 5.2 third order version. While I could provide these data, for most hobbyists it's unlikely that you'll be willing to pursue these more complex designs, especially since only the third order version provides a noticeable benefit (but at the cost of high component sensitivity). I particularly dislike recommending any circuit that requires odd values for resistance and/ or capacitance, because these are not likely to be found in anyone's 'junk box' (including my own).
Of course, the simplest is to cascade three first order networks. We know how to determine the delay easily, and we also know how to select the components for the required delay. Each stage adds its group delay to the total. We determined at the outset that a delay of 73µs was needed, suitable to move the acoustic centre of the tweeter back by 25mm. Each stage needs a little over 24µs delay, and the calculations are straightforward.
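Since per-stage group delay is 2RC, the design calculation is just a rearrangement: pick a standard capacitor, divide the target delay by the number of stages, and solve for R. The sketch below (my own helper, using the article's 73µs target, three stages and 5.6nF caps) lands very close to the 2.2k used in the circuits above.

```python
# Choose the phase-shift resistor for each first-order section, given a
# standard capacitor value and a target total delay spread over N stages.
# Per-stage delay is 2*R*C, so R = (total_delay / stages) / (2 * C).

def section_resistor(total_delay_s: float, stages: int, cap_farads: float) -> float:
    """Exact resistor value per stage; round to the nearest standard value."""
    per_stage = total_delay_s / stages
    return per_stage / (2.0 * cap_farads)

r = section_resistor(73e-6, 3, 5.6e-9)
print(round(r))  # ~2173 ohms -> nearest standard value is 2.2k
```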
Figure 5.1 - 3 x 1st Order Networks
The next circuit is harder to recommend. It does have some good points, but they are overshadowed by its component sensitivity and the requirement for some inconvenient resistor values. It does let you get to a higher frequency for a given time delay, but the difficulty of calculating the values (which are critical) makes it less attractive than the simpler method shown above. There are no real component savings, as it has the same number of opamps and capacitors, and only one less resistor.
Figure 5.2 - 2nd Plus 1st Order Networks
This is where things get tricky. When the two circuits are added, component sensitivity becomes quite extreme, and working out the values needed is partway between a lottery and an exercise in advanced maths. Everything affects everything else, and changing the value of R1, R2 or R3 can affect the performance far more than you'd ever expect. While it certainly has an advantage that you can achieve very flat group delay up to much higher frequencies than the 'simple' version, if you don't use precision resistors and capacitors (nothing greater than 1%), the end result may be unsatisfactory.
Like most MFB circuits, the second version is very sensitive to component variations. A difference between the first two caps (Cp1, Cp2) of only 5% causes a peak or dip of over 1dB, and even Cp3 is critical. Basically, you need to use 1% tolerance parts for all resistors and capacitors. In comparison, no value is critical in the 'simple' version, and if 1% resistors are used throughout, no capacitance change will affect the frequency response. If the caps are not as calculated, group delay is affected, but that's to be expected. This makes the Figure 5.1 version far more attractive, as it uses just one more resistor, and component sensitivity is low. This makes it far better suited to experimentation, and you can easily add more 1st order sections if desired.
Figure 5.3 - Group Delay, Figure 5.1 - Red, Figure 5.2 - Green
As you can see, the two circuits are identical below 1kHz, and the 'simple' version is about 10% down (66µs) at 4kHz, somewhat shy of twice the crossover frequency. The Figure 5.2 circuit reaches almost 8.9kHz for the same reduction, so (at least in theory) it should give a better result. In reality (as shown in Figure 1.3) the end result is near perfect with either circuit. It's a slightly different matter with the lower order crossovers. If the Figure 5.2 network is used with a 6dB/ octave crossover, the ripple is reduced to just under ±1.5dB, with a 1.4dB dip occurring at 17kHz.
Applying the Figure 5.2 delay to a 12dB/ octave crossover gives a peak of 0.6dB at 2.84kHz. Perhaps surprisingly (perhaps?), this is worse than the response obtained with the 'simple' circuit, which has a maximum deviation of +0.29dB/ -0.23dB. It's not always obvious that a theoretically superior circuit can give results that are worse than a simpler version, but the comparison here shows that it can happen. Whether (or not) you will hear the difference is something that has to be tried - simulations work very well, but are 'perfect' - there's no 'real-world' variation in component values or parameters.
It may seem pedantic, but there is a very real difference between a phase shift induced group delay and an actual delay. Group delay is frequency dependent, and developing a circuit with constant group delay over the audio band is difficult using analogue electronics. One method is to use a length of coaxial cable, but when you need a delay of up to 100μs that becomes impractical. I doubt that anyone wants to accommodate 20km (yes, 20,000 metres) of RG58 coax just to obtain a delay of 100μs, but that's how much you'd need. A 'typical' coax cable has a propagation delay of about 5ns/ metre. This is significant for radio frequencies, but pretty much useless for audio. The method of choice now is a DSP, which can be programmed to apply any delay you like (within reason).
However, in an otherwise completely analogue circuit that means adding an ADC, the DSP and a DAC to return to the analogue domain. While the difference (compared to a phase shift network) is certainly real, it's not generally a problem with loudspeaker systems. People have been using these simple circuits for many years to achieve time alignment, and the results are always 'good enough' for audio. The reasons for this are fairly simple - audio is slow. No instrument can produce an instantaneous pulse signal for example, with the possible exception of a synthesiser. However, these will always be set up to sound musical to a greater or lesser degree. The characteristics of instruments involve resonances, and the maximum rise-time is limited by how quickly a column of air or a piece of metal, wood or plastic can change direction.
These all provide real constraints on how quickly a sound can reach maximum amplitude, and how long it takes to decay. Another limitation is how quickly a loudspeaker diaphragm can move. Tweeters can obviously move much faster than woofers, and ultimately what we hear depends on our hearing. Up to perhaps 17-20 years old, we will be able to hear to 20kHz, but by age 40 that will be down to around 15kHz, and it decreases further with age. For a 60 year old listener, installing a super-tweeter to extend the range to 30kHz or more is obviously pointless.
So, while a phase shift network is not a 'true' delay, it works well enough in practice to smooth out response anomalies in loudspeaker systems. A discrete impulse will not be delayed at all, and the phase shift network only mangles the waveshape. This may seem like a serious limitation, but it's immaterial in reality because discrete impulses are rarely a part of the music. They are certainly generated by clicks from vinyl playback, but I doubt that anyone really cares if they aren't reproduced perfectly - not being reproduced at all is generally preferred.
While the differences can certainly be demonstrated with simulations and graphs, I don't intend to go into further detail. This isn't because it's too hard, it's simply because it's irrelevant to the topic.
This article is intended as an overview, although the techniques (and simulated results) will be very close to what you'll find via measurement. These are circuits where simulation and reality will be very close, provided the capacitors are the marked value (measurement is recommended to get them as close as possible). This is particularly important with the 2nd and 3rd order networks, and the capacitors should be selected to be within 1% for best results.
I've deliberately stayed clear of formulae for the more advanced techniques, because it's unlikely that they will hold much appeal. This is primarily due to the component sensitivity of anything other than the basic phase shift based delay circuits. The results are clear, and using a delay on the tweeter will almost always provide a response that is flatter than you'll get without it, although with high-order crossover networks (e.g. 24dB/ octave) the extra fuss may not be warranted. Ultimately, it's only worthwhile if you spend a great deal on the drivers, and want the best possible outcome.
Normally, I suggest standard MKT (polyester) capacitors, but in this role I suggest that you opt for polypropylene for best results. They are larger and more expensive, but are warranted in a comparatively complex circuit intended to provide time delay and (hopefully) nothing else. Likewise, skimping on the opamps would be unwise, so for a stereo system it will not be a cheap undertaking. Whether it's worth the effort is something that only the constructor can answer. For experimentation, MKT polyester caps and TL072 opamps will be fine, and you might find that the end result is quite good enough to use in a system.
Determination of the exact delay needed can be difficult, and the networks themselves are superficially simple, but the high-order versions have hidden characteristics that aren't always clear from the descriptions. Ultimately (and despite the term 'phase shift network') we aren't interested in phase shift at all. The required parameter is group delay, and that remains constant only over a limited frequency range before it starts to be reduced. While we would like it to extend over the full frequency range, it doesn't. Unfortunately, physics doesn't care what we'd like; it does what it is predestined to do, based on the component values we choose.
It's well worthwhile to read Phase, Time and Distortion in Loudspeakers, which will help you to understand some of the 'finer' points. The article is fairly old now (18 years at the time of writing this), but nothing has changed. If I were to write the article now, it would undoubtedly have some of the information provided here (it only covers the basics), but it's not going to be re-written anytime soon.
An alternative that's sometimes used with passive crossovers is to apply a frequency offset (which also means a different phase response) in the hope of minimising response disturbances. While this can be made to work, it will almost always be an empirical approach, and will probably only work with the exact drivers specified. On occasion, you'll also see asymmetrical networks, having (for example) an 18dB/ octave filter for the tweeter and a 12dB/ octave filter for the midrange. This too can be made to work well, but almost always requires a time delay circuit or tweaks to the crossover component values to get a satisfactory result.
Overall, there remain far more reasons to use an active crossover and electronic networks to obtain the desired response. The best part with active networks is that there is no loudspeaker driver interaction, as each driver has its own amplifier. The opportunities to get everything exactly right are made a great deal easier, with only small, low cost parts needed to get results that will beat those from even the best of passive designs.
The use of a delay network is entirely dependent on your expectations and the relative driver offset. If the tweeter uses a waveguide, that will almost always move it back far enough that the acoustic centres of the tweeter and midrange are very close together, and no delay is necessary. Most people who have built the Project 09 24dB/ octave crossover have found that there's no need for a delay, because the dip created in most systems is less than the normal response variations from typical drivers.
Elliott Sound Products | Phone Jacks & Plugs
In theory, phone jacks/ plugs are some of the simplest connectors around. They are generally low-cost, and they're used on countless pieces of equipment, from guitar amps (where they are invariably used for inputs, and often for speaker outputs as well) to mobile (cell) phones and many other common audio items. You can (of course) find some that are rather expensive, but that doesn't ensure higher quality (or 'better' audio).
They were originally used in early manual telephone exchanges (central offices) to connect the caller to the desired party, and these were always TRS (tip, ring and sleeve) types. They were used because the phone system (POTS - plain old telephone service) is balanced, so tip and ring were used for the phone connection, with the sleeve grounded or floating. The most common jack for guitar and other musical instruments is just a TS (tip and sleeve) mono type.
There are many different sizes, with the 1/4" (6.35mm) being the original standard, but now we also have 3.5mm (commonly referred to as 1/8"). 2.5mm versions are also used, although these are less common (and far easier to damage) than the larger types. It appears that the 3.5mm types have been 'converted' into an imperial measurement (albeit wrong) by those who don't speak metric. A reader alerted me to this, and a measurement confirms that they really are 3.5mm.
In all cases, the sleeve is also the main support structure for the jack plug, and it was always intended that this would be earth/ ground. For reasons unknown, Apple (being the pack of bastards they are) changed that for TRRS jacks, making the sleeve the mic connection (or video connection where appropriate), and using the second ring as the ground. To say that this is uninspiring (and IMO bloody stupid) is to put it very mildly indeed, and all it achieved was to make Apple accessories incompatible with other devices. Most mobile phone makers have adopted the Apple convention - probably not because they wanted to, but to make accessories interchangeable.
Fig. 1 shows the three standards, commonly available in 3.5mm and 6.35mm sizes. The drawing shows how the sections are supposed to be used, but as noted below the TRRS plug and jack wiring has been hijacked so it doesn't make sense. With metal plugs, the housing is always electrically connected to the sleeve, so with the mangled wiring scheme first implemented by Apple, that would make the housing connected to the mic wiring. That makes absolutely no sense however you look at it, but it became the de facto 'standard' and Android phones (and tablets) adopted the same idiotic scheme.
The jack is useful for more things than you may think. When stripped down to the basics, the threaded collar, washer and nut make a fine 'bearing' for pot or rotary switch extension shafts. When it was available, the ESP extension shaft used the collar from a phone jack as the 'bearing', so the shaft wasn't just rotating in a hole in the front panel. However, they aren't especially easy to get apart - I used my lathe as other methods are hit-and-miss. The cut end also has to be re-riveted to hold together, requiring tooling that you'd probably have to make for yourself (as I did).
The internal structure of the plug is not so easily accessed. There used to be plugs that had a screw-on tip, and they could be dismantled easily. Unfortunately, it was common for the tip to unscrew and disappear, and all modern versions are riveted. The connections for the rings are generally concentric tubes with insulation between each. The standard of construction isn't always as one might hope, but failures (other than wear and tear after years of use) are (surprisingly) uncommon.
Many jacks (sockets) are simple types with no switching or other 'fancy' stuff. Fig. 2 shows the most basic - a mono TS plug and jack. These are common for basic connections, where no switching is required. This connector has been used for decades for musical instruments and amplifiers - generally as the speaker output. IMO it's not at all suitable for speakers, but it's so common that it's pretty much impossible to change. A major disadvantage when used on a speaker cabinet is that the tip and sleeve are shorted during insertion - not good for solid-state amplifiers!
Note: The sleeve should always be ground. This is the way these connectors were designed, and the way they are supposed to be wired. Unfortunately, Apple, in its 'wisdom', changed that for TRRS types (see below) and other manufacturers followed suit. The 'alternative' connection is stupid and makes no sense. The sleeve has a heavy termination designed for anchoring the shield, but it's not used for its intended purpose when the wiring scheme is changed by idiots! It's the stupidest decision I've come across in a long time.
Most of these sockets are not insulated from the chassis, so the sleeve is forced to be at earth/ ground because it's attached to the chassis by default. Insulated types are also available, but they have a plastic housing that isn't very strong and may be easily broken. Many of the plastic sockets include some switching.
Headphones are almost always used with TRS plugs, and the Tip is always intended to be the left channel. The same wiring is used for 6.35mm and 3.5mm connectors, and adapters are readily available to convert from 3.5mm to 6.35mm, since most fixed stereo systems use the larger version. The stereo version is often used for balanced connections, and 'combo' connectors are available that will accept either XLR or stereo (6.35mm) phone plugs for the input. The terms 'hot' and 'cold' refer to common terminology for balanced circuits, where 'hot' is the positive signal and 'cold' is the negative signal, or no signal in the case of pseudo-balanced circuits. See the article Balanced Audio Interfaces for info on this topic.
When used with a stereo headset (stereo headphones plus a mic connection), the above shows how it was intended that the connectors should be wired. Unfortunately, Apple decided that this was too sensible, and changed it so the sleeve is the mic connection, and the second ring is used for earth/ ground. Android phones eventually did likewise so that headsets would be interchangeable. This non-standard (and IMO just plain stupid) connection scheme conveys zero actual benefit (quite the reverse in fact, as it's hard to connect the shield so that it adds strain relief). Non-sensible wiring schemes aren't confined to phones - many manufacturers (e.g. Sony, Panasonic, Toshiba) have also used the sleeve inappropriately. This will often happen whenever any connector has more than 2 wires - some lunatic will decide to rearrange them rather than follow accepted standards.
When the plug is fully inserted, the Tip connection in the socket is meant to engage with the notch on the plug. This provides some resistance against pull-out, and it requires a deliberate effort to disengage the connectors. It's not a 'true' latching system though, and accidental disconnection is fairly easy. There are true latching phone plug/ jack combos, but they're not common.
TRS sockets often have basic switching that disconnects internal speakers (for example) when a set of headphones is plugged in. This is very common, and the connections are shown next.
Simple switching such as that shown is very common. The signal will rarely be the actual speaker output, as connecting headphones directly to a power amp would not be sensible. This is because their sensitivity is measured in dB SPL/ mW, and even a 10W (8Ω) amp will deliver around 2.5W into 32Ω headphones (anyone care for 134dB SPL peak, assuming the 'phones survive?). If you need indirect switching, then other options are available. Not all can be obtained for all sizes though, with 6.35mm sockets usually having the most (and most robust) switching options.
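The arithmetic behind that warning is easily checked. This is a sketch only: the 100dB SPL/ mW sensitivity is an assumed (but typical) figure, not something stated above.

```python
import math

# Rough arithmetic behind the headphone warning: a 10 W (8 ohm) amplifier
# driving 32 ohm headphones directly. The 100 dB SPL/mW sensitivity is an
# assumed (but typical) figure, not taken from the text.
p_rated, z_rated = 10.0, 8.0   # amplifier power rating and rated load
z_phones = 32.0                # headphone impedance
sensitivity = 100.0            # dB SPL for 1 mW input (assumption)

v_rms = math.sqrt(p_rated * z_rated)   # maximum output voltage (~8.94 V)
p_phones = v_rms ** 2 / z_phones       # power delivered to the headphones
spl = sensitivity + 10 * math.log10(p_phones / 1e-3)

print(f"{p_phones:.1f} W -> {spl:.0f} dB SPL")   # 2.5 W -> 134 dB SPL
```

With a more (or less) sensitive set of headphones the SPL figure shifts accordingly, but it remains firmly in hearing-damage territory.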
The way the switching is performed varies with different designs. Some use a fully isolated switch, while others have one contact wired to a socket terminal. The version shown is the simplest arrangement, using simple SPST switches for Tip and Ring. Where isolated switching is required, a more complex mechanism is needed.
The switching that's included ranges from a simple normally closed (NC) switch to SPDT (single-pole double-throw) as shown above or DPDT (double-pole double-throw). In some cases the switches are connected to one or more of the contacts, and in others they are separate (I've shown a separate isolated SPDT switch). This allows the user to use the switching circuits independently of the contacts. This is often used to apply power to the device when a jack plug is inserted - particularly if the product is battery powered.
Switching is generally available with both 6.35mm and 3.5mm connectors, although the type of switching can be more limited with small connectors as there's less space for complex contact assemblies. One of the difficulties isn't making the switches themselves, but ensuring that they are mechanically rugged enough to withstand normal 'abuse' - especially where the connector is expected to last for a long time.
In some products, the switch may be used to detect that something has been plugged in, and this is common for computers. When a plug is inserted, software detects that 'something' has been plugged in, and you may be asked to identify if it's a 'line' input (typically ~100mV) or a microphone (1-5mV). Circuitry is switched accordingly under software control.
Phone plugs and jacks are one of the most common audio connectors around, and they serve a need in many different pieces of equipment. They are compact, fairly reliable and easily wired by a hobbyist with reasonable soldering skills. They are not intended for high power (although 6.35mm [¼"] types carry the current for 100W from a guitar amp quite happily), and they're available anywhere - at least in their basic form for sockets. 2.5mm types aren't recommended unless you already have equipment that uses this size. Because these are so small they are physically weaker than their larger brethren and are more easily damaged.
As mentioned above, the sleeve is intended to be ground, and the plug has a strong anchor to hold the shield (and the cable via the clamp at the end) and provide strain relief for the smaller and more delicate internal wire(s). The fact that most jacks have a grounded sleeve connection shows that this was always the intent. However, if you need a TRRS plug for stereo headphones and a mic (for a phone or a tablet) then you are stuck with the stupid arrangement that's now used for most (if not all) modern devices.
This article is intended as a quick look into phone plugs and jacks, and not every combination can be covered. For example, 5-pole (TRRRS) types are made, but they're not readily available and decidedly non-standard. If you need more than 4 connections (TRRS) then I'd suggest that you choose a different type of connector.
Elliott Sound Products | Passive Line Level Crossovers (PLLXO)
For reasons that are unclear to me, some people seem to imagine that all circuitry should be passive. This is clearly not possible for power amplifiers, and presumably they will have to be blessed, then coated liberally with fairy dust (more commonly known as snake oil) so as not to affect 'the sound'. The notion that a passive line-level crossover (PLLXO) must be 'better' (no horrible opamps for example) is wishful thinking, and doesn't stand up to scrutiny. Remember that the vast majority of all recordings have had individual tracks and the final mix pass through more opamps than will ever be found in a home reproduction system. It seems that this point is missed, or perhaps it 'doesn't count' for some reason.
The answer to the question posed in the title ('Useful Or Not?') is 'not'. Basically, the whole idea is based on a false premise, and the performance can never reach that of a properly designed active crossover network. While this can be mitigated by using an opamp buffer before and after the passive network to ensure a low source impedance and a high (approaching infinite) load impedance, that means it's not 'passive' any more. Using 'simpler' circuits (valve cathode followers, FET source followers or transistor emitter followers) will increase distortion, and most will fall short of an opamp's performance by an order of magnitude.
A crossover network is always a requirement with any system using two or more loudspeaker drivers. The choice of frequency (or frequencies for multi-way systems) depends on the drivers used, and the slope depends on personal preference, driver protection and the level of complexity the constructor is willing to undertake. While some high quality systems go to great lengths to get everything right, many don't, so the result is not always as expected (or hoped for). The vast majority of loudspeakers have an internal crossover network, ideally using inductors, capacitors and resistors, but on occasion just a single bipolar electrolytic cap may be used (this is not a crossover - it's a cheap (and very dodgy) way to 'protect' the tweeter).
The idea that passive 'line level' (as opposed to speaker level) systems avoid the use of opamps, bipolar transistors, FETs or valves (vacuum tubes) seems to be appealing to some DIY people, but consider that no major manufacturer will attempt to use a completely passive system because it imposes too many restrictions. Snake-oil vendors are not included of course, because they are selling dreams rather than reality. You must 'believe', or the magic will dissipate and reality may even become apparent. That would never do!
My preference is for 'proper' active crossovers, but for a simple system this may be thought difficult to justify. The cost penalty isn't great, but it adds a few more parts. However, these extra parts also ensure that it works correctly, and doesn't rely on the vagaries of the external components (preamp and power amp). All 'line level' crossovers mean that a four-wire (or more) connection is needed for the speaker. This usually isn't sensible for a simple 2-way box that is used at low power, and often a simple 2-way series speaker level crossover is all you need. An example of just such a system is shown in Project 73 (Hi-Fi PC Speaker System), and that shows a series network. This has been in daily use for nineteen years (at the time of publication of this article), and has seen several different PCs in its time. Apart from one repair (a faulty electrolytic capacitor in the power supply), the system hasn't missed a beat in all that time!
For the time being, we'll imagine that a PLLXO is (potentially) viable, and look at the limiting factors. These are always present with any system, but fully passive filters are far more easily compromised than an equivalent active system. The more compromises you have to make, the greater the performance degradation.
If you want to bi-amp using a passive crossover then there's really no need to make it complex unless you are after something that has greater than 6dB/ octave slope (as discussed in this article). However, to be useful the network requires a sufficiently low output impedance to ensure that it isn't loaded by the following power amplifier. Any loading will not only alter the crossover frequencies, but also create response errors. If the passive network is loaded by an impedance that's ten times the nominal filter impedance, the frequency shift is minimal, but there will be a level difference of 0.8dB between the high and low pass sections. A pair of simple networks are shown below, with a nominal crossover frequency of 3.38kHz. It's not recommended, and it's shown only to demonstrate the principle.
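The 0.8dB figure quoted above comes straight from the voltage divider formed by the low-pass filter's series resistor and the finite load. A quick check, assuming a 1k first-order network loaded by ten times its impedance:

```python
import math

# The ~0.8 dB figure: a first-order low-pass (series R, shunt C) loaded by
# ten times the filter impedance loses passband level through the divider
# formed by R and the load, while the high-pass section (shunt R after a
# series C) does not.
r = 1_000.0        # nominal filter impedance (1k)
r_load = 10 * r    # load impedance, ten times the filter impedance

lp_gain = r_load / (r + r_load)        # low-pass passband gain (10/11)
diff_db = -20 * math.log10(lp_gain)    # level error relative to high-pass

print(f"{diff_db:.2f} dB")   # 0.83 dB
```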
First, it's necessary to determine the optimum crossover frequency. The frequency is determined by the following formula ...
fo = 1 / ( 2π × R × C ) Where fo is the desired XO frequency
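As a worked example, the formula can be rearranged to find the capacitor for a 1k network at the frequency used in these examples. The choice of 47nF as the nearest standard value is my assumption for illustration:

```python
import math

# Rearrange fo = 1 / (2*pi*R*C) to find C for a 1k, 3.38 kHz network,
# then check what the nearest standard value actually gives. The 47 nF
# 'standard value' choice is an assumption for illustration.
r = 1_000.0          # filter impedance
fo_target = 3_380.0  # desired crossover frequency (Hz)

c_exact = 1 / (2 * math.pi * r * fo_target)   # exact capacitance required
c_std = 47e-9                                 # nearest standard value
fo_actual = 1 / (2 * math.pi * r * c_std)     # frequency with 47 nF

print(f"C = {c_exact * 1e9:.1f} nF, 47 nF gives fo = {fo_actual:.0f} Hz")
```

The exact value works out to about 47.1nF, so a 47nF capacitor lands within a few hertz of the target.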
Armed with this, the networks can be designed. The drawing below shows both first (6dB/ octave) and second (12dB/ octave) filters. With an infinite load impedance (or close to it), the 6dB/ octave filter will sum flat to within well under 0.01dB - a perfect result. However, an infinite load impedance isn't possible, so it will have to be something finite (which is most inconvenient).
Figure 1 - First & Second Order PLLXO, 1k Impedance, 3.38kHz Crossover Frequency
It's an absolute requirement that the source impedance should be no more than a few ohms, or the crossover frequency will be affected, as will the relative levels between high and low pass filters. The minimum impedance for all networks is close to 1k, with an impedance at the crossover frequency of 1.414kΩ. This isn't an easy load for most preamps, and is completely unrealistic for a passive preamp. This is doubly true if the 'passive preamp' has only a volume pot, or uses a transformer to get gain. It's even a difficult load for some opamps. The impedance of the following power amplifiers has to be at least 100kΩ to prevent level variations. It should be apparent that this isn't a viable option for the vast majority of systems.
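The 1.414kΩ figure is easily verified: for a series R-C (or C-R) first-order network, the reactance equals the resistance at the crossover frequency, so the magnitude of the input impedance there is √2 × R. A quick check:

```python
import math

# Check the quoted input impedance at crossover: for a series R-C (or C-R)
# first-order network, the capacitive reactance equals R at fo, so the
# magnitude of the input impedance is sqrt(2) * R. The minimum (well away
# from fo) approaches R itself.
r = 1_000.0
fo = 3_380.0
c = 1 / (2 * math.pi * r * fo)     # capacitor that sets fo with this R

xc = 1 / (2 * math.pi * fo * c)    # reactance at fo (equals r by design)
z_in = math.sqrt(r ** 2 + xc ** 2) # series R-C magnitude at fo

print(f"|Z| at fo = {z_in:.0f} ohms")   # 1414 ohms (1.414 k)
```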
If you'd prefer a 12dB/ octave filter then you are in for a world of pain. It can be done as shown above, and the filters still need a very low source impedance. The required load impedance now needs to be around ten times higher than before, so you need power amps with a 1MΩ input impedance, and there will be a 0.5dB dip at the crossover frequency. You also have to reverse the phase of one driver to prevent having a deep notch at the crossover frequency (all second-order crossover networks require a polarity reversal).
Provided you know the exact input impedance of the power amplifiers you are using, this can be used as part of the filter network. While this works with high-pass filters, it's not ideal for the low-pass section. The amp's input impedance is in parallel with a capacitor but in series with the low-pass resistor, and that reduces the level of the low frequency section. If the power amps have gain controls this can be addressed, but the input impedance (and therefore the gain control) still needs to be at least 100k to 1MΩ. With a 100k load, the 12dB/ octave filter will have the response shown below.
Figure 2 - Second Order PLLXO Frequency Response
I don't know who originally thought that a PLLXO was a good idea, but in a nutshell, it's not. Certainly, some of the issues can be addressed using capacitors and inductors, but then you have a system that still needs a low source impedance, but it also needs high-value inductors, and a well defined and unchanging load impedance. If it's imagined that this is somehow 'better' than a proper active filter using opamps, then be prepared to be surprised (but not in a good way).
There is one (very small) benefit, in that you now have a line level crossover that uses separate power amps for each driver, so there is no need to be concerned about driver impedances. However, a proper active crossover will outperform it in every way. The idea that opamps somehow 'ruin' the sound is just silly, and an active crossover is a far better (and more predictable) option overall. The circuits shown here are examples only, and I don't propose to discuss the design process in any more detail. Any circuit that is so dependent on external influences (in this case, output and input impedances) is not especially useful unless it's incorporated within the main chassis and doesn't rely on any external equipment.
If you are game enough, you can use capacitors and inductors to realise the filter function required. There's one small problem that I have covered before, namely that inductors are the worst passive components you can buy. Because they rely on magnetics, they are very susceptible to stray magnetic fields, their internal resistance is often rather high, and they suffer from 'self resonance' due to the distributed capacitance within the windings.
However, I'll persevere because you can buy line-level crossovers that use them. The requirements as described above do not change, so the source impedance must be low (ideally very low) and the filter characteristics are affected by the load impedance (the power amplifiers). R1 and R2 in both versions provide the correct terminating impedance for the filters, and if the amplifier's input impedance is less than ten times the resistor value, response will be seriously affected. There's another small trap waiting for you as well, namely the resistance of the inductors. They are comparatively high values, and will require many turns of fine wire and a ferrite magnetic path. Air-cored inductors would be impossibly large, and very susceptible to magnetic interference. The 12dB/ octave filter is aligned for a Q of 0.5 (Linkwitz-Riley), so the outputs will sum flat.
Figure 3 - First & Second Order L/C Filters
In both examples, an amplifier input impedance of 1MΩ will cause a dip of 0.085dB at the crossover frequency. This is reduced if the impedance is higher, and made worse if the impedance is lower. The amplifier's input impedance can be made a part of the circuit. For example, with an amp having an input impedance of 22k (very common, and used in most ESP designs), R1 and R2 can be increased to 18.33k. That provides almost exactly a 10k load to the filters, and they will be close to perfect (or as 'perfect' as can be achieved with inductors). In reality, there will be response anomalies caused by the winding resistance of the inductors, and adjustments will be necessary to suit the inductors you use.
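The parallel combination is easily verified, using nothing beyond the values quoted above:

```python
# Check the termination example: 18.33k in parallel with a 22k amplifier
# input impedance should restore the intended 10k filter load.
def parallel(a: float, b: float) -> float:
    """Resistance of two resistors in parallel."""
    return a * b / (a + b)

z_amp = 22_000.0   # amplifier input impedance (22k)
r1 = 18_330.0      # increased terminating resistor (18.33k)

r_load = parallel(r1, z_amp)
print(f"{r_load:,.0f} ohms")   # 9,999 ohms - almost exactly 10k
```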
Calculating the values isn't difficult. The standard formulae are used for both the capacitor(s) and inductor(s), and for the second order filter the Q must be accounted for. The Q for a second order Linkwitz-Riley filter is 0.5, so if RL is 10k and we use the same crossover frequency (3.38kHz) ...
XL = XC = RL / Q     Where XL is inductive reactance, XC is capacitive reactance and RL is load impedance
L = 2 × RL / ( 2π × fo )
C = 1 / ( 2 × RL × 2π × fo )
For the 6dB/ octave filter, the Q is always 0.5, and the values of L and C are based on the actual resistance (10k). The value of 2 × RL is only necessary for the 12dB/ octave version. Basically the same formulae are used for speaker crossovers, except the impedances are far lower, meaning higher capacitance and lower inductance. Should you prefer a Butterworth response, you divide the load resistance by 0.707 ( 1 / √2 ). Remember that one output of the 12dB/ octave filter must have its phase inverted - it makes no difference whether it's active or passive, this is required!
It's always a good idea to work any calculated values backwards to double-check your results. If you do this with the values shown in Figure 3, you can re-calculate the frequency and Q of the second order filter. This is also useful to check a circuit you find elsewhere. The formulae you need are as follows ...
fo = 1 / ( 2π × √( L × C ))     Where fo is the crossover frequency
Q = R / √( L / C )
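As a worked example, here is the 10k/ 3.38kHz Linkwitz-Riley case run forwards (using the standard reactance relations for a Q of 0.5) and then backwards as the check described above. The numbers are my own calculations, shown as a sketch:

```python
import math

# Worked forwards and backwards: second-order (Q = 0.5, Linkwitz-Riley)
# L/C values for a 10k load at 3.38 kHz, then the back-check on fo and Q.
rl = 10_000.0   # load impedance
fo = 3_380.0    # crossover frequency (Hz)

l = 2 * rl / (2 * math.pi * fo)        # XL = 2 * RL at fo -> ~0.94 H
c = 1 / (2 * rl * 2 * math.pi * fo)    # XC = 2 * RL at fo -> ~2.35 nF

# Back-check: recover fo and Q from the component values.
fo_check = 1 / (2 * math.pi * math.sqrt(l * c))
q_check = rl / math.sqrt(l / c)

print(f"L = {l:.3f} H, C = {c * 1e9:.2f} nF")
print(f"fo = {fo_check:.0f} Hz, Q = {q_check:.2f}")
```

The back-check recovers 3380Hz and a Q of 0.5 exactly, and the 0.94H inductor shows just how awkward these values become at line-level impedances.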
Quite clearly, the values can be somewhat 'inconvenient' (no standard values), so you can either change the crossover frequency or use capacitors in parallel to get the desired value. The inductor values are also inconvenient, but they will be custom-made so can be wound to provide the exact inductance required. Be aware that ferrite pot-cores saturate easily, so you'll almost certainly need to use a core that's larger than expected. If you expect to use a 'line level' voltage of more than 1V RMS, saturation becomes more likely. I do not intend to go through any of the coil design processes, because it would just be a waste of my time. The idea of a passive L/C line-level filter is (IMO) extremely silly, and it deserves no more attention than already provided.
For anyone who still thinks this is a 'good idea', you are now on your own. The end result will be irksome to build, sensitive to magnetic fields, more expensive than an opamp filter, and it won't work as well. Admittedly, you don't need a power supply, but you do need a preamp with low output impedance, or the filters have to be designed with the actual output impedance as part of the design process. I'm not going there.
A rational approach needs active components. There are examples of second-order (12dB/ octave), third-order (18dB/ octave) and fourth-order (24dB/ octave) filters shown in the projects pages, and there's even a state-variable first-order filter. It's part of another project (and is also described in the State Variable Filters article), but it's included here because it's a far better proposition than a completely passive design. The first-order state-variable filter is uncommon, and this is one of the few websites that describes it.
Figure 4 - First Order State Variable Crossover
There's no point showing the response graph because it's perfect! The two outputs combined sum flat, and the frequency has been set to 3.38kHz as before. The frequency can be made variable by using a pot in place of R5, allowing the frequency to be changed at will. There should be a resistor in series with the pot to ensure that the frequency can't be adjusted to anything too high to be useful. Compared to a passive version, the circuit shown doesn't care about the input impedances of your amplifiers, although it does require a low source impedance (in common with just about every filter circuit known).
As with any other circuit using opamps, 100Ω resistors (R6 and R7) are required in series with the outputs if the circuit will be connected to the power amps via shielded cables of more than 100mm or so. These prevent the opamps from oscillating due to cable capacitance. Figure 4 is a real circuit, without compromises, and doesn't require any silly formulae to allow for the input impedance of the power amps. It uses only a single-gang pot (if you need it to be variable), and a dual-gang pot can be used for stereo.
For other slopes (12, 18 and 24dB/ octave) refer to the Project list, as there are examples of each. The 'gold standard' is probably Project 09, which is 24dB/ octave and as close to ideal as you can get. Ultimately, no passive crossover (line level or otherwise) can match the precision and freedom from outside influences as one built properly, using opamps. The passionate hatred of opamps in some circles is baffling, as there are many that come so close to the 'straight wire with gain' ideal that it's hard to even measure their distortion. With a bandwidth from DC to well above the audio spectrum, very low noise and low power consumption (typically less than 5mA for each opamp), it's hard to find any fault with them.
Be that as it may, there are countless websites that will 'explain' how opamps will ruin the sound, and often offer seriously degraded performance alternatives that can never come close to that available from the 'evil' opamp. This has been going on for years, and the PLLXO is just one example of a 'cure' that's far worse than the alleged 'disease'. For those who think that a discrete opamp (using transistors, FETs and other 'conventional' components) is superior to the integrated circuits we use in so many products, you can spend a small fortune to get something that might come close to the common NE5532 opamp, but at many times the size. I cannot understand the 'logic' of this.
It's probably a lack of knowledge of filter principles in general that ensures there will always be people who imagine that passive filters of the types described have 'better' phase response than active filters - especially those using 'nasty' opamps! This is simply untrue. Any filter with a particular slope and/ or Q has the same phase shift as any other, and it makes zero difference whether it's active or passive. As noted in the introduction, the music you listen to has almost invariably passed through possibly hundreds of opamp stages during the recording and mixdown processes. More will be used in a disc cutting lathe (for vinyl), and CD players also use opamps as part of the DAC (digital to analogue converter) and to buffer the outputs for low impedance.
There may be a few 78 RPM discs that were cut directly from the studio feed and perhaps only used a couple of valves in the process, but to imagine that these are somehow 'high fidelity' is clearly preposterous. Passive filters were pretty much all that was available in the early days of recordings, but to think that they are superior to a modern version is wishful thinking.
In many cases, when a user tries something different (such as a PLLXO) in place of a more conventional filter, the result may be different. Unfortunately, for many people 'different' means 'better', so myths are created and others come to the same conclusions. Whether this is due to peer pressure, a feeling of wanting to 'fit in' or simple delusion is impossible to know. In most cases, there will never be even the most rudimentary attempt at a blind test, so the results are unreliable at best, useless at worst. Blind testing is the only way to determine if there's a real difference, but it does not provide a means of knowing which is better. That relies on careful measurements, but 'perfection' is not everyone's goal. Ultimately, if you find the sound of a PLLXO somehow 'pleasing', then by all means use it. Telling others that it's better than the alternatives is an opinion, and as such it generally should be treated with some suspicion.
A hundred years ago, these passive filters started to be used in earnest, for telecommunications systems, early radio (wireless) and a few emerging industrial applications. Back then, this was all that was available, so quite naturally they used what they had. Today we can make filters that are closer to the 'ideal' than ever before, and regression to techniques used a century ago is not sensible.
It's no accident or omission on my part that I'm not offering a spreadsheet to calculate the values needed for any given topology. Since the PLLXO is a flawed concept at best, spending more time to develop a spreadsheet isn't worth the time or effort.
Elliott Sound Products - Small Power Supplies (Part I)
Across the Web, there are countless designs for low current (typically 1A or less) power supplies for preamps, small PIC based projects, ADCs, DACs and almost any other project you can think of. Many are very basic, using nothing more elaborate than a resistor and zener diode for regulation, while others are very elaborate indeed.
For beginners and many experienced people alike, choosing between them can be very hard. One has to decide where extreme precision is needed, how much noise can be tolerated, and just how complex the supply needs to be for the application. Some assume that a 'super regulator' of some kind must be better than a readily available IC solution, but whether it will make an audible difference is neither checked nor tested.
It must be understood that a regulator (in almost any form other than a zener diode) is an amplifier. Admittedly the amplifier is 'unipolar', in that it is designed for one polarity, and can only source current to the load. Very few regulators can sink current from the load, but shunt regulators are an exception!
Since amplifiers can oscillate, it follows that regulators (being amplifiers) can also oscillate. As the bandwidth of a regulator is increased to make it faster, it will suffer from the same problems as any other wide bandwidth amplifier, including the likelihood of oscillation if bypassing isn't applied properly.
There is also an endless fascination by some to build the smallest and cheapest power supply possible. Many circuits can be found that don't even use a transformer, and while some have acceptable or adequate warnings about safety, others don't. Indeed, there is one published design that breaks the wiring code of every country on earth, has no warnings, and is a death trap (this one has its own section in this article - see Cheap Death).
If you are not experienced with mains wiring, do not attempt the following circuits. In some countries it may be unlawful to work on mains powered equipment unless you are qualified to do so. Be aware that if someone is killed or injured as a result of faulty work you may have done, you may be held legally responsible, so make sure you understand the following ...
WARNING: The following description is for circuitry, some of which is not isolated from the mains. Extreme care is required to ensure that the final installation will be safe under all foreseeable circumstances (however unlikely they may seem). The mains and low voltage sections must be fully isolated from each other, observing required creepage and clearance distances. All mains circuitry operates at the full mains potential, and must be insulated accordingly. Do not work on the power supply while power is applied, as death or serious injury may result.
For anyone who is unfamiliar with the terms 'creepage' and 'clearance' as applied to electrical equipment, they may be defined as follows ...
Creepage: The shortest distance across a surface (PCB fibreglass or other insulating material) between conducting materials. Allow at least 8mm for general purpose equipment.
Clearance: The shortest distance through air between conductors. Again, 8mm is recommended, but may be reduced if there is an insulation barrier between the conductors.
The distances are measured between high and low voltage circuitry, and between high voltage conductors where the voltage may track or arc between conductors without adequate separation. System specifications such as IEC60950-1 and IEC61010-1 dictate the required creepage and clearance spacing for a given system. IEC60950-1 regulates the requirements for Telecom Equipment, and IEC61010-1 regulates the requirements for Industrial and Test Equipment. In the US and Canada, UL/ CSA standards apply respectively. In many cases, power supply (especially SMPS) makers will cut slots into the PCB to increase the creepage distance. Different applications have differing requirements, but if you allow 8mm (a little under 0.32") that will cover most cases. 5mm (0.2") should be considered the absolute minimum. This is the distance between the pins and PCB pads of most optoisolators for example.
All countries have electrical wiring codes and standards, but compliance may be voluntary, implied or (in a few countries) mandatory (at least for some products). In any case, if a product is found to be dangerous, there will usually be a recall, which may be mandatory if the safety breach is found to be a built-in 'feature' of the product. It is the responsibility of anyone who builds mains powered equipment to ensure that it meets the requirements set in the country where it's built or sold. The authorities worldwide take electrical safety seriously, and woe betide anyone who falls foul of the standards by killing or injuring someone.
Note: IEC60950-1 and EN60950-1 will be withdrawn in June, 2019 (since amended to December 2020), and transferred to IEC62368-1. IEC62368-1 is the standard for safety of electrical and electronic equipment within the field of audio, video, information and communication technology, business and office machines. The Australian/ NZ version will be AS/NZS62368-1, and UL62368-1 in the US.
We'll start with the ideal regulator and work back from there. The ideal regulator has perfect regulation, so the voltage does not change regardless of load. It is also infinitely fast, so infinitely sudden load changes (over an infinite range of current) have no effect. Noise is non-existent (which also means zero ripple), the output is not affected by any variation of input voltage provided it's above the output voltage, and the voltage remains stable over the entire temperature range ... from -50°C to 150°C would be sufficient.
Needless to say, the ideal regulator does not exist. All regulator circuits have limitations, and it is the job of the designer to determine which limitations will have the greatest impact on the device being powered, and work to minimise those at the expense of other parameters. For example, a simple discrete based preamp will have relatively poor power supply rejection, so noise is a potentially major problem. Since the current won't vary much in use (for this hypothetical design), extreme speed is not needed. This hypothetical supply needs to be reasonably stable and have very low output noise - high speed and extremely good regulation are not necessary.
Another supply might be needed for a medical application where the voltage is critical and the load varies in fast steps (a high speed analogue circuit followed by an ADC, and with digital logic control perhaps). Noise doesn't need to be especially low, since the ADC chip has its own voltage reference which includes good filtering. This supply needs to be very fast to keep up with the changing load current, and requires accurate voltage. It will also need to be inherently safe, because it's for a medical instrument. As such, it will have to be fully certified in the countries where it's used.
The above are but two (extreme) examples of possible supply requirements, but there are as many different requirements as there are circuits. In some cases, it is not possible to suggest a supply unless you know exactly what will be powered from it. In others, almost anything will work just fine. Since The Audio Pages are mostly about audio, I shall concentrate on supplies that are applicable to audio projects, however the same basic principles apply for all power supplies, large and small.
Since most hi-fi products are powered from the mains, we need to galvanically isolate the output of the supply from the mains voltage. This is a vital safety requirement, and cannot - ever - be ignored, regardless of output voltage or power requirements. Galvanic isolation simply means that there is no metallic electrical connection between the mains and the powered device. A transformer satisfies this requirement, but is not the only solution. One could also use a lamp and a stack of photo-voltaic cells ('solar' cells), but this is extremely inefficient. Because most of the alternatives are inefficient or just plain silly (such as the example above), transformer based supplies represent well over 99.99% of all isolation methods. Switchmode supplies also use a transformer, so are included in the above.
Transformers only work with AC, so the output voltage must be rectified and filtered to obtain DC. This is shown in Figure 1 - the transformer, rectifier and filter are shown on the left. For simplicity, mainly single supply circuits will be examined in this article - dual supplies essentially duplicate the filtering and regulation with the opposite polarity. The filter is the first stage of the process of noise removal, and deserves some attention.
C1 (the filter capacitor) needs to be chosen to maintain the DC (with superimposed AC as shown in Figure 2) above the minimum input voltage for the regulator. If the voltage falls below this minimum because of excess ripple, low mains input voltage or higher current, noise will appear on the output - even if the regulator circuit is ideal. No conventional regulator can function when the input voltage is equal to or less than the expected output. It can be done with some switching regulators, but that is outside the scope of this article.
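As a rough cross-check on capacitor sizing, the linear-discharge approximation C ≈ I × t / ΔV can be used, where t is one ripple period (10ms for full-wave rectification on 50Hz mains). This is a sketch only - the 142mA and 1.24V figures are those quoted in the following paragraph, and the function name is mine:

```python
def min_filter_cap(load_current, max_ripple_pp, ripple_freq):
    """Approximate minimum filter capacitance (farads) for a full-wave
    rectifier, assuming the cap discharges linearly for a whole ripple
    period: C ~ I * t / delta_V. Pessimistic, but fine for a first pass."""
    return load_current / (max_ripple_pp * ripple_freq)

# 142 mA load, 1.24 V peak-to-peak ripple, 100 Hz ripple (50 Hz mains, full-wave)
c = min_filter_cap(0.142, 1.24, 100)
print(f"{c * 1e6:.0f} uF")  # ~1145 uF, so a standard 1,000 uF part is in the right region
```

The approximation is pessimistic because the cap actually recharges before a full period has elapsed, which is why a 1,000µF part gives roughly the quoted ripple.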
In the above schematic, there is about 380mV RMS (1.24V peak-peak) ripple at the regulator's input, but only 4.5mV RMS (14.2mV p-p) at the output. This is a reduction of 38dB - not wonderful, but not bad for such a simple circuit. Load current is 142mA. With the addition of one extra resistor and capacitor to create a filter going to the base of Q1, ripple can be reduced to almost nothing. If you wish to experiment, replace R1 with 2 x 560 Ohm resistors in series, and connect the junction between the two to ground via a 100µF capacitor. This will reduce ripple to less than 300µV - a 62dB reduction. Alternatively, one might imagine that just adding another large cap at the output would be just as good or perhaps even better. Not so, because of the low output impedance. Adding a 1,000µF cap across the load reduces the output ripple to 3.8mV - not much of a reduction. While simple, this regulator will actually cost more to build and use more PCB real estate than a typical 3-terminal IC regulator. The IC will also outperform it in all significant respects.
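The quoted ripple reductions are simple voltage ratios expressed in decibels; a one-line helper (the name is mine) confirms the arithmetic:

```python
import math

def db_reduction(v_in, v_out):
    # Ratio of input ripple to output ripple, expressed in dB
    return 20 * math.log10(v_in / v_out)

print(f"{db_reduction(0.380, 0.0045):.1f} dB")  # ~38.5 dB (quoted as 38 dB above)
print(f"{db_reduction(0.380, 300e-6):.1f} dB")  # ~62.1 dB with the extra RC filter added
```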
The regulator in Figure 1 is very basic - it has been simplified to such an extent that it is easy to understand, but it cannot work very well. This is not to say that it's useless - far from it. It must be remembered that the simple regulator will cost more than a 7815 3-terminal regulator IC though. Prior to the introduction of low-cost IC regulators, the Figure 1 circuit was quite common, and a very similar circuit was common using valves (vacuum tubes). Early voltage references were usually neon tubes, designed for a stable voltage. These will not be covered in this article.
While a simple regulator may well be all that's needed for many applications, especially for circuits that use opamps, the regulator itself is generally not particularly critical. This is because most opamps have a very good power supply rejection ratio (PSRR) - the TL072 has a PSRR of 100dB (typical). This means that any low frequency signal on the supply (or supplies) is attenuated by 100dB before finding its way to the opamp's output pin. This varies with frequency!
Please note that the above does not apply if there is a connection from either supply to an opamp's input pin. If this is the case, extensive filtering may be needed to remove supply noise. If any supply noise is presented to an opamp input, it will be amplified along with the signal.
Referring to Figure 2, it should be obvious that the filter capacitor C1 removes much of the AC component of the rectified DC, so it must have a small impedance at 100Hz (or 120Hz). If the impedance is small at 100Hz, then it is a great deal smaller at 1kHz, and smaller still at 10kHz (and so on). Ultimately, the impedance is limited by the ESR (equivalent series resistance) of the filter cap, which might be around 0.1 Ohms at 20°C.
It is important that capacitive reactance is not confused with ESR. A 1,000µF 16V capacitor has a reactance of 1.59 Ohms at 100Hz, or 15.9 Ohms at 10Hz. This is the normal impedance introduced by a capacitor in any circuit, and has nothing to do with the ESR. At 100kHz, the same cap has a reactance of only 1.59 milliohms, but ESR (and ESL - equivalent series inductance) will never allow this to be measured. The ESR will typically be less than 0.1 ohm, and is generally measured at 100kHz. Indeed, at very high frequencies, the ESL becomes dominant, but this does not mean that the capacitor is incapable of acting as a filter. Its effectiveness is reduced, but it still functions just fine. Some people like to add 100nF caps in parallel with electros, but at anything below medium frequency RF (less than 1MHz), such a small value of capacitance will have little or no effect. While this is easily measured in a working circuit, few people have bothered and the myth continues that electrolytic caps can't work well at high frequencies.
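The reactance figures above all come directly from Xc = 1 / (2πfC); a quick sketch (the function name is mine):

```python
import math

def xc(freq_hz, cap_farads):
    # Capacitive reactance in ohms
    return 1 / (2 * math.pi * freq_hz * cap_farads)

c = 1000e-6  # the 1,000 uF example above
print(f"{xc(100, c):.2f} ohm at 100 Hz")                 # ~1.59 ohm
print(f"{xc(10, c):.1f} ohm at 10 Hz")                   # ~15.9 ohm
print(f"{xc(100e3, c) * 1e3:.2f} milliohm at 100 kHz")   # ~1.59 milliohm - ESR and ESL dominate long before this
```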
Contrary to popular belief in some quarters, electrolytic capacitors do not generally have a high ESL. Axial caps are the worst simply because the leads are further apart. ESL for a typical radial lead electro with 12mm lead spacing might be expected to be around 6nH. A short length of track can make this a great deal worse - this is not a fault with the capacitor, but with the PCB designer.
The regulator itself has a number of primary functions. The first (surprisingly) is not regulation as such, but reduction of the power supply filter noise - mainly ripple. Including a reasonably stable voltage as part of the process is not difficult with ICs, so this is included as a matter of course. The regulated voltage is not especially accurate, but this is rarely an issue.
The output impedance should be low, because this allows the voltage to remain constant as the load current changes. For example, if the output impedance were 1 Ohm, then a 1A current change would cause the output voltage to change by 1V. This is clearly unacceptable, and one might expect the output impedance to be less than 0.1 Ohm - however, this is frequency dependent and may include some interesting phenomena with some regulators (LDO - low drop-out regulators can be especially troublesome). For more details of the issues you may face with these types, see Low Dropout Regulators which has information you need to know before using them.
In order to maintain low impedance at very high frequencies, an output capacitor is commonly used. This is in addition to any RF bypass capacitors that may be required to prevent oscillation.
It must also be remembered that in any real circuit, there will be PCB traces that introduce inductance. Capacitors and their leads also have inductance, and it is theoretically possible to create a circuit that may act as an RF oscillator if your component selection is too far off the mark (or your PCB power traces are excessively long).
Bypassing is especially important where a circuit draws short-term impulse currents. This current waveform is common in mixed signal applications (analogue and digital), and the impulse current noise can cause havoc with circuitry - an improperly designed supply path can cause supply glitches that cause false logic states to be generated. Even the ground plane may be affected, and great care is needed in the layout and selection of bypass caps to ensure that the circuit will perform properly and not have excessive digital noise.
In general, linear opamp circuits will not cause impulse currents, because the audio signal is relatively slow. In many cases, the power supply current will not be modulated at all, because the opamp's output current remains substantially within its linear (Class-A) region. Even where the supply current is modulated, it will be a relatively slow modulation, and track inductance is generally insignificant within the audio range.
The essential sections of almost all regulators are shown above (in highly simplified form). The voltage reference is most commonly a band-gap reference, because these are very stable, easy to implement during IC fabrication, and have excellent performance. The nominal reference voltage is 1.25V, and this is easily amplified to achieve the required voltage. Alternatively, the band gap reference can be used to control a current source that supplies a 6.2V zener diode. This voltage is chosen because the positive and negative voltage coefficients of the zener cancel, providing a very stable reference voltage over a wide temperature range.
The error amplifier simply compares the output voltage with the reference. If they are the same (the output voltage may be scaled using a resistive divider as shown), then all is well. If the output voltage is low, the error amplifier makes the appropriate correction, and passes this to the series pass device (most commonly a BJT - bipolar junction transistor), and this process continues (extremely quickly) until the output voltage is restored. Should the output rise (reduced load), the opposite occurs. In many circuits, the input voltage and/or output current is constantly changing, so the error amplifier is always working.
The regulator circuit uses feedback to maintain a low output impedance and to maximise noise rejection. Because all feedback circuits have stability criteria that must be met to prevent oscillation, there will always be a frequency above which the regulator cannot function well. A suitably sized output capacitor is used to maintain the low impedance up to the highest frequency of interest.
Because of the amount of feedback used, most regulators have a very low output impedance. As a result, adding a very large output capacitance does not necessarily reduce the noise as much as one might expect - or even at all. Where extremely low noise is essential, a simple resistor/capacitor filter can be added, but at the expense of load regulation.
There are a number of terms that are used to describe the performance of any regulator. These are listed below, along with brief explanations.
Parameter | Explanation
Load Regulation | A percentage, being the change of output voltage for a given change of output current
Line Regulation | A percentage, being the change in output voltage for a given change of input voltage
Dropout Voltage | The minimum voltage differential between input and output before the regulator can no longer maintain acceptable performance
Maximum Input Voltage | The absolute maximum voltage that may be applied to the regulator's input terminal with respect to ground
Ripple Rejection | Expressed in dB, the ratio of input ripple (from the unregulated DC supply) to output ripple
Noise | Where quoted, the amount of random (thermal) noise present on the regulated output DC voltage
Transient Response | Usually shown graphically, the instantaneous performance with changes in line voltage or load current
There are obviously many more, such as power dissipation, maximum current, current limiting characteristics, etc. These are dependent on the type of regulator, and the specifications and terminology can vary widely. Many of the parameters are far too complex to provide a simple 'figure of merit', and graphs are shown to indicate the transient performance (load and line) and other information as may be required to select the right part for a given task.
One special family of regulators is the LDO (low drop-out) type. Where a common regulator IC might need 2 to 5V input/output differential, an LDO type will generally function down to as little as perhaps 0.6V between the input and output. These are commonly used in battery operated equipment to maximise battery life. Some of these devices are also very low power, so there is a minimum of power wasted in the regulator itself.
Few (if any) regulator ICs presently available have poor performance. While there may be 'better' types one can use, this does not mean that a better (more expensive) regulator will cause a system to sound any different.
Very few audio applications really need anything more than the traditional fixed voltage regulators, such as the 7815 (positive) and 7915 (negative). Yes, they are somewhat noisy, but the noise is generally (but not always) immaterial when the circuit is opamp based. See below for the reason.
A 7815 (or 7915) has a typical output range of 14.4V to 15.6V, so expecting the voltage to be exact is unrealistic. The load regulation (i.e. the change in output when the load current is changed) is anything from 12mV to 150mV when the load current is changed from 5mA to 1.5A. For this test, the input voltage is maintained constant.
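Expressed as load regulation (a percentage, per the table of terms earlier), those 7815 figures work out as follows (the helper name is mine):

```python
def load_regulation_pct(v_nominal, delta_v):
    # Load regulation: output voltage change over nominal output, as a percentage
    return 100 * delta_v / v_nominal

print(f"{load_regulation_pct(15, 0.012):.2f}%")  # 0.08% (best case, 12 mV change)
print(f"{load_regulation_pct(15, 0.150):.1f}%")  # 1.0% (worst case, 150 mV change)
```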
Ripple rejection is quoted as a minimum of 54dB to a typical value of 74dB. These figures can be bettered by using the LM317/337 variable regulators. They have lower noise and better ripple rejection than the much older fixed regulators, but in most circuits it makes no difference whatsoever. Claims that there is some 'quality' of DC that is somehow (magically?) audible are usually nonsense. The use of super regulators is usually unjustified for any opamp circuit, and has marginal justification at best even with very basic discrete designs. For lowest possible noise, a cap is required from the adjustment pin to earth (ground), and this should have a discharge diode fitted between the adjust and output pins (both oriented appropriately for polarity of course).
There are quite a few other regulator types on the market, but the National Semiconductor types seem to have the lion's share of the market as far as normal retail outlets are concerned. Not that there is anything wrong with them - they perform well at a reasonable price, and have a very good track record for reliability. While one can obtain more esoteric devices (with some searching), many of the traditional manufacturers are concentrating on switching regulators, and don't seem to be very interested in developing new analogue designs.
While there are many discrete or semi-discrete regulators to be found in various books, websites (including this site) and elsewhere, they are usually only ever used because no readily available IC version exists. An example is the ESP P96 phantom power regulator - this design is optimised for low noise and the relatively high voltage needed by the 48V phantom system. Regulation is secondary, since the phantom power voltage specification is quite broad. It is still quite credible in this respect, but it has fairly poor transient response, which is not an issue for the application.
LDO (low drop-out) regulators are becoming much more popular, because people like to be able to have regulated supplies from a battery supply. Users would also like to be able to use batteries down to the last drop (as it were). The low dropout regulator achieves this by using a PNP (or P-Channel MOSFET) series pass transistor (for a positive regulator), and the voltage differential between input and output can be less than 0.6V, compared to a couple of volts or more for a traditional regulator. There are some caveats when using LDO regulators though, because they are far less stable than their conventional counterparts.
The series pass transistor operates with gain because it's not an emitter/ source follower. This introduces additional output impedance, so the external load has more influence than with a conventional regulator. Capacitance, ESR (equivalent series resistance) and inductance at the output pin have to be within specified limits to prevent oscillation, so there is some loss of flexibility. A normal 78xx regulator can usually have anything from 100nF to 10,000µF across the output and it will work perfectly happily regardless, but no such liberties can be taken with the LDO version.
In many cases, just substituting the output cap with another having a lower ESR can convert a stable and happy regulator into an RF oscillator. It is essential to get the data sheet for any LDO regulator and make sure that you follow all recommendations to the letter. Instability often results if the output cap isn't large enough or has an ESR that is too high or too low. LDO regulators are not inherently stable, and manufacturer data sheets must be used to determine the stability criteria.
Virtually all LDO regulators rely on the ESR (and perhaps ESL - equivalent series inductance) of the output capacitor to correct the phase response of the internal circuitry to ensure stability. This is a complex area and will not be covered in any detail here. Also, be careful with selection. Many LDOs are designed for low input voltages, and are generally used for providing low voltage (1.2V - 3.3V) supplies for microprocessors and the like. For the most part, they are not suitable for use providing typical opamp voltages (±15V for example). Negative versions are available, but making a selection of either positive or negative parts is difficult because there are so many different types.
For more info on these see the Low Dropout Regulators article.
Because noise is not just 100/120Hz supply ripple, we also need to look at regulator (wide band) noise. Common 78xx/79xx regulators have pretty good ripple rejection, but are usually quite noisy. The noise is predominantly high frequency, and is at frequencies where opamp PSRR is nowhere near as good as it is for low frequencies. As a result, some opamp circuits may produce audible noise that comes directly from the power supplies. In general, this is a non-issue and will not cause any problems at all, but for those occasions where noise is audible, the fixes are quite simple.
One solution is to use adjustable regulators such as the LM317/337. These are much quieter than the 78/79 series ICs, and the difference may be audible, especially in high gain circuits. As an example, the original version of the ESP P37 discrete preamp has a PSRR of around 31dB for wide band noise. 10mV of supply noise will result in 297µV of output noise. This may be audible under quiet listening conditions, although few (if any) regulators will be that noisy. 10mV was a convenient reference level - the data sheet for an LM7815 says maximum noise level is 90µV. In reality, most off the shelf regulators will be fairly similar.
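The PSRR arithmetic can be sketched as below. Note that 10mV attenuated by exactly 31dB gives about 282µV, so the 297µV figure quoted implies a PSRR nearer 30.5dB - the '31dB' is evidently a rounded value (the function name is mine):

```python
import math

def supply_noise_at_output(v_noise, psrr_db):
    # Supply-borne noise reaching the output, given the circuit's PSRR in dB
    return v_noise / (10 ** (psrr_db / 20))

out = supply_noise_at_output(10e-3, 31)
print(f"{out * 1e6:.0f} uV")  # ~282 uV
```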
If the noise floor is audible, then two possible causes need to be addressed. If it's caused by the opamp itself, then replacement with a different (low noise) type is the only solution. If the source is power supply noise, the easiest way to get rid of the vast majority of this noise is simply to use a simple RC (resistance, capacitance) filter at the output of the regulators.
Using 10 ohm series resistors from the supply with 1,000µF caps to ground for each polarity, noise is almost completely eliminated. The supply voltage is reduced by only 100mV for each 10mA of current drawn, which will not affect any audio circuit. This is a far cheaper option than using a relatively expensive discrete power supply that requires exotic opamps and costly 'audio grade' capacitors and other components. The noise can be expected to be reduced by at least 60dB with this simple filter. High frequency noise (the most intrusive, and least affected by the opamp's PSRR) is affected the most by the filter. Note that it is pointless adding a large cap without a series resistor - the output impedance of most regulators is so low that it will have almost no effect.
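For the 10 ohm / 1,000µF filter described, the corner frequency and attenuation follow from the standard first-order RC response. A sketch (names are mine) - the attenuation reaches the ~60dB region at the top of the audio band, which is where the regulator noise is most intrusive anyway:

```python
import math

def rc_attenuation_db(r_ohms, c_farads, freq_hz):
    # Attenuation of a first-order RC low-pass filter at a given frequency
    fc = 1 / (2 * math.pi * r_ohms * c_farads)
    return 10 * math.log10(1 + (freq_hz / fc) ** 2)

r, c = 10, 1000e-6
print(f"corner: {1 / (2 * math.pi * r * c):.1f} Hz")        # ~15.9 Hz
print(f"{rc_attenuation_db(r, c, 10e3):.0f} dB at 10 kHz")  # ~56 dB
print(f"{rc_attenuation_db(r, c, 20e3):.0f} dB at 20 kHz")  # ~62 dB
```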
High frequency noise from the regulators can be reduced by adding a capacitor from the ADJ terminal to earth/ common. It is then essential to add a diode from ADJ to the output to discharge the cap should the output be shorted. There are very few opamp circuits that will genuinely benefit from the extra filtering though.
The easiest way to make a super regulator is to use two regulators in series, with the first one at a higher voltage than required at the output. For example, a 15V output might have an input to the second regulator of perhaps 22V, and additional filtering (as shown below) may be added as well. While ripple will be reduced to virtually nothing at all, will doing any of this improve the sound? Almost certainly, the answer is "No". While many have claimed superior performance (with the usual superlatives and a complete lack of any objective evidence), it is unlikely that anything changed. Note that only the positive side is shown in Figure 4. Refer to the article for complete details.
One popular version is the Jung 'Super Regulator' (a modified version is shown below). While I have no doubt whatsoever that its performance is exemplary, the level of performance achieved is simply not necessary in most audio circuits. The general arrangement is a pre-regulator (an LM317), followed by an opamp based error amplifier, precision reference diode and series pass transistor. In other words, two cascaded regulators. Although it also allows remote voltage sensing in some versions, this is of little use when the power supply and the audio boards are only 100mm or so from each other. The use of a fast opamp and optimised circuitry will certainly give excellent transient response, but no normal audio signal has a high enough frequency to make transient response an issue.
Superlatives abound on many sites describing the circuit. Some people have noted that it may be prone to oscillation (so has to be made slower) in some configurations, and I have received emails from people complaining that this has happened (and no, I don't know why people would complain to me about someone else's circuit). Meanwhile, no-one seems to have noticed that the vast majority of opamps being powered don't actually care one way or another if the DC has 1 or 100µV of supply noise.
Naturally, since the Jung version is popular, others have jumped on the bandwagon. As a result there are several versions of alternative super regulators, many of which will be prone to oscillation, and will almost certainly not provide any measurable improvement in audio performance ... unless they do oscillate of course. Predictably, regulator oscillation can never provide a positive outcome in any audio circuit.
+ +For anyone who wants to make a super regulated system, a far cheaper option would be to use a pair of cascaded LM317s (for example, a pair of P05 boards). At an output current of 150mA, the first regulator reduces the input ripple from 680mV peak-peak (206mV RMS) to less than 470µV P-P (143µV RMS), a reduction of 63dB. The following filter (R3, C3) reduces this to 123µV P-P (42µV RMS), another 11dB. The second regulator reduces this to 116nV P-P (42nV RMS), 60dB - at least according to the simulation. The total is almost 134dB ripple rejection, but a single misplaced track or wire could easily degrade that badly.
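The decibel figures quoted for the cascaded P05 example follow directly from the ratios of the ripple voltages, and are easily checked (the voltages are the simulated values from the text):

```python
import math

def db(v_in: float, v_out: float) -> float:
    """Ripple reduction from v_in to v_out, in dB."""
    return 20 * math.log10(v_in / v_out)

# Stage-by-stage figures for the cascaded LM317 (P05) example
stage1 = db(680e-3, 470e-6)   # first regulator
rc     = db(470e-6, 123e-6)   # R3/C3 filter
stage2 = db(123e-6, 116e-9)   # second regulator
total  = db(680e-3, 116e-9)   # ~135dB exact; 'almost 134dB' in the
                              # text comes from summing the rounded stages
print(f"stage 1 {stage1:.1f} dB, RC {rc:.1f} dB, stage 2 {stage2:.1f} dB")
print(f"total   {total:.1f} dB")
```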
+ +Remember that this is the voltage on the power supply, and the PSRR of any opamp circuit hasn't been considered yet. Discrete circuitry, and especially low feedback designs, are less tolerant of supply ripple, so some circuits of this type may benefit from the additional ripple filtering offered by a cascaded regulator circuit. However, unless you are amplifying exceedingly low level signals, it is unlikely that any of the above will be necessary. Add the 70dB PSRR of any reasonable opamp, and the expected output noise is so far below the noise floor of any system that no further improvement will yield any audible difference.
+ +It is also worth remembering that even straight wires have resistance and inductance, so even if transient response and regulation were perfect at the power supply, 100mm of wire will instantly introduce losses. Remote sensing can be used to counteract this, but for an audio circuit ... complete overkill to achieve no useful purpose.
+ +Figure 4A shows my simplified version of a Jung (et al) 'super' regulator. There are many variations on the basic theme, but many are similar to the original. One notable common part is the D44H11 series pass transistor. This is described as a fast switch, and has a rated fT of 30-50MHz (speed depends on the manufacturer). The opamp (AD825) is also common in many alternative versions, as it is also very fast and can provide more output current than many other opamps. It only appears to be available in a SMD package, and is not a cheap part. Other suitable devices include the AD797, which has lower noise but is considerably more expensive. The LM317 is set up so that its output voltage is about 2.6V higher than the final regulated output. I have eliminated the E96 resistor values that are often specified (499 ohms for example) because they are simply not necessary in this application. 1% or 2% metal film resistors are expected regardless, not for accuracy but low noise.
+ +The output voltage is set by the voltage divider using R6 and R7, and all significant voltages are shown on the circuit. R6 is bypassed by C4 so the AC gain of the circuit is unity, ensuring minimum noise. I haven't built one of these, but a simulation shows that it has extremely low output impedance, but like most regulators it is still unipolar. It cannot sink current from the load, but this is rarely a requirement for any internal power supply. All 100µF capacitors should be low ESR types. The opamp gets its DC from the regulated output. Note that it is quite possible that the circuit shown may oscillate, depending on the devices used, PCB layout, etc. Fast opamps can oscillate easily, and may only need a few millimetres of (unbypassed) PCB track in a supply line to introduce enough stray inductance to cause problems.
+ +As noted earlier, there is no hard evidence to show that the use of this (or any other) 'super' regulator will affect the output from any opamp based circuit. Claims include 'better bass' and/ or 'improved soundstage', but any opamp can supply an output signal to DC, and the output is largely independent of the power supply. There is no reason to expect that having 'perfect' DC will make any audible difference ... provided of course that any comparative test is double blind. Sighted tests are fatally flawed, and while measurements may well show that the DC from a 'super' regulator has lower noise or better regulation than a simple LM317/337 regulator, that does not automatically translate to improved sound quality.
+ +Note: You must consider the possibility of inductive and/or capacitive coupling in and around the power supply. A single misplaced wire can make all your efforts to obtain a 'perfect' DC supply completely meaningless, because there may be significant 'pollution' coupled into the supply or ground wiring. Transformers radiate a magnetic field, and while toroidal types are better than 'conventional' E-I laminated types, there is still some degree of magnetic leakage (especially where the wires exit the transformer). If you really do need an ultra-pure DC supply, the transformer and all mains wiring should be in a separate box, separated from the electronics by at least 500mm or so. If you don't do that, a 'super' regulator is pretty much a waste of time.
Shunt regulators have some advantages over traditional series regulators, despite their low efficiency and comparatively high power dissipation. The advantages of shunt regulators are as follows ...
+ +There are also disadvantages, as is to be expected ...
+ +The simplest shunt regulator consists of nothing more than a resistor and a zener. If designed properly, this is a very simple power supply arrangement, and offers acceptable performance for many applications. For example, the P27B guitar amplifier preamp has a pair of zener shunt regulators on the board, and these give hum free performance despite the very high gain of the preamplifier.
+ +There are very few shunt regulators used in modern equipment. This is not necessarily a good thing, since almost no-one designs in an over-voltage crowbar circuit, so failure of a series regulator is often accompanied by wholesale destruction of the circuitry that uses the regulated supply. This is especially so with logic circuitry ... 5V logic circuits will typically suffer irreparable damage with a supply voltage above 7V.
+ +In the two circuits shown above, it is quite obvious that the high performance circuit will outperform the simple zener. As a quick test (which is by no means conclusive, but gives a good indication), the circuits were simulated. The DC input was deliberately 'polluted' with a 2V peak (1.414V RMS) 100Hz sinewave to measure the ripple rejection of each version. The zener alone was able to reduce the ripple to 11mV RMS, a reduction of just over 42dB.
+ +If R1 and R2 are replaced with a single 100 ohm resistor (omitting C2), ripple rejection falls to 25dB (82mV RMS ripple). This technique for ripple reduction used to be very common when people built discrete regulated power supplies. The two resistors and the 470µF capacitor form a low pass filter, with a -3dB frequency of 14.4Hz. Note that a split resistor is essential - if the 470µF cap were simply in parallel with the zener, there is very little improvement - the RMS ripple voltage is only halved to 40mV, rather than reduced to the 11mV measured using the split resistor method.
+ +Why? Because the zener has a low impedance, and this acts in parallel with the cap's impedance. By splitting the resistance, the capacitor works with the effective impedance of the two resistors in parallel - this is much greater than the impedance of the zener, so the cap has more effect. Needless to say, a larger capacitance gives better ripple performance - doubling the capacitance halves the ripple voltage for example.
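To put numbers on this, the corner frequency can be estimated from the parallel resistance seen by the capacitor. The values below are assumptions (an even 50/50 split of the 100 ohm total feed resistance, and a 470µF cap), chosen to match the example:

```python
import math

# Assumed values: the text gives 100 ohms total; assume an even split.
r1 = r2 = 50.0          # ohms (assumption)
c2 = 470e-6             # farads

# The cap works against the two resistors in parallel, which is much
# higher than the zener's dynamic impedance would be on its own.
r_effective = (r1 * r2) / (r1 + r2)          # 25 ohms
f3 = 1 / (2 * math.pi * r_effective * c2)
print(f"-3dB at {f3:.1f} Hz")
```

This gives about 13.5Hz, close to the quoted 14.4Hz (the exact figure depends on the actual resistor split and the zener's dynamic impedance).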
+ +The opamp based version achieved 2.3µV RMS - over 116dB rejection. This figure must be taken with a (large) grain of salt of course - simulators and real life don't often coincide. In reality, I'd expect about 80-90dB reduction for a 'real' circuit. Please be aware that the opamp based regulator circuit is shown as an example - it is not a working circuit, and would almost certainly oscillate if constructed as shown.
+ +Both circuits are supplying a load current of about 75mA (15V, 200 Ohm load).
+ +For the simple zener version with full load, the zener dissipation is 440mW. This rises to almost 1.7W with no load. If a 1W zener were used, it would fail if the circuit were operated with no load for more than a few seconds. Resistor dissipation remains the same whether the circuit is loaded or not, but it increases if the output is shorted to ground. The two resistors need to be at least 1W, since each dissipates about 500mW.
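These dissipation figures are straightforward Ohm's law. The sketch below assumes a 25V raw supply (the input voltage isn't stated in the text) and an ideal zener; the simulated figures quoted above are slightly higher because a real zener's voltage rises with current:

```python
vin    = 25.0    # volts, assumed raw supply (not stated in the text)
vz     = 15.0    # zener voltage
rfeed  = 100.0   # total feed resistance (two resistors in series)
i_load = 0.075   # 75mA load (15V into a 200 ohm load)

i_feed = (vin - vz) / rfeed              # constant, loaded or not
for label, il in (("full load", i_load), ("no load", 0.0)):
    p_zener = vz * (i_feed - il)         # zener takes whatever is left over
    print(f"{label}: zener {p_zener*1000:.0f} mW")

p_resistors = i_feed**2 * rfeed          # unchanged by the load
print(f"feed resistors: {p_resistors:.1f} W total")
```

The key point survives any reasonable choice of input voltage: the zener's worst case is no load, the resistors dissipate the same regardless, and a 1W zener is not enough.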
+ +The high performance version needs a 5W resistor for R3. Transistor Q2 has maximum dissipation with no load, and this will be around 3.5 Watts. Dissipation is around 2.3W with the rated load of 75mA. While the shunt current can be reduced from the 250mA used in Figure 5, performance will suffer if it falls below about 150mA. This can be reduced by using the same split resistor scheme used for the simple zener regulator, and this will improve ripple rejection performance further as well.
+ +It's worth noting that most shunt regulator designs (whether opamp or discrete based) regulate their own supply voltage. This gives an inherent advantage, in that the supply to the circuitry is stable, thus ensuring that the overall performance is optimised without any requirement for pre-regulation.
+ +Finally, a version that has been used by many constructors is shown in Project 37. This is a simple shunt regulator, but the zener power is boosted by adding a transistor as shown above. Note that the resistor is split, with a cap between the two. As noted in the article, noise is extremely low - 100/120Hz hum can be expected to be less than 20µV or so. I found that it was almost impossible to measure hum in the prototype, since normal circuit and test equipment noise was predominant. Although the latest PCB for P37 now uses ±15V, the regulator is still useful for those who wish to experiment.
+ +If you need a negative version, simply reverse everything and use a PNP transistor (BD140 for example). For different voltages, you change the zener, but remember that the output voltage will be between 700mV and 1V higher than the zener voltage because of the transistor's base-emitter junction. The actual voltage depends on the current. For more information on the use of zener diodes in general, see AN008 - How to Use Zener Diodes on the ESP website.
+ +The design of shunt regulators in general isn't difficult, but there are quite a few things that need to be calculated. The unregulated input voltage must be higher than the desired output, and this includes any ripple. For example, if the minimum voltage is 16V and the maximum 20V (4V peak-to-peak of ripple) you can't expect to get 15V output because 1V headroom just isn't enough. The minimum voltage should be not less than 25% greater than the desired output. For 15V out, that means no less than 18-19V input. Remember too that the incoming mains will vary and this has to be taken into account as well.
+ +The feed resistance (R1 and R2 in Figure 5A) should pass a minimum of 1.5 times the maximum load current. If your circuit draws 50mA then the resistors need to pass 75mA. The voltage across the feed resistance is the input voltage minus the output voltage. You then need to work out the power dissipation of the resistors, zener and shunt transistor. Some general approaches to determining capacitor values are available in the article Voltage & Current Regulators And How To Use Them. I do not propose to explain the complete design process here - most of it is based on nothing more complex than Ohm's law.
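The design steps just described can be sketched as a few lines of Python. The function simply encodes the rules of thumb above (25% minimum headroom, feed current 1.5 times the maximum load); it is a first-pass estimate, not a complete design:

```python
def shunt_design(v_out: float, i_load_max: float, v_in_min: float = None) -> dict:
    """Rough first-pass numbers for a simple shunt regulator,
    following the rules of thumb in the text (all plain Ohm's law)."""
    if v_in_min is None:
        v_in_min = v_out * 1.25            # at least 25% headroom
    i_feed = 1.5 * i_load_max              # feed 1.5x the maximum load
    r_feed = (v_in_min - v_out) / i_feed
    p_feed = i_feed**2 * r_feed            # feed resistor dissipation
    p_shunt = v_out * i_feed               # shunt element, worst case (no load)
    return dict(v_in_min=v_in_min, i_feed=i_feed, r_feed=r_feed,
                p_feed=p_feed, p_shunt=p_shunt)

d = shunt_design(v_out=15.0, i_load_max=0.050)   # the 50mA example
print(d)
```

For the 15V / 50mA example this gives a minimum input of 18.75V and a feed current of 75mA, matching the figures quoted above. Remember to add margin for mains voltage variation on top of this.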
+ + +So-called 'transformerless' power supplies can use a resistor or a capacitor to drop the AC mains voltage to something usable by electronics. The resistor approach is not covered here, because it's very rare that it will have low enough power dissipation to be usable in most cases. A capacitor provides a 'lossless' voltage drop, because it's a reactive component. While it has a very low (i.e. 'bad') power factor, these supplies are generally only used for limited output current, and the poor power factor is not an issue.
+ +WARNING: The following circuits are not isolated from the mains and must never be used with any form of general purpose input or output connection. All circuitry must be considered to operate at the full mains potential, and must be insulated accordingly. No part of the circuit may be earthed via the mains safety earth or any other means. Do not work on the power supply or any connected circuitry while power is applied, as death or serious injury may result.
To some, the idea of making a power supply that does not use a transformer is appealing. Even relatively small transformers are bulky and heavy, and they will always radiate a small amount of magnetic interference. However, supplies that don't include a transformer are not isolated from the mains supply and are inherently dangerous - potentially lethal.
There are several safety points that you'll see repeated here. This isn't because I like repeating myself, but to make absolutely sure that potential (sorry!) constructors don't miss them. These supplies are lethal in the wrong hands (inexperienced constructors in particular) and if my repetition only saves one life it's worth it.
These supplies are usable in a limited range of products, and they can have no direct input or output connections. This limits their usefulness somewhat, since most projects require some connection to the outside world. While isolation is possible using opto-couplers, these are often slow and not very linear, so hi-fi applications are ruled out. A remote sensor (for example) can be used, provided that the sensor, lead and connector are all fully insulated, rated for mains voltages, and have no accessible metal parts.
+ +Where such circuits are used, they will be completely enclosed, and may have circuit functions accessed by well insulated push-buttons, infra-red or radio remote controls. Well insulated (plastic shaft) pots can also be used. Typical applications are wide-ranging, and include motor speed controllers, 'high tech' light dimmers, temperature controllers and many others. Audio is not included in any common usage.
+ +While it would be possible to isolate inputs and outputs using transformers, no-one makes 'line level' transformers that are rated to withstand mains voltages. Even if they were available, the cost would be far greater than a small mains tranny and a basic conventional power supply.
+ +Consequently, the applications are strictly limited to areas where the necessary inputs and outputs can be opto-isolated, or where there is no direct connection to the outside world at all. Many PIC based projects are intended for controlling mains appliances, and these can use a transformerless supply without problems. Naturally, external probes or other sensors must also be insulated in their entirety. They must withstand the full mains voltage, safely, and for well beyond the expected life of the apparatus.
+ +Now, looking at the circuit, it is obvious that one side is referenced to the neutral, and neutral is connected to the building's safety earth or to safety earth at the local mains distribution transformer (this varies by country). Therefore, you might think that the circuit should be safe. However, the regulatory bodies in every country insist that the neutral is a 'current carrying conductor', and it is recognised everywhere that the possibility exists for active (aka live or phase) and neutral to be interchanged. This may occur in old buildings (wired before any standard was applied), or could be caused because of an incorrectly wired extension lead. Many countries have non-polarised mains plugs that may be inserted into an outlet either way.
+ +Any one of the above makes the circuit deadly. The output becomes referenced to active, not neutral, so all connected circuitry is at mains potential. For this reason, circuits such as those shown may only be used in such a manner that no part of the power supply or its connected circuitry may be accessible to the end user. This means no connectors for input or output, and all components must be fully insulated to prevent accidental contact.
+ +Now that the necessary disclaimers are completed, we can look at the circuit itself. The fuse (F1) is obviously intended to guard against the risk of fire, by opening if the current exceeds that expected. R1 limits inrush current, which can be very high if power is applied while the AC input is at its maximum value. R1 needs to be at least 1W, and it is intended that its value is considerably less than the capacitive reactance of C1. In some cases, R1 may be a fusible resistor, thereby eliminating the separate fuse. I consider this to be a poor protection mechanism, but it's cheap.
+ +C1 is the actual current limiter. By using a capacitor, there is almost no lost power - capacitors used within their ratings have extremely low losses. R2+R3 is intended to discharge the capacitor when mains is disconnected, and two are used to obtain a satisfactory voltage rating. Without this, C1 can hold a significant charge for several days, so anyone touching the pins of the mains plug could receive a very nasty shock. R2+R3 must be rated for the full mains voltage. It may be necessary to use 3 or more resistors in series to ensure they will withstand the applied voltage continuously.
+ +C1 will have almost the full mains voltage across it (230V RMS for the circuit shown), and cannot & must not be a DC rated capacitor. A 400V DC cap will work with 120V mains but this is most unsatisfactory and it will fail eventually. The cap voltage should be a minimum of 275V AC if used with 230V mains. In general, it is unwise to use DC rated capacitors where high AC voltages will be across the cap - the use of AC rated components is highly recommended in all cases. X-Class capacitors are designed to be connected across the mains, and are the only type that should be used.
+ +D1 and D2 form the rectifier. D2 must be installed to prevent C1 from charging to the peak of the mains voltage (340V, via D1). Without D2, the circuit will not work! C2 is the filter cap, and needs only to be rated at slightly above the zener voltage. A 6.3V electrolytic will be quite acceptable. Finally, D3 (a 5.1V zener diode as shown) provides regulation. The DC will have significant ripple - in the circuit shown and at 50Hz input, there will be about 325mV peak-peak of ripple on the supply. This is normally quite acceptable for a PIC circuit, provided it is not expected to perform any accurate analogue to digital or digital to analogue conversions.
+ +Ripple can be reduced by adding a resistor (R3) between C2 and D3, but care is needed to keep the voltage across C2 within ratings. For example, a 33 ohm resistor reduces ripple to about 63mV peak-to-peak and keeps the voltage across C2 just below 6.3 volts. Figures shown are for a 220 ohm load at 5.1V - about 23mA. The available current is reduced if the voltage is increased, so 330 ohm loads are shown in Figure 6B. In general, there should be a second capacitor in parallel with D3 to allow for higher than normal peak current in the powered circuit.
+ +A common variant of the circuit shown above is to use a zener diode in place of D1, and D3 is not needed. This reduces the component count, but the output voltage will be 650mV lower than the zener voltage, and there will be more ripple on the DC supply. A resistor and a second electrolytic cap can be used for better filtering, but the output won't be as well regulated because of the series resistor.
+ +There is no real limit to the voltages available, but the highest voltage normally used will be around 24V or so. If you need multiple voltages, simply add series zeners as shown in Figure 6B. As shown, you have ±5V, but that can just as easily be +5 and +10V, simply by deciding which terminal is 'common'. It is critical that you understand that 'common' is NOT the same as earth/ ground. No part of the circuit can be touched safely, and the supply can only be used for fully enclosed applications with no input or output connectors that can be accessed by the end user.
+ +One problem that's often faced is the low current available. Yes, the capacitance for C1 can be increased, but the cap will be physically large, and the cost may be prohibitive. Even the 1µF cap shown will be rather bulky, and will most likely be a pair of 470nF X-Class caps in parallel (close enough to 1µF). The inrush current also has to be considered. Because a discharged capacitor acts rather like a short circuit when voltage is first applied, the 100 ohm inrush limiter shown is the only thing that limits the current. Worst case is when the mains is switched at the peak of a half-cycle, which will cause the peak current to be 2.3A with 230V mains (an instantaneous dissipation of 530W!), or 1.2A with 120V. If the value of R1 is increased, inrush current is reduced but continuous dissipation is increased. This dissipation is real power that you pay for and is wasted as heat.
+ +The standard circuit is full wave as far as the mains is concerned, but rectification is only half wave. The negative half cycle of the mains current is not used, so output current is limited. The maximum current you can expect depends on the voltage, but as shown above will be around 25mA for each microfarad used as C1 (at 230V input). For example, with 1µF you'll get about 25mA, or 10mA for 470nF. The available current is roughly half with 120V mains. We can do better ...
+ +A major improvement is to use a bridge rectifier as shown above. The diode bridge means the available output current is increased, and smoothing is easier because the ripple frequency is double the mains frequency. The disadvantage is that the output common isn't referred to the neutral which is a limitation if the circuit is intended to drive a TRIAC for mains switching (for example). In most cases this is the version that should be used, and it can supply enough current to operate a relay. The circuit shown in Figure 6C can supply up to 55mA, compared to ~28mA for the supply shown in Figure 6A.
+ +By using a bridge rectifier, you make full use of the capacitor current. The current you can get depends on the capacitance and supply voltage & frequency. It can be calculated easily enough by using the formula for capacitive reactance ...
+ +XC = VMains / I
C = 1 / ( 2π × f × XC )
If you need 70mA from 230V mains, XC is 3.2k, and the capacitance is 1µF. If you don't need that much current, 470nF is a standard value, and will provide 34mA. Always aim for a bit more current than you need because the mains voltage will vary. The only capacitor recommended is an X-Class type, as these are rated for mains voltage. Over time, expect to replace the cap, as its value will decrease depending on the 'hostility' of the mains in your area. Every time there's a voltage 'spike' (from other equipment, lightning, etc.) the cap will lose a tiny bit of its metallisation, gradually reducing the value (this is designed into X-Class capacitors to ensure protection against fire).
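The worked example can be verified directly from the two formulas:

```python
import math

def dropper_cap(v_mains: float, f_hz: float, i_amps: float) -> float:
    """Capacitance needed to pass i_amps from the mains, using the
    formulas in the text:  Xc = Vmains / I,  C = 1 / (2*pi*f*Xc)."""
    xc = v_mains / i_amps
    return 1 / (2 * math.pi * f_hz * xc)

print(f"70mA from 230V/50Hz needs {dropper_cap(230, 50, 0.070)*1e6:.2f} uF")

# Rearranged the other way: I = Vmains / Xc for a 470nF cap
i_470n = 230 * 2 * math.pi * 50 * 470e-9
print(f"470nF passes {i_470n*1000:.0f} mA")
```

As the text says, always aim a little high, since the mains voltage (and hence the available current) will vary, and the capacitor's value falls over its life.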
+ +Transformerless power supplies such as the one shown are fairly efficient. The mains current draw is predominantly capacitive, so while you may draw (say) 5.5VA from the mains, the power is less than 600mW (assuming an output of 24V at 25mA). The zener diode is very important. Without it, if the load is less than the calculated 25mA, the voltage will rise. With no load it can (theoretically) rise to 325V DC, but the filter cap will fail well before that.
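The VA-versus-watts comparison is easily checked. Note that the 24mA mains current below is inferred from the quoted 5.5VA figure, not stated directly in the text:

```python
# Quoted example: 24V at 25mA output, drawn from 230V mains via the cap
v_mains, i_mains = 230.0, 0.024     # ~24mA of (mostly capacitive) current
v_out,  i_out    = 24.0, 0.025

s = v_mains * i_mains               # apparent power drawn from the mains
p = v_out * i_out                   # real power delivered to the load
print(f"{s:.1f} VA apparent, {p*1000:.0f} mW real, PF = {p/s:.2f}")
```

The power factor of around 0.11 is dreadful, but as noted earlier it doesn't matter at these current levels, and the 'wasted' VA is reactive rather than dissipated as heat.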
+ +In general, if you need to drive a relay (for example) use a 24V coil, as that requires less current than lower voltages. That means your supply voltage will be 24V, and it doesn't matter if that falls when the relay has activated, as it won't drop out until the voltage falls to around 2-5V. Your circuitry has to be able to tolerate the voltage change of course, but that's usually not hard to achieve.
+ +As should be quite apparent, this type of power supply is completely useless for general purpose work and cannot be used for audio because of the serious electrical safety issues. There are only a few applications for circuits such as this, and these are generally control systems and the like. Remember that all external connections, probes, etc, must be isolated to the standards required for the full mains voltage.
+ +Instead of a capacitor to limit the current, you can use a resistor. However, this technique results in a high dissipation in the resistor - almost 4.6W for a 20mA output from 230V or 2.4W at 120V. The heat has to be removed somehow, and that's difficult when the power supply and all other circuitry must be fully enclosed for electrical safety. There are some applications where a resistive circuit is the only one that will work properly, but in general it's not viable.
+ +It has been suggested several times that the 120V mains (as used in the US and Canada) could be rectified and used directly as an amplifier power supply. This will give an effective supply voltage of about ±85V with the use of a suitable splitter circuit. While the idea seems plausible ...
+ +Don't even think about it!
+ +The problem is that the entire amplifier is at mains potential, and all inputs and outputs have to be isolated using transformers. This scheme used to be quite common for radio receivers and TV sets - they were commonly referred to as 'hot chassis' sets. Because it's relatively easy to isolate the antenna connection with a high voltage capacitor, these sets were popular because they were cheaper to make. None that I know of ever had auxiliary audio (or video) inputs or outputs, as these would have to be transformer isolated.
+ +If an isolated DC-DC converter is added, you can then refer the output to earth/ ground. There are many different SIP (single inline pin) converters available, and they can be obtained now reasonably cheaply from major suppliers. The isolation working voltage must be a minimum of 1,000V (1kV) - note this is the working voltage, not the isolation test voltage, which will be 2kV or more! Great care is needed to ensure that there is adequate creepage and clearance between the live (mains) side and the low voltage output.
Be very careful with your device selection. While many of these little converters have a stated isolation test voltage of 1kV, that does not mean they can be operated with the full mains voltage across the isolation barrier. Most of the low cost units are designed to be operated with no more than 40-50V AC or 60V DC between input and output, and these are not suitable for use in the circuit shown.
These little converters are fairly efficient, and can supply 50mA or more (depending on the output voltage). Most are rated for 1 or 2 watts at most, but you'll find that unless you use one with an input voltage of 24V, you probably will be unable to get the rated current output. Have a look at the website of your preferred supplier. Input and output voltage can be selected to suit your needs. Remember to verify that the working voltage differential is at least equal to your mains voltage.
+ +This arrangement is only suitable where (very) low current is required, otherwise, there are AC-DC modules available from many suppliers that only require an AC input (usually 80-277V, 50/ 60Hz), and have either single or dual outputs at the desired voltage(s). While they are larger than the DC-DC converters, they are actually not much bigger than a 1µF X-Class capacitor by itself, and provide the smallest available power supply you can get. However, they all require EMI (electromagnetic interference) filters at the mains input, or unacceptable interference may be experienced. Some complete 2-3W AC-DC converters are less than AU$25 each, but others may be much more expensive. You can also get small (Chinese made) AC-DC switchmode supplies from various on-line sellers, and they are usually very cheap. Whether they are any good or not is another matter. Some I've used are actually very good, others not. (See section 8 below for another solution).
+ +With all these nasty limitations, a chap called Stan D'Souza at Microchip Technology decided that there had to be a way to make the circuit 'safe'. Thus, in 2000, a technical bulletin (TB008) was issued that claimed to overcome the inherent safety issues of the traditional transformerless power supply. According to Stan, his circuit could be used just like any normal transformer based supply, but without the expense of a transformer. To state that it wasn't thought through properly is a gross understatement!
What he (and various others at Microchip) completely failed to recognise is that the circuit described violates the wiring rules of every developed country on the planet! No-one else before or since has ever suggested such a dangerous circuit. The circuit is shown below - this is not the exact same circuit as described in TB008, but is based (for clarity) on that described in Figure 6A. The overall concept is identical.
+ +At first glance, it seems to be alright. Look closely! It uses the earth pin of a 3-pin power connector as the return path for the circuit - this is not allowed in any country that I know of. The earth pin is for safety earth, and is intended to carry fault current away from the appliance to prevent electrocution. The earth (ground) pin must not be used as a current-carrying conductor - ever! All current-carrying conductors must be insulated from earth and/or chassis with wiring suitable for the mains voltage used. No country's wiring rules will consider the neutral conductor to be 'safe', because there will always be situations where active and neutral are swapped over - perhaps because of very old wiring, inexperienced persons failing to appreciate the difference, incorrectly wired extension leads, etc.
+ +Next, there is a fuse joining earth and neutral. Again, this is not permitted under any wiring codes. By joining the earth and neutral, it will instantly trip any electrical safety switch (aka earth leakage circuit breaker, ground fault interrupter, core balance relay, etc., etc). Some countries use what is called the MEN system (multiple earthed neutral), albeit by a different name, and a link between the incoming neutral conductor and the earth (safety ground) stake is permitted (or required) at one location per installation. In other countries there will often be no connection between earth and neutral at the premises, as the connection is made at the distribution transformer. While it is possible that the rules elsewhere might allow multiple connections, the connection shown will never be allowed in any appliance. There are very good reasons for this, and the following is only one of many possible scenarios ...
What happens if the active and neutral in a wall outlet are reversed (and the earth is connected)? Firstly, the fuse will blow (violently), and the loud bang and bright flash will give the poor user a terrible fright. This in itself is unlikely to be deadly. It is after the fuse has blown that things become really dangerous, because if plugged into another outlet (provided the earth connection is sound and the safety switch doesn't operate), the circuit will continue to work.
Most householders will be baffled - "Gee, it just blew up, but everything still works!" The next thing will be to try it in the original outlet again ... "Hell, why not, it still works." But remember, this outlet has active and neutral reversed, so it won't work in that outlet - there is no connection to active. The poor user is now flummoxed, so chucks the (whatever it might be) in a corner and forgets about it.
In the US, Canada and some European countries, it is quite common for appliances to have a 2-pin non-polarised mains plug, with no earth pin. If the owner of this particular appliance with its 'Cheap Death' power supply decides to simply replace the 3-pin plug with a 2-pin version, the real fun can start. Whether the fuse is intact or not is more or less immaterial, because only one of the two ways a 2-pin plug can be inserted is 'safe'. Should the wrong choice be made (and the poor user has no knowledge that there is a 'safe' and 'unsafe' way to plug the supply into an outlet), the entire circuit is now at the full mains potential. Anyone touching any part of the circuit - chassis, connectors, etc. - is now connected directly to the mains via the capacitor. The cap is capable of passing more than enough current for a fatal electric shock.
Even assuming that nothing untoward happens (and there is no installed safety switch), if there is a significant load on the branch circuit where this monstrosity is connected, there is a very real possibility that the fuse will blow because of the potential difference between the neutral and earth connections. An electric kettle or a heater can quite easily elevate the neutral lead by a couple of volts with respect to earth, and the fuse will blow. Any pretence of 'protection' afforded by the fuse is now gone.
There are many other 'what if' possibilities that I urge you to explore, any one of which could result in the chassis becoming live, resulting in death or injury. Remember that it doesn't matter how unlikely a given scenario may seem to be, it will happen somewhere, sometime, if there are enough devices using the technique available. A one in a million chance becomes a certainty if there are a million users.
Because of the extreme danger posed by the circuit scheme, I contacted Microchip's technical support group in 2008 with the following information (both messages are verbatim, including errors, grammatical mistakes, spelling, etc.) ...
This issue applies to Application Note TB0008. The circuit shown is inherently lethal, and violates every mains wiring code on the planet. Tying neutral to earth (ground) is not allowed anywhere, and using a fuse for the purpose does nothing for the 'safety' of the published circuit.

I strongly recommend that the app note TB0008 be withdrawn before someone kills themselves with it. I cannot believe that you actually published this circuit without so much as a single warning that it is potentially lethal.

While similar circuits have been used for many years, no-one ever has thought to tie the neutral to earth, and where used, such circuits are always intended to be totally isolated from contact by any person.

To say that I am shocked by the circuit is both a terrible pun and a gross understatement. Please remove it - someone will think it's a good idea, and will kill themselves or someone else if it is ever constructed as shown.

Cheers, Rod Elliott
A few days later, the following was received (with no name or direct contact details). That the response is unsatisfactory is to put it very mildly. This was Microchip's 'proposed resolution' (again, the text is verbatim, but the yellow highlights are mine).
Rod,
We apprecaite your saftety concern and taking the time to contact us.

I've looked into the design quite carefully, and there is no issue with the design. The purpose is to cover the accidental case of plugging in the plug backwards (unlikely since most plugs are no polarized), and to cover the common case of a miswired outlet of a swapped hot and neutral. If these situations occurred with this design, the neutral would have power on it (due to the swap) and would be immediately grounded.

The result would be a rapidly blown fuse (the one connecting to neutral in the design) and thus provides a safety to prevent the neutral line from having hot on it. The hot line would actually then be connected to neutral, thus the design would have no power, and no return, and no hot connection in this case.

I'd agree that since it is a high voltage design, a warning would be a good idea if only from a legal perspective. However, this was not a design presented in some magazine to the public, but is an engineering document. It is assumed that engineers would be viewing it and would take normal precautions when working with AC power circuits. However, we should not make that assuption, so I will request a warning be added regarding working with AC power.

The design otherwise does not contain a flaw.

Also, if the power is connected correctly (neutral to neutral, etc), then the connection of neutral to earth should be harmless since they are supposed to be at the same potential. If they are not, it suggests a wiring fault, or other issue. I'd argue that perhaps a small resistor should be added to deal with ground being a few volts above and below neutral and not blow the fuse - this situation can happen when heavy loads are placed on the power lines, and the voltage drop it causes can cause a small difference in voltage between grounds of outlets on different circuits, or a difference in neutral and ground potentials.

If after this discussion you still feel the circuit presents a hazard, please detail how this is the case - where the current would flow, what conditions, etc. It would also be helpful to indicate where this violates code. I do believe that it is not permitted in house wiring, but there is nothing I am aware of that says it can not be done in a design. Also note the lines are not directly connected, but are fused. However, the worst case even for a non fused link, would be to trip the circuit breaker on the house, which would detect the excessive ground current (due to the live being connected to ground).

If the resolution provided does not solve your problem, you may respond back to the support team through the web interface at support.microchip.com. Telephone support is also available Monday-Thursday between the hours of 8:00am and 4:00pm MST and on Friday between 10:00am and 4:00pm.
Unbelievable! Remember, this is verbatim, replete with spelling and other errors. Although I immediately posted a response through the tech support contact form, no further reply was ever received - Microchip's tech support people seem to think the issue is 'resolved', simply by stating that they see no issues in the circuit. As far as the person who examined the problem is concerned, it is perfectly alright. What makes this far worse is that it was claimed that "engineers would be viewing it" - well, TB008 can be found all over the Net. Because it was produced by a large and well known company, a great many beginners will assume that it must be safe, and that any criticism is unfounded and has no credibility (at least by comparison). There was even one site that had a link to the article for use by schools!
Interestingly, Microchip has also released AN954, which also describes a transformerless power supply. It has many highlighted warnings throughout the text, and in a bizarre twist of logic the evil TB008 is cited as a reference. The versions described do not attempt to use safety earth as a current carrying conductor, and quite correctly have no connection between earth and neutral. TB008 seems to have been buried in the Microchip website, and I was unable to find the original. An admission of wrongdoing would have been nice, but I suppose that's too much to ask for.
Note that as of 2017, TB008 is still all over the Net, with no evidence that I can see that it has been satisfactorily withdrawn or proper warnings issued against its use. There are still many sites referencing it - usually their own copy on their web page server or a file repository. The one (very small) bright point is that it no longer shows up on the Microchip website when one searches for it. There should be a recall or cancellation notice explaining that the original design is flawed and dangerous and must not be used, but there's no such notice.
It's now April 2017, and Microchip has still not issued a 'recall' notice, admitted they were wrong, or apologised to me for being pig headed arses by failing to address the concerns I raised. TB008 is referenced in some other documents, although the original is no longer shown. However, it's still available elsewhere on the Net and it took me all of 30 seconds to find a copy.
There's even a discussion on the Microchip forum (posted in 2008) that declares the circuit to be wrong and dangerous, but Microchip did not respond - even in their own forum!
Over nine years have passed (as of this update), and no satisfactory response from Microchip has ever been published. That really isn't good enough.
Where a physically small power supply is required for a project (including audio, but not necessarily for true hi-fi use), one can use the intestines of a miniature 'plug-pack' (aka 'wall-wart') SMPS. Although only small, some of these are capable of considerable power, but installation is not for the faint-hearted. Quite obviously, the circuit board must be extremely well insulated from chassis and protected against accidental contact when the case is open.
The advantage is that the project does not require an external supply. This is often a real pain to implement, because there is always the possibility that the wrong voltage or polarity can be applied if the external supplies are mixed up (which is not at all uncommon). The disadvantage is that the unit now must have a fixed mains lead or an approved mains receptacle so a lead can be plugged in.
WARNING : The following description is for circuitry, some of which is not isolated from the mains. Extreme care is required when dismantling any external power supply, and even greater care is needed to ensure that the final installation will be safe under all foreseeable circumstances (however unlikely they may seem). All primary circuitry operates at the full mains potential, and must be insulated accordingly. It is highly recommended that the negative connection of the output is earthed to chassis and via the mains safety earth. Do not work on the power supply while power is applied, as death or serious injury may result.
The photo in Figure 8 shows a typical 5V 1A plug-pack SMPS board. As removed from the original housing, it has no useful mounting points, so it is necessary to fabricate insulated brackets or a sub-PCB (made to withstand the full mains voltage) to hold the PCB in position. Any brackets or sub-boards must be constructed in such a manner that the PCB cannot become loose inside the chassis, even if screws are loose or missing. Any such board or bracket must also allow sufficient creepage and clearance distances to guarantee that the primary-secondary insulation barrier cannot be breached. I shall leave the details to the builder, since there are too many possible variations to consider here.
This arrangement has some important advantages for many projects. These supplies are relatively inexpensive, and the newer ones satisfy all criteria for minimum energy consumption. Most will operate at less than 0.5W with no load, and they have relatively high efficiency (typically greater than 80% at full load). The output is already regulated, so you save the cost of a transformer, bridge rectifier, filter capacitor and regulator IC.
The SMPS pictured is a 5V 1A (5W) unit, and for most PIC based projects this will provide more than enough current. Consider the safety advantage compared to a transformerless supply - the finished project can have accessible inputs and outputs, and is (at least to the current standards) considered safe in all respects. Personally, I would only consider it to be completely safe if the chassis is earthed. However, it is legally allowed to be sold in Australia, and we have reasonable safety standards for external power supplies. They are 'prescribed items' under the Australian safety standards, meaning that they must be approved before they can be sold.
There is no more effort required to install a supply such as this instead of a transformerless supply, and at least you can work on the secondary side without having to use an isolation transformer. While it is more expensive, how valuable is a life? Far more than any power supply, and that's for certain.
For a variety of reasons, you may find that the regulator you need is only available in the positive version. This may be because it's a high current type, and for reasons best known to the manufacturer it was decided that people don't need a negative version. For example, you may be able to get LT1038 regulators (up to 10A, but now obsolete), but there is no negative equivalent. Much the same applies for other IC manufacturers, and it's now difficult or impossible to find complementary high current regulators.
You may find yourself in the same position if you use off the shelf switchmode supplies. These usually only have a single output, and while dual output types might be found, they will not be as cheap as the single output versions. There are many places where symmetrical (positive and negative) supplies are needed, but your options can be very limited if you need high current.
Provided the transformer has separate windings (not a single winding with a centre tap), you can simply build two identical positive regulators and wire the outputs to get positive and negative outputs. This may work out to be a cheaper and better option than trying to make separate positive and negative regulators. It may be counter-intuitive, but a regulator doesn't have a designated 'common' connection, other than that created when the supply is wired. It makes no difference whatsoever if the regulated (positive) output is deemed to be the output or common. Figure 9 shows how a positive regulator can be deemed to be a negative regulator, simply by moving the 'common' connection.
The above certainly looks 'wrong', but it's not. The regulated output has no sense of 'positive' or 'negative'. As long as all parts operate with their correct polarity, any part of the circuit can be connected to earth/ ground without altering the performance of the regulator in any way. There are several places where it simply wouldn't make any sense, but as shown the negative output voltage is smoothed and regulated in exactly the same way as it would be with a dedicated negative regulator.
The arrangement shown above can be used with any type of regulator. You can wire a pair of 20A switchmode supplies the same way, and they can have the same or different voltages. The only requirement is that the outputs are floating - with no fixed connection from either supply line to earth/ ground. This lets you connect either supply line to the system common (earth), or you can build a 'stacked' power supply, providing (for example) +12V and +24V. Any voltages can be used, but make sure that the output current will always be within each supply's ratings if you build 'interesting' combinations.
Unfortunately (and as you will find if you look), high current linear regulators are now hard to get, and those you find are likely to be insanely expensive. It's now expected that all high current applications will be met by using switchmode regulators, but for many applications the switching noise will be intrusive and they are not considered an option for things like mixers and other audio applications where large numbers of opamps are used. The decision now is whether to go back to using discrete regulators as discussed briefly above, or to use multiple smaller 3-terminal regulators, with each pair of regulators powering a single section of the circuit. Linear regulators can be 'boosted' by using external transistors, but you lose the inbuilt current limiting that's provided in most 3-terminal devices. This is shown below.
It's certainly not as easy as it once was, but there are ways to get around any limitations you may face. It just requires you to have a greater understanding of the principles, and (almost) anything becomes possible.
As noted above, high current linear regulators are no longer readily available. However, it's fairly easy to boost the current from a smaller regulator IC to get as much current as you need. The disadvantage is that the external pass transistor(s) have no current limiting, and thermal shutdown only works on the regulator IC, not the external transistors. While it's certainly possible to add both current limit and thermal shutdown, this adds complexity to a circuit that was once as simple as a single TO-3 regulator IC plus a couple of support components. Very basic current limiting may only involve a couple of diodes and a resistor as shown - D8 and D9 clamp the voltage across R1 to ~1.3V, so Q1 enters constant current mode at about 2.7A and the regulator will then be able to limit the current it supplies. It's not perfect, but it does work. Add another diode in series with D8 & D9 to increase the output current. Three diodes limit the current to about 4.5A.
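One plausible reading of the limiting arithmetic can be sketched numerically. The values below (a ~1.3V clamp from two silicon diodes, V_BE ≈ 0.7V at high current, and a 0.22 ohm emitter resistance, the same value suggested for paralleled transistors) are my assumptions for illustration, not values taken from the actual Figure 11 schematic:

```python
# Rough diode-clamp current limit estimate for the boosted regulator.
# Assumed: the diode string clamps the sense path at n * 0.65V, the pass
# transistor's V_BE is ~0.7V, and ~0.22 ohms sits in its emitter leg.
V_DIODE = 0.65          # forward drop of one silicon diode (V) - assumed
V_BE = 0.7              # base-emitter drop of Q1 at high current (V) - assumed
R_E = 0.22              # emitter resistance (ohms) - assumed

def current_limit(n_diodes):
    """Approximate limit: clamp voltage minus V_BE, dropped across R_E."""
    return (n_diodes * V_DIODE - V_BE) / R_E

print(round(current_limit(2), 1))   # two diodes: about 2.7A, as in the text
```

Note that V_BE rises at higher collector currents, which is presumably why the three-diode figure quoted in the article (~4.5A) comes out lower than this simple formula would predict.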
Figure 11 shows how to add a current booster with rudimentary current limiting. I've only shown the positive regulator, but a negative version is much the same. You only need to change the transistor for an NPN type (and a negative regulator of course). Diodes and capacitors have to be reversed as needed for a negative version. The component values shown are representative only. Only one series pass transistor is shown, but two or more can be used if needed. If transistors are operated in parallel, a resistor must be used in the emitter circuit of each. 0.22 ohms is generally satisfactory.
The circuit works by means of Q1 sensing the voltage across R1, which will typically be around 2.2 ohms and will dissipate less than 1W. At low current there's very little voltage across R1, so Q1 remains off. As the current is increased, the voltage across R1 increases and Q1 will turn on just enough to maintain the preset output voltage. The output voltage is set by the regulator, and the transistor acts only as a current booster. If the regulator is set for 12V output, Q1 will dissipate around 35W at an output current of 5A. The regulator IC will typically pass about 650mA, and will dissipate about 5W. Naturally enough, the dissipation of Q1 and U1 will depend on the regulation of the transformer as well as the load current. The arrangement shown will work with a wide range of different regulator ICs (including fixed voltage types) and external series pass transistors.
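The dissipation figures can be checked with a little arithmetic. The 19V raw input below is an assumption (it isn't stated in the text) chosen to be consistent with the quoted ~35W; R1 = 2.2 ohms and the ~650mA regulator share are from the text:

```python
# Dissipation estimates for the boosted regulator at 12V out, 5A load.
V_IN = 19.0      # raw (unregulated) input voltage - assumed
V_OUT = 12.0     # regulated output
I_LOAD = 5.0     # total output current (A)
I_REG = 0.65     # portion carried by the regulator IC (A), per the text
R1 = 2.2         # sense resistor (ohms), per the text

p_q1 = (V_IN - V_OUT) * I_LOAD   # ~35W upper bound for Q1 (it actually
                                 # carries I_LOAD - I_REG, slightly less)
p_r1 = I_REG ** 2 * R1           # sense resistor: under 1W, as quoted
p_reg = (V_IN - I_REG * R1 - V_OUT) * I_REG   # IC alone, excl. quiescent

print(p_q1, round(p_r1, 2), round(p_reg, 1))
```

The regulator IC figure comes out a little under the quoted ~5W; the article's number presumably allows for quiescent drain and a higher input voltage at light transformer loading.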
Without D8 and D9 there is no form of overload protection, and care must be exercised when this type of supply is used. For example, a short at the output will almost certainly kill the series pass transistor, and a fuse is of no use because the transistor will blow first. In critical applications, additional circuitry will be needed. This may include more complex current limiting, over-temperature sensing and perhaps an over-voltage 'crowbar' to save the load circuits should the regulator or series pass transistor fail. None of these functions are especially difficult to add, but they do increase overall complexity and component count.
This arrangement has been used in many circuits (and products) from all over the world. It appears to be in decline now, because all new high current designs use a switchmode buck converter. They are far more efficient than a linear design, but are only suitable where their noise will not cause problems. We can expect linear regulators to remain the circuit of choice for low noise applications for many years to come.
Decisions, decisions. The main purpose of this article is to provide some general information about small power supplies, regulation, their application and potential dangers. There is no doubt that the traditional transformer based supply is the safest. It is extremely easy to ensure that no live connections are accessible, often needing nothing more than some heatshrink tubing to insulate joined wires. Note that if possible, two layers of heatshrink should be used to provide reinforced insulation over joined wiring.
A transformer has full galvanic isolation and requires little or no EMI filtering, leakage current is extremely low, and a well made transformer based supply is so reliable that it will almost certainly outlive any equipment into which it is installed. While certainly not the cheapest option, a transformer provides a reasonable attenuation of common mode mains noise, and the final supply can be made to be extremely quiet, with virtually no hum or noise whatsoever.
The next best option is a modified plug-pack SMPS or a purpose built chassis mounting SMPS. These are useful where high efficiency is needed, along with very low standby power requirements. They are rather noisy though, and the full range of voltages is not available. There are few (if any) ±15V SMPS available for example, so powering preamps and other low power audio equipment will be easier, quieter and ultimately cheaper with a transformer.
As a last resort, a transformerless supply can be used, but only where the current drain is low (typically less than 25mA or so), and only where there is no possibility of contact with any part of the connected circuit. There is no such thing as a 'safe' transformerless power supply, and they are potentially lethal. There are so many limitations and so few advantages to this approach that IMO it is usually a pointless exercise, unless one has a mains powered appliance that needs a low current supply that can remain completely isolated from contact with the outside world.
The high current option described in Section 10 is the odd man out here - after all, the title of the article is 'Small Power Supplies'. Nevertheless, it's sufficiently useful that it warranted inclusion, especially since it's now quite difficult to get high current IC regulators.
Small Power Supplies (Part 2)
Elliott Sound Products - Small Power Supplies (Part II)
One of the things you will find when looking into power supply design is comment about transient response of various regulator topologies. It's a fact of life that nothing is instantaneous, and this applies as much to voltage regulators as any other IC (or even transistor of any type). Often, you will come across (IMO spurious) claims that the transient response of your favourite regulator IC is poor, and there will be graphs aplenty to prove it.
These graphs and the information that goes with them are all completely true, but audio simply is not fast enough to cause problems, even with 'slow' regulators. Most modern audio circuitry uses opamps (whether discrete or commercial ICs is immaterial), and the vast majority of these operate in Class-A for much of the time. Yes, there are many exceptions, but even then the rate of change (commonly written as 'dv/dt' or 'di/dt' - change of voltage/ current over time) is usually much slower than you might imagine. Note that 'di/dt' and 'dv/dt' are often written as Δi/Δt and Δv/Δt (where Δ means change of a variable or function).
The current drain of a preamplifier or electronic crossover (for example) will barely change with programme material, and the need for extreme speed (i.e. very good transient response) is almost never needed. Entire pages have been written that examine the transient behaviour of various regulators, but there's usually not a word that covers the actual (as opposed to imagined) change in current drawn by typical signal-level circuitry. The situation is very different for switching and logic circuits, where the current drain increases by a factor of 100 or more as a logic IC changes state (this depends on the type of logic of course).
In audio, this simply doesn't happen. The high level signals are generally of fairly low frequency, and even the presence of high frequency energy (at a lower level) doesn't change the current drain appreciably. This doesn't mean that regulators should be slow (or fast) - they simply have to be able to provide the current needed, when it's needed, and preferably without too much over or under-shoot when the current changes. Most of this is taken care of by the capacitors at the output of the regulator, and there are usually film caps in parallel with each opamp package (ESP circuit boards always include bypass caps).
Consider the worst possible case, where a 20kHz sinewave signal has an amplitude of 15V peak (10.6V RMS). The slew rate (i.e. dv/dt) is less than 2V/µs. That this is an impossible signal in any preamp or crossover is a given, because no normal programme material can even come close to this - even if the preamp is driven into clipping. If that's the case, then the transient response of any regulator is the least of your problems.
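The worst-case slew rate figure is easy to verify. For a sinewave v(t) = Vp·sin(2πft), the maximum rate of change occurs at the zero crossing and equals 2πf·Vp:

```python
import math

# Worst-case slew rate of a 20kHz sinewave with 15V peak amplitude.
F = 20e3         # frequency (Hz)
V_PEAK = 15.0    # peak amplitude (V)

slew_v_per_us = 2 * math.pi * F * V_PEAK / 1e6   # convert V/s to V/us
v_rms = V_PEAK / math.sqrt(2)

print(round(slew_v_per_us, 2))   # ~1.88 V/us - under the quoted 2V/us
print(round(v_rms, 1))           # 10.6 V RMS, matching the text
```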
Another issue may also be raised (although there's not a lot of info available), and that's regarding noise. We're not talking about ripple, as that's usually fully specified in most datasheets. It's rarely a problem unless you do something silly (like trying to regulate perhaps 13V DC (average) down to 12V DC with a standard (not low dropout) regulator). If you do that, some ripple 'break-through' is almost guaranteed. Regulators are active circuits, and they do make some wideband noise (heard as hiss). While the results shown here are real, in 99.9% of cases it's easy to either add a filter or ignore it, because it simply doesn't cause any audible problems.
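The arithmetic behind the 13V-to-12V example can be sketched as follows. The 2V peak-to-peak ripple and the ~2V dropout voltage are my assumed figures for a typical standard (LM317-class) regulator at moderate current, not values from the text:

```python
# Why regulating ~13V DC (average) down to 12V fails with a standard
# regulator: the ripple valleys dip below the minimum required input.
V_AVG = 13.0        # average raw DC (V)
RIPPLE_PP = 2.0     # ripple across the filter cap, V p-p - assumed
V_OUT = 12.0
V_DROPOUT = 2.0     # typical standard-regulator dropout (V) - assumed

v_valley = V_AVG - RIPPLE_PP / 2   # input at the bottom of the ripple
v_required = V_OUT + V_DROPOUT     # minimum input for regulation

print(v_valley < v_required)       # True - ripple 'break-through' occurs
```

During every ripple trough the regulator drops out and the ripple appears (attenuated only slightly) at the output, which is exactly the 'break-through' the text warns about.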
I suggest that you also have a look at the article Low Dropout (LDO) Regulators, because these can be somewhat quieter than conventional regulators if used properly. They can be fussy though, so you must consult the datasheet for the one you wish to use. Because the output is from the collector (or drain with MOSFET types), they have a higher output impedance than 'ordinary' regulators.
Nearly all regulator datasheets show details of the transient behaviour, and for adjustable regulators (as seen below) this includes the effects of bypassing the adjustment pin. It's not essential to add a bypass cap, but it's generally considered to be a good idea as it reduces output noise. The extra cap also usually improves the transient performance. In most cases, there's also an output capacitor, and although I generally don't bother using ceramic caps in parallel with electros, in the P05 (and most other PSU boards) they are included as close to the IC as possible.
This is done to ensure stability under all conditions, because, like fast opamps, regulators can be fussy about any inductance in the supply leads. Using a small multilayer cap right next to the IC ensures that stray inductance will never cause the regulator to oscillate. It can be omitted in almost every case, but for the sake of a few cents worth of parts I know that the regulators will be stable with any load.
Figure 1 - LM317 Regulator Schematic
The drawing above shows the general arrangement. There's not much to it, and although not strictly needed, the extra diodes are good insurance against momentary reverse polarity which can damage the IC. The drawing shows a single regulator, wired in one of the most common configurations. Only the positive regulator is shown, and analysis will be done for the circuit pretty much exactly as shown above. Note that the large input capacitors mean that a fast change of input voltage isn't possible, so the 'Line Transient Response' shown in the graph below is not relevant. The input 'pi' (π) filter with C1, R1 and C2 is something I've used for many years, and it removes most of the high frequency harmonics from the rectified AC, and even with as little as 2.7 ohms for R1, the attenuation of even the 100Hz component is significant (about 13dB, and increasing with increasing frequency).
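The quoted ~13dB attenuation of the π filter's R1/C2 leg at 100Hz can be checked with the first-order low-pass formula. R1 = 2.7 ohms is from the text, but the C2 value below is my assumption; the article doesn't give it here:

```python
import math

# Ripple attenuation of the pi filter's R1/C2 section.
R1 = 2.7        # series resistor (ohms), per the text
C2 = 2700e-6    # second filter capacitor (farads) - assumed value

def attenuation_db(f):
    """Attenuation of a first-order RC low-pass section, in dB."""
    w = 2 * math.pi * f
    return 20 * math.log10(math.hypot(1.0, w * R1 * C2))

print(round(attenuation_db(100), 1))   # ~13dB at 100Hz, as quoted
print(round(attenuation_db(200), 1))   # ~6dB more per octave above this
```

Well above the corner frequency the attenuation rises at 6dB/octave, which is why the higher harmonics of the rectified AC are removed so effectively.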
Most transient behaviour is measured using fast switching circuits, which may switch the output from zero to maximum current at a very high rate. For example, in the 'Load Transient Response' graph, the total switching time (load-on and load-off) is only about 2.5µs. This simply cannot (and will not) happen in an audio circuit. The load transient is shown with no output capacitance and 1µF, so it follows that if the capacitance is higher, there will be less disturbance, even at such unrealistic (for audio) switching rates.
Something the graphs fail to show is what happens when the output capacitance reacts with the response of the regulator itself. It has a frequency response that falls at 6dB/ octave, so resembles an inductive source to the load. This can interact with the output capacitor and cause ringing. However (and this is important), if the current only varies comparatively slowly (or even barely at all), then there is no 'transient' event that will cause any problems.
It's easy to run a simulation or take direct measurements that show ringing quite clearly, but it's essential to be realistic. There is no point designing a power supply that can cope with fast transients if they will never be encountered in practice. Naturally, it doesn't hurt if your regulator is much faster than it needs to be, just as it's quite alright if it can provide 10 times the current needed by your circuit. However, trying to ensure that it can do either of these things will not make it sound any 'better'. Any claim that it will change the sound should be taken with a (large) grain of salt, and most such claims have never been verified in a double blind test (what a surprise).
Figure 2 - LM317 Transient Performance [ 1 ]
Now, let's have a look at the various graphs to see what they mean. Ripple rejection is fairly straightforward, and is a measure of how well mains ripple across the filter capacitor (C1 in Figure 1) is removed by the regulator. Since the graphs originate in the US, 120Hz is used (rather than 100Hz which most of the world uses). The difference is probably so small that it will be hard to measure, so that's not a problem. Ripple rejection is influenced by the adjustment pin bypass, and as seen a value of 10µF is normally quite sufficient to allow better than 80dB ripple rejection.
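What 80dB of ripple rejection means in practice is worth a quick calculation. The 2V peak-to-peak input ripple below is an assumed (but typical) figure for a loaded filter capacitor:

```python
# 80dB of ripple rejection is a voltage ratio of 10^(80/20) = 10,000.
RIPPLE_IN_PP = 2.0       # ripple across the filter cap, V p-p - assumed
REJECTION_DB = 80.0      # ripple rejection, per the datasheet graph

ripple_out_pp = RIPPLE_IN_PP / 10 ** (REJECTION_DB / 20)
print(round(ripple_out_pp * 1e6))   # output ripple in microvolts p-p
```

Around 200µV p-p of residual 100/120Hz ripple is far below audibility in any sensible circuit, which is why the adjustment-pin bypass cap is such cheap insurance.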
The situation isn't quite as rosy for higher frequencies, but this will normally not be an issue for a linear supply. If you are regulating the output of a switchmode supply then the second graph needs to be consulted, but otherwise it can be ignored. From the third graph, it's also apparent that ripple rejection depends on the output current. There's nothing you can do about that, and the performance is perfectly alright over the full current range.
The next graph (Output Impedance) is more telling, and is something you may need to consider. Above 500Hz, the output impedance rises at 6dB/ octave, and this simulates an inductive output impedance. It is this 'simulated inductor' that can create problems. In terms of regulation, the output will be affected by rapid transient load current changes. As already noted, this tends not to occur with the vast majority of linear circuitry.
+ +Line regulation describes how the regulator circuit reacts to sudden changes of the input voltage. This is normally quite uncommon, especially if the bulk (filter) capacitor is properly sized. To change the voltage across a 2,200µF capacitor quickly (such as by 1V in a few nanoseconds as shown) requires a lot of energy, and is unlikely in any real circuit. However, it's also a warning that you may get excessive noise at the output if the input is derived from a switchmode supply with inadequate output filtering.
The load regulation graph shows what the output does when a load is suddenly connected or disconnected. No circuitry is instantaneous, and it takes some time (in microseconds) for the IC to realise that the output voltage has fallen due to loading, and recover to the designed output voltage. With fast transients (and inadequate output capacitance) the output may 'ring' at some high frequency (6kHz up to perhaps 60kHz) with a damped oscillation. The load regulation graph hints at this, but doesn't provide enough detail to be useful.

If you have a circuit that has high peak current with a rapid rate of change, a reasonably large output capacitor (220 to 2,200µF) will generally reduce any ringing to negligible levels. You also need to place a bypass cap close to the switching circuit itself to counteract lead or PCB trace inductance. This is especially important if the circuit is some physical distance from the regulator. The amplitude of the oscillation is generally fairly low, and will rarely exceed ~50mV peak in either direction (above/ below the set output voltage). A decent sized output cap will ensure the most stable output.
There is another problem that might cause issues, and that's the behaviour of a series inductor/ capacitor (LC) circuit. This might create a 'noise peak' at a frequency determined by the 'inductance' of the regulator IC and the output capacitor. This peak is (theoretically) inevitable (and not mentioned in any of the datasheets), but I don't know of a single case where it's caused a problem. I have had exactly zero reports from anyone (ever) of any issues due to this phenomenon.

While doom and gloom may be predicted if you don't take appropriate care, in reality it's generally a non-issue. Analogue ICs (e.g. opamps) draw current continuously, so they provide some damping of the LC circuit created by the IC and its output cap. Since there are no radical transient current loads with most analogue circuits, there will be little or no ringing at the output of the supply. It's actually difficult to determine the apparent output inductance of the LM317. The output impedance graph implies that it's a lot lower than the measurements show (Zout is shown as around 33mΩ at 10kHz, giving an inductance of about 530nH).

Due to the internal high frequency rolloff, the output appears inductive, and this reacts with the output capacitor to create a noise 'peak'. Being an active circuit, there is some unpredictability involved. Note that the measurements shown below assume no external load, so are 'worst case'. An external load will damp the resonance peak somewhat. Expect a load equivalent to ~200 ohms (50mA at 10V) to damp the peak (with small capacitors - less than 1µF) by around 3dB referred to the maximum, which is about 19dB at 22kHz as shown on the graph. The load is less effective with higher capacitance because capacitive impedance is so much lower (10µF has a reactance of only 2.6 ohms at 6.25kHz).

While I have no reason to doubt the results obtained in the following chart (the source is certainly reputable), I have not been able to reproduce the results shown. In general, if you have a potential noise issue then it would be wise to run some tests of your own, but I have never encountered any 'untoward' noise problems using any voltage regulator, and I have used rather a lot of them in the many years I've worked in electronics design. Unfortunately the original article was rather sparse on details, and failed to provide info on the units used in the noise voltage scale. The exact test methodology wasn't discussed either, so the graphs should be taken as 'this may happen', rather than 'this will happen'.
Figure 3 - LM317 Noise Peaking [ 2 ]
Based on the above graph, the inductance at the output of the LM317 is about 65µH (a 10µF cap and 65µH inductor are resonant at 6.25kHz - close enough), and that's the peak seen in the graph. The effective inductance is not constant, and it varies with load. Using a low ESR (equivalent series resistance) capacitor for CL will make the problem worse, and a 'standard' aluminium electrolytic is normally the best choice. Although the effective Q of the regulator's output inductance is not especially high, as seen in the graph, you can still get a substantial noise peak.
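The 65µH figure can be checked with the standard LC resonance formula. A quick sketch (values taken from the discussion above):

```python
import math

# Resonant frequency of the regulator's effective output inductance
# (~65µH, inferred from the graph) with the 10µF output capacitor CL.
L = 65e-6    # henries
C = 10e-6    # farads
f_res = 1 / (2 * math.pi * math.sqrt(L * C))
print(f"{f_res:.0f} Hz")   # ≈ 6.24 kHz, matching the peak in the graph
```

Changing CL moves the peak: a larger capacitor lowers the resonant frequency, which is exactly the behaviour seen in Figure 3.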
Given the noise specification provided in the LM317 datasheet, I've 'calibrated' the noise levels, based on a 'typical' output noise of around 250µV for 10V output. What does become apparent is that high ESR capacitors are likely to cause less of a problem than specialised low ESR parts. With the LM317 (and similar ICs), a low ESR will not cause oscillation, but it may increase the size of the noise peak and/ or ringing amplitude with a transient load. Be aware that the situation with LDO (low dropout) regulators is very different. See the Low Dropout Regulators article for more details about them (they can be very fussy!).

One way you can test for this is to apply a transient load, pulsing the current at a suitable rate (say 1kHz), and look for any ringing (damped oscillation). If present, there is also some noise peaking, because the two are directly related. The frequency where you see ringing indicates a resonant peak, so if you see a damped oscillation at (say) 6kHz, then it's likely that there's also a noise peak at that frequency.
Figure 4 - Transient Response Of 7815 Regulator
Because transient response is a reliable indicator of any resonance created by the regulator and its output cap, I did a quick test. Ringing indicates resonance, with the frequency corresponding to the frequency where noise will be at its worst. I tested a 7815 (the positive half of a P05-Mini), and the transient response is shown above. It's not perfect - no regulator ever is, but the transient response is about what I'd expect given that the output cap is only 10µF. The capture was done as the 27mA load was connected and disconnected, and the AC component is about 20mV peak-peak. There is no sign of instability (ringing). Lead lengths were kept short (less than 20mm) to prevent spurious issues with stray inductance. A larger output cap will improve the overall stability - especially the initial ~40mV dip (that's due to the time constant of the output cap and load resistance - 5.6µs).
Note that no audio circuit can or will ever impose a load such as used above. The rise and fall time of the output current is in nanoseconds - I used a very fast 1kHz squarewave to turn on the transistor.

A common recommendation is to use two or more capacitors in parallel, with each having a different value. While you might think this could spread the resonance and reduce its effects, it does no such thing. A 100nF cap (for example) in parallel with a 10µF cap behaves more-or-less identically to the 10µF cap by itself, even if one or both has a significantly higher/ lower ESR than the other. You may read that the values should be 'non-harmonically related', meaning that they shouldn't be half, double, three times or any other simple multiple.

Just like any other caps in parallel, the larger (or largest) value will dominate, but the performance is much the same as you would expect of paralleled capacitors. The total value is the arithmetic sum of the individual values used. The only place where it's common (and useful) to use unequal value caps is when bypassing opamps and RF circuits. A monolithic/ multilayer 100nF ceramic should be in parallel with the supply pins, and it's usually wise to include 10µF electros (usually between each supply and ground) where the DC enters the PCB. This has nothing to do with snake oil, but compensates for the track and wiring inductance that may otherwise cause instability.
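The 'larger value dominates' point is easy to verify with basic reactance arithmetic. A minimal sketch:

```python
import math

def xc(c, f):
    """Capacitive reactance (ohms) of capacitance c at frequency f."""
    return 1 / (2 * math.pi * f * c)

# Parallel capacitances simply add:
c_total = 10e-6 + 100e-9              # 10µF + 100nF = 10.1µF
print(f"{c_total * 1e6:.1f} µF")

# At 10kHz the 10µF part carries almost all the current:
print(f"10µF : {xc(10e-6, 10e3):.2f} Ω")   # ≈ 1.59 Ω
print(f"100nF: {xc(100e-9, 10e3):.0f} Ω")  # ≈ 159 Ω
```

The 100nF cap presents 100 times the impedance of the 10µF cap at any frequency, so it contributes almost nothing until well into the RF region where the electro's self-inductance takes over.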
Paralleled caps are also common in RF circuits, where the high frequencies involved make even small stray inductances likely to cause major problems. Significant signal degradation can be caused by only a few centimetres of wire. For this reason, bypass caps should always have the shortest leads possible, and be as close to the circuit as physical layout can achieve. With RF circuits, the self-inductance of an electrolytic, polyester (or other film) cap may become significant, and a small ceramic (SMD for minimum self-inductance) may be used in parallel.
Given the very low output impedance of most regulators, it should come as no surprise that simply adding a large output cap doesn't significantly reduce the output noise (residual ripple in particular). Even adding a 2,200µF cap will not affect 100/120Hz noise at all, but if you are unlucky, you could easily end up boosting the noise level at a harmonic of the mains frequency. Because the overall noise (and hum) levels are already very low, you almost certainly will not notice such a boost if it happens.

Virtually all common regulator ICs have an internal 6dB/ octave rolloff starting from around 10Hz to 500Hz or so. That means that all of them have an effectively inductive output impedance. It follows that the same caveats apply, regardless of the type of regulator you use. While the effects can (perhaps) be reduced somewhat by using a fully discrete or opamp based regulator circuit, there are no good reasons for doing so. There's no doubt that the effects shown are real, but equally they rarely cause anyone a problem.

Consider that there are millions of regulator ICs in use throughout the world. Of these, very few indeed are considered troublesome in any way, by any user. Some will have been designed using the wrong type of capacitor at the output (e.g. a low ESR type), and most just use standard electros of varying capacitance. Anything from 1µF to 220µF is very common. While countless people will insist that the DC somehow has a 'sound', unless it's very noisy it has no such thing. Opamp circuitry can (and does) remove much of the supply noise (that's called PSRR - power supply rejection ratio), but PSRR does fall with increasing frequency.

In theory (and based on the logic of some), every piece of audio gear built should be noisy, with an audible background hiss that won't go away. To an extent, this is true, except that in most cases it's at such a low level that it's not audible unless you put your ear against the tweeter. Since most of us don't listen to audio that way, it's apparent that it must make no difference to what we hear, and indeed, that is true. Headphone amps are a case in point. Headphones are usually very sensitive, yet most dedicated amps driving them are as close to 'noiseless' as you're likely to find.
Figure 5 - RC Network Reduces Noise To Negligible Levels
If you have a particularly intractable noise problem, then the easy way to fix it is to use the circuit shown above (varied according to current demands). As shown, the circuit is specifically intended for circuits that draw a reasonably consistent current, regardless of signal level. The values given are suitable for circuits drawing up to 100mA, and although regulation is not as good as it would be without the series resistor, that usually doesn't matter.
The series resistor isolates the regulator (I showed an LM317, but it can be any type you like, including an LDO regulator). The capacitor is no longer able to have any significant interaction with the regulator's inductive output impedance, and noise is rolled off from a frequency determined by ...
f = 1 / ( 2π × R × C )
With the values given, that's 268Hz (220µF) down to 27Hz (2,200µF), but you can change it to whatever you like. This arrangement is not suitable for use where there is a large current variation, unless you are certain that the voltage change won't upset the following circuitry. It may be unwise to use this circuit for powering dual opamps that have each opamp section devoted to different audio channels unless you use a large capacitance. In most cases I would expect no problems. The 2.7 ohm resistor affects the voltage regulation with varying currents, and the output voltage will fall by 270mV for each 100mA of current drawn.
The values given attenuate the noise at 6.25kHz by 30dB below the low frequency noise level with 220µF, and obviously a great deal more if you use a larger value. Not only is the peak seen in Figure 3 removed entirely, but above 268Hz (or 27Hz) all noise is rolled off.

Another option is the use of LDO regulators, with some specified for extremely low noise. Unfortunately, some of these are available only as positive regulators, and no negative version of the same basic type exists. Many are also limited to relatively low voltages (+5V or -5V output), but higher voltage versions are available. The higher voltage versions are unlikely to be significantly less noisy than standard regulators.
Regulators are not as simple as they appear (I doubt that anything is as simple as it appears!), but in the vast majority of cases they can be used with most circuits without any concerns whatsoever. Output impedance, transient response, noise, etc. are all real, but generally do not affect what we hear. In the vast majority of cases, careful measurements will be the first indicator if there is anything amiss. How good your results are depends on the capabilities of your measurement system, but you need to be aware that there are no known problems with any circuitry that are audible but not measurable.
There is no doubt that some potential problems can be below the limits of many measurement systems, but the only way to be certain that a change has improved (or made worse) the sound is by a double-blind test. That generally means that you need two samples, one 'tweaked' and the other left 'standard', so you can make direct comparisons, in real time, without knowing which system you are listening to. You can use the ABX Tester or AB Switch Box projects to run your tests (the AB Switch Box is the simpler of the two and works very well).
There is no doubt that the measurements and regulator behaviours described here are real, but the only thing that should be of interest is "does it make an audible difference?". Most of the time, the answer will be "no", but there may be some circuitry that benefits from an ultra-low supply noise level. Where this is found, it's quite likely that the circuit itself is at fault, rather than the power supply. At times like this, simulations and published info have to be verified, so I attempted to do just that, but I was unable to verify an actual noise peak at the regulator's output at any frequency using my scope and FFT analysis.

Although there is some risk of a noise peak, it isn't necessarily going to show up in real life. The quick test I did to see if I could replicate the 'noise peak' was unable to show anything meaningful. You will see evidence of the same effect in other articles [ 3 ] but usually at higher frequencies and lower amplitudes than the above graph implies.

If you want to get the lowest noise possible, consider using a capacitance multiplier, as shown in Project 15, but adapted for lower current. This is a very effective way to get the equivalent of several Farads of capacitance (assuming you think you need that much). The cap multiplier circuit is placed after the regulator, and the regulator's output voltage adjusted to get the desired final voltage. You will need a higher input voltage than normal, so for 15V DC you would typically use an 18V or 20V transformer.

Ultimately, it will rarely be the regulator that's the cause of noise problems. Ground loops or poor grounding practices can cause low frequency hum or buzz, and RF interference from any number of possible sources are generally going to cause many more problems than voltage regulators. However, this information may come in handy (and it's difficult to find), hence its inclusion on the ESP website.

The graphs shown (Figures 2 and 3) are based on those in the referenced documents, but have been redrawn to make them easier to read (the originals were anything but).
Elliott Sound Products | Linear Power Supply Design - Part 2
Quite a bit of this article results from recollections of my early foray into designing and making my own transformers for guitar and bass amps (we're talking 50 years ago at the time of writing). I quickly discovered that I couldn't buy off-the-shelf transformers that would provide the voltages I needed or handle the current. One attempt at getting a custom transformer made was both a success and a disaster - it worked, but cost way too much, and was enormous (and very heavy). At that point, I ordered laminations and the best winding wire available (designed for high temperature operation) and proceeded to teach myself transformer design.
Not one of my transformers ever failed, even though it wasn't uncommon for bass players (in particular) to decide to load the amp with far too many speakers. One had 4 × 8Ω quad boxes in parallel (2Ω), on a 200W amp designed for 4Ω. Both the amp and transformer survived this ordeal, but he was told it was a no-no once I found out.
Initially I used an educated guess to determine the number of primary turns, but later on I became a bit more skilled and was able to come up with a solution that worked very well. It pretty much goes without saying that the transformers used E-I laminations - the best I could get at the time - grain oriented silicon steel (aka GOSS). The windings were enamelled copper, with high temperature insulation. Unlike many enamels available now, the only way to remove the stuff I had is by abrasion - a soldering iron can't get hot enough to damage the insulation!
My transformers were deliberately designed to run the core at a little over the 'recommended' flux density, so idle losses were higher than normal. Because I could use thicker wire (along with fewer turns), they had excellent regulation - far better than anything available at the time. They were also impregnated with (proper) transformer varnish, and baked at around 100°C, after which they were incapable of any internal wire movement. The ratings of the ones I built ranged from around 150VA up to 500VA. They were close to indestructible in service.
Many years later I had some transformers made (of which I still have a couple), and requested the same thing - run the core 'hot', with higher than normal magnetising current. Interestingly, the design engineer at the place that made them for me commented that "I wish more people thought that way." At idle, they dissipate around 40-50W, and actually run slightly cooler when driven to around ½ power than with no load. The supply voltages don't collapse as badly as 'store-bought' transformers, due to lower winding resistances. These are shown in the tables below.
As regular readers may have noticed, I like transformers, and I'd like others to appreciate them as well. Using transformers is one thing, but understanding them is better. They are extremely efficient, and when designed specifically for a task you can take comfort in the knowledge that the transformer should outlive the equipment, and may even outlive the designer/ builder. It's only when the wrong transformer is specified for a job that you'll have problems, although both power and output transformers for valve (vacuum tube) amplifiers have a harder life due to the high voltages that are required. That's another story altogether, and is not covered here.
Part 1 of the 'Linear Power Supply Design' articles is (I freely admit) somewhat mind-numbing, especially for a beginner. This probably should be 'Part 1', but it can't be because that already exists (and has done so since 2001). There's not much in that article that isn't shown here, but this is deliberately simplified (some may disagree) to provide the information you need to get started. Many sites show the basic design process, but most (by a huge majority) leave out all the things that cause problems for many DIY hobbyists, and even some professionals. The most common omission is to not mention the transformer's winding resistance and how it affects performance.

This is more serious than it sounds at first, because there is only one reason that transformers are available in multiple different power ratings (actually VA - volts × amps). If a room-temperature superconducting material could be found for the windings, the only variable would be the output voltage. The only difference between a 30V, 10VA and a 30V, 1,000VA transformer is the thickness of the wire used for the primary and secondary. A larger core is necessary for higher output current so that the heavier-gauge windings will physically fit into the core. If the windings were superconductors, the wire size constraint would be (at least partly) eliminated.

Unfortunately, room-temperature superconductors are only a dream at present. This means that we can't escape from the fact that high power transformers must be far larger and heavier than low power types, so the winding resistance can be minimised. Any resistance in the circuit causes losses, and losses generate heat. If 1A flows through a resistance of 1Ω, 1W is lost as heat. Increase the resistance to 10Ω and that becomes 10W. Increase the current to 2A (still with 10Ω) and you now lose 40W, since power is determined by the square of current. While this is simplistic, it remains the biggest challenge to making high current transformers with low losses.
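The I²R arithmetic above can be captured in a one-line function, which makes the 'square of current' point obvious:

```python
def copper_loss(i, r):
    """Power (watts) dissipated in a winding resistance r (ohms)
    carrying current i (amps): P = I² × R."""
    return i ** 2 * r

print(copper_loss(1, 1))    # 1 W
print(copper_loss(1, 10))   # 10 W
print(copper_loss(2, 10))   # 40 W - doubling the current quadruples the loss
```

This is why halving winding resistance only halves the loss, but halving the current cuts it by a factor of four.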
Contrary to what you might imagine, the maximum flux density in a transformer core occurs with no load. This is covered in detail in the Transformers article, but it's mentioned again here because it's important to understand it. If you assume the 'alternate possibility', your understanding of transformer functions will lead to assumptions that are seriously at odds with reality.

There are sections in this article that explain all of the things you'll come across, but with more detail than is normally provided. While it's certainly interesting, much of it isn't relevant to building a simple power supply. It's relevant to the transformer designer, but once you've bought the transformer you're stuck with what you have. However, this isn't entirely true if the transformer is a toroidal type, as it's a relatively simple matter to add some extra turns to get a bit more output voltage if you need it.

Transformers are commonly described as being inductive components, but this is a gross simplification of reality. An unloaded transformer is certainly inductive, but the current drawn by the inductive component is small because the inductance is so high (typically at least 10 henrys for a 230V, 50Hz transformer). When powering a resistive load passing more than 10% of the rated output current, the inductance can be ignored as it's (close to) irrelevant. Few people seem to understand this, including those who should be aware.
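To see why a 10H primary draws so little idle current, treat the unloaded transformer as a simple inductor across the mains. This is a rough sketch only - it ignores core loss and the non-sinusoidal shape of real magnetising current, and the 10H value is the illustrative figure from the text:

```python
import math

V, f, L = 230.0, 50.0, 10.0          # volts RMS, hertz, henries (assumed)
x_l = 2 * math.pi * f * L            # inductive reactance, ≈ 3142 Ω
i_mag = V / x_l                      # magnetising current
print(f"{i_mag * 1000:.0f} mA")      # ≈ 73 mA
```

Against a rated primary current of an amp or more, this is indeed small, and it falls out of the picture entirely once the secondary is loaded.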
For anyone who would like to run transformer power supply simulations, I suggest you read the article Power Supply Simulation (Not As Easy As It Looks), which covers the tricks you can use to make a simulator emulate the 'real world' performance of transformers and rectifiers. The info in that article isn't just for simulations though, as it's relevant in what's laughingly known as the 'real world'.
It's important to understand that the so-called 'linear' power supply is not linear at all. Current is delivered from the transformer (and the mains) only when the secondary AC peak voltage is greater than the stored charge in the filter capacitors. The current waveform is highly non-linear (distorted) and it can inject noise into any wiring that's close by (including speaker cables!). This means that the supply has a poor power factor, but that is not covered here. It's important (to the power company) but you have little or no control over it. You pay for power (watts), but not VA. The transformer is affected by VA, not watts.
Transformer primary (mains) and secondary wiring should be kept well separated from all signal and speaker wiring. The 'ground' point must always be taken from the centre-tap of the filter capacitors for a dual supply to prevent diode switching noise from being injected into the ground wiring. Never take the ground from the transformer centre-tap or from a bridge rectifier, even if there's only a few millimetres of wire between that and the filter caps.

Wire has resistance. When you have to use 800 turns of wire for the primary, the only way to reduce the resistance is to use a heavier gauge wire, but that may not fit into the winding 'window'. Like almost everything, transformer design is a compromise. A skilled designer will get the best result possible from the least amount of material (steel and copper), with most design now being done by dedicated software.
+ +Transformers are limited by temperature, and the temperature is determined by the power dissipated in the windings. This is known as 'temperature rise', which means that at full load, the temperature will rise by (for example) 40°C above ambient. The ambient temperature is not the temperature in the room, outside or anywhere else that is not the immediate surroundings of the transformer. A transformer in a sealed enclosure will increase the temperature within that enclosure, and that is the 'ambient temperature'! The eventual temperature of any transformer is determined by winding resistance, the load on the secondary, and ventilation. Any transformer can be given a higher VA rating just by using a fan, a technique exploited in microwave ovens.
+ +The maximum allowable temperature is determined by the insulation temperature rating. It's uncommon for most suppliers to specify the insulation class, but expect most transformers to be rated for no more than 120°C. Many smaller transformers (up to 100VA in some cases) have an internal thermal fuse. It's not accessible as it's buried inside the primary winding, and if it opens the transformer must be discarded as it's usually impossible to replace it. 120°C is very hot, and you'd normally expect a transformer to run at no more than 60-80°C, with lower temperatures being very much preferred. As noted, you can use a fan to minimise the temperature rise, but that's rarely necessary in most power supplies.
IEC 60085 | NEMA/UL | Temperature | Materials
105 | A | 105°C | Organic materials such as cotton, silk, paper, some synthetic fibres
120 | E | 120°C | Polyurethane, epoxy resins, PET/ Mylar®/ Polyester
130 | B | 130°C | Inorganic materials e.g. mica, glass fibres, with high-temp binders
155 | F | 155°C | Class 130 materials with binders stable at higher temperatures
Transformer secondary voltages are nearly always specified at full rated current into a resistive load. This takes the winding resistance into consideration, and it invariably means that the output voltage with no load will be higher than the quoted secondary voltage. For example, a 100VA transformer is designed for an output of 30V RMS at 3.33A. When loaded with 9Ω, the secondary voltage will measure 30V RMS (if the actual mains voltage is the same as the rated primary voltage!), but when unloaded (no secondary current) the voltage will be around 33.5V RMS, giving a DC voltage of about 45V (including diode losses for a bridge rectifier). With a resistive load, the regulation is around 11.5% (compare this with the values shown in Table 4.1). When loaded with a bridge rectifier and filter caps followed by a load that takes the transformer to full load (24Ω for 100VA) the DC voltage falls from 46V to 38V - a regulation of about 17%. This is completely normal, and it happens with all transformers.
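Note that 'regulation' can be quoted against either the full-load or the no-load voltage, and the figures in the text use both conventions. A sketch using the values above (the choice of reference voltage in each case is mine, picked to reproduce the article's numbers):

```python
# AC regulation referenced to the full-load voltage:
v_nl_ac, v_fl_ac = 33.5, 30.0                  # volts RMS
reg_ac = (v_nl_ac - v_fl_ac) / v_fl_ac * 100
print(f"AC regulation: {reg_ac:.1f} %")        # ≈ 11.7 %

# DC regulation referenced to the no-load voltage:
v_nl_dc, v_fl_dc = 46.0, 38.0                  # volts DC
reg_dc = (v_nl_dc - v_fl_dc) / v_nl_dc * 100
print(f"DC regulation: {reg_dc:.1f} %")        # ≈ 17.4 %
```

Whichever convention you use, be consistent when comparing transformers, as the two definitions diverge noticeably once regulation exceeds a few percent.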
As a result, all linear power supplies will provide more than the expected voltage with no (or light) load, and less than expected at full load. Failure to appreciate this is common, largely because most articles that describe linear power supplies either don't mention it, or it's glossed over expecting that "Everyone knows this". In reality, everyone does not know this, other than from their own measurements, which may (or may not) be sufficiently accurate. 'Knowing' something is very different from observing a phenomenon, but not understanding how or why it happens.

Mains voltages are nominally 230V or 120V, but the actual voltage varies from hour to hour (and sometimes minute to minute). The tolerance is generally ±10%, but it's very common for that to be exceeded. Australian mains voltage is nominally 230V, but it's not at all uncommon to see up to 260V (+13%) and sometimes more (I've measured up to 265V RMS occasionally). Much the same occurs everywhere, and in some places the claimed 'accuracy' can be well over ±10%. If the input (primary) voltage changes, so too does the secondary voltage, and in direct proportion. That's only one of the many reasons that your DC voltages are different from the theoretical values.
Of the variables, winding resistance is easy, but only if the transformer remains at the same temperature as when measurements were taken. Copper has a thermal coefficient of resistance of (about) 0.395%/°C (i.e. 3.95E-3), so as it gets hot, the resistance increases. This increases losses, causing it to get hotter. Provided it's used within its (long term) ratings, self destruction won't occur. Any transformer can be heavily overloaded for a short time, and no damage will occur if it has time to cool again. For example, a 100% overload for 30 seconds has to be compensated by zero load for 30 seconds (50% duty cycle). You can use the temperature coefficient to calculate the winding temperature if you wanted to go that far.
RT2 = RT1 × ( 1 + α × ( T2 - T1 ))     For example ...
RT2 = 6 × ( 1 + 3.95E-3 × 50 ) = 7.185Ω for a temperature rise of 50°C
Where T1 is the initial temperature, T2 is the final temperature, and α is the thermal coefficient of resistance. I don't expect anyone to bother, because it's far easier to just feel the transformer with your hand (all live connections must be properly insulated!). If it feels hot, then it may be under-rated for the application. The same technique is regularly used by experienced technicians to test heatsink temperatures. The 50°C temperature rise shown above would have the transformer operating at 75°C if the ambient temperature is 25°C. That's much hotter than I'd want to operate a transformer on a continuous basis!
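For anyone who does want to go that far, the worked example translates directly into a small helper:

```python
ALPHA = 3.95e-3    # thermal coefficient of resistance for copper, per °C

def r_hot(r_cold, t_rise):
    """Winding resistance (ohms) after a temperature rise of t_rise °C,
    using RT2 = RT1 × (1 + α × (T2 - T1))."""
    return r_cold * (1 + ALPHA * t_rise)

print(r_hot(6.0, 50))    # 7.185 Ω, as in the worked example above
```

Run in reverse (measure the hot resistance, solve for the rise) this is the standard way winding temperature is determined without embedding a sensor.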
The following table shows measured values from three transformers I tested. All were tested with 230V input, and the output current shown is across the full winding. Some transformers have separate windings that can be paralleled for double the current, but at half the total voltage. For example, a 25+25V transformer can output (say) 6A at 50V (300VA) or 12A at 25V (also 300VA). The last transformer is a custom design, made to my specification.
Type | VA | Volts (nominal) | Amps | Pri | Sec 1 | Sec 2
Toroidal | 230 | 25+25 | 4.6 | 6.45 Ω | 237 mΩ | 238 mΩ
Toroidal | 300 | 25+25 | 6.0 | 4.21 Ω | 159 mΩ | 163 mΩ
E-I | 212 | 28+28 | 3.8 | 10.25 Ω | 392 mΩ | 397 mΩ
E-I (custom) | 350 | 28.5+28.5 | 6.1 | 6.19 Ω | 225 mΩ | 241 mΩ
The primary is designed to dissipate less because it's internal, surrounded by the secondary. It gets less cooling, so reducing its heat output is good engineering. You can do the calculations for all three transformers listed above and see the same thing. While this is somewhat peripheral to the main discussion, it's interesting and worth knowing about. You may also be curious why the secondary winding resistances are different. The secondary is wound in layers, so the outer layer has a slightly greater length of wire for the same number of turns as the inner layer. Almost all transformers will have the same (small) difference in secondary winding resistances. An exception is when the secondaries are bifilar wound, with both windings applied at the same time. This cannot be done with high-voltage transformers, as there isn't enough insulation to ensure there can be no voltage 'break-over' between windings when they are connected in series.
For the 300VA toroidal transformer, the primary will dissipate just over 7 watts at full load, and the secondary will dissipate 11.6 watts. With a resistive load, the effective input voltage is reduced by 5.4V from the nominal 230V (224V RMS) due to the voltage drop across the winding resistance. This reduces the total flux density in the core. With a total secondary resistance of 322mΩ, at full load (6 amps) the voltage drop is 1.92 volts. When the transformer is designed, the turns ratio is adjusted to compensate for these losses, so at no load the output voltage will be more than 50V. For 230V to 50V, the ratio is 4.6:1, but the transformer will be wound with a ratio of around 4.3:1 and that will give a no-load output voltage of 53.5V.
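As a cross-check on the figures above, the dissipation and voltage-drop arithmetic can be sketched in a few lines. This is my own illustration (the helper name is hypothetical), using the measured values for the 300VA toroidal:

```python
def winding_losses(va, v_primary, r_primary, i_secondary, r_secondary_total):
    """Return (primary W, secondary W, primary V drop, secondary V drop)."""
    i_primary = va / v_primary                  # full-load primary current
    p_pri = i_primary ** 2 * r_primary          # I²R loss in the primary
    p_sec = i_secondary ** 2 * r_secondary_total
    return p_pri, p_sec, i_primary * r_primary, i_secondary * r_secondary_total

# 300 VA, 230 V primary at 4.21 Ω, 6 A secondary at 322 mΩ total
p_pri, p_sec, v_pri_drop, v_sec_drop = winding_losses(300, 230, 4.21, 6.0, 0.322)
print(round(p_pri, 1))       # 7.2 W  ("just over 7 watts")
print(round(p_sec, 1))       # 11.6 W
print(round(v_pri_drop, 1))  # 5.5 V (the text rounds to 5.4 V)
print(round(v_sec_drop, 2))  # 1.93 V
```

The same function applied to the other transformers in Table 1.2 shows the same pattern of primary versus secondary dissipation.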
You don't need to understand this to build a power supply, but if you're designing one it helps. Most people just buy the suggested transformer and it usually works out just fine, but there's a great deal to be said for fully understanding why the voltages you measure aren't the same as you expected. Since a great deal of the ESP site is intended to show electronics enthusiasts just how things work (and why), I feel that it really is important to understand the nuances that are usually missed.
The same four transformers were also tested for no-load input current (aka magnetising current - Imag) and unloaded secondary voltage across both windings. The input was set for 230V RMS for each test. The four sets of data below are measured values, not the nominal values claimed on the nameplate. The rated values will be achieved only with a fully resistive load, and not with the usual rectifier and filter capacitor power supply that will be used in almost every case.
| Type | VA | Imag | Idle Loss (VA / W) | Vout (AC) | Regulation | Output (DC) |
|---|---|---|---|---|---|---|
| Toroidal | 230 | 19.2 mA | 4.41 VA / 1.16 W | 53.3 V | 6.6 % | ±36.75 V |
| Toroidal | 300 | 16.0 mA | 3.68 VA / 1.93 W | 53.2 V | 6.4 % | ±36.65 V |
| E-I | 212 | 66 mA | 15.18 VA / 10.8 W | 60.0 V | 7.14 % | ±41.46 V |
| E-I (ESP) | 350 | 152 mA | 34.96 VA / 7.32 W | 60.7 V | 6.5 % | ±40.64 V |

Table 1.3 - No-Load Measurements
The idle loss was measured in VA and watts, and while marginally interesting it's not actually very useful unless the transformer is powered on 24/7 and you want to know how much it costs to run. While you might expect it to be determined by I²R, that's not the case at all. The losses are a combination of copper loss and 'iron loss' (losses in the core). What is slightly interesting is the ESP transformer, which has lower power loss (as opposed to VA) than the other E-I transformer. I double-checked this, and it's correct. Part of the reason is the higher primary resistance, but it shows that the core losses are greater too. This is likely due to 'budget' steel having been used. All normal transformer measurements result in VA, which is the only rating that counts. In terms of no-load operating cost, there's no contest as the toroidal transformers win hands down.
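If you did want to know the running cost, it's a one-line calculation. The sketch below is my own illustration using the 212VA E-I transformer's idle loss from the table; the tariff is an assumed figure, so substitute your local rate:

```python
idle_watts = 10.8      # 212 VA E-I idle loss (W) from the table above
tariff = 0.30          # $/kWh - assumed for illustration only

kwh_per_year = idle_watts * 24 * 365 / 1000   # energy if powered 24/7
cost = kwh_per_year * tariff
print(round(kwh_per_year, 1))  # 94.6 kWh per year
```

At the assumed tariff that's roughly $28 a year, just to leave the transformer idling.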
The comparison between the 212VA E-I transformer and the two toroidal transformers shows that the difference isn't as great as expected. The latter have lower magnetising current and better regulation than the E-I type, but in use the difference will be minor. All of the figures shown for these transformers are very accurate, but the normal ±10% mains variation is not considered. This makes accuracy a moot point, but it's still instructive (if not very useful) to be able to take these measurements. Doing so shows the processes involved, but in reality you can dispense with this in its entirety. The ESP transformer is one I had made to my specifications, and its regulation is almost as good as the 300VA toroidal. This comes at a price though; much higher no-load losses. When these were made (ca. early 1980s), toroidal transformers were rare and very expensive.
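The regulation figures in Table 1.3 can be reproduced from the no-load voltage and the nominal (full-load) winding voltage. The formula below is the conventional definition of load regulation; matching the tabulated values suggests it's what was used, but treat it as my reconstruction:

```python
def regulation(v_no_load, v_full_load):
    """Load regulation in percent: voltage rise off-load, relative to full load."""
    return (v_no_load - v_full_load) / v_full_load * 100

# 300 VA toroidal: 53.2 V no-load vs 50 V nominal (25+25 V)
print(round(regulation(53.2, 50.0), 1))   # 6.4 %
# 212 VA E-I: 60.0 V no-load vs 56 V nominal (28+28 V)
print(round(regulation(60.0, 56.0), 2))   # 7.14 %
```

Both results match the table to the stated precision.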
A final piece of trivia that can be useful (sometimes) is the number of turns per volt (TPV). All transformers require sufficient turns to keep the magnetising current low enough to prevent core saturation. This is based on the size of the core and the maximum allowable flux density. For silicon steel, the maximum flux density is generally 1 Tesla (65,000 lines per inch²), but it can be pushed a little higher if you are willing to accept higher no-load losses. For the four transformers, I measured the following TPV ...
- 230 VA Toroidal: 3.89
- 300 VA Toroidal: 3.45
- 212 VA E-I: 2.72
- 350 VA E-I: 1.91
It's obvious that both of the E-I transformers have fewer turns per volt than the 'optimum', and this is the reason for the higher no-load losses. The two toroidal transformers have more turns per volt (hence lower losses), but quite obviously use thicker wire as the resistance is lower. It's easy to work out how many turns are required for any given voltage. For example, with (say) 3.5 TPV, a 230V primary needs 805 turns. There is a big difference between toroidal and E-I transformer cores. The toroidal core has no air-gap, and the onset of saturation is much more rapid. An E-I core can be pushed a little harder, because the junction between E and I laminations introduces a small air gap that makes saturation a little less radical. Higher flux levels can be tolerated, and it becomes a trade-off between no-load and full-load losses.
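Working out the turns for a winding is simple multiplication of TPV by voltage; a trivial sketch (the second line applies the measured TPV of the 300VA toroidal to one of its 25V secondaries, purely as an illustration):

```python
def turns(tpv, volts):
    """Number of turns required for a winding, given turns-per-volt."""
    return round(tpv * volts)

print(turns(3.5, 230))   # 805 - the example in the text
print(turns(3.45, 25))   # 86 turns for a 25 V secondary at 3.45 TPV
```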
Note: The turns per volt of a 50Hz transformer is always greater than that of an equivalent transformer designed to operate at 60Hz. A transformer for 50Hz needs 1.2 × the number of turns of the same transformer wound for 60Hz. It used to be common for 60Hz-only transformers to be used in US-made equipment, but global trade means that most (but not all) US (or Canadian) transformers are now made to work with 50-60Hz and 120-240V mains. A 60Hz transformer will overheat (often to the point of failure) when used at 50Hz, even if a step-down transformer is used to reduce 230V to 120V. A transformer designed for 50Hz will work perfectly with 60Hz, and no precautions are needed.
To measure TPV, simply wind one turn around the outside of the existing windings, apply the rated voltage to the primary, then measure the voltage (in millivolts). For the 300VA toroidal transformer, I measured 290mV for one turn, and TPV is simply the reciprocal (1/V). This works out to be 3.4482 (3.45 TPV is close enough). It's more accurate if you add 10 turns, and divide the measured voltage by 10. You don't need to know this unless you are adding a new winding to an existing transformer (something that can be useful). For example, if you need an auxiliary 15V AC supply, you'd just add 60 turns evenly spaced over the existing windings, using enamelled copper wire thick enough to handle the current. That will give you an unloaded voltage of 17.4 volts. Wrap the additional winding with tape for protection. Do not use ordinary electrical tape!
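The measurement described above reduces to a couple of lines. This sketch reproduces both the TPV figure and the 60-turn auxiliary-winding example from the text:

```python
v_one_turn = 0.290            # measured 290 mV across one test turn (300 VA toroidal)
tpv = 1 / v_one_turn          # turns per volt is the reciprocal
print(round(tpv, 2))          # 3.45

aux_turns = 60                # auxiliary winding from the example
v_aux = aux_turns * v_one_turn
print(round(v_aux, 1))        # 17.4 V unloaded
```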
| VA | Reg % | Rp (Ω) - 230V | Rp (Ω) - 120V | Diameter (mm) | Height (mm) | Mass (kg) |
|---|---|---|---|---|---|---|
| 15 | 18 | 195 - 228 | 53 - 62 | 60 | 31 | 0.30 |
| 30 | 16 | 89 - 105 | 24 - 28 | 70 | 32 | 0.46 |
| 50 | 14 | 48 - 57 | 13 - 15 | 80 | 33 | 0.65 |
| 80 | 13 | 29 - 34 | 7.8 - 9.2 | 93 | 38 | 0.90 |
| 120 | 10 | 15 - 18 | 4.3 - 5.0 | 98 | 46 | 1.20 |
| 160 | 9 | 10 - 13 | 2.9 - 3.4 | 105 | 42 | 1.50 |
| 225 | 8 | 6.9 - 8.1 | 1.9 - 2.2 | 112 | 47 | 1.90 |
| 300 | 7 | 4.6 - 5.4 | 1.3 - 1.5 | 115 | 58 | 2.25 |
| 500 | 6 | 2.4 - 2.8 | 0.65 - 0.77 | 136 | 60 | 3.50 |
| 625 | 5 | 1.6 - 1.9 | 0.44 - 0.52 | 142 | 68 | 4.30 |
| 800 | 5 | 1.3 - 1.5 | 0.35 - 0.41 | 162 | 60 | 5.10 |
| 1000 | 5 | 1.0 - 1.2 | 0.28 - 0.33 | 165 | 70 | 6.50 |

Table 1.4 - Typical Toroidal Transformer Data
The above table appears in several ESP articles regarding transformers, and is shown again here because it's so useful. The two toroidal transformers I tested have slightly different regulation from that shown in Table 1.4, but the difference is not significant. There are many other factors that influence the output voltage you measure, particularly when it's rectified and filtered. These are covered next, and while you can't change anything, at least you will know why the voltage is different from the assumed value.
Another term you'll find in any text about transformers is the turns ratio. This is simply the ratio of input to output voltage, so a transformer with a turns ratio of 4.32:1 means that for every 4.32V on the primary, there will be 1V at the secondary. With 230V input, the output is 53.24V (as shown for the 300VA toroidal transformer in Table 1.3). Although it's not relevant for mains transformers, the impedance ratio is the square of the turns ratio (18.66:1 for the same transformer).
You can use the impedance ratio to consolidate the effective winding resistance of both primary and secondary. For example, if the turns ratio is 10:1 (230V input, 23V output) and the primary resistance is 10 ohms, the effective primary resistance is 10Ω / 10² = 0.1Ω (100mΩ). This can be added to the actual secondary resistance to obtain the total. If the secondary resistance is (for example) 250mΩ the total effective series resistance is 350mΩ. I don't expect many readers will take it to this extreme, but it sometimes comes in handy - particularly for simulations and/or detailed analysis.
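The referred-resistance calculation can be written as a tiny helper (an illustration only; the function name is mine):

```python
def effective_series_r(r_primary, r_secondary, turns_ratio):
    """Total series resistance referred to the secondary side."""
    return r_primary / turns_ratio ** 2 + r_secondary

# 10 Ω primary, 250 mΩ secondary, 10:1 ratio -> the 350 mΩ from the text
print(round(effective_series_r(10.0, 0.250, 10.0), 3))  # 0.35 ohms
```

This single figure is what you'd plug into a simulation as the source resistance feeding the rectifier.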
We expect a sinewave from the mains, but that's not what we get - it's invariably distorted. The degree of distortion varies throughout the day, and depends on the current loading on the grid. It's fair to say that the 'rules of thumb' that we all use are either wrong, or at least inaccurate. Perhaps surprisingly, this doesn't matter very much, because the errors created by the basic formulae are smaller than the errors caused by mains variations. So, you can continue to use √2 (and its inverse) and not worry about it. Provided you understand that the results are (very) approximate, then you're well on your way to understanding linear power supplies.
Figure 2.1 - Distorted Mains Waveform
The above is a capture from the mains (via a transformer for safety), showing obvious distortion. We'd expect the peak voltage to be ±32.5V for a 23V RMS sinewave, but the oscilloscope shows ±32V (1V peak-peak lower than expected). The voltage was exactly 23.0V RMS, measured separately with a precision bench multimeter. This is normal, so it's obvious that a small voltage has been 'lost' from the mains. In itself, this is a minor contribution to the inaccuracies you'll measure when testing a power supply. The transformer used to drop the mains to a safe value was operated well below saturation to ensure it didn't contribute to the mains distortion.
However, it is important that you recognise that the mains waveform is distorted, as that reduces the actual (as opposed to calculated) DC voltage. The waveform at the transformer's secondary is also distorted (in much the same way as the mains), because the capacitor peak charging current is much higher than the RMS value. As a general rule, expect the AC RMS current to be at least 1.8 × the DC current. It can be greater than this, and in general I prefer to use a 2:1 ratio as that's usually more realistic.
The simple relationship we use to convert AC (RMS) to DC (AC × √2 [1.414]) doesn't work when the waveform is distorted, but it's still perfectly alright to keep using it. At low currents the error is so small that it doesn't matter. At high current, it doesn't work at all, due to the winding resistances described in the previous section. These can be sufficiently high as to make the average DC voltage almost the same as the RMS voltage when high current is drawn from the supply.
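A minimal sketch of the naive conversion, using the measured 53.2V no-load secondary voltage of the 300VA toroidal. The 1V-per-diode drop is an assumed round figure; the result can be compared against the ~69.5V the text reports with a 1A load to see how much the simple formula overestimates:

```python
import math

def dc_estimate(v_rms_secondary, v_diode=1.0):
    """Naive DC estimate for a bridge: Vrms x sqrt(2) minus two diode drops."""
    return v_rms_secondary * math.sqrt(2) - 2 * v_diode

print(round(dc_estimate(53.2), 1))   # 73.2 V, vs ~69.5 V measured at 1 A
```

The gap between the two figures is the combined effect of mains distortion, winding resistance and ripple.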
The 300VA toroidal transformer will provide a DC voltage of 69.5V with a (roughly) 1A load and 10,000µF filter cap, when supplied with a 230V sinewave. That is reduced to 66.7V DC under the same conditions, but with a 230V RMS distorted (about 4.4% THD) waveform. The additional distortion created by the transformer's winding resistance reduces the voltage even further.
First, we select a transformer. Because I already had a simulation file using the ESP custom transformer, that's the one I used for the example. Any will work of course, and the results are very good if all the information is supplied. The details are shown on the schematic, with voltage and current under load (about 310VA). The primary winding will dissipate 12.7W, with the first secondary (225mΩ) dissipating 6.84W and the second (241mΩ) 7.33W (a total secondary dissipation of 14.17W). The whole transformer dissipates 26.87W when providing 206W DC. Note that this does not include magnetising current, and this is reduced under load due to the primary resistance. The effective RMS input to the primary is 223V RMS for a 230V RMS input under the load conditions shown below.
Figure 3.1 - Power Supply Components
The above shows the total parts needed for a 'typical' power supply. The transformer is 230VA (but only because it was the first to hand to add to the photo), with 25+25V secondaries. The two filter caps are 10,000µF, and the bridge is a chassis mount 25A type. Compare this with Fig. 13.4 (a 60W switchmode supply) in terms of complexity and the number of parts needed. The transformer shown will be more than sufficient for a pair of 100W (4Ω) amplifiers for hi-fi usage. You could get more, but the voltage is limited (±35V DC). The wiring is simplicity itself, apart from the mains input and switch which must be properly installed to prevent accidental contact.
Normally, transformer selection doesn't concern us if the transformer is specified as part of an overall design, but when it's necessary to specify the transformer yourself, then without guidelines you are in the dark. You don't need to go into as much detail as I've done here, but this example gives you a much better idea of what you can (and cannot) get away with. Most loads are not continuous, so 'transient' transformer overload is not an issue. If the load current is continuous, then you need to perform detailed tests to ensure that the transformer will survive, and to ensure that you get the voltage(s) your circuit needs.
Figure 3.2 - Power Supply Schematic
The expectation is that the DC output voltage will be around 79.6V, but it's not - it's 70.3V. The secondary waveform is badly distorted because of the high-current pulses needed to recharge the filter cap and supply the load current at the same time. The peak current is 12.8A, with the RMS value being 5.51A, and a DC load current of 2.93A. That means that the AC current is almost 1.9 times the DC current! The 'rule of thumb' is that the AC current is 1.8 times the DC current, but as you can see that's not necessarily the case. It does become closer to 1.8 with transformers having higher winding resistance.
Figure 3.3 - Voltage & Current Waveforms
For the simulation above and those that follow, I used a 'pure' sinewave for the input, because the normally distorted mains waveform changes things and makes analysis harder. As a result, you will not measure the exact relationships shown, but the results will still be well within normal tolerances. Above we see the voltage and current waveforms. As noted above, the transformer's secondary current is 5.51A RMS, with peaks of 12.8A. The peak to RMS ratio is 2.323:1, and the diode conduction period is about 3.4ms for each half-cycle. The diodes only conduct when the incoming AC is greater than the capacitor voltage. The average DC voltage (blue waveform) is 70.73V, ripple voltage is 1.9V RMS or 6.14V peak to peak (8.7% based on p-p or 2.7% based on RMS). Note that you can't use the √2 multiplier to convert RMS to peak-peak because the waveform is not a sinewave.
You can see that once the peak AC is higher than the DC level (by ~0.9V) the DC voltage follows the AC waveform. The two are in almost 'perfect harmony' as the filter capacitor charges. The only difference is the diode forward voltage drop, which varies with current (the forward voltage of most diodes is up to 1.2V at maximum current). This distorted waveform is also passed back into the mains supply via the transformer, and is not mitigated by the primary resistance.
The waveform distortion is a direct result of winding resistance and peak current. With the primary resistance of 6.91Ω and current peaking at 3.4A, that takes 23.5V from each mains peak (positive and negative), so the effective mains voltage is only 301V peak, not 325V peak as expected. Interestingly, the voltage across the primary (after Rp) is 223V RMS, due to the distorted waveform (flux density is determined by the peak voltage, not RMS). On the secondary side there's a total of 466mΩ; with a peak current of 12.95A, a further 5.98V (peak) is lost, and that's why the regulation is so poor. This affects all transformer, bridge and filter cap combinations in the same way. The only way to reduce these losses is to use a bigger transformer.
The ripple voltage can be reduced by increasing the value of C1. Doubling C1 (6,600µF) halves the ripple voltage, but increases the transformer's secondary RMS current to 5.54A. Although there is a myth circulating that using a much larger than normal filter cap will 'burn out' the transformer, it's complete nonsense. You can use as much capacitance as you like (or can afford) without concern. The increase is small, and it gets even smaller once the capacitance exceeds 10,000µF (10mF).
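For estimating ripple, the standard approximation ΔV ≈ I / (2·f·C) for a full-wave rectifier is worth knowing. It ignores the diode conduction time, so it reads somewhat high compared with the simulated 6.14V p-p, but it correctly shows the halving when C is doubled. A sketch (my own, not from the article):

```python
def ripple_pp(i_load, capacitance, mains_hz=50):
    """Approximate peak-to-peak ripple for a full-wave rectifier (volts)."""
    return i_load / (2 * mains_hz * capacitance)

# 2.93 A DC load, C1 = 3,300 uF, then doubled to 6,600 uF
print(round(ripple_pp(2.93, 3300e-6), 1))  # 8.9 V p-p (conservative estimate)
print(round(ripple_pp(2.93, 6600e-6), 1))  # 4.4 V p-p - doubling C halves it
```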
You may (or may not) have noticed that the loaded (310VA) power dissipation is less than that at no load. I can't simulate core saturation easily (and it's inaccurate when attempted), but from experience I know that the transformer used for these simulations runs a little cooler under moderate load than at idle. Although I estimated the VA rating to be 350VA, based on the internal dissipation figures shown above it could be pushed to 400VA if the regulation isn't a concern. Would I run it at more than 350VA with a continuous load? "No" is both the short and long answer.
Something else that you may (or may not) notice is that the 'clipped' AC waveform has peaks that rise in voltage in Figure 3.2, but the mains waveform shown in Figure 2.1 shows the voltage falling at the peaks. The reasons for this don't appear to be discussed anywhere. I suspect that it's simply due to phase shift through the power distribution network, caused by a combination of partially inductive loads and power-factor correction capacitors on the network. In some mains waveform captures you may find on the Net, the mains is either 'flat-topped' or shows the voltage rising slightly, so it's location dependent. The waveform I captured shows the mains distortion that existed at my location at that time of day. It does change, and I've seen different waveforms during other tests.
The constant current drain of a Class-A amp means that you must design the supply for continuous (rather than transient) current. Other amplifiers (notably Class-AB and Class-D) draw a widely varying current, depending on the load and instantaneous output power. This is catered for in all ESP designs, but it still means that the continuous output power is a little less than you might expect with some loudspeaker loads. Transformers are always specified that will cater for continuous high power, and that means that they are usually larger than may be suggested for other designs you may come across.
When a power supply is used with an amplifier, the basic things we need to know before starting are as follows ...
- Power output and minimum impedance, or ...
- Peak and average DC current
- Acceptable power supply ripple voltage
With only these criteria, it is possible to design a suitable supply for almost any amplifier (or power supply for other applications). I shall not be describing high current regulators or capacitance multipliers in this article - only the basic elements of the supply itself. These other devices are complete designs in themselves, and rely on the rectifier/ filter combination to provide them with DC of suitable voltage and current.
By knowing the average DC current and voltage, you can work out a rough approximation of the transformer VA rating needed. For example, a 100W (4Ω) amp will use ±35V DC supplies, and draw a peak current of up to 8.5A with a 4Ω load. The speaker current is about 6.2A RMS, and the average DC supply current is 2.6A (close enough) for each supply. Since the supplies are in series, the current in each is the same - 2.6A. However, that figure applies to a continuous full-power sinewave, and the average with real signals is less. We'll assume a peak-to-average ratio of 10dB (10W average at the onset of peak clipping), as this is likely to be the worst case. The average DC current is then only about 720mA. Because of the rectifier conversion, the AC current will be 2½ times that, or about 1.8A.
The transformer needed for a single 100W, 4Ω amp with ±35V is 25-0-25V, or 50V in total. 50V with 1.8A is 90VA (180VA for stereo). If you think that can't be right, you would be correct - it's actually less. It's uncommon for any home system to be operated at full power for any length of time. Even professional audio is subject to the same level variations that are characteristic of music, although such systems are nearly always pushed closer to their limits. Note that this usually does not apply to guitar amps, which are often driven well into clipping much of the time.
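The chain of estimates above can be rolled into one hedged helper. The 10dB peak-to-average ratio, the ÷π average supply current for a Class-B stage and the 2.5× AC-to-DC factor are the assumptions stated in the text; the function itself is my own sketch:

```python
import math

def va_estimate(p_peak, r_load, v_secondary_total, pk_avg_db=10.0, ac_factor=2.5):
    """Rough per-channel transformer VA for a Class-AB amplifier."""
    p_avg = p_peak / 10 ** (pk_avg_db / 10)        # 10 W average for a 100 W amp
    i_pk = math.sqrt(2 * p_avg * r_load) / r_load  # peak speaker current at p_avg
    i_dc_avg = i_pk / math.pi                      # Class-B average supply current
    return v_secondary_total * i_dc_avg * ac_factor

# 100 W into 4 ohms, 50 V total secondary (25-0-25)
print(round(va_estimate(100, 4, 50)))   # 89 VA per channel, near the 90 VA quoted
```

The intermediate average current (about 0.71A) also reproduces the 720mA figure in the text.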
Figure 4.1 - Dual Primaries For 120/ 230V Operation
In many cases, transformers are supplied with dual primaries. This allows them to be used with 120V (60Hz) and (nominally) 230V (50Hz). However, twice 120V doesn't add up to 230V, it's 240V. This is a 4% error, so when used with 230V the secondary voltage(s) will be 4% low, compared to the voltage provided with 120V mains. There's nothing (sensible) you can do about the error, and the difference is not worth worrying about. The mains voltage will vary by more than that during the day. The important part to be careful with is the polarity of the windings. If you get the polarity of one wrong, the result will be a splattered fuse, because the transformer places a very low resistance across the mains. Some transformers provide the details on the label, and others just provide the wire colours for each primary. When described this way, the colours are (almost) always in order (start - finish). The transformer shown in Fig. 4.1 shows the primaries as BRN-VIO+GRY-BLU, and the secondaries as RED-BLK+YEL-ORG. The first colour in each 'group' is the start (indicated by a dot in schematics). Note that there is no convention for colour-coding, and it varies by manufacturer.
Due to the highly variable nature of music, it's very difficult to determine the 'true' average current drawn. It's always better to over-estimate than under-estimate, although that does mean you spend a bit more for the transformer. That comes with another benefit too - the supply rails don't vary by much as the amplifier is used. When dealing with a Class-AB amp, a reasonable assumption is to use a transformer rated for roughly the same VA as the output in watts. A stereo 100W per channel amp will be perfectly happy with a 200VA transformer. Some texts suggest 0.7, meaning that the transformer only needs to be 140VA. For normal home listening, this is usually more than acceptable. You won't be able to get the full power with a sinewave, but few of us listen to high power sinewaves for pleasure.
If you use a transformer that's less than the amp power, it may be overloaded with continuous high level programme material, but music is dynamic and the average power is lower than you think. A 100W amp pushed to mild clipping will normally have an average power of less than 10W. Of course, that depends on the type of music and the peak to RMS ratio of the signal. If you subscribe to the idea of using the minimum you can get away with, even a 100VA transformer for a stereo 100W amplifier will be perfectly alright with normal programme material, but you won't get the full rated power except with transients.
The peak to RMS ratio of music is highly variable. FM broadcasts are usually compressed, and they represent the worst case, so that's what I measured. Over the period I monitored it, the peaks were at ±2V and the averaged RMS (taken over a large number of samples) was 507mV, a 'typical' ratio of 4:1 (near enough) or 12dB. That means that a 100W amplifier driven just to the onset of clipping will be delivering an average power of 7.2W (12dB below 100W). The RMS speaker current is therefore about 1.3A into 4Ω. When the ESP designed transformer is powering a pair of amplifiers delivering maximum output with programme material, the transformer is operating at less than 200VA for a stereo pair. Of course, this assumes that there are no breaks in the music, and the amp is constantly driven to the verge of clipping. In normal use this is unlikely! Just 7.2W into 90dB/W/m speakers means 98.6dB SPL at one meter (and one speaker). At the listening position with stereo, it won't be much less.
For those who prefer a simple answer with the minimum of explanation, this summary should be ideal. Naturally, you don't have all the information - just enough to make a reasonably sensible selection.
The 'fudge-factor' of 0.7 is a good overall compromise, and it's rare that this will cause any issues. A dual (e.g. 100W/channel) amplifier needs a 140VA transformer, so you'd select the closest available. With toroidal transformers, the most common in this range is 160VA, which is perfectly acceptable. If the transformer is larger you get 'stiffer' supplies with less voltage 'sag' during transients. However, while the difference is easily measured, it will almost certainly be inaudible. As noted above, you cannot use this formula with guitar amps - the VA rating should not be less than the amplifier power. A 100W guitar amp therefore needs a minimum transformer rating of 100VA, and preferably a bit more. I wouldn't be happy with less than ~150VA for a 100W guitar amp.
In the following, the transformer has dual secondaries, connected in series for most, but kept separate for Fig. 5.3 and paralleled for Fig 5.4. There are a few transformers available that have a 'true' centre-tap, but they are now fairly uncommon - particularly for toroidal transformers. There is no difference whatsoever between a true centre-tap and one made by joining the two windings. When connecting the windings in series, they must be in-phase, with the start of the second winding connected to the finish of the first. Most toroidal transformers have a diagram that shows the colours used, and they are in order - start-finish.
Note: If you use a muting system that relies on an AC connection to provide a power-off mute (such as Project 33), there is only one connection that works properly with a 'stacked' supply (Fig. 5.3). The take-off point is shown on the drawings that follow, and can be ignored if it isn't used.
There is only one rectifier type that will be covered in depth - the bridge (single or dual supply). The so-called single-supply full-wave rectifier is a throwback to the valve era, when it was difficult to use any other topology with valve rectifier diodes. It was possible to make a bridge rectifier using valves, but the extra filament windings and multiple valve diodes made it far too expensive to consider. Due to poor transformer utilisation, it will be discussed only in passing. Note that in each case shown, the 'ground' or common point is as close as possible to the centre-tap of the filter capacitors, or for two (or more) sets of caps, from the last set.
Half-wave rectifiers are an abomination, and even with only a few milliamps they can cause core saturation with a toroidal transformer. I don't recommend them for anything unless there is no other choice (which is very rare indeed). In most cases, anywhere a half-wave rectifier is used, you can (and should) use a bridge. The one exception is with some valve amplifiers, where a separate tap on the HV winding is used for the negative bias. You cannot use a bridge rectifier for that, but the current is very low (the only saving grace). Because of the likelihood of core saturation, I will provide no analysis of a half-wave rectifier.
Figure 5.1 - Dual (Positive And Negative) Power Supply Schematic
Figure 5.1 shows the most common transformer power supply used today. The transformer is the same one used in Figure 3.1, but the centre-tap is now connected and there are two filter capacitors in series, with twice the capacitance for each. The secondary currents are different because the two secondaries have different winding resistances. This affects the ripple voltage slightly, as the peaks are not quite the same size. Again, this is normal with almost all transformers (refer to Table 1.2 again) and it makes little or no difference in practice. The peak secondary current is 13.4A for S1 and 12.8A for S2 because each winding has a slightly different resistance. The capacitor ripple current is 4.63A RMS. The 'AC Detect' signal can be taken from either secondary winding.
The transformer utilisation is almost the same as with the Figure 3.2 circuit. There are subtle differences, but in practice they are immaterial. The dual supply bridge is a very efficient design, and while some constructors like to 'experiment' with other arrangements, they don't make any significant difference. One thing that you will get is slightly asymmetrical DC ripple voltages under load, but this is never a problem.
The arrangement shown in Figure 5.1 is derived from a pair of full-wave rectifiers as shown next. These used to be very common, but transformer utilisation is not particularly good and you'll most commonly see these used in valve circuits, often with valve rectifiers. If possible, I'd avoid this, but if the transformer has a fixed centre-tap (rather than separate windings that are wired in series) you don't have a choice if you only need a single supply.
Figure 5.2 - Single (Positive Only) Full-Wave Rectifier
Each secondary winding provides half the DC output, with the RMS values shown. This is somewhat misleading though, as the current from each winding is unidirectional (it's pulsating DC). The peak current from each winding is over 14A. While this circuit is useful, if you need high current it's better to use paralleled windings and a bridge rectifier (Fig. 5.4) if the transformer has separate secondaries. Ripple current is 4.02A RMS for each capacitor (a total of 8.04A). The 'AC Detect' signal can be taken from either secondary winding.
The Fig 5.1 circuit is simply two of these full-wave rectifiers, with one providing a positive output and the other a negative output.
Figure 5.3 - Alternative 'Stacked' Dual Power Supply Schematic
This arrangement is imagined by some to be 'better' (in some mysterious way) than the more traditional arrangement. Apart from the requirement for two bridge rectifiers, there's very little difference from the 'traditional' dual polarity bridge, but the output voltages are a little lower because of the additional diodes. Transformer utilisation is fine, and the total dissipation in the transformer is very slightly lower, with some additional power being dissipated in the second bridge rectifier. The bridge current (peak and RMS) is slightly less for each bridge, but there are two of them so 'wasted' power is increased a little. Ripple current is 4.62A for C1 and 4.58A for C2 (both RMS). The 'AC Detect' signal must be taken from the point indicated.
An interesting characteristic of this arrangement is that neither AC winding has a direct ground reference, and if you're using a Project 33 speaker protection circuit, the 'AC Detect' terminal must be taken from the point shown. With most other connections the power-off mute will remain 'on' and it will never release. I mention this in passing, because a customer had exactly that problem and wondered why. The connection shown works just fine, but it's easier to use the Fig. 5.1 circuit so the problem isn't created in the first place.
+ +The idea that a 'stacked' supply is somehow superior is not backed by any test results. Because the output ripple is exactly the same (give or take a few millivolts) as a more 'traditional' supply for the same conditions, there are no benefits, but you have to use two bridge rectifiers. This increases the cost, and nothing more. The output voltage is fractionally lower because there's an extra diode in series with each supply rail, but the diode current is (close to) identical.
+ +
Figure 5.4 - High Current Single Power Supply Schematic
On occasion, you don't need particularly high voltage, but you do need as much current as you can get. To do this, the windings have to be separate, with a pair of wires for each secondary (a centre-tapped transformer uses the arrangement shown in Figure 5.2 and you cannot parallel the windings). Note the polarity indicators (the phasing 'dot' markings) - the windings must be in phase or a short circuit is the result! The transformer input is 325VA (230V at 1.415A RMS), and the output current is doubled compared with the dual supply and high voltage examples. Capacitor ripple current (4.61A each, 9.22A RMS for the two) is higher than for the full-wave rectifier because the windings are in parallel, so have less series resistance. The 'AC Detect' signal can be taken from either end of the secondary windings.
As noted earlier, ½ wave rectifiers should never be used if you need more than 1-5 milliamps or so. At even comparatively low output (say 5W), the transformer VA is more than seven times the output power (tested and measured!). 5W of DC can result in over 50VA in the transformer, and a toroidal transformer will almost certainly blow the fuse due to gross core saturation. There is simply no reason at all ever to use ½ wave rectifiers!
The ripple voltage (measured peak-peak) is determined by the capacitance and the mains frequency. In all the examples here, you'll notice that the capacitor value is not standard. I assumed the use of three 2,200µF capacitors in parallel in each case, as this is almost always going to be cheaper than using a single large electrolytic capacitor. Not only cheaper, but you'll usually end up with a lower ESR and a higher ripple current rating with this arrangement. This is one of the very rare cases where you can save money and get a better result!
The required capacitance for a given load current and ripple voltage is determined (approximately) by the formula ...

C = ( IL / ΔV ) × k × 1,000 µF ... where
IL = load current
ΔV = peak-peak ripple voltage
k = 6 for 120Hz or 7 for 100Hz ripple frequency
Since all my calculations above were done using 100Hz ripple frequency (50Hz mains), this can be checked easily. Several examples shown have a current of around 3A with a ripple voltage of 3V (P-P), so we need ...
IL = 3A, ripple = 3V p-p, therefore C = 7,000µF (6,600µF is near enough)
This formula is more than acceptable for most applications, with the error being less than the tolerance of most electrolytic caps. If we decide that 2V P-P ripple is preferable, the net result is that the required capacitance is about 3,500µF per amp (50Hz supply). The required capacitance will be less for 60Hz countries, at 3,000µF per amp - again for a 2V P-P ripple voltage. My recommendation is for a minimum of 3,300µF per amp DC, although that is not what was used in any of the examples. The reason for this is simple; full current is rarely required on a continuous basis. Note that the formula is only an approximation, and you will almost certainly see variations in real life. However, the formula works fairly well over a wide voltage range.
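As a quick check, the rule of thumb above can be put into a few lines of Python (the function name and defaults are mine, not from the article):

```python
def required_capacitance_uF(load_current_A, ripple_Vpp, mains_Hz=50):
    """Approximate filter capacitance (in µF) for a full-wave rectifier.

    Uses the article's rule: C = (IL / ΔV) × k × 1,000 µF, with
    k = 7 for 100Hz ripple (50Hz mains) and k = 6 for 120Hz (60Hz mains).
    """
    k = 7 if mains_Hz == 50 else 6
    return (load_current_A / ripple_Vpp) * k * 1000

# The worked example: 3A load, 3V p-p ripple on 50Hz mains
print(required_capacitance_uF(3, 3))      # 7000.0
# The 2V p-p targets quoted: ~3,500µF per amp (50Hz), ~3,000µF per amp (60Hz)
print(required_capacitance_uF(1, 2))      # 3500.0
print(required_capacitance_uF(1, 2, 60))  # 3000.0
```

The numbers agree with the examples in the text, remembering that the formula itself is only an approximation.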
Always remember that the mains voltage can fall (or rise) by up to 10%, where a fall of 10% gives a power loss of around 20W - the 100W amp will only be capable of 80W with 10% low mains. If an amplifier is intolerant of a normal amount of supply ripple (typically a couple of volts peak-to-peak), then it's a poor design and probably shouldn't be used. Some Class-A amps are an exception, and a capacitance multiplier is far cheaper than 100,000µF capacitors.

Use of a small (e.g. 1µF) polyester or polypropylene capacitor across the DC output is a common practice. Electrolytics all exhibit a small inductance, and this causes their impedance to rise at high frequencies. This is dependent on the physical size (mainly the distance between the leads) of the cap - bigger caps usually have greater inductance. Should you choose not to include a film bypass (I don't bother in any supply I build), nothing 'bad' will happen - the impedance of the large electrolytic will usually remain much lower than that of the film cap at any frequency below 1MHz or so. It may be possible to see a tiny reduction of HF noise above 20kHz, but don't expect a reduction of more than perhaps 150nV (yes, you read that correctly). As I said, it does no harm, but you won't hear a difference in a blind test, and it's even difficult to measure without specialised equipment.

Note that the leads to and from the filter caps will generally have far more inductance than the capacitor itself, and it is often these leads (as well as PCB traces) that dominate the 'self-resonant' frequency of a capacitor. If the leads are too long, then some amplifiers will oscillate. The proper place for film bypass capacitors is on the amplifier board itself - not directly in parallel with the filter capacitors. You can do both, but only the caps on the power amp board will have any useful effect. As a guide, the inductance of a straight piece of wire in free space is approximately 5-6nH (nano-Henrys) per centimetre, so if you have 100mm (10cm) of wire between the filter caps and the amplifier, you have added ~55nH of inductance in the supply leads. It isn't much, but can cause high speed semiconductors to oscillate in a feedback circuit.
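The wiring inductance guide above is easy to sketch numerically (a rough free-space approximation only; the 5.5nH/cm figure is my midpoint of the 5-6nH/cm quoted):

```python
def lead_inductance_nH(length_mm, nH_per_cm=5.5):
    """Rough inductance of a straight wire in free space (~5-6nH per cm)."""
    return (length_mm / 10.0) * nH_per_cm

# 100mm (10cm) of supply lead between filter caps and amplifier: ~55nH
print(lead_inductance_nH(100))   # 55.0
```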
Some designers include a bleeder resistor in parallel with the filter cap(s). Once an amplifier is working normally this is redundant, and does nothing other than dissipate power. It can be very useful during testing though, as the caps can retain a charge for some time if the amp is not connected, leading to sparks, rude words and possible damage. There are no rules for the value, but it wouldn't be sensible to use a 1Meg resistor in parallel with a 10,000µF capacitor, nor would it be sensible to use a 100Ω resistor. In general, a resistor value that will discharge the caps to 37% of the full voltage (one time-constant, R×C) in around 10 seconds is reasonable, so for a 10,000µF cap that means a 1k resistor. If the supply voltage is ±35V, you'll need 2W resistors that will dissipate a little over 1.2W each. Work out the value and power rating needed for your application using Ohm's Law.
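The bleeder sizing described above is a one-line application of the time constant and Ohm's Law. This sketch (my own function, reproducing the 10 second / 10,000µF example) shows the arithmetic:

```python
def bleeder_design(cap_F, supply_V, time_constant_s=10.0):
    """Bleeder resistance for one R*C time constant of `time_constant_s`
    seconds, plus its continuous dissipation at `supply_V` (P = V^2 / R)."""
    resistance = time_constant_s / cap_F   # R = t / C
    power = supply_V ** 2 / resistance     # continuous dissipation
    return resistance, power

# Article example: 10,000µF cap, 35V rail -> 1k resistor, ~1.2W continuous
r, p = bleeder_design(10e-3, 35)
print(round(r), round(p, 1))   # 1000 1.2
```

A 2W part is the sensible choice for the ~1.2W result, as the text notes.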
The manufacturers' ripple current rating is the maximum continuous current (at maximum allowable temperature) to achieve the quoted life expectancy of the capacitor (usually 2,000 hours, or 12,000 to 26,000 hours for 'high reliability' capacitors). The ripple current rating is determined in part by the ESR (equivalent series resistance) and the maximum rated operating temperature (typically 85°C or 105°C).

Capacitors in power supplies feeding Class-A amps should be operated well within their ratings. In a Class-AB amp, the maximum ripple occurs at maximum output, which only happens occasionally (if at all!). Occasional excursions up to or even above the maximum ripple current will not significantly affect the life of the capacitor. In a Class-A amp however, the ripple is at or close to the maximum whenever the amp is switched on. If the ripple current is at the maximum for the capacitor, the life expectancy would be 2,000 hours (for most types). This equates to a life of less than 2 years if the amp is used for 3 hours a day. It will likely last much longer, but that will be good luck rather than good management.

A formula for calculating ripple current would be very useful, but unfortunately (despite claims made in some articles I have read), it is almost entirely dependent on the series resistance provided by the incoming mains, the power transformer and rectifier diodes. Formulae that do exist only work for capacitance values that are too small to be effective.

As a rough guess (and that's all it is), you can estimate the RMS ripple current. It will typically vary from around 1.8 times the DC current up to 2 times the DC current for larger transformers with low winding resistance. With smaller transformers (higher winding resistance) the ripple current may be less than 1.8 times DC, but usually not by a great deal. Even for a small transformer, it's safe to assume that the RMS ripple current will still be about 1.5 times the average DC. Wherever possible, ensure that the capacitor ripple current rating is at least double the average DC.
Remember that large capacitor values have a smaller surface area per unit capacitance than smaller ones, so the use of multiple small caps instead of a single large component can be beneficial. There is more surface area, the ESR will be lower, the ripple current rating higher, and the combination will most often be cheaper as well. This is an 'all win' situation - rarely achieved in any form of engineering. See Linear Power Supply Design for a detailed analysis. However, large 'can' style capacitors with bolt-on connections are generally made to very high standards. They are expensive, but if you want the best performance possible these are recommended. Never buy these from unknown sellers on auction sites, as fakes are quite common (these caps typically cost over AU$35 each). If you see them advertised for less than AU$20 (with free postage) you can't expect to get the real thing!

It's worth pointing out that historically, filter capacitors are the number one cause of power supply failure. This is almost always because of the effects of temperature and ripple current, and close attention to this is very much worth your while. ESR is the best way to determine if a capacitor is still good or is on its last legs. An ESR meter is an excellent investment for anyone building or repairing amplifiers. When a cap goes 'bad', the ESR will rise to an unacceptable value even though the capacitance may seem to be within normal tolerance. It's also worth noting that many vintage (valve and transistor) guitar/ hi-fi amps may still have the original filter caps. Sometimes it's difficult to understand how a cap that has a 'design rating' of 2,000 hours can last for 30 years or more!
One thing I strongly recommend for power amplifier power supplies is the use of 25/ 35A chassis mounted bridge rectifiers. Because of the size of the diode junctions, these exhibit a lower forward voltage drop than smaller diodes, and they are much easier to keep cool since they will be mounted to the chassis which acts as a heatsink. As always, lower temperatures mean longer life, and as was demonstrated above, the peak currents are quite high, so the use of a bigger than normal rectifier does no harm at all.

Even given the above, I have had to replace bridge rectifiers on a number of occasions - like any other component, they can (and do) fail. Bigger transformers increase the risk of failure, due to the enormous current that flows at power-on, since the capacitors are completely discharged and act as a momentary short circuit. You must always consider the peak current, which as shown above is much higher than the RMS or average value. With a 'typical' power supply, the peak diode current can exceed five times the DC current, even though the average diode current will be about half the DC current. Diodes are (almost) always specified for average current, with a repetitive peak current capability that can handle the expected peak current in normal use.
Diodes used in a FWCT (Full-Wave Centre Tapped) or single full-wave supply rectifier must be rated at a minimum of double the worst case peak AC voltage. So for example, a 25V RMS transformer will have a peak AC voltage of 35V when loaded, but may be as high as 40V unloaded, and double this is 80V. 100V Peak Inverse Voltage (PIV) diodes would be the minimum acceptable for this application.

Voltage doubler supplies are very uncommon for transistor power amps, but are sometimes used for preamp supplies and valve (vacuum tube) amplifiers. The diode PIV must be at least double the peak AC voltage, so (for example) with a 20V winding (28V peak) the diodes need to be rated for a minimum of 100V.

For a single bridge rectifier, PIV only needs to be greater than the peak AC voltage, since there are effectively two diodes in series. In the case of a dual supply (using a 25-0-25V transformer), the worst case peak AC voltage is 80V, but using diodes rated for 200V PIV is wise. The most common 35A chassis mounted bridge rectifiers are rated at 400V, and this is sufficient for all supplies commonly used for power amplifiers of any normal power (i.e. < 500W into 8Ω). Beyond this, the voltage rating is fine, but the current rating is inadequate, and a higher current bridge should be used. Alternatively, use a separate bridge and filter capacitors for each channel.
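To make the PIV arithmetic of the last few paragraphs concrete, here is a small sketch. The topology names and the unloaded-voltage allowance are my own framing, chosen to match the 25V example above; real designs should round up to the next standard rating with margin:

```python
import math

def min_piv(vrms_loaded, topology, unloaded_factor=40 / 35):
    """Worst-case peak AC voltage times a topology factor.

    'fwct' and 'doubler' diodes need at least double the peak AC voltage;
    a 'bridge' only needs more than the peak itself (two diodes in series).
    The default unloaded_factor mirrors the article's 25V example:
    35V peak loaded, up to 40V unloaded.
    """
    vpk_worst = vrms_loaded * math.sqrt(2) * unloaded_factor
    factor = {'fwct': 2, 'doubler': 2, 'bridge': 1}[topology]
    return vpk_worst * factor

# 25V RMS winding, FWCT: ~81V worst case, so 100V PIV parts are the minimum
print(round(min_piv(25, 'fwct')))    # 81
# The same winding into a bridge only sees the peak voltage itself
print(round(min_piv(25, 'bridge')))  # 40
```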
There is currently a trend towards using fast recovery diodes in power supplies, since these supposedly sound 'better' (IMO this is snake-oil). There is absolutely no requirement for them, but they do no harm. The purpose of a fast recovery (or any other fast diode) is to be able to switch off quickly when the voltage across the diode is reversed. All diodes will tend to remain in a conducting state for a brief period when they are suddenly reverse biased. This is extremely important for switchmode supplies, since they operate at high frequency and have a squarewave input. Standard diodes will fail in seconds with the reverse current, since it causes a huge power loss in the diode.

These diodes typically come in a TO-220 package, and must be mounted to a heatsink (with insulating washers and thermal compound). At maximum output current, the diodes can dissipate a surprisingly high power (over 12W peak or 2W average each is easily achieved), and the TO-220 package is too small to maintain a sensible temperature without a heatsink. Based on datasheets, the thermal resistance from junction to ambient for a 'free-air' TO-220 package is around 73°C/W, so at 2W that would leave the junction at over 170°C, which is just below the maximum of 175°C. There's no room for error without a heatsink.
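The free-air figure is easy to verify with the basic thermal-resistance relation (Tj = Ta + P × Rθja). The 73°C/W value is from the text; the 25°C ambient is my assumption:

```python
def junction_temp_C(power_W, ambient_C=25.0, rth_ja_C_per_W=73.0):
    """Junction temperature: ambient plus dissipation times the
    junction-to-ambient thermal resistance (free-air TO-220, no heatsink)."""
    return ambient_C + power_W * rth_ja_C_per_W

# 2W average in free air: 171°C, uncomfortably close to the 175°C maximum
print(junction_temp_C(2))   # 171.0
```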
At 50 or 60Hz, and with a sinewave input, the slowest diodes in the universe are still faster than they need to be. Despite this, high speed diodes actually do cause less 'disturbance' at the transformer's secondary. Not that it makes the slightest difference to the DC. Some designers suggest that even the standard diodes should be slowed down with paralleled capacitors. This might help, as it reduces the radiated and conducted harmonics from the diode switching. These switching harmonics can extend to several MHz (but at very low levels), even with the normal 50/60Hz mains.

Typically, capacitors between 10 and 100nF (optionally with a small series resistance) are wired in parallel with each diode in the bridge, and this is quite common with some high end equipment and test gear where minimum radiated noise is essential. Some constructors like to add snubbers (a series resistor and capacitor) in parallel with the transformer secondaries. For more info on that topic, see Power Supply Snubbers which covers this in detail. Don't expect a snubber or fast diodes to change the DC, because they won't (and this has been tested and verified on the workbench). The main filter capacitors have a very low impedance at all frequencies of interest, and they effectively remove all traces of switching transients (they are not particularly fast, despite 'alternative' opinions).

Since the power supply is connected to the mains, it is necessary to protect the building wiring and the equipment from any major failure that may occur. To this end, fuses are the most common form of protection, and if properly sized will generally prevent catastrophic damage should a component fail. However, read the next section before deciding, as inrush current has to be accommodated. You'll find everything you need to know in the article How to Apply Circuit Protective Devices, and ignore all claims for 'audiophile' ('audiophool'?) fuses. They are nothing more than devices intended to separate you from your money, sold by charlatans (aka snake-oil vendors).

Toroidal transformers have a very high 'inrush' current at power-on, and slow-blow fuses are essential to prevent nuisance blowing. In the case of any toroid of 500VA or more, a soft-start circuit is very useful to ensure that the initial currents are limited to a safe value. An example of such a circuit is presented in Project 39, and represents excellent insurance against surge damage to rectifiers and capacitors.

Calculating the correct value for a mains fuse is not easy, since there are many variables, but a few basic rules may help. Firstly, check the manufacturer's data sheet or website. Often they will have recommended fuse ratings and types to suit their transformers. If manufacturer data is unavailable, determine the maximum operating current, based on the transformer's VA rating. The calculations done previously will help.

The full load mains current is determined by the VA rating of the transformer, calculated by ...
Imains = VA / Vmains ... where VA is the transformer rating, Imains is the mains current and Vmains is the mains voltage
A 160VA, 230V transformer will draw a full load current of 695mA, but you'd normally use a 1A fuse, which must be slow-blow if the transformer is toroidal. In many cases you'll have to compromise, and use a fuse that's rated for more current than the transformer is designed for. If there's a major fault (for example a failed bridge rectifier or shorted transistors in the power amp), the fuse will protect the transformer from a prolonged overload. The mains fuse will not protect the amplifier or your speakers, and this must be done with additional circuitry (e.g. Project 33) and amplifier fuses. I recommend the use of a soft-start circuit for any transformer above 300VA.
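The fuse-sizing formula in runnable form (the function name is mine; the values reproduce the 160VA example, with the article rounding to 695mA):

```python
def full_load_mains_current_A(va, vmains=230.0):
    """Full-load primary current from the transformer's VA rating."""
    return va / vmains

# 160VA at 230V: just under 0.7A, so a 1A fuse (slow-blow for a toroid)
i = full_load_mains_current_A(160)
print(f"{i * 1000:.0f}mA")   # 696mA
```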
Thermal protection (often by way of a once-only thermal fuse) is included in some transformers. Generally (but not always) this is limited to small transformers that have a fine gauge primary winding, and these may only draw around twice their normal primary current when the output is shorted! A normal fuse can withstand that small overload for more than long enough to enable a complete melt-down! If a 'one-time' thermal fuse is fitted and the transformer overheats, the transformer must be discarded, since the fuse is buried inside the windings and cannot be replaced. Transformers used with preamps may have this issue, but it's never seen with big transformers rated at 100VA or more.

You must ensure that the transformer is properly protected at the outset. Feel free to add your own thermal fuse, but make sure it is in good thermal contact with the windings, is well away from any airflow (intended or otherwise) and that the wiring to it is safe under all possible conditions. This isn't trivial, but it does add an extra level of protection - but only if done properly.

Multi-tapped primaries (e.g. 120, 220, 240V) create additional problems with fusing, and often a compromise value will be used. The transformer protection is then not as good as it could be, but will generally still provide protection against shorted diodes or filter caps. Ideally, there will be different fuse ratings for 120 or 230V operation, and the correct fuse should always be used.

Additionally, it can be an advantage to fit Metal Oxide Varistors (MOVs) to the mains - between the active and neutral leads. These will absorb any spikes on the mains, and may help to prevent clicks and pops coming through the amplifier. MOV specifications can be daunting though, and it will often help if you ask the supplier for assistance to pick the right one for your application. They can usually only withstand a limited number of over-voltage 'events' before they fail completely, and the normal failure mode is for them to explode (and no, I'm not joking).
Note that a primary fuse or circuit breaker does not protect the amplifier against overload or shorted speaker leads. If this happens, or should the amplifier fail, the primary fuse offers no protection against further amplifier or speaker damage and possibly fire. For this reason, secondary DC fuses should always be used - no exceptions. Many people also like to include DC protection, such as Project 33. Many commercial and kit versions fail to show the correct relay contact wiring, and they may be next to useless if the voltage exceeds 30V.
Inrush current is defined as the initial current drawn when the power is first applied. With transformer based power supplies, there are two separate components - transformer inrush and capacitor charging current. They are very much interdependent, but the maximum current at power-on cannot exceed a value determined by the transformer's primary resistance. The optimum part of the waveform to apply power for a transformer is at the peak of the AC voltage - 325V for 230V mains. See Transformers, Part 2 for more info.
To minimise capacitor inrush, power should be applied at the mains zero crossing, where the instantaneous voltage (and therefore the initial charging current) is at its lowest.
These two are completely at odds with each other, but the exact moment when power is actually applied is effectively random. In addition, there is the effect of the (discharged) capacitor applying an instantaneous heavy overload to the transformer at power-on. This will tend to reduce the transformer's flux density, but the cap(s) will behave as a momentary short-circuit (via the diode bridge), so the only way to know what really happens is to run tests. This level of testing is not trivial and requires specialised test equipment, but fortunately is not really necessary.

With transformers of 300VA or less, you usually don't need to do anything. If the correct rating and type of fuse is used, the inrush current will be high but well within 'normal' range. The worst case inrush current can be no more than around 50A (at 230V for a 300VA transformer), because it's limited by the primary resistance and mains impedance. Duration is typically less than one AC cycle. Larger transformers create higher inrush current because the primary resistance is lower. The capacitors have to charge, and as noted above (see Table 6) the capacitor inrush duration is much less than 500ms, even with extremely large capacitors.

The easiest way to limit the inrush is to use a soft start circuit such as Project 39. Using NTC thermistors alone is a very poor choice, because most amplifiers don't draw enough current at idle to keep the thermistor(s) hot enough to obtain a low series resistance. The thermistor resistance will be constantly cycling when the amp is driven with a signal, and there is little protection if the amp is (accidentally or otherwise) switched off and back on again quickly. The constant cycling will eventually cause the thermistor(s) to fail, often explosively!

A soft start circuit protects the fuse from very high surge currents, limits the capacitor charging current, and makes the power-on cycle much more friendly to the equipment and the incoming mains. The resistors (or thermistors) should be selected so that the maximum peak current is between 2 and 5 times the normal full power operating current. For example, if an amplifier is expected to draw 2A at maximum power, the soft start should limit the worst case peak current to somewhere between 4 and 10 amps. For 230V mains, the resistance will be between 23 and 58 ohms. The standard values I suggest for Project 39 are around 50 ohms for 230V (or 22 ohms for 120V), and these have proven to be effective and reliable for many hundreds of constructors.
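The soft-start resistor selection described above reduces to dividing the mains voltage by the allowable peak current. A sketch (my own function, reproducing the 2A amplifier example):

```python
def soft_start_resistance(mains_V, full_power_A, limit_factor):
    """Series resistance so the worst-case peak inrush is limited to
    `limit_factor` times the normal full-power mains current (2-5 suggested)."""
    return mains_V / (full_power_A * limit_factor)

# 2A amplifier on 230V mains, limited to between 5x (10A) and 2x (4A):
print(round(soft_start_resistance(230, 2, 5)))   # 23 ohms (10A limit)
print(round(soft_start_resistance(230, 2, 2)))   # 58 ohms (4A limit)
```

The ~50 ohm standard value suggested for Project 39 sits comfortably inside this range.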
Provision of a soft start is also needed for most switchmode power supplies. Unlike a linear supply, there is no transformer primary winding resistance to limit the current, and the low ESR of the capacitors can cause exceptionally high inrush. I've measured the inrush of a fairly modest SMPS (150W) at 80A peak, and even a small 20W SMPS can cause 10A or more peak inrush current. Many of the latest generation of switchmode supplies use an active soft start circuit because the inrush current often causes circuit breakers to trip if several supplies are turned on at the same time. A modest 150µF/ 400V electrolytic capacitor will have a typical ESR of no more than 2 ohms, so if not limited, inrush current can be 150A or more - at least in theory.

In practice, there are several additional impedances that help mitigate the inrush current. Mains wiring (including plugs and sockets), diodes, fuses and internal wiring all contribute some resistance, and that keeps the inrush current below 100A in most cases. To ensure that inrush never causes a problem, a soft-start circuit is by far the best solution.

For any 'home build', always use a 3-wire mains lead. Double insulation is somewhere between difficult and impossible to achieve for a DIY constructor, and to qualify for the 'double square' double insulated rating, accredited laboratory 'type testing' is usually a requirement, not an option. Very, very few toroidal transformers will qualify, as most use basic insulation between the primary and secondary. Despite what you might think, 'basic insulation' is a regulatory term, meaning that the insulation is sufficient to ensure safety under all normal conditions, provided there is an earth/ ground connection for the equipment to provide a secondary level of safety. Note that the exact terminology for the two insulating layers depends on where you live (and the regulatory bodies thereof).

Note too that double insulated appliances (by regulation) shall not be earthed! This makes double insulation on many commercial products irrelevant (and potentially dangerous), because it's very rare that all parts of a hi-fi system (in particular) are double insulated. This produces a quandary, which is cheerfully ignored by the vast majority of people who own a hi-fi system.

Double insulation can produce problems with hi-fi (and other audio) gear as well. Consider the following drawing which shows the issue. 'Typical' transformers (I checked five different units) always have some stray capacitance between the primary and secondary. For those I tested, this ranges from about 300pF to 600pF, but there will be differences. More than 1nF in total is unlikely other than for very large transformers. If the secondary is floating (not earthed), you'll measure a voltage from 'Common' to earth (and 'Common' will not be earthed in Class II gear).
Figure 11.1 - Voltage With Floating Secondary
The neutral connection is referred to earth/ ground in nearly all installations (Note 1). The stray capacitance creates a voltage divider, so an un-earthed 'Common' will be at around 110-115V RMS with 230V mains, or 58-60V RMS with 120V, 60Hz. While the available current is low (you won't feel it), it can have enough energy to kill sensitive input stages such as JFETs or FET-input opamps. This is something I've tested and verified; double-insulated products have killed other equipment before, and will continue to do so. Switchmode power supplies are much more likely to cause damage than mains-frequency transformers.
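To put a number on 'low current', the worst case can be bounded by treating the full mains voltage as appearing across the stray capacitance (I = V × 2πfC). The 500pF value is from the measured range quoted above; the calculation is my own illustration, not from the article:

```python
import math

def stray_current_uA(vmains_rms, c_pF, freq_Hz=50.0):
    """Upper bound on leakage current through primary-secondary stray
    capacitance: I = V * 2*pi*f*C, returned in microamps."""
    return vmains_rms * 2 * math.pi * freq_Hz * c_pF * 1e-12 * 1e6

# 500pF at 230V/50Hz: a few tens of microamps - imperceptible to touch,
# but the capacitively-coupled voltage can still destroy JFET input stages
print(round(stray_current_uA(230, 500)))   # 36
```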
Despite any misgivings you may have, the neutral is always connected to protective earth/ ground somewhere. It's this connection that creates the neutral, and it can be at the pole transformer and/ or at each customer premises (in Australia and New Zealand it's called 'MEN' - multiple earth neutral, and is mandated by AS/NZS3000:2018). Without a dedicated neutral, the safety of mains distribution is seriously compromised. However, it's important to note that electrical safety standards worldwide dictate that both active/ live and neutral are to be considered to be at 'hazardous voltage', regardless of the voltage you measure.

To achieve double insulated standards, all mains wiring (including the transformer primary) requires two separate layers of insulation - basic and supplementary. This includes all internal mains wiring, including the mains switch and fuse. Some E-I transformers are rated for double insulation (with the primary and secondary on separate sections of the bobbin), but the double insulated rating applies to the 'appliance', not the individual parts that are used. It's easy for an inexperienced (or experienced) constructor to use double insulated parts, but fail to achieve results that ensure that the entire unit meets the relevant standards for double insulation. My advice is that you don't even attempt it!
Insulation types are as follows ...

Functional - Insulation between conductive parts which is necessary only for the proper functioning of the equipment.
Basic - Insulation applied to live parts (e.g. the plastic insulated connectors that hold the active and neutral wires in place) to provide basic protection against electric shock.
Supplementary - An independent insulation, in addition to basic insulation, to ensure protection against electric shock in the event of failure of the basic insulation.
Double - Insulation comprising both basic and supplementary insulation.
Reinforced - A single insulation system applied to live parts, which provides a degree of protection against electric shock equivalent to double insulation.
A brief rundown of some of the equipment classes and applicable standards follows. These are important to understand, as misapplication can result in equipment that is unsafe, with the risk of electric shock, fire or both. The standards applied vary by country, but most use the following definitions and requirements ...
Class 0 - Electric shock protection afforded by basic insulation only. No longer allowed for new equipment.
Class I - Achieves electric shock protection using basic insulation and protective earth grounding.
Class II - Provides protection using double or reinforced insulation, and hence no ground is required (in fact, grounding is prohibited, Note 1).
Class III - Operates from a SELV (Separated Extra-Low Voltage) supply circuit, which means it inherently protects against electric shock (no hazardous voltages are to be generated within the equipment).
Class 0 uses only basic insulation with no additional protection, and is no longer permitted in most (if not all) countries. 'Legacy'/ vintage gear is commonly Class 0 (especially that of US origin). It's unsafe, and should be upgraded to Class 1 without delay. Class 1 requires that all conductive parts that could assume a hazardous voltage in the event of basic insulation failure are to be connected to the protective earth conductor. The vast majority of 'home builds' will be Class 1 as there are really no other options.
This means that your power supply and associated electronics will be in a suitable chassis (usually of metal, but it may be plastic), and all parts that may become 'live' in the event of a functional or basic insulation failure are securely bonded to the protective earth with a 3-wire mains lead.
+ + +When you build test equipment, that falls into a different category in many cases. Equipment must still be safe, but the rules are often relaxed (a little) because much test gear is used by professionals who already know and understand the potential risks. There are exceptions, and what some people refer to as 'nanny state' regulations mean that even professionals have to be 'protected' from hazardous conditions, despite that fact that working on equipment is inherently hazardous.
In some cases, you will be able to get away with not grounding the internal circuitry, although if the equipment has a metal case, that should be connected to protective earth. This isn't 100% 'safe' of course, but it means that a bench power supply (for example) won't have one terminal joined to protective earth, because that limits the way it can be used in a normal workshop environment. Notably, oscilloscopes are invariably grounded, and this has led to the extremely dangerous practice of cutting off the earth pin, or using an isolation transformer for the scope. This means that the 'ground' lead can easily end up at a dangerous potential. The solution is simple - connect the equipment being tested through an isolation transformer, but only when it's essential to do so.
There's a persistent myth that using an isolation transformer for everything is a good idea. It's not, never was and never will be. Equipment (mains) faults can be missed, and the user is lulled into a false sense of security. The workbench safety switch will not operate if you get an electric shock, and it may be the last one you ever get!
If you're building test equipment, you must apply common sense, and be extra careful that you don't create something that may try to kill you. All mains powered gear has a (usually small) risk of live mains wiring coming into contact with the internal circuitry, so use only mains-rated cable, keep it well separated from all internal electronics, and make sure that switches and fuses are rated for the full mains voltage and will not become 'live' even if the internal mechanisms should choose to spontaneously disintegrate.
EMI/ RFI (electro-magnetic interference/ radio frequency interference) is not usually a problem with a linear power supply, and most will pass the regulations in all countries without any filtering. However, it's quite common to use at least some kind of filter, which in many cases will be nothing more than a capacitor. There are three possible approaches, with none being significantly better or worse than another. There can be a large cost difference though, and it's up to the constructor to decide which approach is used.
The first method is to use a mains rated (Class X2 or X3) capacitor in parallel with the transformer's primary, after the mains switch. It's important that no standard (DC) capacitor is used, regardless of voltage rating - it must be an X2/ X3 mains cap. Class-X caps are specifically designed for use across the mains, and are usually (but not invariably) polypropylene. A common voltage rating is 275V AC, which is ideal. A capacitance of around 100nF to 470nF is generally suitable. In the US, it used to be common to see 600V DC caps used, and while these might survive with 120V, they will fail with 230V AC mains. A Class-X capacitor may also fail, but will eventually become open circuit (DC caps can [and do] fail short-circuit!).
The second is to use a capacitor across each winding of the transformer's secondaries. Again, I suggest that you use X3 caps, especially for secondaries of more than 50V AC. The task gets harder for valve amps, because the secondary voltage is usually in the range of 300V to 600V AC, so a series string of caps will almost certainly be needed. When a series string is used, it's a good idea to include resistors in parallel with each cap to ensure the voltage across each is equal. Be careful with the resistors - it will often be necessary to use several in series so the voltage across each is limited to a safe value. Using resistors with a high voltage across them will almost always lead to eventual failure! Many people (and especially hobbyists) remain unaware that resistors have a maximum voltage rating. 1W carbon film resistors are usually a good choice here, but keep dissipation below 0.5 watt.
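The voltage sharing and dissipation figures are easy to check. The following sketch assumes a 600V AC secondary (the worst case mentioned above) with four capacitors in series and an illustrative 220k balancing resistor across each; the values are not a design recommendation.

```python
# Sketch: voltage sharing and worst-case dissipation for balancing resistors
# across a series capacitor string. All component values are illustrative.
import math

v_secondary_rms = 600.0        # AC secondary voltage (worst case from the text)
n_caps = 4                     # capacitors (and parallel resistors) in series
r_each = 220e3                 # assumed balancing resistor value, ohms

v_peak = v_secondary_rms * math.sqrt(2)      # ~849 V peak across the string
v_per_resistor = v_peak / n_caps             # ideal equal sharing, ~212 V
p_per_resistor = v_per_resistor**2 / r_each  # worst-case (peak) dissipation

print(f"Peak voltage: {v_peak:.0f} V")
print(f"Per-resistor voltage (peak): {v_per_resistor:.0f} V")
print(f"Per-resistor dissipation (worst case): {p_per_resistor * 1000:.0f} mW")
```

With these values each 1W resistor sees around 212V peak and dissipates roughly 200mW worst case - comfortably below the 0.5 watt limit suggested above.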
The third method is common when people decide that fast diodes sound 'better', and they add a cap in parallel with each diode to slow it down again. The same can be done with standard diodes. This isn't a method I've recommended, but it will be similar to using a single cap (or two caps for a dual winding) across the transformer secondary winding(s). However, I have tested fast diodes, and was able to measure a small reduction of RF interference compared to 'standard' diodes. This translates to lower conducted (and possibly radiated) emissions, but it does not affect the DC at all. Fast diodes almost always require a heatsink, and they must be individually isolated with mica, Kapton (both with heatsink compound) or thermal pads.
Class-Y ('intrinsically safe') capacitors are used in switchmode supplies, but I absolutely do not recommend that they be used in any DIY power supply. They are always low value (typically less than 10nF), and are usually connected between the primary and secondary for Class-II (double-insulated) switchmode supplies. The risk to any connected equipment (particularly sensitive input stages) is high, and if improperly connected they are potentially lethal. They are not needed with Class-I (earthed chassis) linear power supplies, and are likely to do far more harm than good.
None of the above will make much (if any) difference to the mains harmonics generated within the audio band, but they can help reduce radio frequency noise by a few dB. The test used to determine whether there's a benefit or not is 'conducted emissions' - noise and/or interference that's passed back into the mains wiring through the mains lead itself. In most cases, it's highly unlikely that you will hear any difference, unless the added cap manages to reduce audible noise (improbable in a well laid out system).
For more info on the topic, see the article Power Supply Snubber Circuits. While not essential (and it doesn't affect the sound), adding snubbers to the transformer secondary (or secondaries) can reduce EMI to a worthwhile degree. While EMI is rarely bad enough to prevent any transformer supply from passing conducted emissions tests, a snubber can provide an extra 'safety margin'. Far worse is the level of harmonic distortion (and poor power factor) caused by the very non-linear waveform. Note that the power factor is not due to inductance, but waveform distortion (see Power Factor - The Reality (Or What Is Power Factor And Why Is It Important)).
Linear supplies are thought to be inefficient, but that is simply untrue. I tested two transformers (300VA toroidal and 212VA E-I types). The two test circuits used a pair of 6,800µF caps in a full-wave dual supply (nominally ±35V [toroidal] and ±40V [E-I]), and I tested with 180Ω, 60Ω and 16Ω loads. It will come as no surprise that the toroidal transformer had higher efficiency, but both were far better than 'common wisdom' would have you believe. Try as I might, I was unable to find any published figures on-line for the efficiency of a simple transformer, bridge and capacitor power supply, so this is likely to be the first time you've seen these measurements.
Any time you try to find this info, you'll get countless pages talking about SMPS, but little or nothing for 'conventional' power transformer based supplies (sometimes referred to as 'heavy iron'). It's worth noting that it's actually quite difficult to get the results by simulation. Because iron losses aren't taken into account, the result will generally be optimistic. It's also rather a chore to get every parameter exactly right, and that's why I took measurements. Note that this high efficiency only applies for an unregulated supply. If the output is regulated with a linear regulator, overall efficiency falls considerably, because the regulator must dissipate the difference between the raw supply voltage and the regulated output as heat.
300VA Toroidal

Load      DC Volts   Power Out   Power In   VA        PF     Efficiency
No Load   69.5 V     0           2.1 W      3.22      0.65   0%
180 Ω     67.8 V     25.5 W      29.6 W     34.04     0.87   86.1%
60 Ω      66.0 V     72.6 W      79.6 W     93.84     0.85   92.5%
16 Ω      59.2 V     219.0 W     235.8 W    326.6 *   0.67   92.9%

212VA E-I

Load      DC Volts   Power Out   Power In   VA        PF     Efficiency
No Load   82.2 V     0           11.5 W     16.5      0.70   0%
180 Ω     78.4 V     34.1 W      42.4 W     50.6      0.84   80.4%
60 Ω      74.9 V     93.5 W      113.8 W    126.7     0.90   82.2%
16 Ω      62.8 V     246.5 W     330.0 W    365.0 *   0.90   74.6%

Table 13.1 - Measured Efficiency, 300VA Toroidal And 212VA E-I Supplies
The two measurements shown with * in the VA column represent transformer overload. With the lowest load resistance used, the E-I transformer was seriously overloaded, drawing 330W and just shy of 366VA (it's a 212VA transformer). It still managed to get almost 75% efficiency, despite the overload. I'd estimate that at rated VA it would reach about 85% efficiency. Note that the efficiency is for the system as a whole, the transformer, bridge rectifier and filter caps. The transformer alone will be a little better, as there are no diode losses (in particular).
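The table columns follow directly from the measured quantities: output power is V²/R into the load resistor, efficiency is output power over input power, and power factor is watts over volt-amperes. A quick check against the 180Ω toroidal row:

```python
# Derive the table's calculated columns from the measured values:
# P_out = V_dc^2 / R_load, efficiency = P_out / P_in, PF = watts / volt-amperes.
def supply_figures(v_dc, r_load, p_in, va):
    p_out = v_dc**2 / r_load
    return p_out, p_out / p_in, p_in / va   # (P_out, efficiency, power factor)

# 180 ohm load on the 300VA toroidal supply (values from Table 13.1)
p_out, eff, pf = supply_figures(v_dc=67.8, r_load=180.0, p_in=29.6, va=34.04)
print(f"P_out = {p_out:.1f} W, efficiency = {eff:.1%}, PF = {pf:.2f}")
```

The results (25.5W out, ~86% efficiency, 0.87 power factor) agree with the tabulated row to within rounding.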
The toroidal transformer is as good as you're likely to get. Most of the time it will be operating at fairly low average current, and the efficiency will be around 80%. All power supplies have an efficiency of zero with no load, but with a DC output of as little as 2-3W the toroidal tranny will reach 50%. The input power from the mains will be no more than 5-6W. The power factor of the E-I transformer is very high with a heavy load because resistive losses in the transformer become dominant. This column was included only for interest's sake, as there's nothing you can do about it with a standard power supply circuit. You pay only for watts consumed, not VA, so the power factor is largely irrelevant (and you know that it's the result of the non-linear waveform, and not inductance).
The efficiency of any transformer is usually (but not always) determined by its size. Large transformers are generally more efficient than small ones, but 'blanket' generalisations are unwise. In some cases, the no-load power is lower for large transformers too, but a great deal depends on the way it's made and its end purpose. All measurements in Table 13.2 were taken at a line voltage of 230V, adjusted as needed with a Variac.
          80VA Toroidal   12VA AC Plug-Pack #1   12VA AC Plug-Pack #2   4VA PCB-Mount Toroidal
Power     0.48 W          2.1 W                  2.3 W                  0.62 W
Current   4.2 mA          39 mA                  17 mA                  20 mA
VA        0.96 VA         8.97 VA                3.91 VA                4.6 VA

Table 13.2 - No-Load Power, Current And VA For Four Small Transformers
It's obvious from the table that the 80VA toroidal transformer has lower no-load dissipation than any of the others. The two plug-pack transformers use an E-I transformer internally, but despite having the same VA rating (12VA), #2 consumes more power at idle than #1. The 4VA PCB-mount toroidal beats them both, but it's still worse than the 80VA toroidal, something that is common but likely unexpected.
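The VA row in Table 13.2 is simply line voltage times measured idle current (230V mains, as stated for the other measurements). For instance, for plug-pack #1:

```python
# No-load VA and power factor for the 12VA AC plug-pack #1 in Table 13.2.
# VA = line voltage x idle current; PF = watts / VA (PF is not in the table).
line_v = 230.0
idle_i = 39e-3                 # measured idle current, amps
va = line_v * idle_i           # 8.97 VA, matching the table
pf_no_load = 2.1 / va          # using the measured 2.1 W idle power
print(f"VA = {va:.2f}, no-load PF = {pf_no_load:.2f}")
```

The very low no-load power factor (well under 0.3) is typical of a lightly loaded E-I transformer, where magnetising current dominates.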
The no-load power is nothing to be too concerned about, but it's still power that you have to pay for. A single unit is unlikely to be considered 'excessive' by most consumers, but there are minimum energy performance standards (MEPS) that apply in many countries, and that sounded the death knell for linear DC supplies world-wide (they are all switchmode now). In Australia, the same legislation almost caused the demise of AC plug-packs as well (see The Humble Wall Transformer is the Latest Target for Legislators). While it's perfectly reasonable to reduce standby power dissipation in products, it's not reasonable to try to ban something for which there is no replacement. AC plug-packs cannot be economically replaced with an electronic equivalent.
I'm all for maximising efficiency wherever possible, but it should never be at the expense of safety. Longevity is also important, as the waste generated when any electronic product fails is a serious problem worldwide. I always ensure that any failed electronic product is taken to my local recycling centre, and not just tossed in the bin where it ends up as landfill. However, if a product can be repaired that's even better. Unfortunately, very few products made today are designed to be repaired, and service data (service manuals, schematics, etc.) are mostly unavailable. There are exceptions, but these are often only for 'boutique' products rather than general household items.
Many new products in the audio arena use switchmode supplies, which are smaller and lighter than 'linear' supplies. These are available in a variety of formats, such as 'frame' types (a complete chassis assembly) or OEM (original equipment manufacturer) PCBs, which generally have no enclosure. These range from single-ended supplies (single output) to dual supplies, with voltage and current ratings from 12V up to ±100V, at anything from 5A or so to 20A or more. High power types (greater than 300W output) should employ active PFC (power factor correction) to minimise the mains waveform distortion.
Manufacturers often tout the 'superior' efficiency of an SMPS, implying that the efficiency of a linear supply is much lower. While this is certainly true for linear regulators, it's not the case for a transformer, bridge and filter cap(s). These supplies can be very efficient, especially when a toroidal transformer is used. There are losses as described above (all circuits have losses), but a well designed linear supply can exceed 85% efficiency quite easily at full load, and it can be even higher. At light loading (20W output), the 300VA toroidal transformer in Table 13.1 will get to 95% efficiency, a figure that few SMPS can even dream of. Even when loaded to an output power of over 150W, the efficiency is still around 90%. This is not something you'll see stated elsewhere, as everyone seems to think that an SMPS is the most efficient type of power supply. This is true only when it's regulated!
Predictably, SMPS will not be covered in detail here, as switchmode supplies are generally not suited to hobbyist construction. The circuitry isn't especially difficult, but the transformer is very specialised, and has to be made differently depending on the circuit topology. Low-power supplies generally use flyback circuits, which are suitable for output power up to around 150W as a maximum. More advanced circuits include forward converters, half-bridge or full-bridge, the latter two with or without resonant 'LLC' (inductor, inductor, capacitor) circuitry. The resonant LLC topology generally uses 'soft' switching, meaning that the high voltage DC input is switched at or near the zero-voltage point of the switching waveform.
Figure 14.1 - LLC Resonant Half Bridge SMPS (Concept Only)
The drawing is adapted from an ST application note [ 3 ] and shows the essential 'ingredients' of a resonant SMPS (the reference is a 64 page document, which should give you an idea of the degree of complexity for a supposedly 'simple' topology). The DC input is derived from directly rectified and smoothed mains (via the necessary protection and RF interference suppression circuits). For 120V operation, a mains voltage doubler is common, and more powerful supplies (over 150W or so) will often use an active PFC circuit with a 400V DC output. The tank circuit consists of CR, LR and the primary inductance of the transformer (LP). Ideally, the inductive parts of the system will both use the same magnetic circuit, and this can be achieved (for example) by deliberately ensuring that the transformer has higher than normal leakage inductance.
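The series resonance set by CR and LR follows the familiar relationship f = 1 / (2π√(L·C)). The component values below are purely illustrative (they are not taken from the ST application note), but show the order of magnitude involved:

```python
# Sketch: series resonant frequency of the LLC tank, f = 1 / (2*pi*sqrt(LR*CR)).
# Component values are illustrative only, not from the referenced design.
import math

l_r = 100e-6    # resonant inductance LR, henries (assumed)
c_r = 22e-9     # resonant capacitance CR, farads (assumed)

f_res = 1 / (2 * math.pi * math.sqrt(l_r * c_r))   # ~107 kHz with these values
print(f"Series resonant frequency: {f_res / 1000:.0f} kHz")
```

In a real LLC converter there is a second (lower) resonance involving the transformer's primary inductance LP as well, which is part of what makes the design of these supplies non-trivial.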
The supply shown above doesn't use feedback, so it operates 'open-loop'. Ripple on the DC input will be transferred to the output, and just the design of the input circuitry (EMI filters, soft-start circuit, rectifier and filter bank) is a complex process. There are countless products sold that use an unregulated SMPS, and it's not necessarily a problem for powering amplifiers, which don't have regulated supplies anyway. The standard 'linear' supply design process described here doesn't include regulation, and it's never been a problem.
This section is only short, as I'm not about to include even an abridged design guide for SMPS. There is a great deal of information available on-line, but not all of it is useful, and some is almost guaranteed to produce a result that explodes when turned on for the first time. SMPS design is a very complex area, and as noted above, the transformer is the most critical component of all. If you wish to know more, be prepared for a great deal of research, many failed experiments, and you'll often still end up with a circuit that works but is far from optimum.
There are some very good resources on-line, and if you find the idea of making a switchmode supply appealing, make sure that you read as much as you can, getting information from IC manufacturers' datasheets, reputable publications and other trustworthy material. Videos and forum posts are generally the worst places to get your design ideas, as they are never peer reviewed and some are just nonsense.
There are three fairly major points that need to be made regarding the use of an SMPS in a project. A conventional power supply has only a few parts, the transformer, bridge rectifier and filter capacitors. Failures are rare, and are usually easily fixed. The most common failure is the electrolytic capacitors, but I have amps (with power supplies) that are 50 years old that still perform the same as when they were built. A switchmode supply has a great many parts, and all of them are required to be operational for it to work. The failure of one part out of 100 or more means that the supply doesn't work any more, and in some cases that one part will cause a cascade of additional failures. Should a switching MOSFET die, it will often kill the controller IC as well, along with a few other support components. MOSFETs are easily substituted, but controller ICs may be obscured (part number removed) or no longer available. The power supply is then scrap. You can salvage parts from it, but their usefulness may be questionable.
Secondly, all SMPS must be designed for the maximum expected output current. An amplifier that can deliver 100W into 8Ω (200W into 4Ω) has to be able to supply 10A peaks. A mains transformer-based supply will have to deliver an average current of perhaps 1-2A with programme material, and 10A peaks when required. The 'equivalent' SMPS has to be able to deliver the maximum peak current (10A) without current-limiting, or the amplifier will clip prematurely. Many SMPS will be very unhappy if you add 10mF (10,000µF) caps at its outputs, as that will almost certainly trip the overcurrent protection. A linear supply will be quite alright with a 250VA transformer, but you need an 800W SMPS to do the same job!
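The 10A figure quoted above comes straight from the sinewave relationships: for power P into load R, the peak voltage is √(2·P·R) and the peak current is that divided by R.

```python
# Peak output current for a sinewave delivering p_watts into r_ohms:
# V_peak = sqrt(2 * P * R), I_peak = V_peak / R
import math

def peak_current(p_watts, r_ohms):
    v_peak = math.sqrt(2 * p_watts * r_ohms)
    return v_peak / r_ohms

# 100 W into 8 ohms needs 5 A peaks; 200 W into 4 ohms needs 10 A peaks
print(f"{peak_current(100, 8):.0f} A")   # 5 A
print(f"{peak_current(200, 4):.0f} A")   # 10 A
```

This is why the supply must be sized for peak (not average) current when an SMPS is used - it has no short-term overload capability.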
Finally, an SMPS will (should) include overvoltage sensing and shutdown. This is necessary because some faults (such as a fault in the feedback network) can cause the voltage to climb to the maximum possible. The result may be a great deal of 'downstream' damage (the powered circuitry). Most switchmode controller ICs also include undervoltage protection to ensure that they don't draw excessive current with a lower than 'normal' supply voltage. The sensitivity of the overvoltage sensing circuit depends on the load, as some are more likely to be damaged than others. This requirement adds more parts to an already complex circuit. Note that the following circuit does not include overvoltage protection, so it would have to be added externally.
Figure 14.2 - Simple Dual Output SMPS [ 4 ]
The drawing shows a simple flyback SMPS, rated for a total output of about 24W (the two outputs cannot be loaded to full current simultaneously). Consider that this really is a simple circuit, yet it has far more parts than an equivalent low-power linear supply (including 3-terminal regulators). The failure of any one component will either stop the circuit from working or it will become unstable. Because all of the 'interesting' parts are at mains voltage, it's hard to work on should it develop a fault, and one has to wonder how long the IC used will remain in production. Even if it is still an 'active' (not obsolete) part, that doesn't mean that it will still be available in 10 years from now. Once you can't get the IC, the supply has to be scrapped if the IC fails.
It doesn't matter if a power supply adds 10kg to the weight of something that you don't need to carry around, so the 'low weight' argument may be a moot point. Despite some claims to the contrary, a conventional transformer based supply is usually fairly efficient (energy in vs. energy out), so there's not much to be gained there either. Unless the transformer fails and it's a custom design (a remarkably rare event), there's nothing that can't be replaced by an equivalent component, whether it fails in 10 years or 50. No SMPS can match that, and the circuit complexity is often such that repairs are somewhere between difficult and impossible. The chances of any SMPS lasting for 50 years is remote - with so many parts something will go wrong, often sooner rather than later.
One of the biggest issues is that SMPS are very compact, so everything is close to something else, including parts that run hot. Heat is the natural enemy of semiconductors and electrolytic capacitors, and when everything is close together there will be heat transfer between parts. Once any SMPS is rated for more than ~25W, there will be a heatsink. In some cases this will be 'hot' (i.e. at mains voltage); otherwise, thermal pads will be used to provide thermal conductivity and electrical insulation. Some SMPS are designed to be chassis mounted to get better heatsinking, but this only works if the chassis is aluminium. A steel chassis is useless as a heatsink due to poor thermal conductivity.
Figure 14.4 - Example Of A Very Compact SMPS
The above gives an idea of what 'compact' really means. The PCB is 150mm x 52mm x 25mm high, and it's packed with parts on both sides (all SMD parts are on the underside of the board). The supply is 24V @ 2.5A (60W), and has active PFC (power factor correction). This does increase the number of parts needed, but it decreases the mains input current substantially. It's designed specifically for lighting. The heatsinks are clearly visible, and it's designed to be mounted inside an aluminium enclosure to provide additional heat dissipation. If this supply fails, there's little chance that anyone would try to fix it (no schematic is available). Even the number of parts that can be salvaged is limited. Like nearly all switchmode supplies, it is intolerant of even a momentary overload, and it shuts down until the load returns to normal.
Idle current for the supply is 23mA and it dissipates 370mW. With a 1.5A load (36W output), the input power is 42.2W, an efficiency of 85.3%. The input current measured 200mA, so it draws 46VA and has a power factor of 0.97 (1.0 is 'ideal', but only achieved with a resistive load). Compare these figures to the values shown for the toroidal transformer in Table 13.1. The most significant difference is that a 'conventional' transformer can easily deliver twice its rated output to handle peaks, but an SMPS can handle only the maximum current it's designed for. There is no short-term peak current capability for the vast majority of SMPSs.
For another example (and the supply is still relatively low power, at 145W max.), <click here> to see a fairly comprehensive forward-converter SMPS intended for a small PC. The supply is made by DELL, and is a fairly good example, without being too complex (I expect that many will disagree on this point). The circuit has been completely re-drawn, as the original was not particularly clear and the file was a great deal larger. In case you don't know the terms, 'PE' means protective earth (ground) and 'POK' means power ok (aka 'power good').
A transformer-based supply will lose a few volts of output with a 2:1 overload (5A), but can handle 'normal' transient overloads without any problems. There's also no limit to the size of the filter capacitor(s), so you could use 10,000µF or 100,000µF caps with no adverse effects. It will be larger overall, but the total cost would be similar. Of course you can get a cheap SMPS too, but don't expect it to last very long. The SMPS shown will tolerate a 10,000µF cap at the output, but it will still shut down if you try to draw more than 2.5A - even momentarily.
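One benefit of large filter caps is reduced ripple. A standard rule-of-thumb (not from this article, but widely used for full-wave rectifiers) estimates peak-to-peak ripple as ΔV ≈ I / (2·f·C), with 50Hz mains assumed here:

```python
# Rule-of-thumb peak-to-peak ripple estimate for a full-wave rectified supply:
# delta_V ~= I_load / (2 * f_mains * C). 50 Hz mains assumed.
def ripple_pp(i_load_a, c_farads, f_mains=50.0):
    return i_load_a / (2 * f_mains * c_farads)

# 5 A continuous load on a 10,000 uF filter cap
print(f"Ripple: {ripple_pp(5.0, 10_000e-6):.1f} V p-p")
```

At 5A continuous this gives about 5V peak-to-peak; doubling the capacitance halves the ripple (at the expense of higher peak rectifier current).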
Of course, not everyone wants to keep the same equipment for most of their life, but if it has to be scrapped in 5-10 years because a $2.00 IC has failed, that should be cause for some concern. Even if you don't care about the replacement cost, the waste of resources is still something that should be factored into the equation. Most switchmode supplies are not designed to be repaired, something that should be apparent if you try to get hold of the circuit diagrams for commercial products. Most are never released - not even to 'authorised repairers', and the 'repair' process is a replacement (until there is no replacement available). For some reason, many people never consider this piece of 'reality'.
One thing that's immediately apparent is that most SMPS are compact. While convenient, this ensures that parts that are affected by heat (notably electrolytic capacitors) will almost always be located right next to things that generate heat, such as MOSFETs, power resistors and diodes. In an ultra-compact design, this invariably means that longevity is compromised. The saving grace is that for audio, continuous power is fairly low, so continuous high temperatures are unlikely. However, a thermal switch operating a fan is always a good idea, although that means the circuit will gather dust over time.
If you do a web search for switchmode power supplies you'll almost certainly see lots of links to forum posts from people seeking help. In many cases the PCB will have one or more parts burnt out, with no indication as to what they once were. In a few posts you'll see that the person got the supply working again, but in many cases the only fix is a replacement. A very common fix is to replace electrolytic capacitors, but obviously that only works if the rest of the supply is functional. Unfortunately, catastrophic failures are common, leaving the PCB so damaged that even if the parts are available, a safe repair isn't possible.
The information presented in this article is intended as a guide only, and ESP takes no responsibility for any injury to persons (including but not limited to loss of life) or property damage that results from the use or misuse of the data or formulae presented herein. It is the readers' responsibility absolutely to assess the suitability of a design or any part thereof for the intended purpose, and to take all necessary precautions to ensure the safety of himself/ herself and others.
The reader is warned that the primary voltages present in all power supplies for amplifiers are lethal, and the constructor must observe all applicable laws, statutory requirements and other restrictions or requirements that may exist where they live. Secondary voltages are usually 'safe', but be warned that the voltage between +ve and -ve supplies will usually be at least 50V and may be over 120V for high power amplifiers. This is also considered 'hazardous', as is any voltage exceeding 42.4V peak AC (30V RMS) or 60V DC. Injury from secondary voltages is not common (except for valve amps, and especially microwave ovens - the latter have killed many technicians over the years!).
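The thresholds quoted above (42.4V peak AC, i.e. 30V RMS, and 60V DC) can be expressed as a simple check. This helper is illustrative only; it is not a substitute for the applicable safety standards:

```python
# Sketch: classify a voltage against the hazardous-voltage limits quoted in
# the text - 42.4 V peak AC (30 V RMS) or 60 V DC. Illustrative only.
import math

def is_hazardous(volts, kind="dc"):
    if kind == "dc":
        return volts > 60.0
    # AC voltages are given as RMS; hazardous above 42.4 V peak
    return volts * math.sqrt(2) > 42.4

print(is_hazardous(35, "dc"))   # a single 35 V rail: below the DC limit
print(is_hazardous(70, "dc"))   # rail-to-rail of a +/-35 V supply: hazardous
```

Note that it's the rail-to-rail voltage of a dual supply that matters, which is why a ±35V amplifier supply is treated as hazardous even though each rail is 'only' 35V.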
WARNING: All mains wiring should be performed by suitably qualified persons, and it may be an offence to perform such wiring unless so qualified. Severe penalties may apply.
Be particularly careful to ensure that the DC supplies are insulated from each other and the common connection. Any contact between conductors may result in an arc, which can cause a fire or severe eye damage if you happen to be looking at it. Over-current protection by way of fuses or circuit-breakers is essential to ensure that the equipment and house wiring is protected from fault currents.
All power supplies must be fused and/ or protected by an approved circuit breaker, and all mains wiring must be suitably insulated and protected against accidental contact to the specifications and requirements that apply in your country. In most cases, there is a requirement for the use of 'a tool' (a screwdriver qualifies) to gain access to internal circuitry. It's now common for manufacturers to include security screws, to prevent access by non-qualified persons and/ or anyone attempting to gain access for servicing or other purposes.
Linear power supply design is not particularly difficult, and apart from making sure that you never exceed the recommended maximum voltage for any amplifier (or 3-terminal regulators for low current supplies), it's hard to go wrong. Smaller transformers will usually have a slightly higher no-load voltage, but it collapses more readily under load. A larger transformer will always provide a 'stiffer' supply voltage, but you need to ask yourself if this really matters. Transformers are very robust machines, and as long as you don't subject them to continuous overloads, most will last (almost) forever. Toroidal transformers are usually the best choice if you can afford the extra cost, and their much lower radiated (leakage) flux means that there are usually fewer problems with hum loops within the chassis.
A simple linear supply will operate with severe transient overloads (as found with music) without any risk. Provided the average transformer rating isn't exceeded, it won't care if you draw transient current of double (or more) the transformer's rating. A 200VA transformer will not be troubled if you draw an average of 50VA with peaks of up to 500VA. This would be typical for a stereo 150W/ channel amplifier driven to the onset of clipping with music having a 10dB peak/ average ratio. The use of large (10mF/ 10,000µF) filter caps will allow full dynamics. This is based on the 'rough-and-ready' formula described in Section 4 (Selecting the Transformer), where the transformer has a VA rating of 0.7 of the total amplifier output. For example, 0.7 of 300W (2 x 150W amps) is 210VA, but 200VA is close enough.
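The 'rough-and-ready' sizing rule above is trivial to apply:

```python
# The article's rough transformer sizing rule: VA rating of about 0.7 times
# the total amplifier output power.
def transformer_va(total_amp_power_watts, factor=0.7):
    return factor * total_amp_power_watts

# Stereo 150 W/channel amplifier: 0.7 * 300 W -> ~210 VA (200 VA is close enough)
print(f"{transformer_va(2 * 150):.0f} VA")
```

As the text notes, this works because music has a high peak-to-average ratio; the transformer only has to supply the average power, with the filter caps handling the peaks.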
Don't discount the 'tried and true' E-I transformers when making your final decision. Most are wound using separate sections of the bobbin for primary and secondary, making a primary to secondary breakdown very unlikely. Leakage flux can make the internal layout more difficult (hum loops), but if you only need a relatively small (say 100VA or thereabouts) transformer, there's no comparison when it comes to the cost difference. An equivalent toroidal will generally cost significantly more, and while it will be lighter and marginally more efficient, the benefits don't always outweigh the extra cost.
There's absolutely no difference in the 'sound' from an amplifier using a toroidal vs. E-I transformer, provided that the E-I transformer doesn't induce hum. Anyone who claims otherwise is badly mistaken. There are certainly measurable differences (notably regulation and no-load current), but these don't affect the amp's 'sound'. The difference in output power is so small (generally well below 1dB) that a listening test will not pick the difference. It's important to understand that we can measure things that can't be heard, but the converse is not true.
The idea of this article is to show the complete design process, but 99% of the time you'll buy a suitable transformer, bridge rectifier and filter caps, wire it up and it's done. Not including metalwork (which can be very time consuming for any build), a linear supply can be wired up and operational in no more than an hour, using point-to-point wiring and chassis mount electros. Should you buy a ready-made SMPS it will take just as long, but (and by necessity) you'll be much more careful. Last-minute chassis mods can be made to a linear supply without having to worry about small bits of metal bridging 0.5mm spaced IC pins, and it's very rare indeed that you'll ever need a fan for a linear supply. With many SMPS, a fan and excellent ventilation are a requirement, especially if it's very compact.
Ultimately, the power supply design is influenced by cost, size and weight. Switchmode power supplies are now very common in many commercial products (often with Class-D power amps), and no-one can claim that they are quieter than a linear supply. Many are not regulated, and simply use an 'off-line' rectifier and filter cap, followed by a squarewave inverter. The DC supply rails have 100/120Hz ripple, along with high frequency noise from the high-speed switching. These are generally not suited to DIY as noted in section 13, but the power supplies are available (often quite cheaply) as a separate item. Don't expect them to last 50 years though - some will be lucky to last for 5 years.
The degree of over (or under) engineering in a DIY project is determined mainly by size and budget. Weight is irrelevant if you don't need to carry it around on a regular basis. Provided you make sensible choices, the PSU is not difficult, and understanding why they behave as they do means that you aren't left guessing. Getting the supply 'just right' is rather satisfying, and building it so it will last a lifetime isn't hard once you have the right information to hand.
This article is an updated version of Linear Power Supply Design, which was written over twenty years ago. It's still (very much) worth reading, but it's a long article with a great deal of information. I've tried to keep this a little more succinct, but with a topic as complex as power supplies (despite their apparent simplicity) that's not easy.
The first two references are from other ESP articles, and where appropriate they have further references that indicate the original material used. The remaining references are for SMPS information.
Elliott Sound Products - Pre-Regulator Techniques
Pre-regulation (or preregulation) circuits have been a common requirement in power supplies for many years. There are two reasons - either to reduce the ripple present on the output, or to minimise the power dissipation of the regulator. This reduces heat generation (in the regulator) and may improve regulation a little because there's less voltage change at the input. There are countless different circuits, but they follow the same general themes - linear, tap-switching, phase-cut and switchmode. The last three can be implemented in many different ways. Linear pre-regulators are usually fairly similar because there's a limited number of options.
The first alternative is to use a linear pre-regulator, with an output voltage that's just high enough to ensure that the regulator remains in control of the output. This has the advantage that the regulator circuit itself is already supplied with a waveform that's essentially free of ripple, ensuring a very low noise output. However, the dissipation in the pre-regulator can be very high - even for a relatively low power circuit.
The simplest form of 'high efficiency' pre-regulation is to use two or more voltage taps on the transformer, with the appropriate output voltage being taken from the transformer depending on the set output voltage. Tap-switching (as this is called) is fairly simple to implement, but usually requires a custom transformer. This makes it suitable for manufacturers, but it's far less attractive for DIY unless the constructor is willing to use two multi-tapped transformers (assuming local availability) for a dual supply with positive and negative output voltages. You may even have a suitable transformer available in the 'junk box'.
In many early high efficiency power supplies, an AC 'phase-cut' scheme was common. By turning on the AC at that part of the waveform where the peak AC voltage was just above the voltage needed at the output, the voltage across the regulator was kept to a minimum, thus improving efficiency. These systems commonly used thyristors (aka SCRs or silicon controlled rectifiers), which are readily available in high current versions. The very spiky nature of the waveform could create both acoustic and electrical noise. TRIACs were also common, and there was a commercial audio power amplifier design that used this technique.
Modern high power supplies use a switchmode supply at the front end, either with direct conversion from the AC mains, or a low voltage switchmode regulator following the power transformer. These can have high efficiency, and where very high power is expected the AC side may use active power factor correction (PFC) to ensure that the mains waveform is as close to a sinewave as possible. This creates a complex design overall, but is capable of very good results.
For this discussion, we'll look at a supply that can provide up to 50V DC at up to 5A. Although diagrams will only describe a single (positive) supply, the same principles apply for a dual supply with both positive and negative outputs. The primary difference is that for a dual supply, voltage, current and total power dissipation are doubled. Of course this only applies when both polarities are supplying the same voltage and current (a dual tracking power supply). Only the pre-regulator is considered here - the regulator is a separate entity, and is shown as a 'block', similar to a 3-terminal IC regulator.
The drawings below are examples, and each shows one way that a particular pre-regulator can be configured. There are as many possibilities as there are designers, and it would not be possible to include a sample of each. A web search for a particular pre-regulator design will often turn up some good examples, along with the usual irrelevant links and some examples that should state that the method shown should be avoided, but someone will still think it's a good idea.
Regardless of the technique used, the regulator circuit (either discrete, IC or hybrid) must always have enough voltage across it to allow proper regulation. That includes the most negative part of the ripple waveform. If a regulator needs 5V differential (input to output), the unregulated (or pre-regulated) voltage must always be at least 5V greater than the output voltage. If there's 3V of ripple, then the most negative part of that voltage still needs to be 5V greater than the output. The most positive part of the ripple waveform will therefore be at 8V above the output.
If the voltage differential isn't great enough, there will be ripple 'breakthrough', and some of it will be visible at the output terminal(s). That means that the average voltage (and therefore the average power dissipated in the regulator) must be a little higher than expected. With 3V peak-peak of ripple, the required average DC voltage is increased by 1.5 volts. It doesn't sound like much, but it increases the power demands on the regulator. With 5A output, the power dissipation is increased by 7.5 watts, so total dissipation (including the required 5V absolute minimum) is 32.5 watts. That's a significant increase from the 25W dissipated if the pre-regulated voltage has no ripple.
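The arithmetic above can be captured in a couple of lines (a sketch only; the 5V dropout and 3V peak-peak ripple are the figures used in the text):

```python
def regulator_dissipation_w(i_out_a, dropout_v=5.0, ripple_pp_v=0.0):
    """Average dissipation in the series regulator. The ripple trough must stay
    'dropout_v' above the output, so the average input-output differential is
    the dropout voltage plus half the peak-peak ripple."""
    avg_differential = dropout_v + ripple_pp_v / 2
    return avg_differential * i_out_a

print(regulator_dissipation_w(5, 5, 3))  # 32.5 W with 3 V p-p ripple at 5 A
print(regulator_dissipation_w(5, 5, 0))  # 25.0 W with a ripple-free input
```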
Depending on the type of rectification used (normal diodes, SCRs) and other factors, the transformer may also need to handle a higher 'apparent power' (VA or volt-amps). A standard bridge rectifier imposes a VA rating that's around 1.8 times the actual power delivered. That means that if you expect the output power (including losses) to be 250W, you need a 450VA transformer if the full output load is maintained for any length of time (longer than a few minutes). A smaller transformer can only be used if you include thermal sensing on the transformer as well as the heatsinks, so the supply will shut down if it starts to overheat. Failure to include this safeguard could lead to a transformer failure.
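As a quick check of the transformer sizing (the 1.8 factor is the approximate figure given above for a capacitor-input bridge rectifier):

```python
def transformer_va_for_bridge(p_load_w, va_factor=1.8):
    """A capacitor-input bridge rectifier draws current in short peaks, so the
    VA demand on the transformer is roughly 1.8 times the power delivered."""
    return p_load_w * va_factor

print(transformer_va_for_bridge(250))  # ~450 VA for a sustained 250 W load
```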
With any high power bench supply, one of the issues that you will always face is the transistor SOA (safe operating area) limits. Datasheets usually provide this in graphical form, and operating beyond the second breakdown limits (even briefly) can result in instantaneous failure. This must be addressed in a final design, and the details are included below (this circuitry will be in the regulator, not the pre-regulator). Remember that if a regulator transistor fails, it will do so short circuit, so the full supply voltage will be presented to the device being tested. This may lead to the destruction of the DUT (device under test).
In the cases discussed, it is assumed that power transistors will be mounted directly to the heatsink, with no electrical insulator present. This minimises the thermal resistance from case to heatsink, but it will always be a non-zero value. The best you can hope for is probably around 0.1°C/W, but that's not easy to achieve in practice. The use of silicone 'thermal' pads is so unwise that I dare not even make mention of them, but they exist, and some people still think they're a good idea. They are fine for low power applications (up to around 10W continuous), but for serious power they are grossly inadequate.
Unfortunately, direct mounting almost always means that the heatsinks are 'hot' (as in electrically 'live'), and they must be insulated from the chassis and great care is needed to ensure that a short to chassis is rendered as close to impossible as you can make it. This isn't necessarily as hard as it sounds, but it does demand a design that's different from the way heatsinks are normally used. As an example, I've included the photo below of a dual live heatsink, which is held together with pieces of acrylic. All screws are countersunk well below the surface, and tape will be applied before mounting to provide a proper electrical barrier. Mounting to the chassis is simple - three holes are drilled through the acrylic, and tapped to take 4mm metal thread screws.
Figure 1 - Dual Live Heatsink, With Fan And Acrylic Separators
The arrangement shown lends itself very well to this application, and one heatsink is for the positive supply, and the other for the negative supply. This is being prepared for an upcoming project that's designed to provide an affordable dual supply, with voltage up to ±25V and load current up to 2A (either or both supplies). Almost all circuitry will be attached to the heatsink, other than the voltage setting pots, current limiting and primary power supply (transformer, bridge rectifier and filter capacitors).
Although the fan is rather puny and the heatsinks aren't overly large (the tunnel is 80mm square, and 160mm long), this heatsink should be good to dissipate up to 50W each side (100W total) fairly easily. This is far more than I'll need, but there's no such thing as a heatsink that's too big. Note that it is absolutely essential that the fan blows air into the tunnel, because fans that suck, really suck! There is a huge difference in performance, and this is covered in detail in the ESP Heatsinks article.
This is the simplest to implement, not counting the thermal management provisions that are essential. For our hypothetical supply, it will require an unregulated voltage of at least 62V DC. If you were to use it with the full 5A output at (say) 5V DC output, the pre-regulator will dissipate at least 260W, with the regulator dissipating a further 25W (assuming a regulator voltage differential of 5V). That is a great deal of heat to dispose of, and attempting it without forced air cooling (a fan) is unrealistic. It can be done, but the heatsink would have to be massive, and the cost of that alone would almost certainly exceed the cost of the power supply itself. This is just silly unless there is an absolute requirement for total acoustic silence, which is rarely the case for a lab/bench supply.
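The dissipation figures quoted can be reproduced as follows (a sketch assuming the pre-regulator output tracks 5V above the set output, as described later for the linear tracking circuit):

```python
def prereg_and_reg_dissipation(v_unreg, v_out, i_out, reg_diff=5.0):
    """Dissipation split between a linear tracking pre-regulator and the
    regulator proper. The pre-regulator output sits 'reg_diff' volts above
    the set output voltage."""
    p_prereg = (v_unreg - (v_out + reg_diff)) * i_out  # drop across pre-regulator
    p_reg = reg_diff * i_out                           # drop across regulator
    return p_prereg, p_reg

# Hypothetical 50 V / 5 A supply, set to 5 V out at the full 5 A:
print(prereg_and_reg_dissipation(62, 5, 5))  # (260.0, 25.0) -> 285 W of heat in total
```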
As the output voltage is increased, pre-regulator dissipation is reduced, until at the very upper limit, it should be passing almost the full unregulated voltage to the regulator. This may mean that output noise (100-120Hz hum or buzz) also increases, because there's no pre-regulation to reduce ripple. This can be countered with a higher unregulated voltage of course, but that increases losses even more. As noted, the greatest advantage is simplicity, but much of that tends to go away when you have to add thermal management circuitry.
Normally, the fan will not be running, and that will be the case for (probably) most of the tests that are typically performed. However, as the heatsink temperature increases, the transistors or MOSFETs used in the pre-regulator become prone to failure due to excessive die temperatures. As soon as the heatsink gets above around 30°C or so, the fan should come on (it can be variable speed), and if the heatsink temperature continues to increase, the supply should be turned off automatically. If these precautions aren't taken, your test load and the supply are liable to be seriously damaged.
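As an illustration only, the thermal management described might be sketched as below. The ~30°C fan-on point is from the text; the full-speed and shutdown temperatures are assumptions chosen for the example:

```python
def thermal_policy(t_heatsink_c, fan_on_c=30.0, full_speed_c=60.0, trip_c=85.0):
    """Return (fan_speed 0..1, shutdown?). Fan off below the fan-on point,
    speed ramps with temperature, hard shutdown above the trip point.
    The 60 and 85 degree figures are assumptions, not from the article."""
    if t_heatsink_c >= trip_c:
        return 0.0, True          # overheating: turn the supply off automatically
    if t_heatsink_c <= fan_on_c:
        return 0.0, False         # cool enough, fan stays off
    speed = min(1.0, (t_heatsink_c - fan_on_c) / (full_speed_c - fan_on_c))
    return speed, False           # variable speed, supply stays on

print(thermal_policy(25))   # (0.0, False)
print(thermal_policy(45))   # (0.5, False)
print(thermal_policy(90))   # (0.0, True)
```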
While it's potentially the quietest (electrical noise), a linear pre-regulator is the least efficient method. However, this doesn't mean that it shouldn't be considered, especially for lower powers. For supplies providing ±25V or so at current up to 2A, its limitations are minimised and the loss of efficiency isn't such a big deal. Worst case dissipation may be up to 70W (140W for a dual supply), but that's only under full load at very low output voltages. In 'normal' use (whatever that is), dissipation will be somewhat less, and in many cases it will only be a few watts when testing preamps or even power amps at low power. It's a technique that isn't dead just yet, and will likely continue for many years to come.
Perhaps one of its greatest advantages is that if built well with good heatsinking, it will outlive most of the people who choose to build it. The parts won't disappear any time soon, and service (if ever required) is usually straightforward if through-hole parts are used throughout. There's no need for SMD parts, because the circuit is so simple. This cannot necessarily be said for some of the alternatives, and especially for switchmode circuits. However, this only applies if a more pragmatic approach is taken, reducing the voltage to ±25V at a maximum of around 2A.
A linear tracking regulator is almost silent, both acoustically and electrically. However, they are also very inefficient, so they need large heatsinks to dissipate the considerable heat that can be generated in a high-power supply. This is not only very wasteful of energy (you pay for the heat generated because of the current drawn from the mains), but also increases the size and cost of the supply.
Figure 2 - Linear Tracking Pre-Regulator
In the above, C1 is 10,000µF (10mF), and is the main smoothing capacitor. It's supplied from the output of the rectifier. Q1 and Q2 form a current source. This provides base current to the series pass Darlington pair (Q3 and Q4). Q4 may consist of two or more paralleled devices if the dissipation is high. The zener diode (ZD1) ensures that the input voltage to the regulator (which may be an IC or discrete) is at least 4.5V greater than the output voltage. If the regulator requires a higher differential voltage, you simply use a higher voltage zener diode. By default, the output from the pre-regulator is fairly well smoothed and contains little ripple because its reference is the regulated output (via ZD1). D1 ensures that the pre-regulator and regulator are not subjected to reverse voltages if a DC source is applied to the output (which can and does happen). Values are not provided for the pot (VR1) or R3 because they are dependent on the regulator topology.
One of the more challenging aspects of any linear design is the transistor SOA (safe operating area). For example, the TIP35/36 devices are low cost and ideal in this role, but there are several things that must be considered. The first is power rating (125W), but that's tempered when you look at the temperature derating curve (power vs. case temperature), the maximum TJ (junction temperature), Rth j-case (thermal resistance, junction to case), and the SOA curves. It should be apparent that with Rth j-case at 1°C/W, if the device is dissipating 70W, the junction must be at the case temperature (25°C) plus 70°C (70W × 1°C/W) - a total of 95°C. This assumes perfect mating between the case and heatsink, and that the heatsink remains at no more than 25°C.
This is clearly not possible. The maximum allowable junction temperature is 150°C with a case temperature of 25°C, so with 70W dissipation the case temperature cannot exceed 80°C (this is easily calculated, or can be done using graph paper). At 150°C, the die cannot dissipate any additional power, and at a case temperature of 25°C it can handle 125W (which raises the die temperature to 150°C). Note that this only addresses temperature, not SOA! The SOA curve shows that if there's 35V across the device, the maximum current is 2A - that's a maximum of 70W at 25°C. If the voltage or current is increased beyond that, there is a likelihood of second breakdown, an almost instantaneous device failure mechanism. These limits are reduced at higher temperatures!
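The junction temperature sums above can be checked directly (figures as quoted from the TIP35/36 datasheet):

```python
TJ_MAX_C = 150.0   # TIP35/36 maximum junction temperature
RTH_JC = 1.0       # degC/W, thermal resistance, junction to case

def junction_temp_c(t_case_c, p_diss_w):
    """Junction temperature for a given case temperature and dissipation."""
    return t_case_c + p_diss_w * RTH_JC

def max_case_temp_c(p_diss_w):
    """Highest case temperature that keeps the junction at or below 150 degC."""
    return TJ_MAX_C - p_diss_w * RTH_JC

print(junction_temp_c(25, 70))   # 95.0 degC junction, even with a 25 degC case
print(max_case_temp_c(70))       # 80.0 degC maximum case temperature at 70 W
print(max_case_temp_c(125))      # 25.0 degC - full rating needs an unrealistic 25 degC case
```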
Despite the apparent simplicity of a linear pre-regulator, there's a great deal of design work involved to ensure that reliability is not compromised. This is why it's so important to examine datasheets, minimise all thermal resistances possible, and usually be prepared to use more parts than you originally thought you'd need. However, this really is the simplest - as soon as more 'advanced' techniques are used, the design challenges are only increased.
If the idea of a linear pre-regulator were to be used for the hypothetical supply (50V at 5A, worst case dissipation of around 300W), the SOA requirements would mean that you'd need a minimum of ten TIP35/36 transistors for each polarity (600mA maximum with 60V across the transistor). This is obviously not a smart way to build a very high power supply. 250W (500W dual) isn't a huge supply by any stretch of the imagination, so alternatives are essential.
Without tap switching, a 50V, 5A supply needs a minimum input voltage of around 55V, so if you were to expect 5A at 1V DC output, the dissipation will be 270W. This assumes that the mains voltage remains at the nominal value, either 230V or 120V. In reality, we need to allow for both high and low mains voltages, so the unregulated voltage should be at least 10% higher than nominal to allow for lower than normal mains voltage. 55V becomes 61V near enough. Dissipation is increased to 300W.
Using tap switching, the transformer has multiple windings (or a single winding with multiple taps), and efficiency is higher than with a regulator that's always supplied with the highest voltage provided by the transformer, rectifier and filter capacitor. For example, for voltages up to 12V, the unregulated DC voltage will typically be at least 18V (average value, requiring an AC voltage of 15V RMS), and it always has to be high enough to ensure that the minimum voltage (based on the amount of ripple) remains above the regulator's dropout voltage (where it can no longer regulate). This varies from around 3V or so, up to 5V or more, depending on the topology of the regulator itself.
With an output voltage of (say) 1V at 5A, the regulator dissipates around 90W. As the output voltage is increased, the transformer tap is automatically selected to provide the required voltage range. There is still a lot of heat to disperse, but it's much lower than a simple regulator supplied with the full secondary voltage at all times.
When the user selects an output voltage of 12V DC or greater, the transformer tapping point is increased so there's more voltage at the regulator's input. For our example, it might rise to 39V DC (AC output from the transformer of 30V RMS), and at full current (5A) with an output voltage of 16V, the regulator dissipates 115W. With a three tap system, the final tap will be selected when the output voltage is set for 34V or above. At 34V with 5A output, the regulator has an input voltage of perhaps 60V, and dissipates 130W.
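The per-tap dissipation figures are simple voltage-difference sums, e.g.:

```python
def reg_dissipation_w(v_in, v_out, i_out):
    """Heat in the series regulator for a given input (tap) voltage."""
    return (v_in - v_out) * i_out

# Worst-case points near the bottom of each tap's range (figures from the text)
print(reg_dissipation_w(19, 1, 5))    # 90 W on the lowest tap
print(reg_dissipation_w(39, 16, 5))   # 115 W on the middle tap
print(reg_dissipation_w(60, 34, 5))   # 130 W on the highest tap
```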
Note that dissipation is always highest at the low end of any tapped supply voltage. If the regulator is operated with 50V output at 5A, dissipation is around 50W. It will generally be a bit lower at just below the switching voltage for lower voltages, but you must always design for the worst case. You must also allow for a short circuit at the output, and this can be very challenging indeed. Instantaneous power dissipation may exceed 300W, and a heatsink with high thermal mass is needed to absorb such 'transient' events without localised temperature rise. Transistor SOA protection should be included to protect the regulator transistors, and this can be challenging (to put it mildly).
Figure 3 - Simple 3-Stage Tap Switching
A simple tap switching circuit is shown above. Voltages referred to are loaded to 5A, and assume a power transformer of not less than 500VA. The regulator gets an input voltage of around 19V as long as the output voltage is less than 12V. Above that, the zener diode (ZD1) passes enough current to turn on Q1, which in turn operates the relay (RL1). The relay contacts disconnect the low voltage winding and connect the next tapping (30V AC), so the regulator's input voltage is increased to 44V (~40V loaded). The regulator can then provide a regulated output of up to 28V DC. If the output voltage is increased further, RL2 operates, connecting the 45V AC tap, giving an unregulated voltage of around 63V (~60V loaded). Without the tap switching, dissipation in the regulator will be much higher than desirable with low output voltages, particularly if the current is high.
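The tap selection amounts to a threshold comparison against the set output voltage. A sketch using the Figure 3 voltages (the zener/transistor thresholds are idealised here):

```python
def select_tap_vac(v_set):
    """Idealised Figure 3 behaviour: the 15 V AC winding up to 12 V out,
    the 30 V AC tap up to 28 V out, and the 45 V AC tap above that."""
    if v_set < 12:
        return 15    # ~19 V DC loaded at the regulator input
    if v_set < 28:
        return 30    # ~40 V DC loaded
    return 45        # ~60 V DC loaded

print(select_tap_vac(5))    # 15
print(select_tap_vac(20))   # 30
print(select_tap_vac(40))   # 45
```

In the real circuit the thresholds are set by zener voltages (or comparators), so they won't be this precise, as the text notes below.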
The relay contacts are labelled 'NO' and 'NC', meaning normally open and normally closed respectively. The 'normal' state is when the relay is not energised, so the 'NO' contacts will be open (no connection). The relay contacts must be able to handle the full voltage and current, as determined by the design of the power supply. This is usually easy to achieve, and relays have very low resistance when the contacts are closed. You must ensure that there is no possibility of the relay contacts for separate voltages shorting a winding (this is not possible in the Figure 3 circuit).
ZD2 and ZD4 protect the relay switching transistors from excessive base current with high output voltages. If a pair of comparators are used instead of zener diodes and transistors, power dissipation is reduced and the tap switching voltages will be more accurate. This does add complexity of course, but the cost difference is negligible. The simple scheme shown will certainly work, but switching thresholds are not very precise.
BR2 along with the separate winding provides a low voltage (~12V DC loaded) output permanently to operate the relays, regardless of the selected AC voltage from the transformer. This is best provided by a separate winding, and the output would ideally be regulated for comparators and relay coils. Comparators give better (more predictable) voltage sensing, which gives greater precision and lower power consumption.
If you drive a short circuit and attempt to increase the output voltage, it cannot rise due to current limiting, so the higher voltage taps can't be selected. Although I've shown relay switching, it can also be done using SCRs (silicon controlled rectifiers, aka thyristors), TRIACs or even MOSFET relays. Regardless of the switching technique, the results are much the same. 'Solid state' switching may be thought to be preferable, but is more involved, has higher losses than relays and requires more complex circuitry.
Of course, there's no reason not to include a linear tracking pre-regulator with tap switching, but this will still be subject to the same constraints as a linear pre-regulator if you happen to have set the highest output voltage and there's a sudden short circuit in the load (or just the test leads). The tap will be dropped to the lowest setting almost instantly, but there's still a large filter capacitor, charged to the maximum unregulated voltage! This will cause grief whether linear tracking pre-regulators are included or not, and it must be catered for because it will happen.
The overall efficiency of a tap switching system is improved with more transformer taps. It's also possible to use different voltage windings that are switched in a sequence that allows (say) three windings to provide five different output voltages from the transformer. You might have a pair of 18V windings and a single 9V winding, switched so that you can have AC voltages of 9, 18, 27, 36 and 45V AC. While this obviously improves efficiency, it also means a complex logic matrix to control the switches. Use of a microcontroller will simplify this task of course, but the relay contact arrangement will be fairly convoluted. The transformer will be a custom design, unless you use multiple smaller transformers.
The regulator design must be sufficiently rugged to ensure that it doesn't fail if shorted while supplying the maximum output voltage, and this will happen, either accidentally or due to a failure in the test circuit. This particular issue refuses to go away, regardless of the technique used for pre-regulation, and failure to provide appropriate protection circuitry will result in a blown-up power supply.
A common approach to pre-regulation in early power supplies was a 'phase-cut' circuit, somewhat similar to an incandescent lamp dimmer. These were popular because they could allow the unregulated voltage to remain just high enough to ensure that the following linear regulator could provide good regulation, without any ripple breakthrough.
However, most of these supplies used SCRs (silicon controlled rectifiers, aka thyristors). The biggest problem was/is the turn-on speed of the SCRs - they go into conduction very quickly, and that means that they invariably cause some high frequency noise. Because they can only be turned on, they were (in lamp dimmer parlance) leading edge 'dimmers', so most of the AC half-cycle would pass before the SCR(s) turned on. GTO (gate turn-off) thyristors became available later, but they were never used in any 'phase-cut' pre-regulator circuit I've seen.
The rapid turn-on also causes most transformers to growl, so they made acoustic as well as electronic noise. One alternative to the 'traditional' SCR phase-cut pre-regulator is to use a MOSFET switch. This means that it can turn off when the voltage is high enough, so it operates like a trailing-edge dimmer. This is somewhat quieter than an SCR version, and with the MOSFETs one can obtain today, it's also more efficient. However, that doesn't mean that high frequency noise is eliminated.
You can think of this arrangement as an 'infinitely variable' tap switcher, because the transformer's output voltage is continuously variable. The unregulated output voltage can be as low as 6V if the control is on the secondary side of the transformer. Many supplies used phase-cut circuits on the primary side, because that reduces the current involved, which in turn lowers the losses in the SCRs or TRIACs (a TRIAC is a bi-directional AC switch). Of course this introduces additional complexity, because the SCRs or TRIACs need isolated control circuitry. There are specialised ICs designed specifically for driving TRIACs (e.g. MOC3020 ... MOC3023), but control circuitry is still required. A zero-crossing detector is necessary so the circuitry can identify the point where the AC waveform passes through zero (and the SCRs or TRIACs turn off).
In the following circuit, the zero-crossing detector is not required as a separate sub-circuit. The switching system doesn't actually identify the zero-crossing, but turns on the MOSFET whenever the AC voltage is below the target voltage. The current limiter uses a 50mΩ resistor (R2), which limits the peak MOSFET current to a bit over 13A. If the peak current is decreased, the MOSFET will be on for longer, and total power dissipation will be increased. The current must be high enough to ensure that the filter cap (C2) can charge to the required voltage under full load. Ultimately, the peak current is also limited by the transformer's winding resistance.
Figure 4 - Phase-Cut Pre-Regulator
The drawing shows a version of a phase-cut pre-regulator that you almost certainly won't find elsewhere. Although simplified, it works well as shown, and needs only a few changes for a practical circuit. The P-Channel MOSFET is turned on as the unfiltered DC waveform falls below the target voltage, and turns off again when the target unregulated voltage is reached. With a 5.1V zener as shown, the regulator's differential voltage is about 5V at any output voltage setting. The opamp comparator requires a 'full time' power supply, or it can't function.

As with all phase-cut circuits, the filter capacitor's ripple current can be much higher than normal. This is mitigated (to an extent) by using the current limiter for the MOSFET as shown, but that increases its dissipation. For better overall performance, Q3 is a current sink. This makes the MOSFET drive signal less affected by the instantaneous voltage. R7 and R10 are required so the circuit will start, as without them there is no voltage on the comparator's non-inverting input, and it won't turn on. It only requires a few millivolts to start operating, and the process is self-sustaining after that. R7 also provides the zero crossing signal, although at times the circuit will switch on at other points on the waveform (as seen in the waveforms below).
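The switching action can be illustrated with a crude time-step model. All values are illustrative (loosely based on the operating point used for the waveforms below), and this is no substitute for simulating the actual circuit:

```python
import math

V_PEAK = 70.0     # rectified peak from the transformer secondary (illustrative)
V_TARGET = 23.0   # required pre-regulated voltage (set output plus zener offset)
I_LIMIT = 13.0    # A, peak MOSFET current set by the 50 milliohm sense resistor
I_LOAD = 0.8      # A, load current
C = 0.01          # F (10,000 uF) filter capacitor
DT = 1e-5         # s, simulation step

on = False
v_cap = 22.0      # start slightly below target

for n in range(int(0.04 / DT)):                # two full 50 Hz mains cycles
    v_ac = abs(V_PEAK * math.sin(2 * math.pi * 50 * n * DT))
    if not on and v_ac < V_TARGET:
        on = True                              # unfiltered DC fell below target: turn on
    if on and v_cap >= V_TARGET:
        on = False                             # cap recharged to target: turn off
    i_chg = I_LIMIT if (on and v_ac > v_cap) else 0.0
    v_cap += (i_chg - I_LOAD) * DT / C         # charge via the limiter, minus load

print(round(v_cap, 1))                         # settles close to the 23 V target
```

Even this toy model shows the behaviour described: the MOSFET comes on near the zero crossing, the main charge transfer happens as the waveform rises afterwards, and conduction stops as soon as the cap reaches the target.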
Using phase-cut circuitry on the secondary (low voltage, high current) side of the transformer was once impractical, but MOSFETs have changed that. They are available with almost scary current ratings, and 'on' resistances so low that power dissipation is minimal. The circuitry needed isn't frighteningly complex, but it's usually a wise move to impose some form of current limiting (other than the transformer's winding resistance) to ensure that the filter capacitor ripple current is manageable. Without current limiting, the filter capacitor can have a very hard (and commensurately short) life. Fortunately, this isn't too difficult to achieve, and requires only a few low-cost parts. One technique that was commonly used in older systems is a filter 'choke' (inductor), but that's a large, heavy and expensive addition. However, it does give good results when implemented properly.
It's rather doubtful that any manufacturer would use this scheme in a new design, but that's not because it's not effective. The biggest issues with all phase-cut systems are poor transformer utilisation and high capacitor ripple current. For a DIY build, and provided that the DIYer is willing to experiment, it can produce good results, but manufacturers will now use a switchmode supply (and most use only the switchmode supply, without any linear regulation to minimise noise). Note that leading edge (SCR or TRIAC) systems are essential if used on the transformer primary, but if the phase-cut is performed on the secondary side, a trailing edge switch is preferred. Note that the supply voltage to the comparator must be no less than the regulator's output voltage to prevent damaging the input circuits of the comparator IC (or opamp). The comparator has some hysteresis built in (provided by R5), which helps to prevent spurious oscillation.
Of the options shown, the MOSFET phase-cut circuit is probably the simplest to implement if you need high efficiency, but it comes at a cost. While the circuit is conceptual (rather than a complete solution), it simulates very well and there's no reason to expect that it won't also work very well. It doesn't need any extras other than a suitable supply voltage for the comparator (typically around 30V DC). Apart from a switchmode circuit, it can have the highest overall efficiency at any voltage or current of any of the techniques.
So, what are the costs? At low to mid voltages, expect filter capacitor ripple current to be up to twice that of a conventional rectifier supplying the same output current. It also suffers from rather poor transformer utilisation (in common with all phase-cut circuits). The power factor at the voltage and current shown below is only 0.327, which means the transformer's VA rating may be up to 800VA for an output of 250W (50V at 5A). You need a much larger transformer than expected to get the current and voltage required. A 'conventional' rectifier and filter cap needs a 450VA transformer for the same output power. The same effects are seen with any phase-cut system - it's not something that's limited to the one shown.
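The transformer VA figure follows directly from the power factor, so the numbers quoted above are easy to sanity check:

```python
def required_transformer_va(p_out_watts, power_factor):
    """Apparent power (VA) the transformer must handle for a given real
    output power, from VA = W / PF."""
    return p_out_watts / power_factor

# Figures from the text: 250W (50V at 5A) with a power factor of 0.327
va = required_transformer_va(250, 0.327)   # roughly 765 VA, hence a ~800 VA part
```

The same calculation with a 'conventional' rectifier's typical power factor (around 0.55 or so) lands near the 450VA figure quoted for the standard rectifier and filter cap.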
Figure 4.1 - Phase-Cut Pre-Regulator Waveforms
Of all the designs shown here, the MOSFET phase-cut switching version is the only one that requires a waveform to show how it functions. The unregulated voltage and current are shown, for an unregulated output of just over 23V at an output current of 0.8A. The MOSFET current is limited using the circuit shown above, and is about 13A peak. As you can see, when the unregulated voltage falls below the threshold, the MOSFET turns on and remains on, until the voltage exceeds the threshold. You can see the slight 'bump' in the DC waveform if the MOSFET turns on just before the zero crossing. The main power transfer occurs after the zero crossing. The comparator's output is shown in blue, and you can see that it turns on just before the zero crossing, and turns off at the instant the voltage has reached the desired peak value. As output current is increased, the MOSFET turns on for longer, allowing the filter cap to fully charge to the required voltage.
As noted earlier, this is not a scheme you're likely to come across elsewhere. There are certainly MOSFET switching systems published, but most try to operate in exactly the same way as a 'traditional' SCR or TRIAC version, and don't use the simpler scheme shown here. There's nothing to suggest that a more traditional method is 'better', and I suggest that the opposite is true, since the circuit shown above operates primarily as a trailing edge control system, which helps to reduce the capacitor's ripple current.
Care is needed with the MOSFET selection, because they have a defined safe operating area. This is critical when they are operated partially in the linear region (for which few MOSFETs are optimised), and the datasheet must be consulted to ensure that the MOSFET used can handle the combination of voltage and current. The current limiter makes life easier for the filter capacitor, but harder for the MOSFET. Conversely, removing the current limiter makes life easier for the MOSFET, but places more strain on the filter capacitor.
With a switchmode pre-regulator, you retain the normal mains transformer, bridge rectifier and filter capacitor. However, rather than using a linear (or phase-cut) pre-regulator, it will (most commonly) be a 'buck' (step down) switching regulator. There are countless ICs available for this, and it would be rather foolish of me to try to describe a complete circuit (so I won't). Instead, the buck converter is shown as a circuit 'block', with a separate P-Channel MOSFET acting as the switch. The feedback has to ensure that the output voltage is higher than the regulated output, and as before the voltage differential depends on the regulator topology.
This arrangement is capable of high efficiency, so there will be little wasted power. The biggest problem will always be ensuring that the switching noise is not coupled through to the output. For some applications, a bit of high frequency noise isn't a problem, but if you are trying to measure the signal to noise ratio (SNR) of an audio frequency circuit, any high frequency noise can play havoc with your measurements.
Figure 5 - Switchmode Buck Converter Pre-Regulator
The DC voltage from the transformer, bridge and filter cap must be greater than the highest regulated voltage required, because the buck converter will always need some voltage differential (just like the linear regulator). One major advantage is that if you need high current at a low voltage, the switchmode converter applies 'transformation'. Assuming no losses, if the buck converter has an input voltage of 60V, an output voltage of 10V and a current of 5A (50W), its input current will only be 833mA (also 50W). In reality it will be more because no circuit can achieve 100% efficiency. It's reasonable to expect that the input current will be around 1A (60W) representing only 10W 'wasted' power. Even a small heatsink can dispose of that easily, although not all of the power is dissipated in the switching MOSFET - some is also dissipated in the inductor and rectifier diode.
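The current 'transformation' described above is just power balance, and the quoted figures are easy to verify. A minimal sketch (the efficiency figure used for the 'real' case is an assumption for illustration):

```python
def buck_input_current(v_in, v_out, i_out, efficiency=1.0):
    """Average input current of a buck converter from power balance:
    P_in = P_out / efficiency, so I_in = (V_out * I_out) / (eff * V_in)."""
    return (v_out * i_out) / (efficiency * v_in)

# The text's example: 60V in, 10V out at 5A (50W)
i_ideal = buck_input_current(60.0, 10.0, 5.0)          # 0.833A for a lossless converter
i_real  = buck_input_current(60.0, 10.0, 5.0, 5 / 6)   # 1.0A at ~83% efficiency (assumed)
```

The difference between input power (60W) and output power (50W) is the 10W of 'wasted' power referred to in the text, shared between the MOSFET, inductor and diode.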
Q2 is a very simple differential amplifier, which ensures that there's about 6 volts across the regulator. If the input voltage decreases due to external loading, Q2 is partially turned off, which provides a lower feedback voltage to the switchmode converter, forcing its output voltage to increase. The converse is also (obviously) true. Because the MOSFET is a high speed switch, dissipation will be low, and is a combination of the turn on/off speed, and the 'on' resistance (RDS on). Inductor dissipation depends on the core losses and AC resistance (which is affected by skin effect, and is higher than its DC resistance). A means of providing short circuit protection or current limiting is necessary for the buck converter.
Today, the trend is to use a switchmode power supply to provide the unregulated voltage. It is actually regulated, but set up so that the SMPS output voltage is high enough for the linear regulator to regulate properly. Hopefully, any residual high frequency noise will also be eliminated, but that can be a lot harder than it sounds. A switchmode supply can be either on the mains side (eliminating the 50/60Hz transformer) or on the secondary side using a simple buck regulator as shown in Figure 5. Using a mains switchmode supply is more efficient, but you then have a great deal of circuitry that's all at mains potential. This is not usually a wise choice for most DIY people, although it can be done if you are skilled in the finer points of 'off-line' (powered directly from the 'line' (mains) voltage) switchmode supplies. I've shown an SMPS with active PFC (power factor correction), but this isn't essential. These are far more complex than 'simple' switchmode supplies.
The most common SMPS uses a rectifier direct from the mains, with a high voltage filter capacitor. This is followed by a switchmode control IC, and one or more MOSFETs to switch the high voltage DC to the transformer. Low power systems (less than 50W or so) will use a flyback converter, while more powerful supplies use full or half bridge drive to the transformer. The output voltage on the secondary side is controlled by pulse width modulation (PWM). The feedback control system must monitor the output voltage from the regulator, as well as its input voltage (from the SMPS), and ensure that there is sufficient voltage differential to maintain regulation. The SMPS requires short circuit protection, which is not shown in the circuit. For additional information about switchmode topologies, see the ESP article Switchmode Power Supply Primer.
Figure 6 - 'Off-Line' Switchmode Pre-Regulator
Because there are so many possibilities and so many variables, the above is presented as a block diagram only. The secondary rectifier has to use either Schottky or ultra-fast diodes, as they will typically be operated at 50kHz or more. The feedback system uses the same differential arrangement as Figure 5, but includes a resistor (R1) to limit the maximum LED current in the optocoupler. There are many things that can (and do) go awry with an SMPS, and every eventuality has to be catered for. The SMPS is shown as a circuit block for this approach, simply because of its overall complexity. The purpose of this article is to provide ideas, not complete circuit diagrams.
Note that apart from the X2 capacitor (C1), there is no mains input filtering, switching or fusing shown. These are all necessary in an operational circuit. Switchmode supplies can provide both conducted (through the mains wiring) and radiated (through the air) radio frequency interference (aka EMI - electro-magnetic interference), and filtering is always necessary to ensure that other equipment isn't compromised. Commercial equipment requires compliance testing, and the necessary filtering is essential to obtain certification. It's generally unlawful for any manufacturer to sell non-compliant equipment.
There is no doubt that a well engineered SMPS can give very good results. As shown, you can also use a smaller main filter capacitor (C2) because the frequency is so much higher than normal mains. This minimises stresses if (when) the supply is shorted, because it discharges much faster than a larger cap. Unfortunately, the difficulty lies in the implementation, as these supplies are very complex. Most of the ICs used are SMD, and should the supply fail 10 years after you build it, the chances of getting replacement parts are not good (especially PFC and SMPS controllers).
Despite the apparent simplicity of the block diagram shown, the reality is that there is nothing even remotely trivial about this technique. You can simplify the final design by not using active PFC, but there are still many serious challenges to overcome. The design of a switchmode transformer is almost a 'black art', and achieving full isolation that complies with relevant safety standards is a feat unto itself. Ultimately, while it's certainly likely to provide the highest efficiency of all the methods discussed, the circuit complexity (and the danger of working with live mains powered circuitry) means that it's very hard to recommend as a DIY project.
When a regulator is providing the maximum possible output voltage, an accidental short (or a failed DUT) can stress the regulator's series-pass transistors well beyond their SOA limits. This can result in instantaneous failure, especially if the current limiting is set to its maximum value. Consider a transistor with 60V across it, attempting to pass 5A. The instantaneous power is 300W, and it takes time for the main filter capacitor to discharge. The more capacitance you use, the worse it is for the transistor(s). While most transistors can handle up to three times their rated power for very short durations, the time needed to discharge a 10,000µF capacitor will exceed the capabilities of simple series pass stages. At the same time, the transformer and rectifier are trying to keep the cap charged! The pre-regulator will reduce its output voltage, but this is never instant - expect at least 10ms, often more.
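The discharge time is easy to estimate with t = C × ΔV / I, assuming a constant discharge current (and ignoring the transformer trying to recharge the cap, which only makes the real situation worse):

```python
def discharge_time(c_farads, delta_v, i_amps):
    """Time for a capacitor to discharge by delta_v at a constant current:
    t = C * dV / I.  This ignores the transformer/rectifier trying to
    recharge the cap, so the real situation is worse, not better."""
    return c_farads * delta_v / i_amps

# The text's example: 10,000uF discharging from 60V at a 5A current limit
t = discharge_time(10_000e-6, 60.0, 5.0)   # 0.12s -- an eternity at 300W dissipation
```

120ms at 300W is vastly longer than the few hundred microseconds a simple series-pass stage can survive at that power, which is why the protection circuitry matters so much.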
This is especially true when using a pre-regulator, because that is generally used to limit the dissipation in the regulator. Consequently, the regulator may normally only have to dissipate around 100W (worst case), and usually less. Unless some form of specialised limiting is used (commonly known as V-I limiting in audio power amplifiers), the result will mean an expensive repair, and the supply is out of action until it's fixed. This isn't a trivial undertaking, and some fairly serious design work is necessary to get a V-I limiter that provides complete protection against short circuits. Don't imagine for a moment that it won't happen, because it will - that is pretty much guaranteed!
Figure 7 - TIP35C/36C SOA Curves
The data above is adapted from the Motorola datasheet for the TIP35C/36C (25A, 100V, 125W), and only the 'C' version is shown, since the lower voltage parts are now hard to obtain. Below 30V, the limits are based on power dissipation alone, so at 10V the limit is 12.5A (125W) and at 30V the limit is 4.16A (125W). At any collector to emitter voltage greater than 30V, second breakdown becomes the limiting factor, and woe betide the designer who fails to take it into consideration. Higher current is permissible if the duration of the overload is short enough, so you can get up to 1.75A with a duration of 300µs (87.5W), but that's not sensible for a power supply.
As you can see, if there's 50V across the transistor, its maximum collector current is only 1A (50W vs. 125W). This is the secondary breakdown limit - at a case temperature of 25°C! As the temperature increases, the SOA limits fall, so maintaining the lowest possible heatsink temperature is obviously critical. The TIP35/36 devices are rated at 125W, but that can only be reached at a VC-E of 30V or less, and at a case temperature of 25°C. This is normal, and you'll see the same trend with any BJT you care to examine. Some are better than others, but all are limited by physics.
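For a rough design check, the DC SOA limit can be approximated from the two regions described above: pure dissipation below the 30V knee, and a straight line on log-log axes through the datasheet points above it. The fit below is an illustration of the method only, not a substitute for the actual datasheet curve:

```python
import math

def soa_current_limit(vce, p_max=125.0, v_knee=30.0, i_at_50v=1.0):
    """Illustrative DC SOA limit for a TIP35C-class device at 25 degC case.
    Below v_knee the limit is dissipation alone (I = P/V); above it,
    second breakdown dominates, approximated here as a straight line on
    log-log axes through (30V, ~4.17A) and (50V, 1A).  This fit is an
    assumption for illustration, not the datasheet curve itself."""
    if vce <= v_knee:
        return p_max / vce
    i_knee = p_max / v_knee
    slope = math.log(i_at_50v / i_knee) / math.log(50.0 / v_knee)
    return i_knee * (vce / v_knee) ** slope

# Reproduces the figures quoted in the text:
#   10V -> 12.5A, 30V -> ~4.17A, 50V -> 1A
```

Remember that these figures are for a 25°C case temperature - any practical design must derate further for the actual heatsink temperature.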
Using switching MOSFETs in linear mode is generally considered to be a bad idea (by the manufacturers), and while they don't suffer from second breakdown per se, they have a remarkably similar failure mode brought about by localised overheating within the silicon die. It's well outside the scope of this article to try to cover this, but it's a very real phenomenon and has caused the death of many a MOSFET. If you look at the vast majority of MOSFET datasheets, you'll see curves for various periods, such as 10ms, 1ms and 100µs. They don't show operation at DC, because they don't handle it well. Switching MOSFETs are designed for switching!
Of course, there's no good reason that you can't use lateral MOSFETs - the same as used for audio amplifiers such as the Project 101 MOSFET power amp. Lateral MOSFETs such as the ECX10N16 (125W, 160V, 8A) have a much greater SOA than bipolar transistors, and the primary limit is simply power dissipation. For example, if the device has a drain-source voltage of 100V, the maximum current is limited to 1.25A, because they are rated for 125W. If the voltage is 50V, current is 2.5A (also 125W). As standard practice, all power ratings are for a case temperature of 25°C. Lateral MOSFETs are much more expensive than BJTs or switching MOSFETs, and are uncommon in regulators or pre-regulators. There are a few MOSFETs (other than lateral types) that are designed for linear operation, but they are hard to find, and usually very expensive.
Everything in electronics ends up being a compromise. We compromise noise for efficiency, and (in many cases) we may compromise efficiency for ease of construction. There is no single 'ideal' solution, so a trade-off is always necessary somewhere. Simple techniques are usually easy to build but inefficient, and as the circuit is improved along the way, it will end up being more complex. With modern SMD (surface mount device) construction, there's little or no cost in terms of PCB real estate, but the final product may not be able to be repaired if it's smothered in tiny SMD parts.
The best design for any given purpose is not necessarily the most efficient or the most expensive, and it may not even require particularly good regulation. The best design is one that suits the purpose, and for DIY, is easy to build and service should that ever be necessary. Highest efficiency almost always means greatest complexity, and that's especially true of switchmode circuits. If you intend to use the supply regularly, it needs to provide the functions you think are essential, and ideally can be modified later to make improvements should they be found necessary.
A bench supply doesn't need 0.01% regulation, because it's invariably used with test leads that will degrade the regulation anyway, even if they are of sufficient gauge to minimise voltage drop. To use test leads that don't affect regulation means that you have to employ remote voltage sensing, so you need five leads for a dual power supply. In all the years I've been using power supplies, I've literally never wished that I had remote sensing capabilities, because most of the time a small voltage variation is of no consequence. It's a different matter if you are performing particularly precise measurements, but if that's the case you need a supply that's been designed for the purpose. DIY usually won't provide the performance needed without considerable effort and expense.
Major manufacturers may spend hundreds (possibly thousands) of hours designing a supply that can be classified as true 'laboratory grade', and few individuals have the time, resources or money to spend on multiple prototypes to arrive at a final design. For example, a small miscalculation when designing a custom power transformer will mean that a new one has to be built. This can add a significant financial burden if you are building a single supply for your own use.
Like the article discussing Bench Power Supplies - Buy Or Build?, this is not intended to show complete and/or tested and verified circuitry. It's a collection of ideas, selected to show common ways to minimise regulator dissipation. Each has been simulated (other than the switchmode versions), and has advantages and disadvantages. The not-so-small issue of protecting the regulator should the voltage be set to maximum and the test leads shorted has not been addressed with any additional circuitry. If the regulator is a 3-terminal type (unlikely given the voltage and current suggested in the introduction) this should be 'automatic', but for a discrete regulator some form of instantaneous dissipation limiting must be considered.
Bench power supplies are not trivial, and the protection requirements become quite onerous for a supply that can deliver high voltage and current. Since most (or at least a great many) applications today require a dual supply, everything is doubled. I consider that a supply that can deliver up to ±25V at perhaps 2A or so to be a sensible limit for a home made supply. Anything larger becomes very expensive to build, and is much harder to protect against accidents or misuse (deliberate or otherwise).
Many pre-regulator circuits rely on a separate 'always on' supply to power the control circuitry (always on when the power supply is turned on, not 24/7). While the current needed is usually quite low, this adds to the overall circuit complexity, and it's worse for a dual supply. In addition, separate floating supplies may be required for digital meters, and while inexpensive, they too add to the build complexity and final cost. Some people don't care, and simply want to build the best supply possible that suits their needs. If that's your goal, then choose wisely, and be prepared to build several prototypes before you get everything just right.
The most useful reference is shown below, along with the ESP article. The HP circuit is an advanced design (for its time), and uses tap switching to obtain 0-50V at 0-10A output. There are countless circuits on the Net, with some being perfect examples of what not to do, while others are interesting (which should be in quotes for some). Otherwise, there are few other references, because the information available was either far too complex to consider, or had issues that would make a reference less than useful. The references to using MOSFETs in linear mode are worthwhile just for interest's sake, as many people are unaware of the likely problems.
+ +![]() |
Elliott Sound Products | PSU Simulation
On the surface, simulating a power supply (regardless of the simulator you use) is simple. A sinewave generator producing the nominal mains voltage and frequency, perhaps an 'ideal' transformer with the correct ratio(s), some diodes and filter capacitors. Then you apply a load to see the ripple performance, and perhaps examine regulation, filter capacitor ripple current and diode dissipation. The load can be varied so you can observe the performance with different output current.
It all seems very straightforward, but there are so many traps and 'gotchas' hiding within this simple model that you will almost certainly be bitterly disappointed when the 'real thing' (i.e. the finished physical power supply) doesn't behave even remotely like the simulated version. The reasons are often puzzling, but there are many things you need to do to get a reasonably accurate simulation working.
There are many reasons to simulate power supplies, not the least of which is that you can do anything you like. Short circuits or insanely low load impedances won't harm the simulator one bit, where a physical supply may be destroyed by the same abuse. This is one of the beauties of simulations - you can try anything, however outrageous, without risking damage to hardware or yourself. Bench testing means that you have to get things right, or 'bad things' are likely to happen (such 'bad things' can also be rather expensive).
I suggest that you read the Transformer articles (there are three in the series) to ensure that you have a full understanding of the principles of transformers before you try any serious simulations. There are many factors to consider, but I will only cover the essentials here. While there are some liberties taken in this article, you will get simulations that are closer to reality than you can ever get if you don't take the most important factors into account.
Some may well ask why anyone would bother simulating such a simple circuit that's easy to put together on the bench. There are many reasons, with the most important being understanding what goes on. A simulator lets you do anything you like, and it also lets you measure things that are very difficult in a real power supply. You can use a huge filter capacitor, and none of it costs a cent - no parts to buy, no soldering, and an opportunity to examine aspects of performance that can't be done with the physical supply. Even something as simple as measuring capacitor ripple current will upset a real circuit due to the resistance of the test leads and the internal resistance of the ammeter. In a simulation, it's as easy as probing a wire for voltage or current.
Simulators often allow the use of 'ideal' parts, which you can mess with by adding series or parallel resistors, or specifying the part's parameters. This lets you see the changes quickly, and with a resolution that is usually impossible with normal test equipment. In so doing, you can use it as a learning exercise. A great many ESP articles use simulations, because it lets me produce graphs and measurements that would be very difficult and time-consuming otherwise.
Note Carefully: There is one major difference between simulating a power supply and the 'real thing'. In a simulation, it's usually not possible to account for saturation. While this might seem to be a serious limitation, it's not (although the lack of saturation does increase primary current settling time if the voltage is applied starting at 0V). Any transformer has the maximum flux density in the core at no load, and as the loading is increased, flux density decreases. This may be counter-intuitive, but it's a fact nonetheless, so any transformer simulation you perform is only very slightly affected by the inability to simulate core saturation. For the most part it can be ignored, because saturation is (almost) irrelevant at reasonable power levels.
While this article shows a step-down transformer for the examples, step-up transformers - as will be used with valve (vacuum tube) gear - can be simulated just as easily. The HV secondary winding will have a fairly high resistance, so it's easy to measure a few samples with an ohmmeter, and the forward resistance of valve rectifiers (which IMO can't be recommended for anything other than the bin) is easily simulated by adding the value shown in the datasheet for forward resistance (which may need Ohm's law to determine). Depending on the rectifier valve, expect somewhere between 50Ω and 100Ω for each diode. Otherwise, there are no surprises, other than the inevitable surprise you get when you realise just how accurate a simulation can be, once everything is included. Even choke input filters can be simulated if you wanted to go that far, and the results will be as good as your input data.
Many people use LTspice for simulations, and there are a few tricks that are necessary to 'create' a transformer. Firstly, you need to create two inductors (one for primary, one for secondary) with the inductance ratio set according to the turns ratio. For the examples used below, you'd start with the primary inductance (L1) at (say) 10H, and the secondary inductance (L2) at 100mH (a 10:1 turns ratio). Note that the inductance ratio is the square of the turns ratio! Then you must add a spice 'directive', such as 'K1 L1 L2 1' to tell the simulator that the two inductances are coupled with a coupling factor of unity. If preferred, you can use a figure just below unity (e.g. 0.9999), but that's rarely needed and makes little difference. The series resistance of each inductor can be included when it's created, but it's better to keep this external so it can be seen readily (and appropriately annotated). Annoyingly, LTspice complains if you tell it that the transformer is T1 - the letter 'T' is reserved for a transmission line. Creating a simple switch is a bit of a task in LTspice, so you'll need to look at the circuits below (Figure 1).
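The square-law relationship is worth a quick check before setting up the coupled inductors. For a turns ratio of N, the secondary inductance is the primary inductance divided by N²:

```python
def secondary_inductance(l_primary, turns_ratio):
    """Secondary inductance for a coupled-inductor 'transformer':
    the inductance ratio is the SQUARE of the turns ratio."""
    return l_primary / turns_ratio ** 2

l2 = secondary_inductance(10.0, 10.0)   # 10H primary, 10:1 turns -> 0.1H (100mH)
```

Getting this wrong (using the turns ratio directly instead of its square) is a common mistake, and gives a secondary voltage that's out by the square root of the intended ratio.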
While LTspice does have a component called a 'switch', it isn't defined, and a 'bespoke' definition has to be created for it. Nothing particularly difficult if you're familiar with the process, but more work than it should be.
If you use SIMetrix, you simply add an 'ideal transformer', with the ratio set for the desired value(s) between primary and secondary. The coupling factor can be reduced from the default of unity, but again, it's not necessary. You only have to specify the primary inductance, and the secondary looks after itself. Overall, I find SIMetrix far easier to use than LTspice, but the free version does come with limitations. A simple switch is easily created, and you just need a DC (single pulse) supply delayed by 5ms (50Hz) or 4.1667ms (60Hz).
In either SIMetrix or LTspice, the switch is driven from a single pulse source, with a voltage of between 5-12V, and the output delayed by the required time for ¼ cycle so the mains is switched on at the peak of the waveform (see the reasons for this below). The pulse duration (period where the output is at 5-12V) must be set for longer than your simulation run-time. Around 10 seconds is usually more than enough, but you can make it longer if preferred.
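The required delay is simply a quarter of the mains period:

```python
def peak_switch_delay(mains_hz):
    """Delay for the switch so the mains is applied at the waveform peak:
    one quarter of a cycle (the sine reaches its peak at 90 degrees)."""
    return 1.0 / mains_hz / 4.0

d50 = peak_switch_delay(50.0)   # 5.0ms for 50Hz mains
d60 = peak_switch_delay(60.0)   # ~4.167ms for 60Hz mains
```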
While there are many other versions of Spice available, I can't comment on how to use them all for the simulations shown below. If you use something other than SIMetrix or LTspice, you'll probably also know how to create the various parts as described above in the Spice version you use. If not, or if you don't use a simulator, then I suggest SIMetrix. IMO it's far easier to use than LTspice, although it does have limitations in the free version. LTspice is free and virtually unlimited, but somewhat predictably the parts offered are those made by Linear Technology (although other libraries are available).
While there are web-based simulators available, this isn't an approach I'd take. If the load is high (many users at the same time) your simulation may be queued, or if the queue is full, it won't run at all. A 'desktop' application is always preferable, and you can have as many simulations as you like, all stored on your hard drive and ready to go. For example, I have over 5,600 different simulations, saved over many years. I can load and run any of them with a few mouse-clicks, and in many cases a 'new' simulation is built from something pre-existing, with a few modifications as required.
Figure 1A, 1B - LTspice (Top) and SIMetrix (Bottom) Simulation Circuit
The two circuits shown above are screen captures, and they are functionally identical. The LTspice drawing is larger only because that's how it turned out when I took the screen grab. The remainder of the circuitry described below is simple to include in either version of spice, and changing the series resistance and other parameters is equally straightforward. The switch simply enables you to measure the primary current without the DC offset, and without having to run the simulation for longer than necessary. I would still advise that you use around 250ms for the simulation, with data output starting from the 150ms point. All circuits take some time to settle, and delaying the output trace means that you see the 'steady state' conditions.
Note that the 'pulse' directive is identical for both versions. The first number is the start voltage, followed by the output voltage, time delay, risetime, fall time and duration. The way the two versions covered here operate is very similar (probably close to identical), but the schematic capture and probe functions are quite different. SIMetrix has (IMO) a better user interface, and better (simpler) control of the graphical output. However, moving from one to the other isn't easy, as both have idiosyncrasies that take time to learn.
Figure 1C - SIMetrix Alternate Switch Circuit
The drawing above shows how you can implement a switch in any simulator, including those that don't have this functionality. The circuit shown is a MOSFET relay, and you can use any available MOSFET that has the voltage rating needed, as well as a low RDS (on). Note that the pulse generator is set for an output voltage of 10V, to ensure the MOSFETs conduct fully. The IRFP240 (20A, 200V, 140mΩ RDS-on) MOSFET is a good choice, but any other MOSFET with similar ratings will work just fine. This shows one of the nice things about simulators - you can do things in a simulation that would be very tiresome in a real circuit. This can be a shortcut, or to achieve an end goal that would otherwise be hard to realise due to simulator limitations. For what it's worth, I can tell the reader from firsthand experience that setting up a physical peak switching circuit is not simple, even though it appears so in a simulator! I built one several years ago (and results are shown in the transformer articles), and it was anything but a simple circuit when completed. It can switch at either the zero crossing or peak of the waveform, but it never made it as a project because it's a 'single purpose' test tool that few people will ever need.
You don't have to include the peak voltage switch unless you intend to examine the primary current. Personally, I think it's both important and educational to do so, but for a quick simulation it's not essential. Leaving it out means there's less faffing around, and the secondary will perform normally (transformers don't pass DC, so the offset is immaterial). However, it's worthwhile to include the delayed switch so you can measure the primary current. I suggest you try it both without and with the switch, so you get an appreciation for the way inductive components react when driven from a voltage starting at zero vs. the voltage starting at the peak.
Also, be aware that peak switching causes the first half-cycle current to be very high. In the simulations shown here, it will be close to 20A, but being a simulation that doesn't matter. If you wish to measure the RMS current, you need to exclude the first 20ms of the simulation or the result will be wrong. Most simulators let you start the output trace at a specific time from the start of the analysis.
As already suggested, one of the aims of simulation is to learn how components behave, and simulation is not just a 'short-cut' design process. While it also serves that purpose admirably, you can gather so much more information from your simulated circuits, with zero risk. Simulators are powerful tools, and when used wisely can provide you with a far greater knowledge and understanding of circuit behaviour than just building the circuit and using it. Do you trust the simulation implicitly? No. It can only be trusted if you include all of the 'real world' parasitic components, something that's close to impossible. In reality, even that doesn't mean your circuit will work as expected, and only experience will tell you if the simulation is 'sensible' or not. A simulator has no difficulty at all with telling you about circuit performance at -200dB or at a voltage of 1GV (1,000MV), but of course this is meaningless with any real circuit. However, there is so much more to them that not using one means that you miss out on a great deal of useful information.
You may be curious why one should go to the trouble of switching the mains at the peak of the waveform. In a simulation, the transformer is 'ideal' and has few losses. We add winding resistance, but we can't easily simulate core saturation. Saturation is at its worst when the mains is turned on at the zero crossing, but the simulator won't show core saturation without a great deal of messing around. By switching at the peak, saturation effects are minimised and we don't see a large (and completely unrealistic) DC shift in the primary current. If the input is switched at the zero crossing, it can take several seconds of simulation before the input current returns to normal (i.e. AC, with no DC shift). No-one wants to wait for ages until a simulation provides results that are useful.
It's been said that the famous Bob Pease refused to use and/or hated simulations, but that's not actually true [ 5, 6 ]. However, he was able to do many of the complex calculations either by hand or in his head, something most of us can't do. The important thing is to ensure that any simulation is 'sane', and doesn't give answers that a quick mental calculation says are simply impossible. The sanity check is essential in any simulation, but it's a step that most people don't take in their quest for a quick answer. As shown in this article, to get a good result, you need to take a lot of different factors into consideration. Failure to do so gives results that can't be trusted and aren't useful.
The most common approach will be something along the lines of that shown in Figure 2. A sinewave generator is set for 50Hz, with an output of 325V peak (230V RMS). The transformer ratio is (for the sake of simplicity) 10:1, so the output voltage will be 23V RMS. Four 'ideal' diodes are used initially for the bridge rectifier, and there's a 4,700µF filter cap. The load is 31 ohms, since we expect an output current of about 1A DC. Naturally, you can substitute a different voltage, frequency, load, etc., depending on the end use for the supply and your local mains. Depending on the simulator you use, the 'ideal' diode model may be truly ideal (zero forward voltage drop for example), or (as is the case with SIMetrix) may have a 'normal' voltage drop but no voltage limit. Feel free to use an existing diode model if you prefer, and consider that some simulators may not even include an 'ideal' diode. Make sure that the current and voltage ratings are suitable.
Figure 2 - Basic PSU Simulation Circuit
At first glance, there's nothing wrong, and it will appear to simulate perfectly. You will certainly get close to the expected output voltage, and varying the load resistance will cause more or less ripple at the output. You can measure capacitor ripple current as well, as this is an important (but often overlooked) parameter. Most simulators let you measure the current in a wire, either with a fixed inline current probe or by other means. However, should you build the circuit and test it, you'll quickly find that reality is quite different from the simulation.
Figure 3 - Input Voltage & Current Of Figure 2 Circuit
When simulated, the output voltage is 30.2V DC (average), load current is 974mA and the primary current is 319mA RMS (note the 1.51A peaks!). Output ripple is 1.78V peak-peak, or 539mV RMS. This is pretty much what you would expect, but if you were to build and test the same supply, it will be very different. Input current and output voltage will be lower and, perhaps surprisingly, the ripple voltage will be a little lower as well. A simulation using the above circuit will cheerfully claim that the capacitor's ripple current is a little over 3A RMS.
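These figures pass the kind of quick sanity check recommended earlier. The sketch below is a first-order estimate only (the 0.7V diode drop is an assumption, not a value taken from the simulation), so it lands slightly above the simulated 30.2V, which also includes winding losses and ripple:

```python
import math

# First-order sanity check for the Figure 2 circuit (values from the text)
V_primary_rms = 230.0
V_peak = V_primary_rms * math.sqrt(2)           # mains peak, ~325V
turns_ratio = 10.0
V_secondary_rms = V_primary_rms / turns_ratio   # 23V RMS at the secondary
V_diode = 0.7                                   # assumed drop; two diodes conduct in a bridge

# Peak of the secondary waveform, less two diode drops, approximates the DC output
V_dc_est = V_secondary_rms * math.sqrt(2) - 2 * V_diode

print(f"Mains peak: {V_peak:.1f} V")            # ~325.3 V
print(f"Estimated DC output: {V_dc_est:.1f} V") # ~31.1 V (simulation: 30.2 V under load)
```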
Note that the 'remnant' of sinewave shown in the current waveform is the magnetising current, which is 73mA for a 10H primary inductance. This is not what the actual primary current waveform will show, because the core of most transformers is driven into slight saturation at the normal input voltage, and the input current waveform is not a sinewave. If you wish to see what the magnetising current really looks like under a range of conditions, see Transformers, Part 2, section 12.1. This also shows the voltage waveform, and it's quite apparent that the 'flat-topped' sinewave is the normal condition.
There are several reasons for the discrepancies between a simple simulation and the real circuit, with the main one being the transformer itself. Simulators have 'ideal' transformers, but the real world does not. Any physical transformer has resistance in the windings, and this can make a surprisingly large difference to the outcome. Consider the data shown in Table 1 (below), which shows the primary resistance of more-or-less typical toroidal transformers. The figures are similar for E-I types, but the transformer will be larger for the same VA (volts × amps) rating.
The timed switch is a 'special' adaptation (as described above in Section 1) that ensures that the transformer's inductance doesn't cause a DC offset in the mains input current. It's shown as 5ms, as that represents a 90° phase shift in the waveform, and power is applied to the transformer at the peak of the voltage waveform. The exact mechanism for providing the delayed switch depends on the simulator you use, and because there are so many variants, that's something you'll have to work out for yourself. If it's not included, there is a DC offset of nearly 82mA even 200ms after the simulation has started. The same applies for the other circuits as well. It takes SIMetrix (the simulator I use most) almost 5 seconds before the DC offset has fallen to (close to) zero. For 60Hz mains, the delay time is 4.1667ms (¼ of the 16.6667ms cycle time).
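The delay is simply one quarter of the mains cycle. A trivial sketch (the helper name is mine) confirms the values quoted for 50Hz and 60Hz:

```python
def peak_switch_delay_ms(frequency_hz: float) -> float:
    """Delay from the zero crossing to the first voltage peak (one quarter cycle)."""
    return 1000.0 / (4.0 * frequency_hz)

print(peak_switch_delay_ms(50))  # 5.0 (ms)
print(peak_switch_delay_ms(60))  # ~4.1667 (ms)
```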
Without the timed switch, virtually all simulators that work properly will show the DC offset. The ideal time to close any switch feeding a transformer is at the peak of the AC waveform. This is just as true with a real transformer as a simulated version, and is counter-intuitive unless you understand AC inductor theory very well. Without the timed switch, there will be a DC offset in the primary current, and the simulation will have to run for several seconds before the average reaches zero. Real transformers are no different! It takes less time for a 'real' transformer to reach zero DC offset in the primary, largely due to losses and partial unidirectional core saturation if the power is applied at the zero crossing. By default, simulators start their output voltage from zero, and this creates problems and poor correlation with reality.
The first step to getting a simulation that is closer to reality is to include the transformer winding resistances. It's also advisable to include the mains impedance, and this becomes increasingly important with larger transformers. Typically, I suggest that you assume 1Ω mains impedance for 230V operation, and 0.25Ω for 120V. This may be pessimistic or optimistic depending on the mains where you live, but it's accurate enough for most purposes.
The most essential parameter is primary resistance, and the table shows this, along with regulation figures that rarely match reality, because they assume a resistive load. This is almost never the case with typical power supplies, so the quoted regulation is of little practical use. It can (at least in theory) be used to determine the secondary resistance, but in most cases (where it's provided at all), it's a representative figure, and doesn't necessarily tally with measured performance.
Table 1 - Typical Toroidal Transformer Primary Resistance & Regulation
VA | Resistance (Ω) | Regulation | VA | Resistance (Ω) | Regulation |
4 | 1,100 | 30% | 225 | 8 | 8% |
6 | 700 | 25% | 300 | 4.7 | 6% |
10 | 400 | 20% | 500 | 2.3 | 4% |
15 | 250 | 18% | 625 | 1.6 | 4% |
20 | 180 | 15% | 800 | 1.4 | 4% |
30 | 140 | 15% | 1,000 (1kVA) | 1.1 | 4% |
50 | 60 | 13% | 1,500 (1.5kVA) | 0.8 | 4% |
80 | 34 | 12% | 2,000 (2kVA) | 0.6 | 4% |
120 | 22 | 10% | 3,000 (3kVA) | 0.4 | 4% |
160 | 12 | 8% |
The table only shows the primary resistance, and getting info on the secondary resistance is difficult without a (very) low ohms meter. While I have described one in the projects pages, it's not a common requirement and few people will have the means to run the tests. Instead, the secondary resistance can be estimated. We can take a 300VA transformer as an example. The primary resistance is about 4.7 ohms, and if we assume a 10:1 ratio, the secondary resistance should be no more than 0.1 ohm. There is no universal formula for this, but transformer makers usually try to ensure that the power dissipated in the primary winding(s) is either the same or less than that in the secondary. This is partly because the secondary is wound on top of the primary, and has slightly better cooling. As a (very) rough approximation, the secondary resistance should be in the order of ...
Tr = Vp / Vs (Tr is turns ratio, Vp is primary voltage and Vs is secondary voltage)
Tr = 230 / 23 = 10 (For this example)
Rs = ( Rp / Tr² ) × 1.1 (Rs is secondary resistance, Rp is primary resistance and 1.1 is a 'fudge factor')
Feel free to ignore the 1.1 'fudge factor', but that worked out to be a fairly close average for the transformers I tested with a low ohms meter. There's no truly scientific explanation for the fudge factor, but without it, the calculated transformer secondary resistance was less than the measured value. It's not a fixed value though, and you may find significant variations if you run tests yourself.
Based on the calculations shown, our 300VA 10:1 test transformer will have a secondary resistance of about 52mΩ. If simulated with these values, regulation is somewhat better than the 6% estimate provided in Table 1. However, it's a good start, and much more likely to give an (acceptably) accurate simulation than the ideal transformer alone. When a transformer feeds anything other than a resistive load, many other issues come into play. Many of these are discussed in detail in the transformer articles, but they are just as relevant for a simulation.
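The estimate is easily scripted. This sketch applies the turns-ratio formula and the empirical 1.1 'fudge factor' discussed in the text (the function name is mine):

```python
def estimate_secondary_resistance(r_primary: float, v_primary: float,
                                  v_secondary: float, fudge: float = 1.1) -> float:
    """Estimate secondary winding resistance from the primary resistance and turns ratio."""
    turns_ratio = v_primary / v_secondary
    return (r_primary / turns_ratio ** 2) * fudge

# 300VA example from the text: 4.7 ohm primary, 230V:23V (10:1 ratio)
rs = estimate_secondary_resistance(4.7, 230.0, 23.0)
print(f"Estimated secondary resistance: {rs * 1000:.0f} mOhm")  # ~52 mOhm
```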
Note that for transformers with a 120V primary, the winding resistance is one quarter of that shown in the table. A 300VA transformer would therefore have a primary resistance of 1.175 ohms. This applies whether the transformer has two 120V windings in parallel or has a single 120V winding. Each winding is roughly half the resistance of a 230V winding, and they're in parallel.
One specification that is never provided for mains transformers is the primary inductance. While it would seem important, in reality it's not. The inductance is not a fixed value, and measuring a transformer with an inductance meter will usually give you an answer, but it doesn't serve any real purpose. When you create an 'ideal' transformer in a simulation, you do need to provide the primary inductance. In most cases, a value of around 10H is fine for 230V primaries, or 5H for 120V. You can use more or less, but you will need to run tests to ensure that the end result is 'sensible'. In this context, 'sensible' means that the no load current will be somewhere between 50-100mA. This is easily determined using the formula ...
Ip = Vp / ( 2π × f × L ) (Ip is primary current, Vp is primary voltage, f is frequency and L is inductance)
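Checking that a chosen primary inductance gives a 'sensible' no-load current is a one-liner; this sketch uses the formula above with the 10H, 230V, 50Hz values from the text:

```python
import math

def magnetising_current(v_primary: float, frequency: float, inductance: float) -> float:
    """No-load (magnetising) primary current for an ideal primary inductance."""
    return v_primary / (2 * math.pi * frequency * inductance)

i_mag = magnetising_current(230.0, 50.0, 10.0)
print(f"Magnetising current: {i_mag * 1000:.0f} mA")  # ~73 mA, inside the 50-100mA target
```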
There's plenty of leeway, but if the inductance is too high it will take some time for the simulation to stabilise. This happens because simulators start the input voltage from zero, and that creates a DC offset due to the inductance. A load will make the simulator settle faster, but you generally need to simulate various different loads, especially if the supply is for a Class-AB power amplifier. The DC current varies from a few 10s of milliamps to several amps in sympathy with the signal.
To obtain a more realistic simulation, it's obviously essential to include the transformer's winding resistances. We should also include the filter capacitor's ESR (equivalent series resistance), as this affects the ripple voltage and the cap's ripple current. Unfortunately, this isn't always easy to find in the datasheet (assuming that you can even get the datasheet), so a few representative figures are provided in Table 2. It's necessary to add this, because most capacitors are considered 'ideal' by simulators, so they have no losses. The values shown below are taken from the data for a commercial ESR meter. Missing values indicate that the ESR is outside the range of the meter, or that the value is not readily available in that voltage (e.g. a 1µF/ 10V cap is uncommon). Mostly, new capacitors should measure less than the values shown, but ESR increases as a capacitor ages, and a high reading is a reliable indicator that the cap is on the way out.
Table 2 - Typical Electrolytic Capacitor ESR (Ω)
µF / V | 10 V | 16 V | 25 V | 35 V | 63 V | 160 V | 250 V |
1.0 | 5.0 | 4.0 | 6.0 | 10 | 20 | ||
2.2 | 2.5 | 3.0 | 4.0 | 9.0 | 14 | ||
4.7 | 2.5 | 2.0 | 2.0 | 6.0 | 5.0 | ||
10 | 1.6 | 1.5 | 1.7 | 2.0 | 3.0 | 6.0 | |
22 | 5.0 | 3.0 | 2.0 | 1.0 | 0.8 | 1.6 | 3.0 |
47 | 3.0 | 2.0 | 1.0 | 1.0 | 0.6 | 1.0 | 2.0 |
100 | 0.9 | 0.7 | 0.5 | 0.5 | 0.3 | 0.5 | 1.0 |
220 | 0.3 | 0.4 | 0.4 | 0.2 | 0.15 | 0.25 | 0.5 |
470 | 0.25 | 0.2 | 0.12 | 0.1 | 0.1 | 0.2 | 0.3 |
1,000 | 0.1 | 0.1 | 0.1 | 0.04 | 0.04 | 0.15 | |
4,700 | 0.06 | 0.05 | 0.05 | 0.05 | 0.05 | ||
10,000 | 0.04 | 0.03 | 0.03 | 0.03 |
In most cases, and especially if you use high capacitance (e.g. 10,000µF), the ESR is very low, and some allowance may need to be made for wiring resistance. While we usually tend to think that 50mm of reasonably thick wire has virtually no resistance, it adds up when you're looking at a capacitor ESR of less than 0.05Ω and a peak secondary current of over 8A with the circuit shown. Everything makes a difference, although it will often only show up in simulations, because the real world has so many other variables. This includes typical test gear (multimeters in particular), which cannot show the peak value, and the RMS component is only accurate if the meter has 'True RMS' capability and can handle the crest factor (the difference between the peak and RMS values).
Figure 4 - Realistic PSU Simulation Circuit
Once we include the mains resistance (Rmains), primary resistance (Rp), secondary resistance (Rs) and ESR, the results will be much closer to reality. Now we find that the input current is just under 240mA RMS, average output voltage is 29.58V, load current is reduced to 954mA, and ripple is 1.74V peak-peak (512mV RMS). However, while certainly more accurate than the Figure 2 circuit, there will still be a difference between what you simulate vs. what you measure in a real circuit.
Figure 5 - Input Voltage & Current Of Figure 4 Circuit
The first thing you should see is that the peak input current is reduced, and the current waveform spikes are a little broader. This is because the resistance reduces the peak current slightly, and the diodes conduct for a little longer in order to 'top-up' the filter capacitor. The DC output is 29.58V (29.6V near enough), and there is 511mV RMS of ripple. It's reduced mainly because the voltage is lower, so there's a bit less current in the load. The capacitor's ripple current is 2.2A with the circuit shown.
Mostly, if you use the Figure 4 circuit for simulations you will be more than close enough to get a reasonable representation of reality. There are errors, but they pale into insignificance compared to normal mains fluctuations. We can still do better, but mostly there's no real need.
Table 1 shows the regulation that can be expected from the transformers shown. However, it's important to understand that regulation is always specified for a resistive load. With few exceptions, this is not how the transformer is used. Unfortunately, it's unrealistic to expect that manufacturers could provide a regulation figure for a 'typical' power supply, because there is no 'typical' supply.
Without exception, when a standard supply as described here is used, the regulation will be much worse than the quoted figure. In addition, the rated secondary voltage is specified at full load (resistive), so at low output current the voltage will be higher than you expect. I used a 10:1 transformer ratio, and it was assumed that this would provide an output voltage of 23V RMS. In reality, a transformer rated for 23V output would have an output of perhaps 24.4V RMS with 230V mains and no load (6% regulation assumed).
When a power supply as shown here is used, the regulation will be much worse than 6%. When the load is varied from zero to 6A, the regulation is 15.8%, with the average output voltage varying from 30.31V (no load) to 25.51V at 6A output. 6A is getting close to the maximum output allowable for a 300VA transformer, with a calculated input of 241VA. One thing that you can assume with most supplies is that the full output isn't used all the time, and transformers don't care if they are overloaded, provided that the average VA rating isn't exceeded over a period of time that's variable, depending on the physical size of the transformer.
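The 15.8% figure can be reproduced from the quoted voltages. Note that this sketch expresses the drop relative to the no-load voltage, which is the convention that matches the quoted result; regulation is often quoted relative to the full-load voltage instead, which would give about 18.8% here:

```python
def regulation_percent(v_no_load: float, v_full_load: float) -> float:
    """Voltage regulation, expressed relative to the no-load voltage."""
    return (v_no_load - v_full_load) / v_no_load * 100.0

# Voltages quoted in the text for the 300VA supply, no load vs. 6A load
print(f"{regulation_percent(30.31, 25.51):.1f} %")  # 15.8 %
```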
For example, overloading a 300VA transformer to 150% for 30 seconds and operating it well below full output for 30 seconds will be fine, even if this is continued all day. Doing the same with a 5VA transformer is ill-advised, because it has very little thermal mass. Forced air cooling (i.e. a fan) can increase the VA rating of any transformer, but it's impractical for anything less than 160VA or so. Transformers aren't easy to fan cool, because the air only has access to the outside of the windings (and the core for E-I types), reducing its effectiveness.
Regulation gets worse if a transformer gets hot, because the winding resistance increases. Ideally, no transformer should ever reach a temperature such that you can't place your hand on it without being burned. High temperatures cause greater losses and risk insulation breakdown if maintained over a long period.
You may hear terms like 'clean' or 'dirty' power. In theory, these refer to the quality of the AC power waveform. The ideal power waveform is a pure 50Hz (or 60Hz) sine wave. This is a mathematically pure waveform, which is never achieved in reality. Even the 'cleanest' power you'll get from the mains is somewhat distorted, and this is due to thousands of loads connected to it across the region served by your power company. The reasons are many and varied, but provided the distortion is relatively low (up to around 5% if you were to measure it), it won't upset anything.
If you were to measure the AC waveform, you'll nearly always find that it's not a sinewave. The most common waveform is shown below, and it's this that you need to use for an accurate simulation. While it does add some complexity to the overall simulation, producing a reasonable facsimile of the typical mains waveform isn't especially difficult. You do need to understand the reasoning behind the 'distortion generator' though, because it makes a surprisingly big difference to the outcome.
Figure 6 - Mains Waveform At 23V RMS Output
The above is not a sinewave! The first thing to notice is that the peak voltage is not 32.5V as expected (23 × √2), but is only 31V peak. That's because the waveform is distorted, and you can see the 'flat-top' on the peaks. It's not really flat though - it slopes downwards from the initial peak before returning to 'normal'. This is the 'new normal' for mains almost everywhere, regardless of voltage or frequency. The only way to know the actual peak voltage vs. the RMS value is to measure the peak with an oscilloscope, and the RMS voltage with a 'true RMS' reading voltmeter. The 'flat top' is caused by (literally) many thousands of power supplies all drawing current at the peak of the AC waveform, and the supply we are simulating is no different.
Fortunately, it's not imperative that the exact waveform as shown above be provided in a simulation. The important part to get right is a 'flat-top' waveform that approximates the actual, with the right peak and RMS values. Ultimately, it depends on the simulator you use, and how far you are willing to go to get that approximation. If you have a VCVS (voltage controlled voltage source) in the simulator you use, it's quite easy to achieve. The circuit shown below is optimised for 230V mains, but it's easily modified to suit 120V.
Figure 7 - Simulated Flat-Topped Mains Waveform Generator (230V RMS)
There is an additional 'trick' included in the above. The 50Hz generator has a peak output of 330V (233V RMS), and the clipping circuit is perfectly straightforward. It's set up so that the waveform is clipped when it exceeds 315.6V. The RMS voltage is barely affected. As noted above (Figure 2 circuit), the timed switch ensures that the mains is connected at the peak of the AC waveform, so there's no DC offset in the primary current. Because the clipped AC waveform is a high impedance, a VCVS (voltage controlled voltage source) was included as a unity gain buffer. This is an 'ideal' part, with an output impedance of zero.
The current waveform is shown next. This is the reason for the lower than expected DC voltage, because the current is not a nice continuous flow, but has high amplitude peaks when the diodes conduct. The total power from the mains is 32.52W, with 28.2W dissipated in the load, and the remaining 4.3W dissipated in the transformer (and mains) resistances, as well as the diodes. The filter capacitor dissipates less than 250mW, all due to the ESR.
Figure 8 - Input Voltage & Current Waveform With Flat-Topped Mains Waveform (230V RMS)
With this circuit, the ripple voltage at the output is reduced to 470mV RMS, which is primarily due to the reduced DC output voltage and subsequent reduced current in the load. The capacitor's ripple current is 1.98A, and this is closer to the 'real' value than the other two simulations. In particular, look at the primary current waveform - it's very different from the others shown. Current is supplied to the filter cap and load for a little longer, and the waveshape is modified.
You can verify that this is a good approximation to reality by using one of the ESP current monitors - either Project 139 or Project 139A. These are both very useful tools when working on power supplies, because they let you see the current waveform on a scope, without any risk of contacting live mains.
A very important measurement is the VA rating. In the case shown above, the mains supplies 230V at 240mA, which is 55VA (230V × 240mA). Power factor ( W ÷ VA ) is 0.59, which is typical for most 'linear' power supplies. It's quite obvious that these supplies are not linear, and the term is used to differentiate these supplies from switchmode versions.
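The arithmetic is simple enough to script; this sketch uses the figures quoted in the text:

```python
# Figures quoted in the text for the simulated supply
v_mains_rms = 230.0
i_mains_rms = 0.240     # 240 mA RMS primary current
power_watts = 32.52     # total power drawn from the mains

va = v_mains_rms * i_mains_rms          # apparent power, V x A
power_factor = power_watts / va         # W / VA

print(f"Apparent power: {va:.1f} VA")        # ~55.2 VA
print(f"Power factor: {power_factor:.2f}")   # ~0.59
```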
When a transformer is followed by a bridge rectifier and filter capacitor, it no longer qualifies as a 'simple load'. The current (primary and secondary) is highly non-linear, and this causes much greater losses than the simple primary and secondary resistance would suggest. This is why transformers are always rated in VA (Volts × Amps), and not watts. VA is equal to watts only if the load is linear (i.e. resistive). Real loads are almost always non-linear, and this affects the regulation of a power supply. Unless you include the primary and secondary resistances in your simulation, the end result will be highly optimistic for the output voltage, and highly pessimistic for filter capacitor ripple current. The inherent series resistance reduces the peak capacitor current, and also reduces output voltage regulation.
The topic is important, and isn't something that's well understood by most hobbyists, although it is generally well understood by engineers. In most cases you'll get a maximum of 75% of the rating in VA as watts of output. This is also known as power factor, and the power factor is determined by dividing output power in watts by VA. 75% is the same as a power factor of 0.75 (unity is the best possible, and can only be achieved with a resistive load on the transformer's secondary).
This is something that's covered in detail in the transformer articles, and power factor is explained in detail in the article Power Factor - The Reality in the 'lamps & energy' section of this website. It's a complex area of electrical theory, but it's important. Most people will assume that one should be able to get 300W (continuous) from a 300VA transformer, but that's only true with a resistive load. In a capacitor-input DC power supply (as shown here), the power (in watts) is around 0.75 of the transformer's VA rating. In reality, this is rarely a limitation, because most power amplifiers draw far less average power than the amp's rated power. Even if driven to the onset of clipping, the average power is typically between 10% to 50% of the amp's rated power.
Note that this is usually not the case with most Class-A amplifiers, and (perhaps surprisingly) guitar amps. The latter are often driven into hard clipping (aka 'overdrive') for extended periods, and using less than a 150VA transformer for a 100W guitar amp would be most unwise. Simulations give you the opportunity to test this for yourself, without risking damage.
While we tend to think that 230V (or 120V) mains will measure the claimed voltage, this tends to happen more by accident than by design. In reality, the mains voltage can vary by ±10%, and sometimes more. This is normal, and designs have to take this into consideration. The claimed supply voltage is 'nominal' (existing in name only), and variations are the rule rather than an exception. The frequency (50 or 60Hz) is also nominal, but is much more tightly controlled because if it were otherwise the supply grid would collapse (and that is not an exaggeration). The mains resistance is added to the transformer's primary resistance, so a transformer with a 4.7 ohm primary should use a series resistor of 5.7 ohms (230V - you can work it out yourself for 120V).
We also need to factor in the resistance of the wiring from the distribution transformer, house wiring and the resistance of the mains lead to our power supply. With 230V mains, this usually works out to be around 1 ohm. I've measured the resistance at 800mΩ, so a 2,300W load (10A) causes a voltage drop of 8V at the wall outlet. In 120V countries, expect the resistance to be roughly ¼ of that with 230V, or about 0.25Ω. Around 75-80W is 'lost' just in the mains wiring at maximum current (a current of 10A at 230V or 15A at 120V is assumed). It's very rare for audio equipment to run at maximum power continuously, although many Class-A amplifiers will come close, as will preamps. The latter are not a problem because the current is usually quite low (typically less than ±100mA in most cases).
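The wiring-loss figures above are simple I²R arithmetic; a sketch using the measured 800mΩ value (the helper name is mine):

```python
def wiring_drop_and_loss(resistance_ohms: float, current_amps: float):
    """Voltage drop across, and power dissipated in, the mains wiring."""
    v_drop = resistance_ohms * current_amps
    p_loss = resistance_ohms * current_amps ** 2
    return v_drop, p_loss

# Measured 800 mOhm of mains wiring resistance, 10A (2,300W at 230V) load
v_drop, p_loss = wiring_drop_and_loss(0.8, 10.0)
print(f"Drop: {v_drop:.1f} V, loss: {p_loss:.0f} W")  # 8.0 V, 80 W
```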
The typical variation of mains voltage is up to ±10%, so 230V could be anywhere between 207V and 253V. It's usually less, but the variation will always be at least ±5%, a range from 218V to 242V (all RMS). For 120V countries, that gives a range of 108-132V (±10%) or 114-126V (±5%). This needs to be accounted for when designing a power supply, and you also need to remember that the no load (or light load) output voltage from any transformer will always be greater than that at full load (as determined by the regulation figure for the transformer you intend to use).
For example, if the transformer regulation is stated to be 8%, the output voltage will be 8% greater at no load than at full load (resistive load). When used for a DC power supply, the regulation figure is (roughly) double that with a resistive load, as shown above. For our hypothetical 23V, 300VA transformer, the AC output will be 24.8V with no load (giving about 34V DC).
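The no-load figures follow directly from the regulation percentage. This sketch assumes ideal rectification less two 0.7V diode drops (an assumption, not a figure from the text), which lands close to the quoted 'about 34V DC':

```python
import math

def no_load_voltage(v_full_load: float, regulation_pct: float) -> float:
    """Transformer secondary voltage at no load, given the full-load (resistive) rating."""
    return v_full_load * (1 + regulation_pct / 100.0)

v_nl = no_load_voltage(23.0, 8.0)       # 23V transformer, 8% regulation
v_dc = v_nl * math.sqrt(2) - 2 * 0.7    # assumed 0.7V per diode, two conducting
print(f"No-load AC: {v_nl:.1f} V, approx DC: {v_dc:.1f} V")  # ~24.8 V AC, ~33.7 V DC
```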
Many low-power supplies will be regulated, most often using a 3-terminal regulator IC such as LM7815/ 7915 or variable regulators such as the LM317 or 337. All regulator ICs require some 'headroom', so the input voltage - including ripple - must be a few volts higher than the output voltage. Should the most negative point on the ripple waveform fall below the minimum necessary, there will be some 'breakthrough' of noise, usually heard as a buzz if it leaks into the audio signal by some means.
For projects like the Project 05 or Project 05-Mini, the suggested secondary voltage is 15V AC, usually with a transformer rated for at least 15VA (500mA secondary current). Because very few preamps will draw anything like that much current, I know that the output voltage will be somewhat higher than claimed, because it's quoted at full load. In general, that means that the DC input will be at least 21V, and usually a bit more. Even if the mains voltage falls by 10%, there's still around 19V before regulation.
The 'dropout' voltage for these regulators varies, but it's usually around 2V. Provided the filter caps are big enough (and the two projects mentioned have plenty of capacitance), even the ripple voltage won't fall low enough to cause problems for typical currents - usually a maximum of around 100mA or so. This is something that must always be considered, but we also have to work with what's available. I would rather specify transformers with an 18V secondary (or two 18V secondaries), but these are difficult to obtain. If ripple breakthrough ever becomes an issue, then it's a simple matter to use 12V regulators instead, with the certain knowledge that all ESP designs will work perfectly happily with ±12V supplies. Many hundreds of preamp regulator boards such as those mentioned have been sold, and no-one has ever had a problem with ripple breakthrough.
That doesn't mean that you don't have to verify your design thoroughly. The final usage has to be considered, as well as available transformer voltages. You need to ensure that you have enough filtering to minimise the ripple. I have seen regulator circuits similar to those I provide where the designer has skimped on the filter capacitor, with as little as 470µF suggested in some circuits. That will probably be fine when the mains voltage is at or above the nominal level or with very light loading, but ripple breakthrough is almost a certainty at higher current or low mains.
Simulating a power supply is much more complex than most people realise, especially if you want to get as close to the final physical circuit as possible. Mostly, it doesn't matter all that much, because the mains voltage and waveform will be different at different times of day. If your simulation is off by a couple of volts, that's nothing compared to the changes that will occur naturally due to demand on the supply grid. However, it all helps to get a better understanding of what really happens, how much power is lost, and where.
Once you understand the real factors that affect power supplies, you'll have a much better chance of running a simulation that matches the physical version of the circuit. In most cases, the results you get are more than good enough if you use the Figure 4 circuit. Because the mains itself is so variable, there will never be a simulation that is 100% accurate over all conditions unless your simulation is very complex and accommodates all the variables. Because there are so many variables, that would make the simulation overly complex, and this is rarely necessary in practice.
As noted in the introduction, it's extremely difficult to simulate transformer core saturation. Most simulators will include various cores, but almost without exception they are ferrite, suitable only for switchmode supplies. While it might be possible to find a core that works at mains frequencies, I've not been able to find a combination that's usable. The reality is that it doesn't matter, because the transformer models described here will match reality surprisingly well.
There are (of course) countless articles on-line that discuss switchmode power supply (SMPS) simulations, to the extent that the poor old linear supply is almost forgotten. This is a shame, because linear power supplies have traditionally been one of the most reliable DC sources ever used, while most SMPS designs are only guaranteed to work until they don't. The time between 'working' and 'dead' ranges from months to years, vs. decades for linear designs. Yes, linear supplies can (and do) fail, but they are usually easily repaired with no specialised test gear or SMD rework equipment being necessary.
There are no references as such, other than those provided in-line, which are mainly based on articles published by ESP. There are (very) few examples on-line for power supply simulations, but almost nothing I saw could be classified as usable. Some are nothing short of an unmitigated disaster, and manage to get nearly everything wrong. It should come as no real surprise that the main mentions on the Net of anything that follows proper procedures for simulations exist on the ESP website.
There may be some 'scholarly' articles that cover the topic, but these have to be paid for, and are usually expensive. In addition, you don't even know if the material is useful or not until you've paid for it, an unacceptable practice IMO. ESP's policy from the outset has always been that information should be as accurate as possible, and freely available.
The ESP articles referenced are as follows ...
The fourth reference is an article first published by ESP in 2001, and it uses simulations that are almost identical to those shown here. The difference is that it describes the design of linear power supplies, and only makes reference to the simulations. This article shows you how to set up the simulations to get the best results.
The following is not a reference (I only found it when the article was almost complete), but you may find it useful. The simulations shown don't include the timed switch so primary current measurements aren't accurate, but it does provide some good examples that can be modified to suit your application.
Elliott Sound Products - Snubbers For Mains Transformer Power Supplies
So, do mains (230/120V, 50/60Hz) transformers 'ring' uncontrollably when subjected to an impulse? Where does the impulse come from, and how do we stop it? This seems to be a topic of considerable discussion on forum sites, and (predictably enough) one chap has convinced many of the forum dwellers that the answer is at hand. The real question is "Do I need one, and why?" This article hopes to shed some light on the matter, and is the result of many simulations and bench tests. The only impulse that will befall a transformer is due to diode switching, discounting other external influences such as mains 'transients' (commonly and incorrectly referred to as 'power surges') created by sub-station failures, nearby lightning, or perhaps from other network failures. Ultimately, we are (or should be) only interested in whether we can improve the DC - for example lower ripple, less high frequency noise, etc. Sadly, a snubber will provide neither, but nor will it hurt anything.
WARNING
This article describes power transformers that are connected to the household mains supply. All 230/120V mains wiring is inherently dangerous, and is capable of causing electric shock or death if contact is made with any part of your body. The reader assumes all responsibility for any injury (including death) caused by inadequate safety precautions taken while performing tests, and anyone who undertakes any test described herein must be qualified to work with mains voltages. It may be unlawful in some jurisdictions to work on mains wiring unless suitably qualified.
The number of myths that surround power supply design is quite astonishing. People seem to forget that the power supply for an amplifier or preamp has but one function - to produce DC. The DC then allows the amp (or preamp) to modulate the applied steady voltage in sympathy with the applied audio signal, but rejecting the DC (and most noise thereon) itself. The power supply itself has little influence over the sound quality, provided it is acceptably free of noise (no power supply will ever be 100% noise-free). Where noise is an issue, a simple resistor/ capacitor (RC) filter will reduce it to almost nothing, and this may be necessary for some microphone preamps (for example) where the PSRR (power supply rejection ratio) is minimal. An example of just that type of design is Project 66, because the transistor front-end has limited PSRR.
The easiest way to make a quiet power supply is to use a 'conventional' mains transformer, operated from the mains, and having a good filter network and (for preamps) a quiet regulator. An example is described in Project 05, where the rectified transformer output is heavily filtered using a two stage network, and the regulators are low noise variable types with capacitor bypass on the adjustment pin. The output noise is measurable, but it's so low that it has never caused anyone a problem when used with any number of other ESP projects. Most designs use 'standard' diodes (e.g. 1N4004 or similar for preamps, larger diodes for power amps), but some people prefer 'fast' or 'ultra fast' diodes which do reduce the turn-off transient (but again, don't change the DC at all).

This particular article ended up taking me into unexpected territory. The first obstacle was working out how to examine the small perturbations on a comparatively large mains frequency waveform, and that was solved with what I've dubbed a 'high frequency probe' (HFP). This removes the vast majority of the mains frequency, and allows one to examine the diode switching waveform up close and personal (as it were). Then it was necessary to work out if there is any likelihood of the diode commutation noise being transferred to the DC, and whether there's anything you can do to prevent it. In short, there's nothing that I would do any differently from what I've done all my working life in electronics. There really isn't anything that needs 'fixing'.

There seems to be a train of thought that 'ordinary' mains (i.e. 230V/120V 50/60Hz) transformers will ring uncontrollably due to rectifier diode switching. There are several ways that this actually can occur, but in general it's simply not something that needs 'fixing'. If it does happen, the effects are usually mitigated simply by adding a capacitor in parallel with the secondary winding(s). While this will always create a damped oscillation, it's at a significantly reduced amplitude and frequency. "Yes, but ..." you may hear from some, with a detailed analysis that may even include factors that don't necessarily exist.

The transformer's primary winding is connected to a very low impedance (and fairly well damped) source - the mains wiring (and subsequently any number of distribution transformers, sub-stations, etc.). The impedance of the 230/120V mains circuit is very low, typically less than 1 ohm for 230V or around 0.25 ohms for 120V. If this were not the case, it would be impossible to run a 2kW heater (for example) because the voltage would fall to an unacceptable minimum, affecting everything else on the distribution branch.

As with so many things in audio, there is always a fringe group that thinks (or even insists) that things that are completely inaudible somehow manage to ruin the sound, and that all manner of unnecessary and often expensive solutions are required for the 'perfect sound'. Most regular readers will be well aware that I consider this to be snake-oil, because if you can't hear or measure any 'interference', then quite obviously it isn't a problem. This extends well beyond power supplies of course - there are myths that affect the entire audio chain (even including the mains cable to the power outlet). This is a topic that doesn't warrant further discussion because most 'high end' power cables are verging on fraudulent.

Current regulations for conducted and radiated emissions are fairly strict, but I've seen nothing that would indicate that any low frequency transformer based power supply is able to produce emissions that would cause a product to fail the conducted emissions test. You can add a Class-X (mains voltage certified) capacitor in parallel with the transformer primary, and I have been able to verify that this does reduce conducted emissions slightly (I have a basic conducted emissions test set that I built many years ago). By definition (for a transformer), this capacitance is reflected to the secondary.

In reality, the poor power factor due to current waveform distortion is a far greater problem, but this isn't something that we can change easily (see Section 2.1). At present, the only way to ensure a good power factor is to use a PFC switchmode power supply. One can be confident that this will introduce far more high frequency interference into the nearby electronics, and while (usually) inaudible, this will always be far worse than any mains frequency transformer can produce. You could use a 'choke-input' filter (the diodes feed an inductor before the filter caps), but these are heavy, bulky and create problems of their own.
I was somewhat bemused when a reader asked a question about a circuit that's designed to work out the optimum snubber network (a series resistor-capacitor network, aka a Zobel network) for a transformer, and that's what led to this article being written. The important part of the alleged problem is that the circuit makes zero allowance for the fact that it tests the transformer with a 'small signal' stimulus, while the actual power supply is subjected to 'large signal' parameters.

Indeed, some users have said that they could easily see 'transients' when running tests, but failed to see anything when the supply was being used normally. No, I'm not going to provide references to the device or comments, because search engines will ultimately include these in search results, providing a degree of 'legitimacy' to something that is unlikely to give you a worthwhile outcome.

I've not provided any details for the 'optimum' type of capacitor, because the current through it is generally benign. As shown in the various screen captures, the peak (high frequency) voltage is only a couple of volts, and capacitor current is minimal when a series resistor is included. If a cap is used across the mains, it must be a Class-X2 type, but any polyester or polypropylene cap will be fine when used in a snubber circuit. Avoid multilayer ceramic caps, because they have a significant voltage dependency (capacitance reduces as voltage is increased).

First and foremost, you need an oscilloscope to perform the tests described here. Any attempt at developing a workable snubber/Zobel network is utterly doomed without one. You must be able to see the waveform, and without a scope that's simply impossible. It might be possible to use a PC sound card instead, but I'd be rather nervous because you're looking at unknown waveforms, many of which could cause instant failure of the sound card (or even the entire PC) if you make an error and connect to the wrong place. Consider yourself duly warned!
If you examine the mains waveform (as it really is, not as a pure sinewave), you'll see that the top of the waveform is slightly flattened. Rather than a pure sinewave, it's common for the mains to have a distortion of around 5-10% (depending on where you live, time of day, etc.), due primarily to the sheer number of power supplies (non PFC of course) and other non-linear loads that are connected at any given time. There may be hundreds of power supplies, all demanding current only at the peak of the AC mains waveform. Your hi-fi linear power supply is no different, even with the intervening mains transformer.
The following is a mains waveform scope capture at the secondary of an unloaded transformer. Transformers are excellent devices for this, as anything that happens on the primary side is reflected to the secondary (and vice versa), so it's not necessary to risk life and limb by connecting one's scope to the mains (a very dangerous practice unless you know exactly what you are doing, and even then it's still dangerous!).
Figure 1 - Mains Waveform, Showing Flat-Topped Sinewave
For the following simulation results, I did use a pure sinewave, because it gives slightly worse (more pessimistic) results than the flattened AC mains waveform. We need to get some idea of the rate of change, not worst case (at the zero crossing) but close to the waveform peak where everything happens a bit slower. Near the crest of the sinewave (at the point where the diodes start to conduct), the rate of change for a 230V 50Hz waveform is only 21mV/µs at a peak voltage of 318V (and yes, you did read that right). That is very slow in anyone's language. It's about the same when the diodes turn off, although that doesn't influence the actual diode turn-off time. Even at the zero-crossing point (where the waveform passes through zero) the slew rate is only 102mV/µs (50Hz, 230V) or 65mV/µs (60Hz, 120V).
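The slew-rate figures quoted above fall straight out of the derivative of a sinewave, v(t) = Vpk·sin(2πft). As a quick sanity check (a sketch for illustration, not part of the original article), they can be reproduced like this:

```python
import math

def sine_slew_v_per_s(v_rms, freq_hz, v_inst=0.0):
    """Instantaneous dV/dt of a sinewave, evaluated at the point where
    the waveform passes through v_inst volts (0V = zero crossing)."""
    v_pk = v_rms * math.sqrt(2)
    # dv/dt = 2*pi*f*Vpk*cos(2*pi*f*t), and cos = sqrt(1 - (v/Vpk)^2)
    return 2 * math.pi * freq_hz * v_pk * math.sqrt(1 - (v_inst / v_pk) ** 2)

# 230V 50Hz mains: slew rate at the zero crossing, expressed in mV/us
print(round(sine_slew_v_per_s(230, 50) / 1e3, 1))       # ≈ 102.2 mV/us
# ... and near the crest, where the instantaneous voltage is 318V
print(round(sine_slew_v_per_s(230, 50, 318) / 1e3, 1))  # ≈ 21.5 mV/us
# 120V 60Hz mains at the zero crossing
print(round(sine_slew_v_per_s(120, 60) / 1e3, 1))       # ≈ 64.0 mV/us
```

The point stands regardless of rounding: even the fastest part of the mains waveform changes by only a tenth of a volt per microsecond.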
Remember that the end goal is to produce DC, which (pretty much by definition) is inaudible. Provided the DC is relatively noise free, the power supply rejection ratio (PSRR) of most opamps and power amps is capable of removing the majority of ripple, and (to a lesser extent) most higher frequency noise as well. Consider that the above simulation with a 10,000µF capacitor and a 1.2A current (at 30V DC) has around 1V p-p of ripple at 100Hz. This is easily dealt with by the vast majority of power amplifiers, and any ripple that appears at the amplifier's output should be close to the system's noise floor.
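The ~1V p-p ripple figure can be sanity-checked with the usual capacitor-input approximation ΔV ≈ I/(fC), which assumes the capacitor supplies the full load current for the whole interval between charging peaks (so it slightly overestimates). A minimal sketch:

```python
def ripple_pp(i_load, c_farads, f_ripple=100):
    """Worst-case peak-to-peak ripple for a capacitor-input filter:
    dV ≈ I / (f*C). Full-wave rectified 50Hz mains gives f = 100Hz."""
    return i_load / (f_ripple * c_farads)

print(round(ripple_pp(1.2, 10_000e-6), 2))  # ≈ 1.2V p-p (simulation: ~1V,
                                            # as the cap recharges during
                                            # part of each half-cycle)
print(round(ripple_pp(1.2, 470e-6), 1))     # ≈ 25.5V p-p - hopeless on a
                                            # 30V rail at this current
```

The second line shows why skimping on the filter capacitor (as mentioned earlier) guarantees ripple breakthrough at any significant load current.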
For example, a more-or-less typical power amp may show an output ripple of (say) 150µV RMS with a power supply using 10,000µF capacitors. That's better than -76dBV (referred to 1V), and may (might!) typically only be audible if you press your ear to the speaker cone. That works out to -85dB referred to 1W into 8 ohms. For a speaker with a sensitivity of 85dB/1W/1m, the output from the speaker will be 0dB SPL at a distance of one metre.
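The decibel arithmetic above is easy to verify. A short sketch (the 150µV figure and 85dB/1W/1m sensitivity are the example values from the text):

```python
import math

def dbv(v_rms):
    """Level in dB relative to 1V RMS."""
    return 20 * math.log10(v_rms)

v_ripple = 150e-6              # 150uV RMS of ripple at the amp output
v_1w_8ohm = math.sqrt(1 * 8)   # 1W into 8 ohms is 2.83V RMS

print(round(dbv(v_ripple), 1))                   # ≈ -76.5 dBV
print(round(dbv(v_ripple / v_1w_8ohm), 1))       # ≈ -85.5 dB ref. 1W/8 ohms
# With a speaker sensitivity of 85dB/1W/1m, the hum from the speaker is
print(round(85 + dbv(v_ripple / v_1w_8ohm), 1))  # ≈ -0.5 dB SPL at 1 metre
```

In other words, the residual hum lands right at the 0dB SPL threshold of hearing, which is the article's point.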
I tested a small range of transformers (suitable for power amplifiers and preamplifiers) to measure their leakage inductance - this is the property of any transformer that produces 'transients' when the current is suddenly interrupted. It's generally accepted that these 'transients' are created as the rectifier diodes turn off, but in reality it's rarely (if ever) a problem with linear power supplies. There is certainly a measurable DI/DT as the diodes switch on or off, and it usually does cause a momentary transient. I also measured the capacitance between primary and secondary, as this can create a resonant circuit combined with the leakage inductance.

The main purpose of the tests I did was to get an idea of the leakage inductance of the transformers, and how that interacts with the primary-secondary and diode capacitance in further testing. I checked for both primary and secondary leakage inductance. In brief, that's the inductance caused by magnetic flux that 'leaks' out of the core, and doesn't interact fully with the transformer windings. It's shown in an equivalent circuit as a small inductor in series with the primary, but not sharing the magnetic core. There are other parasitic elements as well, shown in the following drawing. The values shown for the various parameters are based on Transformer #4 described in Table 1. C1 and C2 were not included (these are usually small parasitic capacitances that (more-or-less) represent inter-turn capacitances on the primary winding).
Figure 2 - Transformer Equivalent Circuit
The secondary leakage inductance can be approximated by measuring the leakage inductance of the primary, and dividing that by the square of the turns ratio. This will never be 100% accurate for a variety of reasons, most of which are related to the winding resistance which causes inductance meters to be somewhat inaccurate. To measure (approximate) primary leakage inductance, you short-circuit the secondary winding(s), and take an inductance measurement. It's somewhat beyond the scope of this article to examine more accurate techniques, and an approximation is usually quite acceptable unless you are dealing with transformers for SMPS.
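The referral described above can be sketched as follows, using transformer #2 from Table 1 as the example. As the text warns, the result will not match a measured or calculated value exactly; winding resistance (among other things) accounts for the discrepancy:

```python
def secondary_leakage(l_pri_leak, v_pri, v_sec):
    """Approximate secondary leakage inductance by referring the measured
    primary leakage through the square of the turns (voltage) ratio."""
    n = v_pri / v_sec
    return l_pri_leak / n ** 2

# Transformer #2: 6.4mH primary leakage, 230V primary, 30V (15-0-15) secondary
l_sec = secondary_leakage(6.4e-3, 230, 30)
print(round(l_sec * 1e6))  # ≈ 109 uH (Table 1 gives 130uH - the method is
                           # only an approximation, as noted in the text)
```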
For transformers used in switchmode supplies, leakage inductance is a very important parameter, and can create havoc with the drive circuit because it generates large 'spike' voltages, created by inductor-capacitor (LC) resonance. There is always stray capacitance in any circuit, and especially in wound components like transformers because of the winding layers and wires side-by-side.

The inference is that if this is a problem with switchmode transformers, then it must (by some strange twist of logic) also be a problem with low frequency transformers as well. The short answer (of course) is that it's not a problem at all, but that hasn't stopped people from claiming otherwise. There are very significant differences between the two types of power supply. A 'traditional' linear power supply operates at mains frequency (50/60Hz) with a nominal sinewave input. Because this has slow transitions, the chance of high frequency interference is minimal.

A switchmode power supply (SMPS) usually operates with frequencies between 50-300kHz (some may be higher or lower), and the switching waveform is rectangular. It has very rapid transitions from maximum to minimum and vice-versa, with high voltage waveform risetimes measured in microseconds. The rate of change (delta voltage vs. delta time, or DV/DT) can easily be 100V/µs or more, and that means that everything must be carefully optimised to ensure that electromagnetic interference (EMI) and potentially damaging transient voltages are minimised. A fault in a 10c part can easily cause an expensive SMPS to fail, and such failures tend to be catastrophic.

To equate these two very different topologies in any way is nonsensical. Linear power supplies have been the 'backbone' of all electronics for many years, but SMPS are now starting to become dominant in commercial products. DIY supplies are still most commonly built in the 'old fashioned' way, with a mains transformer, bridge rectifier and large filter capacitors. This is a more expensive approach, but 'linear' supplies tend to be extremely reliable, with examples aplenty that are 60 years old and still working fine. Very few SMPS will ever be able to match this. In most cases, even if a linear supply does develop a fault, it can almost always be repaired fairly easily. A transformer failure is an exception, but unless it's been heavily abused these are quite possibly the most reliable components ever.
# | Secondary Voltage | VA | Primary Leakage | Secondary Leakage | CP-S ¹ | Construction
1 ² | 12.6 | 2 | 762 mH | 2.35 mH | 48 pF | E-I
2 | 15-0-15 | 80 | 6.4 mH | 130 µH | 347 pF | Toroidal
3 | 25-0-25 | 160 | 1.8 mH | 180 µH | 292 pF | Toroidal
4 | 28-0-28 | 200 | 8 mH | 570 µH | 347 pF | E-I
5 | 30-0-30 | 300 | 1.63 mH | 115 µH | 622 pF | Toroidal

Table 1 - Measured Transformer Parameters

¹ CP-S is the capacitance between primary (all leads shorted) and secondary (all leads shorted).
² The leakage inductance (primary and secondary) cannot be measured with an inductance meter due to the high winding resistance - because it's so high, the readings will not be sensible. With a primary resistance of a bit over 1k ohm, the inductance meter doesn't stand a chance. The same applies to the secondary. The values shown were calculated using the method described in the Transformers, Part II article.
Transformer #4 is a 'conventional' laminated core, rated for 230V input and 28-0-28V output, at a maximum load of about 200VA. I have a number of these, and they've been pressed into service for quite a few of my own projects. Primary-secondary capacitance is usually higher for toroidal transformers than 'conventional' types, due to the way they are wound. However, none showed any more than I expected based on the physical size of the windings.
It's immediately obvious that toroidal transformers show much lower leakage inductance than 'conventional' (E-I laminations) transformers. This is normal, because the magnetic path is completely closed, and the turns are evenly distributed around the core. The even distribution and low leakage is the reason that toroidal transformers are much less likely to induce hum current into a metal chassis than an otherwise equivalent E-I transformer. The characteristics of toroidal transformers are such that leakage inductance will have a higher Q than conventional designs, and a small amount of ringing is more likely.

Now, when a transformer is simulated using the values shown for leakage inductance, it is possible to create some damped oscillation as the diodes come out of conduction. I used the worst case (but not #1, as that's an exception) of 8mH (along with 347pF capacitance between primary and secondary), and sure enough, there's a brief period where the waveform shows signs of a damped oscillation - but it's only significant if you try to stop it by adding a capacitor! The frequency with a 220nF cap was around 38kHz in my simulation, determined by capacitance and leakage inductance. Without an external capacitor, not even the primary-secondary capacitance achieved anything 'interesting'. The diodes used in the simulation were 1N5404 (3A average current, 30pF capacitance at -4V reverse voltage).
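The resonance is governed by the standard formula f = 1/(2π√(LC)). Note that plugging in the tabulated secondary leakage and the added capacitor alone will not reproduce the simulated 38kHz, because the effective L and C in circuit also include reflected primary elements and diode capacitance - this sketch only shows the form of the calculation:

```python
import math

def resonant_freq(l_henries, c_farads):
    """Resonant frequency of an LC pair: f = 1 / (2*pi*sqrt(L*C))."""
    return 1 / (2 * math.pi * math.sqrt(l_henries * c_farads))

# Transformer #4 secondary leakage (570uH) against a 220nF capacitor,
# treated as a simple two-element resonator
print(round(resonant_freq(570e-6, 220e-9) / 1e3, 1))  # ≈ 14.2 kHz
```

Either way, the conclusion is unchanged: adding capacitance moves the resonance down into a region where it is easily excited, which is exactly why the bare secondary behaves better than one with a lone capacitor across it.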
Without anything across the secondary (other than the diodes and following filter cap and load resistor), there were exactly zero cycles of ringing - nothing at all, let alone anything to get excited about. So, adding a capacitor across the secondary is more liable to cause ringing (damped oscillation) than nothing at all. This being the case, there's obviously no requirement for a more complex network to eliminate something that doesn't normally exist.

One thing that simulators are very good at is highlighting the smallest 'deficiency' in a circuit, and this can show things that are almost impossible to capture on an oscilloscope. They also allow one to create a perfect component with no losses, and of course these do not exist on the test bench. So-called 'ideal' components are useful because losses can be introduced that exactly mimic the losses in a real part. However, simulators are very hard to set up so they mimic a real transformer, because there are non-linear effects that are difficult to estimate, and even harder to simulate.
Figure 3 - Transformer Response With Pulse Stimulation
In the above, the red trace shows the response when transformer #4 (as simulated) is pulsed with a +2V step with a rise time of 1µs (i.e. 2V/µs DV/DT). The source impedance is 100 ohms. The transformer has its primary shorted, and there is only the leakage inductance and primary-secondary capacitance in circuit. The green trace shows the result if the pulse is applied via a 10nF capacitor, but retaining the 100 ohm source impedance. Ringing is clear, and it would be much worse if the source impedance were reduced. The rectifier diodes were present for the simulation, but have almost no effect. The red trace shows good damping, with only a small overshoot, and no 'correction' is required. The turn-off transient is identical, but naturally it's inverted. Finally, the blue trace shows the effect of using a snubber/Zobel network across the secondary, using a 10 ohm resistor and 100nF capacitor. As should be apparent, this makes matters (slightly) worse, not better.
It is possible to 'optimise' the resistance value to provide better damping, but there is clearly no need to do so. The best result was obtained with a snubber using 100nF and 68 ohms, but the overall effect is so small as to be considered negligible. Naturally, there's no reason not to include a snubber circuit, but equally there's no need for it in the first place. The response shown with nothing at all (red) is close to optimally damped, and there's no sign of anything that could ever prove troublesome. Remember that the stimulus used is a great deal faster than the mains frequency and diodes can achieve, so in use there will be almost no disturbance at all.
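For those who do want a starting value for the resistor, a common rule of thumb from general snubber-design practice (my assumption here, not something the article prescribes) is to set it near the characteristic impedance √(L/C) of the resonant pair:

```python
import math

def snubber_r(l_leak, c_snub):
    """Starting-point snubber resistance: the characteristic impedance
    R ≈ sqrt(L/C) of the leakage-inductance / snubber-cap pair."""
    return math.sqrt(l_leak / c_snub)

# Transformer #4 secondary leakage (570uH) with a 100nF snubber capacitor
print(round(snubber_r(570e-6, 100e-9)))  # ≈ 75 ohms - in the same region
                                         # as the empirically-found 68 ohms
```

That the rule of thumb lands close to the empirical 68 ohm result is reassuring, but as the text makes clear, the difference between any of these values and no snubber at all is negligible.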
This may be a side-issue, but it's intended to demonstrate that small perturbations in the voltage waveform are actually the least of your worries. The current waveform shown below is the most common source of power supply noise, and if you don't understand this the rest of the article (or the use of a snubber) is close to pointless. All capacitor-input filters (by far the most common) create this type of waveform, and there is very little you can do to make it more 'sensible'. Note the peak amplitude of the current - over 10A for an RMS current of 3.1 amps. There is a big difference between the AC secondary current and the DC (1.2A).
A significant 'power' difference also exists - there is an input of 71VA (volt-amps = 3.1A × 23V) compared to an output of 36W (1.2A through a 25 ohm resistor). That's a 2:1 ratio of input VA vs. output power - the power factor is therefore about 0.5. About 2.5V is 'lost' due to transformer winding resistance and diode forward voltage drop. These are affected by the peak current, and not the RMS value.
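The power-factor arithmetic is just real power divided by apparent power. A quick check, using the values from the text:

```python
def power_factor(v_rms, i_rms, p_watts):
    """Power factor = real power (W) / apparent power (VA)."""
    return p_watts / (v_rms * i_rms)

# 23V RMS secondary, 3.1A RMS input current, 1.2A DC into a 25 ohm load
p_out = 1.2 ** 2 * 25                      # 36W of real output power
print(round(power_factor(23, 3.1, p_out), 2))  # ≈ 0.5
```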
Figure 4 - Transformer Secondary Current Waveform
The steady state secondary current waveform is shown above (the first couple of cycles aren't relevant and are not shown). There are relatively narrow peaks that coincide with the diodes conducting and passing current to the filter capacitors and load. The circuit is as shown in Figure 7 (below), with a 10:1 transformer (no-load output voltage of 23V RMS), a 10,000µF filter cap and a 25 ohm load. Average current through the load is 1.2A with an average DC voltage of 30V. It is this current waveform that creates the most havoc in any power supply, and it requires careful grounding to ensure that the waveform isn't superimposed onto the DC output, nor coupled into adjacent wiring. Adding a snubber does not change this in any way, shape or form!
If the capacitance is reduced, so too is the RMS input current, but the peak current is increased slightly. For example, reducing the filter cap to 1,000µF means that the RMS input current is 2.8A (10.9A peak, up from 10.5A) and the power factor is improved (albeit marginally). However, there's a great deal more output ripple and a lower average output voltage, so the supply's performance is diminished - in many cases unacceptably so.
Of course, simulations will only tell you so much, so transformers #3 and #4 were also tested on the workbench. #4 is almost a letdown, since the measured response was very close to the simulation, except that the measured waveform showed no sign of the small overshoot seen in the simulation. The real transformer has some additional losses (particularly iron losses) that can't easily be accommodated by simulation. This will affect all transformers of course - simulations often don't line up perfectly with circuits using real components.
Figure 5 - Transformer #4 Response With Pulse Stimulation
When transformer #4 was subjected to a real impulse test, the waveform shown above was seen. The exact same stimulus was used (a ±2V squarewave), but the risetime of my function generator is much faster than I used in the simulator. The waveform is perfect - there is zero ringing, and therefore no reason to add a snubber network. There is a difference between this and the simulated version, largely due to losses that were not included in the simulation. A transformer is actually a fairly complex device in terms of losses, capacitance and inductance, and measured values don't always align with reality.
Toroidal transformers tend to have a lower leakage inductance and a higher Q, so it's possible that some (slight) ringing will be observed. As simulated, this is innocuous, but if you really think it needs to be tamed in some way, a 10 ohm, 100nF snubber will do that very nicely for Transformers #3 and #5.
Figure 6 - Transformer #3 Response With Pulse Stimulation & Snubber
Without a snubber, transformer #3 looked exactly the same as #4, except that the timebase was 2.5µs/division instead of 10µs/division as shown in Figure 5. The amplitude was identical. Since the waveform was so similar I haven't shown it here, but I added a snubber to show what it can do. No calculations were done - I just used what was lying on the workbench at the time, a 10 ohm resistor and a 220nF capacitor. There are no complaints about the waveform - it's neat and tidy, the impulse is slowed somewhat and the amplitude is reduced, and the small amount of heavily damped ringing is close to perfect. However, the test is unrealistic, because the impulse is so much faster than anything that can occur in a real power supply. It's also still a 'small signal' test that doesn't tell us what the transformer will do when supplying a rectifier, filter cap and load.
If you look on-line for articles discussing leakage inductance in any detail, you'll find that almost all are related to switching transformers, where leakage really does make a significant difference. For mains frequency transformers it's hard to even find anything that discusses the topic, simply because it's not relevant for simple linear supplies. The situation is often different in TV receivers and other RF receiving devices because they are very sensitive, and even minute amounts of additional noise can cause interference.

Testing with 'small signal' transients isn't actually very useful. Yes, it can be done, but the difference in real terms (i.e. audible changes to the signal) will be immeasurable and inaudible unless there are layout errors that allow the tiny amount of noise generated to get into the audio path. As you'll see below, a small signal test does not replicate a transformer's actual performance in a real circuit.

There is no doubt whatsoever that snubber circuits are essential in most SMPS circuits, because they operate at high frequencies and with very fast rise and fall times. Using a snubber on a mains transformer won't hurt anything, and in a (very) few cases it might be needed to ensure that a product meets EMI testing requirements. Audibility is another matter altogether. It's possible that adding a properly designed snubber may lower the noise floor ever so slightly, but mostly you should not expect to hear a difference. You will certainly not hear any 'improvement' when music is playing, only when the system is (or is supposed to be) silent.

None of this means that the rectifier diodes won't cause interference, but if you use a toroidal transformer, simply ensure that the section where the primary and secondary wiring enters the transformer is oriented away from other wiring (that's where the leakage flux is the greatest). In most cases, this alone will be enough to minimise any audible noise. Likewise, keep input, DC and output wiring well away from the rectifier diodes and transformer leads, because high (and very non-linear) current passes through the transformer and bridge rectifier and associated wiring. Most of the noise generated in a linear power supply is the result of the highly distorted current waveform, a series of positive and negative current pulses as the capacitor(s) charge on each half-cycle.

If you still get audible noise, check all earth (ground) wiring before you resort to using snubber networks or caps in parallel with the rectifier diodes. There is a very real chance that caps in parallel with the diodes will make the problem worse, and a snubber network is unlikely to make a great deal of difference. It's one thing to run tests (with or without a dedicated 'test set'), but another entirely in a working circuit with high currents and 'real world' loads attached to the output of the filter caps.
+ +However, just like the 'tester' referred to earlier, these are all small signal tests, and they don't tell us anything about large signal performance. The only way to test that is to build and test a power supply with a load approximating that which will be present in the final circuit. Unfortunately, it's difficult to see any small, high frequency disturbance in the presence of a large, low frequency waveform, and that led to the next idea I had.
+ + +One of the problems faced when you are trying to look at small disturbances on large amplitude, low frequency waveforms is simply the scale of the waveform. It's not possible to see a tiny high frequency ringing waveform when the AC voltage is so much greater than the 'waveform of interest'. To make this task easier, a simple high frequency 'probe' can be utilised. It will show the parts of the waveform of interest quite clearly, but without overloading the front-end of the oscilloscope.
+ +All you need is a 1k resistor and a 10nF capacitor. This will filter out anything below 16kHz, so you'll be able to see if there really is a problem, along with its magnitude and behaviour. In general, we can expect any disturbances to be at frequencies between 10kHz and 25kHz. The filter I used has a cutoff frequency of 16kHz, but of course it's an easy matter to reduce (or increase) the cutoff frequency so you get the clearest possible look at any issues that exist. If you can't see any issues using this technique, then it's obvious that there's nothing that needs 'fixing'.
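As a sanity check on the quoted values, the -3dB corner of the simple RC high-pass can be computed directly (the function name is mine; the component values are those given above):

```python
from math import pi

def hp_cutoff(r_ohms, c_farads):
    """-3dB corner frequency of a first-order RC high-pass: 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * pi * r_ohms * c_farads)

# 1k resistor with a 10nF capacitor, as suggested above
print(round(hp_cutoff(1e3, 10e-9)))    # 15915 Hz, i.e. the ~16kHz quoted
# Using 2k2 (or 22nF) lowers the corner to about 7kHz
print(round(hp_cutoff(2.2e3, 10e-9)))  # 7234 Hz
```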
+ +The idea is that you connect the transformer, rectifier, filter cap(s) and a suitable (preferably representative) load resistor. The latter should be selected so it will draw the same current as the circuitry that will be powered from the supply. If the load is variable (such as a Class-AB power amplifier), then use your own judgement, but the load current should be about the same as the power amp's quiescent current. You can (of course) vary it as needed to see if that makes any difference.
+ +Armed with this tiny circuit, you are in a real position to see whether there is any ringing or not, and the magnitude and frequency of the waveform. It also lets you experiment with a snubber network if you think you need one, not with some ill-conceived 'tester', but with the actual power supply circuit. You can see instantly if there's a benefit or otherwise of a capacitor, snubber, or a combination of the two. The drawing below shows the circuit, and how it's connected to your power supply.
+ +Make sure that the power supply outputs are floating (not grounded anywhere !), because you will ground one of the transformer output leads with the ground clip of the oscilloscope. This won't have any appreciable effect on what you see on the oscilloscope, because the ground point is simply a reference, not an absolute requirement for most circuits. If you have a dual supply (± voltages), the transformer centre tap will be grounded anyway, and you can examine either AC winding (both will require a snubber if you want to go ahead with that).
+ +
Figure 7 - High Frequency Probe (HFP) Connections
This is (as near as I can tell) a new approach to looking at the disturbances created by rectifier diode commutation. I have searched, but didn't find anything even remotely similar anywhere. Not that it's groundbreaking of course - it's just a very simple high pass filter that lets you see things that would otherwise be obscured by the 50/60Hz waveform. The benefit is that you can determine not only whether there is a (potential) problem, but exactly what happens when you add a capacitor or snubber. In some cases it may be necessary to increase the value of Cf (or Rf) if the observed frequency is less than 10kHz. Using 22nF (or 2k2) will let you see down to 7kHz, but you will get a little more of the 50/60Hz waveform (although it will be reduced by over 40dB).
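The "reduced by over 40dB" figure for the modified filter values is easy to verify from the standard first-order high-pass response (a quick check, using the 2k2/10nF combination mentioned above):

```python
from math import pi, log10, hypot

def hp_gain_db(f_hz, r_ohms, c_farads):
    """Gain (dB) of a first-order RC high-pass at frequency f_hz."""
    fc = 1.0 / (2.0 * pi * r_ohms * c_farads)   # corner frequency
    ratio = f_hz / fc
    return 20.0 * log10(ratio / hypot(1.0, ratio))

# 2k2 with 10nF gives fc of about 7.2kHz; mains is well suppressed
print(round(hp_gain_db(50.0, 2.2e3, 10e-9), 1))  # -43.2 dB at 50Hz
print(round(hp_gain_db(60.0, 2.2e3, 10e-9), 1))  # -41.6 dB at 60Hz
```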
+ +Note that one of the transformer output leads is grounded by the scope. This is a temporary connection, but you have to do it that way to ensure that you measure the actual voltage across the secondary. The snubber is shown as optional, and 'SOT' means 'select on test'. With this, you can see any disturbance, and the three screen captures shown let you see the waveforms I measured during testing. The values shown for the snubber are a good starting place. In the waveforms captured below, I used 10 ohms and 220nF, simply because they were to hand at the time.
+ +
Figure 8 - Transformer #3 Response With HFP (No Snubber)
The transformer was the same type as the one used above (#3), but the one used for these tests was already wired to a rectifier and a pair of 5,600µF filter caps. I used an input voltage of 120V (50Hz) with a nominal 16 ohm load (roughly 2A DC output current), and had an unloaded voltage of 36V DC at the output (between the +ve and -ve terminals). The impulse is clear - a 2V peak lasting for ~2µs, followed by a few rapidly diminishing ripples. Now, I know that this isn't audible when an amplifier is connected (the supply is a test jig for power amplifiers), but I suppose it might not look 'pretty'. The pulse appears at the instant the diodes turn off. For clarity, only one pulse is shown, but obviously they occur at twice the mains frequency (100Hz in my case), and with alternating polarities because the rectifier is a full-wave bridge.
+ +
Figure 9 - Transformer #3 Response With HFP (220nF Capacitor)
When I added a 220nF capacitor directly across the secondary winding, I obtained the waveform shown above. The amplitude is reduced to 250mV, and there are more 'ripples' (indicating ringing), but at a lower frequency. Note the oscilloscope timebase setting - it's now 100µs/ division, and the oscillation is at about 10kHz (a little below the filter cutoff frequency). This isn't a bad result, and personally, I would be perfectly happy with that. This is an arrangement I've used before, largely because it's a cheaper (and smaller) option than an X-Class cap across the mains. This was done purely to ensure compliance with conducted emissions tests, not because it made anything sound 'better'.
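With a known capacitor across the winding, the observed ring frequency also gives a rough estimate of the leakage inductance. This calculation is my own inference, and it assumes the added 220nF swamps the transformer's own capacitance:

```python
from math import pi

def leakage_from_ring(f_ring_hz, c_farads):
    """Leakage inductance estimate from f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / ((2.0 * pi * f_ring_hz) ** 2 * c_farads)

# ~10kHz ringing observed with the 220nF capacitor across the secondary
l_leak = leakage_from_ring(10e3, 220e-9)
print(round(l_leak * 1e3, 2), "mH")  # roughly 1.15 mH
```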
+ +
Figure 10 - Transformer #3 Response With HFP (220nF, 10 Ohm Snubber)
When a 10 ohm resistor was added in series with the 220nF cap, the above waveform was obtained. This is as good as you'll get in any real circuit, and it's as good as you need as well. Again, I would not expect to hear the slightest difference through an amplifier connected to this supply, regardless of the presence or otherwise of the snubber. However, the result shown in Figure 10 is close to perfect - the damping is ideal, and the peak amplitude of the diode commutation 'disturbance' is reduced to 220mV.
+ +If you wanted to determine the absolute best possible snubber circuit, you can simply use a resistor and capacitor decade box. You could also build a simple test circuit using (say) a selection of switched capacitors and a 100 ohm or 1k wirewound pot, although a carbon pot would probably be alright for testing as the average power is quite low. Whether this is something you think you need is up to you, but the chances are that a snubber similar to the one used for Figure 10 is likely to work well enough over a range of transformer, rectifier and filter circuits.
+ +While I had things set up, I next tested a 'wall transformer' - 230V in, 12V out, at 1A (12VA). The test supply used a 6,800µF filter cap and a 40 ohm load, providing an output of 360mA at 14.5 volts. This was checked using normal 1N4004 diodes and with MUR240 'ultra fast' diodes to see if it made any difference. The answer is yes and no - "yes", the impulse as the diodes turned off was reduced from a peak of just over 6V to about 1.8V, but "no", the DC was resolutely unchanged (even when the 'HFP' circuit was used to monitor it). The optimum snubber turned out to be 100nF in series with 100 ohms, and I made no attempt to measure the leakage inductance.
+ +Click here to see a sub-page with the waveforms I captured. It's actually worth looking at because the results are mildly interesting.
There is one special case where the use of either exceptionally slow or high speed diodes and snubber circuits may be required. For people in fringe reception areas for radio or TV, the receiver operates at maximum gain and it may be susceptible to RFI/ EMI created by a conventional linear power supply. If this is the case, even relatively small amounts of EMI generated by diode switching may be sufficient to cause interference. Exceptionally slow diodes are no longer available, as they relied on rather ancient technology that has since been abandoned. As shown above, using fast (or ultra-fast) diodes and a properly designed snubber may help to reduce (or even eliminate) any interference.
+ +If this is a problem, the final circuit should include the fast diodes, snubber network(s) and a properly designed EMI filter, either stand-alone or integrated with an IEC mains socket. The chassis also has to be metal, with all panels in intimate electrical contact with each other. Gaps in the metalwork need to be kept small, and good electromagnetic screening relies on the panels being electrically joined along the full length of every joint. You are building a small Faraday cage, with the intention of containing both radiated and conducted emissions.
+ +If EMI is an issue where you live, every electrical device in the home will need to be 'EMI free'. In some cases, you may find that there is annoying buzz when a particular appliance or even light (assuming CFL or LED lighting) is turned on. Turn it off and the noise stops - this is a clear indication that EMI is a problem. Without extensive (and expensive) equipment, it's usually not possible to determine if the noise is radiated (often using mains wiring as an 'auxiliary' antenna) or conducted (passing interference directly into the mains wiring). Modern emissions standards may (or may not) have been complied with, and it's not uncommon for certain Asian manufacturers to apply CE, UL, CSA, VDE and other standards markings to products that have never even been tested, let alone certified.
+ +CE compliance is particularly strict, and genuinely certified products will usually not cause problems. Clearly, certification is not an option for a DIY project (it's very expensive to have done in an accredited test laboratory), so the constructor is pretty much left to his/her own solutions, should they be required. Hopefully, this material will help if a project turns out to cause issues. Mostly, simple linear supplies with no special precautions will be perfectly alright, and will not need any of the measures described. However, there can be exceptions for a variety of reasons.
+ +If EMI is a problem for audio equipment, the most likely way for it to cause problems is via the input leads. These should always be shielded, and good quality connectors help to minimise issues with contact resistance and/or corrosion which can allow the leads to pick up external noise. It also helps to ensure that input leads and power leads are separated, so there is minimal mutual coupling.
+ + +The test methodology that stirred up this particular hornet's nest is actually (at least in part) the cause of the very problem it's designed to fix. Yes, I'm fully aware that this is a paradox, and that's the issue. Stimulating a transformer to ring by sending a fast transient into the secondary via a capacitor is fatally flawed on so many levels it's hard to figure out where to begin ...
+ +If exactly the same pulse is delivered to the secondary via a resistor (with the primary shorted) and without a series capacitor, there will normally be no ringing (as shown above in simulations and scope captures). However, this is (as noted earlier) a small signal test, and it's not the same when used at full mains voltage with a rectifier, filter cap and load. Even transformers shown above that showed no sign of ringing with an impulse test performed quite differently when wired up into a complete power supply.
+ +Building a test circuit that first creates the illusion of a problem, then claims to provide a 'simple fix' is obviously of little use to man or beast. The article that prompted all of this goes into great detail, and has lots of maths in the appendix that describe the phenomenon of ringing. However, the writer appears to have missed that the 'injection' capacitor was the root cause of ringing in the first place. Once that's removed from the equation there may not be a problem to fix! If there really is a problem, it's far better to analyse it in the final circuit with a representative load.
+ +This was tested and verified both in simulations and on the workbench, with the two tests providing results that are so close that it confirms the methodology and the results. The important thing to recognise is that without a snubber the impulse waveform caused by diode switching will be (or will be very close to) critically damped. Adding the snubber deliberately 'under-damps' the circuit, and you need to see some overshoot before the waveform settles. When this is done, the impulse amplitude and harmonic content are reduced by a worthwhile margin.
+ +If a snubber is added with the values determined using the method described here, any ringing can be suppressed. However, it's really not necessary to do so, because the ringing itself does no harm. It might look 'nasty' if you haven't come across it before, but the effects are largely benign. The DC output is not affected whether the snubber is in place or not, and the only real difference is a small but potentially useful reduction in high frequency conducted emissions. The main cause of mains waveform distortion remains the very non-linear current waveform, which is a far bigger problem than a short burst of high frequency noise on the transformer's secondary winding.
+ +This is an article where, in an attempt to prove that something was completely unnecessary, I discovered that this may not be the case. However, I also found nothing to suggest that a snubber is actually needed. There are a couple of other articles on my site that started the same way, and this is the nature of proper testing. The findings here don't mean that I will recommend a snubber, because I know from many years in electronics that it mostly doesn't matter at all, and there's usually very little to be gained in a well designed circuit. Ultimately, the only thing that matters is the DC from the power supply output(s), and this is unlikely to be affected in any way.
+ +I used the same HFP circuit to examine the output (DC) waveform, and because all of the low frequency ripple was removed, I was greeted by a straight line with vestiges of high frequency noise superimposed. Nothing to get concerned about, since I was able to increase the scope's sensitivity to 20mV/ division, and the noise was just visible. Adding a capacitor by itself or the snubber made exactly zero difference to the observed noise, so it's safe to assume that this will probably be the case with your power supply as well. This was also tested by simulation, and even the extraordinary resolution available from the simulator failed to show the slightest difference in the DC output with or without the snubber.
+ +It's not uncommon to see very small 'spikes' on the DC waveform when you use the HFP circuit. These coincide with the charge current from the rectifier being delivered, and are largely due to the finite impedance (mostly ESR) of the filter capacitor. It's common for people to include low value film capacitors in parallel with electrolytic caps, but these achieve nothing useful unless the powered circuit operates at radio frequencies. Film or ceramic caps are required in parallel with opamp supply rails (close to opamps or power amps) to counteract the inductance of the supply wiring and/ or PCB tracks.
+ +If you like the idea of using a snubber circuit at the transformer's secondary (or secondaries), there is no reason not to use one. Likewise, feel free to use ultra-fast diodes if it makes you feel any happier. Most of the time, you can simply use a 10 ohm resistor in series with a 100nF capacitor. There's no 'optimisation' here, but that combination can be expected to work well enough for most transformer/ rectifier combinations. Don't expect the snubber to make the slightest difference to the audio, because it almost certainly will do no such thing. To optimise the snubber, as noted earlier I suggest a resistance decade box or a 1k pot, and a choice of perhaps three or four different capacitor values. You definitely need the high frequency probe, as that makes everything of interest so much easier to see.
+ +There is no real 'optimum' value for the snubber - it's a compromise between impulse amplitude and settling time. Aim for a slight overshoot as shown on the second page of this article. If we talk of damping (or damping factor), this is the opposite of Q ('quality factor'), and you should aim for damping of around 0.5, which corresponds to a Q of unity (damping is equal to 1 / (2 × Q)). The waveform obtained with no snubber will generally be critically damped already, and the snubber circuit actually deliberately creates a slightly underdamped system.
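The damping/Q relationship can be put into numbers. For a second-order response, a damping factor of 0.5 (Q of unity) gives roughly 16% overshoot on the first peak, which is the 'slight overshoot' to aim for. This is a sketch using the standard second-order step-response formula; the function names are mine:

```python
from math import exp, pi, sqrt

def zeta_from_q(q):
    """Damping factor from quality factor: zeta = 1 / (2*Q)."""
    return 1.0 / (2.0 * q)

def overshoot(zeta):
    """First-peak step-response overshoot (as a fraction) of an
    underdamped second-order system: exp(-pi*zeta / sqrt(1 - zeta^2))."""
    return exp(-pi * zeta / sqrt(1.0 - zeta ** 2))

z = zeta_from_q(1.0)              # Q of unity -> damping of 0.5
print(round(overshoot(z) * 100))  # ~16 % overshoot
```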
+ +All tests should be carried out with the transformer and rectifier operating normally from the mains, and with a load that approximates the load that will be applied in normal use. 'Small signal' tests are far less useful, and are unlikely to give the same results as a 'full scale' test at normal mains voltage and frequency. I have verified that once a snubber is determined, its performance is not load dependent, so it will work with variable current loads (such as power amplifiers). Naturally, you will ensure that mains connections are secure and wired to the appropriate standard to ensure you aren't electrocuted while testing!
+ +EMI is one area where there might be a small impact, but in general it's not an issue for the vast majority of DIY constructors. The section above should help if you do experience interference from your latest project, but in most cases nothing will be needed other than sensible wiring practices. Switchmode power supplies create vastly more noise, and are far more likely to cause interference, than any linear supply using a mains frequency transformer.
+ +None of this will affect the DC, and therefore cannot affect the music via the DC supplies. If nothing else, it gives you something new to play around with, and it's also a worthwhile learning tool so nothing goes to waste.
There are no references for this article, because the tests I did are somewhat unique (for mains transformers) and the method I used to determine leakage inductance for transformer #1 does not appear to have been published before. That doesn't mean it hasn't been published, but I did some extensive searches and could find nothing similar. The same applied to the 'HFP' (high frequency probe) that lets you look at the impulse without any mains frequency AC getting in the way.
+ +As for references to the 'tester' mentioned in this article, I have no intention of giving it any legitimacy by providing a link to it. While the author has obviously spent considerable time putting his information together and it probably works well enough in practice, the method described here is a lot easier and cheaper. Some readers will recognise the info I'm referring to immediately by the description, and those who don't recognise it don't need it anyway.
Elliott Sound Products | Snubbers For Mains Transformer Power Supplies - Part II
Snubbers For Power Supplies - Are They Necessary And Why Might I Need One? - Part II
+© Rod Elliott - ESP (2019)
This sub-page shows supplementary waveforms for wall transformer and rectifier combinations. The transformer is 12V at 1A (12VA), connected to a bridge rectifier using 1N4004 and MUR240 diodes. All waveforms were captured using the 'HFP' (high frequency probe) described in the main article. The probe was modified to use a 10nF cap and 2k2 resistor, because the frequencies are a bit lower than with larger transformers, which have less leakage inductance. In particular, look at the amplitude of the spike waveform in Figures 11 and 14, showing that fast rectifiers do reduce the diode switching noise (but we still don't really care). The amplitude (and speed) of the impulse is reduced using a 100nF/ 100Ω snubber (Figs. 13 & 16), and this might be useful in test equipment.
Figure 11 - 1N4004 Diodes, No Snubber | Figure 12 - 1N4004 Diodes, 100nF Capacitor + 10 Ohms | Figure 13 - 1N4004 Diodes, 100nF Capacitor + 100 Ohms
The second set of traces was obtained from the same transformer, filter cap and load, but using 'ultra fast' MUR240 diodes instead of 1N4004 diodes. The spike is smaller with the fast diodes, and the snubber reduces the amplitude further. Otherwise the results with a snubber in place are very similar to those using 1N4004 diodes. The amplitude of all ringing waveforms is reduced, but the DC output was unchanged.
Figure 14 - MUR240 Diodes, No Snubber | Figure 15 - MUR240 Diodes, 100nF Capacitor + 10 Ohms | Figure 16 - MUR240 Diodes, 100nF Capacitor + 100 Ohms
From all of this testing, it's quite obvious that there are some significant differences between the use of 'ordinary' and fast diodes, and that snubbers can (and do) reduce the amplitude and frequency of any ringing waveform. However, they don't affect the DC at all, so if your wiring is routed carefully (keeping transformer leads well away from audio circuitry for example) you won't hear any change. 'Lead dress' (the way wiring is routed and segregated from other wiring, whether DC or signal) is always important, and getting it wrong can lead to considerable buzz in the audio. Adding snubber networks might be easier than re-wiring a project, or it might just let you imagine that proper lead routing is 'not important'. This isn't a wasted exercise, but in most cases you won't get any real benefit.
+ +In general, more and greater problems are created by incorrect placement of the main earth/ ground in the chassis, or by not ensuring that all DC feeds are taken from the filter capacitors, and never from the rectifier. It's quite surprising just how much difference this can make - a mere 10mΩ (0.01 ohm) will develop 20mV of very nasty-sounding noise with a peak current of 2A, and that doesn't even consider the inductance of the wiring.
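The figure quoted is simple Ohm's law, but it's worth seeing just how little shared ground resistance is needed to inject audible noise (values from the paragraph above):

```python
# 10 milliohms of shared ground path carrying 2A peak charging pulses
r_shared = 0.010                   # ohms (0.01 ohm)
i_peak = 2.0                       # amps (peak capacitor charging current)
v_noise = r_shared * i_peak        # Ohm's law: V = I * R
print(round(v_noise * 1e3), "mV")  # 20 mV of nasty-sounding noise
```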
+ +When a capacitor or snubber is added, the effective frequency is reduced (the uncorrected spike waveform has harmonics extending from the mains frequency well into the low RF band). This is the reason that adding a capacitor or snubber can ensure compliance with conducted emissions tests. Again, this isn't generally necessary, as most transformer based supplies will pass anyway.
+ +![]() | + + + + + + + |
Elliott Sound Products | Class-D Amplifiers
A completely new technology for audio amplification has been evolving during the last 15-20 years that has a clear benefit over the widespread Class-A and Class-AB topologies. We are talking about the so-called 'Class-D'. The benefit is mainly its high power efficiency. Figure 1 shows typical efficiency curves vs. output power for Class-B and Class-D designs.
+ +The theoretical maximum efficiency of Class-D designs is 100%, and over 90% is attainable in practice. Note that this efficiency is high from very moderate power levels up to clipping, whereas the 78% maximum in Class-B is obtained at the onset of clipping. An efficiency of less than 50% is realised in practical use with music signals. The PWM amp's high power efficiency translates into less power consumption for a given output power but, more important, it reduces heatsink requirements drastically. Anyone who has built or seen a high-powered audio amplifier has noticed that big aluminium extrusions are needed to keep the electronics relatively cool. The loading on the power transformer is also reduced by a substantial amount, allowing the use of a smaller transformer for the same power output.
+ +These heatsinks account for an important part of the weight, cost and size of the equipment. As we go deeper into the details of this topology, we will notice that a well-behaved (low distortion, full range) Class-D amplifier must operate at quite high frequencies, in the 100kHz to 1MHz range, needing very high speed power and signal devices. This has historically relegated this class to uses where full bandwidth is not required and higher distortion levels are tolerable - that is, subwoofer and industrial uses.
+ +However, this has changed and thanks to today's faster switches, knowledge and the use of advanced feedback techniques it is possible to design very good performance Class-D amplifiers covering the whole audio band. These feature high power levels, small size and low distortion, comparable to that of good Class-AB designs. (From now on, I will refer to Class-A and AB topologies as 'classical').
+ +From the DIY perspective, Class-D is rather unfortunate. Because of the extremely high switching speeds, a compact layout is essential, and SMD (surface mount devices) are a requirement to get the performance needed. The stray capacitance and inductance of conventional through-hole components is such that it is almost impossible to make a PWM amplifier using these parts. Indeed, the vast majority of all ICs used for this application are available only in surface mount, and a look at any PWM amplifier reveals that conventional components are barely used anywhere on the board. Since SMD parts are so hard to assemble by hand and the PCB design is so critical to final performance, DIY versions of PWM amps are very rare indeed (I don't know of any).
+ +In classical amplifiers, at least one of the output devices (be they bipolar transistors, MOSFETs or valves) is conducting at any given time. No problem so far, but they also carry a given current while there is a voltage drop between collector-emitter / drain-source etc. Since P = V × I, they are dissipating power, and even with no output a small amount of current must pass through the transistors to avoid crossover distortion, so some dissipation is always present. As the output voltage increases, for given supply rails the voltage drop across the transistors will fall, but the current increases. At saturation (clipping), VCE or VDS will be low, but current is quite high (Vout / Rspk). Conversely, at low power levels, current is small but the voltage drop is large. This leads to a power dissipation curve that is not linear with output power. There is a non-zero minimum dissipation (zero percent efficiency), and a point where maximum efficiency is reached ... about 78% in pure Class-B designs, 25% or less with Class-A.
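The dissipation behaviour described above is easy to model for an ideal Class-B stage driving a sine wave. This is a sketch only; 'ideal' ignores quiescent current and device losses, and the function names are mine:

```python
from math import pi

def class_b_efficiency(v_pk, v_cc):
    """Ideal Class-B efficiency with a sine drive: (pi/4) * (Vpk/Vcc)."""
    return (pi / 4.0) * (v_pk / v_cc)

def class_b_dissipation(v_pk, v_cc, r_load):
    """Ideal output-stage dissipation (both devices): supply power
    drawn minus power delivered to the load."""
    p_supply = 2.0 * v_cc * v_pk / (pi * r_load)
    p_out = v_pk ** 2 / (2.0 * r_load)
    return p_supply - p_out

# Efficiency peaks at the onset of clipping (Vpk = Vcc): pi/4, ~78.5%
print(round(class_b_efficiency(50.0, 50.0) * 100, 1))  # 78.5
# Dissipation peaks well below full power, at Vpk = 2*Vcc/pi
print(round(class_b_dissipation(2.0 * 50.0 / pi, 50.0, 8.0), 1))
```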
+ +Class-D, on the other hand, bases its operation on switching the output devices between 2 states, namely 'on' and 'off'. Before discussing the topology specific details, we can say that in the 'on' state, a given amount of current flows through the device, while theoretically no voltage is present from drain to source (yes, almost every Class-D amp will use MOSFETs), hence power dissipation is theoretically zero. In the 'off' state, the voltage across the device will be the total supply rail voltage as it behaves like an open-circuit, and no current will flow (that's very close to reality).
+ +But how can our beloved audio signal be represented by an awful square wave with only two possible levels? Well, in fact it modulates some characteristics of this square wave so the information is there. Now we 'only' have to understand the way the modulation is done and how to restore the amplified audio signal from it. The most common modulation technique used in Class-D is called PWM (Pulse Width Modulation) - a square wave is produced that has a fixed frequency, but the time it is in the 'high' and 'low' states is not always 50%, but it varies following the incoming signal. This way, when the input signal increases, the 'high' state will be present for longer than the 'low' state, and the opposite when the signal is 'low'. If we do some maths, the mean value of the signal in a single cycle is simply ...
+ +Vmean = Vhigh × D + Vlow × (1-D), where D = Ton / T (duty cycle)
T being the period of the signal, i.e. 1 / Fsw (switching frequency).
+ +For example, the mean value of a 50% duty cycle (both states are present for exactly the same amount of time) signal going from +50V to -50V is: 50 × 0.5 + (- 50) × 0.5 = 0V. In fact, the idle (no signal) output of a Class-D amplifier is a 50% duty cycle square signal switching from the positive to the negative rail.
+ +If we modulate the input up to the maximum, we will have a near-100% duty-cycle. Let's use 99%: Vmean = 50 × 0.99 + (-50) × 0.01 = 49V. Conversely, if the signal is at its minimum, we need near 0% (let's use 1%), so Vmean = -49V.
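The worked examples above can be reproduced directly from the duty-cycle formula:

```python
def pwm_mean(v_high, v_low, duty):
    """Mean value of a PWM waveform over one cycle: Vhigh*D + Vlow*(1-D)."""
    return v_high * duty + v_low * (1.0 - duty)

print(pwm_mean(50.0, -50.0, 0.5))   # 0.0  - idle output, 50% duty
print(pwm_mean(50.0, -50.0, 0.99))  # 49.0 - near maximum modulation
print(pwm_mean(50.0, -50.0, 0.01))  # -49.0 - near minimum modulation
```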
+ +PWM is usually generated by comparing the input signal with a triangle waveform as shown in Figure 2. The triangle wave defines both the input amplitude for full modulation and the switching frequency.
+ +Figure 3 shows a typical PWM signal modulated by a sine wave. Notice that it is designed so signals between -1 and 1V will produce 0% to 100% duty cycles, 50% corresponding to 0V input. The 'digital' output uses standard logic levels, where 0V is a logic '0' and 5V is a logic '1'. Because of this digitisation of the signal, PWM amps are sometimes erroneously referred to as digital amps. In fact, the entire process is almost completely analogue, with any 'digital' circuitry being somewhat incidental.
+ +Notice that for a correct representation of the signal, the frequency of the PWM reference waveform must be much higher than the maximum input frequency. Following the Nyquist theorem, we need at least twice that frequency, but low distortion designs use higher factors (typically 5 to 50). The PWM signal must then drive power conversion circuitry so that a high-power PWM signal is produced, switching from the +ve to -ve supply rails (assuming a half-bridge topology).
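A natural-sampling PWM modulator is just a comparator between the input signal and the triangle carrier. The sketch below illustrates the idea; the sample rate, carrier frequency and function names are my own illustrative choices:

```python
from math import pi, sin

def triangle(phase):
    """Triangle carrier spanning -1..+1; phase is in carrier cycles."""
    return 2.0 * abs(2.0 * (phase % 1.0) - 1.0) - 1.0

def pwm_modulate(samples, f_sw, f_sample):
    """Naturally sampled PWM: output 1 where the input exceeds the
    triangle carrier, 0 otherwise."""
    return [1.0 if s > triangle(i * f_sw / f_sample) else 0.0
            for i, s in enumerate(samples)]

# 1kHz sine, 250kHz carrier, 10MHz 'simulation' rate (illustrative values)
fs = 10_000_000
sig = [0.8 * sin(2.0 * pi * 1e3 * i / fs) for i in range(20000)]
pwm = pwm_modulate(sig, 250e3, fs)
# The duty cycle, averaged over a few carrier periods, follows the input
```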
+ +The spectrum of a PWM signal has a low frequency component that is a copy of the input signal's spectrum, but it also contains components at the switching frequency (and its harmonics) that must be removed in order to reconstruct the original modulating signal. A power low-pass filter is necessary to achieve this. Usually, a passive LC filter is used, because it is (almost) lossless and has little or no dissipation. Although there must always be some losses, in practice these are minimal.
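The reconstruction filter's corner follows from the usual LC resonance formula. The component values below are illustrative assumptions only, not taken from any particular design:

```python
from math import pi, sqrt

def lc_cutoff(l_henry, c_farad):
    """Corner (resonant) frequency of an LC low-pass: 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * pi * sqrt(l_henry * c_farad))

# e.g. 22uH with 1uF gives a corner near 34kHz - above the audio band,
# but far below a switching frequency of a few hundred kHz
print(round(lc_cutoff(22e-6, 1e-6) / 1e3, 1), "kHz")  # ~33.9 kHz
```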
+ + +There are basically two Class-D topologies - half-bridge (2 output devices are used) and full-bridge (4 output devices). Each one has its own advantages. For example, half-bridge is obviously simpler and has more flexibility, as a half-bridge amplifier can be bridged just as with classical topologies. However, if it is not correctly designed and driven, it can suffer from 'bus pumping' (current transferred back into the power supply can increase its voltage, producing situations dangerous to the amplifier, supply and speaker).
+ +Full bridge requires output devices rated for only half the voltage compared with a half-bridge amplifier of the same power, but it is more complicated. Figures 4a and 4b show both topologies conceptually. Obviously, many components such as decoupling capacitors, etc. are not shown.
+ +Note that full bridge PWM amp needs only one supply rail - bipolar supplies are not necessary, but can still be used. When a single supply is used, each speaker lead will have ½ the Vdd voltage present. As it is connected differentially, the loudspeaker doesn't see any DC if everything is well balanced. However, this can (and does) cause problems if a speaker lead is allowed to short to chassis!
+ +The filter may be implemented by means of a single capacitor across the loudspeaker, by a pair of caps to ground, or in some cases by both (as shown by the dotted lines connecting the caps).
+ +For the rest of the document, we will concentrate on half-bridge topologies, although the vast majority of the ideas are also applicable to full-bridge designs.
+ + +The operation of the half bridge circuit depicted in Figure 4a is as follows ...
+ +When Q1 is on (corresponding to the positive part of the PWM cycle), the switching node (inductor input) is connected to Vdd, and current starts to increase through it. The body diode of Q2 is reverse biased. When Q2 is on (negative part of the PWM cycle), the body diode of Q1 is reverse biased and the current through Lf starts to decrease. The current waveform in Lf is triangular shaped.
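The peak-to-peak value of that triangular ripple follows directly from V = L·di/dt. The component values below (±50V rails, 22µH filter inductor, 300kHz switching) are illustrative assumptions:

```python
def ripple_current_pp(v_rail, l_filter, f_sw, duty=0.5):
    """Peak-to-peak triangular ripple current in the filter inductor.
    At idle (50% duty, zero output voltage) the inductor sees v_rail
    across it for half the switching period, so dI = V * dt / L."""
    t_on = duty / f_sw
    return v_rail * t_on / l_filter

# Illustrative values: +/-50 V rails, 22 uH inductor, 300 kHz switching
print(round(ripple_current_pp(50, 22e-6, 300e3), 2))  # ~3.79 A p-p
```

This ripple flows regardless of the audio signal, which is one reason the inductor must be designed for substantially more than the nominal load current.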
+ +Obviously, only one of the transistors must be on at any time. If for any reason both devices are enhanced simultaneously, an effective short-circuit between the rails will be produced, leading to a huge current and the destruction of the MOSFETs. To prevent this, some "dead-time" (a small period where both MOSFETs are off) has to be introduced.
+ +Lf in conjunction with Cf and the speaker itself form the low pass filter that reconstructs the audio signal by averaging the switching node voltage.
+ +Timing is critical throughout this process: any error, such as delays or slow rise-times of the MOSFETs, will ultimately affect efficiency and audio quality. All of the components involved must be high-speed. Dead-time also affects performance, and it must be minimised. At the same time, the dead-time must be long enough to ensure that under no circumstances are both MOSFETs on at the same time. Typical values are 5 to 100ns.
+ +The dead-time is a critical factor for distortion performance. For lowest distortion, the dead-time must be as small as possible, but this risks 'shoot-through' currents, where both MOSFETs are on simultaneously. This not only increases distortion and dissipation dramatically, but will quickly destroy the output devices. If the dead-time is too great, the response of the output stage no longer follows the true PWM signal generated in the modulator, so again distortion is increased. In this case, dissipation is not affected.
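A discrete-time sketch of dead-time insertion is shown below: complementary gate signals are derived from the ideal PWM stream, with each turn-on edge delayed by a guard interval so the two drives can never overlap. This is an illustrative model only, not a practical gate-drive circuit:

```python
def add_dead_time(pwm, dead_samples):
    """Derive high-side and low-side gate signals from an ideal PWM
    stream (list of 0/1). A device turns on only after its drive has
    been asserted for more than dead_samples, so both MOSFETs are
    briefly off around every transition (no shoot-through)."""
    n = len(pwm)
    high, low = [0] * n, [0] * n
    for i in range(n):
        window = pwm[max(0, i - dead_samples):i + 1]
        high[i] = 1 if all(window) and len(window) > dead_samples else 0
        low[i] = 1 if not any(window) and len(window) > dead_samples else 0
    return high, low

high, low = add_dead_time([0, 0, 1, 1, 1, 1, 0, 0, 0, 0], 1)
print(high)  # [0, 0, 0, 1, 1, 1, 0, 0, 0, 0]
print(low)   # [0, 1, 0, 0, 0, 0, 0, 1, 1, 1]
```

Note how every transition leaves at least one sample where both outputs are low - that gap is the dead-time, and the model makes the distortion mechanism obvious: the switched output no longer exactly matches the ideal PWM.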
+ + +To ensure fast rise/fall times of the MOSFETs, the gate driver must provide quite a high current to charge and discharge the gate capacitance during the switching interval. Typically, 20 - 50ns rise/fall times are needed, requiring more than 1A of gate current.
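The required gate current can be estimated from the MOSFET's total gate charge and the target switching time (I ≈ Q/t); the 30nC figure below is an assumed, typical value for a power MOSFET of this class:

```python
def gate_drive_current(q_gate, t_switch):
    """Average current needed to move the total gate charge q_gate
    (coulombs) within the target switching time t_switch (I = Q / t)."""
    return q_gate / t_switch

# Illustrative: a MOSFET with 30 nC total gate charge, 25 ns edge time
print(gate_drive_current(30e-9, 25e-9))  # 1.2 (amps)
```

This is why simple logic-level outputs cannot drive the gates directly - a dedicated driver stage with an ampere-class peak output is needed.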
+ +Note that the schematics shown use N-channel MOSFETs for both positions. Although some designs use N and P channel complementary devices, that is IMO sub-optimal due to the difficulty of obtaining suitable P devices and matched pairs. So let's concentrate on N-channel only half-bridges. Note that, in order to turn a MOSFET on, a voltage above Vth must be present between its gate and source. The lower MOSFET has its source connected to -Vss, so its drive circuit has to be referred to that node instead of GND.
+ +The upper MOSFET is more difficult to drive, as its source is continuously floating between +Vdd and -Vss (minus the drops due to on-resistance). Its driver must also float on the switching node and, what's more, for the on-state its supply voltage must be several volts above +Vdd, so that a positive Vgs is created when Q1 is on. This also implies level shifting so the modulator circuit can communicate correctly with the driver.
+ +This is one of the major difficulties of Class-D design: gate drive. To solve the issue, several approaches are taken ...
Figure 5 (a, b & c) depict some possibilities for 'High Side' gate driving ...
+ +Figure 5a - Transformer Coupled
+Figure 5b - Discrete BJT Driver
+Figure 5c - IC Driver
Note that the circuits in Figures 5b and 5c have their PWM input referred to -Vss, so they may require prior level shifting of the comparator output, which will normally be referred to GND. Fig 5a will require level shifting of the inverted PWM only, as the transformer input can be referenced to GND as shown. Many of the driver ICs available now have inbuilt level shifters, and these are optimised for speed. Remember that any delay introduced into the switching waveform can cause distortion or simultaneous MOSFET conduction.
+ +We still have one problem to solve ... obtaining 12V above VS (the switching node). We could add another power supply, isolated from the main one, with its negative terminal connected to VS. This solution can be impractical, so other techniques are commonly used. The most widespread is a 'bootstrap' circuit. The bootstrap technique uses a charge pump built with a high speed diode and a capacitor. The output of the amplifier produces the switching pulses needed to charge the capacitor.
+ +This way, the only auxiliary power supply needed is 12V referenced to -Vss that is used for powering both the low side driver and the charge pump for the high side driver. As the average current from this supply is low (although there are high current charging peaks during the switching events, they last only 20-50ns, twice during a cycle, so the average is quite low, in the 50-80mA range), this supply is easily obtained from the negative rail with a simple 12V regulator (paying attention to its maximum input voltage rating, of course).
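The gate-charge component of that average supply current can be estimated as I = n·Qg·fsw. The values below are assumptions for illustration; driver quiescent current adds to the result, which is why the figure quoted in the text (50-80mA) is somewhat higher than the bare gate-charge estimate:

```python
def driver_supply_current(q_gate, f_sw, n_fets=2):
    """Average current drawn from the gate-drive supply: each MOSFET
    gate takes q_gate coulombs once per switching cycle, so the
    average is n_fets * q_gate * f_sw (gate-charge component only)."""
    return n_fets * q_gate * f_sw

# Illustrative: two 60 nC gates switched at 300 kHz
i_avg = driver_supply_current(60e-9, 300e3)
print(round(i_avg * 1e3, 1))  # 36.0 (mA) - driver quiescent current adds to this
```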
+ + +As can be seen from the previous figures, in order to excite the MOSFET driver, the PWM signal has to be referred to -Vss. So, as the modulator usually works from +/-5 to +/-12V, a level shifting function is needed. One can choose to shift the level of the PWM signal and then generate the inverted version, or generate both versions first and then shift each of them. It depends, for example, on the comparator type used (if complementary outputs are available, the decision is made).
+ +A basic level shifting function can be performed with a single or two-transistor circuit similar to the one depicted in Figure 6 (before the high side driver). While this may work at low frequencies, it is important to simulate the behaviour of the comparator and level shifter, as they can introduce considerable delays and timing errors if not properly designed.
+ +It is fair to say that the level shifter is one of the most critical parts of the circuit, and this is evidenced by the wide variety of competing ICs designed for the job. Each will have advantages and disadvantages, but in all cases the complexity is far greater than may be implied by the simplified diagrams.
+ + +The output filter is one of the most important parts of the circuit, as the overall efficiency, reliability and audio performance depends on it. As previously stated, a LC filter is the common approach, as it is (theoretically) lossless and has a -40dB/decade slope, allowing for a reasonable rejection of the carrier if the parameters of the filter and the switching frequency itself are properly designed.
+ +The first thing to do is to design the transfer function for the filter. Usually, a Butterworth or similar frequency response is chosen, with a cutoff frequency slightly above the audio band (30-60kHz). Bear in mind that one of the design parameters is the termination load, that is, the speaker impedance. Usually, a typical 4 or 8 ohm resistive load is assumed, but that produces variations in the measured frequency response in the presence of different speakers. This must be compensated for by means of proper feedback network design. Some manufacturers simply leave it that way, so the response is strongly dependent on the load - surely an undesirable thing.
+ +The design can be done mathematically or simply use one of the many software programs available that aid in the design of LC filters. After that, a simulation is always useful. Figure 7 shows a typical LC filter for Class-D amplifiers and its typical frequency response.
+ +This simple filter has a -3dB cutoff frequency of 39kHz (with a 4 ohm load), and suppresses the carrier by as much as 31dB at 300kHz. For example, if our supply rails are +/-50V (enough for about 275W at 4 ohms), the residual ripple will have an amplitude of about 1V RMS.
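A filter like this can be cross-checked with the standard second-order Butterworth formulas. Note that the simple asymptotic -40dB/decade estimate comes out a few dB better than the 31dB quoted above, as expected for an idealised calculation that ignores the real load:

```python
from math import pi, sqrt, log10

def butterworth_lc(f_c, r_load):
    """Second-order Butterworth LC low-pass terminated in r_load:
    L = sqrt(2) * R / (2*pi*fc),  C = 1 / (sqrt(2) * 2*pi*fc * R)."""
    w_c = 2 * pi * f_c
    return sqrt(2) * r_load / w_c, 1 / (sqrt(2) * w_c * r_load)

def carrier_rejection_db(f_carrier, f_c):
    """Asymptotic (-40 dB/decade) estimate of attenuation at the
    switching frequency; measured rejection is typically a few dB less."""
    return 40 * log10(f_carrier / f_c)

L, C = butterworth_lc(39e3, 4)                 # 39 kHz cutoff, 4 ohm load
print(round(L * 1e6, 1), round(C * 1e9))       # ~23.1 (uH), ~721 (nF)
print(round(carrier_rejection_db(300e3, 39e3), 1))  # ~35.4 dB asymptotic
```

The resulting ~23µH / ~720nF values are typical of practical Class-D output filters, and the capacitor value sits inside the 200nF-1µF range discussed later.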
+ +This ripple is obviously inaudible, and 1V RMS will dissipate only around 200mW in a typical tweeter (not likely a problem, especially since the tweeter's impedance will be a lot higher than 8 ohms at 300kHz). However, care must be taken, as the speaker wires can become an antenna and affect other equipment. In fact, although a couple of volts RMS of ripple can seem low enough to run your speakers safely, EMI can be a concern, so the less carrier level you have, the better. For further rejection, higher order filters are used (with the potential disadvantage of increased phase shift in the audio band), although there are other clever ways to do it, such as very selective bandstop or 'notch' filters tuned to the carrier frequency (provided it is fixed, and that only happens in synchronous designs such as the one described).
+ +Well designed Class-D amplifiers have a higher order filter and/or special carrier suppression sections in order to avoid problems with EMI. As can be seen in Figure 8, the response is dependent on the load, and in fact the load is part of the filter. This is one of the problems to solve in Class-D designs. It doesn't help that a loudspeaker presents a completely different impedance to the amplifier than a test load, and many PWM amps have filters that are not (and never can be) correct for all practical loudspeaker loads. Again, only a handful of good Class-D amplifiers use feedback techniques that include the output filter to compensate for impedance variations and have a nearly load independent frequency response, as well as to reduce distortion produced by non-linearities in the filter. Although passive components are thought to be distortion-free, this does not apply to ferrite or powdered iron cores that are used for the filters. These components most certainly do introduce distortion.
+ +Now, The Filter Components ...
+The output inductor has to withstand the whole load current, and also have storage capability, as in any non-isolated switching converter (Class-D half bridge design is in fact analogous to a buck converter, its reference voltage being the audio signal).
The ideal inductor (in terms of linearity) is an air-core one, but the size and number of turns required for typical Class-D operation usually makes it impractical, so a core is normally used in order to reduce turns count and also provide a confined magnetic field that reduces radiated EMI. Powder cores or equivalent materials are the common choice. It can also be done with ferrite cores, but they must have an air-gap to prevent saturation. Wire size must also be carefully chosen so DC losses are low (requiring thick wire) but also skin effect is reduced (AC resistance must also be low).
+ +The inductor core can be a drum core, gapped ferrite RM core, or toroidal powder core, among others. Drum cores have the problem that their magnetic field is not enclosed, hence producing more radiated EMI. RM cores solve this problem but have most of the coil enclosed, so cooling problems may arise as no airflow is possible. IMO, toroids are preferred because they feature a closed magnetic field that helps control radiated EMI, a physically open structure that allows proper cooling, and easy and economical winding as they don't need bobbins.
+ +Many core manufacturers such as Micrometals or Magnetics offer their own software, which is very useful for designing the output inductor, as it helps in choosing the right core, wire size and geometrical parameters. The capacitor usually falls in the 200nF to 1uF range, and must be of good quality. The capacitor is responsible (in part) for high frequency behaviour and needs low losses. Of course it has to be rated for the whole output voltage, but preferably much higher. Usually, polypropylene capacitors are chosen, and X2 mains capacitors are common. Needless to say, you cannot use electrolytics!
+ + +As I have stated previously, timing errors can lead to increased distortion and noise. This cannot be ignored, and the more precisely timing is maintained, the better the design will perform. Open loop Class-D amplifiers are not likely to satisfy demanding specifications, so negative feedback is almost mandatory. There are several approaches. The simplest and most common is to take a fraction of the switching signal, precondition it by means of a passive RC low pass filter, and feed it back to the error amplifier.
+ +To put it simply, the error amplifier is an opamp placed in the signal path (before the PWM comparator) that sums the input signal with the feedback signal to generate an error signal that the amplifier automatically minimises (this is the concept behind every negative-feedback system).
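The error-minimising behaviour is the standard negative-feedback result: errors generated inside the loop are reduced by the loop gain factor (1 + Aβ). A minimal numeric sketch, using assumed gain figures for illustration:

```python
def closed_loop_gain(a_open, beta):
    """Standard negative-feedback result: Acl = A / (1 + A*beta).
    Distortion generated inside the loop is reduced by the same
    factor (1 + A*beta)."""
    return a_open / (1 + a_open * beta)

# Illustrative: open-loop gain 1000, feedback fraction 1/20
acl = closed_loop_gain(1000, 0.05)
print(round(acl, 2))           # ~19.61, close to the ideal 1/beta = 20
print(round(1 + 1000 * 0.05))  # error reduction factor, ~51
```

The more loop gain available at the frequency of interest, the closer the output follows the input - which is why timing errors and delays (which erode usable loop gain) matter so much in Class-D.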
+ +
Figure 8 - Typical Feedback Network Connections
Although good results are obtained this way, there is still a problem: load dependency, due to the speaker being an integral part of the filter, hence affecting its frequency response as shown above.
+ +Some more advanced amplifiers take the feedback signal directly from the output, after the filter, to compensate for this. This way, a constant frequency response is obtained, with the further benefit that the inductor resistance contributes much less to the output impedance, so it is kept lower and the damping factor is higher (better speaker control). However, taking feedback after the filter is not an easy task. The LC introduces a pole pair and hence a phase shift that, if not properly compensated, will make the amp become unstable and, ultimately, oscillate. Feedback may be taken from both the switching node and the filter output. Although this can give very good results, it is still difficult to maintain stability because of the phase shift through the output filter.
+ + +Pure PWM (based on triangle generators, also called 'natural sampling PWM') is not the only way to construct a Class-D amplifier. Several other topologies have arisen, many of them based on self-oscillation, where the hysteresis of the comparator and the delays between the comparator and power stage are taken into account to design a system that oscillates by itself in a somewhat controllable manner.
+ +Although simpler, these designs have some disadvantages, IMO. For example, the switching frequency is not fixed, but depends on the signal amplitude. This makes output notch filters ineffective, yielding higher ripple levels. When several channels are put together, the difference in switching frequency between them can produce beat frequencies that can become audible and very annoying. This can of course also happen with synchronous designs such as the one described here, but there is a simple solution - use the same clock for all the channels.
+ +Some self oscillating designs may have other difficulties, such as start-up: special circuitry may be needed to force the amplifier to start oscillating. Conversely, if for any reason the oscillation stops, you could end up with an 'always-on' MOSFET, and thus a large amount of DC at the output, followed almost immediately by a dead loudspeaker. Of course, these issues can be solved with proper design, but the added complexity can negate the initial simplicity, so nothing is gained.
+ +Low distortion in a PWM amplifier requires a very linear triangle waveform, along with a very fast and accurate comparator. At the high operating frequencies needed for optimum overall performance, the opamps used need to have a wide bandwidth, extremely high slew rate, and excellent linearity. This is expensive to achieve, requiring premium devices. Some of these constraints are relieved somewhat by self oscillating designs (therefore making them slightly cheaper), but this is not an effective trade-off for the most part.
+ +Clocked designs (fixed frequency) are not easier to make than self-oscillating or modulated switching frequency designs, but are certainly far more predictable and tend to have fewer problems overall. The ability to synchronise multiple amplifiers ensures that mutual interference is minimised. An 'advantage' claimed by the proponents of non-clocked and 'random switching' designs is that the RF energy on the speaker leads is spread over a wide frequency range, potentially making such amplifiers more likely (or perhaps less unlikely) to pass EMI testing. From an overall perspective, this is more likely to be a hindrance than a benefit, as it is no longer possible to optimise the filter network for maximum switching frequency rejection.
+ +There are also PWM amps that claim to be truly 'digital', using One-Bit™ technology, or generating the PWM signal directly from the PCM data stream. Although the manufacturers of such amplifiers will naturally proclaim their superiority over all others, such self-praise should generally be ignored. Implementing feedback in a 'pure' digital design is at best difficult, and may be impossible without using a DSP (digital signal processor) or resorting to an outboard analogue feedback system. Including additional ADCs and DACs (analogue to digital converters and vice versa) is unlikely to allow the amplifier to be any 'better' than the direct analogue methods described in this article.
+ +A relative newcomer to the scene is the Sigma-Delta modulator, however at the time of writing this still has problems (challenges in corporate speak). The main issue is that the transition rate is too high, and it must be reduced to accommodate real-world components - particularly the power switching MOSFETs.
+ +The 'pure' digital solutions described above have another shortfall, and that's the fact that the number of different pulse widths is finite, and determined by the clock speed. A digital system can only switch on a clock transition. Based on currently available information, only around 8 x oversampling is possible if a digital noise shaping filter is added to the system. An analogue modulation system has an effectively infinite number of different pulse widths, but this is not possible with any true digital implementation.
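The finite pulse-width count is easy to quantify: a clocked digital modulator can only produce f_clock/f_sw distinct widths per switching cycle, which sets the raw amplitude resolution. The clock and switching frequencies below are assumptions for illustration:

```python
from math import log2

def pwm_resolution_bits(f_clock, f_sw):
    """A clocked digital modulator can only switch on clock edges,
    so the number of distinct pulse widths per cycle is
    f_clock / f_sw; resolution in bits is log2 of that count."""
    levels = f_clock / f_sw
    return levels, log2(levels)

# Illustrative: 100 MHz clock, 400 kHz switching frequency
levels, bits = pwm_resolution_bits(100e6, 400e3)
print(int(levels), round(bits, 2))  # 250 distinct widths, ~7.97 bits
```

This is why purely digital PWM needs noise shaping (or a much faster clock than is practical for power MOSFETs) to approach audio-grade resolution, whereas an analogue modulator has an effectively continuous range of pulse widths.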
+ +These latter comments cover a very complex area, one that is outside the scope of this article. However, even the scant information above will give most readers far more information than is commonly available - especially from manufacturers of digital Class-D amplifiers.
+ + +In conclusion, Class-D amplifiers have evolved a lot since they were first invented, achieving levels of performance similar to conventional amplifiers, and even better in some aspects, like an inherent low output impedance that allows effortless bass. All this, with the great advantage of high efficiency. Of course, only if they are properly designed.
+ +However, although very attractive, Class-D designs are not very DIY friendly. In order to achieve a properly working design in terms of efficiency, performance and EMI, very careful PCB layout is mandatory, some component selections are critical and of course proper instrumentation is absolutely required.
+ +This article has been written in order to shed some light on the internals, advantages and difficulties of this not very well-known (and even less well understood) technology. Everyone thinks that 'Class-D' stands for 'Digital'. I hope that after reading this article, no-one thinks that any more.
Elliott Sound Products - Power Amplifier Development
Since the very first power amplifiers were developed in the 1920s, there have been countless different designs. The primary goals were usually to get more power with less distortion, and this quest continues. Early amps were very low power, and were paired with highly sensitive speakers, often using horn loading to get more 'noise' with the limited power available. Single-ended triode amplifiers were initially the only option, until design skills improved and new designs were developed. Most of these early single-ended amps would struggle to get even 5W output, using large power triodes. Push-pull operation could increase output by a factor of at least four.
Initially, valves were used primarily for radio/ 'wireless' detection and amplification, along with telecommunications (the latter has been a primary 'driver' of electronics development until fairly recently). Once people realised that it was possible to amplify weak signals to drive a loudspeaker (especially for wireless, public address and 'talking' movies from 1927 onwards), the race was on to get more power. It was quickly discovered that push-pull was superior to single-ended operation in all respects. Early valve amplifiers used triodes, because they had nothing else until the pentode was invented in 1930, with the beam-tetrode following not far behind (to avoid the patents held by Philips). 'True' tetrodes had a limited production, because they didn't work very well (hence the pentode).
With the advent of the transistor in 1948 there was a whole new design process to master once transistors became commercially available. By the 1970s, most development of valve amplifiers had ceased, since the 'writing was on the wall'. Some of the early transistor designs were very poor by modern standards, but some compared favourably against 'equivalent' valve circuitry. That's not to say that they were markedly 'better' than an equivalent valve power amp, and many people complained that they were inferior for a variety of reasons.
Not all of the complaints were justified, but there's no doubt at all that many were more than justified. Depending on the texts you read, some of the issues were purely subjective, because many people didn't (and some still don't) trust that transistors could achieve good results. Some differences resulted from the much lower output impedance of transistor amps. This came about for two reasons ... most valve amps had limited feedback because phase shift in the output transformer could cause the amp to oscillate if the feedback ratio was too high. Coupled with the high output impedance of valves (determined mainly by the internal plate resistance), these amps allowed the loudspeakers of the day to 'do their own thing' to an extent. With a typical damping factor of somewhere between unity and ten, the speaker would produce more bass and often more treble as well. The bass would often tend to be somewhat 'boomy', because Thiele-Small parameters were way off in the future, so enclosure designs were often empirical, and 'optimised' with common power amplifiers of the era.
Transistors changed this. Not only is their output impedance much lower than valves, but they also (generally) have higher gain. This meant that more feedback could be used, reducing both distortion and output impedance. If used with a speaker designed for a Mullard 5-10 valve amp (for example) the speakers would tend to be lacking bass response due to the higher damping factor. Mullard also produced a design using transistors (called the Mullard 10-10), and while it was popular back in the early 1960s, the sound quality would almost certainly be considered inferior to the valve version.
The earliest transistors available were germanium, and only PNP devices could be made with high performance. NPN transistors were also made, but they had lower gain and worse high frequency response than their PNP counterparts (silicon is the reverse - NPN devices are [usually] superior to PNP). Maintaining the correct bias current was always a challenge with germanium transistors, and early designs commonly used a thermistor (attached to the heatsink) in an attempt to prevent thermal runaway. This is a condition where the transistors get hot, so their gain and leakage both increase. This causes them to draw more current and therefore run hotter, creating a vicious cycle which would end when a device failed due to over-temperature. With a maximum junction temperature of around 90°C, germanium transistors were very easy to destroy!
This article is a small selection of amplifier designs that covers the period between 1958 and the present day. It's a small selection simply because there were probably thousands of different approaches, some quite similar, and others completely different, compared to others of the same era. I have tried to make the examples representative of the most common themes, but there are countless omissions because of the sheer number of different designs. Some of those would have been unmitigated disasters, with many others not far behind, but those shown are (for the most part) at least 'competent'. Not wonderful, but able to do the job well enough for the 'average' listener.
This article isn't strictly a 'timeline', but I have tried to keep the sections in at least an approximation of chronological order. With some, that's difficult because the date of issue of a design isn't always available, and in some cases there's at least some overlap. This is particularly true of amps like the Williamson, which is held in high regard to this day.
The 1970s were probably the 'golden age' for amplifier design. Interest in quality sound was very high, and manufacturers were all after the dollars that the so-called 'baby boomers' had, in greater supply than ever before. There's a very interesting article on the Audioholics website - see 70s Stereo Gear. It doesn't look at the circuit designs, but it does help to explain why so many people are nostalgic for equipment of that era.
It makes sense to show one of the standards, against which most new designs would be compared. The Williamson amp is considered a 'classic' valve amplifier in all respects, and few other valve amps could match it for performance. Naturally, there were some that could better it, but at a significantly higher cost. Dating from 1947, it has remained a popular choice as one of the most 'definitive' amplifiers of its time. With 15W output (rather a lot for 'home duties' in 1947) and THD (total harmonic distortion + noise) below 0.1% at full power, it was an exemplary design for the day. In theory, it could be pushed to provide 20W, but with reduced performance. Various configurations can be found (including 'ultra-linear'), but the circuit shown next is the original.
Figure 1.1 - Williamson Valve Power Amp (1947)
The L63 valves were general purpose octal based triodes, common when the Williamson amp was developed. Later versions used either a pair of 6SN7 or 12AU7 twin triodes. Like most valve amps, the circuit is superficially 'simple', but a great deal of the performance depends on the output transformer. This has always been the Achilles heel of valve amps, and an otherwise (close to) perfect design will be laid to ruin by an output transformer that isn't up to the task. The apparent simplicity (assisted by the belief that valves are 'linear') belies the embedded complications. They can all be overcome using the military technique of throwing money at the problem until it goes away, but even finding someone who can wind a good output transformer gets harder all the time.
Figure 1.2 - Mullard 5-10 Valve Power Amp (1952)
Another popular valve design was the Mullard 5-10 (10W) amplifier, using an EF86 preamp, an ECC83 (12AX7) 'phase splitter' and a pair of EL84 output valves. It was never up to the standards of the Williamson, but it was comparatively inexpensive and developed a huge following after its introduction in 1954. Distortion performance was not quite as good as the Williamson (around 0.3% THD at 10W output), but that was considered more than acceptable at the time. While it's a great deal cheaper than the Williamson, it will still require a considerable outlay, particularly for the output and power transformers. For just 10W of 'clean' output, the cost cannot be justified today.
With the cost of valves today, and considering that you still need a power transformer, at least one filter choke (inductor), two output transformers along with the other parts needed to build two power amps, it adds up to a fairly scary number very quickly. Given the cost/ performance you can get with a modern transistorised amplifier (which beats the Williamson in every respect), I see no point spending perhaps $1,000 for a valve amp that can't equal even a 'lowly' LM3886 power amp IC. Some may choose to disagree, but tests and measurements will quickly reveal which design is superior.
The simple fact is that valve designs have great nostalgia, particularly with people who did not grow up with them. When I went through college, everything was valve based, and I still have some of my old text books as well as many valve data manuals, along with the 'bible' - the Radiotron Designer's Handbook (Langford-Smith). I do admit to the occasional hankering to build a decent valve amp, but my aspirations are tempered by the cost of such an endeavour. It also has to be said that I really don't need another amplifier, and doubly so for one that will cost me a small fortune to build. I have a couple of valve amps (of course I do), but they are mono, high-power and not at all suited to hi-fi.
Valve life, high temperatures and voltages all work against reliability. Even getting hold of decent valves can be a chore now. They are available of course, but they are expensive, and supply can be patchy. Many of these factors caused great excitement when transistors became available, particularly the ability to build an amplifier that didn't need an output transformer. With no heaters, the equipment could run cooler, and the use of low voltage made everything easier. Alas, everything wasn't as rosy as first imagined, so many early designs were 'sub-optimal' for a variety of reasons.
When transistors were first introduced, it was difficult to get performance that came even close to the better valve amps available. There were some very simple power amps that operated in single-ended Class-A, often using an inductor load. These were no better or worse than single-ended Class-A valve designs that were common in radios (AM, sometimes with SW [short-wave]) and 'radiograms' with an in-built turntable for 78 RPM (and later 33⅓ RPM) discs. Most turntables, or more commonly 'record changers' which could hold a stack of discs and drop one at a time, were fitted with a ceramic pickup cartridge.
Many early transistorised amplifiers used techniques that were common in the earliest valve designs, using a drive transformer and an output transformer. In the late 1960s and early 1970s, many transistor guitar and PA amplifiers retained the drive transformer, because it made matching to the output stage much easier. None of these designs was 'hi-fi' by any stretch of the imagination. It was only when designers took a different approach that the 'modern' solid state amplifier became viable for home listening.
The designs available now are such that not even the best (and most expensive) valve amps can compete. Output power is higher than (almost) anything built using valves, with lower distortion and wider bandwidth than can be achieved with any of the 'vacuum-state' amps of yesteryear. We are now at the point of diminishing returns - getting 100W with 0.02% THD (and similar levels of intermodulation distortion) is easy and relatively inexpensive.
Figure 2.1 - Inductor Load Car Radio Power Amp
The drawing above shows the basics of a 'typical' inductor-loaded power amplifier. The circuit shown is adapted from the original RCA version [ 11 ], and biasing resistors are modified as needed to suit silicon transistors (those used at the time were germanium). As simulated, it can deliver around 3W. The transistors shown were common at the time, with the AD149 rated for a dissipation of 37.5W and able to handle a junction temperature of 100°C (most germanium transistors were limited to 90°C). With a breakdown voltage of 30V, it was used in many power amps until silicon displaced germanium in all but a (very) few niche applications.
The inductor load offers a unique advantage, in that it will (theoretically) allow up to 24V peak-to-peak output voltage with a 12V supply, but this is never realised in practice. The inductor load allowed more power from a 12V car battery than was otherwise possible at the time. With a maximum output power of around 8W, the circuit was a big deal back then. Interestingly, I was completely unable to find an 'official' schematic of one of these amplifiers on the interwebs. Perhaps I was looking in the wrong places, but normally an image search will turn up something other than what I have already published. I ultimately found a circuit in an old RCA transistor manual, and the drawing is adapted from that.
Some car radios used a drive transformer that makes the transistor circuit a lot easier to drive from valve stages. Hybrid car radios were fairly common in the late 1950s and early 1960s. I had one in my car in about 1966. They used valves for the RF stages and the audio preamplifier, with a transistor output stage. Compared to modern units they were very ordinary, but they generally provided marginally more output power than the all-valve car radios that came before.
There's no doubt at all that most designers of the day did their best, but some made serious errors of judgement at times. In their defence, they were dealing with a completely new technology, vastly different from the valve designs that everyone was used to. Low voltages, much higher currents, transistors in an early stage of development and a design process that was at odds with everything they had learned made life hard for a designer. We have benefited from these early designs, because it's human nature to want to do better, and the improvements were rapid. By the mid 1970s designers had produced results that were every bit as good as the best valve designs that came before (often far better), and some of this equipment has since achieved 'cult status' amongst hi-fi enthusiasts. Are they really that good? In reality probably not, but that never stopped anything from becoming the 'holy grail'.
Figure 2.2 - Transformer Coupled With Transformer Output Power Amp
The above is an example of a power amplifier using an inter-stage coupling transformer, along with a transformer output. This design dates from the late 1950s, and is adapted from a book written by Clive Sinclair [ 1 ] (about whom there's more info below). The component values are for a 250mW amplifier, and this was a very common circuit in transistor radios in the 1960s. It's not difficult to scale the parts for more output power, but there's little point. Because the design uses germanium transistors, I can't simulate it. It would be foolish to try to replicate a design such as this, as the requirement for custom transformers would immediately rule it out on economic grounds.
Similar (but actually quite different) high-power (100-200W) circuits were used by many manufacturers (including me) in the early 1970s, using the driver transformer but without an output transformer. The driver transformers were readily available at the time, and were inexpensive. They made it (relatively) easy to build a high-powered amp with the minimum of fuss.
Figure 2.3 - Transformer Coupled With Direct-Coupled Output Power Amp
The circuit shown in Figure 2.3 is an example of a transformer-driven output stage. The trimpots (R7, R11) are used to set both the quiescent current and the DC output voltage. This makes them interdependent, and setup is somewhat fiddly. The output stage itself has a high output impedance, and the feedback is needed to get the impedance down to something passably sensible. The circuit shown can deliver 100W into 4Ω. The output transistors (Q4 & Q5) were typically operated with zero bias current, and R9 and R13 are low value (around 4.7Ω), so the driver transistors supplied the output current up to 120mA or so. Beyond that, the 'main' output transistors took over. The use of a dual supply is entirely optional, and an output coupling capacitor can be used if the amp is used with a single (positive) supply.
Performance was acceptable for PA (public address) at the time, and they were common in early transistor guitar amps. I still have one of the amps I built using this technique, and it works fine after close to 50 years! Compared to a modern design (such as the P27 guitar amp) it's lacking in most respects, but at the time it was a good amp, and I built them with up to 200W into 4Ω. Needless to say they did not use germanium transistors. The output transistors I used were made by Solitron - 97SE113, and unfortunately no datasheet can be found for them. Several other manufacturers used them as well, and they had a reputation for being almost indestructible. The output stage used driver transistors as well (connected as Darlington pairs) to minimise loading on the driver transformer. The latter used a ratio of 1.5:1+1 which was common at the time. The amps I built used an opamp (µA741) to drive the transformer, but most others used a transistor as shown.
It makes sense to start the (serious) solid-state designs with this amp, because it was introduced as a 'replacement' for the Mullard 5-10 valve design. Stereo was just starting to make serious inroads, so single-channel amplifiers rapidly fell from favour. Valve designs (new or revised) were still being published up until around 1963, but after that they started to fade away. Transistors were expensive, but there was no need for an output transformer. These were often referred to as 'OTL' - output transformerless amplifier. This reduced overall costs dramatically, and the voltages used were far more user-friendly. All the early transistor amplifiers used a single supply, with a capacitor feeding the loudspeaker. The capacitor is unfortunate (electrolytic caps are known to produce some distortion), but it also prevented the speaker from receiving DC if the amplifier failed.
The part designators shown are the same as those in the original 1960 Mullard publication [ 2 ], but I left out the preamp section. That was a very basic transistor design which also used silicon transistors, but has few redeeming features. It included a Baxandall bass and treble control, and while it would perform 'well enough', it's rather poor by modern standards. The circuitry used is discussed at some length in the article Discrete Opamp Alternatives. Note that R14 (100k trimpot) should really be 20k.
Figure 3 - Mullard 10-10 Transistor Power Amp
An unfortunate consequence of germanium transistors being predominantly PNP was that the designers chose to use a negative power supply, despite the fact that the AD161/162 devices are complementary (NPN and PNP). This is seen above, with a -30V regulated supply being used. The regulator isn't shown, but was a relatively simple affair, using an AD149 as the series pass device. The AD161/162 output transistors use the TO-66 package, and are rated for 30V and 1A, with 4W total dissipation, although the Mullard documentation says they have been 'upgraded'. Released in around 1960, the Mullard 10-10 was a popular design, although it was somewhat underpowered unless the user had very efficient speakers. The published design recommended that the speakers be around 5% efficient - that's equivalent to 99dB/W/m, and very few modern drivers even come close. Even 1% is a big ask, as that means 92dB/W/m.
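The efficiency-to-sensitivity conversion quoted above is easy to check. The 112.2dB constant below is the approximate SPL produced at 1 metre by 1 acoustic watt radiating into half-space:

```python
import math

def sensitivity_db(efficiency):
    """Convert loudspeaker efficiency (as a fraction) to sensitivity
    in dB/W/m. 1 acoustic watt into half-space gives ~112.2 dB SPL
    at 1 metre, so sensitivity = 112.2 + 10*log10(efficiency)."""
    return 112.2 + 10 * math.log10(efficiency)

print(round(sensitivity_db(0.05), 1))  # 99.2 dB/W/m for 5% efficiency
print(round(sensitivity_db(0.01), 1))  # 92.2 dB/W/m for 1% efficiency
```

Both results match the figures quoted in the text.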
Another major advantage of these 'new-fangled' transistor amps was that they were much smaller than their valve counterparts. With no output transformers to mount (or pay for!), nearly everything could be installed on a printed circuit board, another 'new' idea back then. A heatsink was needed for the power transistors (also something new), but with such a low power it didn't need to be substantial. Once started, the 'solid-state revolution' could not be stopped.
Interestingly, TR3 is a BC108, a silicon NPN transistor. TR3 is the error amplifier, and compares the signal at its base (the input) and emitter (attenuated output). If they are different, TR3 will attempt to make them equal, and any remaining difference shows up as distortion. TR4 (AC128) is the voltage amplifier stage (VAS) which drives the two output transistors. Germanium transistors have a serious temperature dependence, so a thermistor (R26) was used to maintain stable quiescent current. The collector load of TR4 is bootstrapped by C14 to ensure constant current through R20, which improves linearity.
The operating principle of the 10-10 was used by countless other designs, although the idea of a single negative supply rail didn't catch on. As near as I can tell, almost no-one else used it, and the more familiar positive supply took over (at least until amplifier designs started using dual (positive and negative) supplies). Something else that was discontinued very quickly was the use of power transistors without driver transistors, as was used by the 10-10. This made the performance far worse than it could have been, because the error amp (TR3) and voltage amplifier stage (TR4) are loaded more heavily than in later designs. However, it's hard to argue that it used too many transistors.
R14 (100k trimpot) is used to set the DC voltage at the junction of R22 and R23 to half the supply voltage, with the suggestion to use an oscilloscope to ensure that clipping is symmetrical. The speaker is connected via C15 (1,000µF), which blocks the DC from the speaker. There is no feedback from the output, so capacitor distortion from C15 would be measurable below around 50Hz or so. However, it's probable that any distortion from C15 would be masked by amplifier distortion, quoted as being 0.5% at 1W, rising to 0.8% at 10W output. So, while the 10-10 was certainly cheaper than the valve 5-10, its performance wasn't as good.
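The low-frequency behaviour of the output capacitor is simple to verify. The -3dB point of C15 working into the speaker (an 8Ω nominal load is assumed here) comes out near 20Hz:

```python
import math

def highpass_corner(c_farads, r_ohms):
    """-3 dB frequency of the high-pass filter formed by the output
    coupling capacitor and the loudspeaker load: f = 1/(2*pi*R*C)."""
    return 1 / (2 * math.pi * r_ohms * c_farads)

print(round(highpass_corner(1000e-6, 8), 1))   # 19.9 Hz into 8 ohms
```

The capacitor's reactance becomes significant well above the -3dB point itself, which is consistent with distortion being measurable below ~50Hz even though the response corner is near 20Hz.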
The 'El-Cheapo' amplifier (full title: 'El Cheapo 2-30') was published in 1964 [ 3 ], and it has its own page as Project 12A. This was one of the first 'high-power' amps I built, and many of my work colleagues at the time built one as well. Using the then 'state-of-the-art' 2N3055 power transistors, when properly set up it sounded surprisingly good. It was certainly better than low-cost valve amps of the same vintage, and it was surprisingly reliable. Distortion performance was not great, with a simulated THD of 0.14% (the distortion quoted in the article was < 0.6% at 10W output). This is far higher than we expect now, but it was still the equal of most of the affordable valve alternatives.
Figure 4 - 'El-Cheapo' Transistor Power Amp
The supply is now the familiar positive voltage, and the power amp 'proper' uses only five transistors. Q1 is an emitter-follower, needed because the amp's input impedance is only 1k. When released, NPN silicon transistors were preferred, because the production processes favoured NPN over PNP (the opposite of germanium). The output stage used is quasi complementary-symmetry, using a compound (Sziklai) pair to 'synthesise' the lower 'PNP' transistor. Q2 is the voltage amplifier stage, and it uses a bootstrapped collector load to increase linearity and output impedance.
The original design included a regulated supply using a germanium series-pass transistor. This was required because the amp circuit has poor power supply rejection (less than 20dB) due to the simple design, and its DC stability is rather poor as well. The output voltage (before C7) is set for 30V with R5 to get symmetrical clipping. All feedback (AC and DC) is returned to the base of Q2, which makes it a 'virtual earth', having very low input impedance. DC feedback is from the output, via R4 and R5. Most AC feedback is provided by Rf, coming from after the output capacitor. This (at least partially) deals with capacitor distortion at low frequencies, but it also creates a low-frequency boost at around 6Hz. Not audible, and generally not a problem.
One potential issue was crossover distortion. This was the bane of most early transistor amps, resulting from a combination of problems. Power transistors at the time had poor gain linearity, so the hFE fell at low and high currents. For low-level signals, the gain reduction at low current reduced the amp's loop gain, so feedback was less effective. Transistor 'bias servo' circuits were uncommon, so there was always a trade-off between bias current stability and low-level distortion. In the original article, it was claimed that there was no crossover distortion, but I know from experience that it did have some, albeit minimal.
While the original title [ 4 ] was probably true when it was written (around 1968 as near as I can tell), it doesn't qualify now. It's a simple design, and is very similar to El-Cheapo in terms of the design. With an input impedance of about 1kΩ it would require an emitter-follower or similar low impedance drive circuit to function properly. It uses the (now very common) bootstrapped collector load (R5, R8 and C3) to improve linearity.
Figure 5 - RCA High Quality 10W Power Amp
There is one technique that's uncommon, even today. D3 and D4 bypass the emitter resistors (R12, R13) when the current exceeds about 650mA. This improves bias stability (quite dramatically) without the losses associated with high value emitter resistors. Of course there's a downside as well, because the diode turn-on/off is not linear. In the case of a small amplifier, the diodes won't have any effect below ~3W, and that will cover most of the programme material, with only transients exceeding the current needed to cause the diodes to conduct.
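A rough sanity check on those figures follows. The emitter resistor value and diode drop are assumed for illustration, not taken from the schematic:

```python
# Rough numbers for the diode-bypass trick. Component values are
# assumptions for illustration: ~0.65 V silicon diode drop, and an
# emitter resistor of 1 ohm (implied by the ~650 mA threshold).
v_diode = 0.65          # V, diode forward drop at turn-on
r_emitter = 1.0         # ohms, assumed emitter resistor value
r_load = 8.0            # ohms, assumed speaker load

i_threshold = v_diode / r_emitter                # current where D3/D4 conduct
p_at_threshold = i_threshold**2 * r_load / 2     # sine power whose peaks just reach it

print(i_threshold)               # 0.65 A, matching the ~650 mA in the text
print(round(p_at_threshold, 2))  # ~1.69 W into 8 ohms
```

The exact power depends on the actual resistor values and load, but the result is of the same order as the ~3W quoted, confirming that the diodes stay out of circuit for most programme material.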
This little trick has been around for a long time, but it's not often seen. I used it in the Project 137 powered speaker box amp, but it's uncommon for hi-fi applications. By using higher than 'normal' emitter resistors, the bias is unconditionally stable, without any requirement for a bias servo. The THD will be in the order of 0.2% (as simulated), but using modern transistors it may be possible to get it lower. Using high-speed diodes improves distortion performance, but at additional cost (and they weren't available at the time). Still, it's only good for 10W, and an IC amplifier such as the LM1875 (quoted THD is 0.015%) can (allegedly) provide up to 30W. The reality is different, but the point is that simple amps like the RCA circuit are (mostly) irrelevant in the 21st century. However, they can still be fun to play around with.
This particular IC power amp was popular for what seemed like a few days, but was (as near as I can tell) soon abandoned. The Plessey version came in two types, the SL402 (14V supply voltage) and the SL403 (18V). The Sinclair IC was called the IC10, and was presumably a re-badged version of the SL403. Both the Plessey and Sinclair ICs were around £3.50 each in 1969 [ 17 ]. That was quite a bit for an amplifier IC that could deliver no more than around 3 watts!
Figure 5A - Plessey SL40x IC Power Amp Schematic
A notable 'feature' of these ICs was that they used NPN transistors throughout. It was claimed (quite incorrectly) that the large amount of negative feedback would 'eliminate' crossover distortion. Since an amplifier has almost no gain when the output devices are turned off, this is clearly impossible. Simulated distortion was around 0.3% with an 8Ω load at 2W output. A basic distortion analysis indicates that there is evidence of crossover distortion, although it's at a fairly low-level. Unlike many of the other IC power amps, I've not included a schematic for a complete amp (with all external parts in place) as there doesn't seem to be much point.
Figure 5B - Plessey SL40x/Sinclair IC10 Power Amp IC
The IC was unusual in a number of respects, particularly the 0.2" (5.08mm) pin spacings and the heatsink bar running through the middle of the package. The IC is shown in Figure 5B, and the IC would be attached to a heatsink, usually of folded aluminium. According to the small amount of info still available, the bar through the middle of the package was steel - an unexpected (and unwelcome) choice due to its poor thermal conductivity. Expecting the IC pins to support the weight of the heatsink (especially in transit or if the amp were dropped) was 'adventurous' to put it mildly, and the all NPN transistor design is unique. I have seen all NPN circuits used before to obtain a push-pull output stage, but never for a power amplifier. It's a credit to the designers that they got it to work as well as it did, but it could never be classified as 'hi-fi'. The integrated 'preamp' (a 'triple' Darlington) is an interesting addition, and that allowed a complete amplifier with active (Baxandall) tone controls to be built with a single IC - two for stereo.
The Armstrong 600 [ 5 ] was very advanced for its time (ca 1970), having very low distortion (simulated at ~ 0.02%, published THD 0.08%). The output transistors are RCA 40636 devices, which had very similar specifications to the 2N3055 (90V, 15A, 115W), and the output stage is quasi complementary-symmetry. Bias (quiescent current) is set by two diodes (D2, D3) and a trimpot (R11). There is no adjustment for the output DC voltage, and when simulated it was necessary to change R2 from the original value of 82k to 100k to obtain symmetrical clipping. With an 82V supply, the output devices were pushed very hard, especially if the amp was used with 4Ω speakers. However, that was presumably never a problem when these amps were in service.
Figure 6 - Armstrong 600 Transistor Power Amp
The designers have gone to a great deal of trouble to get everything right. Q1 is the error amplifier (all component designators are mine - the original schematic showed illegible reference details). To ensure that the gain of Q1 isn't compromised by loading, it is buffered by an emitter-follower (Q2) before the voltage amplifier stage (Q3). The collector load of the VAS is bootstrapped as with the previous designs shown, and the output stage is again quasi complementary-symmetry. R11 is used to set the output stage for a quiescent current of 20mA.
The frequency compensation capacitors appear to be sub-optimal (at least in a simulation), but were no doubt correct for the original devices used. This design was probably at the pinnacle of what was achievable at the time, but it's not at the end of the design process. Overall this amp would withstand scrutiny today, but only if the test procedure were blind. A sighted test would cause people to think they could hear the output capacitor (it's enclosed in the feedback path so it effectively ceases to exist for the audio frequency range).
The Sansui AU-101 was built between 1973 and 1975, and is rated for 15W/channel into 8Ω. The circuit diagram is adapted from the service manual, including the component designators. It uses a single 44V DC supply, and although not shown, the power supply filter cap was only 1,000µF. The design is fairly simple, but (apparently) it sounded very good, based on a couple of reviews I came across. It includes feedback from after the output capacitor (C817), and the input transistor (TR801) gets DC feedback via R811 and R813, with all AC feedback via the output cap, R815 and C807.
Figure 7.1 - Sansui AU-101 Transistor Power Amp
The service manual is very detailed and has complete circuits for the preamps and power amps. As with most Japanese designs, the transistor types are uncommon, and in this case they appear to have been selected for particular parameters (especially gain). A simulation is inconclusive because the exact transistor types are not available as simulator models, but substituting common devices available today it seems to perform well. The specifications were written in the 'bad old days' when 'music power' was commonly quoted. In this case they claim 44W 'music power' into 8 ohms, a figure that is quite impossible into any typical load.
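A quick calculation shows why the 'music power' claim fails. With a single 44V supply the output can swing at most ±22V, which caps the theoretical sine power into 8Ω well below 44W even before any losses are accounted for:

```python
def max_power_single_supply(v_supply, r_load, v_loss=0.0):
    """Ideal max sine power from a single-supply, capacitor-coupled
    amplifier: the output can swing at most half the supply in either
    direction, less saturation/emitter-resistor losses (v_loss)."""
    v_peak = v_supply / 2 - v_loss
    return v_peak ** 2 / (2 * r_load)

print(round(max_power_single_supply(44, 8), 1))       # ~30.2 W, lossless ideal
print(round(max_power_single_supply(44, 8, 3.0), 1))  # ~22.6 W with ~3 V of losses
```

The 3V loss figure is an assumption, but even the lossless ideal falls well short of the claimed 44W, while the 15W rating (and 20W short-term) sits comfortably inside it.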
Although rated for 15W/channel, the amp can provide 20W for short periods, until the main filter cap discharges under load. Distortion is claimed to be less than 0.8% THD at rated power, with response from 20Hz to 60kHz (±2dB). Overall, it should be capable of decent performance, but it's not as good as the Armstrong described above. Under normal listening conditions I'm sure that it would produce an 'acceptable' listening experience. In some areas, the AU-101 is considered a 'classic', but in reality it's just a reasonably competent amplifier, with a very basic preamp (not included here).
Another 'classic' of the late 1970s was the NAD 3020. At a claimed 20W/channel it was no powerhouse, but it was something of a bargain when it was released in 1978. The amp itself is fairly basic, but the number of phase compensation capacitors throughout the circuit is somewhat baffling. As a current feedback amplifier it should need only a couple of caps at the most (the Sansui uses just one). The output is directly connected to the speaker via a small inductor (no output capacitor).
Apparently (I can't verify this with any certainty), NAD sold over 1 million 3020 integrated amplifiers, and there are many websites where the authors sing the praises of this amp. When released it was cheap, but people quickly discovered that it also sounded very good. I don't do reviews of any kind, but there are some very dedicated followers, and I didn't see anything to indicate that it's not beloved by all. While it's rated for 20W/channel, the ±30V supplies will let the amp deliver 40W into 8Ω.
Figure 7.2 - NAD 3020 Transistor Power Amp
To minimise any distortion that might be created by 'BK1' (a part that isn't mentioned in the parts list - it appears to be a circuit breaker), the majority of the feedback is applied via R629 (470Ω). BK1 is normally closed, but if it opens, enough feedback is available through R631 (1.8k) to prevent amplifier malfunction. Continuing with the assumption that 'BK1' is a circuit breaker, failure to apply feedback from the speaker side would decrease the damping factor (and it may be non-linear).
There are a few other things that are different from most other amps. Firstly, there are no emitter resistors for the output transistors. That makes the bias setting critical, and it's been designed so it's not user-adjustable. Factory setup would add a resistor in 'RX1' position to obtain the desired quiescent current. R653 is a 1Ω resistor, and is normally shorted. The jumper is opened to allow quiescent current to be set to the recommended 30mA by measuring the voltage across the resistor.
The second oddity is the combined 12dB/octave high-pass and low-pass filters at the input (C601 to C609 and associated resistors). The high-pass filter provides ~1dB of boost at 20Hz, with a -3dB frequency of 12Hz. The low-pass filter is 3dB down at about 50kHz. Thirdly, the amp is powered from an unregulated dual supply, and the two 'odd voltage' supplies are regulated and are used for the preamp section. Next, the DC offset circuitry is the most elaborate I've come across, using a transistor (Q603), several resistors and a trimpot, plus two diodes.
Finally, the power amp has an in-built (switchable, but it cannot be disabled) 'soft clip' circuit (not shown) which may or may not provide any benefit. Very few other amps have used it, as all it really does is increase distortion as the maximum power is approached. NAD seems to be the only manufacturer who's used it in more than one design.
Sinclair was always something of an outsider when considering high-quality amplification. Clive (the late Sir Clive - 30 July 1940 - 16 September 2021) was ever the entrepreneur, and was responsible for the first Class-D (pulse-width modulation - PWM) amplifier, the X10. To say it was an unmitigated disaster is probably high praise, and it vanished from the market after what seemed like a few minutes. Like many Sinclair products, it used whatever transistors could be acquired for the lowest cost possible. It was claimed (but I have no details) that Sinclair often purchased transistors that were 'factory rejects', being devices that failed to meet specifications.
Figure 8.1 - Sinclair 10W Transistor Power Amp
I have very little information on the first amp shown, only a schematic of the complete unit - preamp, power amp and a power supply that incorporated active current limiting. The latter was diode-fed from the emitter resistor of TR11, and was designed to turn off the power supply if the peak current exceeded ~18A or so. It's rather doubtful that it would provide much protection, as that much current could easily damage the output transistors. As you can see, no transistor part numbers are shown. That's because they weren't shown on the only circuit diagram I have, and it's likely that they changed depending on what was available (at low cost) at the time.
Figure 8.2 - Sinclair Z50 Transistor Power Amp
The Z30/Z50 Sinclair amps [ 6 ] were early adopters of the now common long-tailed pair for the input stage. In addition, the bootstrapped load for the voltage amplifier stage has been changed to a current source. A current source provides a small advantage compared to bootstrapping, but it also limits the output swing, because (unlike a bootstrap) it cannot drive the VAS collector load above the supply rail. I've always liked the simplicity of the bootstrap technique, and the difference between an 'ideal' current source and a bootstrapped version is small. Predictably, the Z30 amp uses the same circuit as the Z50, but with some component changes.
The quasi complementary-symmetry output stage (still used with the Z30 and Z50) was one of the last things to disappear, when PNP power transistors finally became available with specifications that were close to those for NPN devices. They are still not identical, but are much closer than an NPN Darlington stage used with a complementary (Sziklai) pair. Interestingly (or not), there was considerable effort spent on making a Darlington and Sziklai pair perform equally. This was never a complete success though.
Before true IC power amps (of reasonable power) came along, Sanyo and a few other manufacturers made hybrid modules. These included the semiconductors, resistors and some low-value capacitors encapsulated within the hermetically sealed case. While the technique was used as early as the 1950s, audio power amps came along much later. They featured a metal back, screen printed 'wiring' and generally used transistors in die form (without additional packaging). More information is available on the specific techniques used from Wikipedia [ 7 ]. The circuitry used is actually fairly basic, with a quasi-complementary output stage. The higher power versions required external emitter resistors for the output stage, as they used two power transistors in parallel.
Figure 9 - Sanyo STK4042 Thick Film Hybrid Power Amp
The STK4042 [ 8 ] (80W into 8Ω, with ±45V supplies) is shown as an example, but there were many more, covering a wide range of output powers. While they appeared to make life easier for hobbyists and commercial manufacturers, the reality was always different. The number of external parts needed was generally quite large (see Figure 9B for the wiring diagram), and it was very difficult to build an amplifier using them without a PCB. Specifications varied, with output powers ranging from 20 to 200W, but with rather uninspiring distortion figures for the most part. Later versions were better than their predecessors, but they are no longer made (at least not by Sanyo as near as I can tell). These modules can still be found on eBay, but that's a risk I wouldn't take.
Like almost everyone involved in audio, I used them for a couple of 'quick & dirty' jobs, and while they certainly worked well enough, it didn't take much of an accident to cause them to fail. The basic idea was not too bad, but the additional external parts count made them a lot less convenient than they appeared at first. Stereo versions were even less attractive, because there were more pins that had to be connected to external parts, with very close spacing. I suspect that the only reason anyone sells them now is for servicing equipment with failed modules, although a few die-hard hobbyists may still think they're a good idea.
The main disadvantage is that if (when) a module fails, the entire unit has to be replaced. There may only be one internal device (or connection) that's faulty, but the modules are not able to be serviced due to the way thick-film hybrids are made.
For quite a while now, power amps have been available as integrated circuits. One of the earliest was the LM12, and this is the only one with a circuit diagram shown here. There are several IC power amps that have been used in ESP projects: the LM386, LM1875 (or TDA2050), LM3876, LM3886 and TDA7294. Of these, the LM3876/3886 and TDA7294 are genuinely able to be called 'hi-fi', with the LM1875 not too far behind. The LM12 had a somewhat unusual 4-pin TO-3 case. Unlike almost all modern 'power opamps', it's compensated for unity gain and can be used as a high-power buffer. The absolute maximum supply voltage is ±40V, but ±30V is recommended. I1 to I4 are current sources.
I was able to simulate the circuit pretty much as shown, and it works quite well. Distortion (as simulated) is a little disappointing at 0.06%. Clipping recovery is not very good, and it shows evidence of 'rail-sticking', where the output remains 'stuck' to the supply rail for a few microseconds after the voltage should have recovered. Q14 and Q15 are transistors connected as diodes, and are required because of the protection circuitry which isn't shown in the datasheet. Q7 is an oddity that only appears in ICs, having two emitters. This can be simulated by using two transistors with their bases and collectors joined, but that won't work with discrete transistors unless they're well matched.
Figure 10 - National Semiconductor LM12 Power Amp (Protection Circuits Not Included)
The LM12 is no longer available. The circuitry is fairly straightforward, and it has the 'traditional' long-tailed pair as the input stage, buffered with a pair of emitter followers. The LTP (Q3 and Q4) has considerable degeneration caused by the two 5k resistors (R3 and R4). The LTP is loaded by a current mirror (Q5, Q6 & Q7), and Q9 is the VAS (voltage amplification stage). The LM12 was capable of very low distortion (the datasheet claims 0.01%). With ±30V supplies it can deliver about 80W into 4Ω. It was a useful IC, but more modern replacements can provide more power with lower distortion. The modified TO-3 package would be very expensive if made today (as are all TO-3 transistors).
The modern hi-fi IC amplifiers are probably 'state of the art', and they are mostly very competent. It's highly unlikely that anyone would pick one in a true double-blind test, regardless of the competition. They offer high performance in a compact package, needing relatively few external parts. I tend to think of them as 'power opamps', because they are generally used very much like any 'normal' opamp. One thing to be aware of is the circuit gain. Most are designed for a closed-loop gain (set by feedback) of at least 20dB (×10 voltage gain). If you attempt to use them with less than that, oscillation is probable.
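The dB-to-ratio conversion is worth keeping to hand when checking minimum-gain requirements (the LM3886, for instance, is specified as stable for closed-loop gains of 10 or more):

```python
import math

def db_to_gain(db):
    """Convert a voltage gain expressed in dB to a plain ratio."""
    return 10 ** (db / 20)

def gain_to_db(ratio):
    """Convert a voltage gain ratio to dB."""
    return 20 * math.log10(ratio)

print(db_to_gain(20))   # 10.0 - 20dB is a voltage gain of x10
print(gain_to_db(10))   # 20.0
```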
There are a couple of other things that you need to be aware of as well. Because the surface area is fairly small, the rated power is certainly available, but expecting any of them to provide full sinewave power for an extended period will usually cause them to overheat and shut down. Provided the source is music, this rarely (if ever) causes a problem. The protection circuitry of the National Semiconductor (now Texas Instruments) parts is vicious, and it has a hair-trigger. My suggestion is to operate them at the lowest voltage you can, consistent with acceptable power output. Many commercial amplifiers use IC power amps, particularly for 'budget' products.
Some manufacturers have focussed on the convenience of these parts, and they are used in many low-cost guitar amplifiers. This is not recommended, because they are prone to failure if pushed hard for extended periods. Several guitar amps using them are well known for failures, and I would never recommend any IC power amp for that role. For serious listening, I have no hesitation using or recommending them - it's probably the easiest way to build a power amplifier (or five).
By way of comparison to the previous examples, Figure 11 shows a simplified version of the Yamaha AX-490 power amplifier. There is one added transistor (in the VAS - voltage amplifier stage), and simply introducing one extra transistor makes stability worse, and requires more (and more complex) phase correction networks. The circuit as shown simulates quite well, and it appears to be stable, but without the exact transistors specified that's not assured.
Figure 11 - Yamaha AX-490 Power Amp
This is not intended as an example of the 'best' amplifier available by any means, but it is representative of many designs found in commercial equipment from Japan. While it's likely that a true audiophile will scoff, the overall design is solid, and the AX-490 has respectable specifications. The reason for showing this circuit was to demonstrate the extra lengths that designers have to go to, to ensure stability when the basic design is 'improved', even if ever-so-slightly. As designs become more complex, ensuring unconditional stability just gets harder.
As anyone who has looked at schematics for Japanese audio equipment will be aware, they often come up with 'quirky' solutions and their design ideas are somewhat different from designs from 'the west'. Mostly, they are no better or worse, but the designers tend to look at things differently. This is a good thing, because often you can discover a better way to do things, but of course not all ideas are necessarily 'good'. For example, using one pair of 200W output transistors with ±55V supplies is really pushing the limits of the transistors' safe operating area (SOA). With a 4Ω resistive load, the peak dissipation is over 140W for each output device, and there's absolutely no room for error (or operation at elevated temperature)!
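The SOA concern can be sanity-checked with a few lines of arithmetic. The sketch below (a simplified model assuming an ideal class-B stage and a purely resistive 4Ω load, ignoring Vce(sat) and reactive loading, both of which make matters worse) sweeps the output voltage over the positive half-cycle and finds the worst-case instantaneous dissipation in the conducting device:

```python
def device_dissipation(vcc, r_load, vout):
    """Instantaneous dissipation in the conducting output device of a
    class-B stage: (rail voltage - output voltage) * load current."""
    return (vcc - vout) * (vout / r_load)

VCC = 55.0     # one supply rail, volts (from the AX-490 example above)
R_LOAD = 4.0   # resistive load, ohms

# Sweep the output voltage across the positive half-cycle and find the
# worst case, which occurs at Vout = Vcc / 2.
peak = max(device_dissipation(VCC, R_LOAD, VCC * i / 1000.0)
           for i in range(1001))
print(round(peak, 1))
```

The theoretical worst case is Vcc²/4R (about 189W here), consistent with the 'over 140W' figure quoted above, and well beyond anything a single pair of devices should be asked to handle at elevated temperature.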
As a final design, Figure 12 shows the principles that are used in most modern designs. Not everything is used, depending on the designer and the expectations for the circuit. The current source (Q6) may be replaced by a bootstrap circuit, the current mirror (Q4, Q5) may be replaced by one or two resistors, and the VAS (Q7, voltage amplifier stage) may be a Darlington or compound pair. The output stage may use Darlington or Sziklai pairs, and the output inductor (L1) and its parallel resistor may not be used.
No component values are shown, because this is an example, and is purely intended as a demonstration of the basics of an amplifier where most of the designer's 'tricks' are shown. Even so, there remain untold variations. The input stage may use PNP transistors (with others for the current sources and mirror reversed as needed), the phase compensation (aka 'dominant pole') capacitor (C4) may be repositioned, and additional compensation may be needed elsewhere.
Figure 12 - Generalised Schematic Of A Modern Power Amp
Along with the above, the offset preset (VR1) may be omitted, and in some cases even the bias preset may be replaced with fixed-value resistors. Some designers duplicate the input stage to obtain an aesthetically 'symmetrical' topology, which may or may not provide an actual improvement. The idea of a 'fully symmetrical' amplifier is actually a myth, because the NPN and PNP transistors will always be different. Considering how hard it is to get identical transistors of the same polarity, it's unwise to assume that PNP devices can ever be identical to their NPN counterparts. Matched pairs can be obtained, but they will almost always be NPN or PNP, not both.
As a generalised scheme, it's not possible to show every possibility, and even output stages can be far more complex than the simple arrangement shown. Paralleled output transistors are often used to boost output current capability and/or ensure that the devices operate within their safe operating area, or simply to allow the use of cheaper (lower power) output transistors. Other variations include using JFETs for the input stage (always with reduced gain), and they may be arranged in cascode with BJTs to recover the gain lost by the JFETs. Sometimes a second LTP is employed to get the highest gain possible, but compensation becomes much harder.
The general scheme shown is comparatively simple, but it incorporates many of the design ideas that give good performance. Simplification does not necessarily mean that a design is sub-optimal, and using all of the schemes shown likewise does not mean that the design is 'better' than another. All design involves compromise, and some things will do nothing other than make the final design more complex, with no guarantee that performance matches the degree of complexity.
Stabilisation against oscillation becomes harder as a design is made more complex, and a particular design will often demand that the exact transistors specified are used. Since different devices have a different fT (transition frequency), using other than the specified devices can lead to instability, requiring modification of the compensation circuits to prevent oscillation. Any amplifier will oscillate if the open-loop gain is greater than unity at a frequency where the phase shift equals 180°, because that causes negative feedback to become positive.
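The 180° criterion can be illustrated numerically. The toy model below uses a three-pole open-loop response (the DC gain and pole frequencies are invented for illustration, not taken from any amplifier in this article): it finds the frequency where the loop phase reaches -180°, then checks whether the loop gain there is below unity for two different closed-loop gains.

```python
import math

# Toy open-loop model.  These numbers are illustrative only.
A0 = 1e5                      # DC open-loop gain (100dB)
POLES = (100.0, 2e6, 8e6)     # dominant pole plus two HF poles, Hz

def magnitude(f):
    """Open-loop gain magnitude at frequency f."""
    m = A0
    for p in POLES:
        m /= math.sqrt(1 + (f / p) ** 2)
    return m

def phase_deg(f):
    """Total phase shift (degrees) - each pole contributes up to -90."""
    return -sum(math.degrees(math.atan(f / p)) for p in POLES)

# Find the frequency where phase reaches -180 degrees, then check the
# loop gain there.  Above 0dB at that frequency means an oscillator.
f = 1.0
while phase_deg(f) > -180.0:
    f *= 1.01
for closed_loop in (1.0, 26.0):
    margin_db = 20 * math.log10(magnitude(f) / closed_loop)
    print(closed_loop, round(margin_db, 1))
```

With these (arbitrary) poles the amplifier is marginal at unity gain, but has nearly 30dB of margin at a closed-loop gain of 26 - the same reason the IC power amps discussed earlier demand a minimum closed-loop gain.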
Of course, there are countless variations on the basic theme shown. Some work very well even though they appear to be much simpler, while others perform worse, even though more complex. A great deal depends on the output transistors, and modern devices are far better than their early counterparts. In particular, it's important that the output devices have sufficient gain at low current - especially around the bias (quiescent) current. Early transistors exhibited considerable 'gain droop' at low current, so at low output levels they didn't have enough gain to allow negative feedback to work effectively. If the circuit has low gain, it must also have reduced feedback (in direct proportion), so distortion increases.
Class-D amplifiers are gaining in popularity, with some of the latest offerings being every bit as good as a Class-B amplifier. They are not without their own special issues of course, but they have the advantage of running much cooler than any purely linear amplifier. Because the output devices are either 'on' or 'off', the problem of high power dissipation during normal operation is relieved. The dissipation of the output MOSFETs (invariably switching types) is low, but there can still be high dissipation as one MOSFET turns on as the other turns off. In the following drawing [ 13 ], the pin marked 'DT' is used to control the dead time - a period where both MOSFETs have zero drive voltage. Extending dead-time is safer for the MOSFETs, but increases distortion.
Figure 13.1 - IRS2092 Based Class-D Power Amp
The schematic shown is one of a series of designs published by IR (International Rectifier), using the IRS2092 Class-D IC. This is a self-contained Class-D circuit, which incorporates the PWM (pulse width modulation) circuitry, along with MOSFET gate drivers (high-side and low-side), as well as over and under voltage protection. While it's no longer 'state-of-the-art', it is a capable IC, and untold thousands of Class-D amplifiers have been built using it, with many still readily available as kits or modules. Capable of operation at ±100V (although that's the upper limit), it can (at least in theory) be used for amplifiers of up to 2kW output. This requires a number of changes, with dedicated gate drivers and more (and bigger) MOSFETs.
The circuit shown is claimed in the datasheet to provide 120W into 4Ω, with distortion below 0.1% for any output below 100W. The design is a self-oscillating type, using a Sigma-Delta converter internally to convert the analogue audio into PWM. The nominal quiescent oscillation frequency is 400kHz, but this can be changed if required. The circuit shown is a little misleading in a few respects, in that it only shows the basic circuit. To obtain full protection (particularly for over-current or DC fault conditions) requires external circuitry, as does the separate 12V supply (indicated as -23V).
This is intended as an example only, and I used it because it's a very common 'discrete' Class-D amplifier, although it's now a fairly old design. Many Class-D amps available now are a self-contained IC, requiring only a few external parts. One of the biggest issues (and the reason I've not done a Class-D amp project) is the output inductor. This is critical to filter out the high frequency (400kHz) noise, and it, along with the 470nF capacitor shown, determine whether the amplifier creates unacceptable radio-frequency interference or not. Radiated RF can be notoriously difficult to remove, and the performance of the output filter is critical. The inductor also requires very low series resistance to prevent signal loss and/or poor damping factor.
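As a rough check on the output filter, a quick sketch: the 470nF capacitor is the one shown in the schematic, but the 22µH inductor value is an assumption (typical for designs of this type, not stated in the text).

```python
import math

C = 470e-9    # output filter capacitor from the schematic, farads
L = 22e-6     # inductor value - assumed for illustration, henries

# Cutoff of the second-order LC low-pass filter
f_cutoff = 1 / (2 * math.pi * math.sqrt(L * C))
print(round(f_cutoff / 1000, 1), "kHz")   # ~49.5 kHz
```

A cutoff near 50kHz passes the audio band while the second-order rolloff gives roughly 36dB of attenuation at the 400kHz switching frequency - but only if the inductor behaves well at RF, which is why its construction matters so much.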
Up until fairly recently, a semi-discrete solution such as the one shown above was the only option, but that's changed with fully integrated designs now seeming to dominate the Class-D sector. There are many modules available, both from online auction websites and other vendors. Some are very good, while others manage to screw up the design to produce a 'product' that's worse than useless. However, of the better options, you'll come across quite a few using TI (Texas Instruments) ICs, most of which are pretty good (as long as the PCB is well designed). These are invariably SMD, with a great many pins, and are not suitable for most hobbyists to assemble at home. NXP (Philips) also has a range of ICs, with the following being a good example ...
Figure 13.2 - Fully Integrated Class-D Power Amp (TDA8954 - Shown as Stereo SE)
The above is an example of one of the current Class-D ICs available [ 14 ]. The TDA8954 also has a number of relatives, with different output power ratings. This is a single IC 2-channel SE (single-ended) or mono BTL amp, capable of up to 420W into 8Ω. Distortion figures are comparable to many Class-AB amplifiers, and they are rated for a maximum supply voltage of ±41V. They aren't as flexible as the TI version described briefly below, with a minimum load impedance of 8Ω when used in BTL (there is no option to parallel the two channels). Unfortunately, this does limit their usefulness somewhat, especially if you need to drive a subwoofer, as most are 4Ω. The drawing is simplified, but shows most of the important 'stuff'.
Note the speaker connections and input wiring. This is designed to operate one channel with the opposite phase to prevent 'bus-pumping', a phenomenon common to Class-D amplifiers. Depending on the load impedance and input frequency, one or the other supply voltage may transfer energy to the other, causing it to rise - possibly to a level that will cause the amp to shut down. Operating the two channels with opposite phase is the most effective way to prevent this.
Figure 13.3 - Fully Integrated Class-D Power Amp (TPA3255 - Shown As Stereo BTL)
The TPA3255 [ 15 ] IC has four independent amplifiers, and it can be operated as 4 × single-ended (capacitor output) amps, 2 × BTL (bridge-tied-load) amps, or a single parallel BTL (PBTL) amp, capable of driving a 2Ω load. Quoted distortion is 0.01%, but that varies with the load impedance. The maximum total output power is about 600W, either as a single mono PBTL amp, two stereo BTL amps, or four single-ended amps. Other than a substantial number of capacitors, it's pretty much self-contained, with only a few resistors for setting various parameters. It requires more external parts than the TDA8954, and it has 44 closely spaced pins with a thermal pad on the top of the IC. I don't propose to go into any more detail, as everything you need to know is in the datasheet.
Needless to say, there is a wide variety of different Class-D ICs available, with most from the established manufacturers. Some are very good indeed, while others are probably best classified as 'alright'. When used for internal TV speakers and other applications where poor quality sound seems to be acceptable, product manufacturers will likely use whatever they can get cheaply. Despite any misgivings one might have, few of the ICs are actually 'bad' per se. They are often let down by the PCB layout or extreme cost-cutting. There is no doubt that some of the modules available fall into the latter category (I have a couple that had to be modified because of a major error in the PCB layout).
One thing to be aware of with most Class-D implementations is the output filter. It's designed to remove the oscillator frequency and allow the audio through, but it can also make the amplifier load-impedance dependent. The high-frequency response changes, depending on the load. Ideally, all Class-D amps would include a Zobel network to set a predictable impedance at frequencies above 15kHz. Even more ideally, this would be included in the speaker box, so that it presents a known (and non-variable) impedance at around 20kHz. I don't recommend that anyone should hold their breath while waiting for this.
Note that this article is simply a collection of ideas. The operating principles for transistor power amps are consistent through all the designs. An input stage operates as an error amplifier, followed by the voltage amplifier stage. The VAS drives the output devices, which can use several different topologies. Some of the engineering concepts are now considered outdated, but all remain as valid today as they ever were. Not every concept shown here has been validated by simulation, and the drawings are taken from datasheets, service manuals or from my collection.
I must point out that the circuits shown are not intended for construction, although you can do so if you wish (do not expect any support as it won't be forthcoming). They are provided as examples of the progression of amplifier designs over the years, and as such are a very small selection of the countless designs that have been produced. Many of the transistor designs shown here use an output coupling capacitor, something that was discontinued by most manufacturers by the 1980s or thereabouts.
However, while the majority of new designs don't use an output cap, it is still used in some configurations. This is typical with some IC (integrated circuit) designs where the constructor has the option of using a single-supply (positive only) IC as two or four amps in a single IC. The idea is that amps can be used in BTL (bridge-tied-load) with two amps used in anti-phase for higher power (the speaker is connected between the outputs), or the amps are used independently. With a BTL connection, there's no need for a coupling capacitor to the speaker, because the two outputs are at the same voltage (anything from 6V to 40V DC). If the amps are used independently, an output coupling capacitor is required to block the DC. Many Class-D ICs use this technique, as does the low-power LM386 amplifier IC.
For reasons that I find puzzling (to put it mildly), many new designs are fully DC coupled, so response extends from DC up to the maximum claimed frequency. We can't hear DC and loudspeakers can't reproduce it either, and there is absolutely no reason to faithfully amplify any DC that may exist at the input. It doesn't take much DC to cause speaker damage, so this trend is most unwelcome, IMO. There's a sub-set of audio 'enthusiasts' who imagine that capacitors are 'evil' and damage the sound in mysterious ways. While it has been demonstrated by many experts in the field of electronics that many capacitors create some (usually small) amount of distortion, it has to be noted that even the worst capacitor in the world can't introduce distortion if there's no AC voltage across it. By making coupling caps larger than necessary for a given LF -3dB frequency, distortion becomes a non-issue!
A capacitor used to handle speaker current is in a different league. The 'ripple current' through the capacitor is the RMS current delivered to the speaker. For high power amps, this becomes a serious limitation, so elimination of that cap has much to commend it. Using an input capacitor remains very important, as without it, a DC fault in a preamp (or DAC) can easily destroy one's loudspeakers. As noted, if the cap is sufficiently large, there will be almost no AC voltage across it, so distortion is not a problem other than at very low frequencies ... where the speaker will introduce far more distortion than any capacitor.
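Both points are easy to quantify. The sketch below uses assumed values for illustration (an 8Ω load, a 2200µF output cap and 60W delivered power - none of these are specified in the text): it computes the -3dB frequency of the coupling cap / speaker high-pass filter, and the RMS ripple current the cap must carry.

```python
import math

R_SPK = 8.0        # nominal speaker impedance, ohms (assumed)
C_OUT = 2200e-6    # output coupling capacitor, farads (assumed)
POWER = 60.0       # delivered power for the ripple check, watts (assumed)

# -3dB frequency of the cap/speaker high-pass filter
f3 = 1 / (2 * math.pi * R_SPK * C_OUT)

# The RMS ripple current through the cap equals the RMS speaker current
i_rms = math.sqrt(POWER / R_SPK)

print(round(f3, 1), "Hz,", round(i_rms, 2), "A RMS")
```

At around 9Hz the cap is comfortably 'large enough' for an 8Ω load, but the 2.7A RMS ripple current shows why a speaker-coupling electrolytic is far more stressed than an input coupling cap, which carries almost no current at all.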
Many modern designs are (IMO) somewhat over-designed. I do accept that aiming for the lowest possible distortion (for example) is a worthy goal, but there comes a point where further improvements are not worth the added complexity. Particularly if you build your own audio 'stuff', an amplifier so complex that it takes forever to get it to work isn't a good idea (and in some cases a constructor may never complete a complex project). ESP project amps (with or without PCBs) are mostly designed specifically so they are easy to construct, yet give performance that's in line with expectations. There are many designs that may offer higher performance, but beyond a 'respectable' set of specifications they become hard to put together, and usually require the exact transistors specified or performance is dramatically affected.
As design complexity increases, so do issues with stability. An amplifier that cannot be made stable (i.e. no high-frequency oscillation, whether continuous or parasitic, and the absence of ringing on transients) isn't usable. Unless the hobbyist has a good oscilloscope, even knowing that there is oscillation can be challenging, because most people don't recognise the symptoms. Few amateurs who make their own PCBs are aware of the subtle distortions that a poor layout can introduce. A bad layout can couple 'dirty' signals on the supply rails into the audio path, and it's not at all difficult to double the distortion without even realising it.
The most recent development is Class-D [1, 2], using PWM (pulse-width modulation). Some are very good indeed, and others fall into the same category as mentioned in the introduction. They are 'adequate', but in some cases don't come close to even the low standards set in the early 1970s. However, they are capable of much higher power than anything available back then. There are now many IC versions (readily available from various on-line sites), usually available as fully built modules. These can work well but may not, for a variety of reasons. Having tested some of these, I know first-hand just how bad some of them really are [3]. Buyer Beware!
¹ Class-D does not indicate or imply that the amplifier is digital. While some Class-D amplifiers do have digital processing, the conversion from analogue to PWM is usually a purely analogue process (although this could be argued when Delta-Sigma [ΔΣ, aka ΣΔ] modulation is used). The term 'Class-D' was coined simply because we already had Class-A, Class-B and Class-C, so Class-D was next in the sequence.
² Tripath coined the term 'Class-T' (® registered trade mark) for its PWM ICs, but technically it's still Class-D. While many claims were made that Class-T was 'superior' to conventional Class-D, there's little evidence to back that up. Indeed, that's been a historical issue with audio, as there are far more opinions than facts presented on innumerable websites.
³ As an example, I have two different boards using the ST TDA7498 IC [ 16 ]. One is essentially unusable, having high (and very audible) distortion and poor performance overall (even after correcting a major PCB layout error). The other is fine to listen to, with no audible artifacts even at the onset of clipping. The good one sounds almost identical to my workbench LM3886 amplifier, although a proper blind A/B test is difficult in my workshop environment which has 'dubious' acoustics. (Sighted tests are always invalid!)
Apart from a number of circuits that I've collected over the years (many of which cannot be reliably identified), the following publications were used to compile this article. There are many other designs that could have been included, but many are so similar that it would only be repetition. Those shown are fairly representative of the progress of power amplifier design over the years. It's likely that I have omitted someone's favourite circuit, but this is an article, not a book.
While there were more sites (and datasheets) that I looked at during the course of writing this article, most were either to verify that I had not made errors in the drawings, or to double-check some of the facts presented. A few of the circuits were simulated (none of the valve or Class-D designs though), and the simulations matched the descriptions fairly well (within the normal error range for a simulation vs. a constructed amplifier).
Elliott Sound Products - Loudspeaker Power Handling Vs. Efficiency
Copyright © 2005 - Rod Elliott (ESP)
Page Published 16 December 2006
If I never see another loudspeaker rated at 1,000W (or more) again, it will still be too soon. Quite apart from the fact that no voicecoil can withstand that kind of power for more than a few seconds without self destruction, why would anyone think that a 1,600W (AES) loudspeaker was a good idea?
In the first instance, just consider a typical loudspeaker voicecoil. It is typically wound on some type of cardboard, thin aluminium, Kapton, fibreglass or some other similar material. I have never seen a ceramic or quartz voicecoil former, but that's what would be needed to take the temperatures involved at such an insanely high power - not to mention the wire and insulation used. I doubt that asbestos insulation would be considered a good idea these days. Think in terms of a typical old-style bar radiator or an electric toaster. These were/are typically around 1,000W and the resistance wire glows red hot (not surprisingly, this is the whole idea).
Outrageous power ratings for both amplifiers and loudspeaker drivers are like maximum top speed for cars - many people would love to have a car that can do 300km/h even though it is illegal in most countries to get even close to the maximum (for example, the absolute maximum in most of Australia is 110km/h).
One thing you won't hear any of the ultra high-power speaker makers discuss is exactly how the voicecoil manages to withstand the temperatures that may easily be achieved - or exceeded. They don't have access to any magic insulation or adhesives, and they are pretty much trapped into using the available insulation grades that are common for transformers and other electrical machines.
The general classes are ...

    Class B - 150°C
    Class F - 185°C
    Class H - 220°C
This is the maximum permissible temperature for the insulation. Most adhesives can tolerate fairly modest temperatures, although there are a few epoxies that are good for 200°C or so. Kapton and aluminium voicecoil formers can withstand high temperatures, but the assembly is ultimately limited to the capability of the lowest temperature material. Note that the above are maximum temperatures, not temperature rises. Temperature rise is determined from the ambient, which may be 40°C or more inside the cabinet. The maximum temperature rise is the maximum allowed temperature, minus the ambient temperature. It is common to add about 30°C to the ambient to allow for hot-spots (connections or sections of the voicecoil outside the gap for example).
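Putting numbers on that rule, a minimal sketch (using the 40°C in-cabinet ambient and the ~30°C hot-spot allowance suggested above):

```python
T_CLASS_H = 220    # insulation class limit from the list above, °C
T_AMBIENT = 40     # worst-case ambient inside the cabinet, °C
T_HOTSPOT = 30     # hot-spot allowance suggested in the text, °C

# Permissible average voicecoil temperature rise
max_rise = T_CLASS_H - (T_AMBIENT + T_HOTSPOT)
print(max_rise)    # 150 °C of usable rise, even for Class H
```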
In general, expecting to operate a voicecoil at 200°C continuously is unwise, and the lower the temperature the better.
These power ratings for amplifiers and speakers are designed to appeal to those who have no understanding of efficiency, and think that power is the only thing that matters. For such people, a 1000W speaker must be better than a 200W speaker. What they don't understand is that a 200W speaker at 100dB/W/m is louder than a 1000W speaker at 90dB/W/m - the higher efficiency driver will achieve 123dB with 200W, vs. 120dB for the 1000W driver. This is ignoring all losses, which are dramatically higher in the high power speaker - see below to find out why.
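The comparison is simple arithmetic - maximum SPL is sensitivity plus 10·log10 of the power. A quick check of the figures quoted above:

```python
import math

def max_spl(sensitivity_db, power_w):
    """SPL at 1 m for a given input power, ignoring power compression
    and all other losses (discussed below)."""
    return sensitivity_db + 10 * math.log10(power_w)

print(round(max_spl(100, 200), 1))    # 100dB/W/m driver at 200W -> 123.0
print(round(max_spl(90, 1000), 1))    # 90dB/W/m driver at 1000W -> 120.0
```

The 100dB/W/m driver reaches 123dB with one fifth of the power the 1000W driver needs to reach only 120dB.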
Demonstration videos of low frequency drivers accepting vast amounts of power can be found on the Net, but prove nothing. They are simply marketing ploys, designed to convince the buyer that the claims are real. Those that I have seen use the driver completely open - no box, and not even a basic baffle. This ensures maximum cone movement and maximum cooling because fresh air can circulate.

In addition, all that is needed to 'prove' the point is to operate the driver at resonance. The resonance impedance may be 10 times that at other frequencies, so the amplifier output voltage is meaningless as a measure of power. If the impedance is 40Ω at resonance (for a nominal 4Ω driver), the nominal voltage is ~63V RMS (1kW / 4Ω), but at resonance the actual power is only 100W for that voltage. This same test procedure can be used with the driver in an enclosure, but the drive frequency is simply increased to match the driver's resonance in the box.

If you want to burn out a competitor's product for the demonstration, simply drive it at a frequency far enough from resonance to give a spectacular looking failure. I leave it to the reader to figure out if anyone could be so dishonest as to do this.
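The resonance trick is easily verified with a sketch (the figures below are the ones used in the text):

```python
import math

P_RATED = 1000.0   # claimed power, watts
R_NOM = 4.0        # nominal impedance, ohms
R_RES = 40.0       # impedance at resonance (the 10x figure above), ohms

v_rms = math.sqrt(P_RATED * R_NOM)    # voltage that 'looks like' 1kW
p_actual = v_rms ** 2 / R_RES         # power actually delivered at resonance
print(round(v_rms, 1), "V RMS,", round(p_actual, 1), "W")
```

~63V RMS looks like '1kW into 4Ω' on a meter, yet delivers only 100W into the 40Ω resonance impedance.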
So, what alternatives are there? Read on - this article explains the issues with dynamic drivers, and shows the deficiencies with many high powered loudspeakers. There are drivers that claim to take 1.5kW continuous power, yet the parameters of one such driver examined simply will not allow this much power to be used at low frequencies without exceeding the maximum excursion (to the point of damage). Further to this, the driver parameters are such that the actual performance cannot be optimised for any known enclosure - in short, the driver is a pig, and it is extremely difficult to make it perform well regardless of the enclosure type. The driver in question will not be named, but was 'commended' to me to prove that I am wrong. It has done nothing of the sort (predictably), because the driver is designed solely to appeal to those who continue to think that power is important. Worth noting is the fact that as a subwoofer in a 300 litre sealed box, power is limited to less than 300W below 40Hz - a true sub needs to be able to get to 20Hz to warrant the name 'subwoofer'. Drivers such as this are the equivalent of putting a Formula 1 engine with full race tuning into a small sedan, and wondering why it kills you on the first corner. The combination simply doesn't work properly, and there is no point pursuing such silliness.
While it may appear that many of the calculations in this article are based on the type of SPL (Sound Pressure Level) usually needed only for large scale public address, the same things apply for audio and home theatre. The effects are reduced because of the lower sound level normally used in a home environment, but are no less real since domestic loudspeaker drivers are normally rated for significantly lower power and efficiency than professional drivers.
There is absolutely no good reason that anyone should imagine that a loudspeaker driver capable of 1kW is a good idea - it isn't now, and never was. There are so many other areas in audio where outrageous claims are made - the proliferation of PMPO advertising power (having no connection whatsoever with reality), stunning lies about the importance of 'specialty' cables in systems, 'magic' components - the list is endless. Very high power, depressingly low efficiency loudspeakers are just another thing to create FUD (Fear, Uncertainty & Doubt) for buyers and DIY people alike. I hope this article helps a little.
This article was in preparation for a considerable time. What originally looked like a relatively simple task turned out to require a lot of time, effort and perseverance. From the initial idea to publication took well over a year, and even now, there are bound to be some (hopefully) minor errors. The tests were conducted exactly as described, but lack of the very sophisticated equipment required to guarantee complete accuracy (especially with magnetic compression) means that some errors are inevitable.

The above notwithstanding, the results do show the effects as described, and the article is intended to inform, not to criticise or endorse any manufacturer or specific product. All effects described are real, and although some may seem 'off-the-wall', all results are measured - not simulated or obtained theoretically.
Consider that the average high efficiency loudspeaker is typically no more than about 5% efficient. This means that only 5% of the applied electrical energy is converted into sound, the rest is dissipated as heat from the voice coil. This 5% efficiency speaker will be rated at 99dB/W/m - this is much higher than normally achieved.
If we could get one, a 100% efficient (direct radiating) speaker would convert 1W of electrical energy into 1W of acoustical energy. This will give us 112dB SPL (at 1W, 1 metre, when radiating into half space). Since no such loudspeaker exists, we must use what is available. Typical hi-fi loudspeakers are around 90dB/W/m - only 0.62% efficiency! 99.38% of all applied power is wasted as heat.
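Sensitivity and efficiency can be converted either way from the 112dB half-space reference given above. A short sketch:

```python
import math

REF_DB = 112.0    # SPL of a 100% efficient driver: 1W, 1m, half space

def sensitivity(efficiency):
    """Sensitivity (dB/W/m) for a given efficiency (0..1)."""
    return REF_DB + 10 * math.log10(efficiency)

def efficiency(sens_db):
    """Efficiency (0..1) for a given sensitivity (dB/W/m)."""
    return 10 ** ((sens_db - REF_DB) / 10)

print(round(sensitivity(0.05), 1))       # 5% efficient -> 99.0 dB/W/m
print(round(efficiency(90) * 100, 2))    # 90dB/W/m -> ~0.63 %
```

These agree with the figures quoted above to within rounding.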
At one stage, professional sound reinforcement speakers were commonly around the 100dB/W/m efficiency level, but this is now rare. Only a few of the traditional professional manufacturers (and a small number of specialist speaker makers) have drivers this efficient, and herein lies the problem (or part of the problem). In addition, there is an ever greater demand from bands and venues to minimise the size and weight of the PA system. It used to be accepted that the PA was going to be big, and everyone adapted to the reality of horn-loaded systems that could produce the required SPL with the power amps that were available at the time.
Now, there is a realisation that 'power is cheap', and this is quite true. High power amplifiers are now very cheap compared to even a few years ago. If more power can be supplied to the loudspeakers, the logic is that fewer loudspeakers are needed for a given SPL, and horn loading can be dispensed with as it makes the boxes too big. High efficiency speaker drivers are more expensive to make too, so with vast amounts of cheap power available, why bother? Since power is so cheap, loudspeakers with efficiencies even below 90dB/W/m are common - all you need to do is use a more powerful amp and everything is back where it should be, right? Wrong!
Since the majority of all electrical power is converted to heat, the higher the power applied to a speaker, the more heat you have to get rid of. The typical loudspeaker is not a good design for heat disposal, and many of the more dedicated manufacturers have gone to extreme lengths to get the best possible cooling for their drivers' voicecoils.
Even so, there are limits. These limits are physical, metallurgical and chemical, and no amount of marketing hype will change any of them. The adhesive that bonds the voicecoil to the former has a difficult job: high temperatures, often extreme forces, and high vibration levels all stress the adhesive. It is not uncommon for a voicecoil to reach (or exceed) 200°C, and the more power that is wasted as heat (because the speaker is inefficient), the more power you need to put into it to get the sound pressure level (SPL) you had when the voicecoil was cold.
Advanced cooling methods have been developed, but these rely on the cone/voicecoil assembly moving - preferably by comparatively large distances (10-20mm or more). At midrange frequencies, there is very little cone movement, so there is also very little airflow across the voicecoil and former. A vented back plate isn't of any use if there is very little cone movement and virtually no airflow.
Figure 1 - Basic Loudspeaker Motor Construction
Figure 1 shows the typical basic construction of the loudspeaker motor. Various proprietary variations exist, but the essential elements remain much the same. The voicecoil has two ways to get rid of heat - radiation and convection. We can forget convection, as there is nowhere for the hot air to 'convect' to, other than within the motor assembly. While the pumping effect of the cone's movement does help to move the air around, in many cases there is actually nowhere for the air to go. Where the 'self cooling' effect is designed well, this only works at low frequencies where there is significant cone movement - at higher frequencies the cone travel is such that there is little or no pumping effect at all! Radiation will make the rest of the motor hot as well, but at least that has enough area to get rid of some of the heat. Some manufacturers use aluminium baskets to support the speaker's motor components, and this will act as a heatsink. One maker even has the 'basket' in front of the cone so it won't be trapped inside the box. Finned magnet covers are fairly common now, and virtually all drivers that claim to be able to handle appreciable power use a vented pole piece as shown.
But is this enough? How hard is it to dissipate heat into the surrounding atmosphere? What other options are there? Very few, unfortunately, and this is part of the overall problem. The problem is made worse with low efficiency drivers, because for a given SPL, more power is needed right from the start.

Consider this ... assume we have a loudspeaker rated at 90dB/W/m (a 'softspeaker'?) versus another rated at a much more respectable 100dB/W/m. With one watt of electrical energy applied, the second will produce 10dB more SPL than the first. While this is insignificant if we are happy with 90dB SPL, if we try to obtain 110dB SPL at one metre, the efficient driver will do it with only 10W, while the inefficient driver needs 100W. Another 10dB makes that 100W vs. 1,000W - anyone want to guess which speaker will last longer before the voicecoil melts?
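The arithmetic above is easy to check numerically. This is a minimal sketch (the function name is my own, for illustration only), converting a target SPL and a rated sensitivity into the average power required, ignoring any compression effects:

```python
def power_for_spl(target_spl_db, sensitivity_db_1w_1m):
    """Average power (watts, at 1 m) needed to reach a target SPL,
    ignoring thermal and flux compression."""
    return 10 ** ((target_spl_db - sensitivity_db_1w_1m) / 10)

print(power_for_spl(110, 100))  # 10.0 W for the 100dB/W/m driver
print(power_for_spl(110, 90))   # 100.0 W for the 90dB/W/m 'softspeaker'
print(power_for_spl(120, 90))   # 1000.0 W
```

Every 10dB of extra SPL demanded from the less sensitive driver multiplies the power requirement by ten.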
There is another, more insidious aspect to this. So far, we have assumed that the electrical input to SPL ratio is constant, but it most certainly is not. As a voicecoil gets hot, its resistance rises. This increases the impedance of the speaker, so less electrical energy goes in, and less acoustic energy comes out. The 1000W amp needed to drive our inefficient speaker will probably be delivering half that (because of the increased impedance) by the time the voicecoil is ready to depart this world, so SPL is increased by only 7dB instead of the 10dB we expected when we applied 10dB more signal level. Meanwhile, the efficient driver only has to dissipate 100W, there is less heating, and consequently less relative drop in level as power is increased.

Welcome to the real world of 'power compression'. JBL [ 1 ] has performed tests showing that power compression can reduce output by anything from 3dB to 7dB from the expected SPL at elevated temperatures. Seven decibels! Remember that each 3dB means double (or half) the power, so 7dB is five times. You use a 1kW amp on a speaker and expect it to be pretty loud (not an altogether unrealistic expectation), but if another speaker can be just as loud with only 100W, then which one is preferable? Loaded question ... of course the more efficient driver will be the better choice, all other things being equal.

Remember the bar radiator from the opening paragraphs? How long can a voicecoil survive with 500W or 1kW or more being dissipated as heat? If the programme material has plenty of bass, the cone will move, and that will push air through the magnetic polepiece gap and past the voicecoil. This will certainly help cool things down, but where does the hot air go? Into the cabinet, to be sucked back through the gap next time the cone moves outwards? That's not very useful. It is fairly obvious that as a solution to maintaining a sensible voicecoil temperature, this method sucks (pardon the pun).
The nature of music helps us here. Music has (or should have) loud bits, soft bits and even silent bits. This diversity is called dynamic range - a term that describes any signal that is not a continuous waveform. Dynamic range is simply another way to describe the peak to average ratio.
Figure 2 - Typical Audio Waveform
The average power delivered to a system is where we get the SPL from - this is averaged over a period of time, and accounts for asymmetrical waveforms, brief bursts of high levels followed by periods of lower levels, impulse signals, etc. It is not uncommon to find the crest factor at around 3:1 - peaks are about three times the average voltage. This is approximately 10dB.

A ratio of 3.16:1 in voltage or current is 10dB, and a ratio of 10:1 in power is also 10dB. For those who do not fully understand the relationships of dB, I suggest you read Frequency, Amplitude & dB (ESP website). In brief ...

dB (Power) = 10 × log( P1 / P2 ) = 10 × log( 10W / 1W ) = 10 × log( 10 ) = 10dB
dB (Volts) = 20 × log( V1 / V2 ) = 20 × log( 3.16V / 1V ) = 20 × log( 3.16 ) = 10dB

This remains an area where people regularly become confused, but once you know the way dB is calculated it all falls into place.
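The two formulas are trivial to verify numerically, which can help if dB arithmetic is unfamiliar:

```python
import math

db_power = 10 * math.log10(10 / 1)    # power ratio of 10:1
db_volts = 20 * math.log10(3.16 / 1)  # voltage ratio of 3.16:1

print(round(db_power, 2))  # 10.0
print(round(db_volts, 2))  # 9.99 (3.16 is a rounded value of the square root of 10)
```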
Some recorded material has more dynamic range, some less, with typical values between 6dB and 20dB - the lower figure is more likely on modern, highly compressed material.
To reproduce a signal with a 10dB crest factor cleanly (without clipping distortion) means that if your average level requires 10W, the peaks will need 100W - a 100W (minimum) amplifier is needed to get 10W of clean, undistorted average electrical energy. If you use a low efficiency driver (such as the softspeaker described above), then these figures could be 100W and 1000W respectively!
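The headroom arithmetic can be sketched in a couple of lines (the function name is my own):

```python
def amp_power_needed(avg_power_w, crest_factor_db=10):
    """Minimum amplifier power to pass peaks cleanly, given an average
    power and the programme's crest factor in dB."""
    return avg_power_w * 10 ** (crest_factor_db / 10)

print(amp_power_needed(10))   # 100.0 W amplifier for 10 W average
print(amp_power_needed(100))  # 1000.0 W for the low-efficiency driver
```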
To determine the level of power compression, it is necessary to know the thermal mass of the voicecoil assembly, the rate of heat transfer, plus a whole swag of other things that the manufacturer does not tell us. Alternatively, it can be measured, albeit with some risk to the driver itself. This is the quickest and most accurate way to figure out just how much thermal compression a given driver exhibits. While we are at it, we'll also use an indirect method to measure the voicecoil temperature.
Copper has a thermal coefficient of resistance (α) of about 4E-3 per °C. Therefore, if a voicecoil has a DC resistance of 6Ω at 25°C, at 200°C this will increase to ...

RT2 = RT1 × ( 1 + α × ( T2 - T1 ))
where T1 is the initial temperature, T2 is the final temperature, and α is the thermal coefficient of resistance. Substituting our values in the above equation we get ...
R200 = 6 × ( 1 + 4E-3 × ( 200 - 25 )) = 10.2Ω
There is some discrepancy as to the actual coefficient of resistance for copper - figures found on the Net range from 3.9E-3 to 4.3E-3. I have adopted a middle ground, settling on 4E-3. Feel free to use the value with which you are most comfortable. Note also that the coefficient of resistance does change depending on whether the copper is hard drawn or annealed.
If we know the change in resistance, then it is a relatively easy matter to calculate the temperature, provided we have a reference resistance taken at a known temperature before the test.

ΔT = ΔR / ( RT1 × α )
where ΔT is the temperature rise and ΔR is the change in resistance. For the previous example the change in resistance is 10.2 - 6 = 4.2 ohms, so we get ...

ΔT = 4.2 / ( 6 × 4E-3 ) = 175°C
T = ΔT + T1 = 175 + 25 = 200°C

This is in agreement with the previous calculation. So, all you need to know to calculate the voicecoil temperature is the resistance at ambient temperature, the ambient temperature itself (a thermometer works for this), and the resistance at the designated power level. Because resistance has to be measured with an AC signal, the frequency needs to coincide with the speaker's resistive region so you measure resistance rather than impedance. The most convenient point is at around 200-500Hz (it's the lowest point on the impedance curve).
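Both directions of this calculation - predicting hot resistance from temperature, and inferring temperature from a measured hot resistance - can be sketched as follows (function names are mine, and the coefficient is the approximate mid-range value discussed above):

```python
ALPHA = 4e-3  # temperature coefficient of resistance for copper, per °C (approximate)

def resistance_at(r_ref, t_ref, t):
    """Voicecoil resistance at temperature t, from a reference measurement."""
    return r_ref * (1 + ALPHA * (t - t_ref))

def temperature_from_resistance(r_ref, t_ref, r_hot):
    """Voicecoil temperature inferred from its measured (hot) resistance."""
    return t_ref + (r_hot - r_ref) / (r_ref * ALPHA)

print(round(resistance_at(6.0, 25, 200), 2))                 # 10.2 ohms
print(round(temperature_from_resistance(6.0, 25, 10.2), 1))  # 200.0 °C
```

The two functions are exact inverses of each other, which is why the worked example above round-trips back to 200°C.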
The only way to determine just how long it will take for the voicecoil to reach this temperature is by measurement. Although it is possible to calculate it, this would require far more information than you will be able to obtain from the maker, and far more maths than I am prepared to research and pass on.
It is usually safe to assume that the temperature rise will take less than 30 seconds for most drivers - the thermal inertia of the voicecoil assembly is normally quite low, but some low efficiency subwoofers may have relatively heavy coils and formers, thus increasing the thermal inertia. This is not a bad thing for a driver that handles intermittent bursts of high power with long rest periods in between, and it is the very nature of the programme material that allows many of these drivers to survive the power applied.

Note: The above assumes that resistance measurements are taken electrically, and with little or no phase shift - to obtain an accurate result requires that you monitor the RMS voltage and current applied to the speaker. The voicecoil resistance may then be determined using Ohm's law. Because of 'flux modulation' (see Section 4), simply measuring the change in SPL is an unreliable method for calculating the voicecoil temperature, and should not be used.
Let's say that we have a requirement for a continuous average SPL of 115dB at 1 metre. Such a system might be used in a movie theatre (for example), and by the time the signal gets to the audience, the average SPL might be reduced by about 30dB (due to 'room loss' and distance) - around 85dB SPL within the theatre itself (this is the reference level for theatre systems).
From the above, it is quite obvious that as the voicecoil gets hotter, less power is delivered. Provided the signal is applied for long enough and the heat can be removed at a rate that prevents meltdown of the voicecoil, the average SPL will be reduced - after meltdown, SPL will be reduced to zero! Based on what little information is available from manufacturers, it seems that the loudspeaker voicecoil will reach thermal equilibrium within around 20-30 seconds. High frequency drivers will be faster (because of the low thermal mass of the voicecoil assembly), and very large subwoofers will most likely be slower, as the voicecoil assembly will be substantially heavier, and thus have a greater thermal mass.

If we work only within the constraints of the maths shown above, we can arrive at a good estimate of the final efficiency of a driver that is being pushed to its limits. Consider the example of an 8 ohm driver, having a DC resistance (DCR) of 6 ohms at 25°C. The ratio of nominal impedance (Z) to DCR is therefore 8 / 6 = 1.33:1.

At a voicecoil temperature of 200°C as shown in the previous section, the resistance increases to 10.2 ohms, so the nominal impedance increases to ...

Z = DCR × ZRatio = 10.2 × 1.33 = 13.56 ohms
Neglecting (for the moment) any other effect, we'll assume that at full power, the voicecoil temperature will reach around 200°C. That means that if the nominal 8 ohm driver were to be supplied with a signal of 50V RMS (average), that will work out to 312.5W at 25°C. After around 20 seconds when the voicecoil reaches its maximum temperature, the power will fall to ...
P = V² / Z = 50² / 13.56 = 184.4W

That represents a drop in power (and SPL) of 2.28dB - you have lost about 40% of the power you thought you had. A 90dB/W/m driver has effectively fallen to 87.7dB/W/m. Now, this is a simplification, but the actual power and SPL will be very close to the values calculated. Where you expected the speaker to produce about 115dB SPL at 1 metre, after only a short period it will only give you a tad under 113dB SPL - a significant decrease. To achieve 115dB SPL, this driver now needs around 530W average (a 27.3dB increase over the 87.7dB/W/m effective sensitivity), but that will just cause the voicecoil to get hotter still, and it will fail - most likely without ever achieving the target sound level on a long term basis.
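The whole chain - cold power, hot impedance, compressed power, dB loss - can be put together in one short sketch. The function name and the assumption that impedance scales in proportion to DC resistance are mine; any small difference from the worked figures comes from using the exact 8/6 ratio rather than the rounded 1.33:

```python
import math

ALPHA = 4e-3  # copper temperature coefficient of resistance, per °C (approximate)

def compressed_power(v_rms, z_nominal, dcr_cold, t_cold, t_hot):
    """Average power into a driver once the voicecoil has heated,
    assuming impedance scales with DC resistance (a simplification)."""
    dcr_hot = dcr_cold * (1 + ALPHA * (t_hot - t_cold))
    z_hot = dcr_hot * (z_nominal / dcr_cold)
    return v_rms ** 2 / z_hot

p_cold = 50 ** 2 / 8                           # 312.5 W at 25°C
p_hot = compressed_power(50, 8, 6.0, 25, 200)  # ~184 W at 200°C
loss_db = 10 * math.log10(p_cold / p_hot)      # ~2.3 dB of thermal compression
print(round(p_hot, 1), round(loss_db, 2))
```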
Using the same expectation (115dB SPL), but substituting a 100dB/W/m driver, we can see that only 32W is needed. With only 32W there will be very little thermal compression - perhaps 0.5dB worst case - so power has to be increased to a little under 40W to compensate. Although the extra power will cause the voicecoil to get a little hotter (and so reduce the actual power and SPL a little more), it is well within the capacity of a sensibly sized amplifier to cope with.

Remember that we already established that the peak to average ratio is typically around 10dB, so the 90dB/W/m driver will need an amp capable of 10dB more power than the average level (a rather daunting 3,000W!), while the 100dB/W/m unit will only need 10dB over and above 40W - namely 400W. It takes little imagination to realise that the lesser (and probably cheaper) speaker is no bargain after all, and will almost certainly fail if a 3kW amplifier is used. All this, and it is still perhaps 3dB shy of the 115dB SPL expected of it. In my books, that represents a travesty, not a bargain. Looking at the data in a table helps to see the information at a glance ...
Sensitivity (dB/W/m) | Power (Average) | Power (Peak) | Thermal Compression | SPL (Actual)
90dB | 300W | 3,000W | 2.28dB | 112.5dB
100dB | 40W | 400W | 0.5dB | 115.5dB
This is simply a theoretical exercise, but the effects are both real and demonstrable, and the above is not at all unrealistic. In fact, it is rather optimistic - many 'high power' loudspeakers can suffer 5-7dB of power compression based on the tests done by JBL [ 1 ]. Every dynamic driver will suffer from thermal compression, because the voicecoil winding has no choice but to get hot when it is dissipating a lot of power. Only a very few (mainly professional) loudspeaker manufacturers treat this as seriously as it deserves. The effect of the changing impedance on a passive crossover network is covered in Design of High Quality Passive Crossovers (ESP website). Thermal effects are wide ranging and rather insidious, changing the way the speaker system sounds depending upon the power applied.

It is only by choosing a driver whose efficiency is matched to the requirements that those requirements have even the slightest chance of being met in practice. Quite obviously, a higher efficiency loudspeaker driver will need less power to achieve the result, but less obviously, high efficiency should be sought whenever possible - it will always give a better result (all other things being equal).
I built the test voicecoil assembly (1.45 ohms at 20°C) pictured below. This coil was subjected to a range of currents from 1 to 3A. DC was used for all thermal tests so that there was no chance for inductance to influence the readings. The polarity was arranged so that the voicecoil was pulled into the gap. Since the coil former was deliberately designed to sit against the rear plate, this removed the necessity to clamp the coil in position. The purpose of the winding around the outside of the magnet will be explained in Section 4.
Figure 3 - Test Motor Assembly
The test motor is not a powerhouse, but neither is it insignificant. It uses a 95mm diameter magnet, 15mm high. The front plate is 5mm thick, and the rear plate 4.5mm. The gap measures 1.65mm, having a centre pole of 25mm diameter. This driver would originally have been rated for around 25W continuous power, with perhaps 100W peak power rating. Unfortunately, the exact details are long gone, and the magnet assembly is all I have left of the speaker. By some 'standards' that seem to be applied today, it is probable that my estimations are much too low - a small car loudspeaker I have lying around has a motor that is less than ½ the size, but is rated to 80W peak power - in someone's dreams!
The measured values shown below represent a low power dissipation across the range. The voicecoil was in intimate contact with an aluminium former which was in direct contact with the centre polepiece. This means that cooling was far better than would normally be the case (although there can be no 'forced air' cooling because the voicecoil was not allowed to move during the tests), yet the resistance range is considerable. I did try the coil withdrawn from the magnet assembly (so it had no cooling from the gap), and it became extremely hot within a few seconds.

Using the formula from above, we can determine how hot the coil actually became at each current ...
Current | Voltage | Power | Resistance | Temperature
0 | 0 | 0 | 1.45Ω | 20°C
992mA | 1.54V | 1.528W | 1.55Ω | 37°C
2.006A | 3.66V | 7.34W | 1.82Ω | 84°C
2.91A | 6.68V | 19.44W | 2.295Ω | 166°C
Remember that these figures were easily reached at very low power - 20W is well within the normal range of a domestic system. Based on these figures, we see a resistance change from 1.45 Ohms to 2.295 Ohms at just 20W - almost a 60% increase. An impedance increase of 60% will certainly cause havoc with a passive crossover, and the power compression will be audible unless all drivers have the same or very similar impedance increase. Even at an average power of only 1.5W there is an impedance increase of almost 7%, which would take an 8 Ohm driver to 8.55 Ohms - enough to measurably affect crossover performance, and it may be audible with some material (that's about 0.57dB impedance change). The maximum of 166°C in this test is not an insignificant temperature, and many adhesives will be suffering to some degree.
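The temperatures in the table follow directly from Ohm's law and the ΔT formula given earlier. A quick sketch using the measured currents and voltages reproduces them to within the rounding used in the table:

```python
ALPHA = 4e-3                 # copper temperature coefficient, per °C (approximate)
R_COLD, T_COLD = 1.45, 20.0  # reference resistance (Ω) and ambient temperature (°C)

for amps, volts in [(0.992, 1.54), (2.006, 3.66), (2.91, 6.68)]:
    r = volts / amps                                 # Ohm's law gives hot resistance
    temp = T_COLD + (r - R_COLD) / (R_COLD * ALPHA)  # ΔT formula gives temperature
    print(f"{volts * amps:6.2f} W  {r:5.3f} Ω  {temp:6.1f} °C")
```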
While it may not seem to be the case, thermal compression may be the only thing that prevents driver failure in many systems. As the voicecoil temperature increases, the power delivered to the loudspeaker falls, thus limiting the temperature rise. There's a limit to this though, and if the amp is powerful enough and the operator is unskilled, there's nothing to prevent the faders from being pushed ever higher until the speakers fail.

It's commonly believed that amps should be able to provide more power than the speakers can handle (usually called 'headroom'). A ratio of perhaps 2:1 is wise, provided the system is used sensibly, and ideally incorporates some form of speaker protection that will prevent the operator from exceeding the speaker ratings. Unfortunately, many 'protection' systems don't work well because they are often set up improperly. Beware of the pernicious myth that "clipping kills speakers" - it doesn't; sustained overpowering kills speakers, regardless of how that happens. (See the Speaker Failure article for more details on this topic.)
Note that this section deals primarily with loudspeaker driver efficiency - distortion mechanisms abound, and the magnetic circuit (flux modulation in particular) is a major contributor to the many different and exciting ways a dynamic loudspeaker can modify the waveform, and thus introduce frequencies that were not present in the original signal (distortion). The distortion factors have received more attention than the other effects discussed here, but this is still an area where most driver manufacturers would prefer to remain silent. Nonetheless, a search for flux modulation reveals a surprisingly large number of documents, although not all are useful, and too few link to loudspeaker manufacturers' websites. In nearly all cases, only the distortion mechanism is discussed, but there's more ...
It is within the magnetic circuit that we see effects that are either ignored completely or glossed over. While most professional loudspeaker manufacturers take the magnetic circuit seriously, the vast majority of general purpose driver makers do not - pressed steel pole pieces that are unacceptably warped are a very common problem. (I have seen a brand new rear plate that had 0.5mm of wobble when placed upon the magnet. That represents a significant air gap in the magnetic circuit. The front plate wasn't much better.)
The essential parts of the motor's magnetic circuit were shown in Figure 1, but as noted this is a fairly generic arrangement. To make an efficient loudspeaker means that the magnetic circuit must be optimised. An example of this optimisation is shown in the JBL paper [ 1 ], and it is obvious that a great deal of thought and research has gone into the design of the magnetic circuit to ensure the flux density across the gap is as high as possible.

Consider the effect of minute air gaps between the front and rear plates and the magnet itself. Any air gap (or anything else that has low permeability to magnetic flux - e.g. adhesive) will reduce the effectiveness of the magnetic coupling between the magnet and the plates, and hence to the gap itself. This weakens the flux across the gap, and in turn reduces efficiency. Of particular importance is any part of the magnetic circuit that is saturated (a condition where the material will not accept any more magnetic flux) - an excellent example could be seen at the Infolytica website, where saturation is shown in the rear plate of a loudspeaker motor. (It may be difficult to find - the original link died.)

Now, consider the effect when there is current in the voicecoil. There are now three different sources of flux in the gap and the magnetic circuit as a whole ...

The force produced by the current in the voicecoil creates a magnetic field that uses the static flux in the gap as its only means of propulsion. The static flux is not a solid! It will bend when an opposing magnetic force is applied, and the amount by which it bends is determined by the static flux density and the voicecoil current. Likewise, the magnet flux will be modulated - it isn't solid either, and that's what the coil around the magnet in Figure 3 was used for.
The first thing I had to do before useful tests could continue was to rewind the voicecoil - the thermal tests (and some initial magnetic tests) had damaged the wire insulation, causing shorted turns. The second version had a measured impedance of 2.11Ω when installed in the gap.

The test motor was driven with varying levels at 400Hz, and the modulation of magnet flux density was monitored by the outside winding. The coil was locked in place for all tests. As the voicecoil is driven, it is logical that the flux produced by the coil must traverse the magnetic circuit, and the external coil picks up the variation. In an ideal situation, the induced signal should be a smaller replica of the modulating voltage (and current). Smaller, because the outside of the magnet is a good distance from the mean magnetic path, so the variation of field strength can be expected to be a lot less than that within the magnet itself. A replica means that there should be no additional distortion, and the voltage change on the outer coil should be directly related to the drive voltage.

Two interesting things were revealed by this test, and while one was pretty much expected, I didn't notice the other until I had run a number of tests. That a small signal would be picked up was completely expected, and likewise I figured that it may not be linear. The part that I almost missed was the distortion of the flux at higher drive levels - the waveform is shown in Figure 4. Why did I almost miss it? Simply because it was obscured by the distortion in the applied waveform, and I was concentrating on measuring the amplitude. The test procedure was amended after I saw this - the original drive signal was 50Hz, derived from a Variac. I then switched to using a 400Hz signal, amplified with a P68 subwoofer amp to get the voltage and current (with minimum noise and external distortion) needed to repeat the original tests with a clean sinewave. The coil is a difficult load, having an impedance of just above 2 ohms.

Tests were also performed on a pair of real loudspeakers - the test motor is one thing, but to be meaningful a magnetic assembly needs to be tested in working condition. Figure 4 shows the waveform obtained from a Vifa M13MH-08 driver (now out of production), with 15V RMS at 400Hz applied. Although I collected test results for the other driver (a car speaker, rather optimistically rated for 80W peak), the results were very similar to the Vifa and test unit.
Figure 4 - M13MH-08 Motor Outer Coil Distortion
The distortion shown above started (in less severe form) at a relatively modest drive level. Distortion was measurable at very low levels, and started to become clearly visible at about 7V. The waveform in Figure 4 is a screen capture of that seen at a drive level of 15V. The signal was applied in very short bursts to eliminate thermal compression, and while the waveform could be captured, I could not measure distortion accurately. I did capture the levels using the FFT capability of my PC based oscilloscope, and the second harmonic of the Figure 4 waveform is about 20dB below the fundamental (an exact figure is difficult because of the burst waveform). The distortion is quite visible - note that the positive and negative peaks are of different amplitudes. If you look very carefully, you will see that the level (look closely at the peaks) is starting to fall after only 8 cycles. This is thermal compression starting to show!
15V RMS is equivalent to only 28W (based on the nominal impedance), so one doesn't really expect 'bad' things to happen. There was measurable distortion even with less than 1V drive in all motors tested. It is certain that distortion started earlier than this, but the signal level from the outer coil was too low to get an accurate reading. Where I was able to measure the distortion, it is shown in Table 3. Measured distortion in the outer coil was almost all third harmonic, until the waveform started to become asymmetrical. The applied signal has a distortion of about 0.04%.

The effect of changing flux levels with drive level is often lumped together with other magnetic effects and collectively called 'flux modulation', and it works alongside power (thermal) compression to reduce the efficiency of the driver at high power levels. There are several AES papers that discuss the magnetic circuit, but unfortunately they are not available except at considerable cost - while not expensive per se, they are expensive if it proves that they do not contain any information you need. One thing that is very apparent (from examination of the data available from the few magnetic simulation tool suppliers worldwide) is that the traditional motor assembly depicted in Figure 1 is flawed. In nearly all cases, the back plate will saturate, reducing the available flux at the gap and (probably) causing asymmetry of the voicecoil induced flux.
Voicecoil (V RMS) | Outer Coil (mV RMS) | Distortion
1 | 6.57 | 0.8%
2 | 13.3 | 1.1%
3 | 21.3 | 1.3%
4 | 30.8 | 1.6%
5 | 39.3 | 1.9%
6 | 48.3 | 2.1%
7 | 56.7 | Visible
14 | 126 | Moderate
15 | 135 | See Figure 4
The distortion figures in Table 3 show what I was able to measure - using the test motor rather than a real speaker. Above 6V, the coil became too hot too quickly for a distortion reading, and even the figures shown for coil voltages above 3V are lower than they should be, because thermal compression had already reduced the coil current noticeably. At 7V, the onset of distortion was visible on the oscilloscope, and at 14V saturation was noticeable. At 15V it appeared as shown in Figure 4 - this is a sure sign that there is magnetic saturation because the waveform is asymmetrical. All three motors tested this way showed identical behaviour.
Unlike thermal compression, the flux modulation (or flux compression) effect is not limited by time - it is instantaneous. If a single short duration burst is applied at 10W, we may measure a peak instantaneous SPL of 100dB (for example). Increase the power by 10dB (to 100W), and instead of 110dB SPL, we may measure 109dB - an instantaneous loss of 1dB. Sustained power will then heat the voicecoil, so thermal compression adds to the problem. After perhaps 10 seconds, the SPL may fall by a further 5dB, giving a total loss of ~6dB SPL. The speaker has effectively become 6dB less efficient than expected, and requires four times as much power as we thought we'd need. This additional power will only cause more of the same effects, and the process is a vicious cycle - the more power we apply to overcome the magnetic and thermal losses, the greater the magnetic and thermal losses become, requiring even more power. This can continue until the smoke is released from the voicecoil, at which point we have an ex-loudspeaker.
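The 'vicious cycle' arithmetic is simple: every dB of compression multiplies the input power required. A minimal sketch, using the illustrative 1dB (flux) plus 5dB (thermal) figures from the example above:

```python
def extra_power_factor(loss_db):
    """Multiplier on input power needed to make up loss_db of compression."""
    return 10 ** (loss_db / 10)

flux_loss_db = 1.0     # instantaneous flux modulation loss (example value)
thermal_loss_db = 5.0  # additional loss after ~10 s of sustained power (example)
print(round(extra_power_factor(flux_loss_db + thermal_loss_db), 1))  # 4.0
```

A 6dB total loss means roughly four times the power, which in turn increases both losses further.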
Thermal compression was a problem during the magnetic circuit tests too, and this was another reason I could only use short bursts of signal for testing. With an applied voltage of 5V RMS, the outer coil voltage fell from an initial value of 39.3mV to 31.8mV over a period of about 5 seconds. That's a 1.8dB fall at an initial power of 11.8W (falling to 9.5W in 5 seconds).
For every action, there is an equal and opposite reaction. This applies in all areas of physics, and the loudspeaker magnetic circuit is no exception. Figure 6 shows Fleming's 'left hand rule' applied to a voicecoil in a loudspeaker motor. From this we see that the voicecoil exerts a magnetic force against the static field set up across the gap. It is not sensible to assume that the static field is unaffected.
Figure 6 - Left Hand Rule and the Loudspeaker Motor
Based on the information in the references, it would seem that unless one goes to fairly extreme lengths to get it right, flux modulation can have a profound effect on the instantaneous efficiency of a loudspeaker. Saturation of the pole-pieces in particular should be avoided, but few loudspeaker motors are designed to prevent it.
The modulation can be reduced only by reducing the voicecoil current or by increasing the static flux density - thereby increasing its resistance to bending forces. In addition, the magnetic circuit must have sufficient reserve capacity to ensure that it never saturates. In a cycle of audio signal, the voicecoil current ...

If the pole piece(s) are already close to saturation (where they cannot sustain any further magnetic 'lines of force'), the field strength cannot be increased and decreased by the same amount in the centre pole and top plate, so the waveform will be distorted and the driver will also lose some efficiency.
Although I have a magnetometer, it is not only uncalibrated but, as I discovered during tests designed to prove the above, also non-linear at the high field strengths used in a loudspeaker. This made any tests based on its use rather pointless - hence the coil around the outside of the magnet.
Figure 7 - Principle of Motor Action
The figure above is from one of my old textbooks [ 4 ] showing the exact parameters that affect the operation of a loudspeaker. The wire diagrams show a + to indicate that the current is moving away from you (conventional current flow, from positive to negative), and the dot means it is flowing towards you.
A quote from the text ...

The basic principle of the conversion of electrical energy to mechanical energy in a motor rests upon the fact that when a current-carrying conductor is placed in a region occupied by a magnetic field (unless the direction of current and the direction of the magnetic field are parallel), a reaction is set up that tends to move the conductor out of the field. This principle is illustrated in the diagrams of Fig. 14.1. (Figure 7 above)
The effect of any external field on a static magnetic field must cause the static field to be deformed. This deformation is a part of 'flux modulation', and a considerable amount of the effect will be found in the gap. The magnet itself will normally be relatively immune from demagnetisation caused by voicecoil current (although Alnico magnets have reportedly been demagnetised by excess voicecoil current), but the magnetic path itself is another matter altogether. This is very difficult to measure, and I was unable to detect any variation in actual magnet 'strength' during my tests. Highly specialised equipment is needed for these tests, and some further information is available from Reference 5.
As the voicecoil moves, there is distortion of the static flux because instantaneous movement of the coil, former and cone is not possible. Movement is impeded by inertia and the loudspeaker's suspension - there is also air loading on the cone, but this is comparatively insignificant. To minimise distortion of the static flux, moving mass and suspension stiffness must be reduced to the minimum - these factors are a matter of compromise, based on the end use of the driver.
+ +With low flux density across the gap and/or a wide gap, it is logical that this makes it easier for the voicecoil flux to force it 'out of the way'. Many of the current crop of subwoofer speakers will have this problem - a heavy cone means that by the time it has started to move, the signal induced flux will have distorted the static flux significantly.
This concept is easily demonstrated. Take a pair of magnets, and align them so that they oppose each other. At low field strengths (magnets a fair distance apart) it is easy to push them closer together. As they get closer, the forces become greater, and far more effort is needed to move them that last millimetre than was needed when they were further apart. The same thing happens (but the other way around) with the magnets attracting. It's easy to keep them apart when they are some distance from each other, but when they get close ... snap! (Be very careful if you use neodymium magnets - they can really hurt if you get skin caught between them.)
That this form of flux modulation will reduce (instantaneous) efficiency should be quite apparent, and the wide voicecoil gaps favoured by modern speaker manufacturers will make the situation many times worse (magnets further apart). The wide gaps are used because this makes the speaker cheaper to make, having no close tolerances and thus requiring no skilled assembly workers.
It is also worth noting that because magnetic flux is not solid, the voicecoil may no longer be fully immersed in the magnetic field well before it actually leaves the gap. This contributes distortion and loss of efficiency, since the total flux through the coil is not constant, and varies with applied current. This may occur before the coil even starts to move.
The speakers I experimented with are a mixed bag, with some fairly well known drivers and some that are rather less well known. They are also of rather different vintages, ranging from very recent to many years old. This doesn't change anything, since so few manufacturers have ever published power compression figures, even fewer have examined flux modulation, and just as few have tried to do anything about it.
All drivers were measured free field (without a baffle), and only the instantaneous compression levels were measured - some of the drivers I have in my workshop are on loan, and I certainly never wanted to blow any of them. The test was arranged as follows ...
Not one driver I tested managed the 19.35dB increase, and remember that this test was specifically designed to keep the voicecoil cool enough to prevent thermal compression effects - the first high power burst only was measured, and that was not long enough to allow voicecoil heating at a level that would skew the results. I verified that heating was minimal by allowing the power test to run for some time, and no significant thermal compression was noticed.
The driver measurements are shown in the following table. Note that all voltage levels are peak-to-peak (P/P). The measurements were taken using a Philips PM382A Analogue/Digital oscilloscope. The signal processing capabilities of this 'scope make it ideal for detailed measurements at this level. The microphone was an ESP measurement mic, and was powered from a phantom feed / preamp combination. All microphone voltages listed are at the output of the preamp, which was not changed for the duration of the tests. Drivers were nominally 8 ohms (except #1 - 4 ohms), although the actual impedance at the test frequency was not measured.
| Test # | Amp V (80dB) | Mic V (80dB) | Amp V (100dB) | Mic V (100dB) | Amp Change | Mic Change | dB Loss |
|--------|--------------|--------------|---------------|---------------|------------|------------|---------|
| 1 (4Ω) | 2.41V | 811mV | 23.30V | 7.50V | 19.707dB | 19.321dB | 0.386dB |
| 2 | 2.41V | 579mV | 23.10V | 5.47V | 19.632dB | 19.506dB | 0.126dB |
| 3 | 2.40V | 516mV | 23.10V | 4.79V | 19.668dB | 19.354dB | 0.314dB |
| 4 | 2.43V | 488mV | 23.50V | 4.51V | 19.709dB | 19.315dB | 0.394dB |
| 5 | 2.43V | 448mV | 23.30V | 4.04V | 19.635dB | 19.102dB | 0.533dB |
| 6 | 2.43V | 404mV | 23.30V | 3.72V | 19.635dB | 19.283dB | 0.352dB |
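For anyone wanting to verify the table arithmetic, the 'Change' columns are simply the ratio of the two voltage readings expressed in dB (20 × log10). A minimal sketch, using the readings from row 1 of the table above:

```python
import math

def db_change(v_low, v_high):
    """Level change in dB between two voltage readings (20 * log10 of the ratio)."""
    return 20 * math.log10(v_high / v_low)

# Driver #1 from the table: amplifier and microphone levels
amp_change = db_change(2.41, 23.30)   # ~19.707 dB
mic_change = db_change(0.811, 7.50)   # ~19.321 dB
loss = amp_change - mic_change        # ~0.386 dB of 'magnetic compression'
print(f"amp {amp_change:.3f} dB, mic {mic_change:.3f} dB, loss {loss:.3f} dB")
```

The same calculation reproduces every row, which is a useful sanity check when taking your own measurements.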
In case you are wondering, the tone burst frequency of 877Hz was selected for two reasons. Firstly, the relatively high frequency ensures that there is minimal cone excursion, so we can be certain that the voicecoil remains centred in the gap regardless of power level. Secondly, that happens to be the frequency my tone burst generator provides (it is a fixed frequency type), so I had to choose between modifying it, building a new one that used an external signal generator, or using what I had. Not a difficult choice given that I don't have as much time as I'd like for pure research.
Be aware that there is inevitably some margin for error in all acoustic measurements, especially at the lower (80dB reference) level. I would not expect the error to exceed 0.1dB, and I made every effort to get the results as close as possible. The likely magnitude of measurement error is seen in the small variation in measured difference of amplifier output level - the maximum difference between all measurements is 0.077dB. Some of that is caused by differing driver impedance interacting with amplifier output impedance and cable resistance. There are also limitations imposed by the digital oscilloscope.
Overall, most of the results are not too disappointing - Driver #2 was the best, 'losing' only 0.126dB. Driver #5 is the worst, with over 0.5dB loss. The point to note is that all drivers lost some of the expected increase in level. It is very obvious indeed that had the peak power been increased to (say) 100W or so, these results would have been much worse.
Although there are a couple of exceptions, the higher the efficiency of the speaker, the less 'magnetic compression' is seen. Be aware though - I did not test for distortion, and this can be quite high as a direct result of flux modulation. Also, because my test equipment (and environment) is not optimised for loudspeaker testing, there is invariably some influence despite the mic being close to the loudspeaker. Although I took pains to ensure that reflections were minimised, these effects cannot be eliminated completely without a fully anechoic test environment.
While it may have been nice to have been able to drive the speakers a lot harder to obtain a better indication, I didn't have the luxury of expendable drivers or a suitable soundproof enclosure (the instantaneous SPL was high enough as it was, and I wore hearing protection).
All loudspeaker drivers have mechanical limits. In some cases the suspension might be stiff enough to prevent the voicecoil from striking the back plate, and in other cases there might be no such limitation. Low resonance drivers with a long Xmax and a soft suspension are vulnerable, and doubly so if they are used with any kind of tuned enclosure. Below the tuning frequency, the driver has almost no rear loading, and excursion can easily rise to the mechanical limits and beyond if given too much power at very low frequencies.
Some driver manufacturers specify 'XDamage' - this is the maximum possible excursion before damage occurs. In general, if Xmax is exceeded occasionally, the speaker will usually survive for many years. Should you exceed XDamage then you will almost certainly damage the driver, and if it's exceeded on a regular basis expect a short life and some pretty gross distortion as the mechanical limits are reached and voicecoils impact on rear plates or the cone buckles. All speakers have a mechanical limit, and it doesn't matter if the manufacturer fails to tell you what it is.
The sensible approach is to largely ignore the claimed maximum power handling, and when designing (or testing) an enclosure try to arrange some method of measuring the peak excursion. The maximum power you can use before unacceptable low frequency distortion occurs (Xmax) is the power rating for that speaker, provided it is lower than the maker's claimed maximum. If a speaker reaches its maximum excursion at (say) 40Hz with 250W input, then it matters not a jot if the maker claims it can handle 3kW. If you exceed Xmax at 250W, then that is the maximum power you can apply. Any more simply places the driver at risk of mechanical damage.
No, you don't even need 'headroom', because the speaker driver reaches its limits at 250W, and any more than that will not only exceed Xmax but may bring you dangerously close to XDamage. If the speaker is in a vented or other tuned enclosure, then it is important to ensure that there is a steep high-pass filter in front of the power amp that prevents appreciable power from reaching the speaker below the tuning frequency. Using a filter also ensures that all the power going to the driver is going to make noise, rather than flapping the cone around for no good purpose.
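As a rough illustration of why the high-pass filter matters, an nth-order filter ultimately rolls off at 6n dB/octave below its cutoff. The values below (a 30Hz box tuning with a 24dB/octave filter set to the same frequency) are hypothetical, and the formula uses only the asymptotic slope, so it is optimistic close to the cutoff:

```python
import math

def highpass_attenuation_db(f, fc, order):
    """Approximate attenuation of an nth-order high-pass filter, using the
    ultimate slope of 6n dB/octave below the cutoff fc (asymptotic only,
    not accurate near fc)."""
    if f >= fc:
        return 0.0
    octaves_below = math.log2(fc / f)
    return 6 * order * octaves_below

# Hypothetical vented box tuned to 30 Hz, 4th-order (24 dB/octave) filter at 30 Hz
for f in (30, 20, 15):
    print(f"{f} Hz: about {highpass_attenuation_db(f, 30, 4):.1f} dB down")
```

Even half an octave below tuning the driver is protected by well over 10dB of attenuation, which is the whole point of the filter.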
Power compression comes in two distinct forms - thermal (long term) compression, and instantaneous flux modulation compression. Thermal compression upsets the tonal balance of a multi-way system, as the driver with the greatest compression becomes softer with respect to the other drivers in the system. The change in voicecoil impedance with varying temperature will also affect any passive crossover network, with high order networks being the worst affected (they are very critical with respect to load impedance).
Flux modulation effects are instantaneous, and can affect any driver, although tweeters are less likely to suffer because the power is relatively low even at high volume levels. The use of ferro-fluid may be of great assistance in this respect, although I was unable to test this. Flux modulation causes dynamics to suffer and distortion is created, because the magnetic circuit cannot sustain the maximum flux across the gap as the signal level varies. One of the big problems with flux modulation is that most people are oblivious to its existence, and the details shown here are almost never discussed or published by manufacturers. It is possible that some of the driver manufacturers are unaware of the problem, let alone what needs to be done to minimise it.
Even the limited tests I was able to perform show that flux modulation (magnetic compression) is quite capable of 'squashing' transients to some degree. In an extreme case (assuming low efficiency drivers and considerable amplifier power), where transients should jump out at you, they may blend into the overall mix, losing impact and removing some of the life from your music. The many owners of low powered Class-A amplifiers are forced to use high efficiency drivers to get an acceptable sound level in their listening room. Although the amplifier is often cited as the reason the systems sound good, one of the likely reasons should now be obvious - with no (or little) power compression of either form, high efficiency systems will give much better transient (impulse) response and dynamics. There can be no doubt that these systems will have dynamics that are very difficult to match by systems that require hundreds of Watts to achieve the same in-room SPL.

Having said this, please bear in mind that at this stage there appears to be little or no evidence to suggest that these effects are actually audible. They are measurable, even with relatively primitive techniques, but it is quite possible that a blind A-B test would not reveal any problems at a sensible SPL. Other effects may be present which are audible, but not related to the problem. This is a very difficult area, because it is very hard to isolate the effects as they are somewhat interdependent with other driver parameters, and there does not seem to be any way the various effects can be isolated.
A general solution seems easy, if rather expensive and very limiting with most modern loudspeakers ... use high efficiency drivers. The lower the power that is needed for a given SPL, the less compression the driver will create - be it thermal or due to flux modulation. Because flux modulation effects are comparatively small - at least for domestic reproduction - thermal compression is by far the most dominant factor. For very low frequency drivers, high efficiency is not possible - the moving mass usually needs to be fairly large to obtain a low resonant frequency, and this will always have an adverse effect on efficiency. Despite the limitations, there seems no good reason that any driver should have an efficiency of less than 90dB/W/m - anything lower means that amplifier power has to be increased, and the problems then become apparent.
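The relationship between sensitivity and required amplifier power is easy to quantify: every dB of sensitivity given up must be made back with amplifier power, at 10× the power per 10dB. A minimal sketch (the 90dB and 84dB sensitivities are simply illustrative values):

```python
def amp_power_for_spl(target_spl_db, sensitivity_db_w_m):
    """Amplifier power (W) for a target SPL at 1 m, given driver sensitivity
    in dB/W/m. Every 10 dB shortfall in sensitivity costs 10x the power."""
    return 10 ** ((target_spl_db - sensitivity_db_w_m) / 10)

print(amp_power_for_spl(100, 90))  # 10 W for a 90 dB/W/m driver
print(amp_power_for_spl(100, 84))  # ~40 W for a less efficient 84 dB/W/m driver
```

The extra power goes straight into voicecoil heat and gap flux disturbance, which is exactly where both forms of compression originate.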
In some cases it may be possible to use multiple drivers to increase the effective SPL by creating a small array, which improves the on-axis effective efficiency. By using more than one driver, the power needed by each is reduced for the same SPL, so the effects of both thermal compression and flux modulation are reduced. There are many who claim that their arrays sound exceptional, and this may be part of the reason. Another alternative is horns (loved and hated by a roughly equal number of people). Having very high efficiency, all power compression effects are reduced to a fraction of that of direct radiating loudspeaker systems. Both approaches come at the expense of power response within the listening space; however, few loudspeaker systems have a flat power response anyway, so this may not be as great a problem as may be expected.
Yet another possible solution is to use electrostatic drivers, since these have no magnetic circuit and are renowned for their dynamics. They are not to everyone's taste though, and have a much smaller 'sweet spot' than most other speaker systems. Planar drivers rely on a much more distributed magnetic circuit, so may also be an improvement, but I have no information to support this so it remains conjecture.
One thing that is readily apparent for dynamic (moving coil) drivers, is that the static field strength should be as high as possible. Typical flux densities for (half decent) loudspeakers range from around 1 Tesla (10,000 Gauss) up to around 2.4T, and I would suggest that anything less than 1T is next to useless. Very few drivers use magnetic materials that will provide much more than 1.8T across the gap - it seems to be accepted that mild steel (as used by most of the cheaper drivers and many not-so-cheap drivers too) is unable to provide a gap flux density of more than 1.8T, regardless of magnet strength - the now common use of dual magnets on subwoofers should be seen for what it is - a marketing ploy! To obtain higher field strengths requires the use of specialised alloys that are optimised for magnetic circuits. It is important that the static flux is many times stronger than the dynamic (voicecoil) flux to obtain maximum performance.
Finally however, much as we may make a fuss about the theory and reality of magnetic compression effects (flux modulation), there is little or no data to suggest that the dynamic sound quality of even low efficiency loudspeakers is considered lacking by the majority of listeners. Even the distortion components that are introduced are barely audible [ 6 ] - if at all. Thermal compression is another matter altogether, and it should be obvious to anyone that low efficiency is the curse of dynamics and reliability in high power systems. Although it will not often be a problem for domestic systems, there are plenty of people who have experienced it first hand as the result of a party or similar gathering. The result is usually measured by the number of speaker drivers that have failed. While it is commonly believed that the mere act of an amplifier clipping is the major cause (see the article Why do Tweeters Blow when Amps Distort?), the real reason is sustained power (and thus excessive heat), and all of the drivers in your system are vulnerable.
As for mechanical limits, ignore them at your peril.
Please note that the many references to JBL in this article are not intended to be an advertisement for the company or its products. While I do have considerable respect for JBL, it just so happens that they provide more detailed information than anyone else - to not refer to the extensive data would be to diminish the value of this article considerably.
Although it has been mentioned above and in many other ESP articles, passive crossover networks are affected by the driver impedance, and if the impedance changes due to temperature, the crossover network is no longer accurate. Although this is not usually a major issue with domestic systems used at relatively low levels, passive networks in conjunction with (very) low efficiency drivers can have a dramatic effect on the sound of the loudspeaker system.
Passive networks have another issue as well - the coils used have resistance, and at moderate to high power levels where the coil(s) get hot (and some can get surprisingly hot with sustained high power), the effective series resistance can increase further. This will not only affect the network's frequency accuracy, but may further diminish the damping factor for woofers. All commercial products have to balance the component cost against the final selling price, and use of economical (rather than ideal) coils is not at all uncommon.
Consider a coil with a resistance of 1 ohm (fairly typical of a reasonably good quality inductor of around 3mH), in series with a woofer. The damping factor is already reduced to a maximum of 8 (for an 8 ohm driver) because of the resistance, but if the driver is also low efficiency, you need more power for the desired SPL. If the average power is (say) 20W, then about 2.5W is dissipated in the woofer's series crossover coil. That's not much, but it has very little cooling - the crossover network is often underneath damping material, so airflow is almost zero. Even 2.5W will cause the coil to get rather hot with no effective cooling, so the resistance goes up. While damping factor will probably remain about the same (because the woofer's impedance has also increased), the crossover network's parameters have changed even more than first expected. The network has a different load impedance, and has a different series resistance.
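The arithmetic in the paragraph above can be sketched as follows (treating the driver as a simple 8 ohm resistance, which is of course a simplification; real driver impedance varies with frequency):

```python
def coil_dissipation(avg_power_w, driver_ohms, coil_ohms):
    """Power dissipated in a series crossover coil, assuming the driver
    behaves as a plain resistance (a simplification)."""
    current_sq = avg_power_w / driver_ohms   # I^2 = P / R for the driver
    return current_sq * coil_ohms

def damping_factor(driver_ohms, series_ohms):
    """Best-case damping factor seen by the driver through a series resistance."""
    return driver_ohms / series_ohms

print(coil_dissipation(20, 8, 1.0))  # 2.5 W lost in the coil
print(damping_factor(8, 1.0))        # damping factor limited to 8
```

As the coil heats and its resistance rises, both numbers move the wrong way, which is the self-reinforcing problem the text describes.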
The problem gets worse as crossover frequency is reduced (bigger inductor, more resistance) and/or driver efficiency is reduced, needing more power. This is one of the more compelling reasons to use active crossovers and separate amps. We can't do much about the speaker's impedance change as it gets hot, but to compound the errors by using passive crossovers is not high fidelity. Again, using high efficiency drivers mitigates many of the problems for normal listening levels.
The use of passive crossover networks in high power systems is strongly discouraged. There is absolutely no benefit to be gained, but they cause a great many problems. Making a passive crossover network that has adequately sized inductors (series resistance must be as low as possible) becomes very expensive - just for the copper wire alone! The money is far better spent on building an active crossover and separate power amplifiers for each speaker. For optimum performance and freedom from crossover interactions, my philosophy is that any speaker rated at or above 50W programme material should be active. The benefits far outweigh the disadvantages. It does make auditioning a stand-alone amplifier rather difficult, but I can live with that.
+ + +![]() | + + + + + + + |
Elliott Sound Products | +Rectifiers, Selection & Usage |
While there are several pages on the ESP website that describe various rectifiers as used in power supplies, this page has been added to show the different rectifier topologies that may be found in any number of products. Each (except the 'voltage clamp') uses the same input and load current for ease of comparison, so it's easier to see the relationships in an 'ideal' case.
In reality, the transformer will add series resistance due to the winding resistance of the primary and secondary, but this has not been included because it makes the comparisons less clear. Nevertheless, it's always present, and the peak input current allows you to calculate the voltage loss for your application. A 'token' 1Ω resistor is used in series with all voltage sources to make the comparisons more sensible.
All waveforms shown assume 50Hz mains, a source impedance of 1Ω, and an AC voltage from the transformer of 50V peak (35.4V RMS). Where a transformer has a dual winding, each half of the winding has the same impedance (1Ω) and voltage. The output load resistor is scaled as needed for an output power of (close to) 20W. For example, if the DC voltage is 50V, the output resistance will be 125Ω, giving an average current of 400mA. The filter capacitance is adjusted to that value which produces an output ripple voltage of about 2V peak/peak. Some variations are inevitable, and especially so when you have to use values you can get rather than the values shown in the drawings.
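The capacitance needed for a given ripple voltage can be estimated with the usual rule of thumb C ≈ I / (ΔV × f), where f is the ripple frequency (100Hz for full-wave rectification of 50Hz mains). A sketch using the 400mA / 2V p-p figures above:

```python
def filter_cap_farads(load_current_a, ripple_v_pp, ripple_freq_hz):
    """Rule-of-thumb smoothing capacitance: C = I / (dV * f). Assumes the
    capacitor supplies the full load current for one ripple period (worst case)."""
    return load_current_a / (ripple_v_pp * ripple_freq_hz)

# 400 mA load, 2 V p-p ripple, 100 Hz ripple (full-wave rectified 50 Hz mains)
c = filter_cap_farads(0.4, 2.0, 100)
print(f"{c * 1e6:.0f} uF")  # 2000 uF
```

Halving the ripple frequency (half-wave rectification) doubles the capacitance needed for the same ripple, which is one more strike against the half-wave circuit described below.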
It's not especially easy to ensure that the output power and/or ripple are exactly as intended, so there will be some small variations seen in real life. However, these are less than the expected difference due to mains voltage which can vary by ±10%, and sometimes more. The purpose of providing a reasonably consistent load power is to ensure that comparisons are as easy as possible, but there will be inevitable conflicts so it's important that the reader understands the application.
While a simple table is provided for each rectifier type, these are idealised and only represent the performance with the source and load impedances shown. Diode voltage drops are based on an 'ideal' diode, which has the normal 0.65V forward voltage, but no appreciable internal resistance. Input current for any rectifier depends on the total source impedance, so the values shown in the examples will be different in reality. The actual current depends on the transformer, diodes and load current. Input/output relationships are also affected by component impedances, but those shown give a representative idea of what you can expect. All DC voltages and currents shown are average, and AC is RMS (except for half-wave which shows the average value of the AC).
Note that 1.414 is the square root of two ( √2 ), and assumes an undistorted sinewave. Reality will be different. 'PIV' is peak inverse voltage (the maximum blocking voltage for a diode).
Peak voltage = Vin(RMS) × 1.414
In the drawings, I have shown a voltage source rather than a transformer winding. This was done to keep the drawings consistent, and make everything as clear as possible. The voltage source and source resistance (Rs) are considered as a single part, and are maintained for all rectifier types. A transformer winding can simply be used in place of these two parts in a 'real' circuit.
If you are unsure about transformers, then please read the Transformers - Part 1 article (along with Part 2), and/or Power Supplies for more complete descriptions and detailed analysis. This article is intended only as a primer to the various rectifier types that are in common usage, and it is not intended to replace any of the other material already published on the ESP site. In all cases, the input voltage is deemed to be the generator voltage (ignoring the series resistance), and is determined by ...
The 'load power factor' figure is derived from the actual load power (in the load resistor) and the input VA (RMS voltage × RMS current). This is provided as a comparative figure so you can see the differences in the circuits as shown (the ideal power factor is unity (1), but this is never achieved with any simple rectifier circuit). In reality, the figure will vary depending on the actual source resistance/ impedance. This will also cause the voltages and currents to differ from the values given. As the source impedance is increased, peak current (in particular) is reduced, as is the output voltage.
The power factor is important because it tells us that if you need 100W DC output, a transformer of at least 180VA is required regardless of rectifier type, or it will be overloaded. While short-term overloads will not cause any problems, if sustained the transformer will overheat and fail. If one were foolish enough to use a half wave rectifier, the transformer will overheat anyway, due to the high DC component in the windings which will cause saturation, massive over-current, and guaranteed failure.
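This sizing rule is simple enough to sketch. The 0.55 load power factor below is an assumed round figure (the tables later in the article show values from roughly 0.44 to 0.65 depending on topology):

```python
def min_transformer_va(dc_output_w, load_power_factor):
    """Minimum transformer VA rating needed to deliver a given DC output power."""
    return dc_output_w / load_power_factor

# 100 W DC output with an assumed load power factor of 0.55
print(round(min_transformer_va(100, 0.55)))  # ~182 VA, matching the 'at least 180VA' guide
```

In practice a larger transformer than the bare minimum is always a good idea, as it runs cooler and regulates better.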
When any power supply using diodes feeds a filter capacitor, the diodes can only conduct when the peak AC voltage is greater than any voltage remaining across the capacitor. This means that the AC current waveform is distorted (no longer a sinewave), and the peak current has to be great enough to re-charge the filter capacitor and supply the load. Each explanation below indicates the peak current, and it's always much greater than the average. This causes relatively poor transformer utilisation, since current flows for much less than the full half-cycle period (10ms for 50Hz mains, 8.33ms for 60Hz). The diode conduction period is rarely more than 2-3ms, and generally far less.

As an example, if the peak diode current is 3A and the transformer plus diode series resistance is 1Ω, the peak voltage is reduced by 3V, plus the diode's forward voltage (0.75V for each conducting diode). This means that a transformer with a 50V peak output (35V RMS) cannot provide 50V DC output. For a bridge rectifier, the maximum DC level with 3A peak output is 45.5V, with the average being a bit less because of superimposed ripple.
Please be aware that the above note includes a number of simplifications, and the reality can be slightly different. It's outside the scope of this article to go into great details though, so I suggest that you also read the articles on power supply design and transformers. While the process of rectifying an AC voltage seems fairly straight-forward, it's actually quite complex when examined closely. For most power supply applications, a small inaccuracy is of no consequence. Normal mains voltage changes far more than any small error due to simplifications made, and few transformers provide their exact 'name plate' voltage because it depends on the current drawn.
Note that 'choke input' filters are included, but only 'in passing' as it were. These are a special case, and are rarely used today for 'linear' (i.e. mains frequency) power supplies. Also, be aware that the term 'linear' is really a misnomer when describing rectifiers. The current waveform is not linear (i.e. not sinusoidal), but consists of brief pulses of current when the input voltage exceeds the voltage across the smoothing capacitor. It stands to reason that current cannot flow when diodes are reverse-biased, and this is the situation for well over 70% of the input voltage waveform. Current typically flows for around 15% or less of each half-cycle. For example, the diodes may conduct for 1.5ms in each 10ms half-cycle (for 50Hz mains). Using a choke input filter changes this, but in a somewhat counter-intuitive way.
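The short conduction period can be estimated from simple geometry: the diodes start conducting when the input sinewave rises above the capacitor's drooped voltage, and stop at around the peak. The sketch below ignores the recharge dynamics (real conduction extends slightly past the peak), so treat the result as indicative only:

```python
import math

def conduction_time_ms(v_peak, v_cap_min, mains_hz=50):
    """Idealised diode conduction time per half-cycle: from the instant the
    input sinewave rises above the capacitor's minimum voltage until the peak.
    A simplification - real conduction extends a little past the peak."""
    start_angle = math.asin(v_cap_min / v_peak)  # radians into the half-cycle
    peak_angle = math.pi / 2
    return (peak_angle - start_angle) / (2 * math.pi * mains_hz) * 1000

# 50 V peak input, capacitor drooping to 48 V (2 V p-p ripple), 50 Hz mains
print(f"{conduction_time_ms(50, 48):.2f} ms")  # roughly 0.9 ms per half-cycle
```

With only 2V of ripple on a 50V supply, conduction occupies under 10% of each half-cycle, which is why the peak charging current is so much higher than the average.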
This is the simplest of all, and also the least desirable. It is not recommended for anything. There is no need to use a half wave rectifier given the low cost of suitable diodes, and because it introduces DC through the transformer winding(s), it's surprisingly easy to cause transformer saturation that can lead to the demise of the transformer due to over-temperature.
Figure 1 - Half Wave Rectifier
Current flows only when the transformer's secondary voltage is more positive than the remaining voltage across the filter cap (C1). Because the current is unidirectional, this creates DC through the secondary. With a toroidal transformer, you may only need a few 10s of milliamps to cause serious problems.
Output ripple is at the mains frequency. Required diode PIV is twice the peak input voltage, so 50V peak input requires 100V diodes (minimum). The input current is 4.75A peak, with an average value of 444mA DC. Average DC output is 44.5V DC with 19.7W in the 100Ω load. The voltage is lower than expected because the high peak current causes a significant voltage loss across the 1Ω source resistance. Capacitor ripple current is 4.3A peak (1.22A RMS).
I recommend that this rectifier is never used for anything that demands more than a (small) few milliamps of load current (and that is of dubious value to anyone). See Half-Wave Revisited to see some test results and oscilloscope waveforms.
Input Current: 445mA average (1.3A RMS, 4.75A peak)
Output Current: 445mA
Capacitor Current: 1.22A RMS
Load Power Factor: 0.435
I consider half-wave rectifiers to be an abomination, and as such I strongly suggest that you never use them if there's any other option available. It's rare that you can't use a bridge rectifier, with the only exception that comes to mind being valve (vacuum tube) power transformers that have a tap for the negative bias supply. Because only one tap (almost never a separate winding) is provided, there is no choice, as it's already been made when the transformer was designed. That doesn't mean I like it, and given the cost of the transformers it would be nice if a separate winding was provided to allow a 'proper' rectifier and filter.
The 'traditional' full wave rectifier was popular in the valve (vacuum tube) era, as it was easy to do with a single rectifier valve with a common cathode and two anodes. However, it requires more wire on the transformer, which has to be thinner than desirable so the wire can physically fit into the transformer winding window. While convenient, it is no match for a bridge rectifier which achieves the same result with a single transformer winding, which will typically have a winding resistance of less than half that of each winding shown.
Full wave rectifiers are common again in switchmode power supplies (SMPS), because the number of turns needed is small, and it's cheaper to add a few more turns to a high frequency transformer than to buy additional high speed diodes. It also lends itself to other rectification schemes, such as synchronous rectifiers (using MOSFETs) that have very low losses.
Figure 2 - Full Wave Rectifier
There is no net DC in the transformer windings. Each half of the winding has a significant DC component, but the two windings have opposite phases so the DC component is cancelled. The full wave rectifier is commonly found in pairs, and used with opposite diode and capacitor polarities to generate positive and negative supply voltages at the same time. This then becomes a full wave (dual supply) bridge rectifier and is shown further below.
Output ripple is at twice mains frequency, diode PIV is twice the peak input voltage. 50V peak input requires 100V diodes (minimum).
Note that C1 is almost half the value needed for a half wave rectifier, with peak and average currents all reduced substantially.
+ +++ + ++
+Input Current 2 × 230mA RMS (3A peak) + Output Current 460mA + Capacitor Current 938mA RMS + Load Power Factor 0.65 +
The bridge rectifier is full wave, and makes maximum use of the transformer winding. This is the most efficient rectifier in common use, and diode bridges are readily available with many different voltage and current ratings, suitable for most needs for general purpose power supplies. There is no net DC in the transformer winding, as current is delivered symmetrically, via two diodes in series for each polarity. This is the preferred rectifier for the vast majority of applications, as its performance is generally hard to beat with other topologies. The bridge is sometimes known as the Graetz circuit or Graetz bridge [ 2 ].
Bridge rectifiers are probably one of the most common of all, and can be made with discrete diodes or purchased as an encapsulated module. Voltage ratings range from around 50V up to 1kV or more, with continuous average current ratings from 150mA to 1,000A. Peak current ratings can be a great deal higher - even 1A diodes can handle a non-repetitive surge current of 30A or so (albeit for less than 10ms). For serious power supplies (power amplifiers, bench power supplies, etc.), a 400V, 35A bridge module is convenient, extremely robust and usually inexpensive. This is my recommendation for all such applications.
+ +
Figure 3 - Bridge Rectifier
While the presence of two diodes in series is a small nuisance, the voltage loss is generally less than 1V compared to a full wave (centre-tapped) circuit. In reality, this is more than compensated by the fact that the transformer winding is fully utilised for both positive and negative half-cycles, so there is usually a net increase of output voltage for a given output current with any given transformer.
Output ripple is at twice mains frequency, diode PIV is the peak input voltage. 50V peak input requires 50V diodes (minimum).

Peak input current is just over 3A (1.04A RMS) for an output current of 453mA (20.5W in the 100Ω load). Capacitor ripple current is 2.6A peak (938mA RMS).
Input Current       1.04A RMS (3A peak)
Output Current      454mA
Capacitor Current   953mA RMS
Load Power Factor   0.57
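As a rough cross-check of the figures above, the usual first-order approximations (DC output ≈ peak minus two diode drops, ripple ≈ I / (2fC)) can be sketched as follows. The capacitor value and per-diode drop are my assumptions (they are not stated in the text), and a real circuit sags a little further due to ripple and source impedance, which is why the text's measured 453mA is slightly below this estimate.

```python
# Hedged first-order estimates for a capacitor-input bridge rectifier,
# using the Figure 3 values from the text (50V peak input, 100 ohm load,
# 50Hz mains). The filter capacitance and diode drop are assumptions.
V_PEAK = 50.0       # peak AC input (V)
V_DIODE = 0.8       # assumed drop per conducting diode (V)
F_MAINS = 50.0      # mains frequency (Hz)
R_LOAD = 100.0      # load resistance (ohms)
C_FILTER = 4700e-6  # assumed filter capacitance (F)

v_dc = V_PEAK - 2 * V_DIODE  # two diodes conduct in series per half cycle
i_dc = v_dc / R_LOAD         # approximate DC load current
# Full wave: the capacitor is topped up every half cycle (2 x mains frequency)
v_ripple_pp = i_dc / (2 * F_MAINS * C_FILTER)
piv = V_PEAK                 # each diode only ever sees the peak input voltage

print(f"Vdc ~ {v_dc:.1f} V, Idc ~ {i_dc * 1000:.0f} mA")
print(f"ripple ~ {v_ripple_pp:.2f} V p-p, diode PIV >= {piv:.0f} V")
```

The same few lines can be re-used for any of the capacitor-input circuits in this article by changing the constants.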
This is another circuit that's hard to recommend. Despite the name, there is no net DC in the transformer winding, but input current is a lot higher than you may expect. Because the input is delivered via a capacitor, there can be no transformer DC. C1 passes positive peaks when the voltage is greater than that across C2, and recharges during negative peaks. The voltage rating for C1 is the same as the peak AC voltage (50V for the example shown). C2 must be rated for twice the peak input voltage, because it's across the full output (roughly 92V for the circuit shown). The input source and output share a common connection that may be useful. A full wave doubler uses the same number of parts but has lower ripple (for the same capacitor size), draws a lower peak current from the source, and is the preferred option for most applications.
Figure 4 - Half Wave Voltage Doubler
Output ripple is at the mains frequency, and diode PIV is twice the peak input voltage. 50V peak input requires 100V diodes (minimum). Beware of capacitor ripple current, which is around 3A peak (1A RMS) for an output of 91V DC (at 227mA output current).
The two capacitors do not have to be the same value. C1 can be smaller than C2, which does not affect the output ripple voltage. However, if C1 is too small the output voltage will be limited due to the capacitive reactance (impedance) of the capacitor.

A half wave doubler can be a useful addition to an off-line (powered directly from the mains) PSU, but these cannot and must not be used to power anything with user-accessible terminals. This type of supply is covered in the Small Power Supplies article, and will not be discussed further here.
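A minimal sketch of the doubler's no-load arithmetic follows; the 0.8V diode drop is an assumption, and the loaded output is lower (the text shows about 91V DC at 227mA).

```python
# Rough no-load output estimate for the half wave voltage doubler of
# Figure 4 (50V peak input). The 0.8V diode drop is an assumption.
V_PEAK = 50.0
V_DIODE = 0.8

v_c1 = V_PEAK - V_DIODE          # C1 charges toward the peak on negative half cycles
v_out = v_c1 + V_PEAK - V_DIODE  # C2 sees C1's voltage stacked on the positive peak
print(f"no-load output ~ {v_out:.1f} V DC")
print(f"C2 and diode PIV ratings >= {2 * V_PEAK:.0f} V")
```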
Input Current       1.04A RMS (3A peak) - also applies to C1
Output Current      227mA
Capacitor Current   953mA RMS
Load Power Factor   0.58
This is a useful circuit that's not uncommon with valve amplifiers, and it's a recommended rectifier provided you understand the limitations. There is no net DC in the transformer winding, but input current is again higher than you may have expected. It outperforms the half wave version in every way, but lacks a common connection between the source and output, which may be important in some circuits.
Figure 5 - Full Wave Voltage Doubler
Output ripple is at twice mains frequency, diode PIV is twice the peak input voltage. 50V peak input requires 100V diodes (minimum). Peak input current is just under 3A for an output current of 228mA (20.8W in the 400Ω load). This arrangement has the advantage that a half voltage output (about 46V DC) is available at the capacitor centre tap, but the ripple here is at mains frequency.
The centre tap 50Hz ripple voltage is 2V peak-peak with no load, and it increases (perhaps dramatically) if any current is drawn. Both caps require a working voltage of half the total output voltage (50V DC minimum for this example), because they are wired in series. Capacitor ripple and peak currents are the same as for the half wave doubler (711mA RMS, 2.8A peak). Input current is 3A peak (711mA RMS).
Input Current       1.06A RMS (3.05A peak)
Output Current      230mA
Capacitor Current   712mA RMS
Load Power Factor   0.56
If used for a dual supply (such as ±15V after regulation), be aware that the ripple is at mains frequency on both outputs. This means that the filter capacitors need to be larger than with a full wave, dual supply bridge rectifier (shown next). While it's not necessarily a problem, it is something you need to be aware of.
This arrangement is simply a pair of full wave rectifiers, with opposite polarities for the diodes and capacitors. Because it commonly uses a standard encapsulated bridge rectifier, it's generally known simply as a dual supply bridge rectifier. The performance of each polarity is (almost) independent of the other, but losses in the transformer apply to each output, even if one has no load. This is not a limitation in real world applications, and the outputs can be considered to be independent for all practical purposes.
Figure 6 - Full Wave, Dual Supply Bridge Rectifier
Output ripple is twice the mains frequency at each output, and diode PIV is the sum of the two windings (100V for the example shown).
Input Current       2 × 1.06A RMS (3.05A peak)
Output Current      2 × 460mA
Capacitor Current   953mA RMS
Load Power Factor   0.56
The next class of rectifier is the voltage multiplier. I've elected to draw the tripler circuit using the 'traditional' format, with the diodes and caps forming a triangular pattern. This was done so the circuit is immediately recognisable, as it's the most common way these circuits are drawn.
Multipliers are normally only used when particularly high voltages at very low (or almost no) current are needed. Although a voltage doubler is technically a voltage multiplier, it's generally not classified as one because it is much more commonly used for power applications (although low power doublers are also common in metering circuits). The most common use for voltage multipliers was the 'tripler' circuit that was standard in all CRT colour TV sets and monitors. The tripler generated the acceleration voltage for the final anode of the cathode ray tube.

Voltage multipliers are also used to generate the polarising voltages for electrostatic loudspeakers, Geiger-Müller or photo-multiplier tubes, and many other circuits that need high voltage at low current. It's common to drive a multiplier from a higher frequency than the mains. In CRT sets (PAL system) the frequency was 15,625Hz (the horizontal deflection frequency), but higher frequencies are not uncommon. This allows smaller capacitors to be used, but the diodes must then be fast or ultra-fast types.
Figure 7 - Voltage Tripler
Although a tripler is shown, any multiplication desired can be created by adding more diodes and capacitors. The voltage across C1 is the same as the peak input voltage, but there is double that voltage across each of the others. All diodes are subjected to twice the peak AC input voltage. The table of operating conditions is not relevant for voltage multipliers, but for the record, the circuit shown draws 1.9mA RMS for a rather miserly 147µA of output current. This gets worse as more stages are added. While more current can be drawn, the capacitors need to be larger, or you can accept a lower voltage.
To extend the multiplier for more output voltage, simply duplicate D2, C3 and D3 (one 'stage'). Each additional stage boosts the output voltage by the peak-peak voltage from the generator; in the case shown that adds 100V with every new section. Extending to four sections will give an output approaching 250V, but the output starts to sag badly with a lot of stages. For example, five stages should give 350V, but the voltage will be only around 320V if the 1MΩ load is maintained.
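The steep increase in sag with stage count can be illustrated with a commonly quoted approximation for an equal-capacitor Cockcroft-Walton ladder. The current, frequency and capacitance below are illustrative assumptions, not values taken from Figure 7.

```python
# Commonly quoted approximation for the load-induced voltage drop of an
# n-stage Cockcroft-Walton multiplier with equal capacitors C, driven at
# frequency f with load current I:
#   dV ~ I/(f*C) * (2n^3/3 + n^2/2 - n/6)
# The values used below are illustrative assumptions only.
def cw_voltage_drop(i_load, f, c, n_stages):
    n = n_stages
    return i_load / (f * c) * (2 * n**3 / 3 + n**2 / 2 - n / 6)

# Sag grows roughly with the cube of the stage count:
for n in (2, 3, 5):
    dv = cw_voltage_drop(150e-6, 50.0, 10e-6, n)
    print(f"{n} stages: ~{dv:.1f} V drop")
```

The cubic term is why multipliers with many stages sag so badly, and why they are driven at high frequency wherever possible.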
You will regularly see this circuit referred to as the 'Cockcroft-Walton' voltage multiplier, named after the gentlemen who invented it in 1932 [ 2 ]. It is not an efficient way to get a high voltage, but it's cheap and works extremely well if you don't need much current. If you do need appreciable current, then this is not the circuit to use.

With the values shown (including the 1MΩ load), output ripple voltage is less than 1V peak-peak.

There are many variations on the basic multiplier (including a full wave version), but these will not be covered further here.
The final rectifier is included because it's so common, although most people will never get to play with one (nor is that recommended, because these supplies are deadly and have killed quite a few people). The voltage clamp 'rectifier' is used in microwave ovens to supply the cathode voltage for the magnetron. The anode is earthed/ grounded, so the supply voltage is negative. Unlike the other circuits shown here, the voltage and current are set to those that may be used in a 1kW microwave oven.
Figure 8 - Voltage Clamp
The clamp can't really be considered as a 'normal' power supply circuit because the output is pulsating DC, with the peak voltage (close to) equal to the full peak-to-peak voltage from the MOT (microwave oven transformer), with the 'positive' peaks clamped at zero volts (± a few diode voltage drops). The average DC voltage and current outputs are shown, and it's worth noting that the input VA rating gives an output power factor (as shown for the other examples) of 0.56 - much as for any other rectifier. That means that the transformer has to supply 1.96kVA for an output power of 1.12kW.
The magnetron is not modelled in the above, and the load is just a resistor. It's not a perfect analogy, but is sufficient to demonstrate the workings of this arrangement. The magnetron actually gets a voltage that varies from zero to -6.9kV at the mains frequency. It's crude, but in reality it works very well, as countless microwave ovens use it. Most modern versions use a switchmode supply ('inverter') instead, because it's a lot lighter. Reliability of the 'old' method is very high, something that isn't necessarily the case with an inverter supply.
Multi-phase rectification is actually more common than you might think, as it's standard in most car alternators. While this isn't something that normally interests most (audio at least) hobbyist constructors, it's still important because it's such a widely used technique. The three windings generate alternating voltages that are displaced by a 120° phase angle, and these are rectified using a modified bridge. Anyone who's ever dismantled a car alternator will have seen the six press-fit diodes buried in the aluminium casting at the back of the alternator, and now you can see exactly how they are wired. Note that in some designs there may be a separate set of diodes for the field winding, so you may see anything from six to twelve diodes in all.

The three windings are shown wired in Delta (Δ), but the 'Star' (aka 'Y' or 'wye') connection is also shown. The Star connection has a neutral point, which is standard for electricity distribution, but it doesn't have to be utilised (it's shown as 'N.C.', meaning no connection). A balanced (equal current in each phase) 3-phase system works just as well with or without the neutral. A Star connection requires fewer turns than Delta as the required voltage is lower, but overall efficiency is generally similar. When Star connected, the voltage between source outputs is √3 × the RMS voltage referred to the neutral. For the circuits shown, each Star generator outputs 6.53V RMS, and the Delta generators output 11.31V RMS. The peak voltages are shown on the drawing.

A Star connected alternator (or transformer) has output voltages that are a combination of the voltages from any two adjacent sources. The voltages between any two outputs have to be determined by vectors for Star connections, because of the 120° phase difference (which is where the '√3' term comes from). In Delta, the voltage between any two points is that of each individual generator, since they form a series 'string' of sources, joined to form the complete Delta formation.
Figure 9 - Three-Phase Bridge
The voltages and load resistor have been changed in this circuit to match those normally found in a car's electrical system, but the load is arbitrarily set at 5Ω to provide a sensible current. A car's electrical system can present a much greater equivalent load, with a battery charging current of 20A or more. Note that only the rectifier and source windings are shown; the field (aka exciter) winding, slip rings and regulator are not a direct part of the rectifier itself.

For automotive use, the operating frequency depends on engine speed, and may be as low as 8Hz (500 RPM), rising to 80Hz (5,000 RPM) or more depending on how fast the engine is revving, and the relative sizes of the engine and alternator pulleys. These are not relevant to the circuit as shown, but the alternator must obviously be designed to handle the speed range encountered for any given setup within the vehicle's systems.

Although there is no capacitor (or battery) at the output, the ripple is only 2.1V peak-peak (about 640mV RMS) because of the overlap between the three phases and the full wave rectification. In large systems, the relatively low ripple voltage means that filtering may not be required at all. The ripple frequency is six times the input frequency, so with 50Hz (for example) the ripple is at 300Hz. In some cases there may be more than three phases - six is fairly common, but 12-phase systems also exist. A 6-phase system can be produced by combining Star and Delta windings with 12 diodes. Output ripple is then 600Hz with a 50Hz supply, and an equivalent system to that shown above has only 480mV peak-peak (156mV RMS) ripple with no filtering.
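The three-phase numbers quoted above can be cross-checked with a few lines of arithmetic: the √3 relationship between Star phase and line voltages, and the fact that a six-pulse bridge output's trough sits at cos 30° of the peak.

```python
# Cross-checking the three-phase figures from the text: a 6.53V RMS Star
# phase voltage gives a line voltage of sqrt(3) times that, and a
# six-pulse bridge output whose ripple trough sits at cos(30 deg) of the
# peak (diodes hand over every 60 degrees).
from math import sqrt, cos, radians

v_phase_rms = 6.53
v_line_rms = sqrt(3) * v_phase_rms  # ~11.31V RMS, as stated in the text
v_line_pk = sqrt(2) * v_line_rms    # ~16V peak
ripple_pp = v_line_pk * (1 - cos(radians(30)))

print(f"line: {v_line_rms:.2f} V RMS, {v_line_pk:.1f} V peak")
print(f"unfiltered ripple ~ {ripple_pp:.2f} V p-p at 6x mains frequency")
```

The result matches the 2.1V peak-peak quoted for Figure 9 without any filter capacitor at all.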
Multi-phase rectifiers are also common in industrial systems, powering everything from variable speed motor drives, high power radio and TV transmitters, to electric trams and trains. So, while not a technique that many audio people will encounter, it would have been remiss of me not to include multi-phase rectification in an article that describes rectifiers in general.

While these 3-phase systems are common for power distribution and industrial applications worldwide, this article is not about 3-phase systems, other than the info shown above. If you want to know more, there are countless websites that describe 3-phase systems in detail, and this is not the place to expand on a topic that's not really relevant to the purpose of this article.
+ + +The reason for the re-visit is that I didn't want to break the 'flow' of the article by introducing specifics into this one rectifier type. Countless on-line articles say it's a poor choice, but only a few I found even mention transformer saturation (and some so-called 'engineers' even deny that it can happen!). This is the single most important limitation, because it's easy to destroy a perfectly good transformer by trying to simplify a design to its minimum. I tested a 200VA E-I core transformer, with 2 × 28V secondary windings. The secondary current rating is a bit over 3.5A RMS at full power. This is probably not your typical transformer that would be used with a half wave rectifier, but it has plenty of scope for testing. The no-load voltage from the transformer was 62V when I ran these tests.
Figure 10 - Transformer Magnetising Current
With no load, the transformer draws a magnetising current of 66mA RMS. This is shown above, and it's pretty much what we expect from a transformer of this size and construction. I used a 270Ω resistor as the load, and that increased the total current to 112mA. Note that the measurement shown as 'V RMS' is actually 'A RMS' because I'm using a 1A = 1V current monitor.
Then I added a diode in series with the 270Ω resistor to create a half-wave rectifier. Average DC output current with this arrangement is about 102mA. The next trace shows the transformer primary current with this rather small DC load (after all, it's only 100mA average from a 3.5A transformer).
Figure 11 - Saturated Waveform Due To Rectifier (270Ω Load)
The input current has doubled, rising to 112mA RMS, even though the RMS output current is only 0.7 of that drawn by the 270Ω resistor alone. It takes little imagination to work out that increasing the DC a bit further will make matters far worse, and indeed, this is exactly what happens in practice. If the load resistance is halved (135Ω), the primary current increases further, to a rather scary 172mA.
Figure 12 - Saturated Waveform Due To Rectifier (135Ω Load)
The final waveform is shown above, and transformer saturation is quite obvious. The asymmetrical waveform is a dead giveaway that something is wrong, and it's not something that you ever want to see (this is definitely no exception). So, using a half wave rectifier is not only very inefficient, but the DC component causes transformer saturation at surprisingly low current. The situation is actually not as bad with small (less than 10VA) transformers, because they already have very poor efficiency (some barely increase their primary current if the output is shorted!).

Had this same test been done with a toroidal cored transformer, the results would have been a great deal worse, because they are utterly intolerant of asymmetrical loads, and have a much sharper saturation limit. It's worth noting that most electrical regulations now prohibit half wave rectifiers, because they produce even harmonics of the AC waveform and introduce asymmetry, which creates a net DC component. I'm not the first to report on this, but unless you know what to search for, you probably won't see any reference to saturation in most explanations.

The effect is real, and easily demonstrated (as seen above), but only spoken about in hushed voices at the end of long, dark corridors with 'Beware of Lions' signs posted at regular intervals. Ok, that may be a bit far fetched, but you get the point.
This is a 'late entry' to this article, and was added because these filters have regained popularity - in switchmode supplies. They were once used in some 'high-end' valve amps, but generally fell from favour due to the size and cost of the 'choke' - an inductor. The main advantage is that instead of the filter capacitor charging only in short bursts, the charge time is only slightly less than a full half-cycle for each polarity. The 'off time' (when no diodes are conducting) can be as low as a few hundred microseconds, even with 50Hz mains. Unlike any of the other rectifier/ filter combinations discussed, the diodes must be high-speed types.
The output voltage is lower than with a capacitor-input filter, but that in itself isn't a problem, as it only requires that the transformer has a higher voltage secondary. One interesting fact is that the primary current is close to being a squarewave at high current. It's not a 'true' squarewave, because there will be some residual of the AC input superimposed. This is likely to be unexpected by most people. It's probable that few hobbyists (and likely few professionals as well) have noticed this, as not many folk use a current monitor, such as those described in Project 139 and Project 139A.
Figure 13 - Example Choke Input Filter
An example is shown above, using the same AC input, rectifier, load and filter cap as the Figure 3 circuit. I ignored the resistance of the inductor for this example, but it's real, and cannot be ignored in a real circuit. The resistance value depends on the application, and for the Figure 13 example it needs to be less than 1Ω. The resonant frequency of the inductor and capacitor must be (much) lower than the lowest frequency of interest. This is determined by the formula ...
f = 1 / ( 2π × √( L × C ))
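The formula can be checked directly; the 100mH / 2,000µF pairing discussed later in this section lands close to the quoted 11.2Hz.

```python
# Resonant frequency of an LC filter, f = 1 / (2*pi*sqrt(L*C)),
# checked against the 100mH / 2,000uF example used in the text.
from math import pi, sqrt

def lc_resonance_hz(l_henry, c_farad):
    return 1 / (2 * pi * sqrt(l_henry * c_farad))

print(f"{lc_resonance_hz(0.1, 2000e-6):.1f} Hz")  # close to the quoted 11.2Hz
```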
All inductor input filters have a common problem, in that there is always resonance, and when power is applied the voltage will overshoot the expected steady-state value by an amount that depends on the load current. In general, you should use the largest inductor you can, but this poses a serious problem. High inductance means many turns of wire, and that increases the resistance. It also means that the iron core will be subjected to a very high unidirectional current - it's DC. The flux must be kept below the saturation point of the core, so it requires a substantial air-gap. These factors combine to mean a very large, heavy and expensive component.

The alternative is to use a 'swinging' choke. These are deliberately designed so they will saturate, so the inductance is high at low current, and reduces as more current is drawn. These are a 'special' case, and the design process is not straightforward. It's quite likely that most were determined empirically when they were more common, as this would have been fairly easy to do. There's some more info available on the Valve (Vacuum Tube) Amplifier Design Considerations - Part 2 (section 6) page.

I don't intend to go into great detail with these, because they are irrelevant for solid state amps, and somewhat impractical for valve designs. When used with switchmode supplies the operation is somewhat different, as the inductor/ capacitor is set up to act as a PWM filter. The DC output is dependent on the mark-space ratio of the PWM waveform, in exactly the same way as for a PWM (Class-D) amplifier. The only difference is that the output is DC, not AC (audio).

Changing the value of the inductor (within reason) doesn't change the DC voltage, but there is a resonant interaction with the filter capacitor. For example, you could use a 100mH inductor with a 2,000µF capacitor, which has a resonant frequency of 11.2Hz. Any load that varies will cause the resonant circuit to 'ring' at the resonant frequency, causing large output voltage variations. Choke input filters are 'sub-optimal' for any application where the current changes quickly. They do have much better steady-state regulation than the more common capacitor input filter, but that disappears when you have a rapidly changing load current.

As a result, it's not possible to recommend using choke input filters with mains frequencies for low voltage supplies, because the requirements are in serious conflict. When carefully designed, and with a reasonably constant load, they are more efficient than a 'traditional' capacitor input filter, and transformer utilisation (output power vs. VA rating) is better. A capacitor input filter means that for a 100W nominal output, the AC input will be around 200VA, but a choke input filter will reduce that to about 140VA. However, the saving on the transformer is more than offset by the cost of the inductor. Choke input filters also require a bleeder resistor to maintain regulation if the load draws little or no current. This can be a real surprise when you leave it out and measure the full AC voltage times 1.414 unloaded.

A 35V transformer will provide 45V DC with a capacitor input filter, and around 25V DC (VAC × 0.8) with a choke input filter under load. Without the bleeder, you'll get 42V DC with no load, because the inductor isn't doing anything - without current, it's just a large (and heavy) resistor. You can estimate the minimum value of the inductor by dividing the DC voltage by the bleeder current in milliamps. If you expect 25V with a 100Ω bleeder (250mA and 6.25W) the minimum inductance is 100mH.
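The rule of thumb above (DC volts divided by bleeder current in milliamps) is, not coincidentally, close to the classic critical-inductance criterion for a full wave, 50Hz choke input filter, L ≥ R / (6πf). A quick sketch of both, using the 25V / 100Ω example:

```python
# Comparing the text's rule of thumb (DC volts divided by bleeder current
# in mA gives minimum inductance in henries) with the classic critical
# inductance criterion for a full wave, 50Hz choke input filter.
from math import pi

def l_min_rule_of_thumb(v_dc, i_ma):
    return v_dc / i_ma  # henries, per the text's rule

def l_crit(r_load, f_mains=50.0):
    return r_load / (6 * pi * f_mains)  # L >= R/(6*pi*f)

# 25V across a 100 ohm bleeder (250mA):
print(f"rule of thumb:       {l_min_rule_of_thumb(25.0, 250.0) * 1000:.0f} mH")
print(f"critical inductance: {l_crit(100.0) * 1000:.0f} mH")
```

The two give essentially the same answer (100mH vs. about 106mH), which is why the simple rule works.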
Figure 14 - Current Waveforms For Fig.13 Choke Input Filter
It's instructive to look at the current waveform in particular. With the 300mA load (100Ω load resistor), the input current is 377mA, and the capacitor current is reduced to 529mA. Compare that with the values shown for Figure 3, where the capacitor current is more than double the DC current. The AC current waveform is important, and it's easily seen that the transition from positive to negative (and vice versa) is very fast, so high-speed diodes are essential.
This is a very short overview of the rectifying element itself - the diode. Over the years there have been many advances in diode manufacture. The earliest diodes were used in 'crystal sets' - personal radio receivers, with the 'diode' being a crystal of Galena (lead sulphide), as well as several other substitutes. These could only work at very low voltage and current, and could not be used for power rectification.

Mercury arc rectifiers (1902) were the first high voltage, high current (>500A was common) power rectifiers, using a mercury cathode and multiple anodes. These were used in the early days of electric trains and trams, typically providing 1,500V DC at hundreds of amps. They were also common in industrial applications. They were not suitable for low voltages.

Valve (vacuum tube) rectifiers started with the Fleming valve (1904), and were the only satisfactory medium/ high voltage, low current rectifier suited to consumer goods. They remained the main type of rectifier in radios, TV sets and other products until the 1960s, when silicon diodes became readily available in both high voltage and high current versions.

One of the first rectifiers that was usable at low voltage was based on copper oxide (1927). These have a very low reverse voltage (~2.5V max) and a fairly low forward voltage, but are not very efficient. Even medium voltages required a stack of copper oxide diodes, along with a substantial heatsink for each junction.

Selenium rectifiers came along in 1933, and offered many advantages over copper oxide. They are still fairly low voltage (20-25V reverse voltage) and not very efficient, with a 1V forward voltage drop. These were also used in large stacks (again with substantial heatsink tabs at each junction) for battery chargers and other low voltage, high current applications.
Germanium diodes have a low forward voltage, but were nearly all of 'point-contact' construction, and were therefore suited to low current, low forward voltage applications. They were used extensively as detectors in early transistor AM radios (which also used germanium transistors). They are still available, but supply isn't very reliable. Germanium diodes are also useful in metering amplifiers, where the low forward voltage helps to ensure high linearity.

Silicon diodes make up the bulk of all rectifiers used today. They include Schottky diodes (very high speed, and a relatively low forward voltage, but limited peak reverse voltage), silicon carbide (SiC), as well as many other variations (most are not relevant to the topic of power rectification). Modern silicon diodes are available in almost any voltage/ current configuration you are likely to need, from a small DC power supply up to providing DC for electric trains and other high power DC requirements. Getting useful historical info is very difficult, but by the 1960s, few commercial products used anything else - including most valve equipment. (I built my very first guitar amplifier in the 1960s - it used valves, but had silicon diode rectifiers.)

High-speed and/ or 'soft-recovery' diodes can be used with mains frequency rectification, and are common in switchmode power supplies. They are also necessary if you use a choke-input filter, even at 50/ 60Hz. These diodes are characterised by their rapid transition from 'conducting' to 'not-conducting', which minimises switching losses in circuitry that has rapid transitions (switchmode power supplies for example). While they aren't necessary for 'normal' rectification with a 50/ 60Hz supply, they do no harm and may reduce conducted emissions. They are more expensive than 'ordinary' diodes with the same voltage and current specifications.

Entire articles have been written on this topic, so a search will find more if you are patient.
This article is intended as an introduction to the various rectifier types in common use. The exception is the half wave version, which as already noted is a very poor choice. While many authors will point out that half wave rectifiers are a poor choice due to the high ripple and the need for a much larger than normal capacitor, few seem to have noticed that the DC component in the transformer will cause saturation, greatly increased magnetising current and transformer overheating.

The other rectifiers shown are all fine to use for anything you need, and the choice is simply based on the transformer voltage and the requirements of your circuit. Half wave doublers aren't a good choice though, as the ripple voltage is at the mains frequency, rather than double the mains frequency as with most of the others described.

As already noted, this is not a full description of power supply design techniques, which are described elsewhere on the ESP website. The information here is to let you see the options and make comparisons between the topologies. To assist you in this, each supply is set up to develop the same power in the load resistor, and you can see the input and capacitor current requirements for each example. In use, these will usually be different from the values shown, not because of any error, but because the source impedance can have a large influence on the peak current. This is especially true for supplies that have a larger than normal transformer and filter caps. Average and/ or RMS values should normally scale fairly linearly as the supply is made bigger or smaller than the examples.

There's only a small amount of information provided for diodes, primarily from a historical perspective. Most of the time, you will either use a large encapsulated bridge rectifier or other diodes to suit the supply voltage and current. Capacitors have to be selected to ensure they can withstand the ripple current, but if the supply is for a Class-AB power amplifier, a brief excursion beyond the cap's ratings will cause no harm. The ripple current rating becomes important with Class-A amplifiers and/ or bench power supplies that may pull maximum rated current for prolonged periods.

A little more space than expected was taken up by the evil half wave rectifier, and by multi-phase rectifiers with descriptions of 3-phase connections, but (I hope) only enough to explain how a multi-phase rectifier is configured.
There's very little in the reference section, because most of the circuits are so well known that it's not possible to provide attribution to the original designers. It's probable that most of the circuits would have been developed independently by several people at or near the same time. The two exceptions are the bridge rectifier (which has at least two claimants for the invention) and the voltage multiplier. These are the only ones (that I could find) where something is known of the developer(s).

1. Diode Bridge - Wikipedia
2. Cockcroft-Walton Voltage Generator - Wikipedia

Some of the info on diodes came from Wikipedia, but a lot of it is scattered, and it's not possible to include all the sites I looked at while trying to get information.
Elliott Sound Products | Regulators Part II
Apart from their use within equipment (which is the main topic here), regulated supplies are very handy pieces of test gear. Ideally, a test supply will have a voltage range sufficient to handle everything from logic circuits up to power amplifiers, preamps, and any other electronic circuits that are either faulty, or have just been built. The inclusion of current limiting is especially handy, as you can set the limit low enough that it won't cause any damage (preferably less than 100mA). By increasing the voltage slowly, any fault will cause the current to rise very quickly after you reach a voltage that causes the fault to manifest itself. A very common requirement for power supplies is for battery charging, for lead-acid, nickel-cadmium, lithium or metal hydride cells and batteries. I suggest that you also read Bench Supplies - Buy Or Build?, as there are some relevant points made, along with some more circuits to consider.
A good supply is ideal for charging batteries, as you can set the maximum voltage and current independently. If the battery (or cell) is fully discharged, you limit the charge current to a safe value, so the supply's output voltage will fall, rising as the battery charges. Once charged to a reasonable level, the voltage will remain stable and the current will fall as the battery approaches full charge. For example, to charge a Li-Ion cell, you'd set the open-circuit voltage to 4.2V, and the current to perhaps 1/10 C (i.e. one-tenth of the cell's capacity, say 250mA for a 2,500mAh cell).
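As a worked example of the charging arithmetic, this short sketch (the helper name is mine, not from the article) computes the supply settings for the Li-Ion case described above:

```python
# Hypothetical helper: bench-supply settings for charging a Li-Ion cell,
# per the text: 4.2V open-circuit voltage, current limit of C/10.
def charger_settings(capacity_mah, c_rate=0.1, cell_voltage=4.2):
    """Return (voltage_setting, current_limit_ma) for a CV/CC charge."""
    return cell_voltage, capacity_mah * c_rate

volts, milliamps = charger_settings(2500)  # 2,500mAh cell
# volts = 4.2, milliamps = 250.0 -- matching the figures in the text
```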
If testing audio equipment, you can verify that the circuit draws an appropriate amount of current (this depends on the circuitry), and doesn't misbehave when a suitable operating voltage is reached. Output voltages (for dual supply circuits) should all be close to zero volts, and if you have at least a couple of amps available, even low-volume tests can be done with power amps connected to a dummy load or a speaker. You can also test power supplies! A 15V regulator should show low output until the input reaches around 17V, and after that the output should not increase beyond 15V. If it does, you know you've made a mistake before it has the opportunity to damage the equipment with which it will be used.
I have a number of different power supplies, and one or more get used almost every time I test a new project or circuits shown in articles. They range from fixed ±12V switchmode (in series for 24V), a fixed 5V switchmode supply, a variable 0 to ±25V supply with current limiting, and a Variac adjustable supply that can provide isolated AC up to 50V, and unregulated DC up to ±25V. The one that gets used depends on what I'm testing. When all else fails, I have another (external Variac controlled) supply that can give up to ±90V at 10A or more. Nothing gets attached to that until it's been verified as being fully functional with one of the others!
Most people's preference for high current is a switchmode supply, but they come with limited voltage ranges (the most common are 5, 12, 24 and 48V). Some have a trimpot to let you control the output voltage over a limited range, others don't. If low noise is a requirement, then you can use a switching supply followed by a linear regulator, and the circuits shown in this article can all be provided with a DC input from a suitable SMPS. Whether you can get the input voltage you need is another matter altogether!
Power supply units (aka PSUs) are everywhere, from large and imposing laboratory units down to 'plug packs' (aka 'wall warts'). They can be regulated or unregulated, but most small switchmode types are regulated, while older (transformer-based) supplies generally were not. When metering is included (with or without a connected computer), these are also known as SMUs - source measure units. For some general ideas for bench power supplies, see Bench Power Supplies - Buy Or Build? Note that it's an article, and the circuits are not part of a construction project.
Most of the time, when anyone mentions voltage regulators we think of IC based solutions. These have been with us for a long time, and they are perfect for most preamps and other relatively low-voltage (5-15V), low-current (less than 1-2A) applications. However, there's also a need for regulators that can provide higher current, higher voltage, or a combination of both. An example is Project 221, which is intended to allow you to run a low-power tweeter amplifier from the main supply of a bigger amplifier.
The circuitry used in the project is deliberately very simple, and it has minimal protection because its output isn't exposed to the outside world (an inherently hostile place for electronics). There's often a need for a regulator that can supply voltages in the range of 50 to 100V (sometimes more), and at relatively high currents. Even with a linear regulator, there's no set limit to the current you can get, but ultimately it comes down to cost. You might have a suitable transformer and other parts, and be understandably reluctant to buy (or try to build) a switchmode supply that can deliver (say) 80V at 10A or more. This is a serious undertaking, but a fairly simple linear regulator may be possible from parts you already have.
This article is based on linear regulators (no switchmode designs), which have the distinct advantage of being (electrically) quiet, something that only very heavily engineered switching regulators can achieve. However, all linear regulators are inefficient, and dissipate significant power at high output current. I'm only going to show series regulators (as opposed to shunt types), which use a transistor in series with the incoming supply and the output. I'll also show only positive regulators, as negative types (if required) simply use PNP in place of NPN transistors (or negative versions of IC regulators where applicable), and input voltages, diodes and polarised capacitors are reversed.
The basic regulator will be intended to provide a nominal 24V output, at up to 5A or so for most of the design ideas shown, but a few are variable. Output current can be increased by using either higher current series-pass transistors, or using two or more in parallel. For the purpose of the ideas described, the series-pass transistor (TIP35) is assumed to have a gain (hFE) of 45, as that's around the figure you'll normally get (the datasheet says it can range from 15 to 75 with 15A collector current, and the simulator model assumes hFE to be 55). I've also assumed the driver transistor (mostly a BD139) to have an hFE of 75. In reality, these figures will vary, but if you design for the 'worst-case' you may end up with a design that needs too much current. If you design for the 'best case', the design may fail to provide the required current. The TIP35/36 devices are rated for 125W at 25°C.
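The gain figures above translate directly into a base-current budget. A minimal sketch, using the article's assumed hFE values (the function name is mine):

```python
# Base current at each stage: I_B = I_C / hFE, using the assumed gains
# from the text (TIP35 hFE = 45, BD139 driver hFE = 75).
def base_current(collector_current, hfe):
    return collector_current / hfe

pass_base = base_current(5.0, 45)           # ~111mA into the TIP35 base at 5A out
driver_base = base_current(pass_base, 75)   # ~1.5mA into the BD139 base
```

This is why a Darlington (driver plus pass transistor) is used in most of the circuits that follow: 1.5mA is easy to provide from a zener-stabilised reference, but 111mA is not.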
For the examples that follow, a loaded input voltage of 30V is assumed (which will typically rise to ~35V with no load), and the output voltage is nominally 24V. It will be lower with simple circuits because they have no feedback to correct the output voltage. Output current can range from 1A up to 10A, with the intention of keeping the dissipation in the series pass transistor below 60W if possible.
With a few changes, any of the circuits shown can be driven from an input voltage up to 100V (using a TIP35C), and provide an output voltage of 5V to 90V. Output current depends on the number of output transistors used and the required current. Most of the basic circuits shown are rated for up to 5A with a nominal 24V output, but that's an arbitrary limit I set to make the circuits comparable to each other.
You won't hear many people saying this, but in most applications, regulation isn't necessary. We use regulated supplies for preamps because they provide a nice, low-ripple supply that will never subject the opamps to any voltage above the maximum allowable. However, it doesn't matter if the voltage is ±12V, ±15V or slowly varying between the two! The opamps for an audio circuit really don't care about the actual voltage, nor if it's different between positive and negative. We expect them to be the same, but it doesn't matter. In some cases (particularly for power supplies), the opamp supply voltages may be radically different. The same applies to many other supplies, other than those being used for precision test and measurement circuits.
Accordingly, it's likely that even the simple circuits described will be more than satisfactory for many applications, where you need fairly high voltage and more current than you can get from a 3-terminal regulator IC. Current limiting can be useful, but it's not always essential, it makes the design more complex, and it's more likely to misbehave under some conditions. Sometimes, all you need is an overcurrent trip (an 'electronic fuse') which is far less stressful to implement.
Because transistor gain is always a bit of a lottery, I ran some gain tests on a number of TIP35C transistors. With a collector current of 216mA (averaged), the gain was 36. Increasing the current to 400mA, the average gain was 42, rising to 46 at just under 1A. The lowest gain measured was 25 at 150mA, and the highest was 55 at 1.1A. That is in keeping with my expectations, so the simulated circuits shown below will work as shown. Of course, in any batch of transistors there can be 'outliers' that have higher or lower gain than anticipated (one had a gain of only 30 with 20mA base current), and a final design should account for that.
In all examples shown, low value resistors (< 1Ω) are wirewound types, typically 5W ceramic types. All low values are shown in mΩ, so for example, 100mΩ is 0.1Ω. In most circuits, you'll see a reference to the 'series-pass' transistor. This provides the output current, and (for feedback regulators) its base current is controlled continuously to ensure that the selected voltage is delivered, regardless of output current (up to the maximum allowable).
In any regulator (voltage or current), a stable reference voltage is required. It doesn't matter if the output is a fixed or variable voltage, a reference is still necessary. For most simple supplies, a zener diode is the easiest and cheapest, but it is not the most accurate. A zener diode's voltage is dependent on its temperature, with the exception of 5.6V zeners. There are two thermal effects that cancel each other with a 5.6V zener diode, but this doesn't work with other voltages. If you need high stability, zener voltages between 5.1V and 6.8V are pretty good, but this degree of accuracy isn't always needed. See AN008 - How to Use Zener Diodes in the ESP application notes section for more detailed analysis. IC regulators (including adjustable reference 'diodes') use a bandgap reference, typically 1.25V or 2.5V.
The supplies shown use a zener diode, and its current will vary from a maximum at no load to a minimum at full load, because the series-pass transistor(s) need base current from the zener stabilised reference. Sometimes, you may need to use (for example) a pair of 12V zeners in series, rather than a single 24V zener (the same applies to other zener voltages as well). Zener diodes should always be operated with 10% to 50% of rated current (4mA to 20mA maximum for 24V, 1W zeners). Operating at more than 50% of current rating causes zeners to run hot, and they're difficult to cool effectively.
This is easily overlooked, especially when it all appears to be so straightforward. It's a simple job to work out the maximum current for any zener diode, knowing the maximum dissipation and the voltage. For example, a 12V, 1W zener can handle a maximum current of 83.3mA ...
IZ = PZ / VZ
Using this, you can determine that a 12V, 1W zener should carry between 8mA (10%, at full load) and 40mA (50%, at no load). The zener current is at its maximum with no load because no current is drawn by the series-pass transistor (includes Darlington and/ or paralleled transistors). When current is drawn by the load, the series-pass transistor's base current increases, leaving less current for the zener diode. If the current falls below 5%, the regulation may be adversely affected.
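The formula and the 10% to 50% guideline can be combined in a few lines (the helper name is an invention for illustration):

```python
# I_Z(max) = P_Z / V_Z, plus the recommended 10%-50% operating window.
def zener_window(power_w, v_zener):
    i_max = power_w / v_zener
    return i_max, 0.1 * i_max, 0.5 * i_max

i_max, i_low, i_high = zener_window(1.0, 12.0)  # 12V, 1W zener
# i_max ~ 83.3mA; recommended window ~ 8.3mA to 41.7mA
```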
If you don't provide at least 5% of the rated zener current, the voltage may be lower than expected. Most zener diode datasheets state the test current, which is usually between 5% and 20% of the maximum. Likewise, many datasheets also state that the maximum current is about 10% less than the figure given by the formula shown above. The test current is usually stated, and that's usually a good value to aim for. As noted though, the current varies, so you have to find a 'happy medium' (ideally between 10% and 50% of the maximum). This can be extended to 5% to 50% if you can't manage to keep the current above the 10% value without exceeding the maximum. Meanwhile, you have to allow enough current to drive the series-pass transistor(s).
While it is possible to operate a zener at its maximum power rating, it's definitely not recommended. Even at 50%, the diode will run fairly hot, as the only heatsink it has access to is the copper track of a PCB, or other component leads when a PCB isn't used. My test has always been to discover if I can keep holding a component without shouting "rudeword" and dropping it or letting go. This applies to pretty much everything with the exception of ceramic wirewound resistors. Even then, excess heat is likely to cause damage to PCB materials or other parts nearby (especially electrolytic capacitors). It's not uncommon to see burnt patches on a PCB beneath wirewound resistors, and sometimes the solder pads and/ or tracks will de-laminate (separate from the fibreglass).
Rather than a zener diode, you can also use a precision voltage reference, such as the TL431. These can be used with a pair of resistors or a resistor and a trimpot to get a very accurate and stable reference. The maximum allowable voltage is 36V, and the maximum current is 100mA ... but not at the same time. For the TO-92 version, maximum dissipation is 770mW, but it would be unwise to operate the IC at more than 500mW, and preferably less. My suggestion would be around 250mW, so at (for example) 24V, the operating current will only be 10mA. For high output current, a very high gain output stage is needed for the series pass transistor(s) and their driver transistor. MOSFETs are tempting, but come with caveats - see Section 12 - Using MOSFETs.
The general idea for a simple regulator is shown in Figure 2.1. While this will work, it's less than ideal, so we need to add a few parts to improve performance. If the output current doesn't need to be more than about an amp or so it will do the job, but it is quickly found wanting if you need any more. Because there's only a single transistor, R1 has to be able to supply enough base current for Q1 and provide the current for the zener diodes. Even for 1A output at 24V (nominal) with a 30V DC input, R1 has to supply a minimum of 50mA, 28mA 'reserve' current for the zener diodes and 22mA for the base of Q1. With no load, the total current is passed through the zeners. The problems get worse if more current is needed.
Note the diode connected across the series-pass transistor. That's there so that if (when) the supply is connected to a voltage source (such as a battery) but isn't powered on, the diode conducts, passing the external voltage back to the input. By including this, the transistor can never be reverse-biased, which can lead to failure. It also bypasses voltage spikes (from inductive loads, motors, etc.) around the transistor. This should be included in any power supply, even if it's not exposed to the outside world.
Figure 2.1 - Basic Regulator Circuit
The simple circuit shown has disadvantages, as you'd expect. The zener current is higher than it should be (so two 12V zeners are used in series) and it varies too much depending on the load. Regulation is mediocre, and there's no protection. If the output is shorted it will supply as much current as it can, leading to almost instant failure of the series-pass transistor. Because there's no driver transistor, the base current that needs to be provided varies widely. We must provide enough current to accommodate the 'typical' hFE, which as stated in the intro we'll take as 45. That means you need 22mA base current, so R1 has to be around 180Ω (30V input), and rated for at least 1W. The zener current will be 61mA with no load (35V input), and around 22mA at full load. It's also necessary to allow for a higher than expected input voltage with no load. If it comes from a transformer, bridge rectifier and filter cap, it will rise to about 35V, and this is the most likely voltage source.
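The R1 arithmetic above can be reproduced as follows. The 11mA zener 'reserve' figure is my assumption, chosen so the result lands on the 180Ω value quoted in the text:

```python
# Feed resistor for the basic (Figure 2.1 style) circuit: R1 must supply
# the pass transistor's base current plus a minimum zener current.
def feed_resistor(v_in, v_zener, i_base, i_zener_min):
    return (v_in - v_zener) / (i_base + i_zener_min)

def zener_current_no_load(v_in_unloaded, v_zener, r1):
    # With no load, all of R1's current flows through the zener string.
    return (v_in_unloaded - v_zener) / r1

r1 = feed_resistor(30.0, 24.0, 0.022, 0.011)      # ~182 ohms -> use 180
i_nl = zener_current_no_load(35.0, 24.0, 180.0)   # ~61mA, as in the text
```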
Figure 2.2 - Improved Basic Regulator Circuit
By adding a driver transistor, we lose a bit of output voltage (around 0.7V), but the circuit is far more attractive overall. The pair of zeners can be replaced by a single 24V zener, and by splitting the feed resistance into two (2 x 470Ω) we can add a capacitor to ground. This attenuates ripple for a cleaner output, and a larger capacitor provides better ripple and noise reduction. While it's often seen, adding a capacitor in parallel with a zener diode is close to useless, because the zener's dynamic resistance is very low, so the cap doesn't achieve anything useful. In Figure 2.2, D2 (marked 'Optional') is used to offset the base-emitter voltage of one of the transistors, or you can use two to get closer to 24V output.
The Figure 2.2 circuit is easily capable of 5A output with a 30V input. Zener current is well within the desirable limit, and even with no feedback, the regulation is acceptable. It's not precision, but nor are most of the circuits shown in this article. They are best described as 'utilitarian', in that they will do the job 'well enough' for most applications. If you need precision, you won't get it from simple discrete circuits.
The two regulators so far are very basic, having no form of protection, and no way to adjust the output voltage to be closer to the desired 24V. This is because they lack feedback, which is essential for reasonable performance. Feedback is also used to provide good overload protection, but that will come later. The Figure 2.2 circuit is capable of reasonably good regulation, although the output voltage is only about 22V with a 5A load. The output of both of these simple regulators can be boosted a little, by adding a diode (or two) in series with the zener. The forward voltage of the diode(s) helps to offset the base-emitter voltage of the transistors.
Note that for all of these simple regulators, I've only shown a single TIP35 power transistor. In most cases, at least two should be used (in parallel, with emitter resistors) to keep the temperature down to something 'sensible'. The emitter resistors can also be used for current sensing, and an additional resistor isn't needed. If there are two transistors in parallel, the emitter resistance should be double the value shown, and sensing taken from both resistors as shown next.
Figure 2.3 - Improved Basic Regulator Circuit - Parallel Output Transistors
The above should be used in most cases, but only a single transistor (and current sense resistor) is shown in the other circuits for clarity. It's important to sum the two voltages dropped across R4A and R4B, because the transistors will not be matched, and one will supply more current than the other. The effective current limit resistance is 135mΩ, which will bias 'on' a current limit transistor at around 4.8 - 5.2A total output.
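With the 135mΩ effective sense resistance quoted above, the onset of limiting follows from the limit transistor's base-emitter threshold. The 0.65 to 0.70V V-BE range used here is my assumption, but it reproduces the 4.8 - 5.2A figure:

```python
# Current at which the limit transistor starts to conduct: I = V_BE / R_sense.
def limit_onset(v_be, r_sense):
    return v_be / r_sense

i_low = limit_onset(0.65, 0.135)    # ~4.8A
i_high = limit_onset(0.70, 0.135)   # ~5.2A
```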
One of the first things that regulators that interface with the 'outside world' need is current limiting. It comes with caveats though, especially if the output is shorted (which will happen). Figure 3.1 shows the general principle, which has been around for almost as long as discrete regulators. It's very basic and just uses diodes. When the combined base-emitter voltage and that across the sense resistor (R4) exceeds the voltage drop of the diodes (about 2.6V), the diodes shunt the base current from Q2 (the driver) to the output. As a current limiter it's best described as "better than nothing", as it lacks any pretence at precision. However, it might just save the series-pass transistor(s) from failure, provided the fault is transient.
In reality, it's almost impossible to apply a direct short across anything, because there are always connectors and wiring forming part of the circuit. The total resistance depends on many factors, but it's 'traditional' to always design for the worst case. In fact, the transformer, bridge rectifier and internal wiring also add to the total series resistance, but in general it would be unwise to assume more than 100mΩ (0.1Ω) of external resistance.
Figure 3.1 - Improved Basic Regulator With Diode Current Limit
As shown, the simulator tells me that current limiting starts at 5A, and with a shorted output the current is 6.5A. A better scheme is shown next. R4 is the current sense resistor, and if the voltage across it exceeds 0.65V, Q3 will conduct, and it will bypass base current from Q2 to maintain the set current. The advantage is that the current limiter has gain, so it is more accurate than the Figure 3.1 circuit. With 0.1Ω (100mΩ) for R4, current limiting starts at about 5.5A, with the final current into a short-circuit limited to about 6A. This still isn't a precision limiter, but it's a lot better than a string of diodes.
Figure 3.2 - Improved Basic Regulator With Variable Current Limit
A simple transistor current limiter will often rely on a resistor value (for R4) that's unobtainable. The solution is to add a low-value pot (VR1) so the current can be adjusted. This can be used with any of the following circuits, and it lets you set the current with reasonable accuracy. Because the single current-sense transistor has limited gain, expect the current to vary by up to 300mA or more from the onset of limiting to a shorted output. This isn't a problem, as the limiting is intended only to provide some protection for the series-pass transistors, and it's not intended to be a precision circuit.
One thing that may appear strange is the use of an NPN transistor for limiting. It doesn't look like it, but both the base and collector are positive with respect to the emitter, so it must be NPN. In some of the other circuits shown below, the transistor is PNP, and the base and collector are negative with respect to the emitter. This can get confusing, but it depends on how the current limit circuit is configured. Make sure that you follow the drawings thoroughly to ensure that you understand when (and why) an NPN or PNP limiter transistor is used.
The problem with all simple limiters is that Q1 will dissipate up to 175W (35V across the transistor at 5A), far more than a TIP35 can handle under short-circuit conditions. It will usually be less in reality, because the incoming DC supply never has perfect regulation, as it has some internal resistance (transformer windings, diode resistance and wire resistance). Even if these add up to 0.5Ω, Q1 will still be subjected to a dissipation of around 160W, and it will still fail. Simple limiters require that the series-pass transistors can dissipate the maximum power, with particular attention paid to safe operating area. See The Elephant In The Room for details.
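The dissipation figures are simple arithmetic; a sketch (the function name is mine):

```python
# Pass-transistor dissipation with a shorted output, allowing for the
# supply's internal resistance (transformer, rectifier, wiring).
def short_dissipation(v_unloaded, r_internal, i_limit):
    v_across = v_unloaded - i_limit * r_internal
    return v_across * i_limit

p_ideal = short_dissipation(35.0, 0.0, 5.0)   # 175W -- the worst case
p_real = short_dissipation(35.0, 0.5, 5.0)    # 162.5W -- still fatal for one TIP35
```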
Figure 3.3 - Improved Basic Regulator With Foldback Current Limit
The answer to this is a technique known as foldback current limiting. As the voltage across the series-pass transistor increases, the allowable current is reduced. With the arrangement shown above, the circuit can only provide 1.6A into a short-circuit, while still being able to provide 5A at full voltage. The addition of just one resistor (R6) means that as the output voltage falls, Q3 gets additional base current through R6, turning it on harder and reducing the available output current.
The highest power in Q1 is 60W at an output current of about 3A and an output voltage of 9V. The general characteristics for foldback limiting are shown in the following graph. This is for the circuit shown above, and the trends are similar with most foldback regulators. The short circuit current is determined by R4, R5 and R6, and they are interactive. If any one of these resistors is changed, the limiting characteristic is modified. There is some leeway with R6 without seriously affecting the maximum current, but not very much.
Figure 3.4 - Foldback Current Limiting Voltage, Current And Power
As you can see, as the current increases, the voltage remains steady until the maximum (4.8A) is reached. This causes the output voltage to fall, which allows more current through R6, turning Q3 on harder. With the output shorted, the maximum current is 1.6A, and Q1's dissipation is 46W. Worst-case dissipation is 61W, with an output voltage of 9.3V and a current of 3A. All foldback limiters have a hidden 'gotcha', in that the circuit may not power up normally with anything close to full load. Foldback limiting is a form of positive feedback, and like all positive feedback systems it can be unstable under some conditions.
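A crude model reproduces the numbers above reasonably well. The linear interpolation between the two end points is my simplification; in the real circuit the curve is set by R4, R5 and R6:

```python
# Linear foldback model: ~4.8A allowed at the full 24V output, ~1.6A into
# a short, with the allowed current scaling between those two points.
def foldback_limit(v_out, v_nom=24.0, i_max=4.8, i_short=1.6):
    return i_short + (i_max - i_short) * (v_out / v_nom)

def q1_dissipation(v_in, v_out):
    return (v_in - v_out) * foldback_limit(v_out)

# Scan the output voltage range for the worst-case dissipation:
worst = max(q1_dissipation(30.0, v / 10.0) for v in range(241))
# Peaks near 9V output at roughly 59W -- close to the simulated 61W figure
```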
Figure 3.5 - Foldback Current Limiting (Traditional View)
Figure 3.5 shows the 'traditional' way that foldback current limiting is shown on a graph. A 'regular' current limiter simply provides constant current at any voltage once it's active, but the foldback limiter reduces the current as the load impedance falls. With simple limiting, if the regulator's input voltage is 30V and the output is shorted, it will deliver 5A, resulting in a regulator dissipation of 150W. With a foldback limiter, the maximum current with a shorted output is 1.5A, so the regulator dissipates only 45W. The lower the output voltage (with intermediate load currents), the lower the output current. As you can see, with an output voltage of 10V, the basic limiter still provides 5A output, where the foldback limiter reduces that to about 3.1A. You can work out the dissipation for each limiter type easily, and a foldback limiter always has lower power dissipation in the series pass device(s).
While the drawing shows a sharp transition from voltage to current regulation, this isn't the case with simple limiting circuits. In most cases, you'll see the voltage sag noticeably as the maximum preset current is approached, and for a 2.5A limiter this may start to be measurable from perhaps 2.3A onwards. Beyond the preset current limit, simple limiters will also allow the current to increase with decreasing load resistance. A precision current limit isn't usually required, and even the most basic arrangement will be sufficient to prevent disasters if everything is designed to handle the worst case.
Figure 3.6 - E-Fuse Protected Basic Regulator
There is another way to provide protection, and this one is (close to) bulletproof. An SCR (T1 for 'thyristor 1') is triggered if the current exceeds a preset maximum. Once it's triggered, the SCR shorts out the zener diode, and reduces output voltage and current to zero. It's reset by turning the power off and on again, or you can use a pushbutton in parallel with the SCR. It will cease conduction when it's shorted out, because there is no holding current. The nice part of this is that if the fault is still present, the SCR will be triggered again as soon as you release the pushbutton, and there is no way to force the regulator to provide more than around 6.4A. The extra capacitor (C3) is necessary to allow the regulator to charge C2. Note that R1 and R2 should be 1W if you use this arrangement, as they will dissipate just under 0.5W when T1 is triggered.
Note the connection of the 'Reset' switch. I have seen similar circuits where the switch is in series with the SCR, but that means that if the switch is open there is no protection! By having the switch in parallel, provided the fault has been cleared, output voltage is restored when the switch is released. If the fault is still present, the SCR will be re-triggered the instant that the switch is opened, so protection is never compromised. There are many things that have to be properly thought through with circuitry, and just putting a switch in the wrong place can lead to failure.
For many regulators, this arrangement can be the saviour of the series-pass transistor. While R4 does reduce the regulation (the output will fall by 0.5V from no-load to full load), this is rarely an issue with a simple design. R4 can be repositioned so it (and Q3 with associated resistors) comes before the regulator itself. The position doesn't matter, as the extra current for the regulator is minimal (only about 10mA with a 30V input). There's a solution for everything, even if it's not immediately obvious. There's also another way (as always), and it's far from obvious.
Figure 3.7 - 'Lossless' Current Detector
The reed switch shown above has the advantage that there is very little resistance in the circuit (I used 1mm wire, and heavier gauge wire would be used for higher current), but there is a (very) small delay because it's a mechanical contact. With the switch I tested, it requires 32 ampere-turns (2.3A, 14 turns) to operate, and it can be configured for almost any current you like. Anything over 32A would be a challenge though, as that implies less than one turn. Positioning the coil along the body of the switch provides some minor control over the trip current. Also of interest is just how fast the reed switch is. With only a small over-current (about 2.5A), it operates in 250µs - and yes, you did read that correctly. With a higher current it just gets faster, and I measured 200µs with 3A. That's not as fast as you'd normally expect from semiconductor circuitry, but it is still fast enough to protect the series-pass transistor.
Figure 3.8 - Reed Switch E-Fuse Protected Basic Regulator
The implementation is shown in Figure 3.8, and the trip current is set by the number of turns. Since all reed switches will be a little different, you'll need to test the coil and switch combination to work out the number of turns for the preset current. My test switch pictured above has 14 turns, and will trip reliably with 2.3A. If the winding is reduced to 7 turns, the trip current is 4.6A. Should the output be shorted, the instantaneous current from C2 will be very high, so operation should be close to instantaneous. If it's only used as an e-fuse, the exact current probably doesn't matter too much, as it's there for protection, not for precision current limiting.
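The turns arithmetic is straightforward: the switch operates at a (roughly) fixed number of ampere-turns (about 32 for the switch tested), so the winding is simply ampere-turns divided by the desired trip current, rounded up:

```python
import math

# Turns needed on the reed switch for a given trip current, based on the
# ~32 ampere-turn operate point measured in the text.
def turns_for_trip(trip_current, ampere_turns=32.0):
    return math.ceil(ampere_turns / trip_current)

n14 = turns_for_trip(2.3)   # 14 turns, as on the test switch
n7 = turns_for_trip(4.6)    # 7 turns for a ~4.6A trip
```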
When you have both voltage and current regulators (any form of current limiting), it's usual that the current regulator is 'dominant'. In other words, when the current limiter is active, it controls the output voltage and overrides the voltage setting. This usually (but by no means always) results in a stable circuit, because the two regulators cannot fight for control. By making the current control dominant, the preset current will be delivered whenever the load demands more, regardless of the voltage setting. The latter is automatically altered to maintain the preset current, unless the load current is less than the limit. Then (and only then) is the voltage control active.
The next set of drawings show feedback regulators, which have better regulation than the simple versions shown above. Feedback is used to ensure that any change in the output voltage is compensated by means of an error amplifier. This term explains what it does - if there's an error, the error amp makes the necessary compensation to restore the voltage to its preset value. All IC regulators contain an error amplifier plus comprehensive protection schemes. These include current limiting and thermal protection that turns the IC regulator off if the temperature rises beyond the preset limit (typically a die temperature of around 125°C).
Figure 4.1 - Basic Feedback Regulator
The above circuit used to be the mainstay of regulators before the advent of IC-based versions. I used it as a 48V phantom power supply in Project 93, but configured for much lower current. The feedback is via R5 and R6 to the base of Q3. If the output voltage falls, Q3 turns off just enough to restore equilibrium, and R4 (which can be installed for current sensing) has no effect on the output voltage because the feedback is taken from after the resistor. It can be used with foldback limiting (Figure 3.3), or an e-fuse arrangement as shown in Figure 3.6. Foldback limiting has to be set up carefully, because there are two feedback networks, one negative (to maintain the set voltage) and one positive (to provide foldback). The two will fight each other for control.
R7 may look out of place, but it's intended to stabilise the current through the zener, ensuring better regulation. By taking it from after the regulator, there is no injected noise (mainly ripple). Even at 5A output, the ripple is attenuated by over 40dB. The output voltage remains within 100mV of the target value from no load to 5A. If you need a more accurate voltage setting, either R5 or R6 can be replaced with a trimpot in series with a fixed resistor, allowing the voltage to be set precisely.
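The voltage-setting divider follows the usual feedback relationship: the error transistor holds its base at the reference voltage plus one V-BE drop. The component values below are illustrative assumptions only, not taken from the article's schematic:

```python
# For a feedback regulator of this type, V_out = (V_ref + V_BE) * (R5 + R6) / R6.
# The 15V reference and resistor values are illustrative, not from the article.
def output_voltage(v_ref, v_be, r5, r6):
    return (v_ref + v_be) * (r5 + r6) / r6

v_out = output_voltage(15.0, 0.65, 5600.0, 10000.0)   # ~24.4V with assumed parts
```

Replacing R5 or R6 with a trimpot in series with a fixed resistor (as suggested above) simply makes the ratio adjustable, so the output can be set precisely.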
Far better performance can be obtained by using an opamp in place of Q3, but that comes with limitations. Most are rated for a supply voltage of no more than 36V, so high voltage regulators cannot be realised with readily available opamps. Despite the number of so-called 'super' regulators used to power preamps and the like, a circuit such as that shown is perfectly acceptable in most cases. The circuit can also be used as a 'pre-regulator', allowing preamp circuits to be powered from the main power amp supply, with the discrete regulator followed by an IC version. This will provide almost infinite power supply ripple rejection.
Figure 4.2 - Opamp Based Feedback Regulator
The circuit in Figure 4.2 uses an 'ideal' opamp (available in the simulator I use), and as such it's close to perfect. The current limiter comes in at 2.6A, and reduces the reference voltage to maintain the preset current. For true precision, the current limit circuit would also use an opamp, as that provides much higher gain than the two transistors, and therefore has much better control of the output current. Since this article is primarily about 'simple' regulators, adding another opamp is out of scope. Be warned that when you do include an opamp or additional gain stages, there's always the likelihood that the circuit will become unstable, and it's necessary to include compensation capacitors to roll off the gain at high frequencies, where oscillation is likely to occur. Bear in mind that the 'ideal' opamp can provide as much base current as is needed by the series-pass devices, where a real opamp may be unable to do so.
The easy answer to the opamp conundrum (for high voltages) is to make it discrete as well, but the circuit becomes much more complex. In the majority of cases where you'd use a discrete regulator it's simply not necessary, but feel free to experiment if you want to. It will make voltage regulation better, but stability issues are always waiting to pounce. You will be able to get the voltage change (no load to full load) down to less than 1mV with a suitable opamp, but that's rarely important. It's a different matter if it's a lab supply where very accurate voltages are essential, but 'general purpose' regulators don't need to be that precise.
For any regulator, it's important that there's enough input-output differential to prevent ripple 'breakthrough'. All regulators require some 'headroom', the difference between the input voltage and the output voltage. I've allowed 5V of headroom in the examples, but that's often cutting it fine, especially if there's ripple on the incoming supply. This is where the selection of the transformer, bridge rectifier and filter capacitance is very important. If you get it wrong, the regulator will not perform as expected.
The transformer is at the heart of any power supply. To ensure that its regulation is adequate and to ensure it won't overheat, it needs to have a higher rating than you may expect. Capacitor input filters impose a heavy load on a transformer, so if the average output voltage is 30V and the current is 5A, that's 150 watts. However, transformers are rated in VA (volt-amps), and the VA rating should be not less than (around) 1.7 times the DC power. That means a 225VA transformer. Power supplies are used differently from (for example) audio amplifiers, and are often expected to provide full current for extended periods. To get a reliable 30V average DC voltage, the transformer will normally have at least a 25V secondary. The voltage will be close to 35V with no load, and AC current is double the DC current. A 25V transformer delivering 5A DC after rectification and filtering needs to deliver 10A AC (RMS), which is 250VA. Note that it doesn't matter if the DC output voltage is 5V or 25V, if the output is 5A then the transformer is still delivering 250VA. For safety, you'd use a 300VA transformer, as that's a standard size.
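The VA arithmetic above is easy to get wrong, so here it is spelled out. A sketch using the rule of thumb from the text (RMS winding current roughly twice the DC load current with a capacitor-input filter); the list of 'standard' sizes is indicative only:

```python
def required_va(secondary_vrms, dc_current, i_ratio=2.0):
    """VA demanded from the transformer: with a capacitor-input
    filter the RMS winding current is roughly double the DC load
    current, so VA = Vsec * (2 * Idc)."""
    return secondary_vrms * dc_current * i_ratio

va = required_va(25, 5)                 # 25V winding, 5A DC -> 250VA
standard_sizes = [160, 225, 300, 500]   # indicative catalogue sizes
chosen = next(s for s in standard_sizes if s >= va)
print(va, chosen)                       # 250.0 VA -> pick a 300VA part
```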
The circuits shown so far can function with an input-output differential of less than 3V, provided the average voltage remains at 30V or more. This means that the ripple's negative voltage can extend down to 27V (a total of ~8V peak-peak with an average of 30V) and the circuit will maintain regulation without ripple breakthrough. The next thing is to work out the DC supply for the regulator, which often produces a few mental shock-waves when you start to add up the pieces. For this, I'll assume the full-load ripple to be 4V P-P ...
The required capacitance for a given load current and ripple voltage is determined (approximately) by the formula ...
C = ( IL / ΔV ) × k × 1,000 µF ... where
IL = Load current
ΔV = peak-peak ripple voltage
k = 6 for 120Hz or 7 for 100Hz ripple frequency
Since I will always use 100Hz ripple frequency (50Hz mains), this can be checked easily, so ...
IL = 5A, ripple = 4V p-p, therefore C = 8,750µF (use 10,000µF)
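The formula drops straight into a few lines of code. A sketch of the article's rule of thumb, with k selected from the mains frequency:

```python
def filter_capacitance_uF(load_current, ripple_pp, mains_hz=50):
    """Approximate reservoir capacitance (microfarads) from the
    rule of thumb C = (IL / dV) * k * 1000uF, where k = 7 for
    100Hz ripple (50Hz mains) or 6 for 120Hz (60Hz mains)."""
    k = 7 if mains_hz == 50 else 6
    return load_current / ripple_pp * k * 1000

print(filter_capacitance_uF(5, 4))      # 8750.0 uF -> use 10,000uF
print(filter_capacitance_uF(5, 4, 60))  # 7500.0 uF on 60Hz mains
```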
This is well within expectations, and with a 25V transformer the average voltage (simulated) is just under 30V, with 2.9V p-p ripple. However, we've not considered the transformer's regulation yet, and it has a big influence on the final outcome. Transformers never provide the same regulation with a rectifier and capacitor load as they do with a resistive load (see Linear Power Supply Design for more on this topic). To be able to get 24V output at 5A means the transformer will have to provide an output current of close to 22A peak, or a bit over 9A RMS. We know that the voltage will sag under load, so we will probably need an output of 30V RMS to ensure that the voltage doesn't collapse too far. That means a 300VA transformer. It's only just big enough, but it will work at full load. Note that the RMS current from the transformer is almost double the DC current, something that isn't always appreciated or accounted for.
Of course, you may not need the full 5A output continuously so a smaller transformer (lower VA rating) may be suitable. This is entirely dependent on the application, something I can't predict. This is all part of the design process, and you need all of the information. Many people ask questions on forum sites with the bare minimum (and often not even that) and expect others to help them with a solution. It can't be done - all of the info needs to be available, and there's quite a bit of design work involved just to determine the transformer and filter capacitor requirements.
If you need a higher voltage, it's just a matter of increasing the zener voltage for the simple regulators, or changing the feedback resistors for the Figure 4.1 circuit. Ideally, the zener voltage for this will be around half the desired output voltage, so you'd use a 24V zener diode for a 48V supply. The series-pass transistor (Q1) will be happy with anything up to 100V input, provided you use the TIP35C. However, if you increase the voltage, you also increase the chances of failure if only a single transistor is used. My recommendation would be that if you double the voltage (from 24V to 48V) the output current should be halved (from 5A to 2.5A). It's important to always be aware of SOA - See #8.
Be particularly careful if the input voltage is much greater than the output voltage. While it's certainly possible to have 100V input and 5V output, it would not be sensible. Even with 1A of output current, Q1 will dissipate 95W (until it fails, which it will), and it's hard to get that much heat out of the transistor and into a heatsink. A heatsink that can dissipate 95W and remain at a sensible temperature is going to be a very substantial piece of aluminium - you're looking at a heatsink with a thermal resistance of 0.27°C/W for a temperature rise of 25°C (50°C heatsink temperature). The maximum allowable DC current through a TIP35C with 95V across it is only 100mA, limited by second-breakdown. No-one ever said that this was easy, other than someone who's never done it.
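The heatsink figure quoted is simply the allowable temperature rise divided by the power, which is worth having as a one-liner when juggling candidate designs:

```python
def heatsink_rth(power_w, temp_rise_c):
    """Maximum allowable heatsink thermal resistance (degC/W) for a
    given dissipation and rise above ambient - Rth = dT / P."""
    return temp_rise_c / power_w

# 95W with a 25 degC rise demands roughly 0.26 degC/W - a huge heatsink.
print(round(heatsink_rth(95, 25), 2))
```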
You can use higher voltage transistors if you really do need to reduce a high voltage to the required voltage, but you must consider the safe operating area (see below). There are many considerations, and it's not just about the transistor voltage rating. All resistors will dissipate more power too, and in general, regulating high voltages can be particularly challenging.
Figure 6.1 - 48V Feedback Regulator
As an example, Figure 4.1 is easily modified to provide 48V output at 2A. The circuit can supply more, but in the interests of minimal change, 2A is realistic. There are a few resistor changes, Q3 is substituted with a higher voltage type (the BC546 is rated for 80V), and the zener voltage is increased to 24V. Even without any adjustments, the output voltage (simulated) is 46V, well within the limits set for phantom microphone power for example.
For other voltages (and currents) it's a matter of selecting the component values to ensure sufficient base current for the series-pass devices, a stable zener current, and transistors that are within their safe operating area at all times. I didn't include current limiting, but an e-fuse circuit would be useful if there's any chance of the output being shorted. As you can see, the topology isn't changed at all, and with suitable high-voltage transistors a circuit such as this can regulate almost any voltage you like.
High voltage regulators were very uncommon with valve (vacuum tube) amplifiers, but there are some valves that are particularly fussy about the screen voltage. Transmitting valves (for RF work) have been used for audio, with one I'm familiar with being the 6146B. With a 750V plate supply, failure was assured if the screen was operated at more than 200V, and the only way to ensure reliability was a regulator. When these were built, no transistors were available that could handle the voltage, so it used a zener-controlled valve regulator. It worked well enough, but today there are many transistors that would be a lot better.
Often, the only thing you need to do to get more current is to use paralleled series-pass transistors, and you may also need to upgrade the driver transistor as well. Current up to 20A or so is usually not especially difficult, but the power transformer, bridge rectifier and filter caps become a serious (financial) problem if 20A or more is needed on a continuous basis. You'll also be looking for a pretty serious heatsink, depending on the load's duty-cycle. For momentary current up to 20A (less than ~100ms) you often don't need to do very much, but if the current is required for more than a few seconds you're probably better off with a switchmode supply. Again, it's important to be aware of SOA - See #8.
If you need lots of current at a relatively low voltage, a switchmode supply followed by a linear regulator will usually work well. The SMPS will be regulated, so you don't have to consider transformer regulation or other losses within a 'linear' supply. A voltage differential of 5V will normally be quite sufficient, and the regulator is greatly simplified.
Figure 7.1 - 24V @ 10A Regulator
The principles aren't changed one bit. We need an extra output transistor to handle the current, and that in turn requires a bigger driver transistor (Q2) and error amplifier (Q3). Due to the higher current through the circuit, we must ensure that everything is well within its limits for a long and happy life. Each output transistor has its own emitter resistor to force current sharing, but if current limiting or an e-fuse were needed, all three should be monitored, using summing resistors as shown in Figure 2.3. The higher the resistor value, the better, but we still need to keep the voltage drop to less than 0.5V, so 100mΩ resistors would be preferred.
Because the two base current feed resistors (R1 and R2) are a lower value, the bypass capacitor should be increased to ensure good ripple rejection. 220µF is ideal, and maintains much the same performance as we had with the lower-current version. While the circuit was simulated with a fixed 30V input, in reality it will likely be 35V at no load, and a 500VA transformer would be needed to maintain a voltage of not less than 30V (including ripple) at the input. Add to this the need for at least 20,000µF filter caps, a very good heatsink, plus the components. Adding current limiting would make it more complex of course.
In this case, the 'elephant' is SOA - safe operating area. There are three different parts to an SOA graph, the bonding wire limit (before it acts as a fuse), the thermal limit (how much heat can be removed from the junction) and second breakdown. Thermal and bonding wire limits are easy to deal with, but as shown below, second (or secondary) breakdown becomes an issue once the collector-emitter voltage exceeds 30V. While you may think that SOA only applies to the power transistors, it applies to every transistor in the circuit. The driver transistor is the next most at-risk, but it's unusual to see an SOA curve in datasheets for smaller devices (the BD139/140 are rated for 1.5A maximum, with a dissipation of 8W). It's always better to err on the side of making the driver transistor bigger (e.g. TIP41) than smaller, but you also need to consider the hFE at the expected collector current.
I've only shown the TIP35/36C graphs, as the 'A' and 'B' versions are identical, other than a lower maximum voltage (some suppliers only stock the 'C' versions). One of the reasons I recommend these transistors is that they are very rugged, and they are low-cost. The graph below was adapted from that shown in the Motorola datasheet, but it applies whoever makes the transistors. The essence of the graph is unchanged, but I made mods to the graph to make it easier to read.
Figure 8.1 - TIP35C, 36C Safe Operating Area
The second breakdown area is where things can get out of hand very quickly. The phenomenon is caused by 'hot-spotting' on the silicon die. If there's any difference between the temperature of one section versus another (which will always be the case), one small section will be a little hotter than the rest. This increases gain in that area, and also reduces the base-emitter forward voltage. The hot section then becomes hotter because it carries more current. This cycle continues until the transistor fails, which can happen very quickly. You can see that the SOA changes with time, so for DC it means lower voltage and/ or current than for momentary pulses. The shortest pulse shown is 300µs. For a regulator, we are primarily interested in the DC conditions unless we know (for certain) that short pulses are the normal load for the regulator in use.
For example, at a collector-emitter voltage of 30V, the maximum current is 4A, a dissipation of 120W (essentially the transistor's full 125W rating). Increase the voltage to 40V and the current is only 2A (80W). A further increase to 50V, and current is only 1A (50W). At 100V, the current is reduced to 100mA (a mere 10W). You ignore the SOA of any transistor at your peril, because failure is never a matter of 'if', but 'when'! The transient ratings mean that you can get more current at higher voltages, provided the time is short. With 40V collector-emitter, you can get 4.5A if the 'event' is over in 300µs, so charging an output capacitor (for example) won't usually kill the transistor - provided the current is limited to remain within the SOA. The SOA topic is discussed in detail in the article Semiconductor Safe Operating Area, but with an emphasis on power amplifier designs.
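The DC points quoted can be interpolated to sanity-check any proposed operating point. This is an illustrative sketch only (the datasheet graph remains the authority): it interpolates the text's four DC SOA values on log-log axes, which is how SOA curves are actually plotted:

```python
import math

# DC SOA points for the TIP35C taken from the text: (Vce, max Ic).
SOA_DC = [(30, 4.0), (40, 2.0), (50, 1.0), (100, 0.1)]

def soa_current_limit(vce):
    """Maximum DC collector current at a given Vce, interpolating
    the published points on log-log axes. Below 30V the thermal
    (125W) and peak-current limits apply instead."""
    if vce <= SOA_DC[0][0]:
        return SOA_DC[0][1]
    for (v1, i1), (v2, i2) in zip(SOA_DC, SOA_DC[1:]):
        if vce <= v2:
            slope = (math.log(i2) - math.log(i1)) / (math.log(v2) - math.log(v1))
            return math.exp(math.log(i1) + slope * (math.log(vce) - math.log(v1)))
    raise ValueError("beyond rated Vce")

print(round(soa_current_limit(40), 2))   # 2.0 A
print(round(soa_current_limit(45), 2))   # ~1.4 A, between the 40V and 50V points
```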
In all of this, there is still another 'gotcha'! Note that the figures shown are all for a case temperature of 25°C. Maintaining this in use is generally impossible, so the maxima all have to be derated at elevated temperatures. For the TIP3x devices, the dissipated power is derated by 1W/°C (from the universally accepted 25°C), so at a case temperature of 50°C, the maximum power is reduced by 25W (maximum allowable dissipation is therefore 100W, not 125W). At a case temperature of 150°C, no power may be dissipated at all. The die (or junction) will also be at a temperature of 150°C, and any additional power will increase the junction temperature to the point of failure (150°C is the maximum allowable). Most bipolar transistors are the same in this respect, but some MOSFETs can tolerate up to 175°C junction temperature.
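The derating described is linear, so it reduces to a one-line calculation (using the TIP3x figures from the text):

```python
def derated_power(t_case, p_max=125.0, derate_w_per_c=1.0, t_ref=25.0):
    """TIP35C-style linear derating: full rating at a 25 degC case
    temperature, falling by 1W/degC, reaching zero at 150 degC."""
    return max(0.0, p_max - derate_w_per_c * (t_case - t_ref))

print(derated_power(25))    # 125.0 W - the headline rating
print(derated_power(50))    # 100.0 W at a realistic case temperature
print(derated_power(150))   # 0.0 W - no dissipation allowed at all
```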
Failure to accommodate the SOA vs. temperature curves is a major reason for failure, and few people consider the thermal resistance from case to heatsink. A transistor dissipating 50W can easily have the case temperature a full 50°C above the heatsink temperature (1°C/W). See The Design of Heatsinks for a very detailed discussion of how to apply a heatsink, knowing the power to be dissipated, transistor specifications, etc. Heatsinks only seem simple, but there's a lot needed to get it right. Using the right thermal interface material (aka 'TIM') is essential to minimise the thermal resistance, which can mean success or failure of the end result.
There are countless power supply schematics on the Net, and a majority of them underestimate the power dissipation, and give nary a thought to SOA. Perhaps surprisingly, many of these circuits will work with most typical loads, but unfortunately, if you have a power supply, it will be used 'inappropriately' at some stage. This is the nature of a power supply, you never know what it may be expected to drive in advance, and it's not until you have one that you'll come up with 'exciting' ways to use it. Yes, I am speaking from personal experience, with at least 40 years using power supplies in ways I didn't envisage when I built my first unit. Fixed (internal) supplies have one major benefit - you know exactly what they need to drive, and they're generally in the same chassis.
A semi-discrete design can be engineered to have excellent performance, and an example is shown below. The error amplifier is now an opamp driving a transistor, so it has a vast amount of gain for high accuracy and very good ripple rejection. The extra complications are not particularly DIY friendly, as there's a lot of extra parts. Of greater concern is stability. No-one wants a power supply that thinks it's an RF transmitter with some loads, and stability needs to be verified at every possible combination of output voltage and current. In any high-gain circuit, ensuring complete freedom from oscillation can be surprisingly difficult, and power supplies are no different.
Figure 9.1 - Semi-Discrete Regulator
As you can see, the opamp needs its own power supply (±12V), and there are two capacitors to ensure stability. You may wonder where the reference voltage is, as it's shown using a zener diode for the other designs. The reference is the -12V supply! This circuit is adapted from Bench Power Supplies - Buy Or Build?, a discussion as to whether one should consider building a variable bench supply or not. It's been changed so that only the highest current range is included, and voltage adjustment has been set up to allow it to be trimmed with the preset. The circuit was originally devised by John Linsley-Hood and was published in 1975. Although the circuitry is rather dated, it will still perform very well. C3 and C4 are included to slow down the circuit, and these prevent oscillation. Their inclusion also means that there's overshoot and undershoot when the load is connected or disconnected, and this may not be desirable with some sensitive circuits. Q3 must be mounted on a heatsink, as its dissipation can be up to 2W.
If you only need a fixed voltage, and your requirements are fairly relaxed, this is not the kind of circuit you'd normally use. The article has several other circuits that are worth looking at, but the complexity is fairly high in all cases. Note that the Figure 9.1 circuit is designed to be able to drive full current into a shorted output, so it uses two TIP36 power transistors. They are within the SOA curve at all times, but a fan forced heatsink is essential. I doubt that many readers will find this an attractive proposition.
If you imagine that even better performance is needed (particularly accurate current limiting), then the pain is increased accordingly. When you have two feedback systems (one for voltage, one for current), there is always a point where both are active, and if not worked out properly they may be fighting each other for control. This will lead to instability (oscillation) that will usually be very difficult to suppress successfully, so there may be some combinations of output voltage and current that cannot be used without the supply oscillating. This is unlikely to be high on anyone's wish-list.
Figure 9.2 - LM317 Based Regulator
One arrangement that's very common is a current-boosted LM317 (and/ or LM337). Without external current limiting provided by Q3 and Q4, there is no protection at all, so a short or severe overload at the output will cause the booster transistors to fail. When current limiting is applied at the input side as shown, there may be some ripple on the output when the current-limit circuit is active. The only way to eliminate that problem is to have a separate sensing resistor at the output, but that affects regulation. Note that both emitter resistors for Q1 and Q2 are monitored, as the gain of the transistors will be different (emitter resistors notwithstanding). The ICs have their own internal bandgap reference, using 1.25V.
It's shown here using just a 10k pot to set the output voltage, and the trimpot (VR2) provides the ability to use a standard value linear pot to set the voltage for a variable supply. As shown, the output is adjustable from 0V up to 25V. The requirement for a clean negative supply for the current limiter and voltage pot is a nuisance, but that can be provided by a low-current regulator. There are endless possibilities for voltage regulation, and the circuit needs to be selected based on your needs. A boosted 3-terminal regulator is a good solution when you need particularly good regulation, but without protection it's vulnerable to damage by overload. Zero volts output is possible by using the low-voltage negative supply (as close as possible to -1.25V). This is used for VR1, R2 and the emitter of Q4. The three diodes are important. Without D2, if the output is shorted the IC will be damaged. The other two protect the supply against an external voltage (of either polarity).
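For the plain (positive-referenced) LM317 configuration, the datasheet gives Vout = Vref × (1 + R2/R1) + Iadj × R2. A sketch of that standard relationship - note that the Figure 9.2 circuit returns the adjustment network to a negative rail to reach 0V, which this simple formula doesn't capture, and R1/R2 here are the generic datasheet names rather than components in the schematic above:

```python
def lm317_vout(r1, r2, v_ref=1.25, i_adj=50e-6):
    """Standard LM317 datasheet relationship:
    Vout = Vref * (1 + R2/R1) + Iadj * R2.
    Vref is the 1.25V internal bandgap reference; the ~50uA
    ADJ-pin current contributes a small extra term."""
    return v_ref * (1 + r2 / r1) + i_adj * r2

# The classic 240R / 1k5 pair gives a little over 9V:
print(round(lm317_vout(240, 1500), 2))   # 9.14
```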
This is about as simple as it's possible to make a power supply based on the LM317. If you need alternative current limits, the easiest is to use another sense resistor in series with the input (but after C1 of course). This can use switched values, or the voltage across it can be amplified. The latter is a more complex solution, and isn't shown here. Some example circuits are shown in Bench Power Supplies - Buy Or Build?. The current limiter needs to be fairly fast to provide full protection for the current boost transistors.
While there are some good examples in the LM317 datasheet, most are without explanation, and a few appear to be rather suspect. I cannot vouch for any of the circuits described in the datasheet, as many will not simulate properly (if at all), and others are just basic modifications to the generally accepted circuits. I suggest that if you do decide to use any of the demonstration circuits that you do so with care, and be prepared to encounter difficulties (oscillation can be particularly troublesome).
Note Carefully: In the documentation for various regulators, the input-output differential voltage is quoted. This is 40V for the LM317, and many people seem to think that it's therefore alright to have an input of (say) 60V, provided the output voltage is set for at least 20V. This myth is backed up all over the place, but fails to consider reality. When power is applied and there's a decent sized output capacitor, it's discharged at power-on, so the full 60V is across the regulator. If there's a momentary short at the output, the full 60V is across the regulator.
So, while it's claimed that the input voltage can exceed the maximum input-output differential rating (provided the differential itself stays within limits), relying on this can lead to failure. You may be able to bypass the IC with a 36V zener diode that can handle the output cap's charge current, but even a momentary short will probably kill the zener, and the output will be at the full unregulated voltage. You won't find many people talking about this, but it's very real. I would never advise anyone to operate any regulator IC with more than its maximum voltage at the input.
All of the power supply circuits shown are capable only of sourcing current. That means they can provide power to a load, but they cannot sink, which is to accept current from another source. There are laboratory power supplies that can do either, namely provide or accept current. For most testing, this isn't necessary, but a supply with this capability is known as a '2-quadrant' supply if it can source or sink current of one polarity, or a '4-quadrant' supply if it can source or sink current of either polarity.
A basic 'electronic load' is (usually) a single quadrant current sink. It can absorb current, but cannot supply anything to an external load. These are specialised, and are typically used to test power supplies. It's unlikely that you'll ever need one, as most of the time a suitable resistor bank is the easiest (and you may already have one as a dummy load for amplifiers). If you do happen to need a true electronic load, some modern switchmode types use 'regenerative' capabilities, and can return the absorbed power back into the mains, minimising wasted power. There's a lot involved, and they are definitely not a DIY project.
A supply that can both source (supply) and sink (absorb) power needs a set of transistors for each function. It requires feedback to ensure that its output voltage remains fixed regardless of whether it's sourcing or sinking current, and a dual-polarity current limit so that excessive power won't cause damage. Consider a supply set for 6V, but connected to a 12V car battery. The battery can deliver hundreds of amps (at least for long enough to blow up the PSU), so the supply must be designed to limit the maximum current being absorbed to a safe value. As you can imagine, this involves a great deal of circuitry, and most people will never need one. I do have a current sink - it's called a dummy load, and can be set for 4, 8, 12 or 16Ω. I have never had a need for anything more advanced in my workshop, but I did design one for a company I worked for because there was one type of supply that required 'soak testing' to ensure the voltage never fell below a critical voltage level.
These are specialised supplies, and require significantly more electronics than a 'simple' power supply. An audio power amplifier is a 4-quadrant power supply if it can amplify DC, but the normal transistor complement is nowhere near sufficient to allow it to be used as a power supply. Because these are so specialised, they are mentioned in passing, and details will not be provided here.
However, there is one simple supply that can source and sink current - a shunt regulator (often nothing more than a resistor and a zener diode). Note that I mention this in the interests of completeness, even though it's of no practical use in 99% of cases. More information is available in Voltage & Current Regulators And How To Use Them.
If you think that you really need a 2-quadrant or 4-quadrant power supply, you could look at the OPA549, but it's rather limited since it's a single IC and has fairly low power dissipation. It's also expensive, but it does include programmable current limiting (set with a resistor or a pot). You could also use an LM3886 IC power amplifier, but the available current is even more limited, and getting the heat out of any IC will always be a challenge. There are several other similar options, but none that I'd really recommend. This is simply because it's not necessary in most cases.
It's commonly believed that MOSFETs don't suffer from second breakdown, and therefore should be 'better'. However, the vast majority of MOSFETs available are designed for switching, not linear operation. They also suffer from a failure mechanism that's remarkably similar to second breakdown, but it's usually spoken of in hushed tones, lest anyone find out about it. Ok, that might be a stretch, but in almost all cases, MOSFETs are optimised for low RDS-On (on resistance) and high switching speeds. The only MOSFETs that are specifically designed for linear operation are lateral types, as used in Project 101. These have a very different set of output characteristics from 'vertical' MOSFETs (e.g. HEXFETs and their ilk), with a high RDS-On and lower transconductance (roughly equivalent to gain).
Many people (including me) have used switching MOSFETs in linear circuits, and with care they will work. Some of the early types were almost suitable due to comparatively high RDS-On compared to the latest and greatest. However, the design of MOSFETs has evolved, and linear operation is no longer something you can rely on. They will often work quite well (I've tested and verified this), but in general they are simply not recommended (and that's the manufacturer's recommendation, not just mine).
When you look at the SOA curves for MOSFETs, you'll see curves for various time-limited operation, but nothing for DC. The SOA graph for even a lowly IRF540N shows 10ms, 1ms and 100µs curves, but no DC curve. Most are the same, and only a few show DC characteristics (mainly older devices that may or may not still be available). You may be able to use a MOSFET if it's significantly derated, but you would need to run extensive (and likely destructive) tests to determine if it will survive in your application.
You can often see just from basic specifications if a MOSFET is likely to work in linear mode. The first clue is a high RDS-On, usually greater than 0.5Ω. An example is the IRF840, rated for 500V at 8A. However, the TO-220 case is terrible for dispersing heat because the tab is small. With 30V drain-source voltage, only 4A is available, or 8A with 15V. These figures are much the same as for the TIP35/36, but the latter have a larger case and better heat dispersion. You will never get 120W of heat out of a TO-220 package, so the MOSFET must be operated at a lower current (or use multiple devices in parallel). With 100V across the device, an IRF840 can deliver up to 1.25A, and although this is significantly better than the TIP35/36, you'll still be unable to get the heat (125W) out of the TO-220 package if the power is anything other than a transient event.
MOSFETs cannot be used as regulators 'open-loop' (no feedback) either, because the gate-source voltage is highly variable (from one device to the next and with temperature). However, the IRF840 might be a good choice if you need a 400V regulated output at relatively low current (preferably less than 100mA). It will need extensive protection, both to limit the power dissipated, and to ensure that the gate-source insulation cannot be damaged (this requires a 12-15V zener diode).
The RDS-On of a MOSFET operated in linear mode is irrelevant. The power dissipated is the product of drain-source voltage and current. If you imagine that a low RDS-On makes a difference, then you don't understand how linear circuits work (and yes, I have seen this claimed, hence the comment here).
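The point bears repeating with numbers - a trivial sketch of the dissipation arithmetic:

```python
def linear_dissipation(v_ds, i_d):
    """Heat in a pass device operating linearly is simply Vds * Id.
    RDS(on) only matters when the device is driven fully on, which a
    regulator's pass element never is."""
    return v_ds * i_d

print(linear_dissipation(100, 1.25))   # 125.0 W - the IRF840 case above
```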
It's easy to see why switchmode supplies have taken over for high current outputs. The entire SMPS will be smaller than just the transformer, and will also cost a great deal less. At the time of writing, I had a quick look on eBay and found (for example) a 24V, 10A SMPS for just over AU$23.00 up to around AU$30.00 or so. It's impossible to compete with this price, and even if they cost twice as much, it's still way cheaper than one you could build using linear techniques. This applies to many different voltages and currents, but the choices are limited. You can get 5V, 12V, 24V and 48V SMPS at various current ratings. 'Traditional' suppliers are more expensive of course, but you'd still be hard-pressed to build a linear supply for less than even the most expensive SMPS.
None of this makes the circuits here redundant or of no use, as it's all about the principles. Supplies such as the one shown above were used regularly before the advent of low-cost switchmode supplies. Early SMPS were both complex and expensive, and having worked with them many years ago, I have first-hand experience with them. Unlike those today, they were always repaired after a failure (which were fairly common), and even the repair process was tricky. Like all SMPS, everything had to be fully functional, or the supply would blow up again when tested. Before I devised some specialised test jigs, technicians used to power-on a 'repaired' supply with a broom-handle, lest the SMPS blow up in their face. This is not made up - it's 100% factual.
One thing that building a supply can provide is flexibility. If you need (say) 13.8V with a preset current limit, you'll probably pay dearly for that (that describes the requirements for a lead-acid battery charger). There are many other places where your needs aren't provided for by COTS power supplies, and unless you're an experienced SMPS designer you don't have many choices. Under these conditions you will end up having to use linear supplies, and even more so if high-frequency noise is an issue. In some cases, you can use a COTS supply followed by a linear regulator, which reduces the size, weight and cost, and you get the best of both worlds.
Voltage regulators aren't actually hard to design, but it's important to consider all of the factors. It's not just finding a transistor that can pass the current you need, but finding one (or more than one) that can handle the power, won't be subjected to second breakdown, and has the thermal ratings needed to ensure it can be kept to a reasonable temperature. Thermal derating has to be factored into the design, along with the input voltage variation. While none of this is difficult, there are many pieces to the puzzle, and they all have to fit together.
You also need to factor in the transistor gain at the current it has to carry, as most transistors vary their gain across the current range. Choosing a suitable heatsink can be a challenge as well, and if you don't understand how everything fits together then the end result is a lottery. Failure to keep the transistors within their safe operating area means that the regulator will fail when you push it to its limits, unleashing the full incoming DC upon your circuitry.
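Heatsink selection follows from the thermal chain mentioned above. A minimal sketch of the usual calculation, with all figures illustrative rather than taken from any specific datasheet:

```python
# Required heatsink thermal resistance for a pass transistor:
#   Rth(sink) = (Tj_max - T_ambient) / P - Rth(junction-case) - Rth(case-sink)
# All numbers below are illustrative assumptions, not datasheet values.

def heatsink_rth(p_diss, tj_max=150.0, t_amb=40.0, rth_jc=1.5, rth_cs=0.5):
    """Maximum heatsink thermal resistance (deg C/W) keeping Tj <= tj_max."""
    return (tj_max - t_amb) / p_diss - rth_jc - rth_cs

# e.g. 30 W dissipated on a 40 deg C day
print(f"{heatsink_rth(30.0):.2f} deg C/W maximum")
```

A derating margin on tj_max (e.g. designing for 125°C rather than 150°C) is normal practice, and makes the required heatsink correspondingly larger.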
Getting very good regulation (both input or 'line' and load) demands more complex circuitry, so it needs to be tested thoroughly to ensure that the regulator doesn't oscillate at any load. This can be difficult if you use a high-gain error amplifier, and it's made worse when current limiting is included. Foldback limiting can be particularly hard to get right, as the voltage and current curves must remain within the safe operating area at all times, compensated for elevated heatsink temperatures of course.
The last thing I want to do is turn people off building their own regulator designs, as you will learn a great deal by doing so. What I do want to do is provide enough information so that your design has some chance of working without failure, hence the details presented here. It's especially important to be aware that extremely good regulation isn't often needed. You do need to be able to provide a voltage that's close to the desired figure, but unless you're working with precision test gear you rarely need perfect regulation.
What you do need is low ripple and some control over the maximum allowable output current. Once you understand that exceptional voltage stability is rarely needed, that makes your job that much easier. Most circuits won't care one bit if the voltage falls by a couple of hundred millivolts from no load to full load, as that's still far better than you'll get from a transformer, bridge and filter capacitor. Some circuits do care though, so a thorough analysis of the regulator's requirements is always necessary.
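To put a number on the droop described above, load regulation is normally expressed as a percentage. A minimal sketch, using an assumed 12V rail with the 200mV fall mentioned in the text:

```python
# Load regulation as a percentage:
#   reg = (V_no_load - V_full_load) / V_full_load * 100
# The 12 V rail and 200 mV droop are illustrative assumptions.

def load_regulation(v_no_load, v_full_load):
    """Load regulation in percent."""
    return (v_no_load - v_full_load) / v_full_load * 100.0

print(f"{load_regulation(12.0, 11.8):.2f} %")
```

Under 2% regulation is far better than an unregulated transformer/bridge/capacitor supply will ever achieve, which is the point being made.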
Elliott Sound Products - Relays & How To Use Them - Part 1
Relays (and in particular the electro-mechanical types) might seem so-o-o last century, but there are countless places where it simply doesn't make sense to even consider anything else. Although one could be forgiven for thinking that there must be a better way to switch things on and off, in many cases a relay is the simplest, cheapest and most reliable way to do it. Relays are electro-mechanical devices, in which an electromagnet is used to attract a moveable piece of steel (the armature), which activates one or more sets of contacts. The relay as we know it was invented by Joseph Henry in 1835. It has been in constant use ever since, and they are likely to be with us for many decades to come.
This article mainly covers 'conventional' (i.e. electro-mechanical) relays, but there are also several different types of solid-state relays. We'll look at some of those later, but very few are suitable for use in audio circuits. Some shouldn't even be used to turn on transformers, even though their specifications may lead you to think that they would be ideal.
Relays are not well understood by many DIY people, and there are many misconceptions. The purpose of this article is to give a primer - what the Americans might call "Relays 101". It's not possible (or necessary) to describe every different relay type, because they all operate in a similar manner and have more points of similarity than differences. Relays are used in nearly all automation systems, both for industrial controllers and in home automation systems. One of their great benefits is that when off, no power is drawn by the relay itself or the load. There is virtually no 'leakage' current via the contacts, and the insulation materials will normally have a resistance of several gigohms (GΩ).
Many websites discuss relays, but the intention here is not just to provide a primer, but to look at ideas that will be new to many, and possible pitfalls as well. There are places where relays are used where you might expect them to last forever, but they don't. Since relays are normally so reliable, we need to examine the things that can go wrong, and learn how to specify a relay for what we need to do.
There are thousands of different relays on the market. They range from miniature PCB mounting types intended for switching signal-level or other low voltages, up to very large industrial types that are used to start big electric motors and other industrial loads. These are usually referred to as 'contactors', but that's nothing more than a different name for a really big relay.
Being electro-mechanical devices, this means that there are both electrical and mechanical components within a relay. The electrical part (not counting the contacts) is the actuating coil, which is an electromagnet. When current passes through the coil winding, a magnetic field is created which attracts the armature (i.e. a solenoid). Provided there is enough current (known as the pull-in or 'must operate' current), the armature will be pulled from its rest position so that it makes contact with the remainder of the magnetic circuit. In so doing, the relay contacts change from their 'normal', 'rest' or 'reset' position to the activated or 'set' position.
A single electromagnet can activate several sets of contacts, but in most relays the number is generally no more than four sets. More may cause problems, because the armature will have to be able to move too many parts, so the return spring needs to be more powerful as does the electromagnet. The contact alignment also becomes critical, to ensure that every set of contacts opens and closes and has sufficient clearance for the intended voltage. Some of the things that make relays so popular are ...
It should be noted that automotive relays are a special case, being specifically designed for low voltage (12 or 24V) use, and one end of the coil is often connected to internal parts of the relay. Automotive relays must never be used with mains voltages, or where there is a significant voltage difference between the coil and contacts. The insulation is not rated for high voltages, even if the coil is not connected to anything internally. Most also draw significantly more coil current (typically 200mA or more) than 'general purpose' types (40-50mA). However, automotive relays are also rated to handle up to 150A or more at 12V DC.
It's quite easy for a microcontroller to activate a small relay, which activates a bigger relay, which in turn activates a contactor to power a large motor in an industrial process. This can be thought of as a crude form of amplification, where a very small current may ultimately result in a huge machine starting or shutting down. There's even something called 'relay logic', where relays are literally used to implement logic functions (see Relay Logic for a bit more info on this seemingly odd usage).
The references have more information, and for some very detailed explanations, reference [ 1 ] is worth a read.
The essential parts of a simplified relay are shown below. In most relays, the coil is wound on a former (or bobbin), and is fully insulated from everything else. The coil (solenoid) along with the rest of the magnetic circuit is an electromagnet. Most relay specifications will tell you how much voltage you can have between the two sections, and it's not uncommon for relays to be rated for 2kV isolation or more. Don't expect miniature relays to withstand high voltages unless you get one that's specifically designed for a high isolation voltage. We'll look at this in more detail later.
The relay is shown as de-energised (A) and energised (B). The coil is usually not polarity sensitive, and can be connected either way. Be aware that there are some relays where the polarity is important, either because they have an in-built diode, they use a permanent magnet to increase sensitivity (uncommon), or because they are latching types. Latching relays are a special case that will be looked at separately. The contact assembly is made from phosphor-bronze or some similar material that is both a good electrical conductor and is flexible enough to withstand a million or more flexing (bending) movements without failure. The contacts are welded or riveted into the contact supports/ arms and can be made from widely different materials, depending on the intended use.
The contact 'arms' are typically fastened to the body of the relay mechanism, sometimes with rivets, occasionally with screws. Each contact is separated by a layer of insulation, and the contacts are usually also insulated from the magnetic circuit (the yoke and/or armature). The separate parts of the contact assembly are insulated from each other. Not all relays have a physical spring to return the armature to the rest position. In some cases, the contact arms are designed to act as springs as well. You will also see relays that have the moving contacts attached directly to the armature - the octal base relay shown in Figure 1.2 uses this method.
The relay shown has contacts that are most commonly called 'SPDT', meaning single-pole, double-throw. The term 'double-throw' means that one contact is normally open ('NO') with respect to the common, and the other is normally closed ('NC'). The 'normal' state is with the coil de-energised. When the rated voltage is applied to the coil, enough current flows so that the armature is pulled in to close the magnetic circuit, the 'NO' terminal is now connected to common, and the 'NC' terminal is open circuit.
This allows you to disconnect one signal or load of some kind, and connect a different one. Alternatively, a circuit may be operational only if the relay is de-energised, and is disconnected when power is supplied to the coil. Another very common configuration is called DPDT - double-pole, double-throw. This provides two completely separate sets of contacts, with both having normally open and normally closed contacts. 4PDT is now easily decoded - it means 4-pole double-throw. You will also find SPST relays - a single set of (usually) normally open contacts.
The photo shows a very, very small sample of relays, picked to show the diversity and the internals of some typical components. There are many others, including many different styles of reed relays as well as several intermediate sizes of conventional relays. You can see that one relay has an octal base - exactly the same as used for many thermionic valves ('tubes' if you must). Although the relay I have shown is many years old, this style is still available, because it makes it easy to replace relays in industrial control systems.
In fact, there are very few relays that have been discontinued. There may be changes to the contact materials (see below for more) and cases might change from metal/ Bakelite to plastic, but the basic styles and contact configurations have remained. There are so many controllers that rely on relays used in industrial processes that replacement relays tend to be made available for an eternity compared to 'consumer' goods. Relays are not an audio product - they belong to a different class of equipment where failure may mean the loss of $thousands an hour. However, they also have a place in audio, as seen in several ESP project articles.
It should be remembered that relays were first used in telegraphy, followed by telephone systems, so they are the product of the first ever branch of 'audio' and the catalyst for most electronic equipment - the telephone. Like so many of the things we take for granted these days, the telephone system has been the originator of a vast array of products and techniques that are now part of almost everything we use. If you wish to see an early example, it's covered in 'Morse Code - The Start Of Electronic Messaging'. The term literally came from the use of 'relay stations' that were required to transmit messages over distances greater than could be covered with a single telegraph link. Initially, this was done manually (receive & transcribe the message, then re-transmit it to the next station - preferably without errors!), until the electromechanical relay (EMR) was developed. This cut out the 'middle-man'. Time has only increased the number of relays as we know them, with no sign of them vanishing anytime soon.
There's a special class of relays that are intended for protecting 'life-and-limb'. Standard relays are generally extremely reliable, but they don't have the necessary internal structure to qualify as a true safety relay. In general, unless a relay datasheet specifically states that the design meets the requirements for a safety relay, assume that it's just an electromagnetic switch. Safety relays are designed so that it is impossible for both normally-open and normally-closed contacts (NO/ NC) to be closed simultaneously, even if one contact set is welded closed due to excess fault current or other internal contact damage. This isn't easy to achieve!
The contacts of a safety relay use 'force-guided' contacts, where no fault can allow both sets of contacts to be closed simultaneously, regardless of internal contact failure. Many/ most 'standard' relays rely on the spring tension of the 'common' contact support for return action (most contact supports are phosphor-bronze or similar high-conductivity spring material). A force-guided relay uses the armature's return spring to actively pull the normally open contacts open when the relay is deactivated. Contact arm 'springiness' is no longer a potential limitation, and in many small relays, it's the 'springiness' of the common contact arm that provides the restoring force to the armature. Most 'true' safety relays use a combination of two or more relays, interconnected so as to provide a fail-safe power disconnection on demand. Full coverage of safety relays is outside the scope of this (and subsequent) articles, as they must be fully certified to be classified as a true 'safety relay'. Of course you can build one for yourself, but it won't be certified and may not meet the requirements of the 'real thing'.
Most safety relays also provide signalling contacts, so operation can be monitored remotely, for example as part of a multi-stage control system. Should any part of the system be shut down due to a fault, then all other equipment that forms part of the system as a whole will also be stopped. Safety relays are often connected to an approved 'Emergency Stop' button, and there may be several of these throughout a large system, typically wired in series so that a wiring fault (e.g. an open circuit) shuts down the system rather than leaving an emergency stop button inoperative. This would be unacceptable in any installation.
When it comes to safety systems, things become complex (and expensive) fairly quickly, because no-one wants to be killed or injured by a malfunctioning machine. For the relay circuits shown here, none is suitable for use as a true safety relay. Most are safe enough for a single circuit, but 'solid-state' relays (SSRs) must never be used where safety may be compromised by semiconductor failure. Almost all semiconductor switches fail short-circuit, including thyristors (SCRs or TRIACs), MOSFETs, IGBTs and bipolar transistors. Where safety is critical, a properly designed electromechanical system will win every time. This may be a little confronting to many people, as we tend to think of semiconductors as having an indefinite (not quite infinite) life. This is true when everything is designed properly, but failures must never compromise safety.
For any given relay, there are specifications that describe the maximum rated contact voltage and current. Relays for high voltages need contacts that are further apart when open, or may be operated in a vacuum. Those for high current need a contact assembly and contact faces that have low resistance and can handle the current without overheating or welding the contacts. The maximum contact ratings must never be exceeded, or the life of the relay may be seriously affected. In particular, make sure that the relay you use can handle the peak inrush current of the load.
There are many factors that influence inrush, but be aware that it can be as much as 50 times the normal full-load current. With inductive loads (transformers and motors for example) the worst case inrush current is limited only by the winding resistance plus the external mains wiring impedance. Note that zero-voltage switching (with solid state relays in particular) should never be used with these loads - ever! Capacitive loads and electronic power supplies present challenges, and are also generally not appropriate for solid state relays, but for different (and complex) reasons.
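The worst-case figure is easy to estimate from the resistances involved. A minimal sketch, with the 2Ω winding resistance and 0.4Ω mains impedance being assumptions for illustration only:

```python
# Worst-case transformer inrush is limited mainly by the primary winding
# resistance plus external mains wiring impedance (per the text).
# The resistance figures below are illustrative assumptions.
import math

def worst_case_inrush(v_rms, r_winding, r_mains=0.4):
    """Approximate peak inrush (A) if switch-on occurs at the worst instant."""
    return (v_rms * math.sqrt(2)) / (r_winding + r_mains)

# 230 V mains, toroidal transformer with 2 ohm primary resistance (assumed)
print(f"{worst_case_inrush(230.0, 2.0):.0f} A peak")
```

Over 100A peak from a transformer that may only draw an amp or two at full load shows why the relay's inrush rating matters far more than its nominal current rating for these loads.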
Some heavy duty relays (contactors) only have a single pair of contacts, typically normally open. There are also 3-phase contactors that have three sets of contacts - one for each phase, and these are very common in industrial control systems. They are used to switch heavy current and/or higher than normal voltage, and have greater contact clearance and arc suppression features so that an arc cannot be maintained across the contacts when they are open. For particularly large currents (or for DC, which is a potential relay contact killer), there may be a magnet or even a forced air system to direct the arc away from the contact area. These are not common with normal relays.
Contact faces are made from various metals or alloys that are designed for the intended use. Some common materials and their applications are shown below [ 2 ]. This is not an exhaustive list, and you may see other metals or alloys referenced in relay specifications.
Hard Silver (Ag, Cu, Ni) - A standard contact material used in many general purpose relays; the copper and nickel add hardness. Minimum contact load 20V/ 50mA (single contact). Long contact life, but tends to oxidise at higher temperatures.

Silver Nickel (Ag, Ni) - More resistant to welding at high loads than hard silver, with high burn-out resistance. A good standard contact material. Minimum contact load 20V/ 50mA.

Silver Cadmium Oxide (Ag, CdO) - Used for high current AC loads because it is more resistant to welding at high switching current peaks. Material erodes evenly across the surface. Not recommended for breaking strong DC arcs because of the wear this creates (one-sided reduction). Minimum contact load 20V/ 50mA. Note that Cadmium was originally included in the list of materials prohibited under the European RoHS Directive, but is now exempt for this purpose (although this may change again at any time).

Silver Tin Oxide (Ag, SnO2) - The tin oxide makes the material more resistant to welding at high making current peaks. It has a very high burn-out resistance when switching high power loads, and low material migration under DC loads. Minimum contact load 20V/ 50mA. Useful where very high inrush currents occur, such as lamp loads or transformers. Silver tin oxide is frequently chosen as the replacement contact material for silver cadmium oxide.

Silver Tin Indium (Ag, SnO, InO) - Similar to silver tin oxide but more resistant to inrush. Minimum contact load 12V/ 100mA.

Tungsten (W) - More resistant to welding at high loads than hard silver, with high burn-out resistance. Minimum contact load 20V/ 50mA (single contact). Used for some heavy duty relays.

Gold Plating - 10µm (Au) - Used for switching low loads (> 1mA/ 100mV). The plating will be removed by friction and erosion after around 1 million switching cycles, even in 'dry' circuits (i.e. those with no DC and/or negligible AC). Used in single and twin contact forms (twin contacts are useful in dusty environments).

Gold Plating/ Flash - 3µm (Au) - Has the same qualities as 10µm Au but is less durable. It is generally used to prevent corrosion/ oxidation of relay contacts during storage.

Ruthenium (Ru) - A rare element that is highly resistant to tarnishing, used primarily in reed switches/ relays and other wear-resistant electrical contacts.

Rhodium (Rh) - A rare, silvery-white, hard and chemically inert transition metal. Like ruthenium, it is a member of the platinum group of elements. Used in reed switches.

Table 2.1 - Common Contact Materials
From the above, you'll see that some contact materials require a minimum voltage and/or current. At lower voltages and currents (such as 'dry' signal switching circuits) there isn't enough current to ensure that the contacts will make a reliable closure, which may result in noise, distortion or intermittent loss of signal. Mostly this isn't a problem, but it's something you need to be aware of.
Where good contact is needed with very low voltages and currents, gold or gold plating is a good choice. Note that gold is not a particularly good conductor, but it has the advantage that it doesn't tarnish easily, so there's rarely a problem with oxides that may be an insulator at normal signal voltages. Where silver (or many of its alloys) is used, relays may be hermetically sealed to prevent oxidation. The black tarnish (silver sulfide) is an insulator. It's not a good insulator, but it can withstand a few hundred millivolts (typical signal level) with ease. Some reed relays have the contacts in a vacuum, and this is common with high voltage types. An arc is difficult to create in a vacuum because there is no gas.
A common term you will hear is 'contact bounce'. When the contacts close, it's more common than not that there will be periods of connection and disconnection for anything up to a few milliseconds or so. The time depends on the mass of the contacts, the resilience of the contact arms and the contact closing pressure. A good example is shown below, taken from the reed relay shown in Figure 1.2. This is significantly better than most others, but shows clearly that even the 'best' relays have contact bounce. A certain amount of 'disturbance' can also be created when contacts open, but this is a different effect.
The horizontal scale is 50µs per division, so you can see that the contacts make and break several times in the first 150µs. After that, the closure is 'solid', with no further unwanted disconnections. Sometimes you can minimise bounce effects by operating two or more sets of contacts in parallel, but that's not a guaranteed reliable method. At one time you could purchase a mercury-wetted relay - the 'contacts' were based on a small quantity of mercury which formed an instant contact with no bounce at all. There were many different types at one stage.
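When a relay or switch contact is read by a microcontroller, bounce is normally removed in firmware. A minimal counter-based debounce sketch (shown in Python for clarity - the logic is language-agnostic, and the sample count and period are assumptions):

```python
# Counter-based debounce: the reported state only changes after the raw
# input has been stable for N consecutive samples. With (say) 1 ms samples,
# N = 5 comfortably covers the ~150 us bounce window seen above.

class Debouncer:
    def __init__(self, stable_samples=5, initial=False):
        self.state = initial        # debounced (reported) state
        self._count = 0             # consecutive disagreeing samples
        self._n = stable_samples

    def sample(self, raw):
        """Feed one raw reading; return the debounced state."""
        if raw == self.state:
            self._count = 0         # agreement resets the counter
        else:
            self._count += 1
            if self._count >= self._n:
                self.state = raw    # stable long enough: accept new state
                self._count = 0
        return self.state

db = Debouncer()
# A bouncy closure: a few make/break transitions, then solidly closed
readings = [True, False, True, False, True, True, True, True, True]
states = [db.sample(r) for r in readings]
print(states)
```

The reported state ignores the initial make/break chatter and only flips once the contact has been continuously closed for the required number of samples.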
Mercury-wetted relays used to be common for laboratory use to obtain test waveforms with picosecond risetimes, but of course the European Union's RoHS legislation has caused them to be banned completely. Mercury? Oh, no - you can't use that! Strangely, the EU still allows fluorescent lamps (both compact and full size), a few of which probably have as much mercury as a small laboratory mercury-wetted relay. One gets thrown away after a few thousand (or hundred) hours and the other will be kept forever. I'll let you guess which is which.
The vast majority of relays have break-before-make contacts. This means that one circuit is disconnected before the other is connected. Make-before-break relays also exist, but they are uncommon and were mainly used with telephony systems where a disconnection might result in a dropped phone call. If you really need make-before-break I expect that finding one that's both available and sensibly priced will be a challenge. If you need this functionality, see Project 219.
One area where electro-mechanical relays have real problems is switching DC. A relay that can handle 250V AC at 10A can generally be expected to handle a maximum of 30V or so with DC, because the voltage and current are continuous. With AC, both voltage and current fall to zero 100 or 120 times each second (for mains frequency applications), so the arc is (comparatively) easily quenched as the contacts open. With DC, there is no interruption, and an arc may be maintained across the contacts - even when they are fully open.
This is a very serious issue, and is something that is overlooked by a great many people. Even if the relay contact voltage and current are such that the arc extinguishes each and every time, the mere fact that there is an arc means that the contacts are under constant attack. With an arc, material is typically moved from one contact to the other. With AC, the polarity is usually random, so contact material is moved back and forth, but with DC it's unidirectional. It takes a long time with very robust contact materials like tungsten, but it still happens, and eventually the relay will fail due to contact erosion. The manufacturer's ratings are the maximum AC or DC voltage and current that will give the claimed number of operations. If either the rated voltage or current is exceeded, the relay will probably have a short life. DC is the worst, and DC fault conditions are often catastrophic for a relay that's intended to provide any protective function.
In some cases a magnet can be used to help quench the arc created as the contacts open. Because the arc is conducting an electric current, it both generates and can be deflected by a magnetic field. Magnetic arc quenching (or 'blow-out') is rarely provided in relays, but it may be possible to add it later on provided you know what you are doing and can position the magnet(s) in exactly the right place. You might see this technique used in high current circuit breakers, and even in some relays (although they are more likely to be classified as contactors).
There are countless 'speaker protection' circuits on the Net that may not actually work when they are most needed. To see how it should be done, have a look at the way the relay contacts are wired for Project 33. When the relay opens it puts a short across the speaker, so even if there is an arc, it passes to ground until a fuse blows. Any speaker 'protection' circuit that doesn't short the speaker could leave you well out of pocket, because not only is the amplifier probably fried, but so is the relay and the speaker it was meant to protect. A relay that can actually break 100V DC at perhaps 25A or more is a rare and expensive beast, but that's what might be needed for a high power amplifier.
The subject of relay contact materials, arc voltages and currents, metal migration during make and break operations (etc., etc.) is truly vast. It's the subject of academic papers, application notes and large portions of books, and it's simply not possible to cover everything here. Suffice to say that manufacturer's recommendations and ratings are usually a good place to start, and the maxima should never be exceeded. The number of electrical operations can be extended significantly by de-rating the contacts (using 10A relays for 5A circuits for example), and AC is nearly always much less troublesome than DC.
Snubbing networks and other measures that may be needed to protect the contacts from the load are covered in Part 2. This is a very complex topic, and depends a great deal on the exact nature of the load. In many cases nothing needs to be done if the voltage and current are both well inside the maker's ratings. In other cases extreme measures may be needed to prevent the contacts from being destroyed. DC is the worst, and high voltage and/or high current will require very specialised relay contacts and arc-breaking techniques. If possible, consider solid state relays for DC, because they don't use contacts so can't create an arc.
This really is a science unto itself, and thanks to the InterWeb you can find a lot of really good data. Unfortunately, it can be very difficult to find information that is both relevant and factual, so don't expect to find what you need on the first page of the search results, and in general ignore forum or Usenet posts. There's a great deal of disinformation out there, and whether it's by accident, design, or just people claiming to know far more than they really do is open to debate. Suffice to say that a great deal of such 'information' is just plain wrong.
In a great many cases, the only way to get a solution that works is by trial and error. This is especially true if you have a difficult load - whether because the supply is DC, the load is highly inductive, or high currents and voltages are involved. For large-scale manufacturing, getting a custom design is viable, but the costs will be high and can't be justified for small runs or one-off projects. I've covered a very small subset of possible failure modes and contact erosion - there is so much more to learn if you have the inclination.
A common way to designate a relay's contact arrangements is to use the 'form' terminology. For example, you will see relays described as '1 Form C' in datasheets, catalogues and even in web pages on the ESP site. This terminology is roughly equivalent to referring to SPST or DPDT for example.
Form A - Normally open (NO) contacts only
Form B - Normally closed (NC) contacts only
Form C - Changeover contacts (normally open, normally closed and common), break before make
Form D - Changeover contacts (normally open, normally closed and common), make before break ¹

¹ Uncommon, see below.
So a 1-Form-C relay has a single set of changeover contacts, 2-Form-A has two sets of normally open contacts, etc. Nearly all relays use break-before-make contacts. That means that during changeover, the normally closed contacts open before the normally open contacts close. Form-D (make-before-break) relays are very uncommon, and there's a period when the moving contact is connected to both the NO and NC contacts. Most 'Form D' relays that used to exist are now discontinued. If this is something that you really need, I recommend Project 219, which shows how to use a pair of break-before-make relays to achieve make-before-break. There are still some instances where this is necessary, but it's not a requirement in most cases.
One would think that this is too simple to even discuss, but that's definitely not the case. The coil is an inductor, and because it's wound around a magnetic material (usually soft iron or mild steel) the inductance is increased. It's also non-linear. When the coil is not energised there's a large air-gap in the magnetic circuit, and this means the inductance is reduced. Once the relay is energised, the magnetic circuit is completed, or at least the air-gap is a great deal smaller, so the inductance is higher.

I used an inductance meter to get the values shown below, but if you need an accurate measurement you'll have to use another method. The coil's DC resistance acts in conjunction with the inductance and affects the reading, so there's a significant error. True inductance can be measured by using a series or parallel tuned circuit with a capacitor to get a low frequency resonance (< 100Hz if possible) if you really want the real value. It's not often needed and you rarely need great accuracy, and although an inductance meter has a fairly large error used this way, it's fine for the purpose.
Inductance meter measurements taken from two of the relays pictured above gave readings of ...

Octal Base 10R    open      335 mH    186Ω coil resistance
                  closed    373 mH
STC 4PDT          open      283 mH    248Ω coil resistance
                  closed    303 mH
How large is the error? I checked the octal based relay using a series 5.18µF capacitor, and measured the peak voltage across the cap (indicating resonance) at 61Hz with the armature open and 37Hz with it closed. This gives an inductance of 1.3H open, 3.6H closed, so the error is substantial. There's plenty of scope to get the frequency measurement wrong too, because the 'tuned circuit' created has low Q and the frequency range is quite broad - expect the result to be ±25% at least, depending on how closely you can get an accurate peak voltage while varying the frequency. The formula is ...
L = 1 / (( 2π × f )² × C )
L = 1 / (( 2π × 61 )² × 5.18µ )
L = 1.3H
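As a quick sanity check, the resonance formula can be evaluated in a couple of lines of Python, using the measured values from above:

```python
import math

def inductance_from_resonance(f_hz, c_farads):
    """Inductance from LC resonance: L = 1 / ((2*pi*f)^2 * C)."""
    return 1.0 / ((2 * math.pi * f_hz) ** 2 * c_farads)

# Octal-base relay with a 5.18uF series capacitor (figures from the text)
print(inductance_from_resonance(61, 5.18e-6))  # armature open: ~1.31 H
print(inductance_from_resonance(37, 5.18e-6))  # armature closed: ~3.57 H
```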
Although the error is large, the simple fact of the matter is that we don't really care. I included the inductance purely to demonstrate that it changes depending on the armature's position, but the coil inductance isn't provided by most relay manufacturers because you don't need it. These data are provided purely for interest's sake. Since inductance is part of the relay's 'being' (as it were), you can't do anything about it.
The combination of coil inductance and the moving mass of the armature means that relays will have a finite contact closure time. The actual time will vary from one relay to the next, but it's unwise to assume that it will be less than around 10ms for a typical SPDT 10A relay (such as the Zettler relay shown in Figure 1.2). I ran a test, and that relay provides contact closure in 9.8ms, not including contact bounce time. Smaller relays will be faster, and larger relays slower. This isn't something you'll find on most spec sheets, and the only way to find out exactly how fast (or otherwise) your relay is, will be to test it.

When power is applied to a relay coil, you might expect that it will pull in instantly. However, the coil is an inductor, so the operating current is not reached as soon as power is applied. For example, with a 280mH coil, it may take up to 2ms before there's enough current to attract the armature. The coil current has to reach around 75% of the normal operating current (steady-state) before the magnetic field strength is great enough to attract the armature. The armature also has mass, so it has to accelerate from rest, and this takes time as well. The delay isn't usually a problem, but it does mean that you can't expect an electromechanical relay to provide instantaneous connections. If you need something to happen at a very precise time, then you'll have to use a solid state relay (see below for more information). It's not possible to guarantee accurate timing, even if multiple tests show it to be consistent. Over time, it will change due to mechanical wear and gradual contact erosion.
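The delay before pull-in can be estimated from the standard RL current rise. This is a sketch only - the 280mH figure is from the text, but the 186Ω winding resistance and the 75% threshold are assumptions made for the sake of the example:

```python
import math

def time_to_fraction(l_henry, r_ohms, fraction):
    """Time for a series RL circuit to reach a given fraction of its
    steady-state current: i(t) = I * (1 - exp(-t * R / L))."""
    return -(l_henry / r_ohms) * math.log(1.0 - fraction)

# 280 mH coil, assumed 186 ohm winding, 75% pull-in current threshold
t = time_to_fraction(0.28, 186.0, 0.75)
print(f"{t * 1e3:.2f} ms")  # roughly 2 ms, in line with the figure in the text
```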
Because the coil is an inductor, it also stores a 'charge' as a magnetic field. When voltage is removed, the magnetic field collapses very quickly, and this generates a large voltage across the coil. The standard fix is to include a diode, wired as shown below (Figure 4.1A). However, adding the diode means that the relay will release slower than without it, because the back-EMF generates a current that holds the relay closed until it dissipates as heat in the winding and diode. The flyback voltage will attempt to maintain the same current flowing in the coil as existed when the current was being applied. Of course it can't do so because of losses within the circuit.

A relay coil's magnetic strength is defined by the ampere turns, and the current is defined by the coil's resistance. Let's assume as an example that a relay needs 50A/T (ampere turns) to activate reliably. A single turn with 50A will provide 50A/T, as will 10 turns with 5A, but they are impractical unless the relay is intended to sense an over-current condition (used for electric motor start switches for example). It will be more useful to have a larger number of turns with less current, so we might wind 1,000 turns onto the bobbin. The wire will be fairly fine, and may have a resistance of around 240 ohms. Now we only need 50mA to get the 50A/T needed, so applying 12V will produce 50mA through the 240 ohm winding. Since there are 1,000 turns at 50mA, that works out to 50A/T again, so we have the required magnet strength and a sensible voltage and current.

Please note that this info is an example only, and the actual ampere turns needed for a typical relay is fiendishly difficult to find on the Net. If you really need to know, you'll have to test it yourself by adding a winding with a known number of turns. If you add 50 turns and the relay pulls in at 600mA, that's 30A/T. Since you always need to allow for coil self-heating and/or a lower than normal supply voltage, you'd need to use more turns or a higher current. Most relays are designed to act with around three-quarters of the rated voltage. A 12V relay should activate with a voltage of about 9 volts. This does vary, and many datasheets provide 'must operate' and 'must release' voltages.
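The ampere-turn arithmetic is simple enough to sketch; all values are the worked examples from the text, not data for any real relay:

```python
def ampere_turns(turns, current_a):
    """Magnetomotive force in ampere-turns (A/T)."""
    return turns * current_a

# 1,000 turns of fine wire, ~240 ohms, 12 V supply
current = 12.0 / 240.0              # 50 mA
print(ampere_turns(1000, current))  # 50.0 A/T - the assumed pull-in figure

# Test-winding method: 50 extra turns, relay pulls in at 600 mA
print(ampere_turns(50, 0.6))        # 30.0 A/T
```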
A pretty much standard circuit for a relay is shown below, along with a useful modification. A voltage is applied to the input (typically 5V from a microcontroller), and that turns on Q1 and activates the relay. Without D1, the voltage across Q1 will rise to over 400V (measured, but it can easily exceed 1kV) when the transistor is turned off, which would cause instant failure of Q1. D1 (sometimes referred to as a 'freewheeling' or 'catch' diode) acts as a short circuit to the back-EMF from the coil, so the voltage across Q1 can only rise to about 12.6V. However, as long as enough current circulates through the relay coil and D1, the relay will not release. It may take several milliseconds before the armature starts to move back to the rest position after Q1 is turned off.

I tested a relay with a 270 ohm coil having 380mH of inductance - although the latter is not a specified characteristic in most cases. If you need to know the inductance you will probably need to measure it. With just the diode in circuit, there is enough coil current maintained to keep the relay energised for some time after Q1 turns off. The release time is a combination of electrical and mechanical effects. If the resistor (R2) is the same as the coil resistance, the 'flyback' voltage will be limited to double the supply voltage, easily handled by the transistor I used.

You can also use a zener and a diode, typically using a 12V zener. It can be rated for up to twice the applied voltage, in which case the peak voltage will be about 3 times the supply voltage. A zener is slightly better than the diode/ resistor combination shown, and is covered in more detail below. The graphs below show the behaviour of the circuit with and without the resistor and diode. The measured 400V or more is quite typical of all relays, which is why the diode is always included. Voltage peaks that large will destroy most transistors instantly, and while a high voltage transistor could be used, that simply adds cost. The flyback voltage is created by exactly the same process used in the standard Kettering ignition system used in cars, but without the secondary winding. It's also the principle behind the 'flyback' transformer used in the horizontal output section of a CRT TV set (remember those?) or flyback switchmode power supplies.

Workshop tests were done to see just how much voltage is created, and how quickly a fairly typical relay could be operated. I used the 'Low Cost SPDT' relay shown in Figure 1.2 for the tests. The results were something of an eye-opener (and I already knew about the added delay caused by a diode!). The relay I used has a 12V, 270 ohm coil and has substantial contacts (rated for 10A at 250V AC). With no back-EMF protection, the relay closed the normally closed contacts (i.e. the relay fully released) in 1.12ms - this is much faster than I expected, but the back-EMF was over 400V - it varied somewhat as the switch contacts arced on several tests. When a diode was added, the drop-out time dragged out to 6ms, which is a considerable increase, but of course there was no back-EMF (Ok, there was 0.65V, but we can ignore that). Using the diode/ resistor method shown above, release time was 4ms, and the maximum back-EMF was 24V (double the supply voltage). This is a reasonable compromise, since there are many transistors with voltage ratings that are suitable for the purpose.

The blue trace shows when the NC contact is made as the relay releases, and is from zero to 12V. The peak relay voltage ((A) - No Diode) measured over 400V on my oscilloscope, and due to the voltage range little detail about the voltage collapse is visible. In both cases, the relays were wired in the same way shown in Figure 4.1, but using a switch instead of a transistor. The second trace shows the release time and voltage spike when a diode and 270 ohm resistor are used to get a higher release speed. The diode isn't essential, but without it the relay circuit will draw twice as much current as it needs because of the current through the resistor. Note that the horizontal scale is 1ms/ division in (A) and 2ms/ division in (B), and the vertical scale for the relay back-EMF (yellow traces) is also changed from 100V/ division (A) down to 10V/ division in (B).

The kink in the relay voltage curve is caused by the armature moving away from the relay pole piece and reducing the inductance. The 'NC' contacts close as the relay releases. As you can see, this is 4ms after the relay is disconnected (with the resistor + diode in place). With no form of flyback (back-EMF) suppression, the relay will drop out faster because the current is interrupted almost instantly (excluding switch arcing of course).

These graphs are representative only, as different relays will have different characteristics. You can run your own tests, and I encourage you to do so, but in all cases the behaviour will be similar to that shown. Upon contact closure of the normally open contacts, I measured 2.5ms of contact bounce (not shown in the above oscilloscope traces). These tests might be a little tedious, but are very instructive.

When the resistor has the same value as the coil's internal resistance, the back-EMF will always be double the applied voltage. If the resistor is 10 times the coil's resistance, the peak voltage will be 10 times the applied voltage (both are plus one diode voltage drop of 0.7V). This relationship is completely predictable, and works for almost any value of coil and external resistor. It's simply based on the relay's current. If the relay draws 44mA, the collapsing magnetic field will attempt to maintain the same current. 44mA across the external 270 ohm resistor will generate 12V, and if the resistor is 2.7k the voltage must be 120V (close enough).
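This relationship can be sketched with a simplified model that assumes the transistor sees the supply voltage plus the clamp voltage plus one diode drop (a reasonable approximation, not an exact circuit analysis):

```python
def switch_peak_voltage(v_supply, r_coil, r_external, v_diode=0.7):
    """Approximate peak voltage across the switching transistor with a
    diode + resistor clamp.  The collapsing field forces the full coil
    current through the external resistor, so the transistor sees the
    supply plus I * R_ext plus one diode drop."""
    i_coil = v_supply / r_coil
    return v_supply + i_coil * r_external + v_diode

print(switch_peak_voltage(12, 270, 270))   # ~24.7 V - double the supply
print(switch_peak_voltage(12, 270, 2700))  # ~132.7 V - about ten times
```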
While this trick was common with early electric clocks (but without the diode, because diodes hadn't been invented at the time), it seems that few people use it any more. That's a shame, because it works well, limits the peak voltage to something sensible, and reduces the relay release time compared to using only a diode.

If you search hard enough, you will find it mentioned in a few places, and it's been pointed out [ 8 ] that simply using a diode can cause the relay to release too slowly to break the 'tack welding' that can occur if the contacts have to make with high inrush currents. This can happen because the armature's physical movement is slowed down, and it doesn't develop enough sudden force to break a weld. It's far more complex than just an additional delay when a diode is placed in parallel with the coil.
The zener diode scheme shown above may be a bit more expensive than a resistor, but it allows the relay to deactivate much faster. The most common arrangement will be to use a zener rated for the same voltage as the relay's coil and supply. In the example, the release time was 2.6ms, and that's significantly faster than obtained using a resistor and diode (4ms). A higher voltage zener will be faster again, with a 24V zener giving a drop-out time of 1.84ms. If the voltage is too high you may end up needing a more expensive drive transistor to get the voltage rating, but using more than double the supply voltage won't improve matters by very much. Overall, this arrangement is probably the best compromise. It's faster than a resistor for not a great deal of extra cost, and doesn't require you to try to purchase parts that may not be readily available at your local electronics shop.
I also tested the circuit shown with a 100nF ceramic capacitor in parallel with the coil. The flyback voltage measured 86V, and the relay released in 1.23ms. That's a good result, but the voltage is higher than desirable and the cap needs to be a high-reliability type to ensure a long life. This makes it more expensive than other options, but there may be situations where this turns out to be the best choice for the application, with or without a series resistor.

Other transient suppression techniques can be used that don't affect the armature release speed greatly, including using a carefully selected TVS diode, a low voltage MOV or a resistor/ capacitor snubber network. The latter is generally not cost effective and is rarely used now, but was fairly common in early systems and is still useful with AC relay coils. If relays are to be used towards their maximum contact ratings, be aware that these are often specified with no form of back-EMF suppression, which ensures the fastest possible opening time for the contacts. If you decide to use a TVS, you either need a bidirectional type, or add a diode in series. MOVs will work well, but their clamping voltage is something of a lottery, so you need to allow a safety margin for the switching transistor's peak voltage rating that accommodates the voltage range of the MOV (or TVS - they aren't precision devices either).

What about the diode ratings? The diode must be rated for the full supply voltage as an absolute minimum. That part is easy, because the 1N4004 diode is not only ubiquitous, but it's as cheap as chips. There aren't many applications where a relay coil's supply voltage exceeds 400V. It can be tempting to use 1N4148 diodes, and although their voltage rating is usually fine, they are rather flimsy and their current rating is only 200mA continuous or 1A peak (1 second, non-repetitive). I don't really trust them for anything other than signal rectifiers, but a lot of commercial products use them across relays.

The diode current rating should ideally be at least the same as the relay coil current, not because it's needed but to ensure reliability and longevity. For most general purpose relays, the 1N4004 is a good choice - 1A continuous, 30A non-repetitive surge (8.3ms) and a 400V breakdown voltage. Remember that the peak current through the diode will be the same as the relay coil current, so if you have a (big) relay that pulls 2A coil current, you need a diode rated for at least 2A, preferably more. You can rely on the rated surge current for the diode, but it's better to allow a generous safety margin. The cost is negligible.

So, you may have thought that relay coils were simple, and you only need to add a diode so the drive transistor isn't destroyed when it turns off. Now you know that this is actually a surprisingly complex area, and there are many things that must be considered to ensure reliability and longevity. It's only by research and testing that you can know the effects of different suppression techniques and the limitations that each imposes.
To confuse matters more, some relays are designed so that the coils can be run from AC, without the noticeable 'chatter' (vibration that causes noise - often very audible) or continuous contact bounce that would otherwise result. AC relays can usually be operated from DC with several caveats, but a DC relay coil should never be used with AC. Larger AC relays use a laminated steel polepiece, yoke and armature to reduce eddy current losses that would cause overheating, but this is not generally a problem with comparatively small relays. The current flow in a DC relay coil is determined by its resistance, but when AC is used there is a combination of resistance and inductive reactance - covered by the term 'impedance'. If the maker doesn't tell you the coil's current, it will have to be measured, as it can't be determined by measuring the coil's resistance.

There's a little secret to making the coil work with AC, and that's called a 'shading' ring (or shading coil). If you look closely at the photo of the larger octal relay in Figure 1.2, you can see it (well, ok, you can't really see it clearly, so look at Figure 4.4 instead). There's a thick piece of plated copper pressed into the top of the polepiece, and that acts as a shorted turn, but only on half the diameter of the centre pole. The shorted turn causes a current that's out-of-phase in its part of the polepiece, and that continues to provide a small magnetic field when the main field passes through zero. However unlikely this might seem, it works so well that the AC relay pictured above is almost completely silent, with no chatter at all.

This is the very same principle as used in shaded-pole AC motors (look it up if you've never heard the term). The small magnetic field created by the shading ring is enough to hold the relay's armature closed as the main field passes through zero, eliminating chatter and/or high speed contact movements that would eventually wear out the contacts just by the mechanical movement. Chattering contacts will also create small arcs with high current loads that will damage the contacts and possibly the load as well.

AC relays can be used with DC, but a few problems may be encountered. You will need to reduce the DC voltage by enough to ensure that the coil can pull in the relay reliably but without overheating. You might also experience possible armature sticking - see below for more info on that phenomenon. In my case, the 32V AC relay works perfectly with 24V DC, but it draws almost double the current that it does with AC. The coil has a resistance of 184 ohms and draws 62mA at 32V AC - an impedance of 516 ohms. For roughly the same current, it should be operated at no more than 12V DC, but it will not pull in at that voltage. At 24V DC the coil will draw 129mA and dissipate over 3W, and it will overheat. The pull-in current with 32V AC is 104mA, because the inductance is low when the armature is open and more current is drawn. That means that the impedance is only 307 ohms when the armature is open.
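The impedance figures above are easy to reproduce, and the coil's reactance (and a rough inductance) can be inferred from them. A sketch assuming 50Hz mains; the inductance is only indicative because the measured impedance also includes iron losses:

```python
import math

def coil_impedance(v_ac, i_a):
    """Magnitude of coil impedance from AC voltage and current."""
    return v_ac / i_a

def reactance_and_inductance(z_ohms, r_ohms, f_hz=50.0):
    """Split |Z| into resistance and reactance: Z^2 = R^2 + XL^2,
    then L = XL / (2*pi*f)."""
    xl = math.sqrt(z_ohms ** 2 - r_ohms ** 2)
    return xl, xl / (2 * math.pi * f_hz)

z_held = coil_impedance(32, 0.062)    # ~516 ohms, armature closed
z_pull = coil_impedance(32, 0.104)    # ~308 ohms, armature open
xl, l = reactance_and_inductance(z_held, 184)
print(f"{z_held:.0f} ohms held, {z_pull:.0f} ohms at pull-in")
print(f"XL = {xl:.0f} ohms, L = {l:.2f} H (armature closed)")
```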
Never use a DC relay with AC on the coil, as it will chatter badly and may do itself an injury due to the rapid vibration of the armature. Contacts will almost certainly close and open at twice the mains frequency rate (100 or 120Hz). If you must operate a DC relay from an AC supply, use a bridge rectifier and a filter capacitor. Release time will depend on the value of the filter cap, coil resistance, etc. If there is a capacitor across the relay coil of more than a few microfarads (depending on relay size of course), you don't need a diode because the capacitor will absorb and damp the small back-EMF. You can still include the diode if you like - it won't hurt anything, but it won't do much good either.

The yoke and armature of most relays is usually just mild steel, not the 'soft iron' that you'll see claimed in many articles. Mild steel is magnetically 'soft' in that it doesn't retain magnetism very well (holding a magnetic field is known as remanence), but it does have some remanence so may become slightly magnetised. This can lead to the armature sticking to the polepiece, and that can be a real issue. If the armature sticks, the contacts will not release back to the 'normal' state when coil current is removed. This can be overcome by a stronger spring, but then the coil needs more current to pull in the armature against the tension provided by the spring.

In many DC relays, the centre polepiece may have either a very thin layer of non-magnetic material on the top (where the armature makes contact) or a tiny copper pin, placed so that the armature can't make a completely closed magnetic circuit. This small gap is designed to be enough to ensure that the relay can always release without resorting to a stronger spring. You will almost certainly see this technique applied in 'sensitive' relays - those that are designed to operate with the lowest possible current.

With AC relay coils, if you need back-EMF suppression then you have to use a bidirectional (non-polarised) circuit. This can be a TVS with suitable voltage rating to handle the peak AC voltage, two back-to-back zener diodes, again with a voltage rating that's higher than the peak AC voltage, or a resistor/capacitor 'snubber' network. It may be necessary to allow a higher back-EMF than you might prefer to ensure that the armature returns to the 'rest' position without being slowed down by the suppression circuit.
This article will not cover drive circuits in any detail. This is simply because there are so many possibilities that it would only ever be possible to cover a small selection. Common circuits are shown throughout this article, but there are many others that will work too.

I've shown the most basic NPN transistor drive, where the relay coil connects to the supply rail and the drive circuit connects the other end to earth/ ground. A PNP transistor can be used instead, but wired to switch the supply to the relay coil (the other end is earthed). Relays can be driven by emitter followers; that's not very useful as a stand-alone switching circuit, but it can be handy in some cases. Some relays with particularly low coil current can be driven directly from the output of an opamp, and using 555 timers as relay drivers is also common.

You can also use low-power MOSFETs (such as the 2N7000 for example), and once upon a time even valves were used to drive relay coils in some early test equipment and industrial controllers. There are dedicated ICs that can be used, and of course any relay can be activated using a switch (of almost any kind) or another relay. You might want to do that if a low power circuit has to control a high power load, and relays are used as a form of amplification. For example, your circuit might have a reed relay switching power to a heavy duty relay that applies mains power to a contactor's coil (if you recall from the intro, a contactor is just a really big relay).

Where switch-off time is particularly critical, controlled avalanche MOSFETs might be appropriate. These are specifically designed to allow any transient over-voltage to be dissipated harmlessly in the parasitic reverse-biased diode that's a standard feature of all MOSFETs. Don't push any MOSFET that is not specifically rated for avalanche operation (such devices may be classified as 'ruggedised' or avalanche rated) into voltage breakdown. For most relay applications I wouldn't even consider this approach, as it's simply not necessary for most 'normal' drive circuits. If you want to play with using avalanche rated MOSFETs, the IRF540N is a low cost MOSFET that should survive with no diode in parallel with the coil.

Driving AC relay coils is most commonly done using either a switch or another relay. It's certainly possible to make an electronic circuit that can drive an AC coil, but in general it would be a pointless exercise. The vast majority of all control systems will use DC coils, and it's uncommon for an AC-coil relay to be the only type available that can handle the power of the controlled system (whatever it might be). If that is the case with a microcontroller or other IC based controller, then it's far easier to use a relay with a DC coil to switch power to the AC relay coil.

You need to be aware that switching the coil of a relay on or off can induce transients into low-level circuitry. PCB layouts generally need to be carefully optimised to ensure that the relay power - including the return/ earth/ ground circuit - is isolated from the supply used for the low-level circuitry. If this isn't done in audio circuits, clicks and pops may be audible when relays operate. For control or measurement systems, the relay coil transients may be interpreted as valid data, causing errors in the output. If you opt for a circuit using a diode and zener for example, the turn-off transient is very fast, which makes it more likely to induce transients into surrounding circuitry.
Taking relays to the extreme, you can even have relay logic! This used to be quite common for process controllers and other industrial systems, where control switches and relay contacts are arranged to create the basic logic gates - AND, NAND, OR, NOR and NOT (inverter) and XOR. One of the most common (and complex) forms of relay logic was used in telephone exchange ('central office') switches. These interpreted the number dialled and routed the call to the requested destination - often through several exchanges. The exchange switches used a combination of conventional relays and rotary 'stepper' relays. A uniselector worked on one (rotary) axis, and the step-by-step two axis stepper (one rotary and one vertical) was commonly known as a Strowger switch after its inventor. Later exchange switches used a crossbar matrix switch, with the last of them being electronically controlled.

The diagrams used to describe relay logic are generally referred to as 'ladder' diagrams, and you'll also see the term 'ladder logic' used. This used to be (and perhaps still is in some cases) a required area of study for anyone involved in industrial electronics. It is so entrenched that many microprocessor based control systems are still programmed using a ladder diagram, even though most of the functions are in software. One manual I saw for a 'logic relay' extended for nearly 300 pages!

The three drawings above show the fundamental logic building blocks - AND, OR and XOR (exclusive OR) gates. Diodes are omitted for clarity. With an AND gate, Input1 AND Input2 must be high to energise the two relays, and the circuit is completed. In the second, if Input1 OR Input2 is high, the circuit is completed. It remains so if either or both inputs are high. The final one is the XOR gate. The output will be asserted only if Input1 and Input2 are different. If both are high or low, the circuit is not completed. Inverse versions (NAND, NOR) are achieved simply by using normally closed contacts instead of normally open as shown. There is no inverse for the XOR gate. Inverted logic can be used with relays in the same way as with semiconductor logic.
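The gates described can be modelled as simple Boolean functions, where True means a contact path completes the circuit. This is a toy model of the logic only, not a wiring guide:

```python
def relay_and(in1, in2):
    """Two NO contacts in series - both relays must be energised."""
    return in1 and in2

def relay_or(in1, in2):
    """Two NO contacts in parallel - either relay completes the circuit."""
    return in1 or in2

def relay_xor(in1, in2):
    """Changeover contacts cross-wired so the circuit completes only
    when the two inputs differ."""
    return in1 != in2

def relay_nand(in1, in2):
    """Two NC contacts in parallel - broken only when both energise."""
    return not (in1 and in2)

# Print the truth table for the three gates shown in the drawings
for a in (False, True):
    for b in (False, True):
        print(a, b, relay_and(a, b), relay_or(a, b), relay_xor(a, b))
```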
This is a very specialised area, and while it's certain that there are still some early relay based logic systems still in use, in most cases they will have been replaced many years ago. Unlike a microcontroller, re-programming a true relay logic system is generally done with hard wiring. All the required inputs are brought to the main 'logic' unit, and the outputs control the machinery.

Inputs can include push-buttons, pressure sensors, limit switches, thermal sensors, magnetic detectors and/or the output signals from another relay logic unit. Outputs are typically motors, heaters, valves for water, hydraulic fluid, gas, etc. Generally not thermionic valves (aka 'tubes'), although that's possible too - older high power RF amplifiers for high frequency welding systems for example.

Another related use for relays is a switching matrix. Crossbar telephone exchange switches are one example, but matrix switches are used to divert all manner of signals to a required destination, and to direct outputs of other equipment to the right place. Process control, automated test equipment, audio, video and RF switching matrixes are just a few of the possibilities. Reed relays are particularly well suited to matrix switching systems for low power signals.

Relay logic and matrix switching are vast topics, and I have no intention to go into any more detail. There is so much information and the applications so diverse that even scratching the surface would occupy several books. If you are at all interested, it's worth doing a search for 'relay logic' or 'relay matrix' - you'll be surprised at the number of web pages that are devoted to the topics.

Most detailed specifications for relays will provide the pull-in (or pick-up) and release (drop out) voltages. These vary widely depending on the relay's construction, but you might see figures that indicate that a particular relay should pull-in at 75% of the rated voltage, and should release when the voltage falls to 25% of rated voltage. Based on this, a typical 12V relay should pull-in at about 9V, and should release when the voltage has fallen to 3V. This is a test you might be able to run yourself, but in the majority of cases it doesn't make a lot of difference. The pull-in and release voltages may also be referred to as the 'must operate' and 'must release' voltages, and they vary with different relays.
Most circuits are designed to switch the power to relays quickly, commonly using a circuit such as those shown in Figure 4.1. The full voltage appears almost instantly, and when the transistor switch turns off the supply current is interrupted immediately. The relay current continues to flow via the diode, but that doesn't affect the actual voltage at which the relay releases. What these numbers do tell us is that once a relay has pulled in, a significantly lower voltage and current will keep it in the energised state. This means that it's possible to reduce the current and keep the relay energised. This leads us to ...

The time it takes for the contacts to move from one set of contacts to the other depends on many factors. One that I measured took 4ms to make the transition from NC to NO, which for the particular relay meant the moving contact had to move about 0.5mm (an average velocity of about 125mm/s if you think that's useful). This was reasonably consistent whether the relay was energised or de-energised, but without the diode the time was extended to almost 12ms due to contact bounce. Why? Because the armature can accelerate much faster without the diode, and the higher speed means more bounce. Pull-in time is dictated by the maximum available current (about 40mA for the relay I tested) and the coil inductance. Release time with the diode will generally be similar, but without a diode, acceleration is much faster. The diode (or other peak voltage suppression technique) is almost always needed, because the coil voltage can exceed 500V when the circuit is broken. (This is discussed in Section 4).
The test circuit is shown below. The output is normally low, and can only become high when both NO and NC contacts are disconnected. All 'standard' relays are break-before-make, and while make-before-break relays used to be available, they seem to have been discontinued.
As long as the moving contact is between the NO and NC fixed contacts, there is no current flow, so the output is pulled high by the 1k resistor. My scope counted (typically) more than 40 transitions when the relay was de-activated without the diode. With the diode, activation and de-activation showed 10-15 transitions - all the result of contact bounce. It's not often that you need to know the transition time, but in some instances it might be useful (not that I can think of too many). Understanding that contact bounce is very real is important, and knowing how it can be measured (for either contact) can be useful. The individual contact bounce for either contact by itself is measured by removing the connection to ground from the contact that you're not measuring.
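When a microcontroller reads relay or switch contacts, this bounce is usually handled in software. A minimal debounce sketch (hypothetical, not from the article) that only accepts a new level after it has been stable for several consecutive samples:

```python
def debounce(samples, stable_count=5):
    """Return the sequence of debounced levels from raw 0/1 samples.
    A new level is accepted only after it has been seen 'stable_count'
    times in a row (e.g. 5 samples at 1ms intervals = 5ms settle time)."""
    current = samples[0]          # last accepted (debounced) level
    candidate = current           # level currently being evaluated
    run = 0                       # consecutive samples at 'candidate'
    debounced = [current]
    for s in samples[1:]:
        if s == candidate:
            run += 1
        else:
            candidate = s
            run = 1
        if candidate != current and run >= stable_count:
            current = candidate
            debounced.append(current)
    return debounced

# A bouncy closure: brief 1/0 chatter, then a solid 1
raw = [0] * 10 + [1, 0, 1, 0] + [1] * 10
print(debounce(raw))  # the chatter collapses to a single 0 -> 1 transition
```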
There is one application where the release or drop-out voltage needs to be known. In some systems (especially battery operated), it may be important to get the maximum possible efficiency from a relay. This means that the coil is supplied with a low holding current after the relay has been activated. This is the minimum safe current that will keep the relay energised, and battery drain is reduced accordingly. Early systems used a resistor, but there are now ICs available that use PWM to modify the current profile after the relay has settled [ 3 ].
When first activated, the relay coil receives the full voltage and current for a preset period, after which the circuit reduces the current to a known value that will keep the relay energised. If you plan to use this type of device, you will need to know the coil inductance, because that determines the proper PWM switching frequency. A simple system such as that shown below may be all you need though. It doesn't have the high efficiency of a switchmode solution, but it's simple, cheap and effective. I've assumed a relay coil resistance of 270 ohms.
Looking at the simple R/C circuit, when Q1 is switched on, C1 is discharged and can only charge via the relay coil. The coil therefore gets the full voltage and current when Q1 is turned on, but as C1 charges, both are reduced. The coil current eventually settles at exactly half the normal value, in this case about 22mA instead of 44mA. The same trick can be used with higher than normal supply voltages, allowing the resistor to limit the current to a safe holding value, but providing a 'boosted' current as the relay is energised. Putting up to 24V or so across a 12V coil momentarily usually won't damage it, provided the long term operating current is not more than the rated value. In most cases the coil current can be halved and the relay will not release. This must be tested and verified of course. The capacitor should be selected to give a time constant of at least 100ms, which is usually enough time for the relay to pull in properly. The time constant is determined by ...
t = R × C     where R is the series resistance in ohms (R2), and C is in Farads (C1)
t = 270 × 470µF = 126ms
Using a larger capacitor is quite alright. The goal is to ensure that the relay gets a minimum of 90% of its full rated coil current for at least 5ms for typical small relays. A 470µF cap with the relay tested gives 40mA or more coil current for over 13ms - a good result. Heavy duty relays may need more time, and the capacitor should be larger than determined from the above calculations. There is no maximum value and all caps (above the minimum suggested) will work, but if too large the cap will be physically larger and more expensive than is necessary for reliable operation. Always test your final circuit thoroughly to make sure it works every time.
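The time constant arithmetic, along with the eventual holding current for the R/C circuit, can be sketched as follows (hypothetical function names, using the values from the text):

```python
# R/C holding-current circuit, values from the text:
# 12V supply, 270 ohm coil, R2 = 270 ohms, C1 = 470uF.
def time_constant_ms(r_ohms, c_farads):
    return r_ohms * c_farads * 1000.0

def holding_current_ma(v_supply, r_coil, r_series):
    # Once C1 is fully charged, the coil current is set by R2 + coil.
    return v_supply / (r_coil + r_series) * 1000.0

print(time_constant_ms(270, 470e-6))     # about 127 ms
print(holding_current_ma(12, 270, 270))  # about 22 mA (half of 44 mA)
```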
The pulse width modulation (PWM) driver is a little harder to understand unless you have some knowledge of PWM circuits feeding inductive loads. The PWM driver is 'symbolic' only, and does not represent any particular device. 'Ct' is a timing cap, used to set the operating frequency. When the circuit is triggered, the relay gets a steady current for a preset time (perhaps 1/2 second or so - the waveform is not to scale). Then the internal transistor turns on and off rapidly, usually at 20kHz or more. D1 now needs to be either a very fast diode or (preferably) a Schottky type, and every time the switch turns off, back-EMF maintains current through the coil. If the final duty cycle is 50%, then the average current through the coil and diode will be 50% of the maximum (44mA reduced to 22mA for the demonstration relay). The advantage is that there is no power lost in an external resistor, and because of the switchmode circuit the current drawn from the supply will only be 11mA ... in a perfect world. In reality there will be some losses, so supply current may be a little higher than the ideal case.
The driver IC is a switching regulator, so the overall efficiency is much higher than the resistor-capacitor version. The cost is relative complexity, and the ICs are more expensive than a transistor, but if battery life is paramount then you don't have a choice, other than to use a latching relay. The current reduction can be well worth the effort if you need to conserve power. In many cases a microcontroller can be programmed to do the same thing, driving a switching transistor instead of the dedicated IC. If you plan to use a PWM efficiency circuit, get relays intended for that purpose if possible. General purpose (solid yoke and armature) relays may overheat due to eddy-current losses if the ripple current through the coil is too high.
I ran a test of the PWM efficiency circuit on a general purpose 12V relay with a nominal 240 ohm coil and an inductance measured at 300mH. Even with a 1kHz drive waveform, there was only very minor heating detected in the yoke. For the 'main' test, I used a 1N4148 diode and a BC550 transistor (neither is ideal, but both ran almost cold) and drove the base with a 5kHz squarewave. The input current measured 48mA with a steady-state input, and it fell to 11.7mA when driven by a 50/50 squarewave. Although the voltage across the coil varies across the full 12.8V range (the diode forward voltage is added to the supply voltage), the current through the coil is fairly steady at 23.4mA with about 5mA of ripple, so eddy current losses are lower than you might expect. The fast switching waveform will cause interference in low level signals that are nearby, and that will probably rule out PWM control in audio or test and measurement applications.
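The ideal-case figures for this test relay can be checked with a quick calculation. This is a sketch that assumes a lossless switch and diode, which the measured 11.7mA supply current comes reasonably close to:

```python
# Idealised PWM holding figures for the test relay (12V supply,
# 240 ohm coil, 50% duty cycle), assuming lossless switching.
def pwm_coil_current_ma(v_supply, r_coil, duty):
    # Average coil current: the coil sees duty * supply volts on average.
    return duty * v_supply / r_coil * 1000.0

def pwm_supply_current_ma(v_supply, r_coil, duty):
    # The supply only delivers the coil current while the switch is on,
    # so the average input current is duty * coil current.
    return duty * pwm_coil_current_ma(v_supply, r_coil, duty)

print(pwm_coil_current_ma(12, 240, 0.5))    # 25 mA average coil current
print(pwm_supply_current_ma(12, 240, 0.5))  # 12.5 mA ideal input current
```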
Note that the measured inductance is wrong according to a low frequency test as described earlier, but we still don't care. Most inductance meters test at a fairly high frequency, and PWM is performed at a high frequency too. The measured inductance is a good indicator of the minimum PWM frequency that can be used, and if the true inductance turns out to be higher than measured, that simply means there's less ripple current with PWM operation.
Regardless of the type of circuit, the optimum hold current may be more or less than the 50% used as an example. This means that the resistor value may not be the same as the coil's resistance, but is adjusted to suit the relay. Likewise, the duty-cycle of a PWM circuit may also need to be changed to suit the relay. The 50% figure works with most relays, but some will be happy with less, others may need more.
An unexpected advantage of using an 'efficiency' scheme (whether active or passive) is that the relay's release time is reduced because there's a much lower magnetic field and less back-EMF. However, this is something that you'd have to test thoroughly for your particular application, because every relay type will be somewhat different from others, even if superficially the same.
Keep in mind that the relay coil is temperature sensitive because of the thermal coefficient of resistance of the copper wire (about 0.004/°C). This can be approximated to 4% resistance change for each 10°C. When the relay coil is hot the pick-up voltage will be increased in proportion to the temperature. This may be because the coil has been operated for some time and become warm (or hot), or due to high ambient temperature. The drop-out voltage will also be increased, so the relay may release at a higher voltage than expected. In most circuits this is not a problem, but it is something you may need to consider in some applications.
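The effect of coil temperature on resistance (and hence pick-up voltage) can be estimated from that coefficient. This is a sketch only - the coefficient is approximate, and the 20°C reference temperature is my assumption:

```python
# Copper coil resistance vs temperature, using the ~0.004/degC
# coefficient quoted above (referenced to 20 degC - an assumption).
def coil_resistance(r_ref, temp_c, tempco=0.004, ref_c=20.0):
    return r_ref * (1.0 + tempco * (temp_c - ref_c))

# A 270 ohm coil that has warmed to 60 degC (a 40 degC rise):
print(coil_resistance(270, 60))  # about 313 ohms, roughly 16% higher
```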
There is at least one version of a very flawed efficiency circuit on the Net. The circuit uses normally closed contacts to short out the series resistor, so when the relay operates the short is removed and the resistor is in circuit. There's only one problem - the resistor is placed in series with the coil before the relay armature has contacted the pole piece. This means that the relay will probably never really close properly because its full current isn't available for long enough. If contact pressure is too low (as it almost certainly will be), resistance may be much higher than it should be and contact failure will follow, or it may not make contact at all. The idea might work with some relays, but may not work at all with others. It would be a clever idea if it could be trusted, but it's far too risky in a high current application. I strongly recommend that you avoid copying the mistake. I tested it, and the relay activated just far enough to open the NC contacts, but not enough to close the NO contacts. The armature was in limbo, at about half travel. Epic fail.
A technique that was once an option was to use an incandescent lamp in series with the relay coil. If chosen carefully, the lamp's cold resistance would allow the relay to pull in reliably, but as the filament became hot the resistance increased until equilibrium was achieved between the lamp and the coil. Unfortunately, this isn't an option any more, because the range of suitable incandescent lamps has shrunk to the point where it will be difficult to find one with the characteristics needed. Using a series lamp was never a 'precision' technique, but the user could usually find a lamp that was suitable. This will be very difficult now, but you might get lucky and find just the right lamp for the relay being used. However, don't count on it, and consider that the lamp may become unavailable at any time.
Reed relays are often used when switching low-level ('signal') voltages. Because the contacts are hermetically sealed in a glass tube there is no risk of contamination, and the only limit to their life is mechanical wear of the contact surfaces. Because the contacts close and open with no sliding forces, mechanical wear is minimal. The reed switch is yet another product that came out of the telephone system - it was invented by an engineer at Bell Labs in 1936. Reed switches are used with a separate magnet for door and window switches for intruder alarms and for safety interlocks on machinery. When the magnet (attached to the moveable part of the door/ window) moves a few millimetres away from the switch, the contacts open, signalling that the safety cover/ door/ window has been opened. There are countless other applications as well.
The reed switch itself uses two magnetic contact arms/ blades, one of which is flexible. There is no mechanical hinge or pivot, so reed switches can be considered to have no moving parts as such. The flexing of the moveable contact arm is designed to be well within the normal elastic range of the metal, so metal fatigue is not a limiting factor. A semi-precious metal is used for the contact faces. When the two contact arms are surrounded by a solenoid, one becomes magnetised with a North pole, and the other is South. Since opposites attract, the two contacts are drawn together, closing the circuit. In some cases a bias magnet is used to provide a normally closed contact, and the solenoid opposes the magnet to open the contacts. A bias magnet can also be used to increase sensitivity, but at the expense of being potentially unreliable in the presence of other magnetic materials. A bias magnet can also be used to create a latching relay, and the coil's polarity is reversed to open the contacts again.
Most reed switches have a single pair of normally open contacts, but there are versions with normally closed and changeover contacts [ 4 ]. A reed relay consists of the magnetically operated reed switch inside a solenoid. The two parts may be completely separate, or sealed into a small enclosure as seen in the photo above (top right, Figure 1.2). They are also installed in small PCB mount cases, looking somewhat like an elongated IC. Reed relays are mostly designed for low voltage, low current applications. The contact opening is very small and usually cannot withstand high voltage, although high voltage reed switches do exist! 200V AC at up to 1A is not uncommon. Reed switches and relays can be rated for billions of operations, depending on the load. If the voltage or current is towards the maximum rated for the switch it may last for less than 1 million operations due to contact erosion.
Reed relays are very fast. I tested the one shown in Figure 2 up to 1kHz, and it was switching at that speed. The output was more contact bounce than anything else, but at 500Hz there was an almost passably clean switching waveform (still with about 150µs of contact bounce though). Contact bounce notwithstanding, that is very fast for a relay of any kind. Operating it at that kind of speed isn't recommended because of contact bounce, and even at a rather leisurely 100Hz you get a billion (1E9) operations in a little over 115 days.
Reed switches were used for commutation of some high-reliability brushless DC fan motors before semiconductor Hall effect sensors became available. Even in this role the switches would most likely outlast the bearings ... somewhere in the order of 9½ years for a billion operations. No, nothing to do with relays as such, but interesting anyway.
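The operation-count arithmetic above is easy to verify (hypothetical helper name):

```python
# How long a given number of operations takes at a fixed switching rate.
def days_for_operations(n_ops, rate_hz):
    return n_ops / rate_hz / 86400.0  # 86400 seconds per day

print(days_for_operations(1e9, 100))  # a little over 115 days at 100 Hz
```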
If you ever need to know, reed relays typically need around 20-30 ampere turns to activate, so if you have to make your own coil for a reed switch you'll need to use about 1,000 turns at 30mA for typical examples. They vary, so you will need to run tests for yourself. It's obviously far easier to buy one than to mess around winding your own coil, but it can be done if you like to experiment. I tested one with 30 turns, and it required 1A (close enough) to operate, so that's 30A/T. Remember that you need to add a safety margin, so you'd probably aim for around 45A/T for a reed switch that operates at 30A/T to ensure that it will always pull in with the rated voltage - even if the resistance has increased due to self-heating of the winding.
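Ampere-turn sizing for a home-made coil is simple arithmetic, sketched here with the figures quoted above:

```python
# Ampere-turn sizing for a home-made reed relay coil, using the figures
# quoted above (30 A/T to operate, ~45 A/T with a safety margin).
def reed_coil_current_ma(ampere_turns, turns):
    return ampere_turns / turns * 1000.0

print(reed_coil_current_ma(30, 1000))  # 30 mA to just operate
print(reed_coil_current_ma(45, 1000))  # 45 mA with a 50% safety margin
```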
There are many different types of latching relay, sometimes also known as bistable relays (two stable states). A conventional relay is monostable, having only one stable state. Some latching relays use an 'over-centre' spring mechanism similar to that used in toggle switches to maintain the selected state, and others use a small permanent magnet. There are single coil and dual coil types as well. A single coil is a bit of a nuisance because the driving electronics become more complex, but dual coil types are usually somewhat more expensive. With a single coil, the driving circuit needs to be able to provide pulses with opposite polarities, which typically requires four drive transistors rather than two. Latching relays have the advantage that no power is consumed to maintain the relay in the 'set' or 'reset' state.
However, there is a disadvantage as well. If power is interrupted while the relay is 'on', it won't release. When power is restored, the relay is still 'on', and that may have consequences for the circuitry (and/ or operator safety). If it's a requirement for the relay to be in the released state at power-on, additional circuitry is needed to force it to release. There will be a brief period when power is passed by the relay, before the 'power-on reset' circuitry can activate. Latching relays must never be used in safety-critical circuits, because unintended supply of power (which may be mains voltage) could cause injury or death. Attention to the smallest details is essential for any switching that has safety implications.
The photo shows one kind of latching relay - it uses a magnet with two pole pieces on the armature, which pivots around its centre point. The coil is centre-tapped, so it can be latched one way or the other by energising the appropriate half of the winding. This type of relay only needs a momentary pulse on the appropriate coil to set or reset the contacts, and the pulse will be in the order of perhaps 250ms. This means that the relay draws no power most of the time, only when it changes state.
Unless the relay has an additional contact set that can be used to monitor which state it's in, there's no way to know. Because it has two stable states, there is no real distinction between 'normally open' and 'normally closed' because both states of the relay are equally valid. For this reason, latching relays should never be used to turn on/off machines or power tools. For example, if there's a power outage while the machine is running, when power comes back on the machine will start again. This can easily create a risk of serious injury because the machine will start without warning.
If a microcontroller is used to drive latching relays, in theory it knows (thanks to the internal programming) which state the relay is in. However, if the equipment is portable and is dropped, the relay may change state due to the G-force created when it lands. Without separate contacts, the micro has no way to know that the relay's state has changed. This is a very real problem and it must be addressed in the software so that invalid states can be recognised and dealt with appropriately.
The drawing shows the way the relay works. The magnet assembly has a central pivot, allowing the entire armature to rock back and forth. When there is no power to either coil, the armature can be in either position and will be stable. If current is applied to the set (or reset) coil so the top of the yoke becomes a magnetic South pole, the bottom becomes North. In this state, the magnet and its pole pieces will be repelled from both ends, and will snap clockwise so unlike poles are together. Again, the relay can remain in this state indefinitely, until the other coil is pulsed briefly and it will change state again.
If the set coil is pulsed multiple times with no intervening pulses to the reset coil, nothing happens. Once the relay is in one state, multiple pulses or continuous current to that coil has no effect. It's only when the other coil section is pulsed that anything happens, and that will cause the relay to change state.
Below are two simplified circuits of dual-coil (A) and single coil (B) latching relay drivers. As is readily apparent, the dual coil version is far simpler, and just uses a transistor to connect one side of the coil or the other to ground to set or reset the relay. The two transistors should never be turned on at the same time because the relay state will be indeterminate when power is removed. Otherwise, no harm is done. Note the way the diodes are connected - this only works if the coil and drive transistors are connected as shown, and the peak voltage across the transistor that remains off is three times the supply voltage (3 x 12V or 36V in this case).
The single coil version (B) is more complex, requiring another two transistors and resistors. Note that diodes can't be used to suppress the back-EMF because the polarity across the coil changes. Well, you can use diodes, but you have to add four of them. You need a diode from each end of the coil to earth/ ground, and another to the supply. The resistor shown (R5) is simpler and cheaper, and again assumes the coil resistance to be 270 ohms and limits the flyback voltage to double the supply (24V in this case). There should be no concern about the extra dissipation in the resistor, because it's on for such a brief period.
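The flyback figure quoted for the resistor (R5) option follows directly from Ohm's law: at turn-off the established coil current is forced through R5, so its voltage drop adds to the supply. A sketch with hypothetical names:

```python
# Peak voltage across the drive transistor with a parallel resistor (R5)
# across the coil. At turn-off the steady-state coil current is forced
# through R5, so its drop adds to the supply. Values from the text.
def switch_peak_voltage(v_supply, r_coil, r_snubber):
    i_coil = v_supply / r_coil          # steady-state coil current
    return v_supply + i_coil * r_snubber

# 12V supply, 270 ohm coil, R5 = 270 ohms:
print(switch_peak_voltage(12, 270, 270))  # 24V - double the supply
```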
Some explanation is needed. If a signal is applied to 'Input 1 - Set', Q1 will turn on. This will turn on Q3 because the lower end of R3 is now at close to zero volts and Q3 gets base current. Q2 and Q4 remain dormant. Current therefore flows through Q3, the relay coil, then Q1 to ground. If voltage is next applied to 'Input 2 - Reset', Q2 and Q4 turn on, and current flow is now through Q4, the relay coil (but in the opposite direction), then Q2 to ground.
With the Figure 8.3 (B) circuit, it is imperative that the software (or other control system) can never apply a signal to both inputs at the same time. If that happens, all transistors turn on, and the transistor bridge becomes close to a short circuit across the supply. This will almost certainly cause transistor failure and may damage or destroy the power supply.
While it's possible to include a 'lock-out' function to prevent this type of failure, that will simply add more complexity. A crude (but probably effective) method would be to connect a Schottky diode from the base of Q1 to the collector of Q2, and another from the base of Q2 to the collector of Q1. When either transistor is turned on the diode bypasses any base current intended for the other transistor.
There are other ways a single coil can be driven, and if the relay coil voltage is significantly less than the supply voltage Q3 and Q4 can be replaced with appropriately sized resistors (270 ohms for a 24V supply and 270 ohm relay coil for example). If you use a resistor feed, the parallel resistor and/or diodes aren't needed. It's still far more effort than a dual coil relay though. Basically, the whole process just gets messy, and the moral of this story is quite clear - if at all possible, use dual coil latching relays.
There are also 'bistable' latching relays, where one impulse operates or 'sets' the contacts, and the next (on the same coil) 'resets' them. If this type of relay is used, there should always be a spare set of contacts that can be used for an indicator or to tell a microcontroller the current state of the relay. Without that, there is no way to know which contacts are closed, and such an arrangement must be used with great care if it controls anything that could cause damage if the relay is in the wrong or unexpected state at power-up.
A fairly common control application is where you have two push-buttons to turn a machine on or off. These are sometimes mechanical, but momentary contact switches can be used as shown above. Provided the safety interlock switch is closed, when the 'On' button (normally open) is pressed the relay energises. The circuit is completed by the first set of relay contacts (A) which cause the relay to remain energised. It will remain on for as long as power is applied, or until the 'Off' button (normally closed) is pressed or the safety interlock switch opens. Power to the equipment is provided by contact set B.
As shown the 'Off' button and safety interlock have absolute precedence, and as long as either is open, the 'On' button cannot switch the circuit on. There might be several additional contacts in series with the 'Off' button, perhaps used for sensing that a safety screen is in place or other switches that signal that the machine is safe to turn on. Should any safety switch open while the machine is in use, it will stop because the relay will de-energise. It cannot re-start until all interlock switches are closed and the 'On' button is pressed.
This is a very basic form of relay logic, acting as a set/ reset circuit with an 'AND' function in the 'Stop' circuit. The safety interlock and the 'Stop' button must be closed before the machine will operate. Including other logic functions is just a matter of adding more contacts, relays, sensor switches or external switching devices.
The common term is something of a misnomer, but anyone 'in the business' knows what a solid state relay (SSR) is, and may even know how to control them and what loads are safe with a given type. There is a huge variety of different types, not just for switching devices but for input requirements as well. Some SSRs are designed exclusively for use with AC, others are exclusively DC. A small number of commercial SSRs can be used with AC or DC. In this respect they are far more restrictive than conventional (electro-mechanical) relays, but they also offer some unique advantages. Needless to say, they also come with some unique disadvantages.
SSRs can use a wide variety of isolation and control techniques, including reed relays (which strictly speaking makes it a hybrid), DC/AC converters, mains frequency transformers, or (and most commonly) infra-red light within an IC package. This creates an optocoupler, and these outnumber the other techniques by a wide margin. If significant power is being controlled, the control circuitry may use various means to amplify the relatively low output current from the optocoupler [ 6 ].
Like conventional relays, SSRs provide galvanic isolation between input and output, commonly rated for 2-3kV as a matter of course. Rather than using a coil to operate the relay, most SSRs use an optocoupler, so the activating medium is infra-red light rather than a magnetic field. Where an electro-mechanical relay may require an input power of up to a couple of Watts, SSRs generally function with as little as 50mW, with some needing even less.
However, where the contacts of a conventional relay may dissipate only a few milliwatts, an SSR will usually dissipate a great deal more, with high power types needing a heatsink to keep the electronic switching device(s) cool. This is because the switching element is a semiconductor device, and therefore is subject to all the limitations of any semiconductor. This includes the natural enemy of all semiconductors - heat! Common switching devices are SCRs, TRIACs, MOSFETs and IGBTs, and each has its own specific benefits and limitations.
Be particularly careful if your application has a high inrush current. The worst case maximum current must be within the ratings of the SSR, or you run a very real risk of destroying your relay. SSRs have a bewildering array of specifications (some are more inscrutable than others), but the maximum allowable current will always be specified (typically as the 'non-repetitive peak surge' current). Note the use of the term 'non-repetitive' - that means whatever the maker says it means. It might be for 20ms (one cycle at 50Hz), it may also mean for some other specified duration (e.g. 1ms), and if you are lucky there will be a graph and even some info on how to deal with inrush current. For more information on this topic, please read the Inrush Current article.
Switching   Used For        Comments
SCR         ½ Wave AC       Two are commonly used in reverse-parallel for high-power full-wave AC
TRIAC       Full Wave AC    Generally only used for low power versions (10A or less for example)
MOSFET      AC or DC        AC and DC versions are available, but are generally not interchangeable
To make things more interesting, many SCR and TRIAC based SSRs are available with internal zero-voltage switching circuitry. This means that when switching AC loads, the electronic switching will only allow the SSR to start conducting when the applied AC voltage is close to zero. This is a simple way to reduce electrical interference, but you must be aware that they are only suitable for resistive loads.
Never use a zero-voltage switching SSR with transformers or other inductive loads. Doing so ensures maximum possible inrush current, which can result in tripped circuit breakers and possible damage to the SSR itself. To see a complete article describing this phenomenon and more, read Inrush Current Mitigation. Inductive loads behave very differently from what you might expect when switched on!
Some of the techniques used for MOSFET relays are described in the article MOSFET Relays. DC MOSFET based SSRs simply use a MOSFET and an opto-coupler. There is generally little or no advantage to using the pre-packaged version over a discrete component equivalent, except in cases where the certification of the SSR is needed for safety critical applications.
The general arrangement shown in the schematic of Figure 9.1 is common to most SCR and TRIAC based SSRs. The optocoupler can be purchased as a discrete IC in either 'instantaneous/ random' or 'zero-crossing' versions. In this case, 'instantaneous' simply means that the opto-TRIAC will trigger instantly when DC is supplied to the LED, regardless of the AC voltage or polarity at that moment in time. The zero-crossing versions will prevent triggering unless the AC voltage is within (typically) 30V from zero. Examples are the MOC3051 (instantaneous/ random phase) or MOC3041 (zero crossing).
As noted above, zero-crossing trigger ICs or packaged SSRs must never be used with transformers or other inductive loads, and they are completely unsuited for use in phase controlled light dimmers. They should be used when switching resistive loads (including incandescent lighting) or capacitive loads (some electronic loads might qualify). They are also commonly used for switching heaters, especially when thermostatically controlled, as there is almost no electrical noise when the AC is switched as the voltage is close to zero.
Most TRIAC based solid state relays are not suited for use with electronic loads, and that includes lighting such as compact fluorescent or most early LED lamps. In some cases they might seem to work, but if the mains current waveform is examined you may see current spikes of several amps occurring every half-cycle - for a single lamp! This will (not might - will) eventually lead to failure of the lamp, the SSR or both. Electronic loads should only ever be switched using electro-mechanical or MOSFET relays, and should be tested thoroughly as a complete installation, and verified to ensure that operation is safe for both relay and load.
You will no doubt have noticed that there are two prominent notes with regard to solid state relays. These are just two of the things that you have to be very aware of if you decide to include an SSR in your project. The comments regarding electronic loads are particularly important, and an 'electronic load' is anything that has a bridge rectifier across the mains, then uses a capacitor or active PFC circuit to create a DC voltage. Virtually all switchmode power supplies meet the definition of an electronic load, and therefore most cannot be controlled by an SSR unless such usage is specifically permitted in the datasheet. If it's not mentioned, then assume that it's not allowed. If you choose not to accept that this is true, you will almost certainly damage the load and probably the SSR as well. It's something that's not well documented, poorly understood, rarely tested properly and can cause significant damage, including the risk of fire.
You also need to carefully read through the documentation to make sure that your supply and load can never exceed any of the limits described in the datasheets. A momentary over-voltage generally won't cause the contacts of a standard relay the slightest pain, and even short-term excess current is usually not a problem. With a solid state relay, no limiting value can be exceeded ... ever. You also have to ensure that the voltage and/or current don't change too fast, because SCRs and TRIACs have defined limits, known as DV/DT (critical change of voltage over time) and DI/DT (critical change of current over time). If either is exceeded, the device may turn on unexpectedly. You will also see these terms written as ΔV/Δt and ΔI/Δt.
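For context when reading dv/dt ratings (usually given in V/µs), the maximum slew rate of an undistorted mains sinewave occurs at the zero crossing and is 2πf × Vpeak. This is a sketch only; real mains with switching transients can be far faster than the clean-sinewave figure:

```python
import math

# Maximum slew rate of a clean mains sinewave (at the zero crossing),
# for comparison with an SSR's rated critical dv/dt.
def mains_dvdt_v_per_us(v_rms, freq_hz):
    v_peak = v_rms * math.sqrt(2.0)
    return 2.0 * math.pi * freq_hz * v_peak / 1e6  # V/s -> V/us

print(mains_dvdt_v_per_us(230, 50))  # about 0.1 V/us for clean 230V mains
```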
+ +The maximum peak voltage can't be exceeded either, and woe betide you if the load draws more than the rated peak current. You also have to use a heatsink if the load current would otherwise cause the temperature to rise above the rated maximum (typical junction temperature might be around 100°C). There are many disadvantages, but sometimes there is no choice. For example, you can't use a mechanical relay in a 'phase-cut' dimmer because it can't act quickly enough. You also can't ensure that a mechanical relay switches on at a particular phase angle of the AC waveform - for example the ideal for an inductive load is to apply power at the peak of the AC waveform. This is easily done with a SSR.
+ +The MOSFET relay shown above is based on the one described in the article MOSFET Relays. There are several types, including those intended for DC operation, but the one shown is a fairly common arrangement. Exact details will differ, but the general principles are the same. Some photo-voltaic couplers have the turn-off circuit (R2 + Q1) inbuilt, and it's needed because the MOSFET gates have extremely high impedance and significant capacitance. Without the turn-off circuit the MOSFET could remain (partially) conducting for several seconds after LED current is removed. Because the photo-voltaic cells have very limited output current, turn-on time may be much slower than expected.
+ +The same principles are also used with a pair of IGBTs. These are useful for very high power or high voltage applications. IGBTs can also be used in DC solid state relays where a MOSFET may be unable to give the required performance. There are countless possibilities with semiconductor devices, but all components have limitations, and it can be difficult to make the right decision when there are so many variables. IGBT based SSRs are even available as miniature low current devices (around 1A), and the PVX6012 is an example if you want to run a search for the datasheet. It's worth reading, if only to see how they are made and see some specifications. They are non-linear and are unsuitable for switching signal voltages.
+ +It's worth looking at the (generalised) advantages and disadvantages of semiconductor compared to electro-mechanical relays.
+ +Advantages ...
+Disadvantages ...
+The inability of most SSRs to provide changeover contacts or multiple sets of contacts can be a serious limitation, and can also increase costs significantly. It costs very little to add another set of contacts to an electro-mechanical relay, but with the SSR you need an extra high current switching device, and an improved driver to suit. In most cases if you need a circuit to be normally closed with power off then you're probably out of luck. Such things do exist, but I've never come across one other than in datasheets.
+ +Although solid state relays offer some worthwhile advantages, they have many limitations that will negate their use in a great many applications. Especially if you need multiple contacts or changeover (double throw), then you will have difficulty finding what you need and it will almost certainly be far more expensive than a standard electro-mechanical relay. In some cases it will be simpler and cheaper to make your own SSR using a suitable opto-isolator and SCR, TRIAC or MOSFET.
+ +One area where MOSFET and IGBT based SSRs excel is interrupting high voltage, high current DC, which is fundamentally evil. At voltages over around 30V and if there is enough current available through the circuit, DC will simply arc across the contacts of most mechanical relays and switches. With enough current, the arc may melt the contacts and contact arms until the air gap is finally big enough to break the arc. Think in terms of an arc welder, because that's the sort of conditions that can exist with enough voltage and current. A MOSFET doesn't have that limitation, and can break any voltage or current that's within its ratings.
+ +There are also many small (DIP6, DIP8 or SMT) MOSFET relays available. These are not suitable for high current, but some are likely to be a good choice for switching audio and other low-level signals. Voltage ratings range from around 60V up to 300V or more. Examples include the G3VM-61G1 (60V, 400mA AC), LH1156AT (300V, 200mA AC) and PVDZ172N (60V, 1.5A, DC). These are chosen more or less at random, and there are hundreds of different types. As expected, all those I've seen are SPST normally open. Operating principles are much the same as described above, but everything is in a single package. For AC/DC types the voltage rating is the peak AC or continuous DC voltage.
+ +For AC types (using two MOSFETs), generally you can expect the 'on' resistance and distortion to be low or very low, but the signal isolation won't be as good as a reed relay. Any leakage current will almost certainly be distorted, but will normally be only a microamp or less at typical signal levels and should be below audibility, but that depends on the load impedance. Overall performance of low voltage types will be similar to CMOS devices like the 4066 quad bilateral switch. However, you get much higher signal voltages and complete isolation between the control and switching circuitry. This can be especially useful for test and measurement applications.
+ +Solid state relays should never be used as a safety-critical shut-off system. Because failure commonly means a shorted switching device, should the SSR fail the load will be permanently energised. You must know your load characteristics, and be aware that many SSRs may not turn off if the load has a characteristic that generates transients fast enough to cause spontaneous re-triggering of the SCR or TRIAC. Some non-linear loads may cause the SSR to trigger on only one polarity, causing half-wave rectification and a net DC component in the load's supply circuit (typically the mains). Some SSR problems (even if transient) can cause serious malfunctions in other equipment that shares the same power source. For example, transient half-wave rectification of the mains may cause transformer saturation, serious motor overload (saturation again), tripped circuit breakers and general havoc.
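To see why even transient half-wave rectification is so damaging, note that the average (DC) value of a half-wave rectified sine wave is Vpeak/π. A quick check of the magnitude involved (my numbers, assuming 230V mains, not figures from the article):

```python
import math

# DC (average) component of a half-wave rectified sine wave: Vdc = Vpeak / pi.
# A transformer or motor winding presents a very low resistance to DC, so a
# DC component of this size drives the magnetic core hard into saturation.

v_rms = 230.0                    # nominal mains voltage (assumed)
v_peak = v_rms * math.sqrt(2)    # ~325 V peak
v_dc = v_peak / math.pi          # average of the half-wave rectified waveform

print(f"Peak: {v_peak:.0f} V, DC component: {v_dc:.0f} V")
# -> Peak: 325 V, DC component: 104 V
```

Around 100V of effective DC across a winding designed for zero net DC explains the saturation, overload and tripped breakers described above.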
+ + +Here are a few things that don't really fit into any of the categories discussed so far, but that you'll hopefully find useful.
+ + +If you happen to have a relay with a removable cover (they are quite common) you may find after a while that the cover either won't stay on or it rattles. The quick and easy way to make sure that the cover stays on is to apply a couple of drops of 'super-glue' (aka 'krazy-glue' in the US), and that will keep the cover on very nicely. There's only one problem - the relay will be ruined afterwards!
+ +Super-glue and all cyanoacrylate adhesives give off fumes, but the nasty part is that the fumes carry microscopic particles of adhesive into places where you really don't want them - the contact surfaces for example! Yes, it's true. You can ruin a relay just by gluing the cover on. This happened to a friend, and he found that the normally open contacts no longer closed when the relay was activated. While I'm sure that the contacts could be made to work again with multiple activations of the relay, when something like that happens in a critical circuit it can no longer be trusted and the relay should be replaced. I don't know which adhesives would be safe in this case, but a water-based glue would probably be alright, as would hot-melt adhesive. Silicone based sealants/adhesives may or may not cause a problem - I've not tested silicone and for the time being I have no need to.
+ + +If the contacts get a little pitted or just look like they need cleaning, beware of using 'emery' or any other abrasive paper. Yes, it will clean the contacts, but it will also leave behind minute particles of abrasive. Some of these particles will be just sitting on the contact surface, and others will be embedded into the contacts. None of the common abrasives is conductive, and there is always the possibility that the contacts may not make properly - if at all. Any abrasive particles must be removed, or you may have intermittent contact in the future.
+ +One way to clean off any residue is to use paper - ordinary printer paper is usually good enough. Give it a very light spray with WD-40 or equivalent, and press the contacts together with your finger as you slide the paper between the contacts. Make sure that you apply enough pressure to make the paper contact both surfaces properly, but not so much that you deform the contact arms. You should do this several times with a clean piece of paper each time, until the paper comes out clean, with no residue of any kind. Despite the outlandish claims you may see that "WD-40 is evil and cannot, must not, be used with electronics", such claims are a complete fabrication. None of the 'water displacement' type sprays will harm most electronics, but be careful using them with some plastics.
+ +Ideally, contacts will be cleaned using a contact or points file - a thin file specifically designed for cleaning between closely spaced contacts. However, I have never had a problem when using the method described above, and if you only need to remove light tarnish the paper alone may well be sufficient. The microscopic roughness of the paper is enough to remove silver sulfide (for example) very effectively. Never use a contact file on plated contacts. Many 'signal level' relays use a very thin layer of gold (which does not tarnish), but a file will remove it completely, rendering the relay useless for the task.
+ + +In the discussions about coils, ampere turns and other interesting titbits, a few tests were done with a reed relay to determine how many ampere-turns were needed to close the contacts. Taking this to the extreme, it means that a reed relay can be used to detect current, and in particular an overload. Will it be accurate? No, not really, but it will be capable of signalling to other circuitry that an over-current condition has been detected. Mostly, extreme accuracy isn't needed - if a circuit is meant to deliver 5A and suddenly you find it's drawing 10A or more, you only need to know that there's a problem, and can use the contact closure to shut down the circuit.
+ +In this case, the reed relay coil is in series with the load, rather than being connected in parallel with a voltage source. Because heavy gauge wire can be used, the 'burden' (voltage dropped across the sensor) can be minimal. If you used a resistor instead and measured the voltage across it, you would lose anywhere between 100mV (10 milliohm resistor at 10A) and 1V (0.1 ohm resistor at the same current). With 0.1 ohm, you also waste 10W. The loss is much less with 10mΩ, but the resistor will be very hard to get, and you need more complex electronics to detect the voltage reliably.
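The burden figures quoted above follow directly from Ohm's law; here is the arithmetic as a short sketch, using the same shunt values and current as the text:

```python
# Burden voltage and wasted power for a current-sense resistor (Ohm's law).
# The shunt values (10 milliohm and 0.1 ohm) and 10 A load match the text.

def burden(current_a, r_shunt_ohm):
    v = current_a * r_shunt_ohm        # voltage lost across the shunt
    p = current_a ** 2 * r_shunt_ohm   # power dissipated in the shunt
    return v, p

for r in (0.01, 0.1):
    v, p = burden(10.0, r)
    print(f"{r*1000:.0f} mohm at 10 A: {v*1000:.0f} mV burden, {p:.0f} W wasted")
# -> 10 mohm at 10 A: 100 mV burden, 1 W wasted
# -> 100 mohm at 10 A: 1000 mV burden, 10 W wasted
```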
+ +I tested a reed switch and found reliable activation with 30A/T, so 30 turns will detect a current of 1A. By the same reasoning, 3 turns (of heavy gauge wire) should detect 10A, but will probably be less sensitive because 3 turns can't be spaced out along the length of the reed switch very well. If you want to use this technique you will have to experiment to get the detection threshold where you want it to be. You also have to accept that it's not a precision solution, but it will work without the need for low value shunt resistors, it will be extremely reliable, and needs no electronics at all. An example of the basic technique is shown below ...
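The ampere-turn arithmetic in the paragraph above is trivial but worth sketching. The 30 ampere-turn pull-in figure comes from the text; the trip currents are example values:

```python
import math

# Reed-switch current sensing: the switch closes at a fixed magnetomotive
# force measured in ampere-turns (AT).  Turns needed = pull-in AT / trip current.
# The 30 AT pull-in figure is the one measured in the text; expect to tweak it
# by experiment for your own reed switch and winding geometry.

PULL_IN_AT = 30.0   # ampere-turns needed to close this particular reed switch

def turns_for_trip(trip_current_a, pull_in_at=PULL_IN_AT):
    """Whole number of turns required to trip at the given current."""
    return math.ceil(pull_in_at / trip_current_a)

print(turns_for_trip(1.0))    # -> 30 (turns to trip at 1 A)
print(turns_for_trip(10.0))   # -> 3 (turns to trip at 10 A)
```

As the text notes, the 3-turn case will be somewhat less sensitive in practice because so few turns can't be distributed along the length of the reed.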
The photo on the right shows a test version, using 8 turns (with three wires in parallel). It activates reliably at 4A, so the winding can be worked out to be 32 ampere turns. Not too different from the 30A/T I got while testing with 30 turns around the reed switch. In both cases the extra winding was simply wound around the outside of the reed relay shown in Figure 1.2, so the threshold was probably a little higher than it would be without the original winding, which increases the distance from the coil to the reed switch and therefore reduces the sensitivity. Needless to say, the relay can still be activated by applying 6V to the original coil, so it could be used as a dual-purpose relay. By playing with the polarity of each coil there are several new uses for the relay, as it can sense both voltage and current and can add or subtract them ... all in one small package.
+ +If you make a current sensor using a reed switch, the switch and coil should be very firmly mounted to prevent movement. Even a small amount of relative movement will change the detection threshold, and be warned that a really serious overload can compress the coil purely by the power of the magnetic field. You also need to be mindful of the reed switch's maximum A/T rating. Some vendors publish figures for the maximum field strength, and some I've seen can be as low as 50A/T. For example, you might want to monitor the current from a battery pack. A shorted Ni-Cd battery can deliver a prodigious amount of current, and it may be sufficient to damage the reed.
+ +These days you can get current detector ICs that use a Hall-effect device to measure the current, but you probably can't get them from your local 'walk-in' electronics shop, and because everything is done for you there's no fun to be had playing around in the workshop. You can also get ready-made reed switch current sensors, but they are not common. The reed switch approach also has many significant advantages [ 7 ], in that it doesn't need a power supply or any amplification to provide a useful output.
+ +Some older (up-market) cars had lamp failure indicators that used reed switches with a few turns of wire around the outside. If a lamp failed, the reed switch would not close and some basic relay logic was then used to light a warning lamp. Compare this to a semiconductor approach that will use 10 or more components and a PCB to achieve the same thing.
+ + +A relay makes an ideal polarity protection device. Unlike using a diode or MOSFET, there is almost no voltage drop and no heatsink is needed even for high current loads. Very high current applications are easily protected - 150A at 12V is easy using a heavy-duty automotive relay (that would need a mighty big MOSFET!). The disadvantage is that the relay coil draws current, so the technique is not suitable for applications where current drain must be minimised. It is possible to include an 'efficiency' circuit as described above, but IMO there's not much point - especially if the load draws a high current anyway, and that's where this arrangement is best suited.
+ +The relay contacts are never expected to break the load current, so even fairly high voltages can be accommodated quite safely.
+ +The circuit shows how it's done. If the incoming supply is the wrong polarity, the relay coil gets no power because it's blocked by D1. Without power, the normally open contacts remain open, and no power is supplied to the load. The relay can only be energised if the incoming DC is the correct polarity, and the circuit will provide DC to the load only when the normally open contacts are closed.
+ +If you want, add an LED as shown. If the supply is connected the wrong way, the LED will come on as a warning. Alternatively you can just use another LED with a series resistor after the contacts to indicate that the polarity is correct and power is available. You can use a 1-Form-A (SPST) relay with only a normally open contact set if you wish. The 'NC' contact is not needed for polarity protection.
+ +This circuit can also be used in a battery charger for example. In that case, you'd use it with the battery as the 'DC Input', so it will only work if the battery connection is the right polarity, and when the relay closes it will connect the charger. Naturally, this can't work if the battery is heavily discharged or completely dead and there's not enough voltage to energise the relay. If you use it for a lead-acid battery, the battery will likely be ruined if the voltage is too low to energise the relay, so whether it connects the charger or not is a moot point.
+ + +It's also worth pointing out that the techniques described here apply equally to other magnetically operated devices - in particular, solenoid actuators of all kinds. There is a vast range of these devices, and solenoids are used to operate valves (air, water, gas, etc.) and many other functions in consumer and industrial equipment. Dishwashers and (clothes) washing machines are two common examples, using solenoid valves to turn the water on and off. Many up-market cassette decks of days gone by used solenoid control, with 'soft touch' buttons that required little or no force to operate.
+ +Those mentioned are not time-critical, but industrial actuators often have to react within a specific time, and if slowed down excessively might mean that the machine will not operate properly, or will mangle the very products it's designed to build. Many years ago I watched a component insertion machine in action, placing through-hole components into a PCB. If any part of the system failed to operate at exactly the right time, the result was damaged components and places in the PCB where a part should have been, but wasn't. This happened for a variety of reasons, one of which was solenoid valves failing to release quickly enough. Most of the old through-hole insertion machines used pneumatic actuators, all driven by solenoid valves.
+ +The pick-up and release times not only have to be as fast as possible, but more importantly they must be absolutely predictable. For this reason, a diode directly across the coil is generally the worst possible 'cure' for back-EMF, because it not only delays the release, it also slows down the released actuator so it may not achieve the required velocity to overcome any friction or sticking force (frequently referred to as 'stiction').
+ + +The vast majority of relays have contacts that break-before-make, which means that there is a short period where the NO and NC contacts are open when the relay is activated or deactivated. This isn't usually a problem, but it can cause issues with some circuits. One such application would be a speaker switch for a valve (vacuum tube) amplifier, as valve amps can react very badly if the output is open-circuit. This isn't an issue if there's no (or very little) signal, but if the amp is being driven hard (a guitar amp for example) it might be damaged during the open-circuit condition.
+ +Make-before-break (Form D) relays are now uncommon. Omron used to make one (G2A-4L32A) but it's obsolete and I found no alternative. Before this section was published (October 2021) I found that there's virtually nothing that describes the 'all contacts open' condition. I ran a few tests on a couple of relays suitable for speaker switching to determine just how long the contacts are open (i.e. somewhere in between the fixed contacts). It's entirely up to the reader to decide whether this will constitute a problem in use. Mostly it does no harm, but there are applications where an open-circuit could cause damage.
+ +The circuit shown was my test setup, and while the moving contact (connected to the relay's armature) traverses from the NC to the NO contacts (and vice versa), there can be no output across R1. A captured waveform is shown next, and it's unambiguous - there is an easily measured time when both contacts are open-circuit. The 'oscillations' you can see in the trace are caused by contact bounce, and all relays and switches are similarly afflicted.
+ +The two relays I tested were those shown in Figure 1.2 (the low cost SPDT and the octal base relay, but with 24V DC across the coil). The timing doesn't depend on whether the relay is being activated or deactivated, with both being roughly the same. A typical trace is shown above for reference, and after testing many times (and with both relays, plus a push-on, push-off footswitch) similar results are apparent with all. The 'break' time as the moving contact changes from one set of fixed contacts to the other is surprisingly consistent, at around 6ms (including the contact bounce period). It's possible to wire a pair of conventional relays to create a make-before-break circuit, but that will involve some electronics. It's not difficult to do, but make-before-break relays aren't something that many people want (as evidenced by the Omron version being obsolete with no replacement).
+ +While the application is simple, it does require a power supply, and it is now a project (see Project 219 for details). It is basically suited to one application - changing speakers 'on the fly' with a valve guitar amp, because an open-circuit speaker connection can cause damage. It may not be open for long, but 6ms may well be enough time for a high-voltage 'flash-over' in the output stage. There's no need with a transistor amp, as they don't care if the output is open-circuit or not. With any make-before-break relay, there is a period when both output loads are connected in parallel. Valve amps don't care about this, but it may stress a transistor amp! If you happen to need a make-before-break relay for other applications, the details should allow you to adapt it to suit.
+ + +A somewhat obscure relay actuator does away with the electromagnet and uses a piezo-electric element to move the contacts. These require a high actuating voltage, which may be up to 400V. Unless the piezo element is fairly long, the total movement is small, so the contacts must be very close together, which makes switching high voltages impractical. There's very little information available, other than a few patents. I've not seen any mainstream manufacturer offering them.
+ +The advantage (at least in theory) is that they will only draw a small current when operated (to charge the capacitance of the piezo element), and draw no current at all once the contacts are closed or opened. Don't expect to find one any time soon, but if you do, please let me know and send a photo. Like many other inventions, I have no doubt that these seemed like a good idea at the time, but they don't seem to have many practical applications. A little more info is available at the NASA website.
+ + +Something that most people don't have to think about is atmospheric pressure, since most of our projects will be used reasonably close to sea-level. If you're designing for something that's expected to be well above sea level you need to make allowances. Paschen's curve describes the breakdown voltage (usually of air) at various pressures, and if you have a product that will be used at 42km above the earth, the pressure is only 50mmHg (50mm Mercury, 50 Torr or 0.066 atmosphere). The dielectric strength of air at that pressure is only 320V/mm, reduced from 3kV/mm [ 9 ]. Most engineers will downgrade that to perhaps 1kV/mm to account for dust, high humidity, etc. In some cases the downgrade will be more, sometimes less, depending on the application.
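Using the dielectric strengths quoted above, the effect on a small contact gap is easy to see. The 0.5mm gap is my assumption of a typical figure for a small relay; the field strengths are from the text:

```python
# Breakdown voltage of an air gap: V = E * d, using the dielectric strengths
# quoted in the text (3 kV/mm at sea level, 320 V/mm at 50 Torr).  The 0.5 mm
# contact gap is an assumed, typical value for a small relay.

E_SEA_LEVEL = 3000.0   # V/mm, before any engineering derating
E_50_TORR   = 320.0    # V/mm at 50 Torr (0.066 atmosphere)
gap_mm      = 0.5

print(f"Sea level:  {E_SEA_LEVEL * gap_mm:.0f} V")   # -> 1500 V
print(f"At 50 Torr: {E_50_TORR * gap_mm:.0f} V")     # -> 160 V
```

With the engineer's further derating (to 1kV/mm or less at sea level) the margin shrinks even more, which is why low-pressure applications need special attention.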
+ +If you're planning to design relays or spacecraft you need to know all the details, but this is just a brief mention of the topic. You'll need to gather a great deal of information if you have an esoteric use for relays, and you won't find it here.
+ + +Back when the telephone system was completely reliant on relays and rotary selectors, there was a vast amount of information available, but you had to be in the industry or you'd never find it. Although there is a lot of on-line archived documentation, much of the original stuff has disappeared. However, a serious search will turn up some gems from the past. An example is a 126 page document published in 1970, and covering 'post office' type 3000 relays. Every possible aspect of the design and specification is described in detail, covering coils, contacts, pull-in and release times, magnetic circuits, contact alignment and adjustment procedures, etc.
+ +Almost all relays feature galvanic isolation, meaning that there is no conductive path between the drive coil or circuit and the switched load, and the input and output sections (and their connections) are physically placed so that all wiring can be kept separated. Note that some encapsulated reed relays may not provide acceptable physical isolation (known as creepage and/or clearance distances) to meet many standards, and can only be used with SELV or in circuitry that is not accessible to the user. The dielectric strength of many plastics falls at elevated temperatures, so keeping relays away from heat sources is a good idea.
+ +With electromechanical relays, magnetism and a mechanical linkage are the media used to couple the input to the output, and it's done in a way that usually prevents most noise from being coupled either way. Solid state relays generally use infra-red light from an LED to either a photo-sensitive semiconductor junction or an array of (tiny) photo-voltaic cells. Isolation voltages range from a few hundred volts up to several kilovolts, and many electro-mechanical and SSRs carry certification for CE, UL, CSA, VDE and various other standards bodies worldwide.
+ +Most relays are designed so that even catastrophic failure will not create a path between the two sections, so a traditional relay might have its contacts completely melted or have the coil burnt beyond recognition due to severe overheating, but the galvanic isolation remains and no current flows from the drive to the load or vice versa. In the same way, the infra-red LED might be blown to bits because it was connected across a 15V supply with no resistor, or the switching devices might fail due to a gross overload. Again, no current can pass the barrier. There are conceivably some faults that might cause a flashover (a lightning strike for example), but if that happens not much else survives either. When the transient has passed, the insulation will probably still be intact.
+ +Coil back-EMF prevention is, perhaps surprisingly, one of the more complex areas with electromechanical relays and other solenoids. It's very common to see a diode used, and in simple, low power circuits it will be just fine. In many other cases the diode can cause problems that you might not have anticipated. Where fast de-activation is needed, you need to do much better than a diode, and using an additional series zener is a good solution. The budget version is to use a resistor, which isn't as good but will be acceptable in many applications.
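The release-time penalty of a plain diode can be estimated from the L/R decay of the coil current: a larger clamp voltage across the coil makes the current fall faster. The coil values below are hypothetical, but the ratio between the two cases is representative:

```python
import math

# Relay release time with back-EMF suppression: the coil current must decay
# from its holding value I0 to the dropout value.  With a clamp voltage Vc
# across the coil, i(t) = (I0 + Vc/R) * exp(-t/tau) - Vc/R where tau = L/R,
# so  t_release = tau * ln((I0 + Vc/R) / (I_drop + Vc/R)).
# The coil inductance, resistance and dropout fraction are all assumed values.

def release_time(L, R, v_supply, i_dropout, v_clamp):
    tau = L / R
    i0 = v_supply / R
    return tau * math.log((i0 + v_clamp / R) / (i_dropout + v_clamp / R))

L, R, V = 1.0, 400.0, 12.0      # hypothetical 12 V relay coil: 1 H, 400 ohm
i_drop = 0.1 * (V / R)          # assume dropout at 10 % of rated coil current

t_diode = release_time(L, R, V, i_drop, 0.7)          # diode only (~0.7 V clamp)
t_zener = release_time(L, R, V, i_drop, 24.0 + 0.7)   # diode + 24 V zener

print(f"Diode only: {t_diode*1e3:.1f} ms, diode + zener: {t_zener*1e3:.1f} ms")
# -> Diode only: 4.7 ms, diode + zener: 0.9 ms
```

The zener version releases roughly five times faster in this sketch, at the cost of the driver transistor having to withstand the supply voltage plus the zener voltage.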
+ +If you do your homework, study datasheets and run some tests, you'll find a relay that will do just what you need. In some cases you'll find that a solid state relay is the best choice, but most of the time you'll quickly discover that an electro-mechanical relay is a far better option. In some datasheets and discussions you'll find that much is made of the high sensitivity of SSRs reducing wasted power, but in reality the switching semiconductors will often dissipate far more power than even the most insensitive electro-mechanical relay of similar load ratings. With any SSR, you must do your homework, and be aware of the many things that can go wrong. Also be aware that a fault in an SSR may cause damage to other equipment, even if it's not controlled by the SSR but just happens to be on the same mains feed.
+ +As with everything in electronics, you will have to compromise somewhere. On the whole, conventional relays usually have fewer compromises than solid state versions, and offer far more flexible switching. With a mere half watt input, you can control 2kW or more with ease, and you can expect it to work for hundreds of thousands of operations, even at full load. Switching losses are minimal, no heatsinks are needed, and reliability is outstanding if you use the right relay for the job. Importantly for many people, electro-mechanical relays are far easier to get and usually much cheaper than a solid-state equivalent.
+ +There are also many applications where nothing can beat a solid state relay. Complete freedom from arcing is really important in hazardous environments with flammable material, such as gas or fine suspended particles (powders, flour, etc.), while the exceptionally fast (SCR and TRIAC types) and predictable response times and the lack of contact bounce can be critical in some designs. The process of design is based on knowing the options that are available so you can choose the one that will work best in your project. There is no 'best' solution for all applications, and it's up to you to choose the solution with the smallest number of entries in the 'disadvantages' column.
+ + +Part 2 - Contacts, Arcing & Arc Suppression
+ 1 Panasonic Small Signal Relay Technical Info. (Digikey)
+ 2 Contact materials - The Relay Company
+ 3 HV9901 PWM Relay Driver (Supertex)
+ 4 Magnetic Reed Switches - Meder Electronics
+ 5 Permanent Magnet Latching Relay - Wikipedia
+ 6 Solid State Relays - Omega
+ 7 Reed Sensors Vs. Hall Effect Sensors - Digikey
+ 8 AppNote 0513 - Application of Coil Suppression with DC relays (TE Connectivity, Relay Products)
+ 9 Breakdown Voltage - Wikipedia
Elliott Sound Products | Relays & How To Use Them - Part 2
The introduction to relays article covered the coil, driver circuits and discussed contact materials and ratings. This is Part 2 of the article, and looks at the contacts in greater detail. In particular we'll examine the many and various ways that the contacts can be damaged by and protected against arcing. Not by using specialised devices though - this section just covers the ways that readily available relays can be used to break 'difficult' loads without too much stress on the contacts.
+ +There are countless different loads and supply sources of course, and it's only possible to look at general principles. Some are 'text book' examples that have been used for many years with reasonable success. These are the circuits that you'll often see in product schematics and application notes, and they generally give quite good results.
+ +AC loads can be especially hard if the load is inductive. Transformers and motors fall into this category, and there are some tricks that can minimise inrush current upon connection and 'flyback' voltages when the relay releases. Even some resistive loads can cause problems, particularly if the load is incandescent lighting which causes a very high inrush current. In some cases it will only be possible to get very reliable zero-voltage switching operation by using a solid state relay (SSR), but even electro-mechanical relays can be surprisingly accurate if you are willing to add a micro-controller that monitors the AC phase and verifies the relay's operating timing.
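A micro-controller doing this essentially subtracts the relay's (measured) operate time from the desired point on the AC waveform, timed from a zero crossing. A sketch of the timing arithmetic, with the operate time and target angle both assumed for illustration:

```python
# Timing sketch for phase-synchronised switching with a mechanical relay.
# To close the contacts at a chosen point on the AC waveform, energise the
# coil one operate-time early, measured from a detected zero crossing.
# The 8 ms operate time and 90 degree target are assumed example figures;
# a real controller would measure the relay's actual operate time.

def fire_delay_ms(target_angle_deg, operate_time_ms, freq_hz=50.0):
    """Delay after a zero crossing at which to energise the coil."""
    period_ms = 1000.0 / freq_hz
    target_ms = (target_angle_deg / 360.0) * period_ms
    return (target_ms - operate_time_ms) % period_ms

# Close at the positive peak (90 deg) with a relay that takes 8 ms to operate:
print(fire_delay_ms(90.0, 8.0))   # -> 17.0 (ms after a zero crossing)
```

Contact bounce and operate-time jitter limit the achievable accuracy, which is why the text describes SSRs as the easier route to true zero-voltage (or peak-voltage) switching.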
+ +The science behind contact materials is very involved, and I don't have the necessary equipment to examine contact surfaces at the molecular level. Some of what you will read below might sound like science fiction, but the references will show quite clearly that these effects all exist, however unlikely they may seem. If you have access to a microscope you can look for yourself, but to see the real problems you need an electron microscope - well outside my price range.
Some relays have what is called 'bifurcated' contacts. This simply means that the contact arm is split in two, with contact material on each of the two sections. Depending on how it's done, this can reduce contact bounce if the two sections are of different widths and therefore have a different mechanical resonant frequency.
+ +Solid state relays (SSRs) are also covered here, and primarily those using SCRs (silicon controlled rectifiers) or TRIACs (bidirectional SCRs). The common term for these is thyristors, a contraction of thyratron (the equivalent vacuum tube device) and transistor. These devices offer exceptionally fast switching, and come in a wide variety of different styles. Because they are semiconductors, in most cases you need to include a heatsink to maintain the operating temperature below the rated maximum. In some cases you can replace an EMR with an SSR, but there are design rules that must be followed to prevent failure of the SSR, the load, or both. The general principles are covered, but it's not possible to explain everything in a single article and I have no intention to even try. There are entire books written on the subject, so I can barely scratch the surface.
The following is adapted from a relay datasheet [ 8 ], and shows the derating curves for both AC and DC operation. For the relay to meet its life expectancy, the current and voltage must not exceed the limits shown by the red curves. Should the ratings be exceeded, the relay contacts will be subjected to arcing that will either reduce the life or destroy the relay contacts. A serious overload (e.g. 14A at 56V for a power amplifier DC protection circuit) will destroy the relay - probably the first time it's used!

The graph shown above is quite possibly the most important graph you'll ever see when it comes to relays switching DC. The relay itself doesn't matter very much, because the only thing that normally changes is the maximum current. The data can be extrapolated for higher current relays, but unless that datasheet specifically provides a similar graph showing higher DC current switching capacities, assume that 30V DC is the maximum permitted voltage for rated current. The current derating required at higher voltages is very clear. At 40V DC, the allowable current is reduced to less than 2A, with an absolute maximum voltage of 100V DC at 500mA or less. Ignore this at your peril.
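The derating logic described above can be sketched numerically. This is a minimal illustration only - the three (voltage, current) points are taken from the figures quoted in the text for a typical 10A relay, and the log-log interpolation between them is my assumption, not datasheet data:

```python
import math

# Illustrative DC derating points from the text for a typical 10A relay:
# (voltage V, max switching current A). Real values must come from the datasheet.
DERATING_POINTS = [(30.0, 10.0), (40.0, 2.0), (100.0, 0.5)]

def max_dc_current(voltage):
    """Allowable DC switching current at a given voltage.
    Log-log interpolation between the points is an assumption here."""
    if voltage <= DERATING_POINTS[0][0]:
        return DERATING_POINTS[0][1]          # full rated current up to 30V DC
    if voltage > DERATING_POINTS[-1][0]:
        return 0.0                            # beyond the absolute maximum voltage
    for (v1, i1), (v2, i2) in zip(DERATING_POINTS, DERATING_POINTS[1:]):
        if v1 < voltage <= v2:
            frac = (math.log(voltage) - math.log(v1)) / (math.log(v2) - math.log(v1))
            return math.exp(math.log(i1) + frac * (math.log(i2) - math.log(i1)))

def is_safe(voltage, current):
    return current <= max_dc_current(voltage)

# The 14A at 56V amplifier-protection example is far outside the curve:
print(is_safe(56.0, 14.0))   # False - expect a destroyed relay
print(is_safe(24.0, 8.0))    # True - within the 30V DC full-current region
```

Any real design decision must of course use the curve from the actual relay's datasheet, not these example numbers.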
Relay ratings and limits are not subject to argument, nor do they indicate that the ratings can be exceeded at the expense of contact life. These limits should be considered absolute, and if a relay's contacts ever create a sustained arc, the relay is ruined. The photo in Figure 4.0 is a perfect example of a catastrophic failure. This can occur the very first time the relay is operated at excessive voltage and current - there is no 'second chance'.
The contacts of most relays are designed to slide a little as they open and close. This process helps keep the contacts clean, and is designed to remove oxides, sulphides and other contaminants from the surfaces. When a relay manufacturer specifies the maximum number of operations (typically between 100,000 and 1,000,000) this may be referring to mechanical life only, where the contacts are 'dry' (not carrying current). Sometimes you will see two figures, one being the mechanical life and the other being the life at full rated load.

Reed relays are an exception, as they are hermetically sealed to eliminate external contamination and usually use contact materials that don't need a wiping action to maintain conductivity. See Part 1 of this article for info on the materials used.
As the contact surfaces rub against each other, there will always be a small amount of wear, and because oxides are harder than the base material, minute particles of oxide may act as an abrasive and increase contact wear. When a relay is designed to use sliding contacts, this has been accounted for when the relay is manufactured, but if the relay is used in an area where there is significant vibration the wear may be accelerated. This is a real phenomenon, but is rarely the cause of contact failure unless the relay is operated dry for millions of cycles. If this is the case, then a semiconductor switch should be used instead.

One thing you must do to ensure minimum contact wear with DC relays is to ensure that the ripple voltage on the DC supply is not so great as to cause any buzzing or armature movement. Using an unfiltered or poorly filtered DC supply will cause mechanical movement of the armature and contacts, and this will accelerate mechanical wear. The P-P ripple voltage should typically be no more than 10% of the DC value (e.g. 1.2V peak-to-peak ripple on a 12V supply). Less is better, but usually isn't absolutely necessary.

The major problem with all electro-mechanical relays (EMRs) is contact arcing. However, well before the arc is created, there is the small issue of contacts melting. Not the entire contact of course, but perhaps only a few molecules. This effect happens as the contacts close (make) and open (break). When we examine even the smoothest surface under a powerful microscope, it's quite obvious that it's not really smooth at all.

So while the contacts of a new relay might look perfectly smooth, if examined with high magnification you find that this isn't the case. This general unevenness is called 'asperity' and it exists even in surfaces that appear to be mirror-smooth. It is inevitable that there will be high and low points at the atomic or molecular level, and as the relay is used these will move around as the contact material melts and is transferred from one contact to the other. That's not a misprint or facetious comment - it really does happen. Mostly it's at the molecular level, and it even happens when the relay is switching a small current. However, a relay used to switch 1V RMS signal levels at perhaps a milliamp or so will never arc, and there's not enough power to melt anything.
Currents of well under 1A can cause a sufficiently small contact point to melt. Consider that you can get fuses rated at less than 50mA, so it's quite apparent that if the conductor is thin enough it can be made to melt at surprisingly low currents. Of course the mass of the contact itself acts as a heatsink, so don't expect your contacts to be destroyed straight away - it may take well over 100,000 operations before you can even see any pitting. Metal migration and/or evaporation at the atomic or molecular level may only move a few molecules each time, and if the polarity is random (with AC supply and load) then the migration evens out - any material lost with one polarity is regained when the polarity reverses.
The temperature of parts of relay contacts at the moment of connection or disconnection can easily reach over 4,500°C, just at the critical point where all the current is concentrated in a very small region of the total surface. That this will happen is a certainty because of the microscopic peaks and troughs across the surface. There will inevitably be peaks that make the initial or final contact, and because they are so small the current density is extremely high. The contact material will melt and may be literally blasted away from the contact point because of the very high temperatures reached. Surrounding air becomes superheated, it ionises, and it's the ions of air and metal that eventually (well, after a few microseconds or so) create the arc.

The melting processes described are very short-lived, and may only exist for nanoseconds or microseconds. In general, there will be some degree of contact melting even if your application never produces a visible arc. At relatively low voltages and currents you can expect some of the contact material to melt each time the contacts open or close. This means there will be a small quantity of material transferred between the contacts.
Material | Conductivity | Melt Voltage (V) | Arc Voltage (V) | Arc Current (A)
Copper | 100% | 0.43 | 13 | 0.43
Gold | 77% | 0.43 | 15 | 0.38
Nickel | 25% | 0.65 | 14 | 0.5
Palladium | 16% | 0.57 | 15 | 0.5
Fine Silver | 105% | 0.37 | 12 | 0.4
Tungsten | 31% | 0.75 | 15 | 1.0

Note: Copper is the reference material in the above table. Other materials are shown relative to the conductivity of copper.
Melted contact material will tend to collect on the cathode (negative) contact, and there will often be material loss due to the melted contact material boiling and/or burning which disperses the molten material. While these effects are at the molecular level, over tens of thousands of operations there will always be some visible damage. If the contacts are under-specified the relay will fail prematurely.
In the table, the 'melt voltage' refers to the voltage that exists between each of the contact surfaces, assuming that there is a molecular bridge (a pair of high spots for example) between the two. If the voltage across the bridge exceeds the figure shown, the material will melt. The size of the bridge is immaterial, but in most cases it will be microscopically small. Arc voltage and current are discussed in the next section.
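The table lends itself to a simple lookup. The sketch below transcribes the values above into Python and applies the rule (discussed in the next section) that an arc can only form when both the minimum arc voltage and the minimum arc current for the contact material are exceeded; the function names are mine:

```python
# Contact material data transcribed from the table above (voltages in volts,
# arc current in amps; conductivity relative to copper).
CONTACTS = {
    'copper':      {'conductivity': 1.00, 'melt_v': 0.43, 'arc_v': 13, 'arc_a': 0.43},
    'gold':        {'conductivity': 0.77, 'melt_v': 0.43, 'arc_v': 15, 'arc_a': 0.38},
    'nickel':      {'conductivity': 0.25, 'melt_v': 0.65, 'arc_v': 14, 'arc_a': 0.5},
    'palladium':   {'conductivity': 0.16, 'melt_v': 0.57, 'arc_v': 15, 'arc_a': 0.5},
    'fine silver': {'conductivity': 1.05, 'melt_v': 0.37, 'arc_v': 12, 'arc_a': 0.4},
    'tungsten':    {'conductivity': 0.31, 'melt_v': 0.75, 'arc_v': 15, 'arc_a': 1.0},
}

def can_arc(material, open_circuit_volts, load_amps):
    """An arc needs BOTH the minimum arc voltage and the minimum arc
    current for the contact material to be available."""
    m = CONTACTS[material]
    return open_circuit_volts >= m['arc_v'] and load_amps >= m['arc_a']

# A small-signal circuit can't sustain an arc on silver contacts,
# but a 50V 2A DC load easily can:
print(can_arc('fine silver', 5, 0.1))    # False
print(can_arc('fine silver', 50, 2.0))   # True
```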
If you thought some of the stuff above was a bit scary, consider that everything changes for the worse when the current is several amps, and that's when we must find ways to minimise the arc. An electric arc can reach temperatures of over 19,000°C, and is no different from the arc welding process, where molten material is transported from the welding rod to the surface to be welded. DC is the worst, because the current is always in the same direction, so material will typically migrate from the cathode to the anode, carrying atomic or molecular particles of material with it. With AC (and assuming random switching), the polarities of the contact electrodes will change, so some material migrates first one way, then the other. In all cases where an arc is created, there will be some material loss due to splattering, and not all molecules from one contact are collected by the other. When there is material transfer via AC, the vaporised metal will tend to migrate from the hotter electrode to the colder one.
An arc may be developed as the contacts open or close, and this depends greatly on the contact surface and the nature of the load. If the arc is sustained, the contacts will be destroyed. Sustained arcs can normally only be created as the contacts open, because the arc is automatically extinguished once the contacts are touching each other. However, if there's so much contact damage that the contact surfaces touch briefly only during the contact bounce period, then an arc may well develop between the contacts. The gap might only be a few micrometres initially, but if the arc is maintained it won't take long before the contacts are completely destroyed.

Different metals have differing voltages and currents that will allow an arc to form, and these are shown in Table 2.1. If the voltage or current is below the minimum, no arc will be created. However, if both the voltage and the current are above the arc ratings for the contact material used, there will be an arc. A small amount of arcing is sometimes needed with contact materials to remove oxides (or sulphides in the case of silver), but all arcs are destructive and must be stopped as quickly as possible.
Provided either the voltage or the current is below the figures shown in Table 2.1, no arc can usually be created. If both voltage and current exceed the arc thresholds for the contact material in use, then there will be an arc. The voltage and/or current don't have to be steady state, and momentary transients can initiate the arc. Once the voltage and current fall below the values given the arc will normally extinguish, provided the gap between the contacts is wide enough.
The relay contacts are designed to separate enough to ensure that the arc will be extended until its impedance is high enough that the arc current can no longer be maintained. Because of the differences between AC and DC, a relay rated for 10A at 250V will be heavily derated if it is used with DC. It's common to see a 250V AC relay derated to 30V DC for its rated current (you can see the ratings clearly on the Zettler relay in the photo below - centre-top in the picture). Should you choose to ignore the maximum voltage (especially with DC) you can expect the relay to fail. This can happen the very first time it's used, and failure caused by a serious arc will be total and permanent. There are ways that the arc can be suppressed though, and that is the primary purpose of this second part of the relays articles.

The selection of relays shown is the same as that in Part 1, and is shown again here for reference. Most of the tests I conducted used the octal based relay, for the simple reason that the cover is easily removed. There's no point trying to observe an arc if you can't see through or remove the cover. In some ways this is an 'unfair' test, because the relay has very solid contacts and wide separation, but the trends are still very obvious and it's easy to see if a technique makes a difference or not.
Over the years, several different techniques have been developed to quench contact arcing, or in some cases it can be possible to prevent an arc from starting at all. The latter is the ideal case, and a well-engineered snubbing circuit can be surprisingly effective. These techniques apply equally to switches, because they also have contacts and are often operated at voltages that exceed their ratings. Big, solid toggle switches can handle a fair amount of abuse, as can equally big relays. The idea here though is to do what we can to prevent the abuse, and allow the use of a smaller and cheaper relay (or switch). Alternatively, if the switch or relay is kept the same, we can expect it to last for the life of the equipment.
The methods used depend on the load and the supply. Some arc suppression techniques are only applicable to DC, and others can be used with AC or DC. AC is always easier because the current passes through zero either 100 or 120 times a second depending on the mains frequency. Higher frequencies (400Hz for example, commonly used in aircraft electrical systems) may create additional problems, but most aircraft parts are specialty items and will not be covered here.
Arc suppression is often needed to reduce RF interference, especially if the equipment will be used anywhere near AM radios or where EMI can't be tolerated because it will cause other equipment to malfunction. The earliest radio transmitters were based on a spark gap - a fancy name for contacts supporting an arc. The RF noise created is wide band, and can travel a surprising distance. Early radio (or wireless as it was known at the time) transmissions across the Atlantic Ocean used spark-gap transmitters.
You will often see a small arc generated as the contacts close. This seemingly odd behaviour is usually the result of contact bounce. Relay and switch contacts almost never make perfect contact when operated, even though it appears so to the naked eye. An oscilloscope will show clearly that the contacts make, break and make again several times when a relay or switch is operated. The contacts and supporting arms have mass and resilience, and when the two contact faces are brought together they bounce several times before settling with the contacts touching each other as they should. When (not if) this happens, an arc is created each time the contacts separate, and because the distances involved are usually very small, it's easy for the arc to be maintained for the few microseconds when the contacts have separated.

Lest you think that I'm exaggerating and that it can't possibly be as bad as I claim, cast your eyes on the following photo. What you see in the photo is all that remained of the upper contact set after a sustained arc. The relay shown is a heavy-duty industrial type, and internally it's almost identical to one that I used for some testing (but not to destruction).

Sometimes, the easiest way to get a wider contact separation and reduced chance of developing an arc is to use two or more sets of contacts in series. By increasing the effective total gap between the contacts, you obtain a much greater voltage rating without affecting the current. Even then, you need to employ methods to prevent the arc from forming - seeing a relay with a continuous arc at 5A or more is a scary thing to behold, and you know immediately that if it's not stopped fast you'll have an ex-relay on your workbench. If it happens, the contact arms may be heated to such a high temperature that they lose their elasticity ('spring') and will not provide proper contact pressure. It only takes a few seconds!
Using a magnet to 'pull' the arc away from the contacts can work very well, but it's not a common scheme. The magnet needs to be very close to the contacts and/or very powerful. Neodymium magnets are available that will do an admirable job, but the magnet's polarity and the arc polarity determine effectiveness. This is something that requires trial-and-error testing, and of course it's essential that you can see the arc and whether the magnetic field is effective. Magnetic arc-quenching schemes are sometimes described as using 'blow-out' magnets. With the right combination of magnets and contacts, a more-or-less conventional relay can be operated at up to 80V DC at 20A [ 9 ].
The magnet(s) must be in exactly the right position for the relay in use, and must not polarise the relay's magnetic circuit, as that could easily prevent the relay from activating and de-activating as expected. Because everything depends on the relay's construction, the magnetic strength and positioning of the magnet, no further details will be presented here. Because it's so uncommon, most people will never have seen it done, and further discussion would not be useful. Feel free to experiment, but be aware of the pitfalls.
There is also the (not insignificant) problem of mounting the magnet onto the relay body. The magnet must be firmly attached so that it can't move or fall off, and in the exact position determined by testing. This isn't trivial, because relay cases aren't always made of adhesive-friendly materials, and Neodymium magnets have an external plating that can degrade with time. If the magnet were to fall off, you'd have a loose magnet inside your equipment and a protection relay that won't work. I expect that few readers will find either option to be desirable.
Commercial (permanent magnet) arc-quenching relays exist, primarily to cater to the electric vehicle market. The polarity is critical for correct operation, so using a magnet is unpredictable if used for a speaker protection circuit because the fault can be positive or negative DC, and the magnet's polarity and position cannot be optimised for both. Electromagnet arc quenching relays also exist, and they use the fault current to generate a magnetic field that's correct for the polarity of the fault current. These are primarily industrial products, and are not suited to most hobbyist applications.
A simple, effective and very common technique is to use a resistor and capacitor in series across the contacts. This arrangement is commonly referred to as a 'snubber' circuit, and they are used extensively in all sorts of different designs. The capacitor absorbs some of the energy that would otherwise be dissipated in the arc, and if we reduce the available energy we can expect the arc to be extinguished faster than it would without the snubber circuit. Note that adding a snubber as shown simply reduces the arc, and assumes that the relay is being used at no more than its rated current. Adding a snubber helps to minimise EMI (electromagnetic interference) created by the arc, but does not mean that the relay's limits can be exceeded!

There are some 'rules of thumb' that are applied to snubbers used across contacts, and these give the designer a good place to start. The following fall into that category - this isn't meant to be the only range of values that can be used, but you have to start somewhere ...
R1 - 0.5 to 1Ω per contact volt
C1 - 500nF to 1µF per contact amp
For example, if you wanted to switch 48V DC at 10A, R1 could be 24Ω, and C1 would be around 10µF. If AC is being switched, the series impedance of R1 + C1 must be large compared to the load impedance, or current will be delivered to the load even when the contacts are open. Because AC is less troublesome than DC, the capacitance can be reduced considerably, and I'd suggest that C1 would only need to be around 1µF at the most. This limits the current to about 15mA when the contacts are open with a 48V 50Hz supply. This is only an example, and your load needs to be tested carefully to ensure that the residual current doesn't create more problems.
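The rules of thumb and the open-contact residual current can be sketched as a quick calculation (the function names are mine, and the results are starting points for testing, not guarantees):

```python
import math

def snubber_values(volts, amps):
    """Rule-of-thumb starting ranges from the text:
    0.5 to 1 ohm per contact volt, 500nF to 1uF per contact amp."""
    r_ohms = (0.5 * volts, 1.0 * volts)
    c_farads = (500e-9 * amps, 1e-6 * amps)
    return r_ohms, c_farads

def residual_ac_current(volts_rms, hz, c_farads, r_ohms):
    """Current through the snubber while the contacts are open (AC only)."""
    xc = 1.0 / (2 * math.pi * hz * c_farads)
    z = math.sqrt(r_ohms**2 + xc**2)
    return volts_rms / z

# The 48V example: the rules give R1 in the 24-48 ohm range.
print(snubber_values(48, 10)[0])   # (24.0, 48.0)

# With AC, a 1uF cap and 24 ohms leaves about 15mA flowing
# through the open contacts at 48V 50Hz.
i = residual_ac_current(48, 50, 1e-6, 24)
print(round(i * 1000, 1), 'mA')    # 15.1 mA
```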
Although it may not seem very likely, this basic snubber is surprisingly effective. I tested a 40V DC load at 4A using a 10Ω resistor and 1µF capacitor, and the only evidence of an arc occurred when the contacts closed. This was due to contact bounce. Without the snubber, there was a very noticeable arc as the contacts opened, exactly as expected. In case you were wondering, the resistor is there to keep the current to a manageable level when the contacts close, and it should not be omitted - even though the arc quenching action is far better with no resistance.
While the contacts are open, C1 will charge to the full supply voltage. The capacitor will usually be a metallised film type, and these usually have very low ESR. When the contacts close, the cap is shorted, and the peak current can be extremely high. This can lead to severe contact erosion due to melting as discussed above, and the worst-case scenario is that the contacts weld closed. This will tend to happen anyway, and normally the return spring is strong enough to break the weld when the relay is de-energised. If the current is high enough, at some point in the future the weld will become permanent or there will be so much contact erosion that the relay fails.
R1 in the circuit shown limits the peak current to 2A, but it can be reduced further to get an improved arc quench as the contacts open. When the contacts first open, the ideal is to use only the capacitor, as it will keep the voltage across the contacts below the arc voltage for the few microseconds it takes for the gap to be wide enough to prevent any arc from starting. As discussed, this will create a very high peak current when the contacts close, so I suggest that the following circuit be used. Note that it can only be used with DC.
Adding D1 means that the capacitor is almost directly across the contacts, so can absorb close to the total energy that would otherwise create an arc. D1 must have a 1ms surge current rating that's at least the same as the load current, but preferably a lot more. The contact current as the contacts close is limited by R1 (and the load of course), so R1 can be a much higher value than it can be without the diode. For normal applications, it should be around 10 times the value that you would have used with no diode, so around 240Ω is perfectly alright. The enhanced version should be able to prevent arcing almost completely if the capacitor is sized appropriately. Larger capacitance means better arc quenching, but the duration of the diode's peak current is extended, so a larger diode might be needed.
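A quick calculation shows why the diode helps. When the contacts close they discharge the fully charged snubber cap through R1, so the worst-case peak closing current is set almost entirely by R1 (film-cap ESR is assumed negligible here):

```python
def closing_peak_current(supply_volts, r_snubber_ohms, cap_esr_ohms=0.0):
    """Worst-case peak current through the contacts when they close and
    short a snubber capacitor charged to the full supply voltage."""
    return supply_volts / (r_snubber_ohms + cap_esr_ohms)

# Plain snubber from the earlier 48V example, with R1 = 24 ohms:
print(closing_peak_current(48, 24))    # 2.0 A

# With D1 added, R1 can be raised ~10x, so the closing current drops:
print(closing_peak_current(48, 240))   # 0.2 A
```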
This enhanced circuit is particularly well suited for use across the DC standby switch used in many guitar amplifiers. These often have no protection circuit at all, and the only reason that a sustained arc isn't created when the switch is opened is because the current is comparatively low, usually less than 100mA. Adding the Figure 4.2 circuit will completely eliminate the arc if the cap is sized properly, and it should be rated at no less than 1kV for most valve amplifiers.
In either circuit, the selection of the capacitor is critical. The cap used must be capable of withstanding the peak current, and naturally requires a voltage rating that's well above the source voltage. X-Class mains caps are a good choice for most applications, because they have a high voltage rating and are designed to handle the spikes and noise that normally rides on the mains. In any event, the capacitor you use needs a high surge current rating, and this needs to be verified from the specifications. If you use any old cap that comes to hand you are likely to face bitter disappointment when the cap eventually fails and the relay contacts burn up or the cap shorts out. The same applies if you skimp on the diode.

The snubber circuit (whether 'traditional' or 'enhanced') needs to be as close to the relay contacts as possible. Long leads mean inductance, and that can easily partly undo the benefits of the circuit. Leads should ideally be no more than ~25mm in total length to keep stray inductance low.

I mentioned earlier that most tests were done using the octal based relay seen in Figure 3.1. It's noteworthy that at a current of 5A DC and an unloaded voltage of 80V, even that relay could sustain an arc across the contacts when fully open. The relay has a contact spacing of about 0.8mm, and while that doesn't sound like much it's significantly greater than most of the smaller relays used in electronics projects. Another I measured has a contact clearance of only 0.3mm. Using the enhanced snubber, the arc was negligible as the contacts opened - most of the time there was no arc at all, but occasionally a small flash was visible. I was only using a 1µF capacitor for initial tests, and increasing that to 5µF eliminated the arc almost completely. Running the contacts in series (see Series Contacts below), it was possible to switch 5A at an unloaded voltage of 80V with no snubber at all! Well, until it decided to develop a continuous arc when the voltage was increased only slightly!
If the relay was expected to handle this voltage and current in a real circuit, I would use at least 10µF of capacitance and a high current diode (at least 3A). Even then, before deciding that it would do the job, I would insist on testing the circuit for at least 10,000 operations, and use a data logger to record each break to ensure that there was no arc at all for the full 10,000 operations. I would not be game to use it unattended without this test, and certainly wouldn't suggest it as a project or use it in a commercial design until I was absolutely sure that it was up to the task.

In some cases, the snubber is installed in parallel with the load. The capacitor has pretty much the same function in this case, as it holds the load voltage up so that when the contacts open the voltage across them is momentarily only a few volts. Capacitor and resistor selection are the same as before, and the resistor is still used to limit the peak current into the capacitor when the contacts close. With no resistor, the current is limited only by the circuit impedance and the cap's ESR, so a very high peak current will flow.
Snubbers can be used with resistive or inductive loads, and the standard version works with AC and DC. However, in no event should you assume that just because you've made the calculations given here or elsewhere that all will be well. Every case needs to be tested thoroughly, because it only takes one instance where the arc decides to become continuous and the relay is ruined, and quite possibly other circuitry as well. You can often get away with almost anything with AC, but any DC application poses special problems and requires equally special attention.

If it's at all possible, the AC source for the DC should be switched. If the DC is obtained from a bridge rectifier and filter capacitor, switching the AC to the rectifier is preferable to switching the DC, but of course this isn't always convenient or applicable. There's much to be gained by using a MOSFET relay or a discrete MOSFET switch for DC, but great care is still needed so that peak current is well within the MOSFET ratings. Also, beware of a failure mode with MOSFETs that's remarkably close to second breakdown in bipolar transistors.

When you have a DC supply and the load is inductive, even seemingly benign voltages and currents can cause serious arcing. Just as relay coils have back-EMF, so do other inductive loads. These include other (generally larger) relays, motors, solenoids of all kinds, magnetic clutches, etc. Adding a diode in parallel with the load will eliminate the back-EMF just as it does with a relay coil, and again will increase the release time of the connected relay, solenoid or clutch. Whether this is a problem or not depends on the application.
Use of a diode in parallel with the load doesn't mean that nothing else needs to be done, especially if the load draws a high current or needs high voltage to operate. The extra diode only suppresses the back-EMF from the load, but it does nothing to protect the contacts against a DC arc. In such cases you'll probably need to use a snubber and diode as shown in Figure 4.3.
While you might think the above is overkill, something along these lines will often be necessary if the load operates from a high voltage. Any DC voltage above 30V or so means that specialised relays will be needed, but even a relay rated for 30V DC may be able to be operated at higher voltages if the proper precautions are taken. Manufacturer's data generally assumes that you will use the relay 'as bought', without any corrective measures. If you are careful, run tests and apply proper arc quenching circuitry, you may be able to extend the rated voltage. By how much depends on the relay itself, and some will have a safety margin built in, others not. You will never know until it's tested, and in some cases that will mean a very rigorously designed test that punishes the contacts right to the point of failure.

Ok, the average hobbyist isn't going to design a test jig and run tests at that level, but if you happened to be making aerospace products there would be no choice. The main point here is that testing is essential, at least at the basic level. Something that seems as though it should work fine may or may not actually perform as expected when subjected to real life conditions.
Note: Most AC loads don't need anything to clamp the transient, as you'll find in most equipment that uses AC relays, solenoids or motors. Clicks and pops carried by the mains wiring are most commonly suppressed with a simple snubber, and then only if the switching noise is obtrusive and/or would cause the equipment to fail conducted or radiated emissions tests (for compliance with local regulations). It's uncommon to see any additional protection, so unless you really need to eliminate any back-EMF, you don't need to add anything to the circuit.
For AC loads and some DC loads where use of a diode will slow down the release time of a solenoid valve or other actuator, a TVS or a MOV can be used. These will limit the transient to a preset maximum. TVS diodes are available in a wide range of voltages, and come in two forms - unidirectional and bidirectional. They are similar to zener diodes, but are capable of much higher instantaneous peak current - a typical 30V TVS might be capable of clamping over 500A, an instantaneous power of 15kW or more. The duration of the peak current must be very brief at the maximum ratings of course, and will typically be less than 1ms.

With any TVS, you also need to be careful of the junction capacitance. With low voltage devices, this can be over 5nF, and the capacitance and load inductance form a parallel tuned circuit. Again, it depends on the application whether this will cause a problem or not. AC applications must use a bidirectional TVS diode, and unidirectional devices are suitable for use with DC circuits.
MOVs are another way to minimise high voltage transients, but their breakdown voltage is not well defined, so your circuit needs to use contacts with sufficient clearance to ensure that the worst case breakdown voltage is still well within limits. Be aware that MOVs will slowly fail with repeated over-voltage conditions. The degradation either means that the protection is lost, or they may suffer from thermal runaway. This will cause the MOV to explode or catch fire. Some are equipped with an internal thermal fuse.

You would use one or the other - a TVS or a MOV - depending on the circuit, the likely voltage transients and the nature of the load itself. For DC applications, a unidirectional TVS diode can be used, but not if it will cause a problem for the load. The most common problem will be delayed reaction due to the current that is generated by the back-EMF. TVS diodes are a better choice most of the time, as they don't suffer degradation over time with repeated over-voltage 'events'. Some MOVs are designed for high reliability, provided the maximum impulse current is reduced from the allowable maximum. For example, a 300A MOV may last for 100 full-current events (at 20μs), 1,000 events at 100A, but that's extended to 1,000,000 times if the impulse current is only 50A (from Panasonic ERZE10A series datasheet). If you intend to use a MOV, make sure that you fully understand the implications and potential failure modes.
+ +The back-EMF from AC motors is usually not great, and with transformers (with an output load) it's generally close to zero. The load absorbs most of the back-EMF, other than that caused by leakage inductance. A load with an inductance of 1 Henry and a series resistance of 5Ω will draw 0.73A at 230V. If the current is interrupted at the peak of the AC waveform, you could get a peak voltage of up to 1.6kV (assuming exceptionally low losses). The duration will be low - probably less than 100μs. The energy is also low - the available current may only be a few milliamps.
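The figures above can be reproduced with a simple estimate. The stray capacitance value below is an assumption chosen purely for illustration (the text doesn't give one); the real peak depends entirely on the unknown stray C and circuit losses:

```python
import math

# Estimate for the 1 Henry / 5 ohm load quoted in the text, at 230V 50Hz.
V_RMS, FREQ, L, R = 230.0, 50.0, 1.0, 5.0

z = math.hypot(R, 2 * math.pi * FREQ * L)   # ~314 ohm impedance at 50Hz
i_rms = V_RMS / z                           # ~0.73A, as quoted in the text
i_peak = i_rms * math.sqrt(2)               # current if broken at waveform peak

# Hypothetical stray capacitance across the winding - an assumption only.
C_STRAY = 400e-9
v_peak = i_peak * math.sqrt(L / C_STRAY)    # ~1.6kV, assuming very low losses

print(f"i_rms = {i_rms:.2f} A, transient peak ~ {v_peak:.0f} V")
```

With realistic winding losses the actual peak will be considerably lower, which is why the text describes 1.6kV as an upper bound.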
+ +If you do need to clamp this voltage (which is sporadic with random switching), it's up to you to consult datasheets and decide what to use. Some people may claim that use of a MOV is dangerous, but that's only true if it's underrated. It's a fact of life that components can (and do) fail, so sensible precautions must always be taken that a failure doesn't cause a risk of fire or injury. A MOV with an internal thermal fuse is a wise precaution.
+ + +An easy way to get a higher voltage rating from relays is to use two sets of contacts in series. The current rating isn't affected, but the effective open contact gap is doubled so breaking an arc becomes less challenging. In this instance though, the term 'high voltage' does not imply kilovolts, but AC voltages below 500V or DC voltages below 70V or so. True high voltage relays are another matter altogether, and may have contacts within a vacuum or a pressurised inert gas.
+ +Using conventional relays at higher than their design voltages is possible, simply by connecting contacts in series. You need to be certain that the dielectric strength of the contact insulation is up to the task (the datasheets may help there), and in general you can expect little or no help from relay manufacturers because you're using the product in a way that wasn't intended. An example of this arrangement is shown below.
+ +The way the contacts are arranged need not be exactly as seen above, and in some cases will be dependent on the relay contact pinouts and printed circuit board layout. The end result must be tested though, because there may be relay base pin or PCB spacings that aren't capable of withstanding the full voltage without flashover.
+ +Using this scheme, a common double-pole relay rated for 30V at 10A DC can now be used with a 60V DC supply. The snubber circuit is still a very good idea and it should not be omitted. If used with AC, in theory it would be capable of switching 500V, but the insulation and/or pin spacings may not be good enough to allow this. The maximum voltage as detailed in the datasheet really is the maximum and should never be exceeded.
+ +The above graph was adapted from a Schrack RT2 PCB mounting relay datasheet. It shows quite clearly that at maximum rated current of 8A, the DC voltage must not exceed 32V for a single pair of contacts, or 64V with two sets of contacts in series. As the load current is reduced you can apply more voltage, but the absolute maximum DC voltage is limited to 300V due to the relay base pin spacings (only 2.5mm between pin centres for the contact pins). As noted in the graph itself, these voltages apply for a resistive load. It's not stated, so assume that the voltages and currents shown apply when there is no snubber circuit in parallel with the contacts. However, even with a snubber, it's better not to exceed the voltages and currents suggested by the maker.
+ + +Never use a pair of DPDT contacts on the same relay to reverse the polarity to a motor or other load. It may be economical, but it's a disaster waiting to happen. The contact clearances are small in most relays, and applying the full voltage across the NO and NC contacts is asking for trouble. Should an arc develop, it will be directly in parallel with the supply, and will have very low series resistance (as shown in Figure 5.1). The diagram below shows the right and wrong way to do it.
+ +With most motor applications you need to be able to turn off the motor anyway, so using two relays isn't a major penalty. The other problem with using a single relay is that it can be switched from forward to reverse with no intervening stop period, so the motor will draw extremely high current and may be damaged. The circuit shown as 'Do NOT Do This !' is positively dangerous, to the power supply, the motor and the relay.
+ +The relays are shown de-energised in both cases. To switch the motor properly, use two relays ('Do This Instead' in the drawing). The circuit is not too different from a transistor 'H-bridge', and as with the transistor version you must ensure that both relays can never be operated at the same time, as that will short circuit the power supply. If you use relays with three sets of contacts it is possible to devise a lock-out that will prevent both relays from being energised simultaneously. The lock-out circuit can also be done electronically, in the circuitry that drives the relay coils.
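The lock-out requirement can be sketched as simple logic. This is a minimal illustration of the electronic interlock mentioned above; the function and relay names are hypothetical, and in hardware the same thing is done with a spare contact set or in the coil-drive circuitry:

```python
# Minimal sketch of the lock-out for the two-relay reversing circuit:
# both relays must never be energised at the same time, as that would
# short-circuit the power supply.

def drive_relays(forward_request, reverse_request):
    """Return (relay1, relay2) coil states, never allowing both on at once."""
    if forward_request and reverse_request:
        return (False, False)    # conflicting request: fail safe, motor off
    return (forward_request, reverse_request)

print(drive_relays(True, True))    # conflicting inputs give (False, False)
```

A real implementation would also enforce a dead time between direction changes, so the motor stops before being reversed.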
+ +I've shown both relays as DPDT (2-Form-C), but you can use 2-Form-A (double pole, normally open contacts only), and you only need to be concerned with the general principles of arc suppression. There will only ever be minor arcing across the contacts with low voltages, but for higher voltages you will need to use snubbers for arc suppression. In the second circuit there are two sets of contacts in series, so 30V DC relays can withstand 60V DC.
+ +When Relay 1 is operated, the positive supply is connected to the left side of the motor, and negative on the right. Relay 2 reverses the polarity. When both relays are at rest (de-energised), the motor has no power. This isn't the only way it can be done of course, but the general principles will be the same.
+ +Sometimes, it's required that the motor should stop as quickly as possible. The easiest way to achieve that is to short circuit the motor when it's turned off. Figure 5.2 shows how this can be done. When both relays are de-energised or energised, the motor is shorted to either the +ve or -ve supply. This removes any constraint about having both relays on at the same time, but at the same time, the motor will always be shorted when it's not running. For some applications this is a good thing, but not always.
+ +With both relays de-energised, the motor windings are both connected to the +ve supply. If Relay 1 is operated, current flows through the NC contacts of Relay 2, through the motor, and then to GND (negative supply) via the NO contacts of Relay 1. The process is reversed when Relay 2 is energised.
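The contact logic of this braking arrangement can be tabulated. The terminal mapping below follows the description in the text but is otherwise illustrative:

```python
# Sketch of the Figure 5.2 contact logic: when both relays are in the same
# state the motor is shorted (to +ve or -ve); opposite states drive it.

def motor_connections(relay1_on, relay2_on):
    """Return (left, right) motor terminal polarity; equal means shorted."""
    left = '-' if relay1_on else '+'     # Relay 1 pulls its side to GND
    right = '-' if relay2_on else '+'    # Relay 2 pulls its side to GND
    return (left, right)

def motor_running(relay1_on, relay2_on):
    left, right = motor_connections(relay1_on, relay2_on)
    return left != right                 # differing polarity drives the motor

for r1 in (False, True):
    for r2 in (False, True):
        print(r1, r2, motor_connections(r1, r2), motor_running(r1, r2))
```

The table makes the key property visible: there is no combination of relay states that shorts the supply, which is why no interlock is needed with this scheme.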
+ +Choose the method that provides the functionality you need, either with or without the short across the motor when it isn't being used. Be aware that shorting a running motor can generate some serious mechanical stress, and it's not always the best option. You'll need to test your motor to ensure that the stress of a short when it's at maximum speed doesn't create problems.
+ +You must be absolutely certain that the arc drawn from the contacts opening under load cannot be sustained. If that happens, the relays and power supply will be destroyed, there will be a great deal of smoke, and there won't be much left after the DC has done its worst. It's a nice, simple way to reverse a motor, but it has dangers that you must understand. Relay selection is critical if you use this method.
+ + +Many loads show significant inrush current, and that creates considerable stress on the contacts when they close. Some examples are listed below, but there are many variations. Tungsten lamps are being phased out all over the world, but they will still be used for many industrial processes and will never go away completely. Toroidal transformers are much worse than transformers with E-I laminations, and some electronic loads include active inrush current limiters but most don't. Stray capacitance on long wiring runs might seem an unlikely source of inrush current, but it can be a real problem - especially since the impedance is very low. I suggest that you read the article Inrush Current Mitigation for more info.
+ +Examples of loads that produce significant inrush current transients at contact closure are as follows ...
1 - Tungsten lamps, where cold resistance is 7% to 10% of their normal operating resistance
2 - Transformers and ballasts, where inrush may be 5 to 20 times their normal operating current
3 - Electronic loads, typically power supplies for appliances, computers, lighting, etc.
4 - Large AC solenoids and most motors
5 - Capacitors placed across contacts, or capacitive loads with no (or inadequate) series current limiting resistance
6 - Stray capacitance in long cable runs
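The tungsten lamp case (item 1) is easy to quantify. A quick sketch for a hypothetical 100W, 230V lamp, assuming a cold resistance of 8% of the hot value (within the 7-10% range quoted above):

```python
# Inrush estimate for a tungsten lamp: cold resistance of 7-10% of the hot
# value implies a 10-14x current surge. Hypothetical 100W / 230V lamp.
V, P = 230.0, 100.0
hot_r = V**2 / P               # ~529 ohm at operating temperature
cold_r = 0.08 * hot_r          # assumed: cold resistance is 8% of hot
i_run = V / hot_r              # ~0.43A running current
i_surge = V / cold_r           # ~5.4A at the instant of switch-on
print(f"Surge is {i_surge / i_run:.1f}x running current")
```

The surge decays in a few tens of milliseconds as the filament heats, but the contacts must close into it every time.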
There are few choices for the hobbyist or even industrial designers - use relays that have heavy duty contacts, with good thermal and electrical conductivity and welding inhibitors. This will typically mean a silver + cadmium oxide alloy for the contacts, or perhaps silver tin oxide. For most power switching functions, 10A, 250V AC relays are common and very reasonably priced, and especially for hobbyist applications few circuits need more. For example, saving a few cents by choosing a 5A relay for a 4A circuit would just be silly. Industrial systems are very different of course, especially since some equipment may subject the relays to a torturous on/ off cycle.
+ +For large toroidal transformers (anything above 300VA), a 'soft start' circuit such as Project 39 is recommended. That uses relays, and the recommended relays are 10A, 250V types. These were selected because I know they will take the abuse, they are readily available and inexpensive. In general, a soft start facility is highly recommended for use with transformers, and if possible the peak inrush current should not be greater than the relay's maximum current rating. This ensures a long contact life with normal usage.
+ +An inrush limiter can also be used with tungsten filament lamps, and this will not only reduce the very large current surge, but prolongs the life of the lamps because there's reduced thermal and magnetic shock. Lamps can also benefit if driven by a solid state relay with zero-crossing switching. This isn't as good as a properly designed inrush limiter, but it does reduce the starting current quite significantly for low wattage lamps. Very high power lamp filaments have considerable thermal inertia so zero-voltage switching may not be quite so successful.
+ +Inrush 'events' aren't limited to inductive, tungsten filament or electronic loads though. Many installed fluorescent lighting systems have power factor correction (PFC) capacitors wired in parallel with each luminaire, and these present almost a dead short circuit at the moment of power-on. The initial surge current can be astonishingly high, and is only limited by the impedance of the wiring. These circuits cause great stress on any switch or relay that's used to control them, but there are few commercial soft-start units available. This becomes an extraordinarily complex problem for large installations, and while it's very interesting, it's not possible to try to cover it here. PFC capacitors are also used with motors and other inductive loads, and they cause problems there too.
+ + +Most inductive loads have an iron core, and the high inrush current is caused by core saturation when power is applied. This applies to all AC powered inductive loads - DC is different and will be looked at separately. A very few AC inductive loads may not use an iron core at all, so saturation is not a problem. However, I can't think of any off hand, so there's not much point discussing something that is unlikely to be found in any real application.
While it might not sound like it could possibly be true, the optimum part of the AC waveform at which to switch any inductive load is the peak of the AC waveform. One might assume that zero volts would be ideal, but one would be very wrong. This is simply because of the way an inductor works. When presented with an initial high voltage, the current cannot increase instantly, but rises at a rate determined by the inductance and the circuit resistance/ impedance. If we have a circuit resistance of 10Ω and we apply 325V DC to a 10H inductor, the initial current is zero, and after 10ms the current will only have risen to about 323mA. It will take over 2.5 seconds before the current reaches 30A, with the maximum current limited by the resistance. However, this assumes an inductor that can never saturate, and these are few and far between (only air-cored inductors are free from saturation).
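The numbers above follow directly from the standard first-order rise of current in an RL circuit, i(t) = (V/R)(1 - e^(-Rt/L)). A quick check with the same values:

```python
import math

# Current rise in the 10H / 10 ohm example above, assuming an ideal
# (non-saturating) inductor with 325V DC applied.
V, R, L = 325.0, 10.0, 10.0

def current(t):
    """i(t) = (V/R) * (1 - e^(-Rt/L)) for a series RL circuit."""
    return (V / R) * (1 - math.exp(-R * t / L))

i_10ms = current(0.010)                        # ~0.32A after 10ms
t_30A = -(L / R) * math.log(1 - 30.0 * R / V)  # ~2.57s to reach 30A

print(f"i(10ms) = {i_10ms*1000:.0f} mA, time to 30A = {t_30A:.2f} s")
```

The final current of 32.5A is set purely by the resistance, which is why the text notes that a real (saturating) core changes the picture entirely.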
+ +A transformer or other AC inductive load may well have an inductance of 10H, and the steady state magnetising current will typically be less than 50mA - often much less (especially for toroidal transformers). Before you continue with this discussion, I strongly recommend that you read the article Inrush Current Mitigation. This article includes oscilloscope traces and other material that fully explains the phenomenon and how to deal with it.
+ +If the mains to any inductive load is switched at the peak of the AC waveform, inrush current is limited to a comparatively safe value. This can be combined with a soft-start circuit using resistors or thermistors, combined with a relay to short them out after the inrush event has ended. Many designs using thermistors omit this part, so after a momentary power outage the peak current is limited only by wiring and circuit resistance, because the thermistors are still hot and at their minimum resistance. This can create havoc, with tripped circuit breakers (for example) causing a potentially dangerous situation to arise.
While switching at the peak of the AC waveform is highly desirable to minimise inrush current, it also creates a very fast risetime pulse on the mains that may create problems for other equipment. It's also very difficult to do with any accuracy with EMRs, because each different type has a different pull-in time, which changes with age and may even be affected by temperature. Once EMRs are synchronised with the mains, we also get the problem of unidirectional contact material transfer - just as we do with DC. If this is attempted, the microcontroller must be programmed to alternate the polarity at which switching occurs, so the relay operates on positive half-cycles for 50% of the time and on negative half-cycles for the other 50%. Why a microcontroller? It's extremely difficult to even attempt synchronised switching with anything else.
+ +The only sane way to attempt any form of switching that's synchronised to the mains waveform is to use a solid state relay (SSR). Despite their potential problems (especially with electronic loads), they can be triggered very accurately at the time you require, and for difficult loads you can simply include an electromechanical relay in parallel. This isn't as silly as it might sound at first. The SSR provides accurate control of the point where the AC waveform is switched, and it only needs to be in circuit for a couple of milliseconds.
+ +The general idea is shown above. To trigger the circuit on, both inputs will go high together. The SSR will trigger immediately, and a few milliseconds later the contacts will close. To turn off, the EMR is switched off first, and enough time has to elapse to ensure the contacts are fully open. Then the drive to the SSR can be removed, and it will turn off by itself as the current passes through zero. You might wonder why a snubber has been included. You may not need it, but if there's significant line inductance between the relay and load, there is a possibility that an inductive 'kick' (back-EMF) may re-trigger the SSR. The snubber slows down fast risetime pulses and prevents over-voltage from back-EMF from the load or wiring.
+ +Even if used for high current loads the SSR should run cool, because it only ever has to handle half a cycle of AC. The thermal inertia of the package will be sufficient to prevent overheating provided the switching duty cycle is fairly low. For rapid switching the SSR may need a heatsink, but it will be much smaller than would be the case without the relay.
+ +When the EMR takes over, the many and 'interesting' problems that can occur with an SSR and electronic load are eliminated. When the load is switched off, the EMR should always release first so the load current is then broken by the SSR. 20ms (16.66ms for 60Hz) is plenty of time for this to happen smoothly and cleanly - every time. I built an inrush current test unit that has just that - an SSR is used to make and break the circuit, and the electromechanical relay carries the current after it's triggered.
+ +Inductive loads not only have the inrush problem, but if the circuit is broken while the load is drawing current, you get the back-EMF problems discussed earlier as well. The parallel relay + SSR solution deals with that too, because the SSR will always cease conduction as the current passes through zero. The SSR doesn't arc and although the normal relay has the full voltage across its contacts, there won't be an arc because they are fully open by the time the SSR opens the circuit.
+ +The benefits of the hybrid solution have not been ignored, and they are used in industrial applications. Several manufacturers make hybrid SSR/ EMR combinations with the required logic built-in. One major benefit quoted is the dissipation of an SSR by itself, which will be around 1W for each amp of load current. A conventional relay has extremely low losses by comparison, so this allows very high power relays to be made without the need for a heatsink, and without the contact erosion that comes with all EMRs switching appreciable current and voltage.
+ +It's very important to understand that SSRs using TRIACs or SCRs cannot be used with DC. Both of these devices require the current to fall to zero before they will switch off, and that doesn't happen with DC. There is a device called a 'gate turnoff' SCR (GTO-SCR or GTO thyristor), but they are usually quite difficult to use and are mainly employed in large industrial controllers. They are commonly used in high power inverters and variable speed motor drives, and will not be covered here because they are not used as relay substitutes.
+ +It's also important to note that SSRs do not provide the complete circuit isolation that you get with an EMR. There will always be some leakage current, because the thyristors are semiconductor devices and do not have infinite impedance when turned off. The snubber circuit (if used) makes leakage worse, because the capacitor will pass an AC current proportional to its value. The leakage current must be considered in an application as it may cause some loads to malfunction.
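The snubber's contribution to leakage is easy to estimate, since the capacitor passes current proportional to its reactance. The 100nF value below is an assumed (typical) snubber capacitance, not a figure from the text:

```python
import math

# Leakage current through a snubber capacitor across an 'open' SSR.
# I = V * 2 * pi * f * C (current through a capacitive reactance).
def snubber_leakage(v_rms, freq, cap):
    return v_rms * 2 * math.pi * freq * cap

i_leak = snubber_leakage(230.0, 50.0, 100e-9)   # ~7mA at 230V 50Hz
print(f"Snubber leakage: {i_leak*1000:.1f} mA")
```

Several milliamps is more than enough to keep a small load (a neon indicator, an LED lamp, a relay coil) partially energised, which is why an 'off' SSR can still misbehave with light loads.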
+ +DC inductive loads include relay coils, solenoid valves, magnetic clutches or brakes, and motors. A diode in parallel with the load will eliminate the back-EMF, but as mentioned earlier this will slow down the release of solenoids of all kinds (including relays). The remedies are exactly the same as those discussed for relays in Part 1 of this article, and may include just a diode where release time is not critical, or diode plus a resistor or zener if a small delay can be tolerated. Where the minimum possible delay is needed, you'll need to use a bidirectional TVS or perhaps a MOV, and the switching device (or SSR) will have to be rated for the worst case voltage peak when power is removed.
+ +As with any DC load, contact arcing is the primary concern. At voltages below 30V and currents less than 10A, there are many low cost relays that will do the job just fine, but higher voltages will create problems. Snubbing circuits are a start, but you may also need to use series contacts to ensure that the arc can be extinguished with 100% reliability. If at all possible, use a MOSFET, IGBT or transistor with a high enough voltage rating to withstand any back-EMF (after clamping it with a TVS or MOV of course). With no clamp, expect peak voltages of 500V to 2kV, especially with circuits with high inductance.
+ + +In most areas, fully capacitive loads are very uncommon, but as mentioned above there are countless places where capacitors are used in parallel with inductive loads to improve the power factor of the circuit. These create problems because of very high inrush current, and it may be necessary to include series inductors to reduce the inrush to manageable levels.
One very common load is the switchmode power supply. Despite the large filter capacitor, these are not capacitive loads: the mains is rectified and the DC is smoothed by a capacitor behind the bridge rectifier, and the diodes prevent the capacitance from presenting any reactive component to the incoming supply. They present a non-linear load only. This point seems to have been lost on many people (including electrical engineers who should know better), and is true whether you believe me or not.
+ +Where the capacitance does cause serious problems is at the moment of switch-on. The cap is fully discharged, and acts like a short circuit for the first few microseconds. Inrush current is limited only by the series resistance of the circuit. Attempting to use any thyristor based SSR for these loads is a disaster, and there are some interesting oscilloscope captures in the Dimmers & LEDs article that show what can go wrong. Where this becomes interesting is when the thyristor controller is supposed to be fully on. No problem with resistive or even inductive loads, but it's very different with electronic loads. Because these are so common, their behaviour needs to be examined.
+ +A typical electronic load is shown below, but the switchmode power supply is replaced by a resistor that draws the same power as would the supply itself. The problems are caused by the bridge rectifier and capacitor - not by the switchmode circuitry. A thyristor cannot remain turned on if the current through it is less than the holding current - this is a value specified in the datasheet. With an electronic load, no current can flow until the incoming voltage is higher than the voltage across the filter capacitor. Therefore, a TRIAC or SCR based SSR does nothing until the peak mains voltage is slightly higher than the cap voltage, even with continuous or pulsed gate current applied. When the SSR switches on, it does so with an extremely fast risetime. The only thing that limits the current peak is the mains wiring inductance and resistance, along with any (token) limiting circuits in the load.
+ +The circuit for the electronic load is very common, and is used at mains voltage and low voltages after a transformer. Parasitic lead inductance has not been included, but there's a token limiting resistor in the load itself, sized to keep its dissipation below 5W. Once the circuit reaches 'steady state' conditions, the SSR cannot conduct until the incoming mains peak is slightly higher than the capacitor voltage, and it will switch off again once current stops. This will occur just after the AC waveform peak. Because the conduction period is so short, the peak current must be a great deal higher than normal. This type of load develops large peak current at the best of times - the SSR only makes it worse.
For the electronic load simulations, I used 230V AC at 50Hz, and the output power is 300W, dissipated by the load resistor. The peak current seen in the trace below is 84A, and remains above 42A for 50µs. The RMS current is 5.3A - four times higher than it should be for a 300W load. This will never be immediately apparent unless you take careful current waveform measurements. This must be done with an oscilloscope, because few RMS meters can handle the very high peak-average ratio, and they will read low. The SSR needs to trigger just 500µs after the incoming AC equals the DC voltage across C1 for the current waveform below to be generated.
+ +The red trace is the DC voltage, green is the mains input current and blue is the mains input voltage. With a switch or a conventional relay, the total load power isn't changed, but the peak current is limited to 10A and the RMS current is then 2.7A - a significant difference. This is the reason that thyristor based SSRs (SCRs or TRIACs) should never be used with this type of electronic load. The circuit and simulation have been exaggerated a little for clarity, because in reality there will be more resistance (largely from the mains power feed), and there will also be small inductors on the mains side of the rectifier to minimise interference. The peak current in a 'real' circuit driven this way will probably be less than half that measured here, but at 40A peaks that's still very stressful on the components. This is also a repetitive high current, so the SSR would need to be rated for the worst case peak current - continuously.
A hybrid relay is another matter. If it's designed to switch on at the mains zero crossing, with the load taken up by an EMR immediately thereafter, there's no problem. Inrush current is minimised, there's no contact arc, and the load switches off when there's no current. That's an ideal situation that can only be achieved with a hybrid SSR+EMR circuit. Electronic loads pose special problems, and if you haven't investigated them thoroughly (with bench tests to verify your theory) it's quite easy to miss the problems, and you end up with equipment that fails (or doesn't work) for no apparent reason.
+ +Just in case you were wondering, using an SSR with zero-voltage switching (for an electronic load) but without a parallel EMR may not work at all. By the time the incoming peak voltage is high enough to allow current to flow, the zero crossing detector circuit will have inhibited switching, so nothing will happen. A zero voltage switching SSR can only work if it's shorted out by relay contacts before the first half-cycle has completed.
+ +Note that using zero-voltage switching for inductive loads (including transformers) results in the maximum possible inrush current, and must be avoided.
+ + +Hybrid relays were suggested above, and while you can certainly build your own, you can also buy them ready-made [ 7 ] (example only, others also exist). They are made by quite a few different companies, and are designed specifically to solve the problems of both SSRs and EMRs, as described above. Contact arcing is eliminated, so the EMR's life is not reduced by arc corrosion, and the heat problems of SSRs are eliminated by the bypass system. A heatsink isn't needed, because power is dissipated for only 10ms or so. However, there will probably be a limitation on the number of on/ off cycles in a given period.
These have their own page, as the possibilities are extensive. To see information on the different types, see Hybrid Relays using MOSFETs, TRIACs and SCRs. Because they are specialised (and expensive) you may be tempted to build your own, and provided you have the skills to build it (and verify every aspect of its function and safety) there's no reason not to do so.
+ +Don't expect to be able to rush out and buy one easily, because they are considered as fairly specialised industrial devices, but they do exist. As described earlier in this article, the most common arrangement is a TRIAC to perform the actual switching, with an electromechanical relay in parallel to handle the load current. There is no longer any need for a heatsink for the SSR section, because it's only in-circuit for a very short time, and the EMR doesn't suffer from arcing because it's designed to open first. Once enough time has elapsed to ensure the contacts are open, the SSR is then turned off. This only takes a few milliseconds, so it doesn't create any issues with timing in most applications.
+ +Another major advantage is that EMI (electromagnetic interference) is reduced to almost nothing, because there is no arc from the contacts. This may be more important than anything else in large data centres (as just one example), where EMI can create havoc with nearby computer systems. Most are designed for AC only, and while there's no reason that a MOSFET hybrid relay can't be produced (which would allow DC operation), I only found a couple of references when I searched.
Note carefully! There are two types of hybrid relay. One uses a reed switch to activate a TRIAC or back-to-back SCRs, and while this does qualify for the term 'hybrid', it's not what's discussed here. The only hybrid that truly deserves the title is a semiconductor switch with an electromagnetic relay in parallel, which provides the benefits outlined in this section. Reed relay 'hybrids' are (fairly) readily available, but do not provide any significant benefit for normal uses compared to opto-isolated SSRs. They are useful for products that need immunity from ionising radiation (where photo-diodes will conduct due to radiation bombardment, e.g. X-rays, gamma rays, etc.).
There isn't a great deal of information available on the internal circuitry of any hybrid relays (other than the ESP article linked above). While there are circuit diagrams, most are greatly simplified. One of the more complete schematics found in an image search was that shown in Figure 7.1 on this page, and even that is greatly simplified as it doesn't show the control circuit needed to ensure that the EMR is open before drive is removed from the SSR section. Not that it's especially difficult - both relays are turned on at the same time (the SSR will always be first to conduct), and a simple timer will ensure that the EMR is deactivated perhaps 10ms before the SSR drive is removed.
+ +It appears that hybrid relays are comparatively 'new' components that have not reached their potential. Simple switching functions are the most common processes in power applications, and it's probably only a matter of time before hybrids become more readily available. Having said that, I certainly wouldn't suggest that you hold your breath waiting - many industry people probably don't even know these products exist. However, it is certainly one of the best ways to ensure long contact life and low EMI for any switching system.
+ +It should be noted that hybrid relays are not suitable for safety-critical applications, where it may be mandatory that protection is provided by mechanical separation of contacts with no part bridging the contacts themselves. Because they use semiconductors, hybrid relays can (and some will) fail, and the most common failure mode for any semiconductor is short-circuit. However, if used appropriately, this is quite possibly one of the best solutions currently available. Cost is (of course) a consideration here, and I was unable to locate any pricing info on any hybrid relay currently available.
One area where a MOSFET hybrid relay would be ideal is loudspeaker DC protection. DC voltages above 30V at any significant current are notoriously difficult to interrupt, causing a large and destructive arc across the contacts that can destroy the relay (as well as the 'protected' loudspeakers). A hybrid solution takes these difficulties away, and the parallel EMR means that there is no added distortion because the MOSFETs are shorted out in normal operation. Unfortunately, this isn't quite as easy as it sounds, because of the requirement for floating power supplies to provide MOSFET gate voltage. This issue has been solved (at least in part) by the introduction of a new MOSFET driver IC (the Si8751/2), referenced in the ESP article and in Project 198 (MOSFET Solid State Relay). Also, see Project 227, which is a hybrid relay designed for loudspeaker protection.
To ensure maximum contact life, arc suppression is vitally important. The best solution is one that prevents the arc from igniting in the first place, but this can be very difficult to achieve. Use of snubbers, diodes, TVSs or MOVs will hopefully prevent the arc from starting, or at least will draw sufficient energy away from the arc so that it extinguishes well before the contacts reach their maximum separation. Be careful if you use MOVs, as they experience a 'wear-out' phenomenon that causes degradation over time. This can result in the MOV exploding or catching on fire (I've seen both happen at different times). A TVS diode is more expensive but likely to be more reliable unless the MOV is specifically designed for repeated over-voltage.
+ +Getting a reliable solution can take some experimentation, but if it's not done there is always a risk. As already noted, DC is fundamentally evil, and it can be very hard to prevent an arc from forming once you have a voltage over 30V or so. While solid state relays can solve the problem, they are not always appropriate. Most SSRs can't be used with audio signals because they create gross distortion. Bidirectional MOSFET relays are one solution, but they are expensive and are likely to remain so.
+ +Hybrid relays can be used, and with some ingenuity you can build your own, using a conventional relay, a TRIAC and optocoupler, a simple zero-crossing detector to get a reference point, and a microcontroller to look after the timing. This can be done with a budget 8-pin micro for most applications, and it's not at all difficult. If the load is inductive, you need to switch on at (or near) the peak of the AC waveform, and for capacitive, electronic or resistive loads (including incandescent lamps) you need to switch on just after the zero crossing.
+ +Electromechanical relays will nearly always have lower losses than their 'solid state' equivalent. Most TRIAC and SCR based SSRs will show a voltage drop of around 1V, and the device will dissipate around 1W per amp of load current. So, if the current is 10A you must be able to dissipate 10W of heat - that requires a heatsink. An equivalent EMR may have a contact resistance of less than 10 milliohms (0.01Ω), so the contact dissipation will be no more than 1W for the same current.
+ +Even this is higher than you'll normally find. Note that you can't measure the resistance with an ohmmeter because there's not enough current to ensure proper contact. I checked the octal relay I used for most of my testing, and my ohmmeter claimed over 0.6Ω, but a test using 1A DC and measuring the voltage across the contacts showed that the actual resistance was 12mΩ. This gives a dissipation of 12mW at 1A (calculated as I²R) which is easily handled by the contact assembly itself. A more recent test at 10A AC showed the resistance to be 6mΩ, so the contacts will dissipate only 600mW. Most power relays will be similar.
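The dissipation figures quoted above are easy to sanity-check. This is a minimal sketch (the ~1V SSR drop and the measured contact resistances are taken from the text), comparing an SCR/TRIAC SSR's V × I loss against an EMR's I²R contact loss.

```python
# Quick check of the dissipation figures quoted above (values from the text):
# an SCR/TRIAC based SSR drops ~1V, so P = V × I;
# an EMR contact dissipates I² × R.

def ssr_dissipation(current, v_drop=1.0):
    """SSR loss in watts, assuming a roughly constant ~1V on-state drop."""
    return v_drop * current

def emr_dissipation(current, r_contact):
    """EMR contact loss in watts (I²R)."""
    return current ** 2 * r_contact

print(ssr_dissipation(10))           # 10 W at 10 A - that needs a heatsink
print(emr_dissipation(1, 0.012))     # 12 mW at 1 A with 12 mΩ contacts
print(emr_dissipation(10, 0.006))    # 600 mW at 10 A with 6 mΩ contacts
```

The square-law term is why the EMR comes out so far ahead at high current, despite the milliohm-level resistance rising slightly with contact wear.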
+ +Part 1 - Types, Selection & Coils
+ 1 Relay Care
+ 2 ENG_CS_13C3236_AppNote_0513_Relay_Contact_Life_13c3236r.pdf
+ 3 ENG_CS_13C3203_Contact_Arc_Phenomenon_AppNote_0412.pdf
+ 4 ENG_CS_13C9134_Contact_Load-Life_AppNote_0613_13C9134_-_Relay_contact_performance_enhancement.pdf
+ 5 SSR + EMR Hybrid Relays
+ 6 Solid State Relay Handbook
+ 7 Hybrid Relay Switching - Echola Power Systems (The original link has gone, but there is some info on the Net.)
+ 8 NAiS COMPACT PC BOARD POWER RELAY - JW Relays (Matsushita Electric Works, Ltd.)
+ 9 Blowout Magnets - What They Are & Why Use Them? (Durakool)
Elliott Sound Products | Relays - Part III
You could be forgiven for thinking that this topic has been 'done to death', but relays are still one of the most efficient and cost-effective ways to switch power. DC is and always will be a problem, and unless you use a properly designed hybrid relay, contact arcing is an ongoing issue with voltages above 30V. While I have shown a number of solutions in various articles, a hybrid circuit that takes over conduction before there's more than 10V or so across the contacts remains the safest way to prevent contact arcing.
+ +This article is intended to tie up a few 'loose ends' in the first two articles, as well as provide additional information on topics that have been covered but not always in-depth. An example is the 'efficiency circuit', which is primarily intended to reduce the holding current for an electromagnetic relay (EMR). However, it can do a great deal more - in particular speed up both activation and deactivation, something that you won't find much information about elsewhere.
+ +Galvanic isolation is often a critical factor. This simply means that there is no 'galvanic' connection (via any conducting material) between the 'control' and 'controlled' sides. The requirements for isolation vary depending on the usage. Medical applications usually require very high isolation and test voltages and extremely low leakage, but a horn relay in a car generally requires no isolation at all (many share a terminal for the coil and contacts). Relay (and IC relay controller) datasheets specify the insulation resistance and maximum working voltage, but it's up to the user to ensure that the isolation barrier can't be breached in normal use. Possible breaches can be caused by insufficient creepage/ clearance distances on a PCB, internal debris created by an exploded capacitor, or moisture/ dust ingress, amongst many other possibilities (often application specific).
+ + +An 'ordinary' relay can switch DC easily if the current is low enough. For example, I'd have no hesitation using a standard relay to switch 100V DC provided the current is limited to less than ~250mA. Most small (PCB mounting) relays only have a small contact gap when open, in the order of 0.5mm. This is just sufficient to break 500mA at 100V, but it's right at the upper limit of the capabilities of most small relays and cannot be recommended. You may (or may not) find this information in the datasheet.
+ +Fortunately, there aren't many applications that require high-current DC to be interrupted by a relay. Loudspeaker protection is one, but that has been covered thoroughly already. Most circuits that use relays operate with AC, where the relay provides very effective isolation between low voltage circuits and hazardous voltages - e.g. the AC mains. The range of equipment that can be switched using a relay is almost unlimited, and they are ultra-reliable when correctly specified.
+ +That doesn't mean that there's nothing more to be said on the topic. It's also worth pointing out that there have been patents taken out for SSRs (in particular) that are fatally flawed. A patent document is often a good way to get 'new' ideas, but a design only needs to be sufficiently different from others and be 'novel' - i.e. not obvious to a person experienced in the field. The patented circuit may perform poorly (sometimes not at all!), so perusing patents is not always fruitful. A verified design means that (hopefully) the author has built and tested it, and can say with some certainty that it works as claimed.
+ +The relay style used for the following explanations and tests is shown above. This is the type of relay that's used with Project 39, and they are readily available from most suppliers. It's rated for 10A at 250V AC or 30V DC (3A with a power factor of 0.4). The coil is 12V, with a resistance of about 270Ω. It is unremarkable in all respects, and the contact separation of 0.5mm is typical of most relays of this type.
+ +With MOSFETs (and IGBTs), there is no static drive current, because the gate circuit is insulated from the rest of the device. However, for fast switching, you may need over 1A to charge and discharge the gate capacitance. A charge is placed on the gate to turn it on, and has to be removed again to turn it off. The current is determined by the effective gate capacitance, switching voltage rise and fall time, and is limited by any series resistance (generally between 4.7 and 22Ω). If the rise/ fall time is 1μs (pretty slow by modern standards), the charge current is determined by the voltage, rise/ fall time, and capacitance. For example ...
+ Icap = Vpeak × C / trise(fall)
+ Icap = 12V × 6nF / 1μs = 72mA
Most MOSFETs and IGBTs specify the gate charge in coulombs, so to convert from charge to capacitance, simply divide the charge by the gate voltage (typically 12V). A gate charge (Qg) of 71nC (an IRF540N for example) gives an effective capacitance of 71nC / 12V, or 5.9nF. This is something of an over-simplification though, because the gate charge varies as the drain-source voltage changes. A simulation of an IRF540N switched with a 12V, 1μs rise/ fall time pulse showed a peak gate current of up to 180mA. While important for high-speed switching, this isn't a major consideration for SSRs. Relatively low switching speeds do cause high dissipation, but switching is usually sporadic - generally less than one transition (i.e. 'on' to 'off' or vice versa) in any one-second period.
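The charge-to-capacitance conversion and the average gate-current estimate above can be sketched in a few lines. The IRF540N figure (Qg of 71nC at Vgs = 12V) is from the text; the function names are illustrative only.

```python
# Minimal sketch: effective gate capacitance from total gate charge (Qg),
# and the average gate current needed for a given rise/fall time.
# The IRF540N figure (Qg = 71nC at Vgs = 12V) is taken from the text above.

def effective_gate_capacitance(q_gate, v_gate):
    """C = Q / V: gate charge divided by gate voltage."""
    return q_gate / v_gate

def gate_current(v_gate, capacitance, rise_time):
    """I = C × dV/dt: average current to slew the gate in rise_time."""
    return v_gate * capacitance / rise_time

c_eff = effective_gate_capacitance(71e-9, 12.0)   # ~5.9 nF
i_avg = gate_current(12.0, 6e-9, 1e-6)            # 72 mA, as in the text

print(f"C_eff = {c_eff * 1e9:.1f} nF, I_gate = {i_avg * 1e3:.0f} mA")
```

As the text notes, this is an average; the instantaneous current is higher (the simulation showed up to 180mA), because Qg is not delivered uniformly across the transition.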
+ +There are several ways that the gate capacitance can be determined. One datasheet value you'll see is Ciss, which is the sum of the gate-source (Cgs) and gate-drain (Cgd) capacitances. For the IRF540N, that's 1,960pF (1.96nF). However, it doesn't consider the effect of feedback via Cgd, which increases the actual current that will be required from the gate driver. There are so many interdependencies that no simple formula can hope to provide an answer that's accurate, but fortunately we don't care much.
+ +We aren't making high-speed switchmode supplies, but a comparatively simple MOSFET/ IGBT relay. Being able to provide up to ~100mA instantaneous gate current is 'nice', but people also use photovoltaic optocouplers that can only provide a few microamps. Switching is slow, but it may not matter. This is where the designer has to do his/ her homework. It's always nice to know what you can get away with and what will come back and bite you. This article is not intended to cover gate charge in detail, and here the discussion ends.
+ + +Hybrid relays have been covered in an article and a project, and the project version has been built and tested to verify that it can break any likely DC fault current up to 20A or so. This would normally cause a fatally destructive arc that will not just burn the contacts, but will probably cause the entire contact structure to be completely destroyed. One major advantage of a hybrid relay is that the semiconductors don't have to carry the load current, other than for a brief period at switch-on and switch-off. For most applications, this means that smaller devices can be used, provided they are rated for the voltage and current of the supply and load. No heatsink is required, because they conduct for such a short time.
+ +A hybrid relay can completely solve any issues with breaking high-current DC, at any voltage up to the rated maximum. EMRs have the advantage that no external cooling is required at anything up to the maximum continuous current rating (DC or RMS). It's not unusual for the contact assembly to operate at an elevated temperature when used at full current. For this reason, many relays have a derating curve, similar to that shown for semiconductors. If the ambient temperature is greater than 25°C, the maximum current falls accordingly. This also applies to the coil, which is generally limited to a maximum of 120°C. Any heating from the contacts is also experienced by the coil, as they are in a sealed enclosure with mechanical interconnections. Not all datasheets show this information.
+ +Another form of a hybrid relay hasn't been covered, and that uses a miniature relay (most commonly a reed relay) to control a semiconductor switching stage. While this is a hybrid in the strict sense of the term, it doesn't solve any of the issues that afflict semiconductor switches (notably SCRs and TRIACs). It's uncommon (and irksome) to use a reed relay to activate MOSFETs, because there's normally no voltage available for the MOSFET gate(s). Providing an isolated voltage source is harder than it sounds, because the DC-DC converter must provide isolation that meets international standards for safety. This generally means that it must be rated for continuous operation with at least 275V AC between input and output, with anything up to 5kV used for testing. An example is shown next.
+ +The control is shown using a switch, but it can also be a transistor, small-signal MOSFET, logic circuit or whatever is available. Cheap and cheerful converters such as the commonly available B1212S-1W (12V in and out, 1W [83mA] rating) are completely unsuitable for mains usage, but they are fine for lower voltages. These are readily available for under AU$5.00, but it's not recommended that the voltage differential exceed ~100V for most. There are exceptions, but you have to look at the datasheets very closely if you need mains voltage isolation.
+ +A reed relay can also control SCRs or TRIACs. Their isolation voltage and current capacity are more than acceptable, but you must select the appropriate base - some have minimal clearance between control and switch pins. Reed relays are particularly rugged devices, and while the contact clearance is small, they are usually capable of at least 200V. High-voltage types have the contacts in a vacuum, and can switch up to 15kV (for a price of course). I tested a reed switch (with minimal contact clearance), and it arced at 1kV DC, but was perfectly able to make and break 500V DC. Used with 230V AC I'd expect it to be just fine, although I have no specifications that claim that's within ratings.
+ +The latest gate driver ICs solve all issues with providing a separate supply - see Project 198 for an example. There's another that will be covered shortly, as I have some samples on order and will run tests when they arrive. I did purchase an evaluation module, and these new ICs are very good.
+ + +There is a new type of relay available now, which is very different from anything we're used to. At present, Menlo Micro is the only known manufacturer, and these relays are electrostatically actuated. They are only available as an SMD part, and are designed for RF switching at up to 3GHz. They can be used with lower frequencies (including DC), but there are some particular restrictions if you wanted to switch DC.
+ +These are MEMS (micro electromechanical systems) devices that use IC processing techniques to fabricate sub-miniature mechanical structures. The technique isn't as new as you may think though, as it was patented by NASA in 1974. The first patent I came across dates back to 1933! There are quite a few patents covering this technique, but adoption has been minimal because a comparatively high voltage is needed to create the piezo deflection needed to activate a set of contacts. The Menlo Micro device uses an internal charge-pump to generate the voltage needed. The biggest issue with this technique is getting a fairly rigid piezo element to flex far enough to operate a set of contacts.
+ +Because these are highly specialised, it's unlikely that too many hobbyists will be experimenting with piezo relays any time soon. I have no idea of the pricing - this isn't disclosed, so we can probably assume that they are expensive.
+ +Electrostatic relays are widely represented in patents, but are few and far between in real life. The general idea obviously appeals to any number of inventors, but the requirement for a high actuating voltage and minimal contact pressure mean that they are generally impractical. There may well be some specific areas in research where they can be utilised, but don't expect to find any from major suppliers. Since they work by electrostatic attraction/ repulsion, the available force falls off rapidly (with the square of the spacing) as the electrodes are moved apart. With 'reasonable' voltages, the electrodes must be very closely spaced (and therefore provide minimal travel), and may require a vacuum to prevent arcing and/or contamination. I don't expect that any readers will ever use one, and MEMS processes are the most likely to produce a usable device. I'm not convinced that there's much merit overall, but the actuating power will be very low, which may be an advantage in some systems.
+ +If you want more information you're limited to looking through patent documents, as I found almost nothing other than patents and 'scholastic' papers that have to be purchased. I expect few people will bother.
+ +Note that the term 'static' relay is sometimes used when referring to solid state relays, as there are no moving parts. This is rather unwelcome terminology, as it only adds confusion without adding useful information to the reader. 'Static' and 'electrostatic' have very different meanings, although we refer to 'static electricity' as the high voltage generated by walking across carpet (for example) and the resulting discharge - often accompanied by the person exclaiming 'rudeword!' with some gusto.
Something that was covered in the Relays, Part I article is a so-called 'efficiency circuit', used to reduce power once the relay has pulled in. However, the explanations were simplified. Most relays will continue to hold with as little as 1/10th of their rated voltage, but it's safer to not allow the current to fall by more than ~65% from the rated maximum. For example, a 12V relay may have a 'must release' voltage of 1.2V, but it wouldn't be sensible to allow the coil voltage to fall below 4V (33% of the rated voltage). For the purpose of this explanation, a 'small' relay is a 10A SPDT (single-pole, double-throw) type with a 12V coil having a resistance of 270Ω.
+ +Mostly, an efficiency circuit is expected to reduce the coil current, but it can do so much more if you need it. The greatest gain can be in speed - with an efficiency circuit the relay activation and deactivation times can be reduced significantly. This point is rarely made, and I've not seen any analysis performed elsewhere to show how this can be achieved.
+ +There are two things that you can do. The first is to use a higher supply voltage to activate the relay. Pull-in time will be reduced dramatically, and the efficiency circuit will then reduce the voltage to (say) 4V while the relay is activated. The second trick is to use a resistor instead of a diode in parallel with the coil (actually a resistor and diode in series).
+ +The resistor speeds up the deactivation time, but because the coil is only receiving 33% of its normal current, the relay will drop out even faster. The lower current means less stored energy in the magnetic circuit, so it will release in less than 5ms (for a relay of the general type shown in the photo).
+ +If the coil resistance is 270Ω, the normal current would be 44mA. If we reduce that current to 15mA using a 1.2k resistor (operating current will be 16mA), the holding power is 390mW (vs. 530mW). The only other part is a capacitor, selected to ensure that the coil gets the full 24V at power-on, and still has at least 12V after ~10ms. That indicates a 33μF cap (close enough).
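The holding-current arithmetic above works out as follows, using the component values given in the text (24V supply, 270Ω coil, 1.2k dropping resistor). This is just the quoted calculation, shown explicitly.

```python
# Holding-current sketch for the efficiency circuit described above.
# Component values are from the text.
V_SUPPLY = 24.0      # volts (doubled supply)
R_COIL = 270.0       # ohms, relay coil resistance
R_SERIES = 1200.0    # ohms, series dropping resistor

i_hold = V_SUPPLY / (R_COIL + R_SERIES)    # ~16 mA holding current
p_hold = V_SUPPLY * i_hold                 # ~390 mW total (coil + resistor)
p_normal = 12.0 ** 2 / R_COIL              # ~530 mW from 12 V directly

print(f"{i_hold * 1e3:.1f} mA, {p_hold * 1e3:.0f} mW vs {p_normal * 1e3:.0f} mW")
```

The capacitor across the dropping resistor isn't modelled here; its only job is to supply the full 24V at the instant of turn-on.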
+ +Even if you only use a 12V relay with a series resistor (270Ω for this example) and a 24V supply, pull-in time is already reduced. This is because the relay coil has inductance, and that delays the current risetime. With a higher available voltage (and resistance), the di/dt (aka ΔI/Δt, the rate of change of current) is increased. If we assume ~1.5H coil inductance, the risetime is halved when the 270Ω coil is powered from 24V via a 270Ω series resistor. Adding a 33μF capacitor in parallel with the resistor halves that again!
+ +With this, we can decrease the risetime from 12ms (12V supply) to 6ms (24V supply with a 270Ω series resistor), down to 3ms (33μF in parallel with the resistor). The current risetime is one of the things that affects the on-time, with the rest depending on mechanical inertia and the distance that has to be covered - almost always less than 1mm with small relays.
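The halving of the current risetime can be illustrated with the standard first-order RL model, i(t) = I∞ × (1 - e^(-t/τ)). The inductance (~1.5H) and coil resistance are from the text, but the 30mA pull-in threshold is an assumed figure purely for illustration, so the absolute times are indicative only; the halving ratio is the point.

```python
import math

# Illustrative sketch (not a full relay model): time for coil current to
# reach an assumed pull-in threshold, using the first-order RL response
# i(t) = I_final × (1 - exp(-t/τ)), where τ = L / R_total.
# L = 1.5 H and R_coil = 270 Ω are from the text; I_PULLIN is a guess.

def time_to_current(v, r_total, inductance, i_target):
    """Solve i(t) = i_target for t in a series RL circuit."""
    i_final = v / r_total
    tau = inductance / r_total
    return -tau * math.log(1.0 - i_target / i_final)

I_PULLIN = 0.030  # amps - hypothetical pull-in current for illustration

t12 = time_to_current(12.0, 270.0, 1.5, I_PULLIN)          # 12 V direct
t24 = time_to_current(24.0, 270.0 + 270.0, 1.5, I_PULLIN)  # 24 V + 270 Ω

print(f"{t12 * 1e3:.2f} ms -> {t24 * 1e3:.2f} ms (halved)")
```

Doubling both the voltage and the total resistance leaves the final current unchanged but halves τ, so the time to any given current threshold is halved - which is exactly the effect described above. The parallel capacitor improves on this further by briefly bypassing the series resistor.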
+ +Release time also depends on the coil current. If the current is maintained by a parallel diode, the current takes a significant time to fall below the 'must-release' value. Without a diode, the current collapses almost immediately, but this creates back-EMF that can destroy the driving transistor. It's not at all unreasonable to expect the back-EMF to exceed -400V with a 12V relay. A diode reduces that to -0.65V, but current is maintained for around 10ms. This delays the magnetic release, and the mechanism still has inertia that delays the release a bit longer. The 'typical' release time for a standard small relay is around 10ms when a parallel diode is used.
+ +This can be reduced to around 4ms simply by using the diode in series with a resistor having the same value as the coil. The back-EMF will be 24V - it can be determined for any resistor value by multiplying the coil voltage by the ratio of the external resistance to the coil resistance, plus 1. A 270Ω coil with an external 560Ω resistor will generate a back-EMF of about 36.9V (with 12V across the coil). The diode now only serves to prevent the external resistor from dissipating power needlessly.
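The back-EMF formula described above can be expressed as a one-liner. Both examples from the text are shown (the 560Ω case comes to roughly 36.9V).

```python
# Back-EMF estimate for the resistor-plus-diode snubber described above:
# peak reverse voltage ≈ V_coil × (R_external / R_coil + 1).
# Component values are from the text.

def back_emf(v_coil, r_coil, r_external):
    """Peak back-EMF when coil current is interrupted into R_external."""
    return v_coil * (r_external / r_coil + 1.0)

print(back_emf(12.0, 270.0, 270.0))   # 24 V with R_ext equal to the coil
print(back_emf(12.0, 270.0, 560.0))   # ~36.9 V with a 560 Ω resistor
```

The trade-off is plain from the formula: a larger resistor releases the relay faster but makes the driving transistor withstand a higher peak voltage.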
+ +If an efficiency circuit is used, the coil current is reduced during normal operation, so there's less energy to dissipate when the current is interrupted. This makes the release time faster again. If you really need the fastest possible activate and release times from an EMR, the next circuit employs both a high-speed efficiency circuit and a rapid dropout due to reduced operating current and allowing a higher back-EMF.
+ +The circuit shows both techniques in use, using a higher than normal voltage and a very basic efficiency circuit that drops the coil voltage to just under 4V after C1 has charged. With 2kΩ in parallel with the relay, it releases in under 2ms, vs. ~6ms if the resistor is shorted (leaving just the diode). Because the voltage is reduced, the back-EMF is limited to about -17.5V.
+ +Although the 'control' is shown as a switch, it can be a transistor or a small-signal MOSFET, wired either in the +18V or 'ground' connection to the circuit. A typical use may be a 2N7000 wired from the bottom of the relay circuit to ground/ -Ve supply. The efficiency circuit is a single 'block' of circuitry, with the relay, 2 resistors, one diode and one capacitor. It's polarity sensitive, but can otherwise be used with any switching circuit that you may already have wired in the equipment. Note that the back-EMF is higher than normal though - typically about 30V with the values shown (and the 18V supply). The supply voltage can be anywhere between 15V and 24V, with R1 adjusted to suit. R2 will typically be somewhere between 1k and 2.2k - a higher value releases faster but has a higher back-EMF.
+ +This arrangement provides more than enough current to keep the relay activated, but still ensures that the release time is minimised. It's possible to get it better, but 2ms is very respectable, and the arrangement only adds three cheap parts - two resistors and one capacitor. Operating current is reduced from the nominal 44mA to ~22mA, a total power dissipation of 264mW (compared to 528mW for the relay alone). Pull-in time is around 4ms - much faster than if the relay were powered from 12V.
+ +It's also worth examining the overall efficiency improvement. The coil (and added series resistor) power may be reduced to around two-thirds the normal (so from 530mW to 390mW as described above), but the contact dissipation will remain unchanged. It's unlikely that it will be increased, because the relay's armature will remain fully engaged. However, even if the contact resistance is only 10mΩ, the contact dissipation is 1W at a current of 10A. For the test relay, I measured a N/O contact resistance of 10.4mΩ when closed, and this is 'typical' for this style of relay.
+ + +The results described above are based on simulations, which are very accurate if all influences are allowed for. However, if I'm to make assertions about the operation of a relay, then a proper bench test has to be performed. Without that it's just supposition because we're dealing with an electromechanical sub-system.
+ +With an 18V supply and a 1k series resistor in parallel with 33μF (not 100μF as shown - I wanted to test 'worst case'), I measured 1.6ms dropout time with a 2k back-EMF resistor. Without the diode and resistor, dropout was only 1ms, but with no suppression the back-EMF was far too high (well over 100V). The static coil voltage was 3.8V, so the holding current was just over 14mA. Used 'normally', dropout was 10ms with 12V and an anti-parallel diode. Activation time was only 4ms with the efficiency circuit, vs. ~10ms without. The pull-in measurements are difficult to perform accurately due to contact bounce.
+ +I used a scope set up for single sweep, triggered from the relay supply (after the switch). Contact release was picked up using the second channel of the scope, with a resistor from an external supply to the 'N/O' (normally open) contacts, and with common connected to ground. This lets you see the instant that coil current is disconnected and also the instant that the N/O contacts open.
+ + +There is something you need to be aware of when an efficiency circuit is used. This is especially true if the supply voltage is the same as the relay coil's voltage. If the relay is activated/ deactivated repeatedly, there's a maximum operation rate that can be achieved. This is imposed by the feed circuit (R1, C1), and if you don't wait long enough for C1 to discharge, the relay may not reactivate.
+ +With the values shown, C1 can be considered fully discharged after 5 time-constants. One time constant is simply R1 × C1, so ~60ms. After deactivation, you need to wait for at least 300ms before attempting to activate the relay again. This isn't a real limitation, because if you tried to operate an EMR more than 3 times per second, it won't last very long. Even a relay with a claimed life of (say) 1×10⁶ (1,000,000) operations would only last for about 3.9 days.
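The timing and lifetime arithmetic above is trivial, but worth making explicit. The ~60ms time constant is the figure from the text (R1 and C1 themselves aren't repeated here).

```python
# Re-activation wait and lifetime estimate for the efficiency circuit above.
# τ ≈ 60 ms (R1 × C1) is taken from the text.

tau = 0.060                # seconds, one R1 × C1 time constant
wait = 5 * tau             # ~300 ms for C1 to be 'fully' discharged

ops_per_second = 3         # about the fastest rate the circuit allows
rated_life = 1_000_000     # claimed mechanical operations
days = rated_life / ops_per_second / 86_400   # 86,400 seconds per day

print(f"wait = {wait * 1e3:.0f} ms, life = {days:.2f} days")
```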
+ +If you need that sort of switching frequency, there's only one choice - an SSR. Use MOSFETs for DC, and a TRIAC (or back-to-back SCRs) for AC. There are countless industrial applications that do need fairly high repetition rates, but even there a maximum rate of 7 operations/ second isn't a limitation.
+ +Somewhat predictably, I have no intention of trying to cover every possible application for relays (of any kind), because they are limited only by the imagination of designers. If you are designing a circuit that requires switching, you need to select the best switching device to suit the application. I doubt that anyone would consider a switchmode power supply using an EMR to be sensible, even if so many explanations show 'switches' that look just like the schematic representation of a mechanical switch.
+ +You must also be careful if the assembly housing the relay is subjected to mechanical shock or vibration. Because the coil current is reduced, there's less magnetic force available to keep the relay closed. Mechanical shock might cause the relay to release spontaneously, so if vibration (etc.) is present, you must perform thorough tests to ensure that the relay remains activated under all foreseeable conditions. If not, you'll need to increase the holding current until it's stable.
+ + +There are other options for reducing the coil current, but most get complex and expensive fairly quickly. An integrated switch such as the MAX4624 can be used for example, but there are some serious drawbacks. For example, the IC has a maximum voltage rating of 5.5V, and if you used a 12V relay, the maximum voltage you can apply is only 10V using a sensible 5V supply. An example is shown next, but at almost AU$8.00 each, the MAX4624 will cost more than the relay it's controlling. The example comes from Stackexchange (by 'stevenvh'). It's an elegant solution, but is not without its problems.
+ +Cost aside, there's also an inevitable delay as the 100μF cap (C2) has to charge before the switch is allowed to change state. This delay is provided by R1 and C1. When the 'control' switch is closed, voltage is applied to the relay and C2 via D1. A few milliseconds later, the voltage on pin 1 is high enough for the MAX4624 to change state, and the relay voltage is boosted to about 9V, which should be enough for it to energise. C2 discharges, leaving the relay coil powered with about 4.3V, enough to keep it held in. Unfortunately, the control circuit has to provide the power to charge C2 and the relay current, making it a fairly unattractive proposition. It is clever, but somewhat impractical.
+ +Other switching schemes can be used instead, but most will simply add parts for no great advantage. The advantage of the circuit shown is that it lets you use a 12V relay with a 5V supply, with a resulting power saving. The disadvantage is that the relay must have a 'must activate' voltage of no more than 8V, and the IC is expensive. The fact that the 'control' circuit has to provide the current to charge C1 and power the relay is a further disadvantage. Control circuits are normally expected to be activated by minimal current. That can be achieved, but it needs more parts.
+ +One technique that has been used is to power the relay from a PWM (pulse width modulated) supply. This avoids dissipation in resistors, but circuit complexity is increased quite dramatically. I'm a little surprised that no-one has offered an IC solution, as it would be quite useful.
+ + +There are several ICs designed for driving relays, one of which is the DRV110 (TI), which is designed to provide a period of full voltage, after which the IC operates with PWM (pulse width modulation) to reduce the power. Everything can be selected with external resistors and capacitors, and an external MOSFET is used to drive the relay coil. This is a good option, but for relays that may only require 500mW or so it's not worth the effort.
+ +The circuit is adapted from the datasheet, and with Rosc grounded the default frequency is 20kHz. This is a simplified circuit, based on the 8-pin version of the IC. Some of the pin names are (IMO) suboptimal, and 'keep' doesn't quite measure up - the capacitor determines the time that full current is applied to the relay coil. If Rpeak is 0Ω, the maximum current is the default of 300mA. The 14-pin version of the DRV110 has a 'hold' pin, so the holding current can be specified. The default is 50mA, so this IC would be pointless with a general-purpose 12V relay with a 270Ω coil, as they only draw 44mA anyway.
+ +This type of device is well suited to relays that have a high coil current, particularly those that require minimal wasted current or where the maximum coil current is designed to be short-term. High voltage relays and small contactors are examples. A simple PWM efficiency circuit can be made using a 555 timer, with a bit of extra circuitry to stop oscillation (with a 'high' output) for a couple of seconds after power is applied.
+ +PWM efficiency circuits usually provide a useful reduction of the relay's release time, because the holding current is lower than normal. This isn't why they are used, but it comes free.
+ +A 555 timer makes a fairly nice PWM efficiency circuit, although it uses more parts than a dedicated IC. The example shown will reduce relay coil dissipation from ~520mW to ~130mW by halving the current after the timeout set by R1, C1 and Q1 (the oscillator starts after about 150ms). The circuit is presented as an example, but there's not much room for simplification. Ideally, you'd use a 7555 (the CMOS version) as that draws less current, and you don't need C2. If this were made using SMD parts it would be tiny, and the cost would be minimal. It is fully programmable if the oscillator is changed from the 'minimum component count' version to a standard astable. As it stands it's pretty good, but unless you are really worried about current drain there's probably little point.
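The quoted power reduction follows directly from P = I²R: halving the smoothed coil current quarters the dissipation. A quick check using the 270Ω coil from earlier in the article (the ~520mW and ~130mW figures in the text are rounded).

```python
# Why halving the (smoothed) coil current gives roughly one quarter the
# dissipation: P = I²R. The 270 Ω / 12 V coil values are from the article.

R_COIL = 270.0
i_full = 12.0 / R_COIL                 # ~44 mA at full voltage
p_full = i_full ** 2 * R_COIL          # ~530 mW
p_half = (i_full / 2) ** 2 * R_COIL    # ~130 mW - one quarter of full power

print(f"{p_full * 1e3:.0f} mW -> {p_half * 1e3:.0f} mW")
```

This also explains the 'free' reduction in release time mentioned above: the lower holding current means less stored energy in the magnetic circuit to dissipate at turn-off.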
+ +One type of relay hasn't been covered at all in the other articles, mainly because they are fairly uncommon and many people will never have heard of a 'step relay'. I don't have any, so the photo was 'borrowed' from the Net, but they are unique. The actuator operates a small wheel that either forces the contacts to open, or allows them to close. Momentary power will activate the relay, advancing the stepped wheel to the alternate position. The contacts are shown in the closed position in the photo. Note the distance the armature has to move. This indicates that the voltage/ current needed to change the contacts from open to closed (or vice versa) will be significantly greater than for a normal relay, but of course it's only momentary.
+ +Some allow a preset sequence, with up to four different sets of connections from a pair of contacts. These are generally fairly expensive, and aren't particularly readily available. The one shown has a single contact, but has provision for a second set that's not fitted. One issue with these (and bipolar latching relays) is that there's no simple option to provide feedback to a controller so it knows the current state of the contacts. This is most unfortunate, because if the controller and the relay are out of sync, there's a chance that 'bad things' can happen. Just how bad depends on the application.
+ +Without a feedback mechanism, one must go to some trouble to find out if the relay is open or closed. This adds complexity, and partly negates the gains obtained by using the step relay in the first place. With a plastic mechanism, I wouldn't expect the unit shown to have a long life, certainly not when compared to a conventional EMR. Is it useful? That depends on your application, and how much trouble you're willing to go to to provide feedback. Without that, a power failure could easily see a controller and the load(s) at indeterminate positions in their normal cycle. For example, the lights could easily be on when the controller thinks they're off, and vice versa.
+ +Without a separate set of isolated contacts to indicate the current state, you'd need to add a circuit to detect the presence of voltage/ current in the controlled circuit, and send that data via an approved isolation device to the controller. It's not a major issue - a couple of resistors and diodes plus an optoisolator will do it, but it's more circuitry that can fail over time. Using parts that aren't readily (and consistently) available leaves you open to a system that can't be repaired once the parts can't be obtained any more.
+ +A step relay that used to be used in the millions was the uniselector, used in old 'rotary' telephone exchanges. These were a work of art, beautifully made, precision stepping switches that were designed to be operated countless times a day. This was known as the Strowger 'step-by-step' system, named after the man who invented it. These exchanges ('central offices' in US parlance) were driven by telephones with rotary dials, but electronic phones were developed that could emulate 'decadic' dialling - a string of pulses corresponding to the digit on the rotary dial. Uniselectors had 10 active connections, corresponding to the digits '1' to '0' (1 to 10 pulses respectively). Later versions used both vertical and rotary positioning, providing greater flexibility. Predictably, a complete discussion is outside the scope of this article, but there's a lot of info on-line if you find this interesting.
Many years ago, we only had BJTs (bipolar junction transistors) for 'solid state' switching. They have been supplanted in almost every application by MOSFETs or IGBTs (insulated gate bipolar transistors). One of the main problems is simply base current - this must be provided to turn on a BJT, and the power it consumes is wasted. A BJT can have a very low saturation voltage (fully on), but that requires a significant base current.

If a BJT has an hFE of 100, you need to supply a base current of at least 1mA to switch 100mA efficiently. Normally, you'd add a safety margin and supply 2mA minimum base current. This becomes a real problem when you need to switch 10A or more, as most BJTs have reduced gain at high current, so even more base current is required. On the positive side, a saturated (fully on) BJT can have a low collector to emitter voltage, which may be around 550mV with a collector current of 10A (1A base current, MJL21194 transistor). The power lost at the collector is 5.5W, with another 1.3W dissipated by the base-emitter junction.

Compare that to a MOSFET (even a lowly IRF540N) - static gate current is zero, and the saturation voltage is ~440mV, due to the RDS(on) of 44mΩ at 25°C. It's not hard to see why MOSFETs have taken over for switching, and they are much faster as well.
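The static losses quoted above are easily verified. This sketch uses only the figures given in the text (MJL21194 BJT at 10A, IRF540N MOSFET); the ~1.3V base-emitter voltage is inferred from the quoted 1.3W base dissipation at 1A:

```python
# Static (fully-on) loss comparison: BJT vs MOSFET, using the text's figures.
# BJT loss = Vce(sat) * Ic + Vbe * Ib  (collector plus base dissipation)
# MOSFET loss = I^2 * RDS(on), with zero static gate power.

I_LOAD = 10.0     # amps being switched

# MJL21194 BJT figures from the text
VCE_SAT = 0.55    # volts, saturated at 10A
I_BASE = 1.0      # amps of base drive needed
VBE = 1.3         # volts (approximate, implied by 1.3W base dissipation)
bjt_loss = VCE_SAT * I_LOAD + VBE * I_BASE

# IRF540N MOSFET: purely resistive when fully enhanced
RDS_ON = 0.044    # ohms at 25 degC
mosfet_loss = I_LOAD ** 2 * RDS_ON

print(f"BJT: {bjt_loss:.1f} W total, MOSFET: {mosfet_loss:.1f} W")
```

The MOSFET not only dissipates less in the switch itself, it needs essentially no drive power, so the efficiency gap is even wider than the raw numbers suggest.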
There are a few applications where BJTs are commonly used, but they are almost all low power, low speed circuits where the limitations are not a concern. Usage in SSRs is close to zero, as they are generally unsuited to this application.

IGBTs (insulated gate bipolar transistors) share the low gate drive requirements of MOSFETs with the high voltage capabilities of bipolar transistors. They are generally faster than standard BJTs but slower than MOSFETs, and are most often used where high voltages and/or high currents must be switched. For example, the Toshiba GT40WR21 has a rated voltage of 1,800V, with a 40A current rating. IGBTs are also available as modules (3-phase, half-bridge [totem-pole] connections) with voltage ratings up to 4,500V and current up to 1,800A (in the same device!). However, if you have to ask the price, you can't afford one.

An IGBT that can handle 600V at 280A can be obtained for less than AU$12.00 if you ever need to handle that much power. An N-Channel IGBT (by far the most common) essentially combines a low-power MOSFET driving a high-power PNP transistor. They are particularly rugged, and are used in high-power SMPS, UPS systems and inverters (induction cooktops, microwave ovens, motor speed controls, etc.). Their total dissipation might be greater than that of a BJT, but with no gate current to speak of, the overall efficiency is very high. The market appears to remain strong, as new devices are released fairly regularly.
The internal structure of an IGBT is not two separate devices - everything is formed on one die. However, the 'equivalent circuit' is fairly accurate. IGBTs are thought by some to be 'old-hat' due to the availability of SiC (silicon carbide) and GaN (gallium nitride) MOSFETs, but they are still very common, especially where cost is at a premium. It's rare to see them used in SSRs, although it is possible. If used with AC, an 'anti-parallel' diode may be required, because there is often no intrinsic diode as found with MOSFETs (it's included in some IGBTs, but not in others).

Also unlike MOSFETs, IGBTs do not conduct bi-directionally - current can only pass from collector to emitter when the gate voltage is above the threshold. I haven't described any IGBT relays in detail in any of the articles covering SSRs and hybrid relays, simply because they are not suitable for use with audio, and MOSFETs are usually a better choice for medium-power AC control. However, there's no reason that an IGBT cannot be substituted where the designer thinks it's appropriate.
An example of an IGBT SSR is shown above. It uses the same scheme as shown in Fig. 1.1, with added anti-parallel diodes (D1, D2). These must be rated for the same current as the IGBT, because they have to carry the full current when the IGBT is reverse polarised. As noted above, this is not a common arrangement, but it will work well. Each IGBT will dissipate a peak power of 23W (at 10A, and based on a 2.3V saturation voltage). The average dissipation will be around 6.5W for each IGBT, plus about 2.5W for each diode (device dependent of course). That's not wonderful, and a heatsink is essential for both IGBTs and diodes. This is one reason that you don't see IGBTs used for SS relays - their dissipation is too high.

In general, IGBTs are used where high voltages and currents are required, along with moderate switching speed. Around 60kHz is usually considered the upper limit, but it depends on the specific device. MOSFETs can operate very much faster, and it's not uncommon to see switching speeds of 500kHz or more. An IGBT also has an intrinsic forward voltage, nominally 0.65V, but that's only at low current. It's common to see a forward voltage quoted as around 1.5 to 3V or so at maximum current. A MOSFET has an intrinsic resistance, RDS(on), so the voltage across the device can be calculated with Ohm's law. Losses exist in all switching devices (including EMRs). For semiconductors there are two types - forward conduction loss and switching loss, with the latter determined by the transition time between 'on' and 'off', and the switching frequency.
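The two loss types can be put into a rough first-order model. This is only a sketch: the triangular-overlap approximation (P ≈ ½·V·I·(t_rise + t_fall)·f) is a common estimating tool, and every number below is illustrative rather than taken from any datasheet:

```python
# First-order loss estimate for a switching device, splitting the two loss
# types named in the text: conduction loss while 'on', and switching loss
# during each on/off transition.  All values are illustrative.

def conduction_loss(v_on, i_load, duty):
    """Average conduction loss: on-state voltage * current * duty cycle."""
    return v_on * i_load * duty

def switching_loss(v_off, i_load, t_rise, t_fall, freq):
    """Triangular V/I overlap approximation for transition losses."""
    return 0.5 * v_off * i_load * (t_rise + t_fall) * freq

# Hypothetical IGBT: 2V saturation, 400V bus, 10A load, 50% duty, 60kHz
p_cond = conduction_loss(2.0, 10.0, 0.5)
p_sw = switching_loss(400.0, 10.0, 100e-9, 200e-9, 60e3)
print(f"conduction {p_cond:.1f} W, switching {p_sw:.1f} W")
```

Note how switching loss scales directly with frequency and bus voltage - this is why IGBTs, with their slower transitions, are held to roughly 60kHz while fast MOSFETs can run at 500kHz or more.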
The loss in an EMR is due to the contact resistance, along with the resistance of the contact arms. The latter is minimised in contactors (very large relays) by improved construction techniques that don't rely on thin phosphor-bronze (or similar) spring materials to carry the contacts. These are described in Relays, Part I.

So, you can use IGBTs for relays, but unless you have a voltage that's out of range for MOSFETs (too high), they aren't a good choice. On the positive side, there are no issues with holding current, spontaneous re-triggering due to ΔV/Δt constraints, or other undesirable effects (including high electrical noise) that you get with TRIACs or SCRs. You pay for it with higher dissipation though - a TRIAC at 10A will dissipate about 10W, vs. almost double that with IGBTs and diodes (a total of about 18W based on the figures shown above). SiC and GaN MOSFETs are eroding the advantages of IGBTs to some extent, but if you happen to need a 1MW inverter, it will still use IGBTs [1].
The rapid increase in the uptake of electric vehicles has seen an increase in the number and variety of relay solutions offered by major manufacturers. While you might expect that these would all be 'solid state', that's not the case at all. For safety isolation, no-one will rely on MOSFETs or IGBTs, because they fail short-circuit. High voltage relays were once more of a curiosity for most designs (power distribution systems excluded), but as automotive battery voltages increase, relays that can safely and reliably break 800V or more have become a requirement. In many cases these devices will be classified as contactors, but that's simply a word that means "big relay".

One example is the KILOVAC EV200 Series contactor from TE Connectivity (aka Tyco), which is designed to handle up to 1,800V or 1,000A (but not both at once!). Like many similar high-power relays, these incorporate either internal or external efficiency circuits (economisers) to minimise coil dissipation (see Section 3.4). The contacts are specially designed, and many are polarity-sensitive. Operation with reversed polarity requires derating to minimise contact erosion. Most have hermetically sealed contacts, and the contact enclosure may be evacuated (a vacuum) or filled with gas (hydrogen, nitrogen, or 'exotic' gas mixtures).

Not long ago, a distributor search for 'high voltage relay' would get few results, but that has changed. The EV200 series mentioned above is popular, but as expected, a 900V, 500A contactor won't come cheaply. However, a couple of hundred dollars isn't much in the greater scheme of things, and a great number of those you'll find are designed specifically for electric vehicles and charging stations. The available range can only grow as electric vehicles become more popular.

Mechanical contacts are the only option when 100% reliability is essential. Even if the contacts arc continuously, the arc will stop when the contacts have been eroded to nothing, so there may be a very nasty fault current, but it won't last long, even with an 800V battery system behind it. Semiconductors will just melt and become a short-circuit, so unless there's a suitable fuse there's no safety mechanism. A fault condition will result in serious damage, but a suitable mechanical contact system may be able to clear a fault before major damage is done.
As you can see from this and the other two relay articles, there's a great deal that you can do if you really need to. It's not often that you need very fast activation from an EMR, but reducing the release time can be very beneficial. However, like all things, it must be taken in context. A DC detector such as Project 33 must have a delay to accommodate low frequencies, and that can't easily be reduced. It's certainly possible to use more sophisticated circuitry to detect DC faster than the 50-60ms detection time of P33, but that simply leads to far greater complexity and more opportunities for things to go wrong.

If a DC detector takes 50ms to deactivate a relay, it really doesn't matter much if that's extended by a few milliseconds as the relay drops out. This reality notwithstanding, there's no good reason to delay the deactivation any more than is dictated by the laws of physics. The efficiency circuit is such a simple concept, but it's not used very often, which is a shame. One reason that I didn't suggest it for Project 33 is that it requires some calculations and (possibly) a bench test to make sure that it works reliably - particularly for the hold-in current.

There are so many possibilities that it's simply not feasible to cover them all. Some devices are better suited for use as relays than others, so trying to use BJTs (for example) is not recommended. It's up to the designer to work out the best technology for any given application. Many applications just need galvanic isolation between 'safe' low-voltage circuitry and the mains. This is something that EMRs have been providing for over a century, and they remain one of the most popular switching devices of all time.

Rarely considered is the gain of a relay. If it takes (e.g.) 44mA to control a 10A load, the gain can be said to be 227 (10A / 44mA). The nice thing is that this requires almost no support circuitry, no heatsink, and it's just an inexpensive part that is soldered into a PCB. Nothing else comes close.
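The relay 'gain' figure above is simple arithmetic, but worth making explicit, since the same ratio lets you compare a relay against a BJT of given hFE:

```python
# Relay 'gain' as defined in the text: controlled (load) current divided by
# the coil (control) current.  Numbers are the example from the text.

def relay_gain(load_current_a: float, coil_current_a: float) -> float:
    return load_current_a / coil_current_a

gain = relay_gain(10.0, 0.044)   # 10A load, 44mA coil
print(round(gain))               # ~227
```

A gain of 227 with full galvanic isolation and no heatsink is hard to match with any semiconductor at similar cost.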
Of course, the end result depends on whether the drive (controlling) and controlled circuits need to be isolated or not. Non-isolated circuits are very common, although they are generally considered to be 'simple' switching circuits. With SSRs, everything gets harder when isolation is required, unlike EMRs where it comes free. DC remains a problem though.

New ICs have made this easier, especially when you need high isolation voltage (controlling mains voltage for example). This is always a particularly difficult undertaking if you build your own circuit, as it must be safe under all conditions. This is one reason that EMRs have remained so successful - they provide the required isolation easily, and the constructor doesn't have to do anything special.

Active arc quenching, hybrid relays or just an SSR by itself can all prevent contact damage with DC, and the techniques shown here all work ... albeit with caveats in some cases. Any design has to be optimised for the task, and this is done during the development of the project. You also have to get your priorities right, as saving a couple of dollars and ending up with an unreliable product isn't a good trade-off. Compromise is always a part of design, simply because building something that can never fail will cost too much (and it will use commercial parts so it may fail anyway!). Engineering is (at least in part) the art of compromise.

There is only one reference in this section, as the others are covered in Part I and Part II of this series. The reader should also read Hybrid Relays, as it discusses more options. The article Solid State Relays has more information on the options available, and covers both advantages and disadvantages of each type.
1. Wide-bandgap semiconductors: Performance and benefits of GaN versus SiC - SLYT801, Texas Instruments
Elliott Sound Products - Spring Reverb Units
Reverb is one of those effects that simply will not go away. While there are some excellent DSP (digital signal processor) based reverb systems that sound very natural, the unique sound of spring reverb tanks is still preferred by a great many guitarists, and many electric/electronic organ players as well. It becomes obvious that the sound of a spring reverb must be a classic when it becomes available as a software plug-in for computer based recording systems.

For detailed info on the history of reverb, the first stop will often be the Accutronics website. There is also a lot of information provided showing various drive methods, a simple recovery amplifier, overload characteristics, and much more.

The problem for the hobbyist or DIY builder is that some of the information is too detailed and the circuits are too generalised. This makes component selection difficult, and makes it almost impossible for the average enthusiast to work out what is needed for their application. While it's nice to have so many choices (they make a lot of different tanks), it's extremely difficult to work out the combination of the most suitable tank, optimum drive circuit, and the ideal recovery amplifier.

With this in mind, and given that I have a reverb design amongst the projects, nothing is actually specified as to the best tank to use to ensure good performance. I have an old 4FB2A1A tank that was used for some of the tests described. This tank is a Type 4, and has a drive coil impedance of 1,475 ohms (~1.5k), a pickup coil impedance of 2,250 ohms (2.2k), and is designed for medium reverb time (1.75 to 3.0 seconds). All this information is available from the Accutronics website, and can also be seen below.

Something you don't see every day is a video of reverb coils in action. The video was captured by a reader who let me know about it. Taken with a high-speed video recorder and replayed in 'slow motion', you can see the way the transducers work.

It should be mentioned that the info provided is often at variance with reality. A measurement of inductance (for example) gives a very different value from that calculated, but an impedance scan shows that the quoted figures are fairly close. Inductance measurements on transducers often give incorrect results, because the coil resistance is high enough to trick the meter into claiming the inductance is much higher than it really is. Back-EMF created by the spring 're-energising' the magnetic bead doesn't help either.
Figure 1 - Accutronics Reverb Tank
Figure 1 shows a complete spring reverb tank. I was originally going to show a photo of one I already have, but it looked a bit too gruesome, so I got a new one to do some further experiments with. While the old one has seen better days (I think it's at least 30 years old) it still works perfectly. The single biggest problem with it is that the input coil has a very high impedance, making it rather difficult to drive. The new tank is a 4AB3C1B - 8 ohms input, 2,250 ohms output, long delay. For all tank info, see Part Numbering Details, below.
Figure 2 - Drive Transducer Details
Above, you can see a close-up of the drive coil. All coils are colour coded to show their impedance. This info was not included in Table 1, but it is included in Table 4 at the end of this article. It is also available from the Accutronics website. This can be helpful if the model number is missing or can't be read.
Figure 3 - Pickup Transducer Detail
Here's a close-up of the pickup coil. The basic design of these tanks dates from around 1960, and has changed very little in all the time since then. As a result, it is possible to replace even extremely old tanks if necessary, and a direct replacement is almost always available. The transducers of this tank are virtually identical to those in my 30-odd year old tank.
Figure 3a - Pickup Transducer Magnet
Here is a picture that you won't see very often. This is a photo of one of the transducer magnets, taken from a broken reverb spring. To get an idea of the size, the magnet is sitting on a piece of 5mm grid graph paper. You can see the ends of identical magnets in the two photos above of the drive and pickup transducers, but it's hard to gauge the size from those other photos. These magnets have a rotary movement within the magnetic field of the pole pieces, and the signal is passed down the spring as a torsional (rotary) wave. However, the magnets do not 'spin', they move with a (roughly) circular motion.
Figure 4 - Accutronics Reverb Tank Drive Coil Impedance
The above impedance scan was taken of an Accutronics 4FB2A1A reverb tank (my old one). I used a woofer tester (normally used for measuring the Thiele-Small parameters of loudspeakers) to produce the impedance graph. The impedance was also verified using a vector impedance meter, which gave virtually identical results. The spikes at the high end of the sweep are caused by the measurement signal creating a disturbance in the springs and confusing the reading. Although the impedance is different from the claimed or calculated value at various frequencies, the difference is inconsequential. In theory, the impedance at 6kHz should be around 8.8k, but it is closer to 6.5k - while this might seem like a large error, it makes little or no difference to the way the tank behaves.
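The 'theoretical' figure quoted above comes from treating the drive coil as a simple series R-L network, Z = √(R² + (2πfL)²). A sketch using the nominal DC resistance (200Ω) and inductance (234mH) for this coil reproduces both the ~1,475Ω rating at 1kHz and the ~8.8k theoretical value at 6kHz, showing where the real coil departs from the ideal model:

```python
import math

# Series R-L impedance model for the 4FB drive coil:
#   Z = sqrt(R^2 + (2*pi*f*L)^2)
# R and L are the nominal figures for the 'F' (1,475 ohm) coil.

def coil_impedance(f_hz: float, r_ohm: float = 200.0, l_h: float = 0.234) -> float:
    return math.hypot(r_ohm, 2 * math.pi * f_hz * l_h)

print(f"1 kHz: {coil_impedance(1000):.0f} ohm")   # ~1484, close to rated 1475
print(f"6 kHz: {coil_impedance(6000):.0f} ohm")   # ~8824, the 'ideal' 8.8k
```

The measured 6.5k at 6kHz versus the modelled 8.8k is exactly the kind of discrepancy the text describes - real transducers don't behave as ideal inductors at higher frequencies.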
Note that in the circuits shown below, I have used standard polarised electrolytic capacitors, including in locations where there is no polarising voltage. This is (perhaps surprisingly) perfectly alright, provided the voltage (AC or DC) never exceeds about 1V. In all cases where polarised electros are shown, the actual voltage will be less than 100mV. The exception is Figure 6, where the voltage across C4 may exceed the 1V threshold because it's a discrete design. An addition to the original circuits is C3 in Figures 5, 7 and 14. This cap allows more signal level before clipping, but it is optional. If it's not used, you are unlikely to run out of drive with Figure 5, but I recommend that it be used in the Figure 7 and Figure 14 circuits.

It is important that all opamps are bypassed with a 100nF cap between the supply rails, and/or from each supply rail to ground. These are not shown on any of the drawings! The cap(s) need to be located as close as possible to the IC to prevent oscillation. If an NE5532 oscillates it's not always visible at the outputs, but distortion is increased dramatically. Needless to say, this applies to all the circuits shown. Bypass caps should be 100nF 50V multi-layer ceramic (aka 'monolithic') types. Two 10µF electrolytic caps should be used at the power supply inputs to ground, one from the positive supply and one from the negative. Larger caps can be used if preferred.
The drive circuit for any spring reverb tank is critical - it's by far the most critical part of the final system. The drive coil has a nominal impedance specified at 1kHz, but it also has considerable inductance. It is actually quite difficult to drive properly. It is necessary to know the optimum drive level, but this is specified as a current, not a voltage. While many commercial amplifiers just use an ordinary opamp to drive the input coil, this seriously limits the available drive current. Most opamps can supply no more than ±10mA (peak), with many not even able to achieve that. For an 8Ω drive coil, anything that cannot provide at least ±100mA is going to cause problems.

In order to produce a constant drive level into a coil as the frequency varies, it is necessary to drive the coil with an amplifier that produces constant current, rather than the much more familiar constant voltage. This can be done with equalisation, but it is preferable to use a dedicated amplifier with a high output impedance. This approach is easier, and it automatically adapts itself to the actual (as opposed to nominal) value of impedance at any frequency of interest. However, current drive may accentuate the upper frequencies, and in some cases this might be considered excessive. You also need to consider the fundamental frequencies and harmonics produced by a guitar; the level falls off above 1kHz, so you may not need quite as much drive voltage as initial calculations may imply.

It is common to include a high pass filter as well, because the spring reverb effect doesn't work well at low frequencies. While it is possible to get good low frequency performance, it's generally undesirable because it tends to muddy the sound too much. Accutronics provides a table of impedance and RMS drive current at 1kHz, but some of the information that one really needs is missing. To rectify that, I have added a column giving the approximate inductance and deleted the columns that are unimportant. The following table applies to the Type 4 tank - the 425mm long, 4-spring version as used by Fender and many others.

It's worth noting that some of the info on the Accutronics site is either misleading, contradictory or just plain wrong. For example, they state that the drive coil should be driven to near saturation, say that the saturation level is 2.5A/T (ampere-turns), then show a graph with the nominal level shown as 3.5A/T. Although no explanation is given, that indicates a level 3dB more than the 'recommended' value.
Type 4 Input | Coil Impedance (Ω) | DC Resistance (Ω) | RMS Current | Peak Current | Inductance
A | 8 | 0.81 | 28mA | 80mA | 1.27mH
B | 150 | 26 | 6.5mA | 19mA | 24mH
C | 200 | 27 | 5.8mA | 17mA | 32mH
D | 250 | 36 | 5.0mA | 15mA | 39mH
E | 600 | 58 | 3.1mA | 9mA | 95mH
F | 1,475 | 200 | 2.0mA | 6mA | 234mH
There are several different tank styles available, most of which are somewhat smaller than the Type 4. Being smaller means fewer (or shorter) springs, and different reverb characteristics. The Type 4 has been the unit of choice for many guitarists for a very long time, although some do prefer the other types. The details in this article are equally applicable to any reverb tank, but some small changes may be needed to account for different impedances.

The first thing that the intending user should look at is the 1kHz impedance. For example, a coil with an impedance of 1,475 ohms at 1kHz requires a voltage of 2.95V RMS to produce 2mA coil current. At 10kHz, this rises to 29.5V because the coil impedance rises to 14.75k. While this voltage and current are certainly achievable, the drive amp ideally needs a supply voltage of over ±40V - often, this is simply not available. It is possible to use a cheap transformer though - see Transformer Drive below. The voltage at a more sensible frequency (say 5kHz) is still far more than can be obtained from an opamp, and will be around 20V RMS.

For use with an opamp or small IC power amplifiers, we must use a lower supply voltage, and with these the high impedance drive coil is of no use. All in all, the 8 ohm coil is the most attractive, although the current is higher than we might like at 28mA RMS (about ±40mA peak). The optimum impedance for opamp drive is 150 ohms (the 'B' input coil), and even at 10kHz, when the impedance has risen to 1,500 ohms, the voltage remains below 10V RMS. With a peak current of 9mA, an opamp will require a couple of small transistors to boost the output current, and we end up with a circuit such as that shown in Figure 5. For opamp drive, the 150 ohm coil doesn't allow much headroom though - it would be nice if there were an intermediate impedance available. Something around 50 ohms would be perfect.
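The voltage figures above follow directly from V = I × Z, taking the coil impedance as roughly proportional to frequency above 1kHz (the rule of thumb used in the text). A quick sketch with the Table 1 figures for the 'B' (150Ω) and 'F' (1,475Ω) coils:

```python
# Required RMS drive voltage for a constant-current reverb driver: V = I * Z.
# Coil impedance is approximated as scaling linearly with frequency above
# the 1 kHz rating point, per the rule of thumb used in the text.

def v_required(z_1khz_ohm: float, i_rms_a: float, f_hz: float) -> float:
    return i_rms_a * z_1khz_ohm * (f_hz / 1000.0)

coils = {                       # name: (1 kHz impedance, rated RMS current)
    "B (150 ohm)": (150, 6.5e-3),
    "F (1,475 ohm)": (1475, 2.0e-3),
}
for name, (z1k, i_rms) in coils.items():
    print(f"{name}: {v_required(z1k, i_rms, 10000):.2f} V RMS at 10 kHz")
```

This reproduces the figures in the text: the 'F' coil needs ~29.5V RMS at 10kHz (hence the ±40V supply), while the 'B' coil stays just under 10V RMS, which is why it suits opamp-based drivers.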
An 8 ohm coil is a good choice if the power supply is adequate, and a boosted opamp (or - with many caveats - a chip amp such as an LM1875) will be needed to drive the coil. The maximum voltage needed is 2.24V RMS (at 10kHz) to be able to provide the full 28mA needed for maximum output. While the chip amp seems like a good choice and is superficially easy to use, the cost is considerably higher than a boosted opamp. Current drive is harder too, because most IC power amplifiers are not stable at unity (or low) gain, which demands additional complexity to achieve unconditional stability. If a chip amp is used, it's generally easier to use a resistor in series with the tank drive coil, but that's not without its problems. The following circuit will drive 8 to 250 ohm coils well, without any major changes.
Figure 5 - Basic Drive Circuit For Low Impedance Coil
The circuit shown above requires that the input coil be isolated from the reverb tank chassis. The Accutronics website shows a current drive arrangement that doesn't require an isolated input coil, but I recommend that you stay well clear of it, because it's wrong. The circuit is meant to be based on a 'Howland current pump', and while these can be made to work very well, the results may be unpredictable if you don't know how to set one up properly. The Accutronics circuit shown as 'drive4.pdf' has several serious mistakes, and will not work at all! Pretty much everything about the circuit is wrong, which is more than a little disappointing (and as of November 2019, it still hasn't been fixed!). While the 'drive5.pdf' circuit is basically functional, it's an extraordinary example of poor design, as the high frequency response as shown is woeful (even into a resistor!). I recommend that all Accutronics drive circuits be avoided.
The input voltage required for full drive from Figure 5 is determined by the coil impedance and the value of R2. With a 150 ohm coil and R2 set to 150 ohms as shown, the circuit provides 6.5mA/V. Note that the values of R2 and R7 must be selected based on the coil impedance. The following table gives the suggested values for these resistors, based on the coil impedance. Some experimentation may be needed, but only the values of R2 and R7 need to be altered to obtain the correct gain and proper drive characteristics.
Coil Z | R2 | C2 | R7 | Current | Volts @ 6kHz | mA/V (1kHz)
8 ohms ¹ | 33 ohms | 47µF | 150 ohms | 28mA RMS | 1.34V RMS | 30mA/V
150 ohms | 150 ohms | 10µF | 3.3k | 6.5mA RMS | 5.85V RMS | 6.6mA/V
200 ohms | 180 ohms | 10µF | 3.9k | 5.8mA RMS | 6.96V RMS | 5.6mA/V
250 ohms | 220 ohms | 10µF | 5.6k | 5.0mA RMS | 7.50V RMS | 4.5mA/V
600 ohms ² | 330 ohms | 10µF | 12k | 3.1mA RMS | 11.2V RMS | 3.0mA/V
1,475 ohms ³ | n/a | n/a | n/a | 2.0mA RMS | 17.7V RMS | n/a
Notes:
1. When driving an 8 ohm reverb coil, R3 and R4 may need to be reduced in value (3.9k is suggested) or the output transistors may run out of base current, causing the circuit to clip prematurely.
2. The Figure 5 circuit is marginal with the 600 ohm coil, as it is unable to provide more than about 7V RMS, so high frequencies may cause clipping. It's unlikely that you'll ever hear the distortion though.
3. Again, the Figure 5 circuit is not suitable for the 1,475 ohm coil, as it can't provide a high enough voltage to get good results. The circuit will run out of drive voltage at about 3kHz, and a high voltage drive circuit such as those shown in Figures 6 or 7 is needed.
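The mA/V column in Table 2 appears to follow directly from R2: in a current-drive stage like Figure 5, the feedback forces the coil current to develop the input voltage across the sense resistor, so the transconductance is approximately 1/R2. A sketch assuming that relationship (it matches the table within rounding):

```python
# Transconductance of a sense-resistor current-drive stage: gm ~= 1 / R2.
# Values of R2 are taken from Table 2; the gm ~= 1/R2 relationship is an
# inference from the table, not an Accutronics specification.

def gm_ma_per_v(r2_ohm: float) -> float:
    """Transconductance in mA per volt of input, assuming gm = 1/R2."""
    return 1000.0 / r2_ohm

for coil_z, r2 in [(8, 33), (150, 150), (200, 180), (250, 220), (600, 330)]:
    print(f"{coil_z} ohm coil, R2 = {r2} ohm -> {gm_ma_per_v(r2):.1f} mA/V")
```

This is a handy sanity check if you substitute a non-standard R2 value: the output current per volt of input is simply the reciprocal of the sense resistor.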
While it might seem perfectly alright to use an opamp to drive coils that need less than 10mA, it's more likely than not that the result will be disappointing. Most opamps can provide up to ±20mA (peak), but that is usually a measure of the short circuit current. Attempting to get a usable signal level at the maximum current may provide (just) enough level, but allows no headroom. In some cases you can use two opamps in parallel (with 'current sharing' resistors at the outputs), but even that can be marginal. The small extra effort to make a boosted opamp circuit such as that shown in Figure 5 is usually well worth it. However, if you use both halves of an NE5532 opamp in parallel, that combination can drive coils from 150 to 250 ohms fairly easily, and will generally be acceptable. Use a 10 to 22 ohm resistor at the output pin of each opamp (pins 1 and 7), and U1B will copy the output of U1A and sum the current from each opamp.
Figure 5A - Dual Opamp Drive Circuit For 150-250 Ohm Coils
The above drawing shows how it's done. With the NE5532 opamp, the circuit can drive a 300 ohm load easily, and is an economical alternative to the Figure 5 circuit, both in cost and PCB real estate. The values for R2, R7 and C2 are the same as shown in the above table, selected for the drive coil impedance. This circuit is not suitable for 8 ohm tanks, but you may just get away with it if you are willing to sacrifice some output level.
After getting a new 8 ohm tank for some experiments and a few measurements, I found that the coil can be driven somewhat harder than claimed. I was able to drive the 8 ohm coil to 250mA at 1kHz before saturation (almost 10 times the claimed current). The saturation current remains roughly the same at all frequencies from around 300Hz and up, and at 1kHz the voltage was measured at 2V RMS. This rises to 8V RMS at 5.8kHz, the highest frequency where useful output was measured. I drove the input transducer from an LM1875 amplifier, feeding the coil via a 10 ohm resistor. Amp output at 5.8kHz is a little over 8V RMS, and it was the resistor alone that reduced the drive voltage at lower frequencies. However, I don't recommend that you drive the coil to the maximum, because it may shorten the life of the unit. Somewhat surprisingly, using almost 10 times the rated coil current does not produce almost 10 times the output level - you will be lucky to get even twice the output. On that basis, a much higher drive current should not be used (other than for experiments of course).
The Figure 5 circuit is capable of driving an 8 ohm coil to several times the maximum rated current at any frequency. The values shown above will all provide slightly more than the manufacturer's suggested transducer drive current with an input voltage of about 1.5V RMS. A voltage to current converter is defined by its transconductance, which is shown in Table 2 (above) in mA/V. For example, if the circuit provides 5mA/V, you get an RMS current of 5mA with 1V RMS input, or 10mA with an input of 2V.
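The transconductance relationship is simple enough to check numerically. A minimal sketch (the 5mA/V figure is just the example from the text, not the value for any particular tank):

```python
def coil_current_ma(transconductance_ma_per_v, v_in_rms):
    """RMS coil current delivered by a voltage-to-current drive stage.

    The converter is defined purely by its transconductance (mA/V),
    so coil current scales linearly with input voltage.
    """
    return transconductance_ma_per_v * v_in_rms

# Example from the text: a 5mA/V stage
print(coil_current_ma(5, 1.0))  # 1V RMS in -> 5.0 mA RMS
print(coil_current_ma(5, 2.0))  # 2V RMS in -> 10.0 mA RMS
```

The same function works for any of the Table 2 entries - only the transconductance changes with coil impedance.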
Be aware that if the coil is heavily overdriven you'll get some distortion, and too much overdrive can damage the coil due to overheating if the available voltage and current are excessive. This is actually fairly unlikely - even with 250mA in the 8 ohm coil the dissipation is negligible, but there is a very real risk of mechanical damage. It is best to avoid clipping the drive amplifier, so some headroom is needed. Amp clipping may be worse than core saturation, and the level needs to be monitored for best results. In general, it seems to be ok to drive the coils with up to double the rated current, but I wouldn't go beyond that.
The circuit of Figure 5 is not suitable for the higher impedance coils - 600 ohms is marginal, and 1.475k is not suitable at all. Fortunately, the signal level above 2kHz or so drops off at ~6dB/ octave, so maximum drive voltage is never needed at 6kHz. See below for more on that topic. R2 sets the sensitivity, and can be determined as ...
R2 = VIN / ICOIL
For example, for an 8 ohm coil and 28mA of coil current from a 1V input, R2 becomes 1 / 0.028 = 35.7 ohms.
A 33 ohm resistor is fine in this case. R7 is based on an estimation, where the resistor value is roughly 20 times the coil's 1kHz impedance. Reduce the value of R7 for less treble response and vice versa. With R7 set at 20 times the coil impedance, high frequency response is 3dB down at about 5.5kHz. The resistor sets the amplifier's output impedance, so it can't keep rising with increasing frequency. The effect is identical to using a voltage amp with a series resistance. The choice of C2 is somewhat personal, and it should ideally be a bipolar electrolytic type as indicated. The values shown will give a fairly good drive level down to about 100Hz with all coil impedances, and if less bass response is desired it's better to reduce the value of C1. As shown, the -3dB frequency is 159Hz. A smaller value will give more aggressive bass rolloff and vice versa.
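The two rules of thumb (R2 = VIN / ICOIL, R7 roughly 20 times the coil's 1kHz impedance) are easy to script if you want to try other coils or input levels. A sketch, including a lookup of the nearest E12 preferred value for R2:

```python
import math

E12 = [1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8, 8.2]

def nearest_e12(value):
    """Closest E12 preferred resistor value to 'value' (ohms)."""
    exp = math.floor(math.log10(value))
    candidates = [m * 10 ** e for e in (exp, exp + 1) for m in E12]
    return min(candidates, key=lambda c: abs(c - value))

def drive_resistors(v_in_rms, i_coil_rms, z_coil_1khz):
    """R2 sets the sensitivity (R2 = VIN / ICOIL); R7 is roughly 20x
    the coil's 1kHz impedance, per the rule of thumb in the text."""
    r2 = v_in_rms / i_coil_rms
    r7 = 20 * z_coil_1khz
    return r2, nearest_e12(r2), r7

# 8 ohm coil, 28mA from a 1V input (the worked example above)
r2, r2_pref, r7 = drive_resistors(1.0, 0.028, 8)
print(round(r2, 1), r2_pref, r7)  # 35.7 ohms -> 33 ohm E12 value, R7 = 160
```

Note that the calculated R7 of 160 ohms lands close to the 150 ohm series resistance mentioned later for the 8 ohm coil - these are estimates, not precision values.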
We also need to consider the maximum voltage needed to provide the required current into the coil at high frequencies. At 10kHz, we need more voltage than the maximum possible from an opamp. Fortunately, response to 10kHz is not only unnecessary but also undesirable, and it won't be reproduced by the tank anyway. An upper limit of ~6kHz is usually more than enough, so there is some headroom, although it is marginal with the 600 ohm coil (I suggest that you use the high voltage circuit for that if you want the maximum headroom).
For an 8 ohm coil, R2 needs to be 33 ohms for a 1V RMS input voltage. The dissipation of Q1 and Q2 is about 160mW at full level and 1kHz, and remains relatively constant with frequency. Peak dissipation is below 500mW - well within the ratings of the BC639/640. Feel free to use BD139/140 if you prefer. They will run cooler because they are considerably larger than the TO92 devices. The power supply demands are naturally noticeably higher than would be the case with a higher impedance coil, but are still easily handled by P05 or similar.
As the drive coil impedance rises further, the voltage needed to drive the coil exceeds that available from any common (cheap) opamp. The current is low, but this doesn't help a great deal if there is no easy way to get the voltage needed. The 1,475 ohm coil requires a voltage of just under 15V RMS at 5kHz, and almost 30V RMS at 10kHz. Allowing for a maximum sensible frequency of 7kHz, you'll need 21V RMS to drive it. This can be obtained easily enough using the ±35V supply for a typical 100W solid-state guitar amp (such as Project 27). The drive circuit must be discrete though, because no common opamps are rated for such a high voltage. While a pair of opamps in bridge would work, obtaining stable current drive in this configuration is not easy. Even in that configuration, the maximum voltage available without distortion is ~20V RMS - not enough for the high impedance coil, especially since some headroom is desirable.
Figure 6 - Basic Discrete Drive Circuit For High Impedance Coil
The discrete amplifier is not designed for outstanding performance because it's just not needed. It will drive the high impedance Accutronics coil to the full 2mA RMS required though, and the circuit as shown should satisfy anyone who has a high impedance tank. Since I'm one of those (I have my old high-Z tank as well as the new one), I built the circuit to verify that the simulation is correct, and because I want to be able to use the tank I have. It works as described, and is certainly not a difficult or expensive circuit to construct. I do suggest that the power supply rails are decoupled with a resistor and a capacitor as shown. C5 and C6 can be made larger if you prefer.
VR1 allows the output voltage to be adjusted to zero. This is important to ensure maximum headroom. C4 (10µF bipolar electrolytic) is included to ensure that no DC flows in the drive coil. DC causes the magnetic circuit to saturate, and this reduces sensitivity and greatly increases distortion. It is also important that the circuit is driven from a low impedance. In the interests of simplicity there is no additional decoupling in the network of R1, R3, D3 and VR1, so a high impedance source may allow some hum and noise from the power supply to enter the amp's input. A low impedance source lets C1 act as a coupling cap and also decouples any noise. C1 is chosen to provide the desired low frequency response. With 100nF as shown, the -3dB frequency is about 72Hz. Reduce the value of C1 to reduce the amount of bass and vice versa - this is often a very personal choice.
This simple circuit has a deliberately limited output impedance, and the constant current characteristic only extends to about 6.5kHz. The circuit has the equivalent of using a resistance in parallel with the coil as shown in Figure 4 - all drive circuits require a high frequency limit. In this case, R5 (22k) is effectively in parallel with the coil, although it may not look like it at first glance. The response of any spring reverb tank is very limited above ~5kHz anyway, and there is little point trying to get very high frequencies. Even if the tank could provide good HF response, it would sound unnatural because natural reverb at high frequencies is very uncommon. Despite the high operating voltage, this circuit will still struggle if you drive it a bit harder than normal. With 2V input at 5kHz, the amp will clip - this is unlikely to be a problem though, since the energy at this frequency is usually much less than at lower frequencies.
Figure 6A - Simplified Discrete Drive Circuit For High Impedance Coil
The circuit shown above is a simplification, but with the high supply voltage it will never run into problems due to the DC offset. Rather than messing around with a zener and pot, we just accept that it has around -2.5V DC offset, so a polarised cap can be used for C4. C1 can be increased for better low-frequency performance (this applies to Fig. 6 as well).
All of the tanks can be driven from a voltage amplifier with a series resistance or an equalisation (treble boost) circuit. The value for the series resistor needs to be the same as R7 listed in Table 2 to get the same performance, and it becomes apparent quite quickly that the voltage needed can be quite high. For example, if we assume a 150 ohm coil with a 3.3k series resistor, you need over 20V RMS to get the rated current at all frequencies of interest. The only tank coil that can be successfully driven from a voltage amplifier is the 8 ohm version with a 150 ohm series resistor. This only needs about 5V RMS, and that's easy to achieve with a chip amp or boosted opamp.
All other coil impedances need a lot more voltage, in general far more than it is reasonable to achieve. The 600 ohm coil needs over 35V RMS via a 12k resistor, and that's more than you get from an amp running ±35V supplies. The alternative is to include a filter before the amp that provides a 6dB/ octave boost from ~70Hz or so, then the coil can be driven directly from the amp's output. A suitable filter would be a 10nF capacitor feeding a 2.2k load, and that will provide a voltage increase of 6dB/ octave up to ~5kHz.
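The boost filter is a first-order network, so its behaviour is easy to verify. A sketch, assuming the simple series-cap-into-shunt-resistor model (10nF into 2.2k, as suggested above):

```python
import math

def hp_response_db(f_hz, r_ohms, c_farads):
    """Corner frequency and relative gain (dB) of a series capacitor
    feeding a resistive load. Below the corner, output rises at
    6dB/octave with frequency - the treble boost we're after."""
    fc = 1 / (2 * math.pi * r_ohms * c_farads)
    mag = (f_hz / fc) / math.sqrt(1 + (f_hz / fc) ** 2)
    return fc, 20 * math.log10(mag)

fc, db_1k = hp_response_db(1000, 2200, 10e-9)
print(round(fc))        # -> 7234 (corner ~7.2kHz, so the boost extends well past 5kHz)
print(round(db_1k, 1))  # -> -17.3 (1kHz sits about 17dB below the corner)
```

The corner works out at roughly 7.2kHz, which is why the 6dB/octave rise holds over the whole band of interest for a reverb tank.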
Again, equalised voltage drive is best suited to low impedance coils and can work well, but it's hard to recommend any form of voltage drive as being appropriate for a number of reasons. A major drawback is that the low resistance coil is connected directly to an amplifier, so even a tiny DC offset may cause partial drive transducer saturation. While you can use an isolating capacitor, it needs to be fairly large (at least 470µF for an 8 ohm coil) and there is no net benefit.
A simple way to use voltage drive (with equalisation) is a bridged amplifier, for example using two Figure 5 amplifiers. One is driven with inverted phase (180°) in the same way as a bridge tied load (BTL) power amp for speakers. You can get the necessary voltage swing easily, and the reverb tank must use an isolated input connector. It will certainly work, but the output capacitor is still necessary unless you are willing to add a DC offset control. It's hard to recommend this approach because the drive amp is twice as complex, and as already noted using a voltage amp with EQ or series resistor is not ideal.
So, while voltage drive (single ended or BTL) with an equaliser can be used, it's really not recommended and no circuits will be shown.
Another way you can drive a high input impedance reverb tank is to use a transformer. Small transformers may be rated for about 350mW or so, and are fairly cheap (around AU$5.00 each for the one I used for testing). The core is small and will saturate quite easily, but even if the transformer core does saturate you won't hear it. The springs don't have the fidelity to reproduce anything cleanly. The transformer is used in reverse, so the 'primary' is used to drive the reverb tank and the 'secondary' is used as the primary. Transformers work happily either way, and there is no reason not to use them backwards.
There is one thing that you will have to do if you go this way ... experiment. Because transformers will vary and the coupling between the transformer and drive transducer is something of an unknown quantity, you have to be prepared to try different variations of the circuit until you get a good result. Unlike direct drive using a current amplifier, using a transformer may create unpredictable response. The arrangement shown below has been tested and it works as described. Input level will normally be around 1.5V RMS.
Figure 7 - Transformer Drive Circuit For High Impedance Coil
Very small transformers are available from various suppliers, and the one suggested has a primary of 1k ohm centre-tapped, and a secondary rated for 8 ohms. Because transformers have no intrinsic impedance of their own, this can be used in reverse using the circuit shown in Figure 7. The amplifier drives the secondary, and the 'primary' is used for the output. This will work whether the input coil on the reverb tank is isolated or not - it makes no difference either way. However, you do need to be mindful of earth (ground) loops which may cause instability.
The impedance ratio as noted above is 1k:8 ohms, so the turns ratio is the square root of the impedance ratio ...
ZR = 1k / 8 = 125:1
TR = √125 ≈ 11:1
Therefore, if we supply 1V to the 8 ohm winding, we should get around 11V across the full secondary (ignoring the centre-tap). If the reverb drive coil has an impedance of 1k, the drive amplifier will 'see' an 8 ohm load. As seen from Figure 4, the actual reverb drive coil impedance varies over a wide range, but that will not cause stress to the driver circuit shown in Figure 7. Getting 17V RMS at 6kHz is easy, and only needs an input voltage of a little over 1.5V from the driver amplifier.
The circuit shown is almost the same as Figure 5, and the circuit is operated in voltage mode rather than high-impedance current mode. The gain is two (set by R2 and R7), and the voltage to current conversion is done by R8 and the transformer itself.
If you wanted to drive a 600 ohm coil with one of these transformers, you could simply use half the primary winding (as the secondary of course). Using half of the winding doesn't mean the impedance is (nominally) 500 ohms - it actually ends up being only 250 ohms, ¼ of the impedance of the full winding. This is ideal for driving 200 or 600 ohm input coils on the reverb tank. With a 600 ohm drive coil, the transformer's nominal impedance at 1kHz will be about 20 ohms in theory, but the way the transformer is made might cause that to be somewhat different (these are hardly precision components). I did try using the 8:250 ohm alternative with the high impedance coil and while it works, you'd need to be very careful to avoid transformer saturation because it's marginal at best. No problems at all with a 600 ohm reverb drive coil though.
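The quarter-impedance result follows directly from the turns-squared law, and can be checked in a couple of lines (all values are those quoted above):

```python
# Impedance scales with the square of the turns, so using half the
# primary winding gives (1/2)^2 = 1/4 of the full-winding impedance.
full_winding = 1000              # nominal primary impedance, ohms
half_winding = full_winding * (0.5 ** 2)
print(half_winding)              # 250.0 ohms - not 500

# Reflected impedance seen at the 8 ohm winding with a 600 ohm coil
# on the half winding (the 8:250 connection described above):
print(round(600 * 8 / 250, 1))   # 19.2 ohms - the "about 20 ohms" figure
```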
Figure 8 - Close-Up Of A Typical Miniature Transformer
These transformers were originally designed for use as output transformers in transistor radios and similar (very) low output amplifiers. The early versions used a laminated iron core, but those you get now often use ferrite. A few details about the transformer are in order. Predictably there is almost nothing in the info from any seller other than the impedance ratio and power handling (350mW). The primary resistance is 330 ohms (end-to-end, ignoring the centre tap), and the secondary resistance is about 3.25 ohms. Primary inductance measured 1.7H and secondary inductance is about 7mH. The core measures 14.5 x 11.5 x 6.5mm and the whole thing weighs all of 6 grams. Although the one shown in the photo was obtained in Australia, similar transformers appear to be available from many sources worldwide.
It doesn't matter if the one you can get is larger or has a different impedance ratio, as long as it's within 50% or so of the unit I used. I wouldn't entertain anything smaller than the one I tried because it will saturate too easily. In reality, you can get by with a tranny that provides a step-up of anywhere between 3 and 12 times, so something rated for (say) 2,500:50 ohms (~7:1 turns ratio) is just as easily used and may even work better than the one I have. You might need to adjust the gain of the drive amplifier slightly, but everything else will be unchanged. You will need to test the circuit to ensure there's no transformer saturation at all frequencies and levels of interest.
From the tests I performed, the transformer will almost certainly saturate well before the reverb drive coil. I leave it to the constructor to determine whether this is a problem or not. It's also possible to use a small mains transformer, which will have lower losses and be much harder to saturate at the current levels needed. A 230V to 12-0-12V (24V CT) transformer is ideal, and the full 24V secondary is used as the primary, with the secondary driving the reverb tank. If your mains is 120V, you'll need a single 9-12V winding to get a useable step-up ratio. If you choose to use any transformer to drive the coil, you will need to experiment to find the optimum drive parameters.
I'd like to thank 'PhAbb' for suggesting the use of one of the el-cheapo 1k:8 ohm transformers to drive high impedance coils. (He knows who he is, and that's the main thing.)
Recovering the signal is every bit as important as driving the coil properly. The recovery circuit that's shown in the 'drive1' PDF at Accutronics is barely adequate, and will be rather noisy. It is important that the opamp used gives its best performance with low to moderate source impedances, and maintaining a high load impedance is essential for optimising the signal level. Both high and low frequency response should be tailored to suit the expected response of the tank itself. I would suggest that a range from 200Hz to about 6kHz is about right. Output above 7kHz is almost nil, so a wide bandwidth pickup amplifier is not needed. The relatively low bandwidth maximises signal to noise ratio - essential since the output level is generally well below 10mV even at maximum drive level.
Type 4 | Impedance (ohms) | DC Resistance (ohms) | Inductance * | VOUT (RMS, Typ)
A      | 500              | 42                   | 65mH         | 3.0mV
B      | 2,250            | 200                  | 270mH        | 6.5mV
C      | 12,000           | 800                  | 1.7H         | 15mV
The table shows the impedance, DC resistance, approximate inductance and claimed output level for the three output coils available. The inductance value was calculated, based on the claimed impedance at 1kHz, so it should not be taken as gospel. It does give a reasonable starting point though, and can be used to estimate the peaking frequency caused by C1. To calculate this for yourself, use the formula ...
f = 1 / ( 2π × √( L × C ))
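Using the estimated inductance for the type 'B' coil from the table, the formula is easily evaluated. A sketch (the capacitor values are those discussed for the recovery amplifier; remember the inductance itself is only an estimate):

```python
import math

def peak_freq_hz(l_henries, c_farads):
    """Resonant peak created by a capacitor across the pickup coil:
    f = 1 / (2 * pi * sqrt(L * C))"""
    return 1 / (2 * math.pi * math.sqrt(l_henries * c_farads))

# Type 'B' coil, ~270mH estimated inductance
print(round(peak_freq_hz(0.27, 2.2e-9)))   # 2.2nF -> ~6.5kHz
print(round(peak_freq_hz(0.27, 4.7e-9)))   # 4.7nF -> ~4.5kHz
```

Since the inductance was back-calculated from the claimed 1kHz impedance, expect the real peaking frequency to differ somewhat from these figures.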
Several opamps are well suited to the task, and of these the dual NE5532 (or NE5534 for a single version) is one of the better choices. These opamps have low noise and excellent drive capability for low impedance loads. Based on the datasheet values, they are best suited for a source impedance of 3kΩ or so, which is ideal for the type 'B' coil. The NE5532 has rather uninspiring DC offset figures, but that's not an issue in this application. You could also use the OPA2134 dual opamp - it's quiet, but (IMO) too expensive and overkill for a reverb circuit. With a typical output level of around 6mV (which is frequency dependent), a total recovery gain of about 150 (43dB) is needed to obtain a 1V output. Although this can be obtained from a single opamp, the result may not be satisfactory. An output level of around 500mV is usually sufficient for a 1V 'dry' signal. More than that means that the reverb is dominant, tending to 'drown out' the original signal (of course, you may want to be able to do that, so you can use more gain if necessary).
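The required recovery gain follows directly from the coil's typical output level. A quick check of the "about 150 (43dB)" figure, using the 6.5mV output quoted for the type 'B' coil:

```python
import math

def gain_needed(v_out_rms, v_coil_rms):
    """Voltage gain (and its dB equivalent) needed to raise the
    pickup coil's output to the target level."""
    g = v_out_rms / v_coil_rms
    return g, 20 * math.log10(g)

g, db = gain_needed(1.0, 6.5e-3)   # 1V target from ~6.5mV coil output
print(round(g), round(db, 1))      # -> 154 43.7
```

That lands right at the "about 150 (43dB)" quoted above; the exact value depends on the actual (frequency dependent) coil output.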
Accutronics recommend adding a capacitor in parallel with their 'B' (2.25k) coil, for 'improved high frequency response'. While this is a good idea, the Q of the tuned circuit may be found to be too high. If this proves to be a problem, adding a resistor in series with the capacitor tames this nicely, reducing the peak amplitude and spreading the HF boost over a wider range. This is included in the circuit below, but the resistor and/ or capacitor value will probably need to be tweaked to get the sound you want.
Figure 9 - Generalised Recovery Amplifier Circuit
The circuit shows the basis for the recovery amp. Gain is 40dB (x100), and may be reduced if necessary (not very likely). I do not recommend attempting more gain from a single stage. Vary the value of R1 to adjust the degree of high frequency peaking created by C1, and change C1 to raise or lower the frequency peak. With the values shown, the peak is about 3dB at 2kHz (relative to the 1kHz level). A smaller resistor increases the peak amplitude, and a smaller cap increases the frequency. 4.7nF gives a frequency of around 4.6kHz and increases the peak's amplitude to 9dB, still with the 2.2k resistor for R1. These components are interactive, and also depend on the inductance of the transducer. Accutronics suggest a 2.2nF capacitor directly in parallel with the pickup coil, but that's unlikely to be entirely satisfactory for most players. I don't think many people would be happy with a 33dB boost at 7kHz, as it will have little audible benefit.
R2 prevents the opamp from swinging to the supply rail if (when) the reverb tank is unplugged. It must be high enough to prevent the coil inductance from causing premature high frequency rolloff, and should be at least 10 times the nominal coil impedance (100k is shown, but anything from 22k to 220k will be alright). Gain will need to be increased (by adding another stage) for the 500 ohm coil and reduced for the 12k coil. The latter is probably a poor choice, and I suggest the 'B' coil if you have the opportunity to be choosy. It seems to be the most popular option so should be easy to get.
Gain is varied with R4 - lower values give less gain. A pot can be used at the output for level control. The NE5532 can drive low impedances easily, and the pot should be 10k (audio taper is preferred). The disadvantage of this arrangement is that all the gain is concentrated in the one opamp, and if the signal level is higher than expected (which happens fairly easily with reverb tanks), there is a risk of clipping. However, for this to occur, the tank's output would have to exceed 80mV peak. I have tried, but was unable to get anywhere near that much. C6 is included to roll off the extreme top end, and that will help to reduce the apparent noise created by the high gain amplifier stage.
For maximum flexibility a two stage amplifier can be used with the level control between the two, but it's unlikely to be needed in reality, unless you have the 500 ohm coil and need additional gain. The second stage will typically only need a gain of 2-3, and this helps keep noise low. I've tested a recovery amplifier (using NE5532 opamps) with a total gain of up to 1,000 and was easily able to get an output level of 1.5V RMS - somewhat less than the specifications for the tank would imply. Noise was audible with no signal, but only when my workshop amp's gain was turned up so far that the result with signal would be deafening.
Reverb recovery amplifiers are fairly straightforward, but the impedance and sensitivity of the output transducer should be chosen based on response and noise. The 'B' output coil is probably the best of them, as it combines a reasonable impedance and output level, both of which are well suited to most low noise opamps. The high impedance coil does provide more level, but it's also more sensitive to load impedance and may suffer from high frequency attenuation. The low impedance coil doesn't have enough level, and may be more sensitive to radiated magnetic fields because of the extra gain required.
The final step is to decide if you want to add a clipping indicator or level meter to the drive amp. Having some form of metering allows the drive level to be set to the optimum, maximising output level. While generally not included in guitar amps because of the added complexity (and marginal usefulness), for a studio or PA application it's essential. Provided the drive amp has sufficient headroom, I initially recommended against any form of compression or limiting, and suggested that a meter or indicator is a relatively simple addition. Well, having tried it, I would now actually recommend using a limiter (see below for the details).
The Project 60 LED level display is ideal. It's small enough to make it easy to fit into a small chassis, and is easily calibrated to indicate the maximum allowable drive level. The schematic (with values amended to provide about 1V RMS input sensitivity) is shown below. The meter is usually connected in parallel with the input to the drive amp, but there's no real reason that it can't be reconfigured to measure the signal level from the drive amplifier. See the project article for the required values of R3 and R4.
Figure 10 - LED Meter For Drive Level Monitoring
The meter should be operated in dot mode, because it is likely to be too irksome to provide the various different supplies needed to allow the unit to operate in bargraph mode, which increases the IC dissipation dramatically. The input sensitivity as shown is 1.25V with VR1 at maximum - this means that the sensitivity is pretty close to perfect for a nominal 1V input. LED current is set to about 11mA with R3 at 1.2k, but it is easily reduced if needed - if R3 is increased to 1.5k the LED current is just under 9mA.
In many cases, a simple clipping indicator will be sufficient. It's actually harder to do than the LED meter though, because there are no PCBs available for a suitable circuit. If you don't mind some Veroboard wiring, you can use the circuit below.
Figure 11 - Clipping Indicator For Drive Level Monitoring
It's a pretty simple circuit and will work well with the suggested opamp, which is cheap but has limited performance. That's not a problem for this circuit. The second stage is a comparator, and is used to 'stretch' the overload peak so the LED is on for long enough for you to see it. While the circuit might seem like overkill, really simple circuits just don't work well enough to be useful. It's important that the meter doesn't load the input or drive signal - depending on where you choose to connect the indicator - so a high input impedance is essential. The 100k pot is used to set the circuit gain so that a signal that just exceeds the rated coil current will trigger the LED.
If connected to the high impedance coil driver's output, the signal level applied to the circuit must be reduced because it's too high for the opamp. Input impedance needs to be as high as possible, and if used with the high impedance drive circuit, replace VR1 with a 1Meg pot. You may use as many of these circuits as you need, but most constructors will just use one for the drive amp.
In most cases, reverb units are designed to allow a 'dry' signal (no reverb), and use a pot to adjust the reverb level to the output. Sometimes (for example if used with a PA mixer), only the 'wet' signal (reverb only) is needed. To be really useful, the support circuitry should allow both modes of operation. Some guitarists might like to experiment with using a second small amp and speaker just for the reverb - it's an interesting sound.
The input stage can be balanced if desired, and likewise the output(s), although I've only shown an unbalanced version. It's useful to provide a high input impedance, but unless you plan to plug the guitar straight into the reverb circuit (not really recommended), there's not much point in having an input impedance above 22k or so. The 'Drive' control will typically be a preset, but if a limiter (such as that shown below) is used the pot isn't needed at all.
Figure 12 - Unbalanced Input, Mixer & Output Stages
The direct (dry) path has unity gain, so 1V input gives 1V output with the level control at maximum. The reverb signal can also be taken to a separate output if desired. If that's done, the wet (Reverb Out) output will have a level that depends entirely on the reverb recovery amp's gain, as it may be fed to another amplifier or mixer. The mix of dry and wet signals is set by the Reverb control - it's not really possible to give a gain figure, because it will vary widely, depending on the input source, selected reverb tank, etc. With the circuit shown in Figure 12, the gain is unity with the 'Reverb' pot at maximum.
While there are many more possibilities, the purpose of this article is to give ideas, rather than complete details of a defined project. Using ESP boards, there is a wide range of additional possibilities. Using a P94 'universal' preamp/mixer allows the addition of tone controls, as well as full mixing capabilities. The P113 headphone amplifier is ideal as a driver for low impedance tanks (8 ohms is no problem), and the second channel can be configured as the recovery amplifier. The only things missing are the simple clipping indicator and discrete high impedance drive circuit.
Rather than all the tedious messing around with level meters or clipping detectors, you might simply want to include a limiter before the drive stage. This ensures that the maximum drive level can't be exceeded, preventing the drive amp from clipping or the reverb tank's drive coil from saturating. Yes, of course it's overkill, but the added cost is actually quite small. This limiter is almost perfectly matched to the driver circuits shown above, and the output level trimpot (VR2) will normally be set to about halfway. This is quite possibly the single most worthwhile addition to a reverb circuit, as it's really easy to get the perfect level and it will be very consistent.
Figure 13 - Compressor/ Limiter For Drive Amplifier
This circuit was developed for a project, and is one of the simplest I've ever seen. Despite that, it works very well, but the choice of opamp is limited because it must have a high drive capability. The NE5532 is perfectly suited to this, and that's what I used during the project development. In use, the level control (VR2) will be preset to give the maximum drive to the tank, and the compression control VR1 will be varied as needed. Make sure that the circuit is driven to maximum compression before setting VR2. The panel indicator LED is optional, and if not used will reduce the output level to about 2V RMS.
Some drive amplifiers that you find elsewhere may need to have their gain reduced if the compressor is included, because they may already have a high output level (up to around 3V RMS). The input sensitivity of all the drive amps shown here is around 1.5V RMS, so they should not need any modification. The compressor is quite capable of driving a unity gain stage to the maximum level needed for a typical tank. I tested it with the Figure 7 circuit, driving my old high impedance tank. Performance was exactly as expected - it works very well indeed. For step-by-step details on how to make your own optocoupler, see Project 145.
With the values shown, the limiter will be at the limiting threshold when the 'Compression' pot is at maximum resistance and with an input voltage of around 150mV. When the 'Compression' pot is at the minimum setting, input sensitivity is around 1.5V RMS. Gain can be increased or reduced by changing the value of R4. A lower value gives a higher gain and vice versa. If the first stage has a gain higher than 4 (e.g. R4 is less than 820 ohms), use a 47µF or 100µF electro in series with R4 to keep DC offset low. Polarity is not important because the voltage will be well under 100mV.
Another 'compression' system was suggested by a reader, who says it was used by a British company called Grampian (which operated through the 1960s and ceased trading some time in the 1980s). Their reverb unit used mainly germanium transistors, but the clever part was the use of a lamp in series with the tank. This approach isn't new (well, it can't be if it was used in the 60s), and it's been used in many small studio monitor speakers to protect the tweeter (often a small compression driver with horn loading).
In the original Grampian reverb unit (Types 636 and the 666 which came along later), the tank drive was a simple transformer driven push-pull low-power amp with no feedback, and the lamp was in series with the tank, bypassed with a 2.2µF capacitor. I would expect the sound to be a bit on the rough side given the overly simplified design (especially with germanium transistors), but I've been told that it doesn't sound too bad at all. Circuitry has come a long way since then, and it's now easy to make a circuit that will outperform anything from that era. The lamp is still clever though, and I don't know of any other manufacturer who has used that approach. Having said that, it does appear that Hammond also used a lamp limiter, but I could find no details about how it was implemented.
Figure 14 - Lamp Based Compressor For 8 Ohm Tanks
In the schematic, you can see that R2 has been augmented by the lamp, wired in series. The amplifier is still set up for an 8 ohm tank, and uses current drive to the tank's drive coil, but the gain is determined by the resistance of the lamp and R2 (SOT means 'select on test'). The value of R2 will probably be around 10-22 ohms, and the lamp shown will have a resistance of 40 ohms with 6V RMS across it. At low levels, the lamp's resistance will be very low (probably no more than 10 ohms), and as the level is increased, the filament gets hotter and resistance increases. This reduces the gain and provides the compression effect. The circuit's gain will be constantly changing, depending on the input level.
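To put some numbers on the compression range, assume (as a sketch only) that the stage's gain is proportional to 1 / (R_lamp + R2), which follows if the feedback is taken across the lamp and R2 in series. The lamp resistances (10 ohms cold, 40 ohms hot) are the figures quoted above; R2 = 15 ohms is simply the middle of the suggested 10-22 ohm range.

```python
import math

R2 = 15.0  # ohms - mid-range of the 10-22 ohm value suggested in the text

def gain_change_db(r_lamp_cold, r_lamp_hot, r2=R2):
    """Gain reduction (dB) as the lamp heats up, assuming gain is
    proportional to 1 / (R_lamp + R2) for the current-drive stage."""
    ratio = (r_lamp_hot + r2) / (r_lamp_cold + r2)
    return 20.0 * math.log10(ratio)

# Lamp: ~10 ohms cold, ~40 ohms with 6V RMS across it (from the text)
print(f"Compression range ~ {gain_change_db(10.0, 40.0):.1f} dB")
```

Under these assumptions the lamp provides roughly 7dB of gentle gain reduction between quiet passages and full drive, which is consistent with the 'very nice' compression reported.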
In the Grampian circuit, the lamp was on the front panel, marked 'Overload' - rather pointless really, since I don't know of any guitarist who watches the gear while playing. Be that as it may, the compression provided by the lamp is (apparently) very nice. I've not tried this, but if you can get hold of a suitable lamp (plus spares!) it should work very well. Lamps have been used for many years in small speakers as noted above, and to stabilise the gain of Wien bridge sinewave oscillators, so the technique has wide ranging applications. Suitable lamps are getting hard to find though, so if you do come across them, grab a few while you can. Anything rated at between 6 and 6.5V at 150mA should work well.
The Grampian unit simply had the lamp in series with the tank's drive coil, but the major benefit of the approach shown above is that the amp's gain changes with level, so it becomes very difficult to overdrive the input coil. Remember that the circuit is a voltage to current converter, so the total output voltage doesn't change much, but the current through the drive transducer is varied as the lamp's resistance and/or input voltage changes.
Naturally, it's also very easy to include a switch so that the lamp circuit can be switched in or out, with just a resistor to ground in place of the lamp+resistor shown. The switch also provides a means to get reverb back should the lamp fail. You can experiment with the lamp, but it seems likely that anything rated at between 6 and 6.5V at 150mA should be close to optimum.
The following table is adapted from the data provided on the Accutronics website, and as an example I have highlighted the specification indicated by each character of the (new) 4AB3C1B tank I have. The table here is only for Type 4 tanks - some of the impedance options are different for the Type 1 tanks, and they are not included. The ideal arrangement for most applications will use a fairly low input impedance, medium output impedance, and have an insulated input so you can apply current drive. Reverb time is up to the user, as is the mounting style. Some of the more obscure mounting options will probably be very hard to find.
Char #1 - Reverb Type        4 = Type 4 *

Char #2 - Input Impedance    A = 8 Ohm (White) *
                             B = 150 Ohm (Black)
                             C = 200 Ohm (Violet)
                             D = 250 Ohm (Brown)
                             E = 600 Ohm (Orange)
                             F = 1,475 Ohm (Red)

Char #3 - Output Impedance   A = 500 Ohm (Green)
                             B = 2,250 Ohm (Red) *
                             C = 10,000 Ohm (Yellow)

Char #4 - Decay Time         1 = Short (1.2 to 2.0 sec)
                             2 = Medium (1.75 to 3.0 sec)
                             3 = Long (2.75 to 4.0 sec) *

Char #5 - Connectors         A = Input Grounded / Output Grounded
                             B = Input Grounded / Output Insulated
                             C = Input Insulated / Output Grounded *
                             D = Input Insulated / Output Insulated
                             E = No Outer Channel

Char #6 - Locking Devices    1 = No Lock *

Char #7 - Mounting Plane     A = Horizontal, Open Side Up
                             B = Horizontal, Open Side Down *
                             C = Vertical, Connectors Up
                             D = Vertical, Connectors Down
                             E = On End, Input Up
                             F = On End, Output Up

(* = the specification of the example 4AB3C1B tank)
The colour indicated for the input and output coils is for the plastic bobbin, and is a secondary way to identify the impedances. This can be useful if the type number has been removed. The 'outer channel' (i.e. the outer chassis) dimensions are 425 x 111 x 33mm (16.75" x 4.375" x 1.313").
The mounting plane is surprisingly critical, particularly the horizontal options. If a tank intended for 'open side down' is mounted with the open side up, the ferrite magnets will be so close to the pole pieces that even a small bump will cause them to touch and generate lots of unpleasant noise.
It is important to ensure that when the chassis is mounted, it is provided with some protection against vibration. There must be nothing that can touch the springs, as that will ruin the sound. Never use any kind of foam as a partial support, because it will eventually decompose. If the foam residue gets onto the springs you will almost certainly never get it off well enough to restore the natural sound of the tank.
Elliott Sound Products
Hobby Servos, ESCs And Tachometers
Copyright © 2018 - Rod Elliott (ESP)
Published January 2018
Contents

Introduction
1. What Is A Servo?
1.1 Hobby Servos
2. Motors
3. How A Servo Works
4. Servo Tester
5. Testing ESCs
6. Modify A 180° Servo For 360°
7. Build Your Own Servo
8. Proportional Integral Derivative (PID) Controllers
9. Averaging Receiver Pulses
10. Electronic Speed Control
11. Regenerative Braking
12. Tachometer Design
13. Speed & Position Monitoring
Conclusions
References
Servos are used extensively in robotics, hobby planes, boats, cars, etc., animatronics, lighting and in industry. There are countless different ways that servos are used, but in this article I concentrate on the hobby side of things - i.e. servos that are used in hobby/ robotics systems and use the standard RC (remote control) PWM control protocol. Even here there are many different types, but the industry as a whole has concentrated on a standard protocol that is used in the vast majority of applications. These servos are used with all manner of remote control (RC) systems. While these self-contained servos are the most visible to hobbyists, servo systems are used in a vast number of applications. Some include ...
This is far from an extensive list, but it gives you some idea of the diversity of servo systems. Railways (both scale models and the 'real thing') use servos to control track switches (aka points) and signal arms, as do cranes, lifts (elevators) etc. Not all are electrical/ electronic - some hydraulic/ pneumatic systems can also use servos based on 'fluid logic', or a hybrid using electronic control of the hydraulic system. It's not even essential to have a mechanical output to decide if something is a servo or not. A thermostat controlling temperature is just as much a servo process as any other system listed above, and indeed this is a very common usage - even if it doesn't meet the strict definition of a servo.
Other common examples of servos are used by non-technical people, although it's not generally realised what they are. Cars are a good example. The most obvious is cruise control, which allows you to set the desired speed, with the system adjusting the engine power to maintain it. Power steering is another: a comparatively small input from the driver is amplified and applied to the steering rack. Finally (of course) there's 'power assisted' brakes and ABS (antilock braking system), which use similar principles and/or feedback control. We may not think of these as servos, but they fit the definition. Early automatic transmissions used fluid logic to control hydraulic servos to perform gear changes. Servos may have rotary or linear output.
In general, a servo system is one that accepts an input, and produces an output that is in direct proportion to the original input, but often amplified by a factor ranging from tens to millions of times. Precise control is effected by means of negative feedback. The amplification can be power, distance or both. Servos are also used in 'reverse', where a large input is reduced to a very small (potentially microscopic) output, allowing finer control than a human could normally be expected to provide unassisted.
Not all servos are immediately recognisable as such, because the range is so broad it covers an enormous number of different mechanisms. If you look up the definition of 'servo' the number of possibilities is huge, but for the most part you should ignore the Australian colloquialism - here in Oz, a 'servo' is also a service station (an outlet that sells petrol, aka 'gasoline'), which has nothing to do with the topic here.
Servos are a 'closed-loop' system, and rely on feedback from the controlled output that stops/ maintains active control when the target position has been achieved. This can be a point in space, a pressure/ force, RPM (revolutions per minute) or any other quantifiable entity. A human can be part of a servo system (in a broader sense of the term), and what is observed by the operator is corrected as necessary to achieve the desired result. Mostly, we assume that a servo has its own feedback system, but the human operator is no less a 'feedback amplifier' than an electronic circuit (but humans are usually less predictable).
In some cases, you will see it suggested that stepper motors are a form of servo, but this is usually not correct, because most stepper motors are used 'open-loop', i.e. without feedback to confirm that the desired position has been reached. Instead, it's assumed that the stepper motor has advanced by the programmed number of steps, and this is usually very reliable unless the system is overloaded or develops a fault. Open-loop systems always need to establish their limits when the processor is started, and this can be seen with ink-jet printers for example. The back and forth operation of the print head establishes the start and end positions and verifies that there are no obstacles preventing full movement. High precision or potentially high-risk stepper motor systems may also incorporate a servo (feedback) circuit to confirm that the exact target position has been achieved.
Most quad-copters and other multi-rotor systems use multiple electronic speed controls, but have no control surfaces as such. In this case, the 'target' is a specific motor speed, and this is as easily controlled by a servo system as anything else. Rather than a position sensor, a tachometer generator can be used to verify that the desired speed has been reached and/ or maintained. Most low-cost 'drones' rely on the operator to ensure stable flight (a 'human feedback' system). Traditional servos may still be used to position cameras or release payloads on demand.
Note: Despite the fact that this article includes schematics and circuit descriptions, this is not a series of projects. The circuits are provided by way of explanation, and although they should be functional as shown, they are not construction projects. However, they are a good start for anyone wishing to experiment, and if you are new to the world of hobby servos, you should find the info useful. The servo tester (Figure 11), demonstration servo system (Figure 12) and electronic speed control (Figure 14) have been built and tested.
In the following, a capacitor is nearly always needed directly across the motor terminals for EMI (electromagnetic interference) suppression, but it has not been shown in the drawings for clarity. You will need to decide whether this is needed or not for your application. In some cases, the motor wires may also be fed through ferrite beads for additional EMI reduction, especially if you find that the receiver misbehaves in use. This indicates that you have an interference problem, and a complete cure may be quite hard to achieve (especially with high speed brushed motors).
The term 'servo' actually covers a very wide range of applications, but in this context, it means a motion controller as used to steer a model vehicle, operate model airplane control surfaces (ailerons, flaps, elevators and rudder) or provide limb movement for robotic systems. These are commonly known as 'hobby servos' (hereinafter known simply as 'servos'). Until comparatively recently, these servos were the mainstay of remote control enthusiasts and other modellers (e.g. model railways), but have become far more widespread as people experiment with robotics, 'battle-bots' and other mechanical systems that were once the subject of science fiction.
The earliest servo systems were commonly known as 'Synchro' or 'Selsyn' (self synchronising) motors. These were in use from the early 1900s, and were the first electrical method of remotely positioning anything from gun turrets to antennas, or sending this information from a manually activated system back to a control room for monitoring the position of the equipment. They were used extensively in aircraft instrumentation, allowing the use of wiring rather than pipework to transmit properties such as airspeed, etc. The general principle uses a powered rotor and 3-phase stator windings. The rotors were supplied from the same voltage source (115V, 400Hz for aircraft). The 3-phase stators of each are connected together (in the proper order), and movement of one rotor (the transmitter) would unbalance the 3-phase signal until the other (the receiver) was in perfect synchronisation (i.e. rotational position). In many cases the transmitter and receiver were fully interchangeable, so either 'end' could be rotated and transmit its position to the other end.
There's a lot of info on-line if you know what to look for. I must admit that I've always wanted a Selsyn system, although it's more for the sake of having one than actually having a use for it. Some things are just too interesting to be ignored. Alas, my quest has not been fruitful thus far (primarily because they would constitute a very expensive toy).
Ultimately, a servo system has a control input and a feedback input. Its goal is to reduce the error between the two to zero. The difference between the two inputs represents a mathematical equation, and it is deemed solved when the error term is zero. While this sounds easy enough (opamps do exactly the same thing), it's far more complex when there are mechanical systems in play, as they have mass, friction, inertia and momentum. A good servo system relies on an understanding of control loop theory, and must consider the many different mechanical time constants present within the system as a whole. The task is made more difficult when the load on the controlled mechanism changes, whether due to increased friction, added mass, wind or water loads (airplanes and boats for example) or any other change - expected or unexpected.
This is not a simple task, and doubly so if a malfunction (or failure to obtain a 'true zero' result) could threaten life or limb. With the recent noise about 'autonomous' cars which require servo controls for almost every major function, this becomes all too real. Needless to say, this is not a topic that's explored here. However, the principles remain much the same, even if the 'self driving' car is equipped with artificial intelligence. The servo is a sub-system that does what it's told (well, that's the idea at least).
For so-called 'mission critical' applications, the vast majority of servo systems will be based on the Proportional Integral Derivative (PID) controller, as it can solve the 'equation' more efficiently, despite wide variations in the applied load.
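The PID idea mentioned above is easy to express in code. The sketch below is the generic textbook form, not the circuit of any particular servo; the gains, the simple integrator 'plant' and the time step are illustrative values only.

```python
class PID:
    """Minimal discrete PID controller (textbook form)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0      # accumulated (integrated) error
        self.prev_error = 0.0    # for the derivative term

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

# Drive a toy 'plant' (position changes in proportion to drive) to 1.0.
# All gains here are made-up demonstration values.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
position = 0.0
for _ in range(2000):
    position += pid.update(1.0, position) * pid.dt
print(f"final position ~ {position:.3f}")  # settles very close to 1.0
```

The proportional term does most of the work, the integral term removes any residual (steady-state) error, and the derivative term damps the response - which is why PID copes well with the changing loads described above.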
Hobby servos range from miniature low-cost types with plastic gears and minimal torque through to fully metal geared models with ball bearings and big motors that can be extremely powerful. The same pulse-width control system is also commonly used for electronic speed controls (ESCs) that operate motors for wheels, propellers, helicopter rotors and even 'electric turbines' (pretend jet engines). This is a fairly crude form of PWM (pulse width modulation), but in practice it works quite well.
A fairly standard hobby servo is shown below, along with the accessories. There are four different horns, shock-mount rubber bushes, internal eyelets that fit inside the bushes so they're not compressed by the mounting screws, mounting screws and a screw to attach the selected horn to the splined output shaft. Not all servos will come with everything shown of course.
Figure 1 - Hobby Servo With Standard Accessories
Servos have used a de-facto standard PWM technique since around the 1990s to control the position of the output shaft. The pulse is fed to the servo via a control line. The control line does not supply power to the motor; that is handled by a control chip inside the servo housing. There is little or no power needed for the control signal, perhaps a few microamps at most. Most servos are limited to a maximum of around 6V, although some (especially larger types) may use up to 24V. Current ranges from a few hundred milliamps up to several amps, depending on the type used and the load applied.
The motor is equipped with a gearbox (often known as a 'gearset') to increase torque and reduce the motor's speed. Small motors have high speed but very limited torque, and the gearbox is essential. Gears can be plastic (usually nylon), metal or 'Karbonite' - a reinforced plastic that has minimal wear but is much stronger than nylon. Some (at least claim to) use acetal (Polyoxymethylene, aka POM), a particularly strong, wear resistant plastic that's common in high-performance engineering components.
Figure 2 - Servo Gearbox (Metal Gears)
The output shaft is splined to eliminate slippage, and the shaft is fitted with an actuator, commonly known as a 'horn'. The horn can be a full disc, a two or four armed activator, or a single arm actuator so it can be matched to the requirements of the model. Most hobby servos are supplied with at least a couple of different horns, and a selection is shown above. Note the stop pin on the final output gear (far left on the largest gear wheel). If you wanted to convert this servo to continuous rotation (described further below), the pin has to be removed or cut off, or it will foul the gear that overhangs and badly damage the servo. The brass ring below the splined output is the final bearing, which can be lifted off the shaft. Just because a servo uses metal gears, that does not guarantee that heavy loads can be accommodated. The gears in the servo shown have a very basic tooth profile, not one that is optimised for minimum power loss or friction. However, the gear train is commendably free of backlash.
Figure 3 - Servo Motor, PCB (Removed) And Feedback Pot
A feedback potentiometer (pot) is connected to the output shaft to send position information to the servo amplifier. It's buried directly below the output shaft (you can see the three red wires from the pot to the PCB). This particular servo uses an AA51880 controller IC, with two dual MOSFET ICs (one P-Channel and one N-Channel) to drive the motor. I haven't shown the PCB, as there's really nothing to see because it's all SMD parts, but if you want to, you can get the datasheet easily, and there are example circuits included. It's almost guaranteed that the circuit used in these servos is almost identical to the circuit shown in the datasheet.
The control signal for nearly all hobby servos is a very basic form of PWM, but there are a few characteristics of the signal that are fairly critical. The pulse repetition rate is usually 20ms (50Hz), but 40ms (25Hz) is also common. The minimum pulse width is 1ms, and this equates to zero speed for an ESC, or full left (anti-clockwise) rotation of a servo. The maximum pulse width is 2ms, full speed for an ESC or full right (clockwise) rotation of a servo. The centre position (or half speed) corresponds to a pulse width of 1.5ms.
In some texts found on the Net, you may see the claim that the minimum pulse width is 0.5ms (500µs) and the maximum is 2.5ms. Other limits may also be seen (e.g. 800µs to 2,200µs) in the documentation. While some servos can accept these wider ranges, most use the 1ms-2ms protocol and some may damage themselves if the 'standard' range is exceeded. Some digital servos have a range from 900µs to 2.1ms (2,100µs), but they can be programmed to operate over the standard 1-2ms range. Many transmitters limit the travel to ±45° (or less) for many of the axes, because ±90° is far too radical for a control surface, rudder or steering system. This depends on how the model is set up of course - 90° of servo movement need not necessarily provide 90° movement of the axis being controlled.
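The standard mapping is easy to capture in code. This sketch clamps the pulse to the safe 1-2ms range first (so an out-of-range pulse can't command a potentially damaging position), and assumes a ±45° servo like the ones pictured; change travel_deg to 180 for a ±90° type.

```python
def pulse_to_angle(pulse_us, min_us=1000, max_us=2000, travel_deg=90.0):
    """Convert a control pulse width (microseconds) to a horn angle.

    Negative angles are anti-clockwise, 0 is centre (1.5ms) and positive
    angles are clockwise. travel_deg is the total travel, so the default
    of 90 gives +/-45 degrees over the standard 1-2ms range.
    """
    # Clamp first - some servos can damage themselves outside 1-2ms
    pulse_us = max(min_us, min(max_us, pulse_us))
    centre = (min_us + max_us) / 2.0
    return (pulse_us - centre) / (max_us - min_us) * travel_deg

print(pulse_to_angle(1000))  # -45.0 (full anti-clockwise)
print(pulse_to_angle(1500))  #   0.0 (centre)
print(pulse_to_angle(2000))  #  45.0 (full clockwise)
print(pulse_to_angle(2500))  #  45.0 (out-of-range input, clamped)
```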
Figure 4 - Servo Horn Positions For Varying Pulse Widths
While the drawing above shows the most common rotational directions, some servos may be different. In particular, continuous rotation types may be the opposite of that shown. These (usually) have an accessible trimpot to allow the servo to be set to stationary with a 1.5ms pulse width, which allows for some deviation from the ideal that may be encountered with some RC systems. Many remote control transmitters rely on human feedback for exact positioning of steering or flight controls, and high accuracy is not a given (especially for low-cost controllers). Some have 'trim' adjustments to allow the centre positions to be set from the transmitter. Not all servos have the full 180° range - the ones pictured above have a ±45° range when the pulse is varied from 1ms to 2ms.
There is a 'dead time' of between 18-19 (or 38-39) milliseconds between each little pulse. Some systems don't care too much about the duration of this dead time, others can be quite fussy and won't work properly if it's too long or too short (note that this is completely different from the 'dead-band' discussed elsewhere). The main purpose is (or was, later transmitters & receivers generally use different methods) to allow other channels to have their control signal transmitted - this modulation scheme is called 'time division multiplexing'. This is similar to the way multiple telephone calls are multiplexed onto a digital data stream for example, and the same requirements for synchronisation apply.
Each servo channel is assigned a 'time slot' in the transmitted signal from the radio controller (or any other system - e.g. hard-wired or infra-red light can also be used). This is the responsibility of the transmitter and receiver systems, and it is not part of the servo protocol. It will be apparent that you could (in theory) have 10 × 2ms pulses in a 20ms time period, but this would give the receiver no time to determine which pulse belongs to which channel. Most systems allow a maximum of 8 channels, but the majority of controllers have fewer - three to six channel systems are the most common.
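The timing argument can be checked with simple arithmetic. Assuming each channel may need the full 2ms pulse, the worst-case spare time left in a 20ms frame for synchronisation is:

```python
def sync_gap_ms(channels, frame_ms=20.0, max_pulse_ms=2.0):
    """Worst-case idle time left in one frame after all channel pulses."""
    return frame_ms - channels * max_pulse_ms

for n in (3, 6, 8, 10):
    print(f"{n:>2} channels -> {sync_gap_ms(n):4.1f} ms spare for sync")
# 10 channels would leave 0ms - no time to synchronise, which is why
# practical systems stop at 8 channels in a 20ms frame
```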
The additional time allows the transmitter and receiver to synchronise, so the send and receive time slots are aligned. We really do want to be certain that the input signal we create for (say) channel 3 goes to channel 3 of the remote system. Without synchronisation, mayhem would ensue, with whatever one tries to control doing everything wrong. Most people would consider this to be undesirable.
Some remote controllers allow the same scheme (often (incorrectly IMO) known as PPM, or 'pulse position modulation') to be used in the model itself, so multiple servos can use a single control wire. Synchronisation is still required, but the servos must be designed for this purpose. Otherwise, a separate multiplexer is used to separate the signals into individual channels so each servo gets the appropriate commands.
Figure 5 - Internal Diagram Of A Servo
The gearing depends on the servo itself, and the final speed and torque needed. The drawing above shows a two stage reduction, but the unit pictured in Figure 3 actually has a four stage reduction gearbox. There are four pinions, but two aren't visible in the photo. The first (and smallest) is mounted on the motor shaft and drives the lowest gear in the centre of the mechanism. The drawing has been simplified for clarity.
The internals of a servo are shown in the photos above, but that only goes part way to explaining how it all works together. The control IC is obviously the 'brains' of the operation, with the motor and gearbox providing the power. The feedback pot tells the controller when the output shaft has reached the desired position, and this is provided as a DC voltage that depends on the pot's wiper position. The pot and output shaft are locked together, and ideally there will be zero backlash in the coupling.
Any backlash would result in the position being incorrect, but most hobby servos have a fairly significant dead-band (where the shaft doesn't move with small changes in the control signal). There will always be some degree of backlash in the gear train, because some clearance is essential so the gears and pinions don't bind together. There may be a significant change in the amount of backlash as a servo (or any other geared motor) wears. Proper lubrication is essential, but it's not generally mentioned in user manuals.
The electronic dead-band is unfortunate but unavoidable. Without it, the servo would oscillate around the desired set point, increasing battery drain and possibly making the control unstable in normal use. The dead-band is usually set by the controller IC, and the datasheet may (or may not) explain which resistor can be changed to reduce the dead-band to the minimum. Doing so may not even be possible, especially if the circuit is all SMD parts (which is now normal).
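The effect of the dead-band can be demonstrated with a toy position loop. This is purely illustrative (a real servo is driven by the controller IC, not software), and the gain and dead-band figures are made up for the demonstration.

```python
def servo_step(position, target, dead_band=2.0, gain=0.2):
    """One update of a crude proportional position loop.

    Inside the dead-band the motor is not driven at all - this is what
    stops the servo 'hunting' around the set point, at the cost of a
    small residual position error."""
    error = target - position
    if abs(error) <= dead_band:
        return position          # no drive: stable, but not exact
    return position + gain * error

position = 0.0
for _ in range(100):
    position = servo_step(position, 90.0)
print(f"settled at {position:.1f} deg")  # stops within 2 deg of 90
```

Once the error falls inside the dead-band the drive ceases entirely, so the loop is quiet and stable, but the final position can be up to 2° (in this toy example) away from the target - exactly the trade-off described above.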
Some more recent designs use a digital data stream to send the information, allowing for more robust error checking and/or correction, and the repetition rate is no longer relevant. However, it's been maintained to ensure backward compatibility for analogue pulse-coded transmitters and receivers. Eventually, it's probable that these digital protocols will become dominant and servo design changed accordingly. In the meantime, there is unlikely to be any change because there are millions of products and designs using the present system.
Figure 6 - Circuit Diagram Of AA51880 Servo Controller (Typical)
The general idea of the servo driver is shown above. This is a (redrawn) example circuit provided in the AA51880 datasheet. The AA51880 IC provides the drive for the motor, and as shown it uses external N and P-Channel MOSFET transistors for the motor drive output. This is done for higher power servos - miniature types can be driven directly by the IC. By using external transistors, the allowable current is limited only by the MOSFETs and the IC is not stressed. The AA51880 allows the use of a single pair of PNP transistors, a PNP/NPN H-bridge or the MOSFET H-bridge as shown above. Device selection is based on the motor's power (and therefore its current - voltage is typically limited to 5-6V).
Figure 6A shows the circuit for the M51660 IC servo. The M51660 is a fairly old IC now, but it is (or was) very common in commercial servos. The servo-motor is driven using a full-bridge driver, so power can be applied in both directions (forward/ reverse), and dynamic braking can also be used (although it isn't provided by this particular IC). The only input is the pulse, with 1ms corresponding to full left (anti-clockwise), 2ms is full right (clockwise) and 1.5ms is the centre ('neutral') position.
The IC is analogue, but many of the latest servos are digital, either in whole or in part. Very similar analogue ICs are the M51660 and NJM2611, and while they have a different pinout, the functionality appears to be almost identical. One of the first was the NE544, which is again almost identical in terms of internal (and external) circuitry. Yet another option is the MC33030 which is somewhat similar, and is available in surface mount. However, it expects an input voltage, not a variable width pulse, so external conditioning circuitry is needed for RC usage.
The 5k feedback pot is connected to the output shaft, which is geared down from the motor for increased torque and reasonably sensible operational speed. The pot is only used over about 180° of its travel (a normal pot has ~270° rotation). Many servos limit operation to ±45° for the default pulse width variation. The internal functions of the IC provide pulse-width decoding for the input signal, where the pulse width is translated into a control voltage to be compared with the output from the feedback pot. The circuit also creates a 'dead band' to prevent hunting (aka jitter). This is a condition where a servo overshoots the intended position, corrects and undershoots, ad infinitum. Digital servos generally have a smaller dead band, which improves positional accuracy. Many of the functions can be adjusted by varying the values of the external parts. For example, in the above circuit, R3 connects to the 'RDB' terminal (resistance, dead-band), which is called the 'error pulse output' in the datasheet. No details are provided as to the effect of changing the value.
Most (all?) of the latest servos use SMD (surface mount device) ICs and other parts, making them a lot smaller than the SIP (single (staggered) inline pin) arrangement used for the M51660. The operation is essentially the same though, with the electronics controlling the motor until the voltage from the position pot matches the internal voltage derived from the pulse width of the input signal. The absolute voltage of the control input is immaterial, provided it's within the range the IC can process normally. Some more recent (and more expensive) servos use digital processing which can yield some worthwhile benefits.
In this article, I do not intend to look at transmitters or receivers - only servos, but ESCs are discussed too because they use the same control protocol.
Although this article is not so much about motors, they are an integral part of servos and are also used with ESCs to provide motive power for models, so a brief discussion is worthwhile. There is a truly vast amount of info available on the Net, and there's no point adding to the repository with yet another article on the subject. However, there is more than enough here to at least get you started.
From the punk era (1979), look up "I Like 'lectric Motors" (by Patric D Martin) - it more or less sums up my own feelings on the subject.
The two main types of electric motor used for modelling are brushed DC motors and 'brushless' motors. The latter are not DC motors, even though they are commonly referred to as BLDC (brushless DC) motors. These motors are three-phase AC synchronous motors, and they require three AC waveforms, with each displaced by 120°. This creates a rotating magnetic field. The controller for these motors has to create the three phases at the right frequency, and ensure that the timing is right. If the motor slows down under load, the drive frequency must also be reduced. For modelling motors, the speed controller is commonly known as an ESC (electronic speed control).
Because these motors are synchronous (they run at the exact speed determined by the 3-phase AC input frequency), feedback is used to adjust the frequency to suit the rotation speed. The same type of motor is standard for DC fans, and a Hall-effect sensor is generally used to determine rotational speed and synchronise the electronics. They are also used in hard disk drives to spin the platter(s). These motors may also be referred to as 'EC' (electronically commutated) motors, and are becoming more common in high power applications (up to 12kW (16 HP) motors are readily available). They feature unusually high efficiency at any power level, and may eventually eliminate many traditional induction motors. However, there's a lot more to go wrong, and the ultimate in reliability is still the induction motor.

Sometimes, you may also hear these referred to as 'PMAC' (permanent magnet AC) motors. They are becoming very common for electric power of cars, bikes and boats (full sized ones). Many of the more powerful ones are liquid cooled, using a pump and radiator just like an internal combustion engine (ICE). These usually require an external controller - unlike EC motors that mostly have the controller built into the motor itself. The introduction of electric vehicles (EVs - whether fully electric or hybrid) has expanded the range of motors dramatically, but they all use the same underlying principles. EV motors are not part of this discussion.

Figure 7 - A Selection Of Motors
The motors shown above are a BLDC motor with ESC (top), a speed controlled tape recorder motor (lower left), general purpose DC brushed motor (lower centre) and the platter motor from a DVD drive (lower right). The BLDC motor and its speed controller are mounted in a piece of aluminium channel so it can be used on the workbench. Because it's an 'outrunner', the rotor is on the outside and without mounting it can't be run. The cable seen with the 3-pin connector at the end (foreground centre) is the control cable, which accepts a PWM output from a model receiver.
Synchronous motors run at the exact speed determined by the AC frequency and the number of poles. For example, a four pole synchronous motor running at 50Hz spins at 1,500 RPM. Speed is determined by the following ...

RPM = ( f × 2 × 60 ) / n    (where 'f' is frequency in Hz, 'n' is the number of poles, and 60 converts revolutions per second to RPM)
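The formula is easily checked numerically; this small Python helper reproduces the four-pole, 50Hz example:

```python
def sync_rpm(freq_hz, poles):
    """Synchronous speed: RPM = (f * 2 * 60) / n."""
    return freq_hz * 2 * 60 / poles

# A four-pole motor on 50Hz mains:
print(sync_rpm(50, 4))   # 1500.0 RPM
```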
Unlike induction motors (as used in many household products such as fans, refrigerators, etc.), there can be no 'slip' (the difference between the AC frequency and the rotor (aka armature) speed). If synchronous operation is 'lost', the motor loses almost all power and stops. Many 'BLDC' motors are referred to as 'outrunners' because the rotor is outside the stator, so most of the outer part of the motor spins, with only a small area for mounting the motor at the output shaft end. The AC synchronous 'outer rotor motor' was originally made by Papst GmbH, and they were used for (vinyl) turntables and tape recorders, where the outer rotor acted as a substantial flywheel to provide very low vibration levels. There is little historical info on these, but I still have one in my workshop. These motors started as induction motors, and once up to speed the permanent magnets would allow the rotor to 'lock' onto the rotating magnetic field to achieve synchronous operation. Starting torque is low, and no (significant) load can be applied until synchronous speed is reached.
The functions for BLDC motors are generally provided by a fairly comprehensive microprocessor, such as those made by ATMEL (e.g. ATMega or similar) which seem to have the lion's share, but other microcontrollers can be used as well. Controllers designed for brushed DC motors are (usually) far less complex, but some of those incorporate additional functions, such as dynamic braking. The motor is shorted out (under user control), which makes it stop very quickly. Braking can be via PWM (so it's controlled) or instantaneous upon reaching the 'stop' condition - typically a 1.0ms pulse from the controller unless the controller also provides the facility to reverse the motor. Dynamic braking is uncommon with reversible motors, but it can be done with a comprehensive controller.

Brushed DC motors use permanent magnets and a commutator, a segmented contact arrangement attached to the rotor. Electrical contact is made using carbon (graphite) brushes, so the motor effectively creates its own rotating field. The brushes are arranged to ensure that as the magnetised armature pole approaches one of the magnetic poles in the stator (the fixed part of the motor), the next pole is connected by the commutator so each rotor pole can never be fully attracted to a stator pole - it's a continuing sequence of attraction and repulsion, switched by the commutator. Reverse is achieved simply by reversing the polarity of the power supply.
Most common brushed DC motors use two permanent magnets for the stator (one North, the other South magnetic pole). The rotor is almost always three poles, although some use five poles for more power and smoother operation. The commutator may have the same number of segments as the motor has poles, but sometimes there will be (perhaps many) more. DC is applied to brushes that make contact with the commutator. The brushes are usually carbon, and eventually they wear out; for small motors, the whole motor usually has to be replaced. A drawing showing the essential parts is shown below.

Figure 8 - Brush Type DC Motor Components
Larger brushed motors (such as those used for mains powered drills, vacuum cleaners, etc.) are classified as AC/DC (no, not the band of the same name, although the band did choose its name after seeing it on a sewing machine motor). They are also known as 'universal' motors, because they can use AC or DC. These do not use permanent magnets, but use separate stator windings. The stator and rotor windings are usually in series, but parallel operation is also used (as well as a combination of the two in some cases). These motors can run in reverse by reversing the connections to the stator or rotor windings (but not both). Many are optimised for their 'normal' direction and the brushes will arc excessively if they are run in reverse. The brushes are usually replaceable in these motors. Most are multi-pole, and often use two commutator sections for each pole (e.g. a 12 pole motor has a 24 segment commutator). Note that the armature current in any brushed motor is AC, even if it's operated from DC.
Series wound DC motors are also used as the starter motor for most cars, as they have extremely high stall torque, and like all series wound DC motors they can reach dangerous speeds if operated with no load. Nearly all small motors use permanent magnets and a laminated steel (aka 'iron') rotor with the windings attached, although some are 'coreless' (aka 'ironless'), meaning that they do not use a steel core. This is done where extraordinarily fast response is required, because the coreless rotor has very low mass and minimal inertia. Torque is generally lower than for a similar sized 'iron cored' rotor.
Many motors (especially BLDC types) for hobby applications are rated in K/V (also shown as KV or Kv), where 'K' means unloaded RPM. You may assume from this that a 2,000K/V motor would run at 2,000 RPM with a 1V supply, 4,000 RPM with 2 volts, etc. However, this figure can only ever be taken as a guide, and will probably never be reached in practice. The common assumption is not strictly correct (look it up if you want more details), but it does give an approximate figure that may be helpful in a limited number of cases. It can be used to compare otherwise similar motors, but that only works if the maker's specifications are accurate. In general, a high KV motor will spin fast but has little torque, while a lower KV rating means lower speed but higher torque.

Don't confuse K/V (or any of its derivatives) with kV, which means kilovolts (1kV is 1,000V).
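As a sketch of how the KV rating is commonly (if loosely) interpreted - remember this is a no-load guide only, and real motors fall short of it:

```python
def no_load_rpm(kv_rating, volts):
    """Rough no-load speed for a motor rated at 'kv_rating' RPM per volt.
    A guide only - a loaded motor will always run slower than this."""
    return kv_rating * volts

print(no_load_rpm(2000, 2))   # 4000 RPM (approximate, unloaded)
```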
One trick that may be useful ... an old hard disk drive (HDD) motor can make an excellent tacho-generator. If you use only diodes to rectify the AC output there will be some non-linearity and temperature dependence, but for most applications this won't matter that much. Greater accuracy can be obtained using precision rectifiers (see Precision Rectifiers - ESP AN001 for details). If you want/ need a high precision system, you almost certainly won't be messing around with hobby motors or servos, so this won't apply. There is no 'ideal' HDD motor for the task - some are 3-wire ('delta' or 'Δ' connection) and others are 4-wire ('star', 'wye' or 'Y' - three phases and a 'neutral' connection). Rectification is easy, but you need 6 diodes for a 3-wire connection. Only three diodes can be used for a 4-wire version, but it has a lot more output ripple. A six diode rectifier is always preferable. The common ('neutral') wire of 4-wire ('star' connected) motors can be ignored for a tacho-generator, even though the motor is wired differently internally.

You can use any DC motor as a tacho-generator, but some brushed DC motors may place an unacceptably heavy load on the drive motor. Another method to measure speed is to use a photo-interrupter with a slotted disk that passes between a LED and a photo-transistor. The disk is attached to the motor shaft, and the pulses can be integrated (after pre-processing to get equal pulse widths regardless of speed) to produce a DC voltage that's proportional to the motor's RPM. In some cases the disk is coded, so the drive system knows not only the speed, but also the rotation angle at any moment in time. Whatever method you use to get a speed dependent voltage, this can be compared to the control voltage to keep the motor speed constant despite varying loads. The techniques for speed control vary, but most these days will use a microcontroller rather than the analogue techniques that used to be common. Tacho-generators are available as specialised devices so you don't have to use whatever comes to hand, but hobbyists will usually use something they already have, rather than buy an expensive commercial unit.

There's another common technique used to monitor (and correct) the speed of a brushed DC motor, and that's to measure the back-EMF (back electromotive force). The motor is powered using PWM, and during the 'off' period, the motor will generate a back-EMF that's directly proportional to the motor's speed. The higher the speed, the higher the back-EMF. This can be monitored using analogue or digital techniques and used to control the motor's RPM. The circuit requires gating techniques to ensure that only the back-EMF is measured, and not the applied voltage. It's easy to do, but does add some complexity to the circuit.
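A minimal sketch of the correction loop, assuming the back-EMF has already been sampled during the PWM 'off' period (after the inductive kick has settled); the gain and voltages are arbitrary illustration values:

```python
def correct_duty(duty, measured_bemf, target_bemf, gain=0.02):
    """One step of a simple proportional controller: if the sampled
    back-EMF is below target (motor loaded and slowing), increase the
    PWM duty cycle; if above target, reduce it.  Duty is clipped to
    the usable 0..1 range."""
    duty += gain * (target_bemf - measured_bemf)
    return min(1.0, max(0.0, duty))
```

In a real controller this step runs once per PWM cycle (or per back-EMF sample), and the gating that separates back-EMF from the applied voltage is done in hardware or firmware as described above.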
Figure 9 - Back-EMF Waveform From PWM DC Motor
The waveform from a more-or-less typical DC motor is shown above. The motor was supplied with 10V via a PWM speed controller (Project 126). As the MOSFET turns off, there is an inductive 'kick' seen on the rising edge of the waveform, equal to the supply voltage. It's only brief, lasting around 500µs. After that, the voltage seen is the motor's back-EMF (including ripple which is normal). As the motor is loaded, the back-EMF falls, and this is used to provide feedback that increases motor power to restore the preset RPM. The average back-EMF level is a little under 5V in the example shown. Note that the back-EMF voltage is measured from the positive supply in the example shown.
When the back-EMF falls, the voltage between 'power pulses' rises with respect to zero volts (and therefore falls with respect to the supply voltage). If the motor is driven faster (by a vehicle going down an incline for example), the back-EMF will become negative because the power supply voltage is fixed at 10V in this example. See Regenerative Braking below for what happens when the motor is driven faster than its normal speed for a given supply voltage.
The other motor type in common use (but not for models) is the induction motor (aka 'squirrel cage' motor). These are asynchronous, and rely on 'slip' between the rotating magnetic field and the armature. This means that the rotor always runs slower than the frequency and number of poles would suggest. Slip generates a current in the armature, which generates magnetic flux that tries to maintain synchronous speed, but cannot. Without slip, there is no armature current and zero torque. A four pole induction motor at 50Hz will typically run at ~1425 RPM at full load (5% slip). There are many different types, including shaded-pole (used for fans and other very low power devices), capacitor start and 'PSC' (permanent split capacitor), 'split-phase' using a resistive start winding and centrifugal switch, the rather obscure 'repulsion' motor (several versions exist), and of course 3-phase induction motors. The latter are the work-horses of industry, and are one of the most common machines ever built. While they are interesting (at least I think so), they are not discussed further here because they are irrelevant to this topic.
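The slip figure quoted above is trivial to verify:

```python
def slip_percent(sync_rpm, actual_rpm):
    """Slip expressed as a percentage of synchronous speed."""
    return 100.0 * (sync_rpm - actual_rpm) / sync_rpm

# Four-pole, 50Hz: 1,500 RPM synchronous, ~1,425 RPM at full load.
print(slip_percent(1500, 1425))   # 5.0 (percent)
```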
For anyone interested, there's more information about motors in the article Clock Motors & How They Work. The article concentrates on motors used in clocks, but you may find it interesting as similar principles apply and there's a lot of detailed explanations.

In most cases where the output is to a wheel or other land based propulsion system, the motor will require a gearbox or some other form of speed reduction. There are few small motors that run with high torque at low speeds, so the reduction gearing increases torque and reduces speed to something more sensible. Running a 100mm diameter wheel at a leisurely 10,000 RPM would result in a speed of over 52m/s at the periphery - equivalent to 188km/h. This is clearly much too fast for most applications, so reduction gearing is essential. The same applies to propellers for boats and even many planes. The only time extreme speed is used is with small propellers as commonly used on miniature quad-copters.

Most applications require lower speeds, and the tip of the propeller/ rotor blade should not exceed the speed of sound (about 343m/s). A 30mm diameter propeller can spin at over 200k RPM before it even comes close to the speed of sound, but if increased to 300mm, the maximum is around 20k RPM (still well within the capabilities of many motors). Remember that any speed reduction system (whether gears, chains or belts) incurs some loss of power because no mechanical system can be 100% efficient. Frictional losses dominate in all cases, and can be surprisingly high. This means that the motor always needs more power than you expect at the output.
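Both the wheel and propeller examples above can be checked with a couple of lines of Python:

```python
import math

def tip_speed(diameter_m, rpm):
    """Peripheral (tip) speed in m/s for a given diameter and RPM."""
    return math.pi * diameter_m * rpm / 60.0

def rpm_at_tip_speed(diameter_m, speed_mps=343.0):
    """RPM at which the tip reaches 'speed_mps' (speed of sound default)."""
    return speed_mps * 60.0 / (math.pi * diameter_m)

print(tip_speed(0.1, 10000))      # ~52.4 m/s for a 100mm wheel at 10k RPM
print(rpm_at_tip_speed(0.03))     # ~218,000 RPM for a 30mm propeller
print(rpm_at_tip_speed(0.3))      # ~21,800 RPM for a 300mm propeller
```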
Something that seems to cause some confusion is the claimed number of turns for 'performance' motors. A 23T motor has 23 turns of wire for each pole, so draws a lower current (and spins slower) than an otherwise equivalent 13T motor. Unfortunately, it seems to be very hard to get any definitive information on this, but the occasional forum post may have some factual details. There is very little detail available, other than to point out that some motors are designed specifically for racing, and many of those are not intended to be reversible - they are specifically designed for maximum performance in the direction indicated on the motor itself. They will run in reverse, but may draw excessive current and will perform badly. The range of 'turn values' is quite broad, ranging from less than 4 turns (BLDC motors only it appears) up to 80 turns or more.

For those who like to experiment for the least possible expenditure, old battery drills are a great source for motors with (usually) a double-reduction planetary gearbox. Although many have a torque adjustment, this needs to be defeated for the motor/ gearbox to be useful. I've used these units for a few tasks in my workshop, with one used to provide motorised operation of the X-axis of my small milling machine. Another is used for a coil winder. They generally have lots of torque, and are easily adapted to the speed controller described further below, or the unit shown in Project 126. The project is just a speed controller - it doesn't accept a servo input and has no speed regulation (its original purpose was for LED dimming for lighting applications).

One other type of motor deserves a mention, although they are unlikely to be used in most models. Pancake motors get their name from the fact that they are flat discs, rather than cylinders. Many use an ironless rotor, so they have very low inductance and zero 'cogging' (the tendency of the iron poles to align with the magnets). The ironless rotor and method of construction mean that they can have a very fast response time, and the low inductance means that brushed types suffer minimal commutator arcing. These characteristics are also shared by other ironless rotor motors. Pancake motors can be brushed or brushless, with the latter needing a similar controller to other BLDC motors. Because of the relatively large diameter rotor, pancake motors can offer improved torque compared to conventional designs. Some are made using printed circuit boards, with the windings created as PCB tracks. It's common for this type of pancake motor to use the PCB as the commutator as well as the windings, resulting in a very compact design that's (theoretically) relatively cheap to make.

A servo is an electromechanical feedback system. As already noted in the intro, there are many different types, but here we'll look at basic position control systems. The input pulse is translated into a voltage internal to the controller IC, be it analogue or digital. This voltage is compared against a feedback voltage derived from a rotary encoder - most commonly a potentiometer for hobby servos. The idea is shown below, as a purely analogue process. While it's shown with a dual supply (±12V), this is for ease of understanding. Dual supplies are rarely used for hobby motor drive systems, so polarity reversal is achieved by using an H-bridge as shown in Figure 12.
The drawing shows the essential elements. The error amplifier detects that there is a difference between the voltages from the control (VR1) and feedback (VR2) pots. Any difference is amplified and applied to the motor drive circuit. Should the output of the amplifier be positive, the motor will spin in one direction, and it will spin in reverse if the polarity is negative. This allows the motor to be driven at a variable speed in either direction. Note that the drawing shows only the most rudimentary loop stabilisation network, namely C1. These networks can become quite complex, but are always necessary to ensure that the phase shift through the mechanical linkages does not result in an unstable system. The gain also needs to be low enough to ensure that the total electromechanical system remains stable, but high enough to ensure there is a minimal dead-band (a range where the motor has insufficient drive voltage to function).

Figure 10 - Conceptual Diagram Of A Servo
It should be apparent from the above that the error amplifier is simply a differential amplifier. When both inputs (from the 'Control' and 'Feedback' pots) are at the same voltage, the output must be zero, so the motor doesn't move. If the control pot is moved, the motor is driven in the appropriate direction so as to cause the 'Feedback' pot to produce exactly the same voltage as provided by the 'Control' pot. When the two are equal, the motor stops again. The gain (set by R2 and R4) can be increased to reduce the dead-band, but if set too high the servo will oscillate (called 'hunting'). There is a complex set of electrical and mechanical time constants that can make it very difficult to stabilise a high gain servo. This is made worse when it's controlling an external mechanism, because that will also affect the mechanical time constants and may make a stable system unstable (or vice versa).
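The error-amplifier behaviour can be modelled crudely in Python. The gain, dead band, supply voltage and 'motor constant' below are arbitrary illustration values, not taken from any real servo:

```python
def error_amp(control_v, feedback_v, gain=20.0, dead_band_v=0.02, v_supply=12.0):
    """Differential error amplifier: the amplified difference between
    the control and feedback pots, clipped to the supply rails.
    Inside the dead band the motor receives no drive."""
    error = control_v - feedback_v
    if abs(error) <= dead_band_v:
        return 0.0
    return max(-v_supply, min(v_supply, gain * error))

# Crude loop simulation: the motor/gearbox moves the feedback pot in
# proportion to the drive voltage until the error falls inside the
# dead band, at which point the system comes to rest.
feedback = 0.0                       # feedback pot starts at 0V
for _ in range(1000):
    feedback += 0.001 * error_amp(5.0, feedback)   # control pot at 5V
```

After the loop, `feedback` sits within the dead band of the 5V control voltage, mirroring the description above: the servo drives until the two pot voltages match, then stops.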
Mechanical systems have a direct analogue in electronics. Friction is the mechanical equivalent of resistance, 'springiness' (compliance) translates to capacitance, and mass is the equivalent of inductance. So-called 'stiction' (the tendency of some mechanical parts to need extra force to get them moving) is a form of hysteresis. A gearbox can be represented as a transformer, although this 'conversion' is rarely needed. Unfortunately, these mechanical equivalents are often next to impossible to calculate (except for the 'transformation ratio' of a gearbox), and this is made worse by the fact that they are not constant. Most servo controlled load-bearing actuators perform very differently on the workbench from how they behave in normal use. This (and the fact that 'normal use' may cover a very wide range), means that even if you do work out the electrical analogues of the mechanical variables, the data obtained are likely to be useless. Commercial servos get around these constraints by making the dead-band large enough to ensure it will not cause instability in most systems.

The general scheme of a servo doesn't care if the processing is analogue or digital, as long as it provides the same end result. The error amplifier is the heart of the system - it determines if the 'control' voltage is higher, lower or the same as the 'feedback' voltage. If the control voltage is different from the reference, the motor turns in the required direction and the gearbox drives VR2 until the voltages are the same. Each change of input (control) voltage will cause the error amp to react, and drive the motor in the required direction to restore equilibrium so the system is at rest, but with the output at a position mirroring that of the 'control' pot. Should the output shaft be forced into a different position, the servo treats that in the same way - the motor will be driven until the output position is where it should be (within the ability of the motor to overcome the load).

All of the hard work has already been done with a commercial servo or off-the-shelf servo controller IC, so the user only has to provide the control signal. Now it should be easy enough to visualise a bit of additional circuitry that converts the pulse width into a voltage, and that is used instead of the 'control' pot shown. However, the control pot is still used - it's in the transmitter, and is one of the controls provided to the operator. When the control pot is moved, the voltage is encoded, transmitted, received, decoded, and used to control the servo.

There are two main types of servos available - analogue and digital. There is no difference as to how the servo is controlled by the user, and the main difference is the way the servo motor is driven by the internal controller circuitry. Digital and analogue servos both have (usually, but not always) identical housings for a given size, and use essentially the same type of motor and gearbox. Most are interchangeable with each other (i.e. digital and analogue). Digital servos can be more responsive than their analogue counterparts, because processing power (necessary for complex feedback loop stability calculations) is now so cheap.

The difference is how the motor is controlled by the internal electronics. The motor in an analogue servo receives a signal from the servo controller at a nominal 25-50 times a second, based on the repetition rate, and this is the default refresh speed of the servo, which determines the positional accuracy and stability under load. Digital servos can achieve much higher position refresh rates depending on the code in the controller itself. By monitoring and updating the motor position more often, a digital servo can deliver full torque from the start of movement and can increase the holding power and accuracy of the servo at the selected position.

The rapid refresh rate and high processing power may also allow a digital servo to have a smaller dead-band. The response of the servo is improved, and along with the increased holding power and the rapid maximum torque delivery, digital servos can accurately set and hold the actuator position better than their analogue counterparts. This isn't to imply that the same can't be achieved with an analogue servo, but it requires more circuitry and will be more expensive to make.

Some digital servos can be programmed. This may include direction of rotation, centre and end points, fail-safe options, speed and dead-band adjustment. Programming is not always required though, as even most programmable digital servos operate like 'normal' analogue servos out of the box. Digital servos are usually more expensive than analogue types, and may require (sometimes significantly) more power from your batteries.

Another term you will see is 'servo-motor'. While this often refers to the motor inside a hobby servo, the term actually means something else in industrial systems. A servo motor is a motor with a feedback system to indicate exactly how many revolutions (or part thereof) it's done, and the motor itself may be AC or DC, brushed or brushless, and may use an 'ironless' rotor for very fast response time. The feedback system is commonly a rotary encoder which can report not only the number of revolutions done, but also the speed. Positional accuracy can be extraordinarily high, and they are common in large (and expensive) laser cutters and other industrial systems. These fall way outside the scope of 'hobby servos'.

There are several different wire colours used with hobby servos, and you should consult the maker's information to ensure that you don't apply reverse polarity. This will destroy the internal electronics, usually instantly. The colours vary from one maker to another, but it's become standard to have the red wire (positive) as the centre pin. Reversing the control and earth/ ground wires usually causes no harm, but you should always check. The maximum rated voltage must never be exceeded or damage is almost guaranteed. Most are designed for operation between 4.5V and 6V DC. The current required depends on the size of the servo and the load applied. If the load is greater than the design maximum, you can strip gears or burn out the motor and/or electronics.
If you use servos, one thing that you really need is a servo controller - a device that you can use to test servos, without having to hook them up to receivers and adjust the servo position (or motor speed) from the transmitter. There are actually quite a few designs published on the Web, but very few are particularly accurate, some will be close to unusable, and none that I've seen provide separate controls for the repetition rate (or dwell time) and the servo control pulse width.

Most servos are designed for ±90° rotation, giving a nominal 180° of angular movement. However, it's quite common that the actual rotation is less than this. Most standard servos can also be modified to allow full 360° rotation, in forward or reverse. See further down this page to see what needs to be done to achieve this. You may need to modify the servo circuitry to get a usable speed range when a standard servo is adapted for continuous rotation.

To be able to test a servo properly, we will ideally have control over the repetition rate (between pulses) and the pulse width. The circuit shown below does this using two pots, one to adjust repetition rate (VR1 is a preset) and the other to set the pulse width (VR2). The repetition rate should be set for 20ms (50Hz), and the pulse width is 1.5ms with VR2 centred, corresponding to the centre position of a servo. Alternative timings can be used for the repetition rate, but 50Hz is close to ideal. VR1 can be a front panel pot if preferred. D3 is a Schottky diode to protect the circuit against reverse polarity. D2 is a 'power on' LED of any colour you like.

Figure 11 - Circuit Diagram Of Servo Tester
U1 controls the repetition rate, and is a 'minimum component count' astable oscillator. Timings assume exact values and adjustment will be needed using VR1. The formula below assumes that the high output level is 5V, but that's usually not the case in practice. However, the repetition rate isn't especially critical and most servos will be happy enough if the timing is a little outside the optimum value. The frequency is (ideally) determined by R1 + VR1 and C1 (the resistor and trimpot are in series) ...
f = 0.72 / ( R × C )    (where f is frequency in Hz, R is resistance in ohms, and C is capacitance in farads)
The trimpot gives a fairly wide frequency range so setting 50Hz will be easy to achieve (VR1 should ideally be a multiturn trimpot). The centre frequency is set for 50Hz (20ms). The output of U1 is fed to the differentiator circuit (C4, D1 and R2). The circuit ensures that only a very narrow pulse is used to trigger the pulse generator (about 60µs wide at the 555's trigger level of 1.67V). D1 is a BAT43 or similar low current Schottky diode, and is used to ensure that the pulses from U1 don't exceed ~5.6V at pin 2 of the 555 timer. The switch is recommended to allow selection of a nominal 20ms or 40ms so you can use and/or test both rates.
U2 is a monostable, and controls the pulse width. The theoretical timings based on the values shown are 902µs minimum and 2.002ms maximum, with the centre position giving 1.45ms. This is determined by R3 + VR2 and C5 + C6 (again in series and parallel respectively), using the formula for a 555 monostable ...

t = 1.1 × R × C    (where t is time in seconds, R is resistance in ohms, and C is capacitance in farads)
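Both 555 formulas are easy to check numerically. The component values used below (R3 of 8.2k with C5 + C6 totalling 100nF) are inferred from the quoted timings, and should be treated as assumptions rather than the actual schematic values:

```python
def astable_freq(r_ohms, c_farads):
    """'Minimum component' 555 astable: f = 0.72 / (R * C)."""
    return 0.72 / (r_ohms * c_farads)

def monostable_pulse(r_ohms, c_farads):
    """555 monostable pulse width: t = 1.1 * R * C."""
    return 1.1 * r_ohms * c_farads

print(monostable_pulse(8200, 100e-9))           # 902µs (R3 alone, VR2 at minimum)
print(monostable_pulse(8200 + 10000, 100e-9))   # 2.002ms (R3 + full 10k VR2)
```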
Of course, this again assumes that the cap values are exact, and that VR2 really is 10k. Mostly, they will be a little different and it may be necessary to adjust the value of C5 and/ or R3 to get the range required. Trimpots can be used, but it's unlikely that anyone needs the level of accuracy that can be achieved by making very fine adjustments. Servos are not really precision devices, and in most cases they rely on user feedback rather than absolute precision. If preferred, the timing capacitance (C5 + C6) can be made up of 82nF and 10nF in parallel, which will give an almost perfect 1-2ms range.
+ +Note that it is certainly possible to do everything with a single 555 timer, but the accuracy is nowhere near as good and it's difficult to adjust the repetition rate independently of the pulse width. Even that is possible, but some interaction is inevitable. For the cost of a second 555 timer and a few cheap parts, the circuit shown has greater flexibility and is much more easily calibrated. While simplicity is always a virtue, that's not true if the circuit ends up with needless interactions that make it harder to use. The monostable circuit is very predictable, but free-running (astable) oscillators are not as good.
+ +The circuit is easily wired on Veroboard, but if there is sufficient demand I will make a PCB available. Nothing in the circuit is particularly critical unless you are aiming for very accurate pulse widths and/ or repetition rate. As already noted, the latter is generally not at all critical, provided it falls within the general range of 15ms up to a maximum of perhaps 60ms. If it's too long, the servo may not be able to maintain the set position and/ or it will chatter. If too short it may cause problems with the servo circuits.
+ +The circuit should be powered from a 5V source with enough current to drive the tester and the servo(s) you need to test. The current will usually be less than 1A, but some may need more. The servo itself connects with a 3-pin connector, providing GND, +5V and Control. Most servos have the +Ve pin in the centre so that a reverse connection doesn't cause the release of the 'magic smoke'. However, you must check to make certain. While the majority of servo manufacturers have standardised the connections, some older types may be different. Wire colour codes are not standardised, with the possible exception of using red for the positive (but even that is not 100% reliable).
If you don't want to build your own (why not?), you can buy 'servo testers' for a rather paltry sum, but naturally you don't actually learn very much by using an off-the-shelf product. Yes, it's convenient and painless, but the greatest advantage of making your own tester is that you can see exactly what it does, and tweak things to suit your needs. When you buy one, you get a complete unit, no schematics, often erased IC type numbers, and you have no idea what it's doing or how to change anything.
ESCs (electronic speed controls) are usually not servos. They accept the same PWM signal, but most have no feedback mechanism, so the motor slows when loaded. To be classified as a 'true' servo system, some form of shaft RPM monitoring is needed, with correction applied internally to ensure that the speed remains at the preset value. Given the processing power of some ESCs it would be easy to add, but in the case of models it may be preferable that additional loading does slow the motor, and it will be corrected as needed by the operator (another example of the 'human feedback' system at work).
When testing ESCs, you'll likely find that the ESC provides the 5V power, and an external supply isn't necessary. This depends on the ESC of course, as not all are exactly the same. 5V power is provided by an on-board 'BEC' (battery eliminator circuit), but you must check the manual for your ESC to find out if it has a BEC or not. Where provided, the BEC has a regulated 5V output.
ESCs come in a wide variety of forms. Those used for brushed DC motors are completely different from those used for brushless DC motors, which are actually three-phase AC motors. The ESC generates the 3-phase output at the required speed to run the motor. The basic operation is discussed above, in the 'Motors' section.
ESCs designed for brushed DC motors are (usually) far less complex, but some of those incorporate additional functions, such as dynamic braking. The motor is shorted out (under user control) which makes it stop very quickly. Braking can be via PWM (so it's controlled) or instantaneous upon reaching the 'stop' condition (typically a 1.5ms pulse from the controller).
Some ESCs allow forward and reverse (a 1.5ms pulse stops the motor), while others do not. Again, you need to verify the facilities provided and ensure that the system is used in accordance with the instructions. Some ESCs allow programming (albeit rudimentary in some cases), and it may be far easier to do this using a tester than having to mess around with transmitters and receivers. While the tester described can do basic 1-2ms pulse widths, there may be other functions in some transmitters, so the tester may not be able to do everything.
However, based on the research I did for this article, it's unlikely that there will be anything that you can't test, with the exception of a fully digital system that uses a different control protocol. I've also used the tester with a Mystery MY30A ESC and an 'outrunner' (outer rotor) motor, and it behaves perfectly in all respects.
Most standard 180° servos can be modified to obtain continuous rotation. It's usually better to get one that's designed for the purpose, but it may not always be practical. There are two changes needed, with the first one being the position pot. This must be disconnected from the output shaft, but you usually won't be able to re-use it to set the 'off' position. It's often suggested that a pair of equal value resistors be used, but it's better to use a trimpot, as that allows you to calibrate the servo for no movement with a 1.5ms pulse from the controller.
The second change is to remove the stop-pin, which limits the movement to 180°. Depending on the servo, this may be easy or difficult, but either way a rotary tool can be used to cut off the pin which is located on the main output gear. The gears should be removed from the gearbox if possible, or you'll end up with plastic or metal filings throughout the gear train. These will cause undue wear, and may even cause the gearbox to seize. If the pin is plastic, it may be possible to cut it off using side-cutters, but make sure that it cannot catch on the gearbox internal stop lugs or foul any of the remainder of the gear train.
If you can't figure out what to do from the descriptions and photos shown above, there are several on-line guides that show all the parts and mods needed for the conversion. Bear in mind that most servo motors are not rated for continuous operation at full power, so you need to use something else if you need a fairly powerful continuous drive motor. You'll usually be a lot better off using a dedicated motor and gearbox assembly along with a matching ESC.
In general there's little or no need to build a servo, because they can be bought fairly cheaply and have everything you need. Sometimes, you may need a simple servo that is purely voltage controlled, along the lines of that shown in Figure 10. You could just use a DC Servo Motor Controller/Driver IC of course, and although that removes most of the complexity it may also be hard to find. The other reason to build your own is for the sake of learning and experimenting. I designed a circuit many, many years ago that was used as an educational tool, and a similar approach ensures the minimum of complexity. This is especially true for gearing, which is hard to do without a gear cutter or a supply of mating gears, pinions, shafts and end plates.
The easy solution is to use a length of threaded rod, directly attached to the motor shaft. A nut is driven by the threaded rod, and this also drives a slide pot which is used for feedback. This gives a linear servo (aka linear actuator), and although it won't have much power with a small motor, it can use as big a motor as you're game to install. A large motor also means high current and much larger power transistors of course.
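As a rough sanity check on the threaded-rod approach, the nut advances one thread pitch per motor revolution, so the actuator's travel speed is easy to estimate. A minimal sketch (the motor speed is an assumed figure; 0.5mm is the standard M3 pitch):

```python
# Travel speed of the threaded-rod linear actuator: one motor revolution
# advances the nut by one thread pitch. Values are illustrative assumptions.

def travel_speed_mm_s(motor_rpm: float, pitch_mm: float) -> float:
    return motor_rpm / 60.0 * pitch_mm

# 3mm (M3) threaded rod has a 0.5mm pitch; motor speed is assumed.
print(travel_speed_mm_s(3000, 0.5))  # 25.0 mm/s
```

This also shows why a direct-drive threaded rod is a convenient 'gearbox' - even a fast motor produces a slow, controllable linear output.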
If you use a second motor driven from the first, you can use it as a tacho-generator, so you can accurately set motor speed, rather than actuator position. These are available commercially, but are often very expensive. The output of the tacho-generator is connected in place of the feedback pot. It will need to be filtered to ensure there are no high voltage spikes that will cause erratic speed control.
The conceptual circuit shown in Figure 10 is actually usable, but the requirement for a dual supply makes it awkward to use in a battery powered system. To make it workable with a single supply requires an 'H-bridge' circuit, using four transistors to control the motor current. The basic circuit is still fairly simple, but it's important to set the gain appropriately. If it's too high, the circuit will 'hunt' (oscillate around the set point), and if too low there will be an excessive dead-band. This is no different from commercial RC servo systems which are limited by the same constraints. The capacitors marked with '*' (C1 and C2) are optional - they may be needed with some systems as they depend on the characteristics of the motor and feedback system. The values shown are intended as a starting point.
Figure 12 - DIY DC Servo System
The circuit shown above is a fully built and tested servo system, and it works exactly as expected. It uses a single supply, so it's usable in most models or in conjunction with other systems that use hobby servos or ESCs. It is a great way to demonstrate how a servo works, either for yourself or others who are interested. The one I built is designed specifically for teaching purposes, in this particular case to demonstrate to my grandsons who are showing great interest in models and how things work. Note that if you use higher value pots, the value of R5 can be increased to reduce zener current.
This circuit does not use PWM for control, but uses a DC level instead. This is provided by the 'Control' pot, and the 'Feedback' pot is driven from the output of the gearing system used. 1k pots are shown, but if you happen to have higher values they can be used too, but you'll need a unity gain buffer between the pots and the error amplifier or pot loading will make the system non-linear. Note the opamp specified - LM358. This opamp allows the input voltage to include 0V (ground), and this is a requirement for the arrangement shown. Most opamps do not allow the inputs to come within 1.5-2V of the supply rails, so they cannot be used in a (close to) 0V based circuit such as that shown.
The motor is supplied with variable voltage DC, so the output transistors will need a heatsink. It's more efficient to use PWM drive, but that's also a lot more complex to set up and it needs more parts. PWM has an advantage though - it can usually overcome friction more easily, but for a demonstration circuit the added complexity isn't worth the effort. The simple circuit shown above can also make a fairly effective motor speed control, with a second motor used as a tacho-generator. The output voltage from the tacho-generator is used in place of the 'Feedback' pot with appropriate rectification, filtering and attenuation as required to ensure that the voltage is smooth and within the range of the servo amplifier (1.2 - 5V). Motor speed is changed by varying the 'Control' pot and the selected speed will be held reasonably constant as the motor load changes.
Note that when used as a servo or speed control, you will experience hunting if the gain is too high, and/or if the feedback caps (C1 and C2) are too small (the arrangement shown below didn't require the caps). Hunting means that the servo will 'hunt' for the correct setting, but will overshoot and undershoot continuously (it's an indication that the feedback loop is unstable). Because the feedback loop consists of electronic and mechanical time constants, it can be difficult to get a small dead-band (where the servo circuit has no control) as well as no hunting. An ideal system will show a tiny (or no) overshoot, and will settle at the set position with no sign of instability. This can be surprisingly hard to achieve in practice.
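The hunting behaviour described above can be demonstrated with a toy numerical model. This is a sketch only - the motor and load are reduced to a mass with viscous friction driven by a proportional error amplifier, and none of the values are taken from the actual circuit:

```python
# Toy simulation of proportional-only servo behaviour: the motor/load is
# modelled as a unit mass with viscous friction, driven by gain * error.
# All values are illustrative, not taken from the circuit.

def simulate(gain, friction=2.0, target=1.0, dt=0.001, steps=5000):
    pos, vel = 0.0, 0.0
    peak = 0.0
    for _ in range(steps):
        drive = gain * (target - pos)          # proportional error amplifier
        vel += (drive - friction * vel) * dt   # motor + load dynamics
        pos += vel * dt
        peak = max(peak, pos)
    return pos, peak

# Low gain: settles without overshoot, but slowly (a wide dead-band in practice).
# High gain: overshoots and rings around the set point ('hunting').
_, peak_low = simulate(gain=1.0)
_, peak_high = simulate(gain=100.0)
print(peak_low, peak_high)
```

Raising the gain makes the initial response faster but trades it for overshoot and ringing, exactly the compromise described for the real circuit.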
Figure 13 - Completed Demonstration DIY DC Servo System
The above photo shows the completed demo system. There is an extra dual opamp used as buffers for the pots, because the only ones I had to hand were 100k and would suffer poor linearity. Other than that, the circuit is identical to that shown above. Note the heatsinks on the output transistors, which get way too hot without the extra surface area to dissipate the heat. The heatsinks are small, but adequate. The feedback pot is driven by a length of 3mm threaded rod, with a rivet-nut attached to the pot actuator which was bent at a right-angle to make that possible. The extra caps (C1 and C2) weren't needed, and the gain is as close to perfect as one can hope for. The supply voltage is 12V DC. Resolution is determined by the motor - it's a 3 pole type, so the (theoretical) resolution is limited to 1/3 turn (120°) or 0.16mm, but this is not achieved in practice because the gain has to be kept low enough to ensure stability. This results in a dead-band of about ±0.5mm of control slide pot travel.
A servo system can also be made using a PIC, and the extensive processing available allows you to customise the way it works to ensure the best possible performance. However, this approach requires both analogue skills and good programming ability to get a workable solution. The PIC has the advantage of being re-programmable to perform differently, but of course you won't get a proper feel for the requirements of gain and stability unless you have already played with a purely analogue version.
Another option is to use the 'guts' (basically just the servo control board) out of a small commercial servo, and equip the outputs with (semiconductor) power switches capable of handling the voltage and current needed for a large motor and gearbox. Battery drills have a surprisingly powerful motor and a planetary gearbox that can handle a lot of power. While this is certainly a viable option, it may require quite a bit of development work to get it working properly, preferably without demolishing the feedback pot. In general, it would be wise to take this path with some care. There's a lot of scope for damage and destruction when you start playing with powerful motors equipped with high torque gearboxes.
However, if you want to explore this, you'll need to know how to change the appropriate parameters for the control IC. Whether this is possible or not depends on the availability of the datasheet for the exact IC being used. Without the essential data you'll be completely in the dark, as there's no way to know for certain what needs to be changed, by how much and in which direction.
High accuracy servo systems employ a number of discrete processes. The essential elements are proportional ('P'), integral ('I') and derivative ('D') with the system known as 'PID' [ 8 ]. In its most basic form, there is an amplifier to provide the 'proportional' part of the equation (linear gain), plus an active integrator and differentiator. While tempting, there are no plans to look into PID controllers at this stage, because they are far more complex than simple servos, and while essential for advanced production systems there's little advantage in hobby servo applications. In most hobby systems, the user is often the primary 'error amplifier'. As humans, we are able to apply the principles of PID naturally, and the electronic version (whether analogue or digital) is an attempt to replicate what we can do without even thinking about it. However, the electronic version can react much faster than we humans can, so PID control is common in industrial robots where both high speed and accuracy are paramount.
This isn't something new - proportional controllers have been with us since the 1700s (purely mechanical of course), and were well advanced by the time vacuum tubes (valves) were used for industrial processes. The formal (mathematically derived) version of a complete PID controller came about in 1922, originally for steering ships [ 9 ]. Fully electronic systems rely on the concept of negative feedback, first applied to telephone repeater amplifiers by Harold Black in the late 1920s.
While a full PID controller is capable of excellent results, tuning is necessary to account for system dynamics (in particular mechanical friction, inertia, momentum and/or resonance). These can be quite different even with identical machines if the process is even slightly altered between the machines themselves. Digital PID systems are now very common, with all functions able to be tuned to obtain the optimal values. Some digital systems offer 'self tuning' capabilities, where the controller learns the behaviour of the controlled system to arrive at settings that provide stable response and minimum settling time.
There's a lot of information on-line, with most of it from knowledgeable people in academia or commercial producers of PID controllers. Unlike the situation with hobby servos, the info you can get is based on maths and science rather than opinion or how model 'X' performs with servo 'Y'. While this might be helpful if you have the same model and servo, it doesn't provide any understanding of the system dynamics or any truly useful info if you have a different model and a different servo (or motor, or anything else).
Note that the derivative part of the circuit can be connected to the input differential amp's output or the feedback signal (with the polarity adjusted as necessary), depending on the particular system. Both methods are commonly shown in block diagrams and other literature, so have been shown here as 'Alternate Connections'. The existing connection (solid line) is removed if the alternate is used.
Figure 14 - PID Controller Concept
The drawing shows the essential parts of a PID controller, and is adapted from the schematic shown in an article published in 'Nuts and Volts' magazine in Jan 2005. The proportional gain block is the primary servo path, just like in any other servo amplifier. The integration circuit is responsible for correcting any accumulated error (it relies on the amount of error, its polarity and the time the error has been present). Finally, the derivative (differentiator) section monitors the rate of change of the error signal. By combining the three functions, it's possible to make the loop response faster than would otherwise be possible, but damped to ensure there is no overshoot or hunting. These depend on the load, and with simple (proportional only) servos it may be very difficult to set up a system that can cover a very wide range of output conditions. The PID controller has far greater flexibility than simply deciding to use a larger dead-band and is most likely to be found in industrial controllers, rather than hobby servos.
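For readers who want to experiment, the three paths just described map directly onto a few lines of code. This is a minimal discrete-time sketch of the general PID algorithm, not the Nuts and Volts circuit; the gains shown are placeholders that must be tuned for any real system:

```python
# Minimal discrete PID controller, mirroring the three paths described
# above (proportional, integral, derivative). Gains are illustrative.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt                  # accumulated error
        derivative = (error - self.prev_error) / self.dt  # rate of change
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

pid = PID(kp=0.8, ki=0.2, kd=0.05, dt=0.02)
drive = pid.update(setpoint=1.0, measured=0.0)
```

The derivative here acts on the error; as noted above it can instead act on the feedback signal, which avoids a large derivative 'kick' when the setpoint changes abruptly.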
Figure 15 - Output Damping, Under-Damped, Over-Damped & Critically Damped
Using the test load shown, the level of damping of the above 'concept' circuit is controlled by the derivative output, set for three different damping levels. The integral was used only for the green (critically damped) trace, showing the correction for an 'accumulated' or long-term error. The gain of the proportional amplifier was unchanged for the three traces, and the derivative and integral signals were used to control the rate of change (damping) and final (long term) value respectively. The target value for the output was one unit, and this is provided by the integral amplifier given enough time (about 5.5 seconds for this example). Note that the capacitance is 100mF (100,000µF), and simulates inertia. The inductor creates a long-term error, which is small but easily measurable. The final error after six seconds is around -2%, but that will reduce with more time.
The 'critically damped' result could not have been achieved using only a proportional control signal - it requires all three processes. The signals are also interactive, so changing the integral or derivative affects damping, but this depends on the time constant of the integral signal, which is shorter than optimal for the example shown. Without the PID processing, the best you could hope for with a proportional-only controller would be closest to the 'under-damped' trace, with a little less overshoot but a significant long-term residual error. All traces are very much load dependent, so if the load changes, the processing must also change to suit. There's no reason why only one integrator or differentiator must be used - Bob Pease designed a dual derivative servo to balance a ball on a beam in 1995 (see What's All This Ball-On-Beam-Balancing Stuff Anyhow).
The integral in particular generally requires additional processing over and above what's shown in the drawing, especially if the servo system ever runs 'open loop' (where the output's rate of change is limited by motor speed for example). This extra processing hasn't been included in the above, as it's intended to show the basic principle only - it is not a complete schematic, and is not something I'd suggest you build. If you want more information on PID controllers it's worth looking it up on the Net. There's a great deal to be found, and it will be apparent fairly quickly that this is a serious process and is not for the faint-hearted. I'm not going to pursue this further at this stage, because it's not germane to hobby servos. However, it was too interesting to ignore, hence the drawings and this brief introduction to the topic.
Most ESCs and servos use a fairly simple averaging circuit, an example of which is shown in the next section. Although the typical averaging circuit has rather poor performance, it's generally 'good enough' for the purpose. It's quite easy to make a much better circuit that responds faster and has far less ripple in the output, but for most hobby applications there really isn't any need. An averaging circuit based on a 4-pole active filter can achieve vanishingly small output ripple, with a response time that's at least four times as fast as a more 'traditional' approach. However, it requires quite a few additional parts, and this makes it less attractive for cost-sensitive applications or where every gram of weight makes a difference. Ok, the extra weight is minimal, but it's still something that has to be considered - especially for aircraft.
One major advantage is very high linearity. This is possible with a simple integrator, but it actually makes overall performance worse because there will be far more ripple at the output. Ripple causes the motor current to 'surge' at the pulse repetition frequency (typically 50Hz), and while it will be of no consequence for a large motor with plenty of momentum, it may become an issue with very small motors. While the ripple can be reduced easily, this is at the expense of reaction time. It's easy to get the ripple under 10%, but when you do, the integrator can take up to 1 second to reach 90% of the target voltage. This is going to be alright for some models, but far too slow for others. This is especially true for high speed models, where one second may be the difference between crashing ... or not.
With a repetition rate of 25Hz (40ms between pulses) and 50Hz (20ms), the average voltage is as shown in the table, assuming a 5V pulse. In reality, it may be more or less, depending on the accuracy of the receiver and its power supply voltage. It's easily calculated using the following formula ...
VAverage = pulse width / dwell time × voltage, so ...
VAverage = 1.5ms / 20ms × 5V = 375mV (for example)
It goes without saying that the average can easily be calculated for any pulse width, repetition rate and input voltage (but I appear to have said it anyway). The important thing to remember is that the average value will change if the pulse repetition rate or amplitude change during use. The transmitter and/or receiver may change the pulse spacing (depending on the design), and the receiver may not be capable of ensuring that the peak voltage remains steady. Either change will affect the way the servo or speed control reacts to your inputs.
Pulse Width (5V Amplitude)    VAverage (50Hz / 20ms)    VAverage (25Hz / 40ms)
1.0 ms                        250 mV                    125 mV
1.5 ms                        375 mV                    187.5 mV
2.0 ms                        500 mV                    250 mV
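The table values follow directly from the formula, and can be checked with a few lines of code (a sketch of the calculation only):

```python
# Average value of the servo PWM signal, per the formula above:
# V_avg = pulse_width / period * amplitude.

def v_average(pulse_ms: float, period_ms: float, volts: float = 5.0) -> float:
    return pulse_ms / period_ms * volts

# Reproduce the table (50Hz -> 20ms period, 25Hz -> 40ms period):
for width in (1.0, 1.5, 2.0):
    print(width, v_average(width, 20.0), v_average(width, 40.0))
```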
The tricky part is working out the details for the integrator, as this determines the reaction time performance of the servo or speed control. The integrator circuit used in the ESC (next section) has a typical reaction time of around 500ms to get to within 10% of the target voltage. Having tested it fairly extensively (although not in a model), this appears reasonable, and is typically faster than most medium sized motors can accelerate or decelerate.
The ripple at the integrator's output is about 4% of the average value. This isn't wonderful, but it is generally acceptable. The ripple can be reduced with a simple integrator, but that will extend reaction time. If an infinite time is allowed for integration, the ripple will be infinitesimally small, and the converse is equally true. It's called compromise, and no circuit is ever free of it. A simpler circuit requires more compromises, but with endless resources the performance can approach the ideal.
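The compromise can be illustrated with a first-order estimate for a single-pole RC averager. The figures below are rough approximations for illustration only (the attenuation formula considers just the 50Hz fundamental), but they show why less ripple always means a slower response:

```python
# First-order illustration of the ripple vs reaction-time compromise for a
# single-pole RC averager. The 50Hz fundamental is attenuated by roughly
# 1/(2*pi*f*tau), while the step response needs ~2.3*tau to reach 90%.
# This is an estimate for illustration, not an exact ripple calculation.

import math

def rc_tradeoff(tau_s: float, f_hz: float = 50.0):
    ripple_atten = 1.0 / (2.0 * math.pi * f_hz * tau_s)  # fundamental attenuation
    t_90 = 2.3 * tau_s                                    # time to 90% of target
    return ripple_atten, t_90

for tau in (0.05, 0.2, 0.5):
    atten, t90 = rc_tradeoff(tau)
    print(f"tau={tau}s  ripple x{atten:.4f}  90% in {t90:.2f}s")
```

Lengthening the time constant cuts the ripple and slows the reaction in direct proportion, which is the same trade-off noted earlier (under 10% ripple costing up to a second to reach 90%).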
If you need a faster reaction time and/or less output ripple, you need a more complex circuit. I don't propose to describe a filter based integrator at this time, but one I experimented with has an output ripple of 1%, and is within 5% of the target voltage in under 80ms. No simple circuit can come close to this level of performance, but you'd only ever need it for extremely responsive servos or motors. A digital servo or ESC (using a microcontroller) can almost certainly offer a similar level of performance (perhaps even slightly better), but only if the programmer has written the necessary code and implemented it properly.
Making your own ESC for 'brushless DC' motors is possible, but isn't something that will be covered here. Brushed DC motor control is a great deal simpler, needing only a few fairly cheap parts. Mind you, you can buy them fairly cheaply too, so the only reason to DIY is to learn how to do so, or to make one that's a lot more powerful than those you can buy easily. The hardest part of the process is working with the narrow pulse width and the long delay between pulses. Whether it will work as expected also relies on your transmitter and receiver. Unless they provide consistent spacing between each pulse, decoding the signal becomes a great deal harder. The design shown below is (loosely) based on the circuit linked in the reference section [ 7 ], but is simplified for ease of description.
There are several considerations in the design of an ESC, and making a 'perfect' unit is generally not necessary. The variation in pulse width is small, from 1ms (idle/stopped) to 2ms (full speed), but with a 20ms gap between pulses. While it's not overly difficult to obtain a reasonable voltage change despite the very low duty-cycle, minimising ripple either makes the system unacceptably slow, or increases complexity (see previous section). If the duty-cycle from your transmitter is not consistent, then the task is a great deal harder. An inconsistent duty-cycle will cause the motor speed to increase and decrease (slightly), with the motor speeding up when the gap between pulses shortens (increasing the duty-cycle) and slowing down when it lengthens. This may not be a problem though.
Figure 16 - Brushed Motor Electronic Speed Control
The circuit isn't complex (despite initial appearances), and uses 1½ LM393 dual comparators. The circuit can be used with a supply voltage of up to 24V DC, and the motor current determines the type of output MOSFET and diode needed. If the circuit is used at voltages below 10V, a MOSFET designed for logic levels is recommended. Don't use a voltage of less than 7.5V (full load) or the regulator will drop out (lose regulation) and performance will be erratic. If preferred you can use an LDO (low dropout) regulator in place of the 78M05 (medium power version of the 7805), but make sure you follow the bypassing recommendations, as LDOs are prone to oscillation. The inclusion of the regulator also means you have an on-board BEC (battery eliminator circuit).
+ +D3 will ideally be rated for the full motor current, and must be a fast or ultra-fast type. For small motors you can use the UF4001 or similar, or MUR1510 / HFA15TB60PBF ultra-fast diode for motors up to 10A or so. Use an IRF540N or similar MOSFET (or consider the IRF1405 - 5.3mΩ on-resistance and 169A peak). Note that very high current MOSFETs such as the IRF1405 cannot sustain the claimed current permanently, as the leads would melt! There is a vast range of suitable devices available for less than AU$2.00 (although you may need to buy in quantity to get a good price). In general, use a MOSFET (or paralleled MOSFETs) sufficient for not less than five times the expected current. For example, for a 10A motor, use at least a 50A MOSFET. For very high current motors, you must use paralleled MOSFETs, because the leads aren't thick enough to carry more than ~50A continuously. Likewise, even a fairly wide PCB track (250 mils/ 6.35 mm) can't carry more than 10A without substantial reinforcement, so hard wiring will almost certainly be necessary.
Be warned that there is no current limit, so the MOSFET will get very hot if a high power motor stalls. Although the IRF540N is rated for 33A, it will die a horrible death if you try to push it beyond a few amps without a heatsink (same goes for diode D3). A heatsink for the MOSFET and diode is highly recommended for anything more than ~3A. There are many options for the MOSFET and diode, so use ones you can get easily that meet your needs. There is also no under-voltage cutout, so care is needed to ensure that the battery is not discharged too far. This is especially important with Li-Po batteries, so consider adding the necessary circuitry to detect your desired minimum allowable battery voltage (3V per cell is recommended). Unless you are running multiple battery packs, a single low-voltage cutoff can be used for the complete system.
Setup calibration is needed. With a 1ms pulse train from a servo tester or receiver, carefully adjust VR1 until the motor is stopped, and there is no motor noise (noise indicates that there is some PWM signal getting through). Optionally, you may choose to have a small amount of the PWM signal present at idle - not enough to run the motor, but just sufficient to allow the motor to start with minimal additional input. As the pulse width is increased, the motor should start and run, with speed increasing as the pulse is made longer. At 2ms pulse width, the motor should be at or near its maximum speed. It may be necessary to adjust R8 (56k) to change the amplitude of the triangle wave (a larger resistance means a smaller peak amplitude and vice versa). Ideally, with a 2ms pulse from the receiver, the MOSFET should be fully conducting (not switching) providing maximum voltage to the motor.
With the values shown, the PWM frequency is a little under 920Hz, with a peak-to-peak amplitude of 517mV for the triangle waveform. The frequency can be reduced by increasing the value of C2, and amplitude increased by reducing the value of R8 (nominally 56k). Reducing R8 also reduces the frequency. For example, at 47k, frequency is 790Hz and triangle amplitude is 600mV p-p. Note that changing the setting of VR1 will also affect the frequency and amplitude of the triangle waveform. The change of control signal voltage with a pulse width from 1ms to 2ms is 800mV (1.2V at 1ms, 2V at 2ms). The motor will get 90% of the expected power within about 500ms.
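The comparator's PWM action can be modelled simply: the output drives the MOSFET while the control voltage is above the triangle wave, so the duty-cycle is set by where the control voltage sits within the triangle's span. The triangle's DC level below is an assumption chosen so the 1.2-2.0V control range just spans it; the text only specifies the 517mV peak-to-peak amplitude:

```python
# How the comparator turns the control voltage into PWM duty-cycle: the
# output is high while the control voltage exceeds the triangle wave.
# The triangle's DC level here is an assumption for illustration; the
# text gives only its 517mV p-p amplitude and the 1.2-2.0V control range.

def duty_cycle(v_control, tri_low, tri_high):
    if v_control <= tri_low:
        return 0.0        # motor off (1ms pulse)
    if v_control >= tri_high:
        return 1.0        # MOSFET fully on (2ms pulse)
    return (v_control - tri_low) / (tri_high - tri_low)

# Assumed triangle spanning 1.25V to 1.767V (517mV p-p):
print(duty_cycle(1.2, 1.25, 1.767))   # 0.0 -> stopped
print(duty_cycle(2.0, 1.25, 1.767))   # 1.0 -> full speed
```

This also shows why the triangle amplitude matters: too large and the control range can't reach 0% or 100% duty-cycle, too small and most of the control range is wasted at the end stops.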
The above circuit can be adapted to use LM358 (or similar) opamps, but there are quite a few changes needed to get reliable circuit operation. Unless a logic-level MOSFET is used, the opamp driving the gate must be run from the +12V supply. A 5V supply for the remaining two opamps does limit their performance, because the outputs cannot reach 5V - the maximum output voltage is about 3.5V in practice. I built an ESC using a pair of LM358 opamps (with the gate drive and input opamp (U1 A/B) run from the main +12V supply), and it works quite well. The pulse amplitude was limited to 5.1V with a zener diode and series resistor from the input opamp. You may find it necessary to use a trimpot in place of R8 to allow better control over the triangle waveform amplitude. If the amplitude is too high you can't get the full range (off to fully on), and if too low the speed range is limited.
This circuit is not designed for a reversible system with 1.5ms pulses to stop the motor. That requires a more complex circuit, and the sensitivity to pulse width variations is a great deal higher. This article is only intended to cover the basics of RC circuits, and a reversible motor system is more easily built using one of the commercial servo ICs (or you can 'gut' an old servo and use the PCB from that). Servos can also be modified for continuous (360°) rotation as described above. However, service life will be compromised if you expect to run them for long periods and/or at high loading.
Given the comparative complexity of reversible ESCs, along with the narrow pulse widths that the circuit has to work with, it's no surprise that many people find it's easier to use a PIC or other microcontroller, as most of the complexity becomes a software problem rather than hardware. If it doesn't work exactly the way you want, then it's (hopefully) a simple matter to change the program, without the need for major changes to the circuit itself. Provided the basics are in place and work properly, there's no need to redesign a PCB to accommodate the revised software. Of course, this depends on one's programming skills and the capabilities of the PIC itself.
+ +When it comes to brushless motor ESCs, the general approach is almost always to use dedicated ICs and a microcontroller. TI make a 3-phase driver (DRV8302) which is designed to drive output MOSFETs, and it relies on an external MCU (microcontroller unit) to provide the smarts needed to ensure proper rotation, receiver output decoding and fault monitoring. There are many other ICs dedicated to the task such as the L6235 or MC33035 (both require Hall sensors) or the A4960 (sensorless BLDC motor driver). The overall design is not trivial, despite the availability of ICs for the purpose. Given that commercial versions are available at relatively little cost, it's probably not sensible to try to build your own. This hasn't stopped many people though, and DIY versions are shown on many sites on the Net. If this is something you wish to look into, do so by all means, but I will not explore that option here.
+ + +While it may not seem like it, a simple ESC such as that shown in Figure 16 has regenerative braking by default. If the motor is driven faster than the applied voltage would normally achieve, the motor acts as a generator and forces current back into the circuit. The intrinsic diode in the switching MOSFET conducts and the current passes through that and back to the battery. You have control by turning on the MOSFET (using PWM), and when the maximum braking effort is required, the speed control will be reduced to zero. This can only work if the motor is driven by external forces (e.g. gravity).
+ +This isn't capable of providing the same braking effort as you get by shorting out the motor with a separate MOSFET (dynamic braking), but it will work in situations where limited braking capacity is required. It almost certainly won't work with anything propeller driven, but braking isn't likely to be needed with such systems anyway (planes can't be stopped in mid-air). For ground based models that may be expected to negotiate steep inclines the natural regenerative braking available may be as much as you need.
Traditional braking is achieved by using a MOSFET to short the motor. This provides the maximum possible braking force (without resorting to reverse polarity) but it is not regenerative. The energy from the motor is dissipated as heat, mostly in the motor itself. In theory, it may be possible to use the motor's back EMF to power a switching inverter that has a low voltage, high current DC input and the output puts 'excess' energy back into the battery, but this would require a fairly complex circuit that would not be economical for modelling. The alternative is specially wired motors designed for the purpose. Trains ('real' ones) and modern electric cars use a combination of regenerative and dynamic braking, and also provide friction brakes as these are necessary to bring the vehicle to a complete stop. This cannot be achieved by dynamic or regenerative braking alone, nor can they provide brake holding power to vehicles that are parked.
+ +To see just how regenerative braking can work, look at Figure 9 above. The back EMF generated by the motor is less than the supply voltage, because the motor was under load. As the load is reduced, back EMF increases (towards zero volts). Should the load (such as wheels driving a vehicle) try to cause the motor to go faster than its no load speed, the generated voltage will be greater than the supply, and will fall below zero volts. This causes the MOSFET's diode to conduct and pushes current back into the battery.
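To make the regeneration condition concrete, here's a minimal sketch of the simple brushed-motor behaviour described above. The motor constant, supply voltage and winding resistance are illustrative values only, not taken from any circuit in this article:

```python
def motor_current(v_supply, ke_v_per_rpm, rpm, r_winding):
    """Simple brushed DC motor model: back EMF = Ke * RPM.
    A negative result means current is being pushed back into the
    battery via the MOSFET's intrinsic diode (regenerative braking)."""
    back_emf = ke_v_per_rpm * rpm
    return (v_supply - back_emf) / r_winding

# Illustrative 12V motor: no-load speed 12,000 RPM, so Ke = 0.001 V/RPM
print(motor_current(12.0, 0.001, 10000, 0.5))  # under power: positive (motoring)
print(motor_current(12.0, 0.001, 13000, 0.5))  # over-driven by the load: negative (regenerating)
```

The sign change at the no-load speed is the whole story: once the generated voltage exceeds the supply, current has nowhere to go but back into the battery.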
+ + +For speed control, you ideally need a tachometer. One solution is a dedicated tacho-generator, or you can use a motor (such as the HDD motor referred to earlier), or a rotary encoder. The rotary encoder can be home made fairly easily, provided you can print onto a plastic film (there are specialty films for laser and inkjet printers). While the circuitry needed for a rotary encoder is more complex than using a motor as a generator, the results can be extremely good, with very high resolution and minimal drift.
When a slotted disk passes between an LED and a phototransistor, the duty cycle remains the same regardless of the speed, so the averaged output carries no speed information without further processing. It's therefore necessary to generate a constant pulse width regardless of speed, so the result can be integrated to obtain a voltage that depends only on the speed of the motor. Fortunately, a 555 timer is ideal for this purpose, provided the signal from the encoder has a fast risetime. If possible, the frequency from the encoder should be much higher than the motor speed, so if you need a motor to maintain a constant (say) 6,000 RPM, it's better to have the encoder output at 1kHz than 100Hz (10 'slots' vs. 1 'slot', respectively). Doing so makes filtering easier, and improves the response time by a factor of 10. A 100ms response time might be quite alright for one design, but far too slow for another.
+ +We tend to think that a motor speed of 20,000 RPM or more is pretty fast, but it's only 333.3 revs per second, so a single slot encoder will have an output frequency of 333.3Hz. In electronics, this is slow, and it's easy to process signals up to 10kHz with even very ordinary parts. This would allow speeds of 600,000 RPM with a single slot encoder, or 60,000 RPM with 10 slots. You can use as many or as few slots as needed to get a frequency within the 'friendly' zone of between 500Hz and 5kHz. I call this 'friendly' because simple circuits using cheap ICs can handle that range with ease, but still have very good accuracy and linearity.
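The RPM-to-frequency arithmetic above is easy to check with a trivial helper (illustrative only):

```python
def encoder_freq_hz(rpm, slots):
    """Output frequency of a slotted-disk encoder: revs per second * slots."""
    return rpm / 60.0 * slots

# 20,000 RPM with a single slot gives 333.3Hz, as stated above;
# 10 slots at 60,000 RPM reaches the 10kHz processing limit.
```

Inverting the same relation tells you how many slots put a given speed range into the 'friendly' 500Hz-5kHz zone.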
+ +Even though the frequency range is 'friendly', that doesn't mean that an over-simplified circuit will achieve good results. I like simple circuits, provided they are also elegant and perform as intended. This isn't possible if the design is over-simplified, because results will not be predictable. While you can (nearly) always end up with a circuit that works, it just doesn't work as well as it might if a little more thought goes into the design. In the drawing, note that the LM393 is a dual comparator, and not an opamp. Opamps are not fast enough to work in this circuit. The second half of the LM393 is not used in this circuit. You can also use the LM/LP311 single comparator if preferred, but it has a different pinout.
+ +
Figure 17 - Frequency To Voltage Converter (Tachometer)
A frequency-to-voltage converter is shown above. It uses a photo-interrupter to detect the motor RPM, with as many slots (or transparent sections) as needed to get a useful frequency range. As shown, it's perfectly usable for frequencies from 500Hz to 4kHz, providing an output voltage from 0.5V to 4V over that range. The slotted disk is attached to the motor shaft, and with (say) 30 slots, will be accurate over the speed range of 1,000 RPM up to 8,000 RPM. Adjust the number of slots to change the speed range, or use different values for C4, C7 and C8. The output of U3 is a series of 200µs pulses at a repetition rate determined by the RPM, and these are integrated by the output filter. You may find that performance can be improved by using the 7555 (CMOS version of the bipolar 555), as the output can swing to the supply rails (0V and +5V) provided the load current is low enough.
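The way the fixed-width pulses integrate to a speed-proportional voltage can be sketched as follows. It assumes the pulses swing the full 0-5V (which, as noted, really requires the 7555 CMOS version):

```python
def tacho_avg_volts(freq_hz, pulse_width_s=200e-6, v_pulse=5.0):
    """Average value of a fixed-width pulse train after heavy filtering:
    duty cycle (f * t) multiplied by the pulse amplitude."""
    return freq_hz * pulse_width_s * v_pulse

# 500Hz -> 0.5V and 4kHz -> 4.0V: the 1V/kHz scaling quoted for Figure 17.
```

This also makes the upper limit obvious: at 5kHz the duty cycle reaches 100%, so the pulses merge and the output can rise no further.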
+ +The two integrator (low pass filter) sections have a -3dB frequency of 72Hz each, with a final -3dB frequency of 45Hz. The filter has a high output impedance, and will require a buffer before the servo speed controller unless it has a very high input impedance. This entire circuit can be used in place of the feedback pot in Figure 12 (which would be simplified because bi-directional operation isn't supported with the tachometer as shown). The output must be buffered so it can drive the 2k2 input impedance of the error amplifier. VR1 is used to calibrate the monostable (based on U3 - 555 timer) and the integrated output pulses produce 1V/kHz.
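The quoted corner frequencies can be verified with the standard result for cascaded identical first-order sections. The formula assumes the two RC sections are buffered (ideal); in the real passive filter the sections load each other slightly, which is why the article's figure is 45Hz rather than the ideal ~46Hz:

```python
import math

def cascaded_f3db_hz(f_single_hz, n=2):
    """-3dB point of n identical, buffered first-order low-pass sections:
    f = f_single * sqrt(2**(1/n) - 1)."""
    return f_single_hz * math.sqrt(2.0 ** (1.0 / n) - 1.0)

# Two 72Hz sections -> roughly 46Hz overall.
```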
+ +
Figure 18 - Voltage Vs. RPM (Or Pulses/ Second)
This shows the circuit's linearity, providing almost exactly 1V/ 1,000 pulses per second (1V/kHz). The frequency can be expanded or reduced by modifying the monostable's timeout (vary C3), but the filter also needs to be re-calculated. As seen, the voltage is stable within 15-20ms. A better output filter can improve that, but use of a high-Q filter is not advised because it will cause overshoot. The amount of output ripple is determined by the integrator and applied frequency, so it will always be a compromise. If you were to use an active filter for the output, it has to be a Bessel alignment (minimum settling time) or you'll get overshoot at the output. The filter shown has a Q of 0.5, where Bessel is 0.577. The difference between the passive filter shown and an active filter based integrator will vary between negligible and extreme, depending on the filter's complexity. As with an input pulse integrator for a servo or ESC, a properly designed filter can give very fast response and low output ripple.
+ +It's possible to use much simpler circuitry to get a result, but it will not be as good as shown above. It's often tempting to use the simplest circuit that will work, but that will bite you on the bum if it turns out to be inadequate. Likewise, there's no good reason to make the circuit more complex to get improved performance, if the improvement can't be realised by the rest of the system. For battery powered systems in particular, the performance of everything will be degraded as the battery discharges. A tachometer that can give you a feedback signal from zero to maximum in under 20ms will outperform almost any motor.
+ +While the circuit can (in theory) handle an input frequency of up to 5kHz, the 555 timer doesn't have the output swing to allow that with a 5V supply. A higher supply voltage can be used, but that may add needless complexity to the final project. The supply needs to be regulated to ensure a consistent result. The 200µs pulse width can be changed by varying the value of C4. Make it 1nF to get a 20µs pulse width (good for much higher speeds, but reduce C3 to around 100pF) or 100nF for 2ms (for very low speeds). R10 and VR1 can also be changed if required - the system is flexible to suit your needs.
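For reference, the 555 monostable obeys t = 1.1·R·C. The resistor value below is an assumption chosen to reproduce the 200µs pulse with a hypothetical 10nF C4 (the article doesn't give these component values, so treat them purely as an illustration of the scaling):

```python
def monostable_pulse_s(r_ohm, c_farad):
    """Standard 555 monostable timing: t = 1.1 * R * C."""
    return 1.1 * r_ohm * c_farad

# An assumed R (R10 + VR1) of ~18.2k with a hypothetical 10nF C4 gives
# ~200us; changing C4 to 1nF or 100nF scales the pulse to ~20us or ~2ms,
# matching the behaviour described above.
```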
+ +Note that if you increase the pulse width, the final filter must be changed to suit or ripple will be excessive for low speeds. Likewise, if pulse width is reduced to allow for higher speed, the filter time constant can be reduced for a faster reaction. It's unrealistic to expect a tachometer to be able to cover a range of more than 10:1 without using a more advanced integrator, especially if a fast reaction is expected. However, there's not much point using a fast integrator if the motor only spins up slowly. For example, if the motor can't reach operational speed for (say) 10 seconds, then you don't need an integrator that responds in 10ms.
+ +You can also use a Hall-effect switching IC to provide feedback, but you need at least two magnets on some part of the rotating system. While one magnet can also be used, it will cause the system to become unbalanced unless a counterweight is provided.
+ + +The various systems described above all rely on some form of position monitoring. While this is generally a fairly ordinary potentiometer (pot) for common hobby servos, there are often requirements for much greater resolution. Industrial systems will almost invariably use monitoring processes that provide high accuracy with extreme longevity. A typical 'ordinary' pot may withstand perhaps 100,000 operations, but that could easily be exceeded in a few days with a high-speed machine.
+ +Opto-interrupters are common (as described in the previous section), but may suffer from limited resolution. There's a limit to the number of slots one can cut into a disc, and a definite limit to the speed available from the pickup photo-transistor. All forms of position sensor are limited - if you need exceptional resolution you must accept that speed cannot be too high. Likewise, if you need very high speed then resolution is compromised. Tacho generators are at the bottom of the pile for accuracy, and cannot provide positional information.
A 'resolver' is a specialised analogue encoder that incorporates a rotary transformer and two sense coils, 90° apart. These can provide very high resolution for angular position and can also be used to determine shaft speed. They are at the upper end of the price scale, and require fairly sophisticated circuitry to provide the drive signal and analyse the output signals. They are generally very robust, and well suited to adverse conditions (heat, shock, vibration, etc.). There are no electronic parts within the resolver itself - it's completely passive.
+ +Where requirements don't involve high accuracy or long service life (such as hobby servos), then there's no good reason to pay top dollar for very sophisticated sensors. It's not helpful to have a $500 sensor on a $20 servo, but the reverse may also be true. Attempting to get repeatable and accurate results from cheap sensors is equally unwise. This topic will not be continued here, because there are so many variables that I can only scratch the surface anyway.
+ + +As with many ESP articles, there's a lot to take in, but hopefully this article has helped your overall understanding of servos. They are used in so many applications that modern life just would not be the same without them, yet to most people they are very much an unknown technology. There probably isn't much call for a servo in an audio system (which is the main audience for the ESP site), but there are many 'non-audio' projects and articles, so it's not out of place to discuss these essential pieces of technology. Having said that, servos are used in most amplifiers to maintain bias current with varying temperature, and are sometimes used to eliminate DC offset in opamp and power amp circuits.
Servos and ESCs are now much more common than ever before, with people experimenting with robotic systems and a huge number of multi-rotor 'drones' now being used for tasks such as real estate agents providing aerial views of properties, shark patrols (important in Australia), or just being a general nuisance (usually unwelcome everywhere). Major on-line retailers are talking about using drones to deliver goods (not sure if that's a good idea or not), and of course we have self-driving cars - either just around the corner or years away, depending on who's discussing them.
+ +It's fairly obvious that a self-driving car (or truck) will use servos for everything that's normally done by the human driver, as this is exactly the kind of thing they are ideal for. Any autonomous device needs servos for control, since the requirement as to what to do is just a computer output, and it has to be interpreted into mechanical motion. At the very least, such servos will almost certainly be PID controllers, because of the need for very high accuracy and completely predictable behaviour. They will require even more processing to account for highly variable conditions, and to add fail-safe provisions to prevent accidents even if a system goes awry.
+ +We can expect that servos will become far more popular (and more advanced) than they are today, with new techniques and more accurate positioning systems. Hobby servos are out of their depth in any system where lives are at stake, but they too will evolve. There is evidence of this evolution already, but it will almost certainly accelerate in the coming years.
+ +We might as well get to know these systems before they are advanced to the point where mere mortals can no longer figure out what they do and how they do it. This is what happens once something is converted into microprocessor code that no-one will give you access to (it can be hard enough getting good info on some of the current analogue ICs). While the underlying electronics will change, the overall principles remain the same as they are now. The designers of fully digital systems also need to know the interactions between the electronics and mechanical parts, or it's impossible to get a fully optimised system. Fortunately (or unfortunately, depending on how you look at it), many of the tasks that used to require a physical prototype can now be simulated with the appropriate software, so the 'hands-on' part of the design process can sometimes be dispensed with. This is a shame, because that's the best way to learn how these systems really work.
+ +I briefly touched on PID controllers, and this opens a vast can of worms. These controllers can be very difficult to set up properly, and there are (many) entire books on the subject. The extra functions increase performance, but at some cost. The greatest cost these days is the time needed to optimise the system, especially for industrial processes where time constants are measured in hours or even days. Even for faster systems, it's not always easy to get the optimum set of parameters for the three functions, and it's not something I intend to cover.
+ +If you are working with ESCs for high current applications, be aware of component (MOSFET and diode) lead sizing and the total current your system will draw. Once you get over 20A or so, precautions must be taken to ensure that everything can handle the current without overheating. This applies to PCB traces and component leads. Just because a MOSFET (for example) is rated for 100A, this does not mean that the leads and soldered connections can carry that much current without a serious temperature rise. Paralleled MOSFETs and 'off board' wiring will usually be needed with very high current circuits to ensure that the component leads aren't stressed by thermal cycling or over-current. Consider that a typical TO-220 component lead has an area of around 0.6mm², which would normally have a current rating of about 7.5A. This is extended (considerably) only because the lead is short and assumed to have good heatsinking at each end.
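The lead-current arithmetic can be generalised. The ~12.5A/mm² density is simply back-calculated from the article's own figures (0.6mm² carrying about 7.5A) and should be treated as a conservative rule of thumb, not a datasheet value:

```python
def lead_rating_amps(area_mm2, density_a_per_mm2=12.5):
    """Continuous current rating of a component lead, using a current
    density back-calculated from the text (7.5A for 0.6mm2)."""
    return area_mm2 * density_a_per_mm2

# A typical TO-220 lead (~0.6mm2) rates at about 7.5A by this rule,
# which is why a '100A' MOSFET still needs paralleled devices and
# off-board wiring at very high currents.
```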
+ +Servos are not simple, despite appearances. There are electrical and electromechanical factors at work every time the position is changed. How well (or otherwise) the servo achieves its target depends on so many factors that it's hardly any wonder that most people simply rely on a dead-band to make the system stable. This always means that there is some residual error, but for many things it doesn't matter. For others, it may be life or death, so it pays to know the subject and choose wisely.
One thing is certain - the more you play around with the circuits and motors, the more you will learn about the all-important interactions between the electronic and mechanical components. Play with circuits, make a servo hunt because of excessive gain, and examine the effects of damping - both electronic and mechanical. This kind of hands-on experience will improve your appreciation for the techniques used now, and for more advanced approaches such as PID controllers. It doesn't matter if the end result is hardware or software, as long as it does exactly what you want, when you want it.
+ + +Please Note: There are many other references that were used to double-check the validity of claims made, and to extract a few finer points about the systems and how they worked. Not all have been included above, as the reference list could easily become unwieldy. For those interested, the list above is a good starting point, but it's surprisingly easy to look at ten different sites (and/ or books) and get ten different answers. It's up to the reader to determine what looks as if it might be real and what is obviously (or not so obviously) bogus.
+ + +![]() | + + + + + + |
Elliott Sound Products - Sinewave Oscillators
Audio oscillators (aka audio signal generators) have been an essential piece of test gear for many decades. While laboratory instruments were available (at laboratory prices) from the very early days, it wasn't until the late 1950s that affordable signal generators became available for hobbyists. Most were still frighteningly expensive and of mediocre performance by modern standards, but there has been a ready supply of such instruments for professionals and hobbyists alike for many years now. Kit versions have been made by many different companies and 'hobby' circuits have been published in electronics magazines for as long as I can remember.
+ +An intriguing conundrum on the Net is the constant belly-aching from many vested interests that "sinewaves are simple", and are therefore a poor test of an amplifier's distortion performance. If this were true, then a low distortion sinewave oscillator would not pose any problems to build, indeed, it too would be 'simple'. This being the case, I challenge those who believe this nonsense to build a simple variable frequency sinewave generator with minimal (or no) distortion. It's simple, isn't it?
+ +Alas, this is not the case, and there are many different schemes published that desperately attempt to obtain a low distortion sinewave, without having to revert to complex high-bitrate digital synthesis, and without using the venerable (and now unobtainable) R53/ RA53 or similar NTC thermistor. Even a sinewave generator that has low distortion at one or two spot frequencies isn't easy, and a variable generator takes the difficulty to another level. Ideally, a high purity sinewave generator will not require tuned filters at the output to reduce distortion.
+ +In this article, I will concentrate on variable frequency oscillators, because while spot frequencies can be useful if you only need to check distortion at a couple of frequencies, most people like to be able to test filters, amplifiers, loudspeakers, and other devices that are generally expected to be able to reproduce more than one (or two) frequencies. However, there are many requirements for single-frequency sinewave oscillators, so they are not avoided altogether.
+ +The distortion should ideally be as low as possible, but anything below 0.1% starts to become rather difficult with many of the methods available. It's certainly possible to improve on this, but very careful adjustment of all the parameters (time constants, allowable stabilisation range, etc.) is needed to get good results. Some comparatively simple arrangements can give very good results, but only over a limited range (using common and readily available opamps). Indeed, opamps impose many additional limitations. Distortion is usually well within acceptable limits, but not many low-cost opamps will allow operation of any oscillator topology to much beyond ~30kHz. This is very limiting, as it is common for general purpose audio oscillators to have a range up to at least 100kHz, preferably more.
+ +Note that this article is very specific - it deals only with 'linear' oscillators - those that are designed to generate a sinewave. Even more specifically, the range is limited to audio frequencies, plus at least a couple of octaves either side. Most audio oscillators are expected to be able to cover the range from about 5Hz up to at least 100kHz.
+ +You won't find any multivibrators or other square/rectangle generators here, nor will you find RF oscillators, other than by a glancing reference.
+ +Note that the schematics presented here are for the purpose of illustration and education, and should not be considered to be fully functional as shown. In many cases the circuits will work as described, but this is not guaranteed and cannot be assumed. Some circuits incorporating feedback stabilisation loops using other than lamps or thermistors may require some effort to ensure stability under actual operating conditions. This is an article that describes the principles - it is not a collection of projects that have been built and fully debugged.
+ +No descriptions are provided for common function generator ICs. Basic function generators have been around for some time now, and there are several specialised ICs designed for just that purpose. However, most have mediocre distortion performance (typically around 1% for the better versions), and that limits their usefulness. Some of the ICs include the Exar XR2206, Maxim MAX038 and Intersil ICL8038, but not all are still available because they are now obsolete. If you are interested, look up the details for them - provided you don't need low distortion, one of them may be just what you need. Most use 'waveform shaping' to get a passable facsimile of a sinewave (see Section 7 - Waveform Shaping for an example). Several low cost function generators are available on-line, and most use one of the common ICs. You'll also see many DDS function generators at fairly low cost. None of the cheap function generators are suitable if low distortion is needed.
+ +Note that power supplies and bypass capacitors are not shown in the drawings that follow. Most of the opamp circuits in this article will require ±12-15V DC supplies. Although all gain blocks are shown as opamps, in many cases you will have to build a discrete 'opamp' or the circuit will not be satisfactory at high frequencies. Most IC opamps will typically be ok up to perhaps 30kHz or so, but if you need good performance up to 100kHz or more, you'll almost always need a discrete circuit or a very fast opamp. All opamps (whether IC or discrete) need ceramic power supply bypass caps close to the IC or other parts to prevent instability (either parasitic or continuous RF oscillation). Discrete opamps may be needed if a lamp is used for stabilisation, because the current needed is high enough to cause many opamps to increase their distortion beyond the quoted figures.
+ +An oscillator has two very specific requirements. The amplifier must be able to perfectly match the losses in the frequency determining network. The frequency determining network must be arranged so that the signal fed back to the amplifier results in positive feedback. If the gain exceeds that required for oscillation, the output will increase until it's distorted, and if too low the oscillation will die away to nothing. These constraints apply to all analogue oscillators.
+ +By definition, oscillators do not require an externally applied input signal, but instead use part of the output signal via a frequency selective feedback network as the input signal. It is the circuit noise and/ or offset voltage that provides the initial 'trigger' signal to the circuit when positive feedback is employed. If the gain criterion is satisfied, the output builds up over a period of time, oscillating at the frequency set by the circuit components [ 9 ].
+ +++ ++
++
Note: The capacitors in frequency networks will typically be MKP (polypropylene) caps for high performance, or MKT (polyester) for a general purpose unit. 1% metal film resistors are recommended in all cases. Polypropylene is probably one of the better options where you need high stability. Never use multilayer ceramic caps in any oscillator unless you actually want it to have high distortion and very poor (and unpredictable) frequency stability with temperature variations (this isn't a common requirement in my experience). Polyester (PET, Mylar, etc.) caps have a positive temperature coefficient, and polypropylene is negative (but smaller). Polystyrene caps are very good in this role, but they are hard to get and are only available in fairly low values.
There is a very low distortion sinewave generator published as Project 174 that you may find useful if you need to build an audio oscillator. It uses a novel sample-and-hold circuit to achieve amplitude stabilisation, and distortion+noise is around 0.001% with good opamps. The oscillator circuit is the same as that shown in Figure 5, and amplitude control uses an LED/ LDR optocoupler. Another is Project 179, which uses a discrete circuit and a lamp with a 'padding' network to minimise the lamp induced distortion (particularly at low frequencies).
Many early sinewave oscillators used a dual variable capacitor for tuning. This is a good option, but it means that all resistor values are high to very high, which affects noise performance and makes the circuitry susceptible to stray capacitance. This option has not been included in any of the circuits shown, but can be used if you have a suitable dual tuning capacitor available. A significant advantage is that variable capacitors tend not to become noisy like potentiometers, and their tracking is usually better. This minimises amplitude 'bounce' as the frequency is changed. Given that you'll be hard-pressed to find a tuning gang of more than 500pF or so, resistors have to be 20MΩ just to get down to 16Hz. Such high values also increase thermal noise from the resistors themselves, but this isn't as much of a problem as you may expect.
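The 16Hz figure comes straight from the usual RC tuning relationship, assuming the common equal-R, equal-C arrangement:

```python
import math

def tuning_freq_hz(r_ohm, c_farad):
    """f = 1 / (2 * pi * R * C) for equal-R, equal-C tuning networks."""
    return 1.0 / (2.0 * math.pi * r_ohm * c_farad)

# A 500pF gang with 20M resistors tunes down to ~15.9Hz ('16Hz').
```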
+ + +I mention this first because many people will be tempted by cheap DDS modules that are available from many on-line suppliers. These look like a good idea at first, but you'll almost certainly become annoyed rather quickly because they are nowhere near as good as the specifications seem to indicate, and they generally have fairly poor distortion performance. An analogue circuit (and interface) will always be easier to use. Frequency and level accuracy of analogue circuits may appear poor, but mostly we don't care too much about absolute accuracy.
+ +Many of the latest and greatest oscillators use DDS - direct digital synthesis, but such units are usually quite expensive. There are some very cheap ones, but they are not suitable for serious measurements. Even a 12-bit output is barely acceptable, as this will cause the minimum distortion to be around 0.04% - not bad, but certainly not very impressive. Lower resolution means higher distortion - 8 bit resolution gives a theoretical 0.5% THD with basic sample-rate filtering, and anything less than 8 bits is obviously pointless. While this can be improved with more advanced filtering, this increases complexity. For a usable system, I would not be happy with anything less than 14 bits, and preferably 20 bits or more. Needless to say, the digital clock frequency needs to be far greater than the highest output frequency. Distortion of a digitally generated sinewave with only sample-rate filtering falls by 6dB for each additional bit (the distortion is halved), so if we start from 7 bits (1%), 8 bits is 0.5%, 10 bits is 0.125%, etc.
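The 6dB-per-bit relationship described above can be expressed directly. The 1% anchor at 7 bits is the article's own figure for a sinewave with only sample-rate filtering:

```python
def dds_thd_percent(bits):
    """Theoretical THD (%) of a digitally generated sinewave with only
    sample-rate filtering: halves (improves 6dB) per extra bit,
    anchored at roughly 1% for 7 bits as stated in the text."""
    return 1.0 / (2.0 ** (bits - 7))

# 8 bits -> 0.5%, 10 bits -> 0.125%, 12 bits -> ~0.03%
# (the article rounds the 12-bit case to 'around 0.04%').
```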
+ +Unlike a digital audio format, very steep low pass filters (to remove the switching waveform) usually cannot be used with test equipment. This isn't because the filters are audible or create problems as such, but because the filters need to track the audio signal across the wide frequency range generally available - typically from less than 0.1Hz up to 5MHz or more. Tracking filters expected to cover that range are not easy to implement.
+ +The performance of test equipment should generally be at least 5-10 times better than the device under test ('DUT'). If this is not the case, you can't measure the response of an amplifier accurately if its response approaches that of the measurement system (both input and output devices). Needless to say, measuring an amplifier's distortion using a source that has perhaps 5 to 10 times more distortion than the amplifier is a completely pointless exercise. Even if the distortion is the same for the source and the DUT, the reading you obtain is obviously inaccurate and cannot be used meaningfully. This is a constant problem with most workshop systems - even those that are comparatively advanced.
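To see why a 5-10 times margin matters, assume (as is common, though not guaranteed) that the source and DUT distortion residuals are uncorrelated and therefore combine on an RMS basis:

```python
import math

def combined_thd(source_thd, dut_thd):
    """RMS combination of uncorrelated distortion residuals. Correlated
    harmonics can add directly, making the error even larger."""
    return math.sqrt(source_thd ** 2 + dut_thd ** 2)

# Source 5x better than the DUT: the reading is only ~2% high.
# Source equal to the DUT: the reading is ~41% high - not usable.
```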
'DDS' might be the current buzzword for audio generation, and DDS units offer many additional features that most users will probably never use. However, it's important to understand the limitations of any test equipment that is driven via a menu, push-buttons (or a keyboard) and expects you to read an LCD to see the current settings. Compare this with an 'analogue GUI' where the knob pointer shows the setting, and you can just turn a knob to increase/ decrease amplitude, frequency or range. One small movement vs. many button-pushes wins every time.
+ +For a number of reasons, I mainly use a digital waveform generator these days, but it's actually a pain in the bum compared to a fully analogue version. Now that I don't need the very low frequency ability of the digital generator so often, I will be changing back (after some essential repairs, since my preferred unit is now almost 50 years old).
+ +While many people expect that 'newer is better', that is most certainly not the case if functionality is sacrificed for bells and whistles. I do like the ability to press a button and get tone-bursts, but I don't like changing from sine to squarewave output, and having the oscillator spontaneously reset my sinewave level (yes, it does that, and it's bloody annoying!). I have no idea why anyone thought that was 'useful'.
+ + +One thing that's fairly recent in my workshop/ lab is the addition of a high-Q tuned filter. My most often used distortion meter is a fixed frequency unit, and operates at 400Hz and 1kHz. By adding a pair of filters, with one tuned for each frequency of interest, I now have a measurement floor of around 0.007%, limited by the distortion meter itself. The filters I used are shown in the article Gyrator Filters, Figure 24, and I can simply switch from one to the other as needed. These filters have reduced the distortion from my arbitrary waveform generator from 0.02% to well below the resolution of the distortion meter. Analysis of distortion is always accompanied by monitoring the distortion meter's residual output, as that is crucial to understanding the composition of the distortion. If it shows sharp spikes or significant excess noise, I know what to look for. This is an important step, but it's not done often enough, so the true nature of the distortion components remains hidden, with only a percentage THD provided.
+ +Providing only a distortion percentage can hide some very unpleasant surprises, and I've been monitoring the residual for as long as I've been taking distortion measurements. Most (but sadly, not all) distortion meters have an output for just this purpose, and observing the results on a scope or listening to the residual on a monitor speaker can tell you a great deal about the exact nature of the distortion 'artifacts' produced by the device under test.
+ + +When we speak of audio oscillators, the primary waveform is a sinewave. Having access to a squarewave is useful, but the sinewave is favoured for the vast majority of tests. If we wish to measure distortion, then the sinewave needs to be exceptionally pure, with a THD that is substantially lower than that of the device under test. While less than 0.01% THD is desirable, it is extremely difficult to achieve with any variable frequency oscillator. Obtaining very low distortion is comparatively easy for a single frequency tone generator, but these are not common because few people can afford the space or cost of a dedicated oscillator that can't also be used for general purpose tests.
+ +Most oscillators are simply an amplifier, with a tuned circuit (frequency selective filter) of some kind to set the frequency. In order to oscillate, it requires positive feedback. The amount of positive feedback needed is determined by many factors, including the losses through the selective filter. It is the filter that determines the frequency, and it can be either an all-pass (phase shift) or band-pass type. Band-pass filter based oscillators have a theoretical advantage, in that any distortion created by the amplitude stabilisation network is subjected to the action of the filter, so in theory distortion should be lower. In reality, this is not necessarily the case.
+ +In the tests I did for this article, I found that the filter doesn't make as much difference as one might expect. Even though the Wien bridge (the most commonly used audio oscillator topology of all) has only very basic filtering, it still has amongst the lowest distortion of any of the different types. The Wien bridge is common for a number of reasons, not least being that it has good frequency stability, is a simple circuit, and is easily tuned over a one decade (10:1) range. The general schematic of a more or less typical Wien bridge oscillator is shown below. We will then dissect the various parts so that operation is easily understood.
+ +As will be shown later, there are many different schemes for oscillators. Some are good, and others less so. For acceptable distortion, very few diode or zener stabilised oscillators are suitable, however there is one exception that will also be discussed. Almost any clipping stabilisation scheme can be replaced with a thermistor (best), an LED/LDR opto coupler (good) or a junction FET (varies from useless to good). Unfortunately, as we have already seen, thermistors that are usable for this application are virtually impossible to obtain. Occasionally R53/RA53 thermistors appear on on-line auction sites, but these are a rather unreliable source at the best of times.
+ +Any waveform can be converted into a sinewave if you apply enough filtering, but unless the filter is part of the oscillator it is difficult to impossible to make the filter and oscillator track perfectly. High Q filters that will remove the harmonics effectively require an amplifier with a very wide bandwidth. As always, some of the designs shown below are simply interesting - they may not be used by anyone reading this, but every circuit you see has something to contribute to the world of analogue electronics.
+ +Many oscillators are non-linear (function generators for example), and use waveform shaping to approximate a sinewave. While this is useful because there is no variation in level as the frequency is changed, distortion is usually too high to be useful. Anything above 0.5% is getting to the point where it's not useful for anything but frequency sweeps. Digital generators are not actually oscillators at all. The selected waveform is generated as a digital signal, and is converted to analogue using a digital-to-analogue converter (DAC). While many of the latest digital units are very impressive, they are also fairly expensive ($500 or more) and are difficult to justify for routine audio work.
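The phase-accumulator principle behind a digital generator can be sketched in a few lines of Python. This is an illustrative toy only - the table size, accumulator width and DAC resolution are arbitrary choices, not taken from any particular instrument, and a real generator adds reconstruction filtering and dithering:

```python
import math

def dds_samples(freq_hz, clock_hz, bits=12, acc_bits=32, n=8):
    """Minimal DDS sketch: a phase accumulator steps through a sine
    lookup table, and each sample is quantised to the DAC's resolution."""
    table_bits = 10
    table = [math.sin(2 * math.pi * i / 2 ** table_bits)
             for i in range(2 ** table_bits)]
    step = round(freq_hz / clock_hz * 2 ** acc_bits)  # phase increment per clock
    levels = 2 ** (bits - 1) - 1                      # e.g. ±2047 for 12 bits
    acc, out = 0, []
    for _ in range(n):
        idx = acc >> (acc_bits - table_bits)          # top bits index the table
        out.append(round(table[idx] * levels))        # quantise to DAC codes
        acc = (acc + step) % 2 ** acc_bits
    return out

print(dds_samples(1000, 1_000_000))   # first few codes of a 1kHz tone
```

The output frequency is set purely by the phase increment, which is why DDS units can change frequency instantly with no amplitude bounce - but the DAC resolution limits distortion, exactly as described above.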
+ + +While most of the other oscillator types will be lumped together, the Wien bridge has a special place in history, and is one of the most common audio oscillator configurations known. Since Bill Hewlett and Dave Packard started making them commercially in the late 1930s, total Wien bridge audio oscillator production would be in the hundreds of millions. There are very good reasons for this too. The amplifier only needs a modest amount of gain (3, or about 10dB), and the bandwidth only needs to extend to a little more than the maximum frequency expected.
+ +
Figure 3.1 - The 'Classic' Wien Bridge Oscillator
R2 (marked *) needs to be changed to suit the lamp's resistance, with the value shown being about right for a 28V, 40mA lamp (however, see note below). The lamp must be a low current type, and even so will cause some pain for most opamps. Increasing the value of R2 may not allow enough current through the lamp to allow it to stabilise the output level, unless higher supply voltages are used to allow sufficient lamp current. Opamps are not designed to provide more than a few milliamps during normal operation, but the lamp may require 20mA or more (peak) before its resistance rises enough to be useful. See below for a detailed explanation of how the stabilisation process actually works.
+ +Note: you need to be aware that lamps are not as straightforward as they may seem at first look. While running some additional tests on these circuits, I found that there are very large differences between lamps, even from the same batch. One may work properly, but another will cause the output level to 'bounce' uncontrollably. It should be possible to get stable operation by varying the value of R2, and the lamp needs an RMS voltage across it of at least 1/10th of its rated voltage.
The Wien bridge itself is a phase shift network and a very basic (low Q) filter. At the critical frequency, there is a 0° phase shift, so there is positive feedback to the non-inverting input of the amplifier (in this case, an opamp). Figure 3.2 shows the general scheme of the Wien bridge, including the amplitude and phase response. You can see the basic filter response too. The upper capacitor causes the low frequency rolloff, and the lower cap causes the high frequency rolloff. The resistors (one in series, one in parallel) set the frequency - in this case 1.59kHz. This is calculated from the values of R and C (which must be identical for R1, R2 and C1, C2). Frequency is determined from ...
+ +f = 1 / ( 2π × R × C )
+ f = 1 / ( 2π × 10k × 10nF ) = 1.59kHz
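The calculation is trivial to automate. A minimal Python helper using the formula above, checked against the 10k / 10nF example:

```python
import math

def wien_frequency(r_ohms: float, c_farads: float) -> float:
    """Oscillation frequency of a Wien bridge with equal R and C values."""
    return 1 / (2 * math.pi * r_ohms * c_farads)

print(f"{wien_frequency(10e3, 10e-9):.1f} Hz")   # ~1591.5 Hz, i.e. 1.59kHz
```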
Many early Wien bridge oscillators used a variable capacitor rather than a pot. While this idea has great merit (variable capacitors will last several human lifetimes), it also means that all tuning circuit impedances are extremely high. Variable caps are very limited, and may have a maximum of perhaps 500pF. If you need to get to 20Hz, this means that the resistors need to be 15.9M for the lowest frequency range. Even a small amount of stray capacitance causes errors, and very complex shielding is needed to prevent hum and noise being picked up by the high impedance circuitry.
+ +
Figure 3.2 - The Wien Bridge And Response Curves
Figure 3.2 shows the Wien bridge itself, along with the frequency and phase response curves. As you can see, the amplitude is about 10dB down at the peak (exactly one third of the input voltage), so the amplifier must have a gain of 3 to ensure oscillation. In reality, the gain must be greater, or the oscillator will refuse to start or will stop. Unfortunately, the gain requirement changes very slightly due to small resistor (or pot) differences, but if it's only a tiny bit higher than needed, the amplitude will keep increasing until the output stage clips. Distortion is unacceptable at this point. This is why some form of amplitude stabilisation is essential.
+ +With 1V input, the output of the Wien bridge is ideally 333.33mV - exactly one third. Even a very small variation between resistors and capacitors will change this though - a variation of ±1 ohm for the 10k resistors (0.01%) will change the gain requirements of the amplifier. The change is small, but it's enough to cause the oscillator to either stop, or increase level until it distorts the output. It may come as a surprise that a small incandescent lamp could possibly be accurate enough to allow the circuit to function in a useful manner.
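The one-third attenuation and 0° phase shift at the critical frequency can be verified numerically. The Python sketch below models the series and parallel RC arms of the bridge with complex impedances:

```python
import cmath, math

def wien_transfer(f, r=10e3, c=10e-9):
    """Complex transfer function of the Wien network:
    a series RC arm feeding a parallel RC arm to ground."""
    zc = 1 / (2j * math.pi * f * c)     # capacitor impedance at f
    zs = r + zc                         # series arm
    zp = (r * zc) / (r + zc)            # parallel arm
    return zp / (zs + zp)

f0 = 1 / (2 * math.pi * 10e3 * 10e-9)   # ~1.59kHz for 10k / 10nF
h = wien_transfer(f0)
print(abs(h), math.degrees(cmath.phase(h)))   # 1/3 and 0 degrees
```

At any other frequency the magnitude is below 1/3 and the phase is non-zero, which is why the loop only sustains oscillation at f0 with a gain of exactly 3.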
+ +The lamp is positioned in the negative feedback path around the opamp, and when cold will have a low resistance (all metal filament lamps have a positive temperature coefficient of resistance). This means that the amplifier will have very little negative feedback, so will oscillate immediately. As the output level of the opamp increases, more voltage appears across the lamp, its current increases, and so does its resistance. As the resistance of the lamp goes up, the opamp gain is reduced at the same time. A lamp is a PTC thermistor.
+ +Within a relatively short period, the whole system (theoretically) reaches a state of equilibrium. Any attempt by the circuit to increase the output will result in greater lamp current, more negative feedback, so the level is prevented from increasing. In reality, it will increase, but hopefully only by a small amount. Likewise, should the level fall for any reason, current through the lamp filament falls, it cools a little, resistance falls, so gain is increased. Nearly all lamp or thermistor stabilised Wien bridge oscillators will show a variation of output level as the frequency is changed, so the stabilisation is definitely not perfect. Finding a lamp that provides a stable output is far easier said than done!
+ + +An alternative to the traditional Wien bridge shown above splits the amplification into two parts. The parallel section is used in an integrator circuit rather than as a passive network. Feedback is applied via the series network as shown below. This modification to the conventional Wien bridge network is claimed (by J.L. Linsley-Hood and others) to improve performance and reduce distortion caused by 'common mode defects' in the active device(s). This is real, and eliminating the common-mode signal does reduce distortion. Both opamps operate with zero common mode voltage, as one input is grounded and the other is a 'virtual ground'.
+ +
Figure 3B - Wien Bridge Oscillator With Two Amplifiers
A traditional Wien bridge as shown in Figure 3.1 has a significant common mode voltage (i.e. the signal voltage applied to both of the opamp's inputs), but in reality this is not usually a problem with modern devices, although the 'common mode' distortion may still be a limiting factor. The stabilisation network (lamp, thermistor, etc.) will invariably cause far more distortion than any modern opamp.
+ +Distortion is reduced if the output is taken from the output of the integrator (U1A) because it acts as a low pass filter, so removes some of the harmonics. Because of the way the feedback network and integrator stage operate, the second stage operates with a gain of two for stable oscillation. Diode stabilisation is possible, but requires several circuit changes and will give unacceptably high distortion. Lamp stabilisation should also be possible, with the lamp placed between U1A and U1B in place of R3, and the feedback resistor around U1B altered to suit.
+ + +The heart of any sinewave oscillator is the amplitude stabilising system. Without it, the level will continue to increase until the waveform is clipped and severely distorted, or oscillation will die out over a period of time - assuming it starts at all. The range where the amplitude is stable and has low distortion is limited, and it is simply not possible to make an amplifier with the exact gain needed and expect it to work properly. In all cases with analogue sinewave generators, some means of stabilising the amplitude is needed. This can use diodes, zener diodes, or more sophisticated AGC (automatic gain control) systems. However, the most common (and most effective) amplitude stabilisation systems have used non-linear resistances as described below.
+ +The demise of the specialised NTC thermistor that used to be the mainstay of audio oscillators is a serious blow, because the only available alternative is a small, low current lamp. These have a positive tempco, so the feedback network needs to be rearranged. Because their current demands are comparatively high (typically up to 20mA or more), this stresses most opamps. Lamps also have a fairly fast time constant, so distortion at low frequencies can be higher than is desirable because the resistance changes during the sinewave cycle.
+ +RA series thermistors used to be made by a number of vendors, such as ITT, GE and various others, but absolutely no-one manufactures these components any more. The Chinese make a range of audio oscillators, and one I have seen uses a small lamp for amplitude stabilisation. There are several techniques that can be used, and each has its place. One of the problems is that there is little or no reference material that I could find that discusses the options, and the strengths and weaknesses of each.
+ +In short, these are the primary options ...
+ +There is one other option too, and that's to deliberately clip the signal, and rely on a tuned filter to remove the distortion produced. Diodes or zener diodes are common for clipping limiters, but the amplitude will change with temperature. If clipping is used, it needs to be symmetrical to minimise even order harmonics. That means that diodes (or zeners) need to be matched and maintained in close thermal contact for best performance.
+ +Filter complexity can be quite high for a low distortion output, and the circuit may end up needing to use multi-gang (3 or more) potentiometers for tuning. Apart from being extremely hard to get, these often have poor tracking. One or more fixed frequency filters can be used after a low distortion oscillator to reduce distortion, but this is generally limited to a few spot frequencies.
+ +Even a tiny change of gain of an amplifier used in an oscillator circuit will cause the signal amplitude to increase until it distorts, or decrease until it dies away to nothing. The only way we can prevent either of these from happening is to provide an amplifier with more gain than is needed, and use automatic gain control (or controlled clipping) to maintain the effective gain at exactly the right amount to keep the sinewave amplitude stable. This isn't as easy as it might sound.
+ +A problem that's becoming more of an issue than ever before is the rapidly shrinking availability of suitable JFETs. Where there used to be FETs for every occasion, most suppliers have reduced their stock to a few types that still remain popular. The very high performance, low distortion types have all but disappeared, unless you get them from China. This means they will have the type number that you ordered printed on the case, but inside could be anything that vaguely resembles a JFET (or even something else entirely).
+ +For anyone looking for exceptionally low distortion, have a look at Project 174, which uses a novel sample and hold circuit to stabilise the amplitude. Unfortunately, the circuitry for the stabilising network is far more complex than the oscillator itself, which should give you an idea of just how important this part of the circuit really is.
+ + +A thermistor stabilised oscillator is shown above. Note that the thermistor and R2 have swapped places, because the thermistor has a negative temperature coefficient of resistance (NTC). As the level increases, more current flows through the thermistor, its resistance falls and this applies more feedback. Additional negative feedback reduces the gain and therefore brings the output level back to the desired voltage.
+ +
Figure 4.1 - Wien Bridge Oscillator Using Thermistor
Most people who have used audio oscillators will have found that the level bounces after the frequency is changed. A level change is caused by imperfect tracking of the frequency pot, and the bounce is caused by the lamp (or thermistor or other stabilisation technique) time constant. It always takes a while until the level settles to the normal value, because it is extremely difficult to obtain critical damping. In extreme cases, the bouncing amplitude can continue for some time - especially at very low frequencies. There is an inevitable trade-off that must be faced with all amplitude stabilisation circuits ... use a fast acting system that settles quickly but has high distortion at low frequencies, or a slow acting system that bounces for some time, but gives good performance at low frequencies.
+ +In some (up-market) oscillators, different time constants are used depending on the frequency. This is hard to achieve if the time constant is dictated by a thermistor though - it is what it is, and it can't be changed. Electronic stabiliser circuits become even more problematical because of the increasing complexity of the overall solution. If the time constant is wrong, the oscillator may just operate in short bursts followed by silence. While this type of waveform can be useful, a poorly chosen time constant for the feedback stabilisation is not the way to achieve the desired result.
+ + +The photo below shows an R53 made by ITT. There is no real consensus on whether these are R53 or RA53, but the writing on the glass says R53, so I suppose that is a fair indication that this device is an R53. I've also used the RA54, and as far as I can recall, there's no apparent difference. Most people have never even seen one, so I have remedied this by including the photo. I actually had to enhance the bead itself a little, because it's so small that it didn't show in the photo. The glass envelope is evacuated (i.e. a vacuum), and there is a getter at the end (note the silvered tip). The bead itself is tiny - apparently it's about 0.2mm in diameter, and it's suspended on very fine (platinum?) wires. The idea is that it is self-heating, and is relatively immune from ambient air temperature.
+ +
Figure 4.1.1 - Photo Of R53 Thermistor
Provided the tiny bead runs hot enough (perhaps 60°C or so), variations in ambient temperature will have little effect on the resistance of the bead. The whole idea is that its temperature is determined by the voltage across it. With a thermal time constant of about 1 second or so, the resistance doesn't change much with the applied AC waveform itself, only the RMS current through the bead is important.
+ +Despite this, the R53 and similar thermistors (and lamps) will show increased distortion at low frequencies. Fortunately, this is rarely a problem, and few people bother to measure amplifier distortion below 100Hz or so. A low frequency, low distortion source can be useful to measure the distortion from electrolytic caps as their reactance becomes significant compared to circuit impedance, but the audibility of distortion is very low at low frequencies anyway, provided it's no more than a couple of percent (and low-order).
+ +While there are many small bead type thermistors, this particular style in the vacuum tube is no longer made by anyone. People are constantly asking for assistance to find one (as a search will reveal), but no major supplier sells them any more. I accept that the market must be pretty small so they would be fairly expensive, but I am baffled as to why absolutely no-one seems to make a thermistor designed to stabilise audio oscillators. There is still a significant market for basic test equipment, and the audio oscillator is one of the most important. An entire enterprise (Hewlett Packard) started with a couple of blokes building audio oscillators in a garage - perhaps it's time to try that again. Chinese made audio oscillators are readily available from many sources, but I don't know what they use for stabilisation (although I know that some use small lamps).
+ +Even lamps are starting to disappear, because panel indicators and (analogue) meter scale illumination are done with LEDs now. As long as the market still exists for small lamps there shouldn't be any real difficulty, but no-one knows how long there will be a demand. Once usage falls significantly, the cost of making them increases dramatically, limiting options even further.
+ +For the time being, we'll assume a lamp for stabilisation, especially since no-one can get RA53 thermistors any more. The lamp's resistance at 25°C needs to be known, and a reasonable approximation of the current needed can be determined. The current must be enough to raise the lamp filament temperature well above ambient, but not hot enough to make the filament glow visibly. Based on a number of fairly typical circuits available in application notes and elsewhere, a lamp filament current of around 7-12mA seems fairly common, which makes the lamp's warm resistance somewhere between 90 and 300 ohms. Look at Figure 2, and note that the feedback resistor is 470 ohms. For a gain of 3 as required, the lamp's filament resistance must be 235 ohms, and the opamp must be able to provide sufficient voltage swing and current to supply the feedback circuit's total resistance (705 ohms). If you can't see where I got the numbers from, I suggest that you read the beginners' guides for opamps and opamp circuits. Most of this is nothing more than Ohm's law.
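The arithmetic above is easily generalised. For a non-inverting amplifier the gain is 1 + Rfb/Rlamp, so for a gain of 3 the lamp must settle at exactly half the feedback resistance. A small Python helper (illustrative only - real lamps settle wherever their thermal characteristics dictate):

```python
def lamp_requirements(r_feedback: float, target_gain: float = 3.0):
    """For a non-inverting Wien amplifier (gain = 1 + Rfb/Rlamp), return
    the lamp resistance needed for the target gain, and the total
    feedback-path resistance the opamp output must drive."""
    r_lamp = r_feedback / (target_gain - 1)
    return r_lamp, r_feedback + r_lamp

print(lamp_requirements(470))   # the 470 ohm example: 235 and 705 ohms
print(lamp_requirements(220))   # a 220 ohm feedback resistor: 110 and 330 ohms
```

The same one-liner confirms the 220Ω case discussed later (lamp resistance of 110Ω).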
+ +I tested a likely looking miniature lamp (almost identical to the one pictured). For some reason, the US based IC manufacturers who publish the application notes all seem to think that everyone not only knows what a #327 lamp is, but can get one easily. Application notes refer to this mysterious 327 lamp as if it were some kind of (minor) holy grail.
Yes, it seems to be readily available in the US, but elsewhere? It transpires that the #327 is a 28V lamp, rated at 40mA or thereabouts (1.12W on that basis). At full temperature, the filament will have a resistance of 700 ohms. A photo of a #327 lamp is shown to the left, so for those of us not in the US, at least we know what it looks like. (Ok, I do admit that these lamps can be obtained outside the US, but they are not readily available.) The application notes generally fail to state that many different types of lamps can be used, and they provide no details to make it easier for the constructor to choose something suitable.
+ +RS Components has (had) a 28V, 40mA lamp (catalogue number 655-9621) that I believe works well. There are actually quite a few lamps that can be used, so it should be easy to find one that works. At least until these small lamps become unavailable! Unfortunately, no-one knows when that will happen, and maybe (if we're lucky) they will be with us for a few more years. Avoid lamps that demand high current (less than 40mA is preferred), and aim for a high operating voltage to minimise current. As noted below, the lamp voltage should ideally be at least 10% of the rated voltage, although that's often hard to achieve.
+ +Miniature 12-24V lamps with a rating of 1-2W (or less) should be alright for most applications. Cold resistance should be as high as possible - aim for at least 25 ohms if you can. Some testing will be necessary, because it's irksome to try to calculate the lamp's resistance at all possible operating conditions. Lamps with a rated voltage below 12V probably will not work, because they require more current than most opamps can supply. A buffer amplifier can be added (or a discrete circuit can be built) that can provide the current needed by the lamp if you don't have a choice.
+ +The miniature bulb I used for the graph shown below has a cold resistance of 65 ohms (estimated), but even the ohm-meter supplied enough current to raise the resistance to 69 ohms. With a 220 ohm feedback resistor (as shown in Figure 3.1), the opamp output voltage will be 1.5V. You should see 500mV across the lamp, and total feedback current is 4.5mA - this means that the lamp's resistance must be 110Ω. Allowing for resistor tolerance (I didn't bother measuring the exact resistance) this all looks about right. It is also possible to use the resistance change to calculate temperature, but tungsten makes this task somewhat more difficult than more sensible metals, because the tempco changes (slightly) above ~100°C. However, as an approximation, tungsten increases its resistance by 0.45% (0.0045) per °C. If we know that the resistance went from 69 ohms to 110 ohms, then this would indicate that the temperature of the tungsten has risen by roughly 130°C, from 25°C to around 155°C. This is so far above ambient temperature that normal variations cannot cause significant level changes.
+ R = R0 ( 1 + α × ΔT )   where R is the final resistance, R0 is the resistance at ambient, α is the tempco of resistance (0.0045 per °C) and ΔT is the temperature change in °C, or ...
+ T = T0 + ΔR / ( α × R0 )   where T0 is the ambient temperature, T is the final temperature (°C), ΔR is the resistance change and R0 is the initial resistance at ambient.
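The two formulas can be checked with a quick round trip in Python. The values used below (a 100Ω filament heated from 25°C to 300°C) are arbitrary, chosen only to exercise the linear approximation, not measured from any particular lamp:

```python
ALPHA_W = 0.0045   # approximate tempco of tungsten resistance, per °C

def hot_resistance(r0: float, t0: float, t: float) -> float:
    """Resistance after heating from t0 to t (linear approximation)."""
    return r0 * (1 + ALPHA_W * (t - t0))

def filament_temp(r0: float, t0: float, r: float) -> float:
    """Invert the same approximation: temperature from measured resistance."""
    return t0 + (r - r0) / (ALPHA_W * r0)

r = hot_resistance(100.0, 25.0, 300.0)     # 100Ω heated by 275°C
print(r, filament_temp(100.0, 25.0, r))    # resistance, then recovered temp
```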
Measured distortion with the lamp I had was 0.02% at 700Hz and with an output voltage of 1.69V RMS. Not a wonderful result, but more than acceptable for most general purpose applications. Somewhat surprisingly, the measured distortion with the lamp was slightly lower than with an RA53 thermistor. The latter showed just under 0.05%, and both were measured at 700Hz. The distortion residual (just the harmonics after the fundamental has been removed by the distortion meter) was smooth in both cases, with predominantly 3rd harmonics. The circuits were tested on my opamp test board, and there was no shielding of any kind. I used 4558 opamps, which are roughly equivalent to the TL072, but have BJT inputs rather than FETs.
+ +With the 220Ω feedback resistor, the lamp voltage was 550mV, meaning a current of about 5mA. There was considerable amplitude bounce, and the only 'cure' was to increase the feedback resistance and therefore the lamp voltage (and current). I increased the feedback resistor to 375Ω, which gave a lamp voltage of 1.39V (7.4mA) and an output of 4.16V RMS. This reduced the amplitude bounce, but it was still (IMO) unacceptable. A higher voltage would be preferable. However, that would make direct operation with an opamp impractical. Measured distortion was 0.03%.
+ +
Figure 4.2.1 - Lamp Current Vs. Voltage
The image shown above is the voltage vs. current graph for a 28V, 50mA lamp (560Ω nominal), being one that I measured recently. Its cold resistance is less than 70Ω - it's difficult to measure accurately because even the multimeter's ohms range current (~800µA) causes the resistance to rise. For an 'ideal' part, the resistance would change very rapidly as current is increased, but the slope seen in the graph is fairly gentle. If used with a current of 8mA, the resistance is only 207Ω with a voltage of 1.66V, so to get a higher resistance means more current and a higher current drive circuit. Adopting the '10% rule' (explained below) we'd like to have a voltage across the lamp that's around 10% of 28V, or 2.8V. That isn't always feasible, but with the lamp shown, the Figure 3.1 oscillator works 'well enough', giving an oscillator output voltage of about 4.5V RMS. This isn't a bad result, as the lamp current is low enough to ensure a long life, the output voltage is acceptable, and the filament temperature is well above ambient.
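Given a lamp's resistance-vs-current curve, the steady-state output can be estimated: the circuit settles where the lamp's resistance equals half the feedback resistance (i.e. where the gain is exactly 3). The Python sketch below uses an invented, loosely lamp-shaped curve for illustration - it is not the measured data from the graph:

```python
def equilibrium_point(r_feedback, lamp_curve):
    """Steady-state operating point of a lamp-stabilised Wien oscillator.
    Oscillation settles where gain = 1 + Rfb/Rlamp = 3, i.e. where the
    lamp's resistance equals Rfb/2.  `lamp_curve` is a list of
    (current_A, resistance_ohms) points, linearly interpolated."""
    target = r_feedback / 2
    for (i1, r1), (i2, r2) in zip(lamp_curve, lamp_curve[1:]):
        if r1 <= target <= r2:
            i = i1 + (target - r1) * (i2 - i1) / (r2 - r1)
            v_out = i * (r_feedback + target)   # opamp output (RMS)
            return i, v_out
    raise ValueError("target resistance outside the supplied curve")

# Hypothetical curve, loosely shaped like a small 28V lamp
curve = [(0.001, 75.0), (0.003, 95.0), (0.005, 120.0), (0.008, 207.0)]
i, v = equilibrium_point(220.0, curve)
print(f"{i * 1000:.2f} mA, {v:.2f} V RMS")
```

With a gentler resistance slope, the equilibrium current (and hence output voltage) shifts further for a given change in feedback resistance - which is exactly why lamps with a shallow curve give prolonged amplitude bounce.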
+ +During testing, I determined that with a current of 10mA the filament appears dark, with an easily visible dull red glow appearing above 13mA. At 7.6mA, the formula shown above indicates a temperature of 437°C. The glow at that temperature is just visible in a very dark room with at least 5 minutes in complete darkness to allow your eyes to adjust. Unlikely though it may seem, using this technique you can see the glow from a soldering iron at ~350°C.
+ +As noted earlier, you may have difficulty finding a lamp that works without continuous amplitude bounce. Like the unobtainable R53, suitable lamps may meet the basic specifications (28V, 40mA), but that does not mean they will work reliably. Several lamps I tried (ostensibly identical to the #327) did not work at all well in a low voltage circuit (±12V), and even when the feed resistance (R2 in Figure 3.1) was carefully adjusted, operation was less than perfect, with prolonged amplitude bounce before the level settled. These latest tests were done in October 2021, while original test results were from 2010, so it seems that the 'new' lamps are different from those of a few years ago. This doesn't bode well for the future.
Note that some lamps may cause the output to be unstable, with continuous amplitude bounce (increasing and decreasing level, but never settling on a steady value). In extreme cases, you may even get a condition called 'squegging' - see Section 7 for an explanation of this phenomenon. This shouldn't happen with a simple lamp stabiliser, but I have seen it with some lamps I've tested. Upon further investigation, I've found that the lamp voltage should ideally be a minimum of 10% of the rated voltage. For a 28V lamp, that means no less than 2.8V RMS across the lamp, and preferably a little more. If you can meet this criterion, then lamp stabilisation works well, and distortion may be reduced further by adopting the arrangement described in Project 179. If the lamp were operated at 11mA (3V), the output voltage will be 9V RMS, requiring a supply voltage of at least ±22V.
+ +The resistance vs. voltage (or current) curve of lamps is such that you'll often find that the amplitude of the sinewave changes slightly when the frequency is adjusted. This is due to imperfect tracking of the frequency pot, and that changes the gain needed to ensure oscillation. Ultimately, it's a careful balancing act - everything has to be 'just so' to get good results (and distortion performance is often still not as good as you'd like).
Be warned that many of the lamps that were common (and cheap) only a few years ago are now gone, and while there are a few reasonably priced ones left, they are rapidly diminishing. It's only a matter of time before they become very hard to obtain and expensive. I originally suspected that most small incandescent lamps would eventually vanish from supplier shelves, but the supply (based on a recent search) indicates that there are more now than when this article was written. However, we can see small lamps selling for up to $5.00 or more (each!), and this can only get worse.
An optocoupler using LED and LDR makes a useful feedback network, and (unlike a lamp) it's very stable. The drawing below shows a circuit that I tested, and it works surprisingly well. Distortion is tolerable, at under 0.1% for frequencies above a few hundred Hertz, and despite the lack of filtering of the DC feedback signal (applied to the LED), performance at low frequencies is only a little worse. One might imagine that adding a capacitor in parallel with the LED would help, but most sensible values cause the amplitude to bounce continuously. A more complex filter circuit would help, but that defeats the purpose of a very simple design. This is likely the simplest possible sinewave oscillator, with no amplitude bounce and acceptable distortion for most purposes.
Figure 4.3.1 - Wien Bridge Oscillator Using LED/LDR Optocoupler
The diode bridge is powered directly from the opamp's output (with R5 to limit the peak current), and while you'd expect this to create distortion, it's well below the distortion caused by the LDR (it wasn't visible on the distortion residual displayed on my scope). The distortion residual is primarily third harmonic, and is surprisingly smooth with no sharp discontinuities that indicate higher order distortion. This scheme is very usable, and if you need a fairly decent sinewave, this is by far the simplest way to get it. The output level depends on the LED's forward voltage and that of the diodes, but my test unit provided 8V peak-peak (2.85V RMS).
Expect distortion to be around 0.4% if the distortion trim is omitted, but with it adjusted properly you can get below 0.1% quite easily, even at 100Hz. This is an unexpectedly good result, and while greater complexity can make it much better, as a general purpose oscillator it's pretty good as-is. Omitting the trimpot increases distortion, but lets the oscillator stabilise almost instantly, even with a grossly mismatched frequency pot. I tried it with one of the capacitors increased to 110nF (a 100nF cap in parallel with the 10nF cap shown), and it was easily able to oscillate reliably and stabilise almost instantly - that's a serious mismatch!
To set the distortion trimpot, adjust it so that the lowest possible voltage appears across the LDR, while ensuring that there's enough gain for reliable oscillation. That means the pot's set value will be in the order of 6k, but remember that the output level is also determined by the forward voltages of the diodes and the LED in the optocoupler. The amount of 'reserve' gain is the gain above three (the minimum required for oscillation) that's available when the tuning values are fairly close to the exact values. A 10k pot should be fine with most LDRs. A fixed resistor can be used once a workable value is found. It has to be a value that allows fast settling and reliable oscillation over the full frequency range.
The idea is to have the lowest possible voltage across the LDR, consistent with reliable oscillation. The distortion of any LDR is highly dependent upon the voltage across it, so by minimising the voltage you also minimise the distortion. If you aim for a voltage of no more than 200mV RMS across the LDR, distortion should be well below 0.1% (I measured 0.07% with VR2 set to 6k, and 160mV across the LDR). Oscillation was close to instant, with zero amplitude bounce.
Be aware that the signal amplitude will vary by at least -4mV/°C due to the tempco of the diodes (Schottky types should be a little less). The LED will also show some variation, but I didn't quantify that. I heated the rectifier/ LED/ LDR network with a hot air gun, which reduced the level from 1.77V RMS to about 1.6V RMS (it was pretty hot!). That's a change of less than 1dB, and is probably alright for most purposes. In reality it's unlikely to be an issue, because the temperature of most workshops will be set to something that's comfortable for people, so shouldn't vary by more than ~10°C.
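The "less than 1dB" claim is easy to verify from the two levels quoted:

```python
import math

# Level change from heating the rectifier/LED/LDR network, expressed in dB.
# The 1.77V and 1.6V RMS figures are the ones measured in the text.
level_cold = 1.77
level_hot = 1.60
change_db = 20 * math.log10(level_hot / level_cold)
print(round(change_db, 2))  # about -0.88 dB, i.e. comfortably under 1dB
```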
For a simple, easily set up general purpose oscillator, this is very hard to beat, and it works fine even with very ordinary opamps. Expensive, low distortion opamps aren't required, because the distortion is limited by the LDR. If you can't get the VTL5C4 shown, you can make your own DIY optocoupler by following the detailed instructions in Project 145.
Since the ideal thermistor is unobtainable, lamps may require more current than we have available, and LDRs have more distortion than we may desire, we need to look at alternative methods. Even the supply of lamps is shrinking, with far fewer available now than even 2-3 years ago. I have already shown a FET used as a variable resistance, and these are convenient, cheap, and work well enough so long as the (AC) voltage across the FET is kept to a minimum. Providing an AC signal at the gate which is exactly half the voltage on the drain helps dramatically, and even harmonics (2nd, 4th, 6th, etc.) are effectively cancelled, leaving only the small odd harmonic residuals. C2 would normally be connected in series with R5, but that creates a second time constant.
To prevent this JFET 'feedback' from creating two time constants, one based around each capacitor - C1 and C2 (the latter shown in grey), it's better to direct-couple the JFET gate to C1, and use C2 in the drain circuit as shown (in series with the feedback circuit). C2 needs to be a relatively high value, such that there is little or no voltage across the cap at any frequency selected. This means it will be an electrolytic because a value of at least 220µF is needed, based on 'typical' feedback resistance values and a minimum frequency of 10Hz. Lower frequencies require a larger capacitor. Doing it the way shown does add a small perturbation as the JFET's gate voltage changes, but as there's only a few microamps available through R4 and R5 it has a minimal effect on the output (far less than amplitude bounce, which will last for around 500ms as simulated).
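A quick check of the C2 sizing argument: at the 10Hz minimum frequency, a 220µF capacitor's reactance works out to be small compared with typical feedback resistances. The "few kilohms" comparison value below is my assumption, not a figure from the article:

```python
import math

# Capacitive reactance Xc = 1 / (2*pi*f*C). For C2 = 220uF at the 10Hz
# minimum frequency, Xc is around 72 ohms - negligible compared with
# feedback resistors assumed to be in the kilohm range.
def capacitive_reactance(c_farads, f_hz):
    return 1.0 / (2 * math.pi * f_hz * c_farads)

xc = capacitive_reactance(220e-6, 10.0)
print(round(xc, 1))  # about 72.3 ohms
```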
Figure 4.4.1 - JFET Electronic Stabilisation
The JFET circuit has only one time constant when connected as shown, and that makes it fairly easy to avoid unacceptable bounce when the frequency is changed. You would normally expect to see C2 in series with R5, but that creates another time constant. As shown, all requirements for a stable loop and minimal distortion are satisfied. While there are countless JFET stabilised oscillator circuits to be found on the Net, almost none are wired properly. Many don't include the drain to gate feedback at all (so distortion will be unacceptably high), and a few get tantalisingly close, but get the feedback path wrong.
Done properly, a JFET can provide distortion performance that is as good as or better than a lamp or thermistor. In simulations (real life will be worse), I've managed to achieve less than 0.001% THD, using both Wien bridge and state-variable topologies, but it's not known how well that will translate to reality. Remember that the lower the voltage across the JFET, the lower the distortion can be, but there's always a limit imposed by imperfectly tracking tuning pots, as this requires a greater available variable gain range.
While discussed above in Section 4.3, a bit more information is warranted. The LED/ LDR circuit has only one time constant - LDRs have a slow response to illumination and a slower response when light is reduced or removed. However, it's usually necessary to include some filtering after the rectifier to minimise distortion at low frequencies. This extra time constant can cause serious bounce, and in extreme cases, what's known as squegging. This refers to the behaviour of an (analogue) electronic circuit that appears to function normally for a period (typically a few milliseconds), then shuts down for a period (from milliseconds to seconds) before repeating the process continuously. The design of control-loop time constants is almost a complete science in itself, and it is very easy to make a seemingly insignificant change that either causes or cures the problems. Squegging can be very difficult to prevent when there's more than one time-constant in a control-loop.
Needless to say, control-loop theory is outside the scope of this article, but during both physical testing and simulation of the circuits shown, I encountered squegging on several occasions. Multiple time constants that are reasonably close together will cause problems, so it is generally necessary to ensure that time constants are widely different if more than one is involved. If low frequency performance is of little consequence, C1 in the drawing below can be eliminated (although it must be admitted that it doesn't help very much anyway).
Figure 4.4.2 - LED/ LDR Stabilisation
To prevent amplitude instability, the filter cap for the LED/LDR opto coupler feedback circuit is much smaller than it really should be. LEDs are extremely fast, but LDRs are relatively slow, with the VTL5C4 taking several seconds to return to maximum resistance after illumination. The experiments I performed showed that adding a filter cap of a useful value after the diodes caused squegging, and while I'm sure that there is a combination that would work, I simply left it out to prevent problems (thus, you too can feel free to omit C1 in the second circuit). However, this limits the low frequency range because distortion becomes very high at frequencies below ~50Hz or so (this depends on the specific opto coupler, as there are many different types with different response times).
There is one major difference between the way these two circuits work. The FET has minimum impedance with no signal, and increasing the signal level increases the FET's impedance. In this respect, it is the equivalent of using a lamp, so must be in the same electrical location that would otherwise use a lamp. For the Wien bridge, this means that the FET connects from the feedback node to ground.
The LED/LDR opto coupler is the equivalent of a thermistor as it has maximum resistance with no signal, and the resistance falls as the level increases. While the operation can be electrically reversed, doing so simply adds more parts for no real benefit. In both cases, it is important to minimise the voltage across the FET or LDR. Smaller voltages and/or currents mean lower distortion, so the variable resistance element should use series or parallel resistance (or a combination of both) to achieve the highest linearity.
Naturally, you must ensure that there is always enough available gain to ensure that oscillation starts reliably, and this influences the distortion null setting. While you may be able to get low distortion from the Distortion Null control, you may then find that the circuit refuses to oscillate at high frequencies or when the oscillator is first turned on. This means that some distortion performance must be sacrificed to ensure reliable oscillation under all conditions. An LED/LDR optocoupler can be used to stabilise any oscillator, with placement depending on the topology.
Another option is a LED/FET optocoupler, such as the Fairchild H11F1. I've not tried them, so can't comment based on direct experience. However, since the active element is a FET, we already know that the level has to be kept low to minimise distortion. The datasheet claims that low level AC and DC can be controlled "distortion free", but I find this a little difficult to accept. Many posts on forum sites complain of high distortion with this (and similar) devices, so they are probably unsuitable for a low distortion oscillator gain control. There is no facility to provide the ½ AC voltage at the gate to cancel even harmonic distortion, so it's unlikely that performance will be acceptable.
A high quality VCA (voltage controlled amplifier) such as the THAT2180 (or SSM2018 which is now obsolete) could be used for amplitude stabilisation. See Project 141 to get an idea of how these work. Distortion is very low, at a claimed 0.02% for the THAT2180B. Using one in an oscillator would be a challenge though, and in practice it will be necessary to ensure that the VCA only provides a small fraction of the feedback signal to minimise its influence on the final THD figure. Getting a stable output may prove to be a challenge.
There are actually relatively few sinewave oscillator topologies around. Given the long-term popularity of sinewaves, one would expect a plethora of different designs, but this is not the case. Certainly, there are more options for fixed frequency oscillators (such as phase-shift oscillators for example), but variable frequency is expected by most users so the options become much more limited. Those shown here are representative only, and include JFET, diode and lamp stabilisation.
Stabilisation is the bane of all sinewave oscillators, because it either works quickly but with high distortion, or works slowly so has low distortion, but causes amplitude bounce when frequency is changed. Junction FETs are convenient but have relatively high distortion unless the level is kept very low (preferably under 100mV, ideally much less). LED/LDR opto-couplers would seem to be a perfect choice, and if used appropriately can give low distortion. Thermistors and lamps have better linearity than FETs or LDRs, so one would think they'd win every time, but this isn't always the case.
Since all analogue sinewave oscillators require amplitude stabilisation, this is still a challenge. When a FET is used, it is a well known phenomenon that distortion is minimised if the gate has exactly half the AC signal level present at the drain. Second harmonic distortion is virtually eliminated, leaving predominantly third harmonics. Interestingly, most of the application notes that use electronic stabilisation only show half-wave rectification. A full-wave rectifier is preferable, as it reduces the ripple voltage in the stabilisation loop, which helps reduce distortion. If the FET's internal structure isn't perfectly symmetrical, it may be necessary to vary the AC gate voltage (at the sinewave frequency) slightly above or below the halfway point to get the lowest possible distortion.
The first of the alternatives is based on a state-variable filter, and although fairly complex, it performs well. The biggest advantage is that the sweep range can be made fairly wide - up to 100:1 is possible, although this can only be achieved realistically if a close tolerance dual pot is used for tuning. Distortion performance is acceptable, but is limited by the FET. Although I haven't tested this design with a thermistor in place of R6, I'd expect performance to be quite good. This design also has a cosine wave available from the output of U1B. A cosine wave is a sinewave, but displaced by 90°. The usefulness of this is dubious for a general purpose oscillator, although there are a few specialised applications where a cosine waveform is needed. Few audio hobbyists will ever require a cosine output. It doesn't matter which output is used if you only require a sinewave, but the second integrator (cosine output, U2A) gives a slightly lower distortion.
Figure 5.1.1 - Sine-Cosine Generator Using State Variable Filter
This is an interesting circuit, partly because I could find no definitive origin. Parts of it are shown in an Intersil application note, and there are several sites that either show a very similar circuit as indicated in the references, or have a link to the page. Distortion performance depends almost entirely on the FET used. I've shown a 2N5484, and while these were common, it's possibly one of the worst FETs around for this application. The original showed a 2SK30A, but it is a discontinued device and therefore will be difficult to obtain. The resistor in parallel with the FET (R7) needs to be selected for reliable starting and lowest distortion (it's therefore a compromise). The available range of JFETs has diminished greatly in the last few years, and good candidates are now harder to find. The circuit has been revised so the JFET is used properly (without adding an additional time constant).
R8 and R9 combine to ensure that the gate has exactly half of the signal present at the drain, and this reduces distortion (simulated) from 0.2% to 0.009%. Without the drain to gate feedback, distortion is increased. I have found it wise to use simulator distortion figures as a guide only, so expect real-world figures to be somewhat higher than the simulator claims. See Section 7 for a closer look at electronic stabilisation.
Frequency is determined using the same formula as the Wien bridge. In the example shown it has the same frequency range as the Wien bridge above.
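For reference, the Wien bridge formula is f = 1 / (2π × R × C). The 10k / 10nF values below are illustrative only (my assumption, not the values from the schematic):

```python
import math

# Standard Wien bridge (and state-variable) frequency formula.
def wien_frequency(r_ohms, c_farads):
    return 1.0 / (2 * math.pi * r_ohms * c_farads)

# Illustrative values only: 10k and 10nF give roughly 1.59kHz.
f = wien_frequency(10e3, 10e-9)
print(round(f))  # about 1592 Hz
```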
Figure 5.1.2 - State Variable Filter Oscillator With Diode Stabilisation
Another variant of the state variable filter oscillator is shown above. This uses diode stabilisation and distortion cancellation (R9 and R10). This is an especially interesting circuit, since it is easily capable of respectably low distortion (around 0.05% is possible), while still using simple and cheap diodes for amplitude control. R8 (1k) is selected for reliable oscillation, and can be increased to reduce distortion, but the oscillator may not start if the value is too high. The two diodes must be accurately matched for forward voltage at a current of around 0.5mA to ensure waveform symmetry and minimum distortion.
Distortion cancellation relies on the primary distortion component being third harmonic, and this will always be the case when matched diodes, thermistors or lamps are used for amplitude control. Distortion components cannot be accurately predicted if LED/LDR optocouplers or FETs are used. The circuit as shown has a fairly high output impedance, so must be followed by a buffer stage.
Figure 5.1.3 - State Variable Filter Oscillator With Lamp Stabilisation
Finally, you may also be able to use a lamp to stabilise the amplitude. The same issues as with a Wien bridge apply, and it may take some experimentation to find a lamp that works properly, without squegging (see Section 7 for an explanation of this phenomenon). R6 will need to be adjusted to suit the characteristics of the lamp used. As a rough guide, R6 needs to be low enough to ensure that the lamp operates with a minimum of 10% of its rated voltage. Alternatively, you can adjust R3, with a lower value providing more feedback so the circuit oscillates more readily. Note that the opamps need to be capable of significant current to power the lamp, so NE5532 or similar are suggested. If you wish, the output of U1B to R6 can be buffered by a non-inverting opamp buffer to minimise distortion. An additional opamp may degrade distortion performance, especially at higher frequencies where the small extra phase shift may become 'significant' compared to the waveform's periodic time (1/f).
There are several different configurations for the basic state-variable filter, so while you may see circuits that appear slightly different, the basic operation is the same.
The state variable filter based oscillator shown above is also known as a quadrature oscillator, because it produces two sinewaves in quadrature - they are exactly 90° apart (sine and cosine). One sinewave is available from U2A as shown (technically, this is the cosine, delayed by 90° with respect to the sine output), and another can be taken from U1B. The cosine is sometimes preferred, because the second integrator gives it lower distortion.
This is not a particularly user-friendly oscillator, because there are three time constants that must be changed for tuning. All three must be identical, and the oscillation frequency is given by the normal formula. This is a common circuit, and is used where sine and cosine waveforms are needed but where only a single fixed frequency is necessary.
The lowest distortion depends on how it's wired. Sometimes the sine output may have lower distortion with a different clipping network. There are many different ways you may see this circuit stabilised, but nearly all use diodes. This invariably limits the distortion performance. Amplitude depends on the diode voltage, and a more sophisticated arrangement is necessary if constant level is important.
Figure 5.2 - Quadrature Oscillator Using Three Integrators
The lowest distortion is obtained from the sine output (U1A), but there are several different ways that the circuit can be stabilised, and lowest distortion depends on where the signal is clipped. Residual distortion can be hard to eliminate. In general, THD below 1% is reasonably easy to achieve, but expecting less than 0.2% is probably optimistic. It is possible to use alternative amplitude stabilisation techniques, but as it's not a 'friendly' oscillator this isn't covered. The diode stabilisation certainly works, but distortion performance is less than stellar.
The connections shown can vary, but the circuit must be arranged to have a loop gain of more than unity. Making R2 11k (rather than 10k) increases the gain, which is pulled back by the diodes and R4. This does affect the frequency slightly. Other variations may also be seen, but the net result is much the same no matter how it's arranged.
There are a number of variations on this theme, but this particular version is interesting in that it only requires one resistor value to be changed to change frequency. A single-gang pot eliminates any issues with tracking, and this problem is actually worse than expected. Few (affordable) dual-gang pots offer good tracking between sections, and this circuit solves the problem by not needing a dual-gang pot.
Apart from anything else, the filter section itself is mildly interesting. If you look at the various active bandpass filters, you will see that this is really a multiple feedback bandpass filter. No-one seems to bother mentioning this, yet it surfaces in several application notes and on a number of websites. National Semiconductor refer to it as an 'easily tuned' oscillator, which is certainly true enough. Stabilisation is achieved using clipping diodes, but a thermistor, FET or LED/LDR can also be used with appropriate circuit changes. Distortion can be very low if a linear stabilisation technique is used, but even with diodes can be under 1%.
Figure 5.3 - Oscillating Bandpass Filter Oscillator
There are two major disadvantages of this circuit that are not mentioned in any of the application notes I've come across. High frequency performance is very limited, because the filter stage (around U1B) operates with considerable gain. As the frequency is increased, the tuning resistance (R2 + VR1) is reduced to the minimum, and this attenuates the signal from the diode clipper. With the values shown and at maximum frequency, the opamp needs a gain of at least 30, and preferably a lot more. Very fast opamps could be used, but they are expensive and in my opinion are wasted on this circuit. One common example on the Net shows VR1 as 1k and R2 as 51 ohms. The opamp is operated at high gain to get a high Q (which minimises distortion), and this limits the maximum usable frequency.
The other problem is that the tuning range is comparatively small. To maintain acceptably low distortion, the tuning ratio is only about 4:1 - rather inconvenient and much lower than that from other topologies. Distortion is inversely proportional to frequency, so at the minimum frequency the THD will be roughly 4 times that of the maximum frequency. These two issues confine the circuit to the 'interesting but not very useful' basket. This is contradictory to some of the claims you may see for this circuit, but I've built and tested one so I know its limitations. With the pot at minimum, the operating frequency is approximately equal to ...
f = 1 / ( 2π × C × √( R1 × ( R2 + VR1 ) ) )
f = 1 / ( 2π × 10nF × √( 560 × 470k ) ) = 981Hz
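The worked example can be checked numerically, using the values quoted in the text (C = 10nF, R1 = 560Ω, R2 + VR1 = 470k):

```python
import math

# Tuning formula for the oscillating MFB bandpass filter:
# f = 1 / (2*pi*C*sqrt(R1*(R2+VR1)))
def mfb_osc_frequency(c, r1, r2_plus_vr1):
    return 1.0 / (2 * math.pi * c * math.sqrt(r1 * r2_plus_vr1))

f = mfb_osc_frequency(10e-9, 560, 470e3)
print(round(f))  # about 981 Hz, matching the worked example
```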
The most (potentially) useful part of this design is the filter. While it's simply an adaptation of a MFB bandpass filter, using it as a variable tuned filter is unusual. I have used the same arrangement for tunable filters in a couple of projects, but only over a very limited frequency range. The Q varies with the pot setting (high resistance gives a low Q), but in many cases this is not a major limitation. Fig 5.3 has been redrawn to show the MFB filter in its 'normal' schematic representation.
To calculate the MFB filter, use my MFB Filter (an executable program written in Visual Basic 6). The popup when you click the link will ask if you wish to save the file. It's clear of any virus and it does not connect to the Net. It does require the VB6 runtime library, and this should be present on all recent Windoze machines. Windows will probably give dire warnings about running the file, which can be ignored.
This is a very uncommon circuit, and is one that I happened to find in a commercial product. Although it is single frequency, it's a potentially interesting circuit because it is capable of fairly low distortion, despite the use of zener diodes for amplitude control. It typically has a higher output than many of the other circuits, and as shown below produces about 5V RMS output level with a distortion of as little as 0.04% (with considerable tweaking!).
Figure 5.4 - Low-Pass Filter + Integrator
Although the circuit is not complex, amplitude stability with changes in both supply voltage and temperature is quite good, but tuning is difficult. With the values shown it oscillates at about 1kHz, but no resistor value can be changed that affects only the tuning and not amplitude. This makes it suitable for fixed frequencies only. The value of R3 is quite critical - it needs to be low enough to ensure reliable oscillation, but not so low that the output distorts. The circuit gain around the integrator is almost exactly 2 (6dB). The voltage losses in the filter and zener clipping circuit need to be exactly replaced by the integrator gain to maintain oscillation.
As mentioned - interesting, but only marginally useful. Determining the frequency is rather irksome, and can only be approximated. In theory, the frequency for the circuit as shown is 1026Hz - roughly based on the standard formula ...
f = 1 / ( 2π × C1 × R1 )
This is also influenced by the integrator (U1B) time constant determined by R3 and C3, and this makes it considerably less predictable. When simulated, the circuit oscillates at 1016Hz, but a test circuit oscillates at about 970Hz. This is outside the expected variation based on typical component tolerances. Although the circuit is (surprisingly) quite stable in operation, virtually every component changes the amplitude and frequency. The only exception is the zener diodes which may be changed with only minor effect, but even R4 changes both amplitude and frequency!
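One way to visualise the spread between the theoretical, simulated and measured figures is to invert the nominal formula and compare the R1 × C1 products each frequency implies:

```python
import math

# Inverting f = 1/(2*pi*R1*C1) to find the effective R1*C1 time constant
# implied by each quoted frequency (theory 1026Hz, simulated 1016Hz,
# measured ~970Hz). The spread shows how far the real circuit strays
# from the simple formula.
def implied_rc(f_hz):
    return 1.0 / (2 * math.pi * f_hz)

for label, f in [("theory", 1026), ("simulated", 1016), ("measured", 970)]:
    print(label, round(implied_rc(f) * 1e6, 1), "us")
```

The ~6% gap between theory and measurement is larger than normal component tolerances would explain, which is the point made above.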
Of the alternatives, this is by far the best I've found. For most applications it will easily beat almost anything else. It is described in Project 86, and has very good performance. The only down-side is that the dual-gang pot needs to track fairly accurately to prevent momentary drop-outs (bounce is virtually non-existent), but it's much better than a Wien bridge oscillator in that respect. In addition, PCBs are available from ESP - see the pricelist.
Figure 5.5.1 - Basic Phase Shift Oscillator
One method of using all-pass networks is shown above. There are two all-pass filters, with a variable resistance that allows a (theoretical) frequency range from 20Hz to 20kHz. With the values shown, the range is from 30Hz to 720Hz. The final amplifier has just over unity gain, with the diodes acting as amplitude limiters. The low-level gain is determined by the values of R8+R9 (in series), divided by R7. This gives a gain of 1.085 - just sufficient to ensure oscillation. When the peak amplitude attempts to exceed the diode voltage (650mV × 2) the diodes conduct and prevent the amplitude from increasing. The frequency range should be limited to no more than ~20:1, otherwise it's too hard to set high frequencies accurately.
While the ability to operate over the full audio frequency range looks like a good idea, mostly it isn't. It will be almost impossible to set a particular frequency in the two upper octaves because of the pot - the total resistance will be less than 10k for any frequency above 2.3kHz using 6.8nF caps, and setting that with a 1MΩ pot is irksome (to put it mildly). For that reason, the P86 circuit (and the simplified version shown below) divide the frequency range into three bands.
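The "less than 10k above 2.3kHz" figure follows directly from the standard formula:

```python
import math

# Total tuning resistance needed for a given frequency, from
# f = 1/(2*pi*R*C). With 6.8nF caps, 2.3kHz needs only about 10.2k -
# a tiny fraction of a 1M pot's travel.
def resistance_for_frequency(f_hz, c_farads):
    return 1.0 / (2 * math.pi * f_hz * c_farads)

r = resistance_for_frequency(2300, 6.8e-9)
print(round(r))  # about 10.2k ohms
```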
This principle is then taken a step further by incorporating distortion reduction. In the following circuit, the phase shift networks are inverting types (series capacitor and resistance to ground), but this makes no difference to operation.
Because of the 'feedforward' distortion cancellation signals, the output can achieve a distortion as low as 0.1% with diode clipping. A thermistor can be used instead of diodes to reduce the distortion even further, but the problem of obtaining thermistors remains (of course). There is no reason that an LED/LDR solution wouldn't work equally well, although this has not been tried. Unlike most of the alternatives, this oscillator can have a much wider frequency range than expected - up to 25:1. This means that the entire audio band is more than adequately covered by only 3 ranges ...
10 to 140 Hz
140 to 1,960 Hz
1,960 Hz to 27.44 kHz
Getting beyond 20kHz requires fast opamps, and is usually not needed for the majority of tests. A more realistic upper limit is around 15kHz, and a reduced frequency range (such as 14:1 as shown above) allows more accurate frequency adjustment. The minimum frequency is limited only by the amount of capacitance you can use for CT (timing capacitors) and RT (timing resistor + pot). There is no theoretical lower limit for frequency, and if the diode limiter is used (rather than a thermistor or electronic stabilisation scheme) distortion will remain low at well below 1Hz. Frequency calculation uses the same formula as the Wien bridge.
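The three 14:1 bands listed above butt together exactly, which is easy to confirm:

```python
# Generating the three 14:1 bands from the 10Hz starting point.
# Each band's top frequency is 14x its bottom, and the top of one
# band is the bottom of the next, covering 10Hz to 27.44kHz.
RATIO = 14
bands = []
low = 10.0
for _ in range(3):
    high = low * RATIO
    bands.append((low, high))
    low = high

print(bands)  # [(10.0, 140.0), (140.0, 1960.0), (1960.0, 27440.0)]
```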
Please note that the term 'feedforward' is not strictly correct in the context used here, but it does convey the principle fairly well. Also, much like the state-variable filter based oscillator, I found little information on the Net about this circuit, except for the contributed project published on my site. It is based on a circuit described in the February 1982 issue of Wireless World (now Electronics World), contributed by Roger Rosens. The original relied on a long obsolete NTC thermistor for amplitude stabilisation.
Figure 5.5.2 - Phase Shift + Distortion Cancellation Oscillator
The secret of how this design achieves such low distortion from a diode limiter and no filtering lies in the final opamp. In much the same way as the diode shaper shown below sums the various outputs to approximate a sinewave, the final stage sums signals with a defined phase displacement. The result is almost complete cancellation of third harmonic distortion, along with a worthwhile reduction of fifth and seventh harmonics as well.
The frequency is set by the two phase-shift networks, and is determined by the values of RT and CT (timing resistors and capacitors). RT includes the fixed and variable resistors. The frequency is determined by ...
    fo = 1 / ( 2π × RT × CT )
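Since the circuit shares its frequency formula with the Wien bridge, the band edges are easy to check numerically. The RT and CT values below are hypothetical (component values aren't given here), chosen so that a resistance sweep with a 1µF timing cap reproduces the quoted 10 to 140Hz range.

```python
import math

def oscillator_frequency(rt_ohms, ct_farads):
    """Oscillation frequency, same formula as the Wien bridge: fo = 1 / (2*pi*RT*CT)."""
    return 1.0 / (2.0 * math.pi * rt_ohms * ct_farads)

# Hypothetical values: a 1uF timing cap with RT swept from ~1.14k to 15.9k
# spans roughly the 10 Hz to 140 Hz band quoted for the lowest range.
f_low = oscillator_frequency(15.9e3, 1e-6)    # ~10 Hz
f_high = oscillator_frequency(1.14e3, 1e-6)   # ~140 Hz
print(f"{f_low:.1f} Hz to {f_high:.1f} Hz")
```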
Overall, this is probably the most useful of all the different types shown, although it uses more opamps than most, and will use even more if electronic stabilisation is added. Because of the multiple summing points that give this circuit its low distortion, a lamp cannot be used, so amplitude limiting will be via the diodes as shown, or an RA53 thermistor if you can get one.
This general class of circuit is simple to build if you need a simple oscillator in a hurry. These oscillators don't require stabilisation (although it can be applied with some effort), and are very common circuits. Traditionally used in valve guitar amps as the tremolo/ vibrato oscillator, there are probably many millions of them around. The frequency formula is not especially accurate, and although the frequency can be changed by around 5:1, the amplitude usually changes too. Any timing resistor can be changed to alter the frequency, and for a limited range (~4:1), R3 can be varied with only small amplitude changes.

Although shown with feedback using R4 and R5, the opamp can be operated open-loop (maximum possible gain) by omitting R5 and shorting R4. Output level will be higher and oscillation is absolutely guaranteed (as long as it's within the opamp's range of course), but distortion is greater and frequency stability isn't as good.
Figure 5.6.1 - Phase Shift Oscillator
With the values shown, the frequency is 1.73kHz, with 0.4% distortion (only odd harmonics). Output voltage is 370mV with ±15V supplies. Traditional frequency calculation isn't possible, because each RC network is loaded by the one following. That means that more gain is needed from the opamp, and the phase shift network interactions make straightforward calculations difficult. If the gain of the amplifier stage is reduced with feedback, there must be sufficient gain to ensure that oscillation is reliable. The amount of gain needed is normally around 30, but is increased if the three RC networks are not identical.
    fo = √6 / ( 2π × R × C ) = 1.77kHz (This is approximate, and only works when all values of R and C are equal)
If R3 is increased to 33k, the frequency decreases to about 1.2kHz. However, you'll need to increase the value of R5 to get more gain or the circuit won't oscillate. This is a useful circuit, but it has limited application. For wider frequency range adjustment, all three resistors can be changed and this will keep the amplitude the same. Triple-gang pots are about as common as RA53 thermistors, so this is not really a viable option. A dual-gang pot could be used in series with R2 and R3, which will increase the frequency range. Output level will vary with frequency though. Note too that the output is at a high impedance, and must be buffered to prevent the external load from affecting the frequency. The signal at the output pin of the opamp has the highest distortion, so isn't generally very useful.
Figure 5.6.2 - Transistor Based Phase Shift Oscillator
You will see this circuit used with the resistors and capacitors interchanged as shown above. This method of wiring the phase shift networks is actually necessary for single transistor, FET or valve circuits, but it is not correct if the gain stage is an opamp. Note that the output is from the transistor's collector, and distortion is fairly high because the RC networks can't filter out the harmonics. The frequency isn't particularly stable, and a simulator indicates that the above circuit oscillates at 380Hz. This may not be expected, as one would think it should run at the same frequency as the opamp version because the RC networks use the same values. The change to the feedback path affects the way the circuit works, and in this circuit, every component affects the frequency.
While an opamp will certainly oscillate when connected the same way, this arrangement provides poor frequency stability and high distortion, so its output is a very low quality 'sinewave'. As an alternative circuit using opamps it's completely pointless and I suggest that you don't bother. It's just a carry-over from the single valve and transistor circuits as shown above, but it makes no sense to wire an opamp phase shift oscillator this way.

More advanced versions of the same principle also exist, and may even be given exotic (or stupid) names (such as 'Bubba' oscillator). It doesn't matter what you call it, it's still a phase shift oscillator. Although traditionally phase shift oscillators have used 3 sections, more can be added - the 'Bubba' oscillator uses 4 phase shift networks. Extra sections give the opportunity for lower distortion, but at the expense of overall parts count. Because of the buffers, the output can be loaded by an external impedance without affecting frequency. Buffer stages reduce voltage losses, but add complexity and cost.
Figure 5.6.3 - Buffered Phase Shift Oscillator
Adding the buffers to a standard phase shift oscillator means that each RC network acts independently. Using 3 opamps as shown above means that the RC networks are truly independent of each other, although the final RC network (R3, C3) is loaded by the feedback resistance. Provided the resistances are high enough, loading is minimal. The gain opamp (U1A) should be used with feedback as shown, or the circuit will have excessive distortion. Each RC network contributes 60° of phase shift, providing 180° in total. Oscillation frequency is 1.26kHz in a simulation, and this circuit requires a gain of at least 8 for sustained oscillation. Frequency is approximately ...
    fo = √3 / ( 2π × R × C ) = 1.25kHz for the values shown
I have no idea where the name came from, but the Bubba (OK, I'll drop the quotes) is actually a useful design for fixed frequencies. Apart from fairly low distortion, it appears to have good frequency stability, which may be critical for some applications. The lowest distortion is from the junction of R4 and C4, but that requires yet another buffer so the load doesn't change the frequency and amplitude. Taking the output from the point shown is a reasonable compromise, and will give a sinewave with under 0.5% distortion.
Figure 5.6.4 - Bubba Phase Shift Oscillator
Adding a fourth phase shift network means that each RC network only contributes 45° phase shift, and this is claimed to improve the frequency stability. This is not a circuit that I've built and tested, but extensive simulations confirm that it performs as expected. Oscillation frequency with the values shown is 720Hz, and the Bubba circuit requires a gain of at least 4 for sustained oscillation. Frequency is approximately ...
    fo = 1 / ( 2π × R × C ) = 723Hz for the values shown
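The three phase-shift variants discussed above differ only in the constant multiplying 1 / ( 2π × R × C ). As a sketch, assuming R = 22k and C = 10nF (values chosen to be consistent with the quoted frequencies, not read from the schematics), all three figures fall out of the same expression:

```python
import math

def phase_shift_fo(r, c, k):
    """fo = k / (2*pi*R*C); k = sqrt(6) plain, sqrt(3) buffered, 1 for the 4-section Bubba."""
    return k / (2.0 * math.pi * r * c)

R, C = 22e3, 10e-9   # assumed values, consistent with the frequencies quoted in the text

print(round(phase_shift_fo(R, C, math.sqrt(6))))  # ~1772 Hz (plain, 3 loaded sections)
print(round(phase_shift_fo(R, C, math.sqrt(3))))  # ~1253 Hz (buffered, 3 x 60 degrees)
print(round(phase_shift_fo(R, C, 1.0)))           # ~723 Hz (Bubba, 4 x 45 degrees)
```

Note how adding sections lowers the constant: each independent RC network only has to contribute a smaller phase shift, so its corner frequency sits closer to the oscillation frequency.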
While it looks relatively complex, it's just repetition, and the cost is fairly low - especially if a quad opamp is used. For maximum frequency and amplitude stability, the opamps used should be FET input (for the high impedance) and rail-to-rail output so that output device saturation voltage has no effect. There are many suitable opamps, and CMOS types should be well suited to any of the phase shift oscillator designs.
A technique that was popular some time ago was the BFO (beat frequency oscillator). Two high frequency oscillators were used, generally operating at several hundred kHz or more. The two signals were fed into an RF mixer, and the audio output was the difference frequency. For example, if one RF oscillator operates at 1,000,000Hz and the other at 1,000,100Hz, the difference frequency is 100Hz.

You may well ask why, and today no-one would bother. BFOs were used to generate sweep signals, and can easily cover from 20Hz to 20kHz in a single sweep. The change required for an RF oscillator is comparatively small in percentage terms (20kHz is only 2% of 1MHz), and was easily accommodated with the valve circuitry that was available at the time. Sweep signals are common today, and are primarily digitally derived as part of a testing suite for amplifiers, speakers, etc. There is still a need for stand-alone sweep generators, but it's far easier (and a great deal cheaper) to use a digital waveform generator or a PC based system, several of which can be obtained on-line as a free download.

One of the issues is that distortion is comparatively high compared to most modern oscillators, and it will be difficult to keep distortion below around 2% even with a reasonable output filter. There will nearly always be vestiges of the two RF frequencies as well, because RF can be notoriously difficult to eliminate completely. BFOs are interesting, but are largely a thing of the past. They do have some nostalgia value though, and for that alone some readers may like the idea.

However, there is no plan to provide circuitry for a BFO, and as noted in the previous section, a single sweep from 20Hz to 20kHz can be achieved with an all-pass filter oscillator.
This is another single frequency oscillator, and is included because it's referenced widely on the Net and you may find it useful as a quick project or just for experimentation.
There are many circuits that use a squarewave oscillator as the basis for generating a sinewave, and what's needed is a filter to remove the harmonics (3rd, 5th, 7th, etc.). Despite claims you might see, this is not terribly effective, and the output is a fairly rubbish version of a 'sinewave'. While tuning is theoretically simple, the filter has to be tuned as well, which makes it quite difficult because you'll need a triple-ganged pot (you can get them, but they are not common and fairly expensive). This means that it's not really a viable way to generate a variable frequency sinewave, and even then the result is a poor one.
According to the literature [ 10 ], it's supposedly a worthwhile circuit, but a quick examination reveals many weaknesses. Firstly, the frequency depends on the voltage appearing across R1 and R2, and that depends on the saturation voltage of the opamp. Most opamps can get to within ~1.5V of each supply rail, but that changes with loading and temperature. In theory, the circuit is self-correcting, because both positive and negative feedback come from the output of U1A. In reality, the frequency stability is ok - not great, just ok.
Figure 5.8 - Squarewave Oscillator + Filter
With the values shown, the frequency is 467Hz, and the output level is 980mV RMS. The application note referenced claims that the filter section should be tuned to the same frequency as the oscillator, but distortion is woeful if you do that. Even with the filter shown (tuned to 239Hz), the distortion is still almost 4% - hardly a good result. However, you may find a need for a low cost oscillator, and this general arrangement works well enough.
Note that the formula shown in the application note doesn't work, so to change frequency you can scale the resistor values (R1, R4 and R5). I don't propose to work out a formula for the frequency - you'll just have to experiment (a simulator is good for basic testing). While it can be improved from the circuit shown, for the most part I'd suggest that it's not worth the effort. Many of the alternatives shown above will give better results, simpler (and more accurate) tuning, and more predictable performance for around the same cost.
The twin-t (aka twin tee) oscillator is something of an oddball in several respects. Rather than using a tuned bandpass filter, it uses a tuned band stop (notch) filter, and the opamp is operated with positive feedback via R1 and R2, with the notch filter in the negative feedback path. Less positive feedback (by making R2 a lower value) results in lower distortion. However, if there's not enough positive feedback the circuit won't oscillate reliably.
Figure 5.9 - Twin-T Oscillator
In my version shown, the twin-t circuit is deliberately slightly unbalanced, so the notch is not as deep as it would be if the values were exact. Perhaps surprisingly, this improves performance slightly. More importantly, it reduces component count (by two parts), making the circuit (ever so slightly) cheaper to build. Very little non-linearity is needed to stabilise the output, and a resistor can be used in series with D1 and D2. Adding as much as 47k barely changes the output level, and has only a minor effect on distortion. As simulated, the distortion with the circuit exactly as shown is around 0.75%.

It works by ensuring that there is very little negative feedback at the tuned frequency, because of the twin-t notch filter. The frequency is determined by ...
    f = 1 / ( 2π × Rt × Ct )
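As a quick worked example, assuming hypothetical values of Rt = 15k and Ct = 10nF (not the actual circuit values, which aren't listed here), the formula predicts a touch over 1kHz, in the same region as the simulated 1.03kHz:

```python
import math

def twin_t_fo(rt_ohms, ct_farads):
    """Twin-T oscillator frequency: f = 1 / (2*pi*Rt*Ct)."""
    return 1.0 / (2.0 * math.pi * rt_ohms * ct_farads)

# Hypothetical values: Rt = 15k, Ct = 10nF
f = twin_t_fo(15e3, 10e-9)
print(f"{f:.0f} Hz")   # ~1061 Hz from the formula
```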
Normally, Rt / 2 would consist of 2 × Rt in parallel, and 2Ct would be 2 × Ct in parallel. While this theoretically should improve performance, there seems little to suggest that it makes a great deal of difference. The twin-t does not lend itself to easy tuning, so it's mainly useful as a single frequency oscillator. As shown, the frequency is 1.03kHz. This is less than the calculated value because 2Ct and Rt / 2 are not exactly double and half (respectively).
It's also fairly easy to make an oscillator using L/C (inductor/ capacitor) tuning, but it's not usually a viable option for audio. It was not uncommon many years ago in the valve era, but the inductor is large, heavy and will pick up magnetic fields at mains frequency. LC oscillators are best kept for radio frequency circuits, where the inductor can be physically small and air-cored. Because this approach is not suited to modern audio applications, LC oscillators are not covered here.

There are several different topologies for L/C oscillators, and amplitude stabilisation is often achieved purely due to the limits imposed by the supply voltage. Fairly low distortion is possible with a high Q inductor, but variable frequency is irksome. Variable inductance is possible of course, but requires mechanical linkages that are usually too difficult to fabricate accurately. Variable capacitance is not viable for audio L/C oscillators because the capacitance is small so the inductance must be very large, and it will pick up unacceptable amounts of mains frequency noise.

L/C oscillators remain common in RF applications, but even there they are often replaced by ceramic resonators, quartz crystals, frequency synthesis, or even MEMS (micro electro-mechanical systems) dedicated oscillators. The inductor is never a major problem at RF, because the inductance needed is small, and they are often used with a small ferrite slug (for tuning) and many are air-cored. These small (physically and electrically) inductors don't pick up appreciable hum, and it's so far away from their operating frequency that it will usually have no influence at all. The capacitance needed is also small, so tuning is not difficult, and the range needed is much smaller (comparatively speaking) than for audio. Audio covers a range of ten octaves, but RF tuned circuits rarely cover more than one octave, and usually much less.
Figure 6.1 - NIC + Gyrator Oscillator
The circuit shown above utilises a NIC (negative impedance converter, U1A) and an L/C oscillator based on a gyrator (U1B). The 'inductance' is created by the gyrator, with C1 in parallel. That creates a parallel tuned circuit, which has maximum impedance at resonance. Working the values, the inductance is (approximately) equal to R4 × R5 × C2 (1.54H). Resonance is ...
    f = 1 / ( 2π × √( L × C )) = 273Hz
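The inductance estimate and the resonance formula can be checked together. All component values below are assumptions chosen to match the figures quoted in the text (L ≈ 1.54H, resonance near 273Hz), not values read from the schematic:

```python
import math

def gyrator_inductance(r4, r5, c2):
    """Approximate simulated inductance of the opamp gyrator: L ~ R4 * R5 * C2."""
    return r4 * r5 * c2

def lc_resonance(l, c):
    """Parallel LC resonant frequency: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l * c))

# Assumed values: R4 = 4.7k, R5 = 3.3k, C2 = 100nF give L ~ 1.55H,
# and with an assumed C1 of 220nF the resonance lands near 273Hz.
L = gyrator_inductance(4.7e3, 3.3e3, 100e-9)
f = lc_resonance(L, 220e-9)
print(f"L = {L:.2f} H, f = {f:.1f} Hz")
```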
When simulated, the frequency was 276Hz, so it's within acceptable limits. However, the circuit is difficult to tune, and a small change in the effective Q of the gyrator or the negative impedance created by the NIC will cause excessive distortion or no output. Distortion (as simulated) is surprisingly low, being less than 0.2% - this isn't wonderful, but it's not bad considering that the amplitude is limited only by the diodes in the NIC circuit. Output level is around 620mV peak, or 440mV RMS. The impedance of the NIC is nominally -30k, set by R1. This particular NIC circuit is one of the most common you'll come across, but this is the first time I've seen it used as anything other than a curiosity. The two diodes limit the negative impedance region and thus the amplitude of the oscillation.

It works because a negative impedance is inherently unstable, and when connected to a tuned circuit (C1 plus the gyrator inductor) it will oscillate. Many years ago, tunnel diodes (which have a negative impedance region) were common for RF oscillators, requiring only the diode, tuned circuit and a low voltage power source. While tunnel diodes were moderately common in the 1960s and 70s, they are much less so today. If you want to know more about them, a web search will provide many circuits and explanations.

This is an interesting circuit, and readers may well find it to be useful. It uses two rather 'off-the-wall' concepts, the NIC and the gyrator. These are both fascinating and obscure, although the linked articles on the ESP site are dedicated to the two circuit ideas. The 'common' NIC shown above is seen all over the Net as an example of how to build a NIC, but working examples are hard to come by.

The circuit shown is a simplified version of one sent to me by Steven Dunbar, AD0DT.
Diode waveform shaping is very common with low cost analogue function generators (it's inside the IC itself in most cases), but the lowest distortion that is typically available is around 1%, although as low as 0.25% is claimed in some literature. The input signal is a triangle waveform, and the diodes progressively clip the peaks to give a reasonably smooth sinewave. Although the distortion is usually audible, it is still usable for simple signal sweeps, for example to find the resonant frequency of a speaker. There are countless different diode clipping schemes on the Net, and the one shown below is purely an example.

Triangle waveforms are very easy to generate with simple opamp circuits, and that makes them attractive for low-cost function generators.
Figure 7.1 - Waveform Shaping Example
In the example shown, a ±6.6V triangle wave input gives the lowest distortion. Because of the different impedances in each of the 4 clipping circuits, the output amplifier sums a variety of clipped and un-clipped waveforms, with the end result looking rather like a sinewave. It is very important that the triangle waveform is perfectly symmetrical, or even-order harmonic distortion rises rapidly.
Figure 7.2 - Waveform Shaping Input And Output
The distortion at the output is about 1%, which is fine for basic tests, but is obviously useless for measuring distortion. It's generally not mentioned, but the output waveform will typically have more high order harmonics than low order. For example, an FFT plot of the output shows a little 3rd harmonic (at about -58dB), but over 10dB more 5th (-42dB), 7th (-46dB) and even 6dB more of the 9th (-51dB). A basic filter can get THD below 1%, but it's hard to improve on that without additional complexity. This method is suitable for integration, but as a discrete circuit there are too many parts for a rather poor end result.

Note that the output was inverted so the direct relationship can be seen (the circuit shown inverts the output signal).

Another option is to use a logarithmic amplifier. While this is theoretically better than diode clipping, in reality there's usually very little difference between them. Unless proper temperature controlled log ICs are used there will usually be a small change in amplitude and distortion as the ambient temperature changes - this applies to diode shaping as well, although the effects are likely to be less severe with the simpler diode clipping circuits.
Figure 7.3 - Improved Sine Approximation
By using a transistor differential pair, it's possible to shape a triangle wave into a respectable approximation of a sinewave [ 13 ]. The two transistors must be matched and in close thermal contact with each other. The circuit's been simplified, showing a current source for the emitters, but this needs to be made using additional transistors. Power to the opamps is not shown. Note that the output impedance of the triangle wave generator must be as low as possible (an opamp buffer is recommended).

The circuit as shown is capable of less than 0.2% distortion, which is considerably better than that obtained using the diode clipping circuit shown in Figure 7.2. The reference document contains the formulae for calculating the values, which are quite critical. The amplitude of the triangle waveform must be exactly ±1V for best performance. A deviation of only 100mV causes the distortion to rise dramatically. The requirement for odd value resistors (E48 Series, 48 values per decade) is limiting, and most hobbyists won't have them in stock. Most hobby suppliers only stock the E24 Series, and some don't even go that far.
Digitally generated sinewaves are becoming much more common than they once were, but most have a limited bit depth which limits the usefulness of such techniques. I would suggest that anything less than 8 bits is completely useless, because distortion will be too high. Eight bit resolution gives a theoretical distortion of 0.5%, and this is halved for each additional bit used. Two techniques that reduce distortion are the addition of a filter (preferably tracking) to remove the harmonics, and adding dither - essentially random (white) noise - at a very low level. For serious work, nothing less than 14 bits is really much use, as distortion is still too high unless post-filtering and dithering is used. A 14 bit system should be able to provide distortion below 0.01%.
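The 'halved for each additional bit' rule can be written down directly. This is only the rule of thumb used above (anchored at 0.5% for 8 bits), not a rigorous quantisation-noise calculation:

```python
def theoretical_thd_percent(bits):
    """Rule of thumb from the text: ~0.5% THD at 8 bits,
    halving with each additional bit of resolution."""
    return 0.5 * 2 ** (8 - bits)

print(theoretical_thd_percent(5))    # 4.0 (a crude 5-bit generator)
print(theoretical_thd_percent(8))    # 0.5
print(theoretical_thd_percent(14))   # ~0.008, i.e. below 0.01%
```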
Figure 8.1 - Digital Sinewave Generation
The above is an example of a digitally generated sinewave. The output frequency is 1/10th of the input (clock) frequency. In the example shown it's limited to 5 bits, so while the theoretical distortion is 4%, this can only be achieved if the signal is filtered. More aggressive filtering will reduce the distortion. With a 220nF cap in parallel with R5, distortion is reduced to a little over 2%. This type of generator can only be used with a tracking filter as described in a Silicon Chip magazine article that had a design for a complete system. However, in the SC article, the values of R2 and R3 were wrong - they were specified as 16k, but this makes the distortion a great deal worse than it should be.
With some further experimentation, it turns out that a 4-stage twisted ring counter may actually provide lower distortion than the 5-stage version shown, despite theory saying otherwise. Certainly the unfiltered distortion of the 5-bit version is lower, but after filtering the difference is significant. If you use a 4-stage ring counter, the 2 × 6k8 resistors have to be reduced to 3k9. Ultimately, it all depends on the filter that's used to remove the clock frequency. Normally, it's suggested that the outputs should be taken from n-1 counters (where n is the number of flip-flops). A 4-stage twisted-ring counter violates that rule, but gives a better result!

Calculation of the resistor weightings used to approximate a sinewave is not as straightforward as one might hope. I've seen a couple of papers on the subject [ 14, 15 ] and they are quite different. The values shown above were not calculated, but were determined empirically to obtain values that gave the lowest (unfiltered) distortion. With a 5-stage counter, the unfiltered output has a (wide band) distortion of about 18%, and the output filter (not shown) cleans that up. Around 0.1% THD is possible with only an 18dB/ octave low-pass filter, tuned to the frequency being generated.
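The ~18% unfiltered figure is easy to reproduce with an idealised model. The sketch below assumes perfect resistor weighting, so each of the ten steps per cycle sits exactly on an ideal sinewave; the distortion that remains is dominated by the 9th and 11th harmonics, which is why a low-pass filter tuned to the output frequency cleans it up so effectively:

```python
import math

STEPS = 10          # one cycle of a 5-stage twisted-ring (Johnson) counter
N = 1000            # samples per cycle for the numerical spectrum

# Idealised stepped sinewave: each of the 10 levels is held for a tenth of
# the cycle, and the levels are exact samples of a unit sinewave.
wave = [math.sin(2 * math.pi * int(STEPS * i / N) / STEPS) for i in range(N)]

def harmonic_amplitude(x, n):
    """Magnitude of the n-th harmonic via a direct DFT of one period."""
    re = sum(v * math.cos(2 * math.pi * n * j / len(x)) for j, v in enumerate(x))
    im = sum(v * math.sin(2 * math.pi * n * j / len(x)) for j, v in enumerate(x))
    return 2.0 * math.hypot(re, im) / len(x)

fund = harmonic_amplitude(wave, 1)
thd = math.sqrt(sum(harmonic_amplitude(wave, n) ** 2 for n in range(2, 50))) / fund
print(f"Unfiltered THD ~ {100 * thd:.1f}%")
```

The result lands in the 17-18% region, with the energy concentrated at the 9th and 11th harmonics (and their images); real resistor weighting errors add low-order harmonics on top of this floor, which is why the values are tweaked empirically.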
A viable option for digital sine generation is to use the sound card in a PC. There are several freeware and shareware programs available on the Net that will generate sine, square and triangle waveforms. With 16 bits and a 44.1kHz or higher sampling rate, the distortion should be very low, but it depends on your sound card. Most modern audio analysis and measurement sets use digital processing throughout, typically using 24 bits and a 192kHz sampling rate. Some (especially the very expensive kind) may use higher resolution and/ or a higher sampling rate.
The Analog Devices AD9833 is one option for a programmable sinewave generator, but it requires a micro-controller to tell it what to do, and has only 10 bit resolution. While this is certainly a viable solution, the IC itself is only available in an SMD (surface mount) package, and when you add all the support devices (including a display screen so you know the settings) it becomes a fairly major undertaking.
Throughout this article, I have used variable resistance (potentiometers) as the frequency control device, but these have known issues that limit their usefulness. While there are some very good quality wirewound pots that would be suitable for long-term reliable use, these are not commonly available with high values or in dual-gang configurations.

Commonly available (cheap) pots have poor tracking, and that increases amplitude bounce as the frequency is changed. As the pot wears, there will also be small 'dead spots' caused by minute gaps in the contact area as it gradually wears away. Dust can also cause dead spots. Anything that disturbs the tracking causes the amplitude stabilisation network to work that much harder, and amplitude bounce is extremely annoying when carrying out many common tests.

One common fix for this is to use (old AM) radio type tuning gangs - variable capacitors. Because the plates mesh together without ever touching, there is no wear, although bearings can fail if the instrument is heavily used. I have a number of pieces of test gear that use variable capacitors, and have never had a failure of any kind. Meanwhile, I have replaced pots several times in other equipment.
There is no doubt that the variable capacitor is by far a better control system, but it too has its problems. Most importantly, the capacitance is low, usually around 100-500pF. This means that circuit impedances are high - for 20Hz at 500pF, you need resistors of 15.9M ohms. Small amounts of stray capacitance cause major problems, and the likelihood of hum pickup is greatly increased.

In addition, all amplifiers used (whether discrete or opamp) must have FET inputs because of the very high impedances involved. However, nearly all of the circuits shown above can use a variable capacitor for tuning, and for different ranges the fixed tuning resistors are switched. Some more recent oscillators (Chinese origin) use variable caps that have higher than normal capacitance. This is achieved by using a plastic dielectric film between the plates, thus increasing the capacitance but at the expense of thermal stability.

If possible, use conductive plastic pots for tuning. They are considerably more expensive and harder to get (especially dual-gang types), but they offer better tracking and more stable performance over time than 'ordinary' carbon pots. Dual wirewound pots in the values needed (typically 10 to 20k) are now virtually impossible to obtain.
The choice of opamps and capacitors depends on your expectations. If you use a diode clipping circuit to stabilise an oscillator, then quite obviously choosing expensive, low distortion opamps would be silly. Likewise, it would make no sense to use expensive capacitors either, since their contribution to overall distortion is far lower than that of the clipping network. It would be equally silly to use high quality opamps and amplitude stabilisation, then use high-k ceramic capacitors, which have dreadful (and often easily audible) distortion.
If you intend to optimise the circuit, then you must select opamps with vanishingly low distortion (they will be expensive). You also need to choose the topology carefully to minimise common-mode distortion in the opamps. Capacitors will ideally be polypropylene for lowest distortion and minimum thermal drift. Low value caps should be C0G (aka NP0) ceramic types. Polystyrene is also good, but you might want to avoid silvered mica. None of these suggestions will affect audible distortion one way or another, but for a measurement system it's critical to have the cleanest possible waveform so the final measurement does justice to the device being tested.
All resistors should be metal film, as they are quieter and more stable than carbon types. Avoid carbon composition resistors, and ideally avoid all carbon resistors unless they are in non-critical parts of your circuit. The power supply (whether single or dual) needs to be well regulated and fairly quiet (particularly single supplies). Project 05 is a good choice.
The final circuit needs to be in a shielded metal box to prevent mains hum and RF pickup. Both can ruin an otherwise perfectly good distortion measurement. In common with nearly all commercial test equipment, I suggest that outputs should use BNC connectors. Since this is not a project/ construction article, the details of how you build the circuit are up to you. Apart from these few tips, no further construction advice can be provided.
While I've concentrated on variable frequency oscillators, in some cases it's easier to use a number of single-frequency oscillators. These are much easier, because there's no need for 'excess' gain and subsequent problems with stabilisation. Each oscillator can be optimised for just one frequency, and distortion can be reduced to almost nothing with a high-Q filter. The two-opamp gyrator described in Project 218 is a good candidate. I use two filters tuned to 400Hz and 1kHz that remove the distortion from my digital waveform generator, with it well below my measurement threshold.
It's not difficult to get a distortion reduction of more than 100 times, so 0.5% distortion is reduced to 0.005%. While this is alright for many applications, it's still not good enough if you wish to characterise an ultra-low distortion opamp circuit or a high-quality DAC. You can use two filters in cascade, and this will improve the distortion further, at least until the opamps used in the filters become the dominant source of distortion and/or noise.
In conjunction with an oscillator with reasonably low distortion to start with, it should be possible to get a total distortion of less than 0.0001% (-120dB). There are a few very/ ultra low distortion oscillators described on the Net, including Project 174 which was contributed in 2016. This is a spot-frequency oscillator, and it's not suitable for continuously variable operation. The choice of capacitors is very limited when you need extremely low distortion, and the ideal dielectric is either polystyrene or NP0 (aka C0G) ceramic.
Spot-frequency oscillators are not at all uncommon when the highest performance is desired. It does mean that you need a separate oscillator for each test frequency, but there are few alternatives. The 16th reference is a well-known design that claims distortion below -140dB. While you need multiple oscillators, it's not particularly expensive to implement (other than polystyrene caps, which are fairly hard to get, come in few values, and are comparatively expensive).
Very low distortion may seem like a requirement in all cases, but it's not. There are oscillators that are designed specifically to have extremely good level control (so the level doesn't change as the frequency is altered). Because these need very fast level correction to maintain the output voltage at the desired level, something else has to suffer. This is almost always distortion. An example is the HP 208A oscillator. This design has almost perfect level control, with the output voltage remaining steady over the full frequency range. As the frequency is changed (using a high-precision dual pot), the level doesn't change by more than 0.125dB.
+ +To get this degree of amplitude accuracy, something else has to suffer. Distortion is quoted as "less than 1%", which is not wonderful, but 'adequate'. To get the best possible settling time, the peak detector (that controls amplitude) has different capacitor values for each frequency range. Each is designed to provide the fastest amplitude response based on the frequency range in use. Most other sinewave oscillators use a long time-constant for the amplitude control to get low distortion.
A reasonable approach is ...

Amplitude Stability / Fast Settling / Low Distortion ... Pick any two!
If you need low distortion, you almost always have to accept either mediocre amplitude control or long settling times. It is possible to have all three, but the circuit becomes very complex, and the tracking of the control element (variable resistance or capacitance) has to be close to perfect! Common dual gang pots have poor tracking, and almost invariably cause amplitude 'bounce' or momentary drop-outs. For looking at frequency response (for example), you need (close to) zero amplitude bounce or dropouts, but distortion is not a major issue provided it's less than ~2% or so.
Function generators using 'sinewave shaping' (see Section 7) are very good in this respect, and there is zero bounce. Amplitude stability is temperature dependent, but for most tests that's not a major problem. A typical test will be fast enough that thermal drift won't cause any amplitude variation.
Based on the information in this article, you should now be able to build yourself an audio sinewave generator. They are not simple, and obtaining vanishingly low distortion is a serious challenge for all techniques, both analogue and digital. The one component that made a low distortion oscillator a comparatively simple project - the RA53 thermistor - is now gone. Apart from a few old buggers like me who've managed to squirrel a couple away over the years, they are virtually unobtainable. Small lamps do work surprisingly well though, and this remains a viable option. You must ensure that the lamp voltage is at least 10% of the rated voltage. For example, a 28V lamp should be operated with at least 2.8V RMS across the lamp itself, and preferably a bit more. If this isn't done, continuous amplitude bounce may occur with some lamps.

Although I have shown that the lamp will work, lamps are less predictable than the thermistor unless you have a reliable source. More complex techniques using FETs or LED/LDR optocouplers may often be needed, but most are unable to get distortion figures that even approach the thermistor or lamp. Eventually, all audio oscillators will probably be digital, because the analogue techniques are getting too hard. Even finding a good quality dual-gang pot with accurate tracking between the sections is difficult - once, high value wirewound pots were made that were perfect, but these too have all but vanished.

To be able to take distortion measurements, the easiest approach will almost always be to use a general purpose audio generator followed by a low-pass (or band-pass) filter. This can be made switchable, so you can have a few spot frequencies for distortion measurements, but still have the ability to sweep the signal over a wide range. The filter will reduce the level of harmonics, and would normally be built so that the -3dB frequency of the filter corresponds to the measurement frequency or slightly below (amplitude will be reduced). For most applications, a 12dB/octave (second order) filter will be sufficient, and will reduce distortion by a significant amount. Naturally a higher order filter will reduce the distortion further. In a test that I ran, initial distortion was fairly high - almost 0.7%. A 12dB/octave filter reduced this to 0.2% and a 24dB/octave filter reduced it further to 0.06%. Naturally, if you start with distortion at an already sensible value (around 0.02% or so), a 24dB/octave filter will get you to perhaps 0.002%.
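The improvement you can expect from the filter is easy to estimate. The sketch below (my own illustration, not from the article) assumes an ideal Butterworth low-pass response with its corner at the test frequency itself; a real filter will differ somewhat, but the numbers line up reasonably well with the measured reductions quoted above.

```python
import math

def butterworth_lp_gain(f, fc, order):
    """Magnitude response of an ideal Butterworth low-pass filter."""
    return 1.0 / math.sqrt(1.0 + (f / fc) ** (2 * order))

# Corner frequency set at the (1kHz) test frequency.
fc = 1000.0
for order, label in ((2, "12dB/octave"), (4, "24dB/octave")):
    g2 = butterworth_lp_gain(2 * fc, fc, order)   # 2nd harmonic
    g3 = butterworth_lp_gain(3 * fc, fc, order)   # 3rd harmonic
    print(f"{label}: 2nd harmonic {20 * math.log10(g2):.1f} dB, "
          f"3rd harmonic {20 * math.log10(g3):.1f} dB")
```

A second-order filter knocks the second harmonic down by roughly 12dB (about the 0.7% to 0.2% seen in the test), and a fourth-order filter by about 24dB, with the higher harmonics attenuated progressively more.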
Ultimately, you will quickly run out of distortion signal and be left only with noise. Even here, the filter helps a lot, because all noise above the test frequency is filtered out. With a much narrower bandwidth, noise is diminished significantly.
As I hope is now very clear, sinewaves are not simple. They are without doubt the hardest signal to generate accurately (i.e. with minimum possible distortion), and I hope the information presented gives you a few new ideas. There are countless circuits, academic papers, discussions (between engineers and some hobbyists) and forum posts on the Net, with many discussions of stabilisation techniques and the optimum topology overall. If (as many 'audiophools' insist) "sinewaves are simple", then none of this would be necessary. Alas, it is the very 'simplicity' of sinewaves that makes them so difficult to produce. A single frequency, with no distortion or noise is, in fact, an impossible dream!

Despite all the advances in electronics over the years since Hewlett-Packard started building Wien bridge audio oscillators in a converted garage, this still remains one of the best overall topologies. To be able to get the wide range of frequencies needed for response measurements, we still need to use discrete circuits. Few (affordable) opamps have good enough high frequency response at full level - we need at least 100kHz, preferably more.

While the oscillator itself is not difficult, the downfall of almost every circuit is the stabilisation technique. There is no perfect method, the simple ones have either disappeared or are in the process of doing so (vacuum NTC thermistors and small incandescent lamps respectively), and eventually the stabilisation circuit ends up being vastly more complex than the oscillator. There are many circuits on the Net that manage that, as does Project 174. Unfortunately, there are few options, and it's getting even worse as the number of available JFETs continues to shrink as well.

For what it's worth, the audio generator I use the most now is a digitally synthesised function generator, which includes sine, triangle and square waveforms, dual oscillators (which can be synchronised), tone burst facilities, a sweep generator, arbitrary waveform generation, plus a great deal more. It's very flexible, but it's also less convenient than a standard analogue oscillator for some applications. Residual distortion is 0.02%, so it's (just) good enough to be able to take distortion measurements on most equipment. However, a 'traditional' analogue oscillator is much more user-friendly for many tasks, and there is still a need for a simple oscillator with distortion below 0.1%.

For most hobbyists, I would not recommend a digital function generator, as it's far more than is necessary for most testing and they aren't as convenient as a normal audio oscillator. I needed it because of work I've been doing that involves very low frequencies (0.1Hz or less in some cases), but that's obviously not needed for normal audio work. One design that is recommended is Project 86, which uses the technique described in Section 6.5 in this article. It's not perfect, but it does perform surprisingly well. As noted above, there's also a PCB available, which makes it very easy to build.

While many of the circuits shown here have been built and tested, many others are simulated. Unfortunately, the simulator I use doesn't include a non-linear resistor, so lamp stabilisation was tested on the workbench for most (but not all) lamp stabilised circuits shown. JFET circuits were simulated, and LED/LDR circuits were also bench tested (the simulator doesn't have an LED/LDR optoisolator either).
References

1 - Thermistors, Lamps, LED/LDR Stabilisation Techniques - Linear Technology Application Note AN43
2 - "Easily Tuned" Oscillator - National Semiconductor Linear Brief LB-16, 1995
3 - Sine-Cosine Oscillator (massmind.org) - (Note that the JFET is not connected properly, so expect more than 'normal' amplitude bounce.)
4 - Intersil Application Note AN1087, March 20, 1998
5 - Sinewave Generation Techniques - National Semiconductor Application Note AN-263, 1999
6 - Design of Opamp Sinewave Oscillators - Ron Mancini, Texas Instruments Application Note
7 - Wien Bridge - Classic circuit, multiple sources (including several above)
8 - Wien-Bridge Oscillator With Low Harmonic Distortion - J.L. Linsley-Hood, Wireless World, May 1981
9 - Sine Wave Oscillators - Ron Mancini and Richard Palmer, Texas Instruments SLOA087
10 - A Quick Sine Wave Generator - Texas Instruments SNOA839
11 - Sine-Wave Oscillator - Texas Instruments SLOA060
12 - Sine-Wave Oscillators - Texas Instruments SNOA665
13 - An Improved Sine Shaper Circuit - till.com
14 - Digital Generation of Low-Frequency Sine Waves - Anthony C. Davies, IEEE, June 1969
15 - Create Sinewaves Using Digital ICs - Don Lancaster, American Radio History, November 1976
16 - An Ultra Low-Distortion Oscillator With THD Below -140dB - Vojtech Janásek
Recommended Reading
Designing With Opamps - Part 1 and Part 2 - ESP
Elliott Sound Products - Low-Power DC Supplies
As an extension from Small Power Supplies (Part I) and Small Power Supplies (Part II), this article concentrates on practical solutions, without being sidetracked by the many extra details provided in the first two articles. There is some duplication, but not as much as you might think when looking at them. The first article has more detail about regulators and how they work, but that's a purely theoretical examination that won't help you to build a supply. The second looks at transient response and noise, which are largely irrelevant to the supply ideas described here.
When building projects, there are countless reasons that you'll need a low-voltage power supply to power 'stuff' that has little or nothing to do with the audio. These range from 12V trigger circuits (so an external 12V input turns the gear on), to powering a soft-start circuit such as Project 39, or providing power to a speaker protection board (e.g. Project 33). Your project may require a push-on/ push-off circuit such as described in Project 166, or use a PIC or microcontroller that needs its own power, either full-time or just when the amp (or whatever else) is powered on. The question is: which supply is best?

There's no simple answer, as some auxiliary circuitry may only need a few milliamps, while others might need a great deal more. If it's permanently on, reliability (and safety against possible fire) becomes an issue you have to consider, and you ideally need something that will last for at least the life of the final product, and preferably more. This may mean a simple mains transformer-based PSU if you don't need much current, and although they draw more power in standby than a modern switchmode supply, they tend to have an indefinite life (20+ years is usually easily achieved).

Across the Web, there are countless designs for low current (typically 1A or less) power supplies for preamps, small PIC based projects, ADCs, DACs and almost any other project you can think of. Many are very basic, using nothing more than a resistor and zener diode for regulation, while others are very elaborate. For most beginners and many experienced people alike, it often becomes harder than it should be. You have to make a decision, based on what you need (voltage and current), how much you're willing to spend, and expected life. If you need to power a relay (or several), consider that a 'typical' 10A relay with a 12V coil has a resistance of ~270Ω, and will draw 44mA, a dissipation of a bit over ½W. Higher voltage coils draw less current, but for a given size of relay, the power is fairly constant regardless of the voltage rating.
Ultimately, the final choice depends on the application, but for most ancillary gear, a 12V, 1A supply will cover most requirements. If you're using a conventional supply (i.e. 50/60Hz transformer, rectifier, filter and regulator), you need a transformer of around 30VA to get a clean 12V, 1A regulated supply. Where the current needed is low (~100mA), a 6VA transformer will suffice. Sometimes you might not even need that much, so you may get away with a 2-3VA tranny. You need to beware of the pitfalls (see Section 1.1 which looks at the ratings in more detail). A small SMPS (switchmode power supply) will often be more economical, but you sacrifice long-term reliability.

There may be applications where a 5V supply is preferred, perhaps for equipment that has Bluetooth or LAN connections that need to remain active. If that same supply is expected to activate relays, you need much higher current for a 5V relay than a 12V relay. For example, where a 12V relay may draw ~45mA, one of the same 'family' with a 5V coil will draw almost 110mA. The power consumed is the same though, so using a 5V supply certainly isn't out of the question. However, some ancillary equipment may not be able to function with 5V - P33 and P39 for example. The voltage is too low for the circuits to operate normally. One solution is to use a 12V supply with a secondary regulator to provide 5V. Small switchmode buck converters (step down) can be used to get high efficiency. However, the no-load current of these may be higher than a simple linear regulator.
There is an endless fascination by some to build the smallest and cheapest power supply possible. Many circuits can be found that don't even use a transformer, and while some have acceptable (or at least 'adequate') warnings about safety, others do not. Transformerless power supplies are not considered here (see Transformerless Power Supplies - How To Configure Them Properly for more info). In general, these are discouraged, because they are inherently dangerous.

All of the designs shown are intended for use where the DC is fully isolated from mains voltages. Make sure that you read the Dangerous Or Safe? - Plug-Packs (aka 'Wall Warts') Examined article before you embark on the use of an AC/DC switchmode supply. Some are likely to be lethal (especially if purchased from eBay, Amazon or Ali Express, for example). Many of these claim to be approved, but some are incapable of passing the most rudimentary approvals tests. The majority of the linked article is not included here, but it is very important that you understand that some SMPS are complete rubbish and/or dangerous.

If you are not experienced with mains wiring, do not attempt the following circuits. In some countries it may be unlawful to work on mains powered equipment unless you are qualified to do so. Be aware that if someone is killed or injured as a result of faulty work you may have done, you may be held legally responsible, so make sure you understand the following ...
WARNING: The following description is for circuitry, some of which requires connection to mains voltage. Extreme care is required to ensure that the final installation will be safe under all foreseeable circumstances (however unlikely they may seem). The mains and low voltage sections must be fully isolated from each other, observing required creepage and clearance distances. All mains circuitry operates at the full mains potential, and must be insulated accordingly. Do not work on the power supply while power is applied, as death or serious injury may result.
For anyone who is unfamiliar with the terms 'creepage' and 'clearance' as applied to electrical equipment, they are defined as follows ...
Creepage: The shortest distance across a surface (PCB fibreglass or other insulating material) between conducting materials (PCB traces, etc.). Allow at least 8mm for general purpose equipment.

Clearance: The shortest distance through air between conductors. Again, 8mm is recommended, but it may be reduced if there is an insulation barrier between the conductors.
All countries have electrical wiring codes and standards, and compliance may be voluntary, implied or (in a few countries) mandatory (at least for some products). In any case, if a product is found to be dangerous, there will usually be a recall, which may be mandatory if the safety breach is found to be a built-in 'feature' of the product that renders it unsafe or dangerous. It is the responsibility of anyone who builds mains powered equipment to ensure that it meets the requirements that apply in the country where it's built or sold. The authorities worldwide take electrical safety seriously, and woe betide anyone who falls foul of the standards (and subsequently the law courts) by killing or injuring someone.
The power supplies described here are intended to power 'ancillary' circuitry, such as a speaker protection circuit, or perhaps a microcontroller or a motorised volume control. For powering preamps and other audio circuits, you'd typically use the P05 power supply, which is designed specifically for powering audio circuitry. I've shown 12V as the output voltage for the examples, but it can range from 5V up to 24V, depending on your needs.

The general schemes shown here range from around 50mA up to 1A. Lower current means lower cost, so there's no need to build (or obtain) a 1A supply if you only need 50mA, unless the cost is low enough to justify the added current capability. This is especially important for linear supplies, where the transformer is the most costly item. A 2VA transformer may be obtained for less than AU$10, and can supply up to 70mA. A 7VA transformer (250mA DC) will be about 50% more, but if you need more (say 18VA for 600mA DC) you'll pay an extra 25% again. Above that, the prices are significantly higher from most suppliers. As a general rule, assume that the maximum DC output current is roughly half the AC output current. The rectification and smoothing process has a poor power factor, and 2:1 is a safe margin (albeit generous in some cases). The current ratings listed above assume a transformer with a 15V secondary. The DC output will be around 20V with light loading, sufficient to allow for a regulator.
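The 2:1 rule of thumb above is easy to apply in code. This sketch (mine, not from the article) reproduces the DC current figures quoted for 15V-secondary transformers:

```python
def max_dc_current_ma(va, v_secondary):
    """Rule-of-thumb maximum DC output current: the transformer's rated
    AC secondary current, divided by two to allow for the poor power
    factor of the rectifier/filter."""
    i_ac_ma = 1000.0 * va / v_secondary
    return i_ac_ma / 2.0

for va in (2, 7, 18):
    print(f"{va} VA, 15 V secondary: about {max_dc_current_ma(va, 15.0):.0f} mA DC")
```

The results (roughly 67mA, 233mA and 600mA) match the ~70mA, 250mA and 600mA figures in the text, remembering that the rule is deliberately conservative.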
The theory of small supplies depends on their technology (linear vs. SMPS), but all we're interested in doing is obtaining a power supply that can be incorporated in a chassis as part of the main circuitry. This may be used to power a Project 33 speaker protection, a Project 39 inrush limiter, or any of the other things that you may wish to include in your construction. You can use a basic regulator from the main power supply, but that may not be advised for any number of reasons. In particular, the dissipation of the regulator may become excessive, especially if the main supply voltage is greater than 35V or so.

However, it remains an option and is included here because it's often the easiest way to get a low-voltage supply with the minimum of fuss. In general, this approach has limited current (around 100mA maximum) because the regulator may be dead simple, but it will dissipate power and will need a heatsink. This instantly increases the cost unless the chassis is aluminium (at least 1.5mm thick), which can be used as the heatsink. Note that circuits such as the Project 39 inrush limiter should never be operated from the main supply. If there's a fault, the circuit gets no power, and damage is guaranteed.
Since most hi-fi products are powered from the mains, we need to galvanically isolate the output of the supply from the mains voltage. This is a vital safety requirement, and cannot - ever - be ignored, regardless of output voltage or power requirements. Galvanic isolation simply means that there is no electrical connection between the mains and the powered device. A transformer satisfies this requirement, but is not the only solution. One could use a lamp and a stack of photo-voltaic (solar) cells, but this is extremely inefficient. This technique is used in some isolated MOSFET gate-driver ICs, but they only have to supply a few microamps. Because most of the alternatives are inefficient or just plain silly, transformer based supplies represent well over 99.99% of all power isolation methods. Switchmode supplies also use a transformer, so they are included.

Transformers only work with AC, so the output voltage must be rectified and filtered to obtain DC. This is shown in Figure 1.1 - the transformer, rectifier and filter are shown on the left. For simplicity, single supply circuits will be examined in this article - dual supplies essentially duplicate the filtering and regulation with the opposite polarity. Since the idea here is to power ancillary circuitry, a negative supply is rarely needed. The filter is the first stage of the process of ripple (and noise) removal, and deserves some attention. However, many applications aren't particularly fussy, and while the next circuit can be improved, in many cases there's simply no point.

C1 (the filter capacitor) needs to be chosen to maintain the DC (with superimposed AC as shown in Figure 1.2) above the minimum input voltage for the regulator. If the voltage falls below this minimum because of excess ripple, low mains input voltage or higher current, noise will appear on the output - even if the regulator circuit is ideal. No conventional regulator can function when the input voltage is equal to or less than the expected output. It can be done with some switching regulators, but that is outside the scope of this article. Remember that the transformer's output current will be roughly twice the DC current. The regulation of small transformers is generally awful, so the simple circuit shown in Fig 1.1 is only suitable for around 150mA DC output, requiring a transformer with no less than a 4VA rating. The secondary voltage is 15V because small transformers have very poor regulation. You might get away with a 12V secondary, but there's very little headroom.
In the above schematic, there is about 300mV RMS (950mV peak-peak) ripple at the regulator's input, but only 10mV RMS (34mV p-p) at the output of the discrete regulator. This is a reduction of 30dB - not wonderful, but not bad for such a simple circuit. Load current is 120mA. With the addition of one extra resistor and capacitor to create a filter going to the base of Q1, ripple can be reduced to almost nothing. If you wish to experiment, replace R1 with 2 x 560Ω resistors in series, and connect the junction between the two to ground via a 100µF capacitor. This will reduce ripple to less than 300µV - a 62dB reduction. Alternatively, one might imagine that just adding another large cap at the output would be just as good or perhaps even better. Not so, because of the low output impedance. Adding a 1,000µF cap across the load reduces the output ripple to 3.8mV - not much of a reduction.
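For a rough sanity check on the input ripple figure, the standard approximation assumes the filter cap supplies the full load current for one half-cycle of the mains. This sketch is my own illustration (it uses the 1,000µF/120mA/50Hz values implied by the text, and is deliberately pessimistic, since the cap actually discharges for somewhat less than a full half-cycle):

```python
def ripple_pp(i_load, c_farads, f_mains=50.0):
    """Worst-case peak-to-peak ripple for a full-wave rectifier,
    assuming the cap supplies the load for an entire half-cycle:
    dV = I / (2 * f * C)."""
    return i_load / (2.0 * f_mains * c_farads)

# 120 mA load, 1,000 uF filter cap, 50 Hz mains.
print(f"{ripple_pp(0.12, 1000e-6):.2f} V p-p")
```

The approximation gives 1.2V p-p against the ~0.95V p-p quoted, which is about right for a worst-case estimate; it also shows directly why doubling C1 halves the ripple.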
While simple, a discrete regulator will actually cost more to build and use more PCB real estate than a typical 3-terminal IC regulator. The IC will also outperform it in all significant respects. You must also remember that the discrete regulator has no current limiting, so a shorted output will almost certainly destroy the transistor! It's not difficult to add basic current limiting, but even in its simplest form it will add a low-value resistor and a transistor or a couple of diodes. On the positive side, the discrete regulator with a bigger transistor can handle a much higher input voltage. If you use a Darlington (e.g. TIP122 as used in the Rev-B P33 circuit), the input voltage can be up to 100V. R1 would be increased to suit, sized to provide a nominal base current of 5mA ...
R = ( Vin - Vout ) / 5m     For example, for a 56V supply ...
R = ( 56 - 12 ) / 5m = 8.8kΩ (use 8.2kΩ, preferably at least 0.5W)

The formula is 'close enough'. Aiming for accuracy is not required (and would be pointless) because there are too many variables. You'll need to use a 13V zener diode (or two series diodes) to compensate for the extra base-emitter junction of the Darlington transistor.
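The same formula in code, with the resistor's dissipation added (the dissipation check is my addition, and is why at least 0.5W is suggested):

```python
def base_resistor(v_in, v_out, i_base_ma=5.0):
    """R1 sized for a nominal base current (the formula above)."""
    return (v_in - v_out) / (i_base_ma / 1000.0)

r = base_resistor(56.0, 12.0)   # 8,800 ohms; use the nearest standard value (8.2k)
p = (56.0 - 12.0) ** 2 / r      # dissipation in R1
print(f"R = {r:.0f} ohms, P = {p:.2f} W")
```

With 44V across 8.8kΩ, R1 dissipates about 0.22W, so a 0.25W resistor would run at its limit; 0.5W gives a sensible margin.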
The discrete regulator in Figure 1.1 is very basic - it has been simplified to such an extent that it is easy to understand, but it still works well enough for many basic applications. The output ripple of the IC version is not shown, but will generally be well below 1mV p-p. Prior to the introduction of low-cost IC regulators, the Fig. 1.1 circuit used to be quite common, and a very similar circuit was common using valves (vacuum tubes). Early voltage references were usually neon tubes, designed for a stable voltage. These will not be covered in this article.

Referring to Fig. 1.2, it should be obvious that the filter capacitor C1 removes much of the AC component of the rectified DC, so it must have a small impedance at 100Hz (or 120Hz). If the impedance is small at 100Hz, then it is a great deal smaller at 1kHz, and smaller still at 10kHz (and so on). Ultimately, the impedance is limited by the ESR (equivalent series resistance) of the filter cap, which might be around 0.1Ω at 20°C. Using a larger capacitance reduces the ripple, but doesn't change the average DC voltage. If C1 is changed to 2mF (2,000μF), the input (and output) ripple is halved.

It is important that capacitive reactance is not confused with ESR. A 1,000µF 25V capacitor has a reactance of 1.59Ω at 100Hz, or 15.9Ω at 10Hz. This is the normal impedance introduced by a capacitor in any circuit, and has nothing to do with the ESR. At 100kHz, the same cap has a reactance of only 1.59mΩ (milli-ohms), but ESR (and ESL - equivalent series inductance) will never allow this to be measured. The ESR will typically be less than 0.1Ω, and is generally measured at 100kHz. Indeed, at very high frequencies, the ESL becomes dominant, but this does not mean that the capacitor is incapable of acting as a filter. Its effectiveness is reduced, but it still functions just fine. Some people like to add 100nF caps in parallel with electros, but at anything below medium frequency RF (less than 1MHz), such a small value of capacitance will have little or no effect. While this is easily measured in a working circuit, few people have bothered and the myth continues that electrolytic caps can't work well at high frequencies.

Contrary to popular belief in some quarters, electrolytic capacitors do not generally have a high ESL. Axial caps are the worst simply because the leads are further apart. ESL for a typical radial lead electro with 12mm lead spacing might be expected to be around 6nH. A short length of track can make this a great deal worse - this is not a fault with the capacitor, but with the PCB designer.
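The reactance figures quoted above are easily verified from the standard formula, Xc = 1 / (2πfC). A quick check (my own sketch, using the 1,000µF example):

```python
import math

def xc(f, c):
    """Capacitive reactance in ohms: 1 / (2 * pi * f * C)."""
    return 1.0 / (2.0 * math.pi * f * c)

c = 1000e-6  # 1,000 uF
print(f"10 Hz:   {xc(10.0, c):.2f} ohms")
print(f"100 Hz:  {xc(100.0, c):.2f} ohms")
print(f"100 kHz: {xc(100e3, c) * 1000:.2f} milliohms")
```

At 100kHz the theoretical reactance is about 1.59mΩ, well below any realistic ESR, which is exactly why the ESR (not the reactance) dominates at high frequencies.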
Unfortunately, a simple linear circuit as shown above needs a transformer, the cost of which is often greater than the cost of a complete switchmode AC/DC converter. It might be possible to find one for no more than (say) AU$10.00 or so (a 1.9VA transformer may be as little as AU$8.00), but the transformer size (in VA) needs to be twice the product of DC voltage and current, before regulation. A 15V, 2VA transformer can deliver 133mA AC, but expecting more than 60mA DC is most unwise. In general, my recommendation would be a maximum DC output current of no more than 60mA. These small transformers have terrible load regulation, so the output voltage collapses quickly when the output is loaded.

Always consider the highest input voltage allowable for a regulator IC. For the common 7805/ 12/ 15 devices, that's 35V absolute maximum, or 40V for the 7824. For adjustable regulators (LM317/ 337) they quote the input-output differential (40V), so in theory you could have an input of 50V and an output of 12V (38V differential). However, at some point the regulator will die (if the output is shorted or during startup when it has to charge a capacitor). I strongly recommend that the maximum input voltage should be no more than 35V!

One way of reducing the voltage is to use zener diodes in series with the input. If the supply is 50V, a pair of 12V zeners in series will allow up to 50mA (the maximum current for a 12V, 1W zener is 83mA, at full power). This can't be sustained, and the zener(s) will overheat and fail. A better approach is to use the discrete circuit in Fig. 1.1, with a higher power series-pass transistor (Q1), and set for an output voltage of ~20-24V for a 12V output. If you were to use the suggested TIP122 (65W) Darlington transistor, with a good heatsink you could easily draw up to 1A (over 30W transistor dissipation!). Of course, if this is intermittent it's not a problem. This arrangement is far better than any alternatives, and while the transistor and zener will cost more than the regulator, at least the survival of the latter is assured. This same circuit can be used with the switchmode buck regulator described in Section 4. The pre-regulator's output voltage should be higher to reduce dissipation - the LM2596 can handle an input of up to 40V.
If you wanted to use a purely discrete supply, the one shown above should meet your needs. It includes (very basic) current limiting that will prevent destruction if the output is shorted. If the voltage across R3 exceeds 0.7V (about 350mA), Q2 conducts and (proportionally) removes base current from Q1, limiting the current to a nominal 350mA. In reality, it will be somewhat more into a short circuit. With the 50V supply shown, Q1 will dissipate close to 20W with a shorted output, so a heatsink is essential. The maximum allowable output current with a 50V input supply is about 800mA, which is just inside the SOA curve for the TIP122. At an output current of around 200mA, Q1 will dissipate 8W. That's rather a lot of heat to dispose of on a continuous basis.
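The current limit and pass-transistor dissipation follow directly from the figures quoted. This sketch is my own illustration; the 2Ω value for R3 is an assumption, chosen because it is consistent with the 0.7V/350mA figures given in the text:

```python
def limit_current_ma(r_sense, v_be=0.7):
    """Nominal current limit: Q2 turns on when the drop across the
    sense resistor (R3) reaches about one Vbe (~0.7V)."""
    return 1000.0 * v_be / r_sense

def pass_dissipation_w(v_in, v_out, i_load):
    """Power dissipated in the series-pass transistor (Q1)."""
    return (v_in - v_out) * i_load

print(f"limit: {limit_current_ma(2.0):.0f} mA")                 # assumed R3 = 2 ohms
print(f"Q1 at 200 mA: {pass_dissipation_w(50, 12, 0.2):.1f} W") # close to the ~8W quoted
```

The same function shows why a short is the worst case: with the full 50V across Q1 at 350mA (or a little more), dissipation approaches 20W, in line with the text.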
Because of the extra filter (R1, R2 and C2), ripple rejection is about 60dB at 120mA output. This is more than enough for ancillary equipment, and adds almost no cost to the design. Overall, this is a fairly convenient solution, but it isn't suited to 'permanently on' equipment because it relies on the main supply for its operation. While it could be used with P33 (for example), the new PCB already has an on-board regulator (albeit simplified - similar to Fig. 1.1 discrete).
+ + +While this topic might seem very simple, judging from the number of emails I get asking about it, perhaps it's not so simple after all. Most people don't give transformer selection a second thought, which may be because the specifications are provided in the project itself, or because it seems to be so easy that you can't go wrong. Well, you can go wrong, and end up with unexpected results. Of these, the most common is the final voltage after rectification. With small transformers at light loading, the voltage will often be much higher than expected, and when loaded, lower than expected.
+ +Transformer selection depends on many factors. The desired output voltage and current determine the transformer size, but the relationships are more complex than they may seem at first. Small transformers (< 10VA) have poor regulation because they have a high winding resistance. That means that you almost always need a higher voltage than you thought, so for a 5V output you'd generally need to start with a nominal output voltage of at least 7.5V AC. In theory, your unregulated output will be around 10V, but with no load it will be more than that. At full load (DC) it will be less than 10V, and you may not even have enough 'headroom' to ensure regulation without ripple breakthrough.
+ +I tested a 5VA, 18V (9+9V) transformer, capable of 277mA AC output at full (resistive) load. With no load and 230V mains, the output was 21V RMS, and it was just under 18V with a 65Ω load. The primary resistance measured 707Ω, with 6.62Ω for the secondary (a total equivalent series resistance of about 12.5Ω). When connected to the Fig. 1.1 bridge and filter cap, the average DC output is 28.7V with no load, falling to 21V DC (average, 20.5V minimum) with a 170mA load. The AC output current measured 276mA RMS - close enough to the maximum allowed (277mA RMS). Everything changes if the transformer has a higher or lower VA rating! There are many 'simple' formulae suggested online, and they give simple answers that are almost always simply wrong!
+ +For a 5V regulated supply using the transformer I tested, the two secondaries would be in parallel, rather than series. The unloaded DC voltage will be ~11V, falling to ~9.5V (average, 8.5V minimum) with an output current of 340mA. Because we expect a higher current, C1 would be increased to 2,000μF (2 x 1mF in parallel). The minimum voltage (the troughs of the ripple waveform) is increased to 9.1V, and the total secondary current is 546mA (just within the transformer's VA rating). Note that the voltages are not simply halved, because the diode voltage drop becomes more significant at lower voltages.
+ +This is the reason that I published the series of articles on transformers. Nothing is as straightforward as it seems (and is so often presented). At low voltages, you almost always need a higher nominal output voltage than you might guess. This is doubly true if the voltage is to be regulated, and despite claims you may see that LDO regulators are the answer, they often cause other issues (such as unexplained oscillation) that can be difficult to solve. See the Low Dropout (LDO) Regulators article for more.
+ +Remember that the mains voltage can increase/ decrease around the nominal value (230V or 120V), often by 10% or more. If you only have 10% headroom for a regulator, you will get ripple on the output if the mains voltage falls, and you'll also get higher regulator dissipation if the mains voltage rises. These anomalies must be accounted for in a design. Sometimes it doesn't matter though, because the voltage supply for many auxiliary circuits isn't at all critical. If relays have to be activated, the voltage must be above the minimum allowable voltage. For a 12V relay, that's generally about 9V, but it varies (always consult the datasheet!).
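A quick way to check the headroom over the mains voltage range is to scale the ripple-trough voltage by the mains tolerance. The figures below are illustrative only: a 12V regulator with 2V dropout (e.g. a 7812), fed from a supply whose ripple trough is 17V at nominal mains.

```python
# Headroom check for a linear regulator over a +/-10% mains range.
# Illustrative values: 12V output, 2V dropout, 17V ripple trough at
# nominal mains.

def headroom(v_trough_nominal, mains_factor, v_out=12.0, v_dropout=2.0):
    """Regulator headroom at the ripple trough, for a given mains factor."""
    return v_trough_nominal * mains_factor - v_out - v_dropout

for factor in (0.9, 1.0, 1.1):
    h = headroom(17.0, factor)
    status = "OK" if h >= 0 else "ripple breakthrough!"
    print(f"Mains at {factor * 100:.0f}%: headroom {h:+.1f}V ({status})")
```

Note that the high-mains case passes the headroom test easily, but every extra volt of headroom turns up as extra regulator dissipation.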
+ + +A regulator (in almost any form other than a zener diode) is an amplifier. Admittedly the amplifier is 'unipolar', in that it is designed for one polarity, and can only source current to the load. Very few regulators can sink current from the load, but shunt regulators are an exception! Since amplifiers can oscillate, it follows that regulators (being amplifiers) can also oscillate. As the bandwidth of a regulator is increased to make it faster, it will suffer from the same problems as any other wide bandwidth amplifier, including the likelihood of oscillation if bypassing isn't applied properly.
+ +The regulator itself has two primary functions. The first is to provide a stable output voltage, and the second is reduction of the power supply filter noise - mainly ripple, and this pretty much comes free when the voltage is regulated. The regulated voltage may not be especially accurate, but this is rarely an issue.
+ +The output impedance should be low, because this allows the voltage to remain constant as the load current changes. For example, if the output impedance were 1Ω, then a 1A current change would cause the output voltage to change by 1V. This may not be an issue for some circuits, but it will be unacceptable for others. One would normally expect the output impedance to be less than 0.1Ω, and that's easily achieved - even with simple designs.
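The relationship is just Ohm's law applied to the regulator's output impedance, but it's worth making explicit:

```python
# Load regulation from output impedance: delta-V = Z_out * delta-I.

def load_step_dv(z_out_ohms, di_amps):
    """Output voltage change caused by a load-current step."""
    return z_out_ohms * di_amps

print(load_step_dv(1.0, 1.0))   # 1 ohm, 1A step -> 1V (often unacceptable)
print(load_step_dv(0.1, 1.0))   # 0.1 ohm       -> 0.1V (easily achieved)
```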
+ +In order to maintain low impedance at very high frequencies, an output capacitor is almost always required. This will be in addition to any RF bypass capacitors that are required to prevent oscillation. A 10μF output cap is usually quite sufficient to ensure stability. The output capacitor generally has little effect on the output ripple or noise, but it can help to provide instantaneous output current for nonlinear loads.
+ +Remember that in any real circuit, there will be PCB traces that introduce inductance. Capacitors and their leads also have inductance, and it is theoretically possible to create a circuit that may act as an RF oscillator if your component selection is too far off the mark (or your PCB power traces or wiring are excessively long). In the common applications that are covered by this article, wiring inductance will never be a problem.
+ +Bypassing is especially important where a circuit draws short-term impulse currents. This type of current waveform is common in mixed signal applications (analogue and digital), and the impulse current noise can cause havoc with circuitry - an improperly designed supply path can cause supply glitches that cause false logic states to be generated. Even the ground plane may be affected, and great care is needed in the layout and selection of bypass caps to ensure that the circuit will perform properly and not have excessive digital noise. Again, this is unlikely to be an issue for common ancillary circuits.
+ +Maximum power dissipation, maximum current and internal protection are all things that need to be considered. These are dependent on the type of regulator, and the specifications and terminology can vary widely. Many of the parameters are far too complex to provide a simple 'figure of merit', and graphs are shown to indicate the transient performance (load and line) and other information as may be required to select the right part for a given task.
+ +One special family of regulators is the LDO (low drop-out) type. Where a common regulator IC might need a 2 to 5V input/output differential, an LDO type will generally function with as little as perhaps 0.6V between the input and output. These are commonly used in battery operated equipment to maximise battery life. Some of these devices also have very low quiescent current, so a minimum of power is wasted in the regulator itself. They are covered in Low Dropout (LDO) Regulators, but in general it's better to use a 'standard' regulator unless you really need the low dropout. They make no sense for mains powered supplies.
+ + +Very few (especially non-audio) applications really need anything more than the traditional fixed voltage regulators, such as the 7812, 7815 and 7824 (positive) and 7912 etc. (negative). They are not ultra-quiet (electrically) at up to 90μV (15V version), but the noise is generally (but not always) immaterial when the circuit is only used for ancillary circuitry. Their ripple rejection is at least 54dB with an input-output differential of 10V. They include current limiting and over-temperature protection. Output current is ≥1A.
+ +A 7812 (or 7912) has a typical output range from 11.5V to 12.5V, so expecting the voltage to be exact is unrealistic and unnecessary. The load regulation (i.e. the change in output when the load current is changed) is anything from 12mV to 150mV when the load current is changed from 5mA to 1.5A. For this test, the input voltage is maintained constant. The dropout voltage is 2V, so the input voltage (including ripple) must be at least 2V higher than the output voltage at all times. See Fig. 1.2 (red trace) to see how ripple is measured. The minimum voltage in the graph is about 15.7V.
+ +Ripple rejection is quoted as a minimum of 54dB to a typical value of 74dB, somewhat dependent on the input voltage headroom (at least 5V is a good idea if possible). These figures can be bettered by using the LM317/337 variable regulators. They have lower noise and better ripple rejection than the much older fixed regulators, but in most circuits it makes no difference whatsoever. Of more importance is the fact that they are variable, so you can keep a few on hand to regulate to any voltage you need (within their maximum input voltage range).
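To see what these ripple-rejection figures mean in practice, convert the dB value to residual output ripple. The 2V peak-peak input ripple used here is an assumed example figure:

```python
# Convert a ripple-rejection figure (dB) to residual output ripple,
# assuming 2V peak-peak of ripple on the unregulated input.

def residual_ripple(v_ripple_in, rejection_db):
    """Ripple remaining at the regulator output."""
    return v_ripple_in / 10 ** (rejection_db / 20)

for db in (54, 74):
    mv = residual_ripple(2.0, db) * 1000
    print(f"{db}dB rejection -> {mv:.2f}mV p-p at the output")
```

Even the worst-case (54dB) figure reduces 2V of ripple to a few millivolts, which is why ripple reduction 'comes free' with regulation.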
+ +There are quite a few other regulator types on the market, but the National Semiconductor types seem to have the lion's share of the market as far as normal retail outlets are concerned. Not that there is anything wrong with them - they perform well at a reasonable price, and have a very good track record for reliability. While one can obtain more esoteric devices (with some searching), many of the traditional manufacturers are concentrating on switching regulators, and don't seem to be very interested in developing new analogue designs (other than LDO regulators).
+ +Switchmode regulators are also available as a single IC, but they need more (and more expensive) support components. Of course they are also more efficient, so heatsink requirements are usually minimal for a few hundred milliamps output. The design process for many of these ICs is daunting, especially for most hobbyists. If you select a switchmode controller IC with an external switching MOSFET you gain a wider range of input voltages (up to 100V or more), but the ICs are almost all SMD, and the design process becomes much harder. An example is the LTC3894, with up to 150V input and adjustable output voltage (0.8 to 60V). However, it's SMD and not inexpensive, and there are many external parts needed (at least 8 capacitors, 6 resistors, a P-Channel MOSFET and an inductor).
+ +While there are many discrete or semi-discrete linear regulators to be found in various books, websites (including this site) and elsewhere, they are usually only ever used because no readily available IC version exists. An example is the ESP P96 phantom power regulator - this design is optimised for low noise and the relatively high voltage needed by the 48V phantom system. Regulation is secondary, since the phantom power voltage specification is quite broad. It is still quite credible in this respect, but it has fairly poor transient response, which is not an issue for the application.
+ + +Many people would consider a switchmode buck (step down) regulator to be the easiest way to get (say) 12V from a 40-70V main supply rail. While this is true up to a point, most of the ICs you'll find are only rated for a maximum input voltage of around 30-40V, but often less. One IC that I've used is the LM2596T-ADJ, and it's surprisingly easy to get it working provided you're not after the highest possible current and efficiency.
+ +The circuit is taken from Project 220, and it's a well tried circuit. The maximum input voltage is 40V, and the output can be adjusted from 1.23V up to 37V (the latter assumes a 40V supply). With a maximum output current of 3A, it can do most things you need. The datasheet provides very comprehensive formulae for determining the inductor value, but for around 200mA or so a 100μH inductor is generally fine. For higher current, the inductance needs to be lower, with thicker wire and a core that will not saturate. If that happens, bad things quickly follow.
+ +These are available from various on-line 'auction' sites as a complete module, for little more than you'd expect to pay for the IC. So, while it's dead easy to build one on Veroboard (and I've done so), it will almost certainly cost more than a pre-built module. As noted above, there are other devices, with some even including the inductor in the package (e.g. WPMDH1200601/ 171020601). These are not cheap ICs though - expect to pay almost AU$30 for the IC alone. This is not viable for most hobby applications, and it's probably marginal for commercial designs as well.
+ +You can build a very basic switchmode buck converter with nothing more than a cheap CMOS IC, a suitable MOSFET and an inductor (plus resistors and capacitors of course). The viability of this approach depends on your application, but in most cases it's simply not worth the effort. While I'm all for experimentation, if you're installing a circuit as part of an amplifier (for example) it's better to stay with something simple that can be repaired or replaced if (when?) it fails. SMPS are more prone to failure than simple linear circuits, and will almost always be harder to repair (especially when the IC becomes obsolete).
+ +If you need to accommodate a supply voltage above 40V, you can use the Fig. 1.1 discrete circuit to supply the IC. You lose efficiency (and Q1 may require a heatsink), but it's a low-cost option that will work well. As the allowable input voltage of switchmode ICs increases, so does their cost. Most are also far more complex than the one shown, meaning that there are more things to go wrong.
+ + +Shunt regulators have some advantages over traditional series regulators, despite their low efficiency and comparatively high power dissipation. It's uncommon to see shunt regulation used any more, but they are useful at low current or where some ripple can be tolerated. The advantages of shunt regulators are that they are inherently short-circuit proof, can sink current from the load as well as sourcing current to the load and they provide (almost) fool-proof over voltage protection, including transient suppression.
+ +Naturally, there are also disadvantages, as is to be expected. They have comparatively high power dissipation regardless of load current, and simple versions may have relatively poor overall performance. However, they are still worth considering where the load current is low (e.g. 10-20mA or so).
+ +The simplest shunt regulator consists of nothing more than a resistor and a zener. If designed properly, this is a very simple power supply arrangement, and offers acceptable performance for many low-current applications. They are very rarely used where the circuit needs more than around 100mA or so, because dissipation becomes a real problem. Consider a shunt regulator expected to supply 12V at 100mA, fed from a 42V amplifier supply. In an 'ideal' world, the feed resistor will dissipate 3W continuously, regardless of load current! In reality it will be at least 5W to allow for voltage variations from the main supply.
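The 3W figure quoted above follows directly, and a one-line function makes the point that the dissipation is set by the design current, not by the load:

```python
# Feed-resistor dissipation in a simple shunt regulator. The resistor
# passes the full design current whether the load draws it or not.

def feed_dissipation(v_in, v_out, i_total):
    """Continuous power in the feed resistor."""
    return (v_in - v_out) * i_total

# 42V rail, 12V output, 100mA total (the example above):
p = feed_dissipation(42.0, 12.0, 0.1)
r = (42.0 - 12.0) / 0.1
print(f"R = {r:.0f} ohms, {p:.1f}W continuous")   # 300 ohms, 3.0W
```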
+ +This is one of the reasons that there are very few shunt regulators used in modern equipment. This is not necessarily a good thing, since almost no-one designs in an over-voltage crowbar circuit, so failure of a series regulator is often accompanied by wholesale destruction of the circuitry that uses the regulated supply. This is especially so with logic circuitry ... 5V logic circuits will typically suffer irreparable damage with a supply voltage above 7V.
+ +In the circuit shown above, a simple zener is boosted (or enhanced) by adding R3 and Q1. As a quick test, the circuit was simulated. The 24V DC input was deliberately 'polluted' with a 2V peak (1.414V RMS) 100Hz sinewave to measure the ripple rejection. The circuit as shown was able to reduce the ripple from 1.4V RMS to 2mV RMS, a reduction of 56dB.
+ +If R1 and R2 are replaced with a single 100Ω resistor (retaining C2), ripple rejection falls to 40dB (14mV RMS ripple). This technique for ripple reduction used to be very common when people built discrete regulated power supplies. The two resistors and the 470μF capacitor (C1) form a low pass filter, with a -3dB frequency of 14.4Hz. The enhanced zener performs far better than the zener diode by itself, because it introduces gain, and minimises the current through the zener diode. The bulk of the dissipated power is in Q1. Without the transistor, performance is much worse (6mV RMS ripple, 47dB attenuation at 100Hz).
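The 14.4Hz corner can be verified if R1 = R2 = 47Ω is assumed (a value chosen here to be consistent with the quoted figure and with the 'single 100Ω resistor' substitution); the capacitor sees the two resistors effectively in parallel:

```python
# -3dB frequency of the R1/R2 + C1 ripple filter. C1 sees the two
# resistors effectively in parallel. R1 = R2 = 47 ohms is an assumed
# value, chosen to be consistent with the quoted 14.4Hz.
import math

R1 = R2 = 47.0   # ohms (assumed)
C1 = 470e-6      # farads

r_eff = (R1 * R2) / (R1 + R2)
f3 = 1 / (2 * math.pi * r_eff * C1)
print(f"-3dB at {f3:.1f}Hz")   # ~14.4Hz
```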
+ +The capacitor in parallel with the zener (C2) is far less effective than C1. Why? Because the zener has a low impedance (especially the enhanced version shown), and this acts in parallel with the cap's impedance. Even a 470μF cap for C2 has little effect in this circuit. With no capacitors at all, the output ripple is 15mV, so C2 only reduces the ripple by about 1mV. It's there to bypass the output at high frequencies.
+ +By splitting the resistance to C1, the capacitor works with the effective impedance of the two resistors in parallel - this is much greater than the impedance of the zener, so the cap has more effect. Needless to say, a larger capacitance gives better ripple performance - doubling the capacitance halves the ripple voltage. The circuit was supplying a load current of about 60mA (12V, 200Ω load).
+ +At full load (~60mA), the zener dissipation is under 20mW, and Q1 dissipates 270mW. This rises to over 1W with no load. If only a 1W zener were used, it would fail if the circuit were operated with no load for more than a few seconds. Resistor dissipation remains the same whether the circuit is loaded or not, but it increases if the output is shorted to ground. The two resistors need to be at least 1W, since each dissipates about 680mW.
+ +For more information on the use of zener diodes in general, see AN008 - How to Use Zener Diodes on the ESP website. The design of shunt regulators in general isn't difficult, but there are quite a few things that need to be calculated. The unregulated input voltage must be higher than the desired output, and this includes any ripple. For example, if the minimum voltage is 13V and the maximum 17V (4V peak-to-peak of ripple) you can't expect to get 12V output, because 1V of headroom just isn't enough. The minimum input voltage should be at least 50% greater than the desired output. For 12V out, that means no less than 18V input, but performance will be poor with less than a 100% margin (24V in for 12V out). Remember too that the incoming mains will vary, and this has to be taken into account as well.
+ +The feed resistance (R1 and R2 in Figure 5.1) should pass a minimum of ~1.2 times the maximum load current. If your circuit draws 50mA then the resistors need to pass at least 60mA. The voltage across the feed resistance is the input voltage minus the output voltage. You then need to work out the power dissipation of the resistors, zener and shunt transistor.
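These steps can be turned into a short calculation. The example below uses a 24V input, 12V output and 50mA load; the 1.2× factor is the rule of thumb from the text:

```python
# Shunt regulator sizing: feed current at 1.2x the maximum load,
# then the dissipation budget (24V in, 12V out, 50mA load).

V_IN, V_OUT = 24.0, 12.0
I_LOAD_MAX = 0.05
I_FEED = 1.2 * I_LOAD_MAX          # design feed current (rule of thumb)

r_feed = (V_IN - V_OUT) / I_FEED   # feed resistance
p_feed = (V_IN - V_OUT) * I_FEED   # feed resistor dissipation
p_shunt = V_OUT * I_FEED           # worst case: shunt absorbs it all (no load)

print(f"Feed R: {r_feed:.0f} ohms")          # 200 ohms
print(f"Feed dissipation : {p_feed:.2f}W")   # 0.72W
print(f"Shunt dissipation: {p_shunt:.2f}W (maximum, no load)")  # 0.72W
```

Note that the worst case for the shunt element is no load, the opposite of a series regulator.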
+ + +Where a physically small power supply is required for a project (including audio, but not necessarily for true hi-fi use), one can use the intestines of a miniature 'plug-pack' (aka 'wall-wart') SMPS. Although only small, some of these are capable of considerable power, but installation is not for the faint-hearted. Quite obviously, the circuit board must be extremely well insulated from chassis and protected against accidental contact when the case is open.
+ +The advantage is that the project does not require an external supply. An external supply is often a real pain to implement, because there is always the possibility that the wrong voltage or polarity can be applied if the external supplies are mixed up (which is not at all uncommon). The disadvantage is that the unit must now have a fixed mains lead or an approved mains receptacle so a lead can be plugged in. Somewhat surprisingly, there's no requirement for 'special' approvals (as apply to all plug-pack supplies sold in Australia). Because the supply is internal, it isn't possible for anyone to come into contact with any part of it, but it is only safe if installed into an earthed (grounded) chassis. This means a 3-pin plug - no exceptions!
+ +That doesn't mean that you can buy any old rubbish from China - it must be a safe design, with proper insulation, filtering and all necessary EMI (electromagnetic interference) prevention measures in place. There are many supplies that are fit for one location only - the local rubbish tip! (Or preferably an electronics recycling facility.)
+ +WARNING: The following description is for circuitry, some of which is not isolated from the mains. Extreme care is required when dismantling any external power supply, and even greater care is needed to ensure that the final installation will be safe under all foreseeable circumstances (however unlikely they may seem). All primary circuitry operates at the full mains potential, and must be insulated accordingly. It is highly recommended that the negative connection of the output is earthed to chassis, and via the mains safety earth. Do not work on the power supply while power is applied, as death or serious injury may result.
The photo in Fig 5.1 shows a typical 12V 1A plug-pack SMPS board. As removed from the original housing, it has no useful mounting points, so it is necessary to fabricate insulated brackets or a sub-PCB (made to withstand the full mains voltage) to hold the PCB in position. Any brackets or sub-boards must be constructed in such a manner that the PCB cannot become loose inside the chassis, even if screws are loose or missing. Any such board or bracket must also allow sufficient creepage and clearance distances to guarantee that the primary-secondary insulation barrier cannot be breached. I shall leave the details to the builder, since there are too many possible variations to consider here.
+ +This arrangement has some important advantages for many projects. These supplies are relatively inexpensive, and the newer ones satisfy all criteria for minimum energy consumption. Most will operate at less than 0.5W with no load, and they have relatively high efficiency (typically greater than 80% at full load). The output is already regulated, so you save the cost of a transformer, bridge rectifier, filter capacitor and regulator IC. Note that this supply used UK mains pins, and does not have Australian approval. However, it is compliant with CE regulations, it would almost certainly pass tests to AS/NZS¹ and is safe and well designed. In particular, the isolation barrier between mains and output sides is generous, and is a minimum of 6mm.
+ +¹ AS/NZS - Australian/ New Zealand Standards
Overall, this is a far better supply than most of those available from eBay or the like, and it's small - the outside dimensions of the ½ case seen below are 65 × 39mm (the 'ears' required by UK regulations were removed). If you keep the top cover, that can be clipped back on after installation. However, getting it off again if required may pose a real challenge.
+ +The SMPS pictured is a 12V 1A (12W) unit, and for most applications this will provide more than enough current. Consider the safety advantage compared to a transformerless supply - the finished project can have accessible inputs and outputs, and is (at least to the current standards) considered safe in all respects. Personally, I would only consider it to be completely safe if the chassis is earthed. However, it is legally allowed to be sold in Australia, and we have reasonable safety standards for external power supplies. They are 'prescribed items' under the Australian safety standards, meaning that they must be approved before they can be sold.
+ +In some cases, the original plug-pack case can be re-used. Of course, this means that you need to be careful when it's split apart, but it is possible, as seen above. The two mains pins and plastic 'earth' pin were removed, and the holes for the mains pins provide convenient mounting points (check for adequate clearance, and add insulation!). The case shown has 8mm clearance below the bottom of the PCB. However, there are components under the board, so insulation is an absolute requirement. You could use plastic screws, but they aren't very strong. There are many options for mounting, so you can decide what works for you.
+ +In the above, you can see 3mm threaded brass inserts (available from eBay for about AU$10.00 for 100 pcs.), melted into the pin holes. Because there's not a lot of plastic in this region, reinforcement with epoxy or UV (ultraviolet) cured adhesive is essential so the inserts can't be pulled out. Make sure that you don't get any glue in the threaded hole, or the insert will be ruined. The photo also shows the insulating sheet that goes under the PCB. While this is specific to the PSU I used, a similar approach can be used with any SMPS case. When building any project you need to be a bit adventurous (or inventive) to come up with a solution that's easy to put together, while still retaining the maximum safety of the end result.
+ +There is no more effort required to install a supply such as this instead of a linear supply, and in reality there's less if you can retain (and modify) the original case. When wired up, you can safely work on the secondary side (as with a linear supply). While it might be a little more expensive than a linear supply, it's also much smaller. If you are a canny shopper, you should be able to get a supply of the type shown for about AU$10 (I got mine from Element14 for less than AU$10 at the time). It came with a UK plug, but that was irrelevant as it was never going to be plugged in.
+ +Another possibility is a stand-alone AC/DC converter, such as the many advertised on eBay. The type shown doesn't come with a case, so you'll need to fabricate something, using metal (with appropriate insulation), glued plastic or a 3-D printed enclosure. The board for a 12V, 500mA version typically measures around 52 × 24mm. These are available from China, at a cost of around AU$7.00 each including postage. Compared to the Fig. 6.1/2 versions there's a lot more messing around, as there is no case that can be re-purposed. This is still a worthwhile option though.
+ +¹ The photo is from an eBay supplier page, and is shown for reference. It's almost impossible to describe these adequately, hence the photo. A link is pointless, because they change regularly.
In particular, look for input common-mode chokes (the dual-winding part at the top left) and an output choke (the cylindrical part between the output caps on the right). Proper filtering is essential, or the noise level will be much higher than it should be. You can't test for electrical safety unless you have access to a Megger (high voltage insulation tester), which will have an output voltage of 500V or 1kV (DC). The measured resistance between input and output should normally be at least 1,000MΩ (1GΩ), and anything less is an indication of leakage between primary and secondary. You may need to remove the Class-Y cap if it's rated as Y2 - the test voltage should be no more than 500V. No 'no-name' SMPS should ever be used unless you can verify that the insulation is sufficiently robust.
+ +I generally test at 1kV, but keep the test duration to about 10 seconds so parts aren't stressed too much. Most supplies I have tested show 2,000MΩ (2GΩ - the upper readable limit for my tester). Interestingly (or not, depending on your perspective), one supply I measured gave a rather poor 50MΩ from input to output. This was traced to the inexplicable addition of 5 × 10MΩ SMD resistors in series, bridging the isolation barrier. Needless to say these (and their PCB pads & traces) were removed. I have no idea why anyone thought that was a good idea.
+ +A Megger (high voltage insulation tester - see note below) is a very worthwhile piece of test gear for any hobbyist. It lets you verify that your latest creation is electrically safe (at least within the limits of the tester), and you can be fairly sure that if the insulation tester tells you that the insulation resistance is over 200MΩ at 500V DC (or 1kV DC for the paranoid), there is little likelihood of insulation failure and you haven't made any silly mistakes that could cost you your life. Most have an upper limit of 2GΩ, with some extending to over 5GΩ. An insulation tester is not a panacea though, so you must always use best wiring principles when working with mains voltages. 'Generic' high voltage testers can be obtained for around AU$60.00+ - not an especially cheap item, but if it saves your life it's a bargain!
+ Note: Megger® is a registered trade mark for insulation testers, but like Variac® the name has become part of the lexicon of electronics because they've been with us for so long. See Megger for the original.
Insulation tests are performed using DC. There is always some capacitance between the primary and secondary of a transformer (50/ 60Hz or SMPS), and with any SMPS there's also the Class-Y capacitor. These will give an impedance proportional to the capacitance and frequency. At 50Hz, you'd normally expect an impedance (not resistance) of around 1MΩ or more. Using DC (at a voltage ≥ the peak of the AC voltage) eliminates problems due to capacitance. Subjecting insulation to a 'hipot' (high potential) test (especially AC) is often considered destructive, and if so, the item tested must not be used after testing! The insulation may not have failed, but it has been subjected to a test voltage well beyond its design ratings, which may weaken the insulation materials.
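A quick calculation shows why even a small barrier capacitance swamps an AC measurement. The 2.2nF value used here is an assumed typical Class-Y capacitor, not a figure taken from the supply described:

```python
# Impedance of a capacitor bridging the isolation barrier. At 50Hz
# even a small Class-Y cap reads far below the 1G-ohm insulation
# threshold, which is why insulation tests must use DC.
import math

def z_cap(c_farads, f_hz):
    """Magnitude of capacitive impedance, 1 / (2*pi*f*C)."""
    return 1 / (2 * math.pi * f_hz * c_farads)

# 2.2nF is an assumed typical Y-cap value:
print(f"{z_cap(2.2e-9, 50) / 1e6:.2f} Mohm at 50Hz")   # ~1.45 Mohm
```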
+ +As always, obtaining the test procedures for where you live involves getting a copy of the relevant standards documents. These are only available from the bodies that set the standards, and they are very costly. I've complained about this in several pages on the ESP site, and the situation is made worse because not everything is in one document, so you may have to purchase several standards to get all the information you need. Typically, standards documents refer to other standards documents, and you need them all to know just what is required.
+ +Protection against accidental contact with live parts is always advised, even when the device is obviously mains powered. With any supply from China, always verify that the Y-cap (next to the transformer in the photo) really is a Class-Y component. I've seen too many supplies using 1kV ceramic caps as a substitute. If there is any doubt, replace it with a genuine Class-Y1 (or Class-Y2) certified safety capacitor.
+ +The regulations worldwide are different, but in most cases, it's expected that one will have to use a 'tool' to gain access to live parts. A screwdriver generally counts, but as many will be aware, some manufacturers take this to extremes, using 'security' screws that require a particular tool that fits the recess. These range from Torx to more 'advanced' tools, but nothing will keep people out if they are determined enough. Many commercial SMPS use a glued case that can be difficult to get apart without damaging it beyond repair, while others use (very secure!) clips that can be undone if you know where they are and have the right tools.
+ +One thing that you need to be aware of is that almost all modern SMPS are designed to comply with energy efficiency standards. That means that at low (or no) load, they operate in a mode commonly referred to as 'skip-cycle'. The supply will switch off for much of the time, only turning on when the output voltage falls below the threshold by a few millivolts. These have a no-load rating of less than 500mW (sometimes as low as 100mW), with the idea that they don't draw significant mains power when plugged in but unused. The regulators (world-wide) determined in their infinite wisdom (note careful use of sarcasm) that everyone leaves their power supplies plugged in, even when they aren't being used. Some people do, but many don't!
+ +The result is that the supplies are very noisy within the audio frequency range with minimal load. I have captured the noise I found with the example supply shown above, with no load, 12mA output, 24mA output and 63mA output. If your application is audio (or within the audio frequency range), the supply is unusable unless the minimum load is drawn (typically around 100mA, but it varies). This also applies to USB chargers, so if you try to use one of those to power a project, you must ensure that you draw enough current to force the supply to operate with a 'normal' duty cycle. The clue will be that your project is noise-free when powered from the USB port on a PC, but unusable with a separate USB charger.
+ No Load, 242Hz
+ 12mA Load, 1,917Hz
+ 24mA Load, 3,267Hz
+ 63mA Load, 4,560Hz
These noises were recorded from the SMPS output via a 15kHz low pass filter to minimise high-frequency 'hash' that would alter the recording. Each was amplified by 100 (40dB) after recording. The files are MP3 because there is no expectation of fidelity - this is stuff you don't want to hear. Unless you draw enough current (or add a very serious filter stage) you will get this noise through an audio circuit. Maybe not all of it, and it may even be amplified, making it worse. For the frequency, only the fundamental is shown. The waveforms are roughly triangular, and contain both even and odd harmonics. With 100mA current drain, the noise was well outside the audio range (minimal or no skip-cycle behaviour). The supply you use will be different!
+ +The sound files are each ~10s long, and have been boosted so they are louder than the direct output from the supply. I used the 12V supply described above, and captured the noise using a PC sound card. This is a very real problem, and there doesn't appear to be any form of filtering that prevents the noise from getting through. It could (probably) be done with a so-called 'capacitance multiplier' but at the expense of some voltage loss. In most cases, a resistor that draws enough current to force the SMPS into continuous operation will be the easiest - albeit wasteful of energy. At least 500mW will be needed in most cases, but up to 1W may be required.
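Sizing the bleed resistor is simple Ohm's law. The 100mA minimum load used here is an assumption based on the supply tested above; check your own supply's behaviour before committing to a value:

```python
# Bleed resistor to hold an SMPS out of 'skip-cycle' mode. The 100mA
# minimum load is an assumption - measure your own supply.

V_OUT = 12.0
I_MIN = 0.1   # minimum load current for continuous operation (assumed)

r_bleed = V_OUT / I_MIN
p_bleed = V_OUT * I_MIN
print(f"R = {r_bleed:.0f} ohms, dissipating {p_bleed:.1f}W")  # 120 ohms, 1.2W
```

As the text notes, this is wasteful of energy, but it's the easiest cure and the resistor is cheap; use at least a 2W part for this example.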
+ +The main purpose of this article is to provide some ways to create a small power supply for ancillary circuitry within a chassis. It's not a substitute for the main article, which covers a much wider range and includes transformerless power supplies (see Small, Low Current Power Supplies - Part 1).
There is no doubt that the traditional transformer based supply is the safest and has the highest reliability. It is extremely easy to ensure that no live connections are accessible, often needing nothing more than some heatshrink tubing to insulate joined wires. Note that if possible, two layers of heatshrink should be used to provide reinforced insulation over joined wiring. I have linear supplies that are over 50 years old, and they remain functional to this day. The same cannot be expected of switchmode supplies! Good ones can still survive for a reasonable time, especially if they are operated in free air (without the original enclosure). The lower the operating temperature, the longer they will survive. Protection from accidental contact is very important though, and is harder with a SMPS than a simple transformer based linear supply.

A 50/ 60Hz transformer has full galvanic isolation and requires little or no EMI filtering, leakage current is extremely low, and a well made transformer based supply is so reliable that it will almost certainly outlive any equipment into which it is installed. While it's usually not the cheapest option, a transformer provides a reasonable attenuation of common mode mains noise, and the final supply can be made to be extremely quiet, with virtually no hum or noise whatsoever. No-load efficiency is not as good as a modern SMPS, but the 'wasted' power is generally no more than a couple of watts. Yes, you pay for it, but it won't be noticed on your electricity bill.

The next best option is a modified plug-pack SMPS or a purpose built chassis mounting SMPS. These are useful where high efficiency is needed, along with very low standby power requirements. They are rather (electrically) noisy though, and the full range of voltages is not available. Where possible, design circuits to suit available voltages (12V is always a safe bet, and that's used throughout this article), rather than trying to find a supply that provides an 'odd' voltage. An example is 30V - it's a nice round number, but try to get a 30V supply that you don't have to build yourself!
+ +![]() ![]() |
![]() | + + + + + + + |
Elliott Sound Products - Switchmode Power Supply Primer
The linear power supply is far from dead, but in commercial products switchmode power supplies (SMPS) and switching regulators have pretty much taken over. While the circuit complexity is far greater, there are significant savings to be made with the transformer in particular. Where a linear supply transformer needs to operate from 50Hz or 60Hz mains, the switching equivalent typically operates at 25kHz or more, so it's a lot smaller.

For regulators, the saving is in the heatsink. A linear regulator with an input voltage of 22V DC and an output voltage of 15V will dissipate 7W for each amp drawn by the load. For low currents this isn't a problem - a preamp power supply may only draw 100mA or so at most, so the dissipation will be 700mW and a small heatsink is all that's needed. At higher currents, the losses become far greater. With any linear regulator, the losses are dissipated as heat, and there is no way to get rid of it other than by using a heatsink.
A switching regulator has far lower losses, but more importantly, it also provides a transformation of input and output power. For example, if we assume for a moment that the switching regulator is 100% efficient (no losses at all), the power output must equal the power input. If the input voltage is 22V, output voltage is 15V and load current is 1A we can do a quick calculation ...
Power Output = Power Input = 15W
Output current = 1A
Input current = 15 / 22 = 682mA
We have an output current that's greater than the input current. This is important, and compared to a linear equivalent represents a significant gain. Now, work out the linear regulator's loss if the input voltage is increased to 30V. We are still drawing 1A, so now the loss is 15W - 15V across the regulator and 1A output current. The input power is 30W, and half of that is effectively thrown away. Of course there are losses with a switching regulator, typically copper and core loss in the transformer or inductor, switching losses in the BJTs, MOSFETs or IGBTs, and power needed to run the controller circuitry.
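The arithmetic in the two examples above (7W per amp dropped linearly from 22V to 15V, and 682mA input current for an ideal switcher) can be checked directly. This is only the power-balance reasoning from the text, not a design tool:

```python
# Linear vs ideal switching regulator, per the worked example in the text.
V_IN, V_OUT, I_LOAD = 22.0, 15.0, 1.0

# Linear: the series pass device drops (Vin - Vout) at the full load current.
linear_loss = (V_IN - V_OUT) * I_LOAD    # 7 W dissipated as heat

# Ideal (lossless) switcher: Pout == Pin, so the input current is lower.
p_out = V_OUT * I_LOAD                   # 15 W
i_in = p_out / V_IN                      # ~0.682 A

print(f"linear loss = {linear_loss:.1f} W, "
      f"switcher input current = {i_in*1000:.0f} mA")
```

Re-running it with `V_IN = 30.0` reproduces the 15W linear loss quoted above, while the ideal switcher's input current simply drops to 500mA.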
The same thing applies to a full-blown 'off-line' (powered directly from the AC mains) SMPS. It can not only perform the power conversion with greater efficiency, but it can be designed to provide a fully regulated output as well. So, not only has the transformer size been reduced by a factor of 20 or more, but there's no need for additional circuitry to provide a regulated output.

It would seem to be an 'all-win' situation, but naturally that's not really the case. The SMPS operates at anything from 25kHz up to 150kHz or more and uses square-wave switching. As a result there is noise generated that extends well into the radio frequency bands (> 1MHz) and the DC output will also have noise superimposed on it. This noise is notoriously difficult to remove, and it can radiate a considerable distance, using the mains and DC leads as antennas.

However, switchmode supplies and regulators are here to stay, and the noise is something that we just have to deal with. This article explains the basics of how switchmode power supplies and regulators work. It's not a design guide, and it contains no project circuits for you to build. The intent is to provide a good explanation of how each type of circuit functions, because there is surprisingly little information available that isn't either highly technical or 'dumbed down' to the point where it's of little practical use.
| | Linear | Switchmode |
| --- | --- | --- |
| Size/ Weight | Large/ Heavy | Small/ Light |
| Efficiency | 30 - 40% (see note) | 70 - 95% |
| Complexity | Low | High |
| Design Skills Needed | Low - Medium | High - Very High |
| EMI | Low Noise/ Low Frequency | High Noise/ High Frequency |
| Cost | Medium - High | Low - Medium |
Note that the efficiency of a linear supply shown in the table assumes it's regulated. For a transformer, bridge and filter cap unregulated supply, efficiency can be better than 85%, assuming a transformer rated for more than ~100VA. Small, low current mains transformers are notoriously inefficient though, and the table reflects that.
The table above shows the relative differences between a linear power supply or regulator and its switchmode equivalent. When switching supplies first started to appear, they were very expensive and the required parts were uncommon. All the parts needed are now readily available and comparatively cheap, helped along by the proliferation of SMPS and the need to get the highest efficiency possible. The latter has become an imperative as electricity prices have risen dramatically, and governments worldwide have imposed minimum efficiency standards on many products. Some of these are wide-ranging and divisive - the phase-out of incandescent lamps being a case in point. All 'new' lighting products (LED, CFL) use switchmode supplies.
The general principles described here are also applied in an area that you may not have expected - induction cooktops. These rely on a steel base in the saucepan, and the current that flows through the coil induces large eddy currents in the base of the pan, causing it to get hot. The switching devices operate at between 25kHz and 60kHz, and most use IGBTs (insulated gate bipolar transistors) for their high voltage rating and power handling.
In a switchmode power supply, eddy current losses in the core are undesirable, and most use ferrite cores because they have much lower losses than the silicon steel laminations that are used for lower frequencies. There are many formulations of ferrite, which is a ceramic material containing minute iron oxide and other magnetic particles, with each insulated from the other so that eddy currents are much lower than would otherwise be the case. Ferrites used for SMPS applications usually have high magnetic permeability and high electrical resistance. All forms of ferrite are fragile and easily chipped or broken if mishandled.
The advent of switchmode supplies has brought with it a whole range of new terms, some of which are self-explanatory and many that are not. The following is not extensive, and covers the basics only. Where power levels are given, these are a guide only, and examples of much higher/ lower power can be found.
Buck Regulator - One of the most common switching regulators. The output voltage is lower than the input voltage.
Boost Regulator - Another common switching regulator. The output voltage is higher than the input voltage.
Buck-Boost Regulator - Provides a fixed regulated output regardless of input voltage (at least within the design range). Output voltage is inverted.
SEPIC/ Cuk - Buck-boost topologies that use capacitors and inductors for energy storage.
Flyback - Simple SMPS topology, suitable for low to moderate power output (usually no more than 150W).
Forward Converter - Medium to high power SMPS (50 - 200W). Higher efficiency than flyback, but also more complex and costly.
Push-Pull - Medium to high power SMPS, up to 1,000W. Semiconductor switches are stressed by a voltage that's double the input voltage.
Half-Bridge - Medium to high power SMPS applications, typically up to 500W but often much more.
Full-Bridge - High power SMPS, from 500W up to the maximum possible for a given mains circuit (10A, 20A, etc.). 2kW and more is not uncommon.
Switch - BJT, MOSFET or IGBT, depending on desired power level from converter and switching frequency.
Duty Cycle - The ratio of on time to off time for switching devices. A squarewave has a 50% or 1:1 duty cycle.
Reset - When applied to magnetics, this refers to a sequence that returns the core to a demagnetised state to prevent asymmetrical operation and/or saturation.
PWM - Pulse Width Modulation, used in all regulators and many off-line switchmode supplies. PWM changes the duty cycle.
Galvanic Isolation - This means there is no ohmic connection between primary and secondary. Some capacitive coupling will always be present.
EMI/ EMR - Electromagnetic interference/ radiation. Noise that may affect other nearby equipment and is difficult to filter out.
CCM - Continuous Conduction Mode. The magnetic flux in the transformer or inductor does not fall to zero from one switching (commutating) cycle to the next.
DCM - Discontinuous Conduction Mode. The flux in the transformer or inductor falls to zero from one commutating cycle to the next.
Hold-Up Time - The length of time the SMPS can provide normal operation after input power is disconnected (allows for momentary interruptions).
The above covers the most common circuit topologies and terminology, and topologies are discussed in a little more detail in the next section. Greater analysis will be performed later in this article, but we need to be acquainted with the basics first. Any discussion of magnetics (inductors and transformers) will be limited to a few basic parameters, as there is far too much involved to go into any great detail.
While most SMPS types use a controller IC to provide regulation and other 'housekeeping' functions (current sensing, soft start, protection circuits, etc.), in some cases the SMPS will be self-oscillating, and may use very crude techniques overall. In some cases this even extends to high power supplies used for large amplifiers and other applications where you might imagine that a few dollars extra would have been money well spent. For a simple boost regulator used in an LED torch (flashlight) you can accept that the cost will be kept to a minimum, but in other cases it would seem prudent to use a slightly more complex circuit to get the best performance. However, some high-power self-oscillating designs work surprisingly well ... at least until they fail.

There are two SMPS projects on the ESP website. The first (Project 69) is a low power supply that's largely intended for people to dabble with a switching supply without breaking the bank. The second (Project 89) is designed to run power amps in a car from ±35V using the car's 12V supply (typically 13.8V when the engine is running). Both use the SG3525 controller - one of the few that has survived for any length of time.
The following circuits and descriptions are very basic, and only show the essential ingredients of each design. There will always be additional components used, not only for the control circuitry, but also resistor/ capacitor (or resistor/ capacitor/ diode) snubber circuits in parallel with inductors and transformer primaries. These circuits suppress high voltage spikes caused by leakage inductance. There will also be additional filtering for both inputs and outputs to reduce EMI.

Where a magnetic component (inductor or transformer) has more than one winding, the start of the winding is shown by a dot. All topologies using dual (or more) windings on a single core require that the primary and secondary are properly phased to ensure that the circuit operates as intended. If windings are not correctly phased, the circuit either won't work or will fail when power is applied.
Buck converter: One of the simplest, cheapest and most common topologies. This topology does not provide isolation between primary and secondary, but it is ideal as a DC-DC converter used to step down from a high voltage to a lower voltage. Typically has high efficiency, and works well at high power levels. The down side to buck converters is that the input current is always discontinuous, resulting in higher EMI. The buck topology only requires a single inductor.

Boost converter: Like the buck converter, non-isolating. The boost topology steps up the voltage, so the output voltage is greater than the input. The boost converter generally operates in continuous conduction mode (CCM). It is used in SMPS designs as an active power factor correction (PFC) circuit, and is also used to provide higher operating voltages for circuitry powered by 5V (such as USB devices).

Buck-boost: These converters can either step the voltage up or down. This topology is very common in battery powered applications, where the input voltage varies depending on the state of the battery. It has the disadvantage of inverting the output voltage, but if the source is a battery or some other form of floating supply this is easily corrected by reversing the source voltage's polarity. Buck-boost converters use only a single inductor.
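For reference, the standard textbook transfer functions for these three non-isolated topologies (ideal, lossless, continuous conduction) can be written down directly. Real converters lose a diode drop and resistive losses on top of these figures:

```python
# Ideal CCM transfer functions for the basic non-isolated topologies.
# d is the duty cycle (0 < d < 1), v_in the input voltage.
def buck(v_in, d):
    return v_in * d                 # steps down: Vout = D * Vin

def boost(v_in, d):
    return v_in / (1.0 - d)         # steps up: Vout = Vin / (1 - D)

def buck_boost(v_in, d):
    return -v_in * d / (1.0 - d)    # steps up or down, output inverted

for d in (0.25, 0.5, 0.75):
    print(f"D={d}: buck={buck(12, d):.1f}V, boost={boost(12, d):.1f}V, "
          f"buck-boost={buck_boost(12, d):.1f}V")
```

Note that at 50% duty cycle the boost converter doubles the input (12V in, 24V out), which matches the flyback waveform behaviour discussed later in this article.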
SEPIC: Single-ended primary-inductor converter and Cuk (named after its inventor) topologies both use capacitors for energy storage, as well as two inductors. With SEPIC designs, the two inductors can either be separate or a single component in the form of a coupled inductor, but Cuk designs use two separate inductors. Both topologies are similar to the buck-boost topology in that they can step-up or step-down the input voltage, making them ideal for battery applications. The SEPIC has the additional advantage over both the Cuk and the buck-boost in that the output is non-inverting.

Flyback: These converters are essentially a buck-boost topology that uses a transformer for isolation and as the storage inductor. The transformer provides isolation by means of separate windings, and by varying the turns ratio the output voltage can be adjusted. Since a transformer is used, multiple outputs are possible. The flyback is the simplest and most common of the isolated topologies for low-power applications. They are well suited for high output voltages, but the peak switching currents are high. The flyback topology is not suited to output current above 10A.
One advantage of the flyback topology is that most of the other isolated topologies require a separate storage inductor. Since the flyback transformer is also the storage inductor, no separate inductor is needed. Because the rest of the circuitry is simple, this makes the flyback topology a cost effective and popular choice. Operating mode is discontinuous.
Forward Converter: Basically a transformer isolated buck converter. The forward converter is best suited for medium power applications. While efficiency is comparable to the flyback, it does have the disadvantage of having an extra inductor on the output and is not well suited for high voltage outputs. The forward converter has the advantage over the flyback converter when high output currents are required. Since the output current is non-pulsating, it is well suited for applications where the current is in excess of 10-15A, as a comparatively small output capacitor is needed.

Push-Pull: This topology is essentially a forward converter with a centre-tapped primary winding. This utilises the core of the transformer more efficiently than the flyback or the forward converters. However, only half the copper in the winding is being used at any one time, increasing the copper losses. For similar power levels, a push-pull converter will have smaller filters compared to a forward converter.
The main advantage that push-pull converters have over flyback and forward converters is that they can be scaled up to higher powers. Switching control is critical with push-pull converters, because 'dead time' has to be provided to ensure that both switches are never on at the same time. If both switches conduct simultaneously, equal and opposite flux is created in the transformer, resulting in a low impedance and a very large shoot-through current through the switches, destroying them. Peak switch voltage is at least double the input voltage, and high voltage MOSFETs are required. Push-pull converters are common for converters that operate with relatively low input voltages (12V to 48V or so).
Half-Bridge: These converters can be scaled up well to high power levels and are similar to the forward converter topology. Half-bridge also has the same issue of shoot-through current if both switches are on at the same time. The duty cycle is therefore limited to about 45%. The half-bridge switch voltage is equal to the input voltage, and this makes it much more suited to 250VAC and PFC applications.

Full-Bridge: These require four switches and fairly complex control circuitry. The full transformer primary is used, and like the push-pull and half-bridge the maximum duty cycle is limited to about 45% to prevent shoot-through current in the switches. Full-bridge converters are suitable for very high power and often use IGBTs rather than MOSFETs.

Resonant LLC: This is a half or full-bridge topology that uses a resonant tank circuit to reduce the switching losses. All switching is done with (close to) zero voltage across the switching devices. These converters scale up well to high power levels and have very low losses in the switching devices. They are not well suited for stand-by mode power supplies because the resonant tank circuit needs to be energised continuously.

The resonant LLC also has an advantage over both push-pull and half-bridge topologies in that it is suitable for a wide range of input voltages. The main disadvantages of this topology are its complexity, design difficulty and cost. However, it remains popular because the stresses on the switching devices are reduced.

While there are many variations on the above, the brief descriptions cover the important points. Not all will be covered in detail below, because this is an article, not a book. In every case though, the transformer or inductor is the heart of the circuit, and is the most difficult to design. High switching speeds mean that the skin effect becomes problematic - this is the effect where current migrates to the outer layer of the conductor. The core itself is also critical, and some topologies require a gapped core due to an effective DC component, while others do not. These topics will be covered in greater detail further on.

An important point to make is that inductors (including 'transformers' used for energy storage) store energy in the core as magnetic flux. Energy is not stored in the winding(s). Leakage inductance is a by-product of winding an inductor or transformer, and is almost always undesirable. PCB traces and other wiring also add inductance that can ruin the performance of a switchmode regulator or power supply. Leakage inductance is inductance that exists on both sides of the transformer, but the primary is the most critical. While it's added to the total primary inductance, its 'charge' is not transferred to the load and must be dissipated by a snubber network or clamp. It is the result of imperfect coupling between the windings and magnetic flux 'leakage'.
The drawing shows the equivalent circuit of a transformer. This is pretty much a 'universal' model, and almost everyone who discusses transformers will use a similar circuit. The model assumes that there is no saturation. This is a non-linear function that's very difficult to model accurately without dedicated software, and the closest most people get is to include a resistor (RP) to represent the core loss. Of particular interest is leakage inductance, as this causes voltage spikes in a switchmode circuit. Note that the 'ideal transformer' is lossless, has no resistance or capacitance, and has infinite inductance.
A critical aspect of the design (and understanding) of switchmode supplies is the behaviour of an inductor when a voltage source is connected. Inductance causes the current to rise relatively slowly, so when the voltage is connected there is initially zero current, and it rises exponentially towards a value limited by the internal winding resistance. All SMPS limit the time a voltage is applied across an inductance so that the current never has time to rise above the peak design value. As the frequency is increased, the amount of inductance needed is reduced.
It's also important to understand the difference between an inductor (even though it may have more than one winding) and a transformer. Transformers do not store energy - it is passed directly to the secondary as an alternating voltage. Flyback converters use a multi-winding inductor, and it is technically incorrect to call it a transformer because of this. However, energy is passed from the primary to the secondary via transformer action (due to the coupled inductors), so it's really a moot point. Most people will continue to call flyback inductors 'transformers' whether it's technically correct or not, and that includes me.

As noted above, when a voltage is applied to an inductor, the current builds from zero to some value determined by the inductance, applied voltage and time. For the waveforms shown below, the inductance of the choke or transformer is 200µH, the voltage is +12V (±12V for the transformer) and the frequency is 100kHz. A complete cycle takes 10µs. The duty cycle is 50% (5µs on, and 5µs off). Duty cycle may also be expressed as a number between 0 and 1, so 1:1 = 50% = 0.5 as an example.
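For the figures just given (200µH, 12V, 5µs on-time), the current ramp during each on-period follows from di/dt = V/L, ignoring winding resistance. A quick check of the arithmetic:

```python
# Current ramp in an ideal inductor: delta_I = V * t / L.
L = 200e-6     # 200 uH, as used for the waveforms in the text
V = 12.0       # applied volts
T_ON = 5e-6    # 5 us on-time (50% duty cycle at 100 kHz)

delta_i = V * T_ON / L
print(f"current ramp per on-period: {delta_i:.2f} A")   # 0.30 A
```

This 0.3A ramp per cycle is why the on-time must be limited: leave the voltage applied for ten times as long and the current climbs ten times as far (until resistance or core saturation intervenes).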
Inductors are often used as a 'choke' - part of a filter circuit with a low-pass characteristic. When used this way, the output voltage is determined by the duty cycle of the input waveform. A DC input means that the input and output voltages are nearly equal, limited only by winding resistance. A perfect squarewave (50% duty cycle) means that the output will be half the input (a 0-12V squarewave gives a 6V output), and that's the first thing we'll look at. It's important to understand that the reactance of the inductor must be high for the frequency used. A 200µH inductor has a reactance of almost 126 ohms at 100kHz, and the load needs to be lower than that before the effect of the inductor is fully realised. The waveforms shown below were taken after steady state conditions were reached.
As can be seen, when the input voltage is at +12V, the inductor current (blue) ramps up, as does the output voltage. When the input falls to zero, the inductor current ramps down. Note that the inductor current never falls to zero - this indicates CCM (continuous conduction mode). The output voltage can be seen to vary between 5.4V and 6.6V, and the average is 6V - exactly as expected. The small diagram is indicative of the output filter that's used with most PWM switchmode supplies, but does not include the capacitor. The idea is to show what the inductor does by itself.

The flyback technique is very different in all respects. This is the arrangement used for boost converters and flyback SMPS. The input is 12V DC, and the other end of the inductor is shorted to earth/ ground via the switch. Again, the switching has a 50% duty cycle, and it's immediately apparent that the peak output voltage averages double the DC input. Although shown with a resistive load, it should be obvious that adding a diode and capacitor will maintain a voltage across the load that's higher than the DC input. Note that the average voltage across the load is still 12V, but the peak is double, at 24V.

When the switch closes, current again builds in the inductor, and the energy is stored as a magnetic field. When the switch opens again, the back-EMF from the inductor tries to maintain the same current through the windings. Because there is a load that absorbs energy, the magnetic field collapses and power is sent to the load. The incoming supply is 12V DC, so the flyback energy 'stacks' on top of the existing voltage, producing an average peak value of 24V. The actual average is 12V, and it only becomes useful if transformed (via a second winding on the inductor) or the peak value is used via a diode and capacitor.
Now there is a diode and capacitor added between the inductor/ switch and the load. Current driven by the flyback voltage from the inductor can now flow only one way, via the diode. It charges the capacitor and supplies load current. The current waveform still behaves as before, but the average output current has now risen dramatically because all the energy stored in the inductor is being used by the load. To maintain balance, the current increases, and the output voltage is now (almost) double the input voltage. About 1V is lost across the diode.
Average current is 4.63A, and with a 12V input that equates to 51.6 watts. The average output voltage across the load is 22.5V, so output power is 50.6W with around 1W lost in the diode. You can see that the output voltage falls when the switch is on, and rises when the switch turns off again. Current behaves as expected, and as shown above.
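The figures above can be sanity-checked against the ideal boost relationship less a diode drop. The 1V diode loss and 10 ohm load are taken from the text; the rest is the standard Vout = Vin/(1-D) expression. The result is slightly optimistic compared with the measured 22.5V/50.6W because switch and winding losses are ignored here:

```python
# Rough boost converter output estimate: Vout ~= Vin / (1 - D) - Vdiode.
V_IN = 12.0
DUTY = 0.5
V_DIODE = 1.0    # approximate diode loss quoted in the text
R_LOAD = 10.0    # load resistance used for the graphs

v_out = V_IN / (1.0 - DUTY) - V_DIODE
p_out = v_out ** 2 / R_LOAD
print(f"Vout ~ {v_out:.1f} V, Pout ~ {p_out:.1f} W")   # Vout ~ 23.0 V, Pout ~ 52.9 W
```

The ideal estimate (23V, 52.9W) overshoots the simulated figures by a few percent, which is a reasonable allowance for the losses the formula ignores.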
These interactions with inductors are very important, and if you don't know what to expect then you don't have much chance of understanding how switching regulators work. If at all possible, the reader should either build the circuits or at least run some simulations.
If you intend to build the boost converter test circuit shown above, the inductor should ideally be an air-cored type (such as a loudspeaker crossover coil). These can never give you a nasty surprise due to saturation, because air-cored coils don't saturate. They don't rely on a core at all. You will also need a fast squarewave generator, and a 555 timer configured as an oscillator will do fine. The frequency needs to be around 100kHz if you use a 200 - 220µH inductor. The switching device will be a MOSFET (IRF540 or similar) and you need a fast diode such as a MUR120 (200V, 1A). The load resistor will need to be no less than 47 ohms or the current will be too high (the 10 ohms used in the above examples was used for the graphs, and is not recommended for experimental circuits). Don't operate the circuit with no load, as the voltage will be too high for the capacitor (I've measured over 400V with no load while experimenting!). The maximum recommended load resistor is 1k, which will give an output of up to 50V with 50% duty cycle. Higher duty cycle (greater on time) means more output voltage.
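If you do use a 555 as the oscillator, the standard astable frequency formula f = 1.44 / ((R1 + 2·R2) × C) gets you close to the 100kHz target. The component values below are my own illustrative picks, not from the article, and with R2 much larger than R1 the duty cycle also lands near 50%:

```python
# Standard 555 astable frequency: f = 1.44 / ((R1 + 2*R2) * C).
# Illustrative values only (not from the article), aiming for ~100 kHz.
def astable_freq(r1, r2, c):
    return 1.44 / ((r1 + 2.0 * r2) * c)

f = astable_freq(1_000, 6_700, 1e-9)   # R1 = 1k, R2 = 6.7k, C = 1nF
print(f"f ~ {f/1000:.0f} kHz")         # f ~ 100 kHz
```

In practice stray capacitance and resistor tolerances will shift the frequency a little, but anywhere near 100kHz is fine for a 200 - 220µH inductor.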
There is an expectation that anyone building test circuits already has an understanding of basic electronic principles and experience with circuit construction.

Transformer waveforms are nowhere near as interesting as inductor waveforms, because a transformer simply transfers the input waveform to the output, stepped up or down based on the turns ratio. There is one example that is needed though, and that's an input waveform for push-pull, half or full-bridge SMPS circuits. The required switching waveform features a dead-time, when neither switch is on. The duty cycle of each switch is less than 50%, typically up to a maximum of 45%. The waveform and test circuit is shown below.

The waveform in Figure 4 is idealised - in reality it will be somewhat different because of the effects of the transformer's primary inductance, and it's also shown using an ideal transformer (having 'infinite' inductance, zero winding resistance and perfect primary-secondary coupling). As the inductance is reduced, the waveform changes and much of what you would see wouldn't make any sense. The principles don't change though. When Q1 turns on, that end of the winding is connected to earth, and the other end rises to +24V - this is simple transformer action. Conversely, when Q2 turns on, the other end of the winding rises to +24V. The peak-to-peak voltage across the winding is 48V.

The period where both switches are off is called the dead time, and without it there is the possibility of both switches being partially on at the same time. This causes what's known as 'shoot-through' current, a brief current spike that can produce enough momentary current flow to cause failure of the switching devices. The dead time is included to ensure that can never happen, as it results in extreme dissipation in the switches and drastically reduced efficiency - at least until the switches fail. When both switches are on, the momentary current is almost unlimited. 100A or more is easily achieved even with relatively low voltage circuits, and limited only by circuit resistances.

Although it's not shown in the waveform above, the dead-time creates a problem with a real transformer, because it will have leakage inductance. This causes voltage spikes when each MOSFET turns off, and a snubber circuit is almost always necessary. The snubber is a deliberately lossy circuit, and the spikes generated are absorbed and ultimately dissipated as heat. This is energy that can't be delivered to the load, reducing efficiency.

One thing that is absolutely critical is the waveform symmetry with push-pull, half-bridge and full-bridge converters. These converters use a transformer that has no air gap, and if the drive signal is not symmetrical a DC component will appear in the transformer's primary winding. This will cause partial (and asymmetrical) core saturation, and the supply will (not may - will) blow up. The half-bridge is a little bit more forgiving because one end of the winding is capacitively coupled, and the cap(s) will equalise the voltage across the winding. However, if the drive is asymmetrical, the output will be too, and there will be more ripple as a result. Likewise, the windings must be symmetrical as well, with the same number of turns and winding resistance for any dual primary configuration.

A reasonable understanding of all the concepts seen in this section will be needed when we examine the basic circuits. In a switchmode regulator or power supply, a microsecond is a long time, and a fault lasting only a few µs can cause instantaneous failure. It can take a while before you get your head around the idea of such short timing sequences, but every test and experiment shown can be performed using a 1kHz switching circuit. What that means in the real world is that the size of the inductor and any filter capacitors increases dramatically, but the principles are unchanged. If you reduce the frequency by a factor of 100, then the inductance and capacitance needed increase by the same amount. It's easy to understand why high switching frequencies are used - everything is smaller and cheaper.
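The inverse scaling just described (drop the frequency 100-fold, and the required L and C rise 100-fold) can be expressed directly. This is only the proportionality from the text, not a full filter design; the 100µF starting value is an assumed example, not a figure from the article:

```python
# Scale inductance and capacitance when the switching frequency changes.
# Both scale inversely with frequency, per the rule of thumb in the text.
def scale_lc(l, c, f_old, f_new):
    k = f_old / f_new
    return l * k, c * k

# A notional 200 uH / 100 uF design moved from 100 kHz down to 1 kHz:
l2, c2 = scale_lc(200e-6, 100e-6, 100e3, 1e3)
print(f"L = {l2*1000:.0f} mH, C = {c2*1000:.0f} mF")   # L = 20 mH, C = 10 mF
```

A 20mH choke and a 10,000µF capacitor are perfectly buildable for a 1kHz bench experiment, which is exactly why low-frequency prototypes are a safe way to learn the principles.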
+ + +In the following descriptions, diodes will normally be either ultra-fast or Schottky. Standard diodes will fail, because they cannot turn off quickly enough. Schottky diodes have a lower forward voltage so dissipate less power. This improves overall efficiency. Switches may be BJTs, MOSFETs or IGBTs depending on power levels. The controller is simply shown as a 'black box', with a DC input, switch driver and feedback terminals.
+ +The feedback applied to the controller is used to vary the duty cycle of the switch control signal. It's generally assumed that the switching frequency is constant, but that's not always the case. Sometimes the frequency is 'dithered' or modulated to spread the RF interference over a range of frequencies, rather than one fixed value. With the standard RF noise tests, this usually results in a lower level of EMI and a passing grade for a circuit that may otherwise fail conducted or radiated emissions tests.
+ +In most cases, a longer 'on' time results in a higher output voltage, as this allows the inductor current to reach a higher level, thereby having more energy to transfer to the load. Squarewave inverters are not normally able to provide a regulated output unless there is an inductor in the secondary circuit, and the output waveform from the converter uses PWM. In some cases, regulation may be provided by an active power factor correction circuit that produces a constant regulated voltage to the inverter.
+ +It's also common for SMPS to use 'hybrid' technology. One that's fairly common is to combine flyback and forward converter techniques to create what's sometimes known as a 'forward-flyback' topology, where the design utilises both techniques simultaneously. These are most commonly found in small supplies, with outputs of up to 30W or so.
+ +In the circuits that follow (Figures 5 to 9) there is no galvanic isolation between the input and output. These circuits cannot be used where isolation is required. Figures 5 and 6 include component values and switching duty cycle, based on a 50kHz switching frequency. The inductor used for both is 220µH with a 1 ohm series resistance.
+ +Component values are not shown for most of the remaining circuits.
+ +We'll start with step-down (buck) switching regulators, because they are easier to understand and will help lead the way into more complex topologies. There are many different circuits, and a great many variations on each, so it's necessary to limit the discussions to the most common types. Of these, the buck regulator is one of the most widely used, and appears in a wide variety of products.
+ +Figure 5 shows the essential elements of a buck converter. The switch is usually a PNP transistor or a P-Channel MOSFET because it's located in the incoming positive supply line. The switch can be located in the negative lead, but there's little to be gained by doing so. The output voltage is determined by the duty cycle of the switch. If it's on permanently the output is the same as the input (ignoring resistive loss in the inductor). Likewise, if the switch is off permanently, the output voltage will be zero.
+ +At a 1:1 duty cycle (switch on for 50% of the time), the output voltage will be close to half the input voltage (in continuous mode, Vout = D × Vin). The exact voltage is load dependent (particularly in discontinuous mode), and is also determined by the reactance of the inductor at the designed switching frequency. PWM is used to maintain the output voltage at the desired value.
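The ideal continuous-mode buck relationships can be sketched numerically. The 5V output matches the example described below; the 12V input is an assumed illustrative value, not taken from the article:

```python
# Ideal buck converter in continuous conduction mode:
#   Vout = D * Vin   (D = duty cycle, 0..1), ignoring all losses.
def buck_vout(vin, duty):
    return vin * duty

def buck_duty(vin, vout):
    return vout / vin

print(buck_vout(12.0, 0.5))   # 6.0 - 50% duty gives half the input voltage
print(buck_duty(12.0, 5.0))   # ~0.417 - lossless duty cycle for 12V -> 5V
```

In a real circuit the feedback loop sets the duty cycle slightly higher than the lossless figure, to make up for the resistive and switching losses.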
+ +When the switch closes, current flows through the inductor to the load and filter capacitor. The control circuit will adjust the on-time to ensure that the voltage is always at the preset value. When the switch opens, the energy stored in the inductor's core is returned to the capacitor and load via the diode, ensuring that as little total energy as possible is lost. Without D1, a very high (flyback) voltage would be developed across the inductor which would destroy the switching device. With the designed load current, a buck converter will normally operate in continuous mode. It will enter discontinuous mode with no (or very light) load.
+ +Losses in the switch, diode and inductor mean that the simplistic approach above does not hold true, but if the inductor is well designed the losses will be very low. Inductor quality comes at a price, so a very low loss component will cost more. Diode losses are minimised by using a Schottky device, and switching losses depend on the type of device used and the switching speed. Overall efficiency is typically around 85%.
+ +The component values and duty cycle shown are based on a simulation, and I've also run tests that validate the simulated results. The feedback will modify the duty cycle depending on the load, so if less current is drawn from the output, the duty cycle will be reduced to ensure that the voltage remains at 5V. Obviously, if the load is increased so too is the duty cycle. The average peak current will be around 1.1A with the conditions shown.
+ +Something to be aware of with all choke input filters such as a buck regulator, or at the output of most PWM converters, is shown above. The output voltage waveform initially overshoots the design value, then has a damped ringing waveform at a relatively low frequency (this is very much load dependent). The frequency is determined by the inductance (220µH) and capacitance (100µF), which in this case is 1.07kHz. The overshoot is made worse by a light (or no) load, because there's nothing to damp the circuit.
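The quoted ringing frequency follows directly from the standard resonance formula, f = 1 / (2π√(LC)). A quick check with the stated values:

```python
import math

# Resonant frequency of an LC output filter: f = 1 / (2*pi*sqrt(L*C))
def lc_resonance_hz(l_henry, c_farad):
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

f = lc_resonance_hz(220e-6, 100e-6)
print(round(f))   # 1073 - i.e. the 1.07kHz quoted above
```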
+ +If this isn't accounted for, the overshoot can easily damage sensitive components, so a soft-start arrangement is needed to gently ramp up the duty cycle when power is first applied. In the sections below, there are many SMPS that use this output filter, and all of them will show similar behaviour. The low frequency resonance also makes the feedback network more critical because there are phase shifts that must be accounted for to ensure a stable feedback loop.
+ +The same effect can be seen with SEPIC and Cuk DC-DC converters. It may also occur with boost or conventional buck-boost designs operating in boost or buck mode, depending on component values, load and duty cycle.
+ +Boost converters provide an output voltage that's higher than the input. This also means that the input current is higher than the output current by a ratio that depends on the step-up ratio of the converter. If the voltage is doubled, then the input current will be slightly more than double the output current. When the switch is closed, current builds up in the inductor, and when the switch opens the stored energy is dumped into the filter cap and load via D1.
+ +Boost converters rely on the flyback technique. The high voltage impulse developed when the switch opens is the load's source of voltage, which is effectively 'stacked' on top of the supply voltage. If the switch is permanently open, the load will get the incoming supply voltage, less the drops across the diode and inductor. The switch must never be allowed to close permanently, or a very high current will flow through the inductor, limited only by the resistance of the winding.
+ +A common usage for boost converters is for active PFC (power factor correction) in switchmode supplies. The rectified but un-smoothed mains voltage is boosted to around 420V DC, with cycle-by-cycle PWM used to ensure that the input current is very close to being sinusoidal. The DC is then provided to a DC-DC converter to provide the voltage and current required by the load. Power supplies with active PFC can achieve a power factor of 0.95 easily (a PF of 1 is ideal). Active PFC is a highly specialised use for boost converters.
+ +As with the buck converter shown above, the values shown are from a simulation and verified with bench tests. The peak MOSFET current is 1A for the conditions shown.
+ +The waveform for a boost regulator is interesting, because it shows the flyback response clearly. The MOSFET is turned on for 13µs and off for 7µs, which gives the 65% duty cycle as shown. When the MOSFET (Q1) is on, the voltage across it is close to zero, and the flyback pulse rises to a little over 48V when Q1 turns off. This boosted voltage has sufficient current capacity to charge C2 to 48V, as well as produce the load current of 100mA. The peak charge current is 990mA, and the output ripple will be about 15mV.
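The ideal continuous-mode boost relationship is Vout = Vin / (1 − D). The figures below (12V in, 48V out) are purely illustrative, since the input voltage of the example isn't restated here; a real converter needs a somewhat higher duty cycle than the lossless prediction, because of diode and winding losses:

```python
# Ideal boost converter in continuous mode: Vout = Vin / (1 - D).
def boost_vout(vin, duty):
    return vin / (1.0 - duty)

def boost_duty(vin, vout):
    return 1.0 - vin / vout

print(boost_vout(12.0, 0.75))   # 48.0 - lossless 12V -> 48V needs 75% duty
print(boost_duty(12.0, 48.0))   # 0.75
```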
+ +The behaviour of a flyback SMPS (covered below) is not much different, but the voltages are a great deal higher for 'off-line' (mains powered) converters. The principle isn't changed though. The ringing you can see happens when there is not enough energy left to charge the output cap, but it hasn't fallen to zero. The voltage 'bounces' up and down a few times, but ringing is stopped when the MOSFET turns on again. The effect is load and duty-cycle dependent, and can happen with any flyback circuit.
+ +The output voltage of a boost regulator can be anything from a few volts above the input supply to hundreds of volts (at low output currents). The MOSFET and diode must be able to withstand the full output voltage.
+ +The buck-boost topology allows the output voltage to be lower, higher or the same as the input voltage, but of the opposite polarity. The ratio of input to output voltage is determined by the duty cycle. In continuous mode, Vout = -Vin × D / (1 - D), so a 50% duty cycle means the output voltage is close enough to being the same as the input voltage, but with reversed polarity.
+ +A high duty cycle (greater than 50%) causes the output voltage to increase (become more negative) and a low duty cycle does the opposite. If the incoming supply is from a battery the polarity inversion is of no consequence. By reversing the supply's output connections, the output voltage will be the desired 'normal' polarity. With the designed load current, a buck-boost converter will normally operate in continuous mode. It will enter discontinuous mode with no (or very light) load.
+ +The voltage across the diode is equal to the sum of the input and output voltages. For example, with 12V input and -12V output, the diode's peak inverse voltage is 24V.
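The inverting transfer function and the diode's peak inverse voltage can be verified against the 12V example above:

```python
# Ideal (inverting) buck-boost: Vout = -Vin * D / (1 - D).
# While the switch is closed, the diode must block Vin + |Vout|.
def buckboost_vout(vin, duty):
    return -vin * duty / (1.0 - duty)

def diode_piv(vin, vout):
    return vin + abs(vout)

v = buckboost_vout(12.0, 0.5)
print(v, diode_piv(12.0, v))   # -12.0 24.0 - matching the example above
```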
+ +The SEPIC (single-ended primary-inductor converter) topology is useful when you need both voltage step-up and step-down. SEPIC converters use a capacitor and two inductors for energy storage, and have slightly higher efficiency than the standard buck-boost circuit. The topology has the advantage of being non-inverting, but this comes at a cost because of the extra components. The two inductors can be wound on a single core (coupled inductors) or they can be separate. The SEPIC topology is unique, in that leakage inductance is actually a benefit. This eases the design process for the magnetics.
+ +Although I've shown the coupled-inductor version, the inductors can be separate and even of different values. When coupled, the inductors will be equal values with the same number of turns on each section. Coupled inductor designs will typically provide an efficiency improvement over a separate inductor solution. The capacitor (C2) must carry slightly more than the full load current.
+ +The operation of this type of converter is far more complex than those shown so far. It is often described as being equivalent to a boost converter followed by a buck-boost converter, but with both controlled by a single switch. While this may be a convenient way to describe it, it fails to provide a real understanding of its operation. However, that description will have to do, because I'm not about to provide many paragraphs and drawings for one converter. There's plenty of information on the Net of course, so look it up if you are interested.
+ + +This regulator is superficially similar to the SEPIC, but is quite different. For a start, the output voltage is inverted, having a negative output for a positive input. While it may appear that you only need to reverse the diode (D1) to get a positive output, this changes operation dramatically and its input current rises alarmingly. Like the SEPIC, the Cuk converter uses two inductors, and they are not usually coupled. The capacitor carries the full load current.
+ +There doesn't appear to be any particular advantage of the Cuk over the SEPIC or vice versa, so the decision as to which one to use becomes the choice of the designer. Cuk converters do have the distinct disadvantages of requiring two separate inductors and having an inverted output.
+ +There is a coupled inductor version of the Cuk converter, and a bit more info used to be available at boostbuck.com but the site has vanished. The main claim to fame of the coupled inductor version is very low (possibly zero) output ripple. It is specifically recommended for use in PWM (Class-D) amplifiers, although I've not seen an example.
+ + +This next batch of converters are commonly used following a bridge rectifier and high voltage filter capacitor, powered directly from the AC mains. These SMPS provide full galvanic isolation between the input and output, and are used where isolation is required. Y-Class capacitors are usually connected between input and output for EMI suppression. These caps must be certified, and are generally no more than 4.7nF to minimise the chance of electric shock.
+ +However, the charge stored by even a 1nF cap is more than enough to damage sensitive circuitry, such as the inputs/outputs of opamps or digital circuits. Many devices using an SMPS will not use the mains protective earth, so the output of the supply may float at around half the mains voltage. This can (and does) cause equipment failures, most of which will be considered inexplicable by the average user. More detailed analysis of the input stage and filtering is provided below.
+ +As with the previous examples, the controller is simply shown as a 'black box', with a DC input, switch driver and feedback terminals. In reality, the controller will generally be a dedicated IC, although discrete transistors may be usable in some cases. Control circuitry is not included as part of this article, but may be provided in a later update if demand warrants a second article.
+ +In some squarewave converter applications (push-pull, half-bridge or full-bridge) the output is unregulated, and the output voltage will change along with the incoming mains supply voltage. In such cases, the inductor on the secondary side is not used, and the filter capacitor(s) charge to the peak value of the secondary voltage. Filtering is comparatively easy, because the off time is short and the filter caps can be a fairly low value. However, any ripple on the incoming rectified DC is passed straight through the converter, so the high voltage filter capacitor (shown in Figure 16) needs to be substantial.
+ +All diodes must be high speed types. For low voltages, Schottky diodes are recommended, and for higher voltages they must be fast or ultra-fast types designed for switchmode power supply use. The diodes will need to be mounted on a heatsink when appreciable current is expected. For example, with an output current of 5A, fast diodes will dissipate 4W each, and they will get very hot without a heatsink.
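A rough check of the quoted 4W figure, assuming a centre-tapped secondary where each diode conducts for half the time. The forward voltage values used here are typical assumptions, not figures from the article:

```python
# Approximate rectifier diode dissipation: average diode current times Vf.
# With a centre-tapped secondary each diode conducts ~50% of the time.
def diode_watts(i_out, vf, conduction_fraction=0.5):
    return i_out * conduction_fraction * vf

print(diode_watts(5.0, 1.6))   # ~4 W for a fast diode (Vf ~1.6V assumed)
print(diode_watts(5.0, 0.5))   # ~1.25 W for a Schottky (Vf ~0.5V assumed)
```

The comparison shows why Schottky diodes are preferred at low output voltages: the dissipation saving is substantial at high currents.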
+ +In each case, feedback is provided by an optocoupler (LED + photo-transistor) to maintain isolation between the primary and secondary sides. You will also see that only the secondary side is shown with an earth reference, because the input is referred to the mains and is decidedly user-hostile.
+ +The flyback topology is probably the most popular switchmode topology of all time. It's not especially efficient, but is relatively easy to design and is ideally suited to small off-line SMPS (direct to the mains with a bridge rectifier). Flyback supplies are used almost exclusively in 'wall' power supplies (i.e. 'plug-packs', 'wall-warts'), and are very common in lighting supplies (mainly LED) and in battery chargers for phones, tablets and many laptop PCs.
+ +The flyback topology is based on the boost converter, but uses a transformer instead of an inductor. This allows the output to be fully isolated from the mains, and by manipulating the turns ratio, any output voltage desired can be achieved. Multiple outputs are common, with regulation usually based on the main output - the one with the highest power output. Flyback supplies are not common for output powers of more than about 100W because other topologies provide far better efficiency at high power levels.
+ +When the switch closes, current builds in the inductor, creating a magnetic field. No power is supplied to the load while the switch is closed. When the switch opens, the back-EMF (the flyback voltage) transfers the stored energy into the load via D1. The normally very high back-EMF is clamped by the load, and is transformed by the turns ratio to produce the required output voltage. Note that the switch must never remain on, as that would represent almost a short-circuit across the supply.
+ +The transformer requires an air-gap because there is an effective DC component in the transformer primary current. This also means that the transformer is somewhat larger than it would be without the DC component. The gap isn't always a physical piece of plastic or paper - it's not uncommon for flyback circuits to use a powdered iron core, where the 'gap' is distributed as microscopic spaces between the iron particles.
+ +Voltage regulation is achieved by PWM. At no load, the switch will be on for a very short period, with the on-time increasing with increasing load. It's common to apply cycle-by-cycle current limiting to ensure that the primary current can never exceed the maximum allowed in the design. This provides automatic overload protection. Although not shown, a snubber network is (almost) always used in parallel with the primary to prevent voltage spikes caused by leakage inductance from damaging the switching device.
+ + +The forward converter is more efficient than flyback designs, and can be used at higher power levels. While the two look superficially similar, they are actually very different indeed. The first and most obvious difference is the second 'primary' winding. This is called a reset winding, and in conjunction with the diode it 'resets' the core to remove the DC component. The transformer does not require an air-gap. The reset winding is the one that has D1 in series, and has the same number of turns as the primary.
+ +Power is transferred to the load when the switch is closed, and the core is reset by the separate winding when the switch opens. No power is transferred to the load when the switch opens, and the transformer is used as a 'real' transformer rather than an energy storage device. Because there is no energy storage in the transformer, a secondary inductor is required, and it forms a choke-input filter (these used to be common with very high performance valve circuits, although not common for audio amps). A choke input filter is a design exercise in itself, and forward converters require more design skills than flyback types.
+ +Forward converters have a disadvantage over flyback designs in that there is a requirement for a second primary (reset) winding, and they need an inductor on the secondary side. Because the reset winding takes up space in the core's winding window, it's not possible to get very low primary resistance as the wire size must be reduced to fit the two windings. However, the output filter capacitor requirements are eased somewhat and forward converters can be built to operate at higher power than flyback.
+ +Note that there are many variations on the standard forward converter, and they may use two switches and no separate reset winding, or the reset circuit may even be on the secondary side. It's not possible to try to list or show examples of every kind, because there are so many.
+ + +For medium-high power applications, the push-pull topology is useful. It's not especially efficient in its use of the winding window because there are two primary windings, and it has a disadvantage in that the switches (usually MOSFETs) have to withstand double the input voltage. When the mains is rectified and smoothed, the voltage is 325V for 230V mains, so the switches must withstand a peak voltage of more than 650V. Allowing for voltage spikes caused by leakage inductance and also for high mains voltages (typically up to 267V RMS), MOSFETs rated for around 900V are needed.
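The switch voltage stress quoted above is simple arithmetic on the mains peak voltage:

```python
import math

# Push-pull switch stress: each switch must block twice the DC bus voltage
# (plus leakage-inductance spikes). The bus voltage is the mains peak.
def bus_voltage(v_mains_rms):
    return v_mains_rms * math.sqrt(2.0)

def pushpull_switch_stress(v_mains_rms):
    return 2.0 * bus_voltage(v_mains_rms)

print(round(bus_voltage(230.0)))             # 325 - DC bus from 230V mains
print(round(pushpull_switch_stress(230.0)))  # 651 - just over 650 V
```

Once leakage-inductance spikes and high mains tolerance are added, the ~900V MOSFET rating mentioned above follows.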
+ +Power is transferred to the load when either switch is on, and it uses normal transformer action. The switches must be carefully controlled to ensure that they can never be on simultaneously, and it's usual to limit the duty cycle to each of them to no more than 45%. This provides a dead-band, where both switches are turned off. The disadvantage of this is that the collapsing magnetic field creates a back-EMF that cannot be completely absorbed by the load. Leakage inductance generates spikes that must be tamed with snubber networks, further reducing efficiency due to the power lost in the snubbers.
+ +Regulation (if provided) relies on the use of an inductor in the secondary circuit, and the duty cycle of the switching waveform is varied. The output inductor then functions as a buck converter, with the PWM being applied to the primary of the transformer rather than in a secondary circuit. While shown with a centre-tapped secondary and two diodes, a single winding can be used with a bridge, or the output can be configured for dual outputs (e.g. ±40V for an audio power amplifier).
+ +Many push-pull SMPS have been built that are self-oscillating (using small additional windings to provide feedback), and most of these do not provide a regulated output. As the mains voltage changes, so does the output voltage. The self-oscillating topology is simple and fairly easy to understand, but is hard to get right. Failure is usually catastrophic, as it's difficult to provide good protection circuitry yet maintain a low component count.
+ +Push-pull converters can be seen to be similar to a push-pull valve amplifier's output stage, hence (at least in part) the name.
+ + +The half-bridge uses two switches, one to connect one end of the primary winding to the positive supply and the other to connect it to the negative supply. The other end of the transformer's primary is capacitively coupled to ensure that no DC component can exist in the winding. R1 and R2 are used to ensure the voltage across C1 and C2 is equal. The voltage applied to the primary is half the rectified and smoothed voltage, so with 230V mains, the primary voltage is about 162V peak, or 325V peak-to-peak. Since the applied voltage is somewhat lower than a push-pull design, the secondary windings require more turns, increasing the copper loss.
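The half-bridge primary voltage figures follow from the capacitive divider halving the DC bus:

```python
import math

# Half-bridge: the capacitor divider sets the primary swing to half the bus.
v_bus = 230.0 * math.sqrt(2.0)        # ~325 V rectified and smoothed
v_primary_peak = v_bus / 2.0          # ~162.5 V peak across the primary
v_primary_pp = 2.0 * v_primary_peak   # ~325 V peak-to-peak
print(round(v_primary_peak, 1), round(v_primary_pp))   # 162.6 325
```

Each switch only has to block the full bus voltage (~325V) rather than twice the bus as in the push-pull case, which is one reason the half-bridge is so popular for off-line supplies.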
+ +Power is transferred by normal transformer action when one or the other switch is closed. The controller must ensure that both switches can never be closed at the same time, or the shoot-through current will destroy the devices. The drive circuit is complicated by the fact that one of the switches is 'high-side', meaning it's at a high voltage and is not referenced to the negative supply. Fortunately, this is no longer the issue it once was, as high-side driver ICs are now commonly available for voltages up to 700V or so. Early designs used driver transformers which increased the cost.
+ +Like the push-pull design, self-oscillating SMPS exist using the half-bridge topology. The same issues apply, and again it's difficult to provide good protection circuitry with a self-oscillating design. There are countless ICs available for half-bridge converters, with some offering very comprehensive protection schemes. Cycle-by-cycle current limiting ensures that even a short circuit can be tolerated in some designs, with the controller entering a 'hiccough' protection mode (a fault condition causes the controller to enter a 'safe' mode where attempts to restart normal operation are made at perhaps 1 second intervals).
+ +Like the push-pull converter, regulation is only possible if the secondary side uses an inductor.
+ + +The full-bridge design uses four switches, and the primary is alternately connected to the supply, first with normal polarity, then with the polarity reversed. This effectively doubles the voltage across the primary, and fewer turns are needed on the secondary for a given output voltage. This class of SMPS can be used for very high power, and 2-5kW is not uncommon. The switches will often be IGBTs in very high power converters because they usually have lower losses than MOSFETs. When either pair of switches is activated, the primary is connected directly across the incoming DC supply. As the switch pairs alternate, the connection to the primary is reversed.
+ +Power is transferred using normal transformer action, when each pair of switches is activated. In the drawing, Q1 and Q2 will always be switched on together, as will Q3 and Q4. Q1 and Q3 are both 'high-side' MOSFETs so must be driven from an IC high-side driver, or they can use transformers. Only two driver transformers are needed because each can have two separate windings for each transistor's gate. If all switches are turned on at the same time, the incoming supply is shorted and instant failure will occur. While it is theoretically possible to build a self-oscillating full-bridge SMPS, it would probably be unwise to do so.
+ +These circuits will generally be used for high power, and the switching devices will be expensive, so it makes sense to include proper control circuits that offer good device protection, soft-start capability and fast, high current driver circuits.
+ +Like the push-pull and half-bridge converters, regulation is only possible if the secondary side uses an inductor.
+ + +Due to the complexity of this topology (at least if taken to the extreme), there is little detail in this article. Resonant switching systems offer very low switching losses because all switch commutation is performed (ideally) under zero voltage conditions. The basic circuit is shown below. L1 and C2 form a series resonant circuit, with the transformer being the second 'L' in the name. There are many variations, using series, parallel and series-parallel resonant circuits. Strictly speaking, not all are classified as 'LLC', although they may still use a resonant circuit. Unlike most of the other circuits, voltage regulation is often achieved by changing the frequency instead of using PWM.
+ +Choosing the resonance frequency and obtaining a stable feedback loop are both difficult, and a brief description cannot do justice to the engineering effort needed to get a stable circuit. One of the primary reasons for adopting this type of converter is to allow higher switching speeds. Switching losses increase with higher speed, but if a resonant circuit can ensure that switching is only performed when there is little or no voltage across the switch, losses become negligible.
+ +The switching frequency may be above, below or at the resonant frequency of the tuned circuit, depending on the desired outcome. Ideally, the current in the tuned circuit will be a sinewave, even though the driving signal is a squarewave. For anyone wanting more information, I suggest Fairchild AN-4151 as a good starting point.
+ +If this topology is used in what may be classified as a 'relaxed' implementation, it's actually fairly easy to make an SMPS. In particular, if it's used in an unregulated converter, many of the benefits can be obtained with little or no added complexity. It's a far better arrangement than a normal half-bridge when driving a capacitive load. The inductor (L1) is often obtained by deliberately winding the transformer in a way that ensures a high leakage inductance. This is normally something that has to be minimised, but it's used in the resonant LLC circuit to eliminate (or reduce the value of) the extra inductor.
+ +Resonant LLC is not restricted to half-bridge converters. It can also be used with full-bridge circuits. The only downside is the requirement for a capacitor (C2) that can handle the high current that's needed for very high power output. Fortunately, this isn't quite as hard as it may seem at first. Polypropylene caps are readily available that are rated for the high current experienced in this type of converter.
+ + +Probably the most common combination is the use of a high voltage boost converter, followed by a DC-DC converter. This arrangement is used to provide active power factor correction (PFC), and the incoming rectified AC is not fed to a storage capacitor. The boost converter operates from the pulsating full-wave rectified AC, and adjusts its duty cycle to ensure close to a sinusoidal mains current. The power factor can be as high as 0.97 in a well designed circuit. Unity (1.0) is ideal, and represents a resistive load where voltage and current waveforms are the same, with no phase displacement.
+ +Following the boost converter, the DC-DC converter has a stable and regulated input voltage of around 400-420V DC, and often secondary regulation is not required. There are SMPS designed specifically for LED lighting that use a tertiary switchmode regulator (although it may be combined with the main DC-DC converter). This will usually regulate the voltage to something that's above the LED array's normal operating voltage, but more importantly will regulate the current. Since LEDs have a low impedance, regulating the current provides a more consistent output over time, and ensures that small voltage variations don't cause excessive LED current and subsequent failure due to overheating. Current regulation is shown in the drawing, but not voltage regulation. In most cases, both are used. Voltage regulation is only included to ensure that filter capacitors aren't damaged by over-voltage.
+ +This class of power supply (with active PFC) has become very common in recent years because regulatory bodies worldwide are demanding high power factor from common appliances, luminaires, etc. With the current limiting shown, the above would be used for LED lighting, which requires a constant current output.
+ +Power factor is important because although the consumer only pays for the power consumed for residential premises, a poor power factor means that a higher than normal current flows, and the VA (volt-amp) rating of an uncorrected power supply can be as much as 3-4 times the power. For example, an uncorrected power supply might draw 100W at 230V, but the current may be 1A (230VA) instead of 434mA (100W). This gives a power factor of 0.43 (100W / 230VA).
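The arithmetic in the example can be sketched directly (power factor = real power / apparent power):

```python
# Power factor = real power (W) / apparent power (VA).
def power_factor(watts, v_rms, i_rms):
    return watts / (v_rms * i_rms)

def current_at_unity_pf(watts, v_rms):
    return watts / v_rms

print(round(power_factor(100.0, 230.0, 1.0), 2))    # 0.43 - as per the example
print(round(current_at_unity_pf(100.0, 230.0), 3))  # 0.435 - i.e. ~434 mA
```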
+ +Despite what you might read elsewhere, phase angle due to inductance or capacitance is not relevant to any switchmode power supply. The input is shown as pulsating 'DC' - it is rectified, but not smoothed. C1 will normally be no more than 470nF and is used to reduce switching noise, provide a low impedance source for L1, and ensure the circuit doesn't oscillate at an unwanted high frequency.
+ +In the above drawing, there is no large capacitor across the output of the bridge rectifier (the input to this circuit). It has simply been moved so it's after the inductor and diode (C3 & C4). This needs to be considered when we look at the next topic, because when power is applied, a high current can flow through L1 and D1 to charge the capacitor before the PFC controller can start to operate normally.
+ +The circuit shown is typical of a great many SMPS. The values of the components are the only difference, with a low power supply needing less filtering (C4) and smaller EMI caps (C1, C2 and C3). The bridge rectifier may be individual diodes or a bridge module, and only standard low speed diodes are needed because the input is 50/60Hz mains. The MOV (metal oxide varistor) is included to absorb voltage transients that may damage the circuitry. MOVs aren't generally used on very small supplies, but are common in most medium to high power units.
+ +When any power supply is connected to the mains, there is an initial current 'surge' as capacitors charge, or (in the case of a linear supply) the transformer core may momentarily saturate until steady-state conditions are achieved. Off-line switchmode supplies generally have an EMI filter, bridge rectifier and the main filter cap, and there is little to prevent very high current surges when power is applied. The general arrangement is shown below, and it's common to include an NTC (negative temperature coefficient) thermistor to limit the inrush current, at least to a degree.
+ +Thermistors only work when the current drawn from the supply is steady, and they ideally show a high resistance when cold, with the resistance falling when they get hot. It is expected that the steady state current will be high enough to ensure the thermistor remains at a low resistance, but if it runs hot it is dissipating power. Wasted power in a sealed supply is a problem, because it adds to the overall heat load and raises the temperature of everything inside.
+ +For lighting and many other applications, inrush current can be very limiting. For example, if a 100W lighting fixture normally draws 500mA (a power factor of 0.87), that means that (in theory) 16 luminaires can be operated from a single 8A lighting circuit, although a prudent installer will use fewer - 10 or 12 would be fine. However, if each draws an inrush current of 10A (many are a great deal worse!) the instantaneous current at switch-on can be as high as 120A, and that will cause fuses or circuit breakers to open. Such a high current is also well beyond the ratings for the light switch itself. Something that seemed to be quite alright is anything but, due to inrush current.
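The luminaire example works out as follows (a sketch of the text's arithmetic, nothing more):

```python
# Steady-state vs. inrush loading for the 100 W luminaire example.
fixture_current = 0.5       # A per fixture at PF 0.87
circuit_rating = 8.0        # A lighting circuit

theoretical_max = int(circuit_rating / fixture_current)   # 16 fixtures

installed = 12              # what a prudent installer would fit
inrush_each = 10.0          # A per fixture at switch-on
total_inrush = installed * inrush_each    # 120 A instantaneous
```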
+ +It's now becoming common for active inrush limiting to be used. The idea is to ensure that the current drawn at the moment of switch-on is no more than (say) double the operating current. In the above example, that would limit the inrush current to a maximum of around 24A, well within the ratings for fuses and circuit breakers. Active inrush protection adds more parts and cost, but many manufacturers have discovered that not including it causes big problems for installers and customers.
+ +There are many different schemes for inrush limiting, ranging from series resistors with a bypass switch (a relay, MOSFET or TRIAC for AC circuits) and NTC thermistors with or without bypass (not recommended, as this scheme does not work well and dissipates power needlessly), to purely active systems using one or more MOSFETs. While instantaneous power dissipation may be very high, it lasts for a short time - typically no more than 10 AC mains cycles or around 200ms. During the inrush period, the main switching circuits should operate with a low duty cycle, and it's common to provide a 'soft start' anyway, where the power level is increased gradually rather than providing full power from the instant power is available.
+ +Figure 17 also shows mains filtering components. There's an input capacitor (C1), one (sometimes more) common-mode chokes (L1), each followed by another capacitor (C2). These caps must be X-Class mains rated capacitors, typically designed for 275V AC or more. These filter components reduce conducted emissions - the RF noise that is conducted into the mains distribution circuits via the power supply's mains lead. C5 is (IMO) an abomination, but is very common. Where the supply does not have a protective earth, C5 will be wired to one or both of the AC inputs instead. The purpose of C5 is to reduce EMI, mainly what's called 'radiated emissions' (noise that's transmitted in the same way as broadcast signals).
+ + +Buck, boost and other non-isolated circuits can use a direct coupled feedback circuit, because the input and output share a common connection. The feedback circuit is designed to change the switching duty cycle so that as the load increases and the output voltage falls, the duty cycle is increased to provide more power and bring the voltage back to the design value. Feedback networks can be very challenging, because there are several time constants involved and if the feedback is too fast the output can become unstable. If it's too slow, a load change results in an output that takes a relatively long time to correct itself. Feedback network design is complex, and it must result in a circuit that's unconditionally stable. No value of load should cause the voltage to change substantially or become unstable (typically oscillating around the design value but never settling).
+ +Of the isolated designs, flyback converters are relatively easy to regulate, because it's a simple matter to change the duty cycle of the main supply to obtain a regulated output and it's not hard to keep the feedback network stable. Because all secondary windings are close coupled, different voltages can be provided, but with only one having active regulation circuitry. The ratio of all output voltages is determined by the turns ratio - not just between primary and secondary, but also between multiple secondaries. If the voltage for (say) the 5V supply is regulated, then ±12V supplies are set by the turns ratio and will track well over the designed current range. It's common that many supplies require a load on the regulated output before the others will be accurate. The load depends on the circuit used and maximum power output, but may range from 100mA to 5A or so.
+ +With isolated topologies, the feedback loop usually includes an optocoupler - an LED and photo transistor in the same light-proof IC. These are designed to maintain galvanic isolation between input and output circuits, usually to withstand voltages up to around 4.5kV. The LED is driven from a voltage sensor, which can be as simple as a zener diode or as complex as several ICs. In some cases, the LED is driven from a current sensor as well as the voltage sensor. This is done either to provide short circuit protection, or to allow the supply to operate in constant current mode (for driving lighting LEDs for example).
+ +The voltage sensing is done by a zener diode, and if the output voltage exceeds (a little over) 36V the duty cycle will be reduced as the LED in the optocoupler turns on via Q1. The current is regulated to 1A, so with any load from zero up to 1A the voltage will remain at 36V, but a greater load will limit the current and the voltage will fall to ensure that the 1A design current is maintained in the load. The 1 ohm current sensing resistor will cause a voltage drop of 1V at maximum output current, and the resistor dissipates 1W.
+ +A lower resistance can be used, but the voltage drop then needs to be amplified so there's enough to turn on the transistor. Please note that the above is not intended to be a precision feedback circuit, and there will be variations to the design values due to component tolerance. I have shown essential resistors to limit fault or transient current, namely R2 and R3, but again the drawing is only intended as an example, and is not a final circuit.
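The sense-resistor figures above are easy to verify, and the same arithmetic shows why a lower value needs amplification (the 0.65 V threshold is the usual silicon-transistor assumption, not a value from the text):

```python
# Dissipation in the 1 ohm current-sense resistor at the 1 A design current.
i_out = 1.0
r_sense = 1.0
v_sense = i_out * r_sense        # 1 V drop, as stated in the text
p_sense = i_out ** 2 * r_sense   # 1 W dissipated

# With a 0.1 ohm resistor only 100 mV is developed, so roughly 6.5x
# gain is needed before a transistor (~0.65 V Vbe) will turn on.
r_low = 0.1
gain_needed = 0.65 / (i_out * r_low)
```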
+ + +The term 'magnetics' covers all inductors, chokes and transformers. Cores are almost invariably ferrite, usually a manganese zinc type, and there are many different formulations with differing properties. One of the hard parts of the design is determining the best formulation for the application, deciding on the maximum flux density you wish to use, and deciding how hot it should run. Many ferrites achieve their optimum performance at elevated temperatures. The core material also has to be chosen based on the operating frequency, as a ferrite designed for 25kHz operation may not be able to be used efficiently at 150kHz.
+ +Proper magnetic design will normally ensure that the total losses will be split between copper loss (caused by winding resistance in all windings) and core loss. This is a careful balancing act, and requires knowledge and skills that are well outside the normal knowledge base of most electronics designers. Core loss depends on the switching frequency, flux level and temperature. Many core types show the lowest losses at a temperature of around 80°C or so, and if you ever wondered why SMPS transformers seem to run hotter than expected, this could be one of the reasons.
+ +Switchmode converters of all types operate at high frequencies. The optimum frequency is a trade-off between core size and switching losses. Low frequency operation means that the dynamic switching losses are reduced, but that also means a larger core. At high frequencies, tiny amounts of leakage inductance become a problem, skin effect causes higher than expected copper loss and dynamic switching losses increase. Operating frequencies between 25kHz and 100kHz are common and reasonably trouble-free, and generally still allow the ferrite core to be acceptably small even for surprisingly high power converters.
+ +Where a conventional 50/60Hz transformer has many primary turns (perhaps 3-5 turns per volt for a mid sized transformer), the transformer for an SMPS is wound at a few volts per turn. A winding intended for 325V peak-to-peak (~160V RMS) may use 2-3 volts per turn, and may have only 50 turns or so on the primary. During the design phase, the engineer will calculate the maximum flux density based on the peak current, frequency and number of turns. The saturation flux density is usually taken to be around 350mT (milli-tesla), and for most topologies it is critical to ensure that the core doesn't saturate. It's generally wise to keep the flux density far enough below the maximum to ensure that the final design has a safety margin. Typical SMPS designs will keep the flux density to no more than perhaps 200-250mT. Compare this to a conventional mains transformer, where the peak flux density at idle can be as high as 1.4 tesla.
+ +Note: 1T = 10,000 gauss (older terminology), so 200mT = 2kG (kilogauss). Higher frequencies mean that flux density must be reduced.
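As a rough illustration, the peak flux density for a square-wave-driven transformer follows the standard relation B = V / (4 × f × N × Ae). The switching frequency and core area below are assumed values for the sketch, not figures from the text:

```python
# Peak flux density for a square-wave drive: B = V / (4 * f * N * Ae).
v_winding = 162.5     # V (half of the 325 V p-p figure in the text)
frequency = 50e3      # Hz switching frequency (assumed)
turns = 50            # primary turns, per the text
core_area = 1.6e-4    # m^2 effective area -- an assumed mid-sized core

b_peak = v_winding / (4 * frequency * turns * core_area)   # tesla
volts_per_turn = v_winding / turns                         # 3.25 V/turn
```

With these numbers the core runs at roughly 100 mT, leaving the safety margin below 200-250mT that the text recommends.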
The flux density in a core is determined by the current and the number of turns. Provided the core is not saturated, increasing the number of turns reduces the flux density for a given (constant) input voltage. If the core does saturate, for all intents and purposes it ceases to exist, and the permeability (ability to carry a magnetic field) falls, approaching unity when fully saturated (the permeability of air is 1.0). Strictly speaking, we should refer to 'initial permeability' because with all ferrites it changes with flux density and temperature.
+ +Once the primary turns have been calculated it's fairly straightforward to calculate the turns ratio, and from that the secondary turns can be worked out. As explained earlier, this is an introduction to SMPS, and is not a design guide, and no calculations for turns, flux density or core losses will be covered. There are many dependencies in all calculations, and SMPS design is the subject of many books, countless articles, and (it seems) an infinite number of forum questions and answers.
+ +Although it's not always possible, most design guides suggest that the turns ratio should be as close to 1:1 as possible. While ideal, 1:1 is rarely useful, but it does minimise leakage inductance. Turns ratios up to 7:1 are considered acceptable, and 10:1 can be used if the extra losses aren't likely to cause a problem. Many texts on the subject suggest that turns ratios exceeding 14:1 are not practical. In many cases, the designer has no choice, and the turns ratio is selected based on the required secondary voltage, without fretting about using a high turns ratio. If the design demands a 20:1 turns ratio to get the voltage you need, then that's what has to be used. Ideology and practicality often don't coincide.
+ +When it comes to winding wire, the skin effect is well known (and exploited by snake-oil cable makers). With switchmode power supply transformers it is a real problem, and the most common way to minimise the influence is to use multiple small (insulated) wires in parallel - typically bundled and twisted into a single rope-like strand. This is commonly referred to as Litz wire, and its use reduces skin effect losses because the wire bundle has a comparatively large surface (or 'skin') area.
+ +You don't normally hear much (if anything) of the so-called proximity effect, but it refers to the (often chaotic) disturbance of the current flow in a conductor when that conductor is immersed in an intense magnetic field. This does not appear to be an issue with most SMPS transformers, but in large (low frequency) transformers it can cause localised heating because the current is forced to use far less of the wire's cross section than expected. Use of Litz wire again reduces the proximity effect, and since it's common in high frequency transformers anyway, the effects are already mitigated to a degree. Proximity effect may reduce current carrying ability far more dramatically than does skin effect, and at much lower frequencies.
+ +The proximity effect therefore has the potential to cause localised 'hot spot' thermal problems, that degrade the insulation and cause eventual failure. It is especially problematical when the transformer current is highly distorted, and this is invariably the case when a transformer is used with a rectangular waveform - nearly all SMPS transformers.
+ +It's not at all uncommon for secondaries of high current transformers to be wound using copper foil - a flat sheet that is the full width of the winding space. This provides a large cross sectional area for low resistance, and because it's a flat sheet, the skin effect is minimised. Providing insulation can be challenging with foil windings, as it may occupy far more space than a conventional winding of similar cross sectional area.
+ +The selection of wire gauge is determined by the current and how aggressive the designer is. A good starting point is between 3A and 4A per mm², with higher current density causing greater copper loss and lower density potentially making the winding difficult to fit into the winding window of the core. A one square millimetre wire has a diameter of 1.13mm ( A = π * R² ), and it's often a lot easier to use multiple parallel windings of thinner wire when high current is needed. Where the supply will have only momentary high current demands with a somewhat lower average demand, it's usually possible to use a higher peak current density than 4A/ mm², provided the average is somewhat lower. This requires experience and careful design.
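The diameter figure quoted above comes straight from the area formula; a short helper shows the relationship (values per the text):

```python
import math

def wire_diameter_mm(current_a, density_a_per_mm2):
    """Round-wire diameter for a given current at a given current density."""
    area_mm2 = current_a / density_a_per_mm2
    return 2.0 * math.sqrt(area_mm2 / math.pi)

# A 1 mm^2 conductor (1 A at 1 A/mm^2) is 1.13 mm in diameter.
d_1mm2 = wire_diameter_mm(1.0, 1.0)

# A 10 A winding at 4 A/mm^2 needs 2.5 mm^2 of copper -- often easier
# to realise as several thinner wires in parallel.
area_10a_mm2 = 10.0 / 4.0
```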
+ +Vout = D × (Ns / Np) × Vin
+ +where D is duty cycle, Ns is secondary turns and Np is primary turns
Output voltage of PWM squarewave converters can be estimated by the formula shown above, but it doesn't work for all converter types and should be considered a rough guide at best. However, it's a useful start and can be used to get a general idea of what the converter will do. It won't work with flyback converters, because they operate by utilising the back-EMF from the transformer to generate the secondary voltage. The duty cycle or 'on' time of the switch is changed to store more or less energy as needed. Remember that a flyback converter transfers no power when the switch closes - only when it opens.
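The estimate can be coded directly; the example numbers are illustrative, not from the text:

```python
# Rough output estimate for PWM square-wave converters (not flyback):
#   Vout = D * (Ns / Np) * Vin
def vout_estimate(duty, n_sec, n_pri, v_in):
    return duty * (n_sec / n_pri) * v_in

# Illustrative: a 325 V bus, 20:1 step-down, 40% duty cycle.
v_out = vout_estimate(0.4, 1, 20, 325.0)   # 6.5 V
```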
+ + +In most of the circuits shown above, there is a controller shown as a 'black box'. There are many different ICs designed for SMPS, and the vast majority use a simple resistive start-up circuit. Some need an auxiliary winding (commonly known as a 'bias' winding) on the main transformer or inductor. This is used to provide the normal operating voltage and current for the IC, via a very simple power supply - often nothing more than a couple of extra turns on the transformer and a single extra diode. It's unusual for any controller IC to use only the resistive start-up supply for normal use, because relatively high peak current is needed to drive the MOSFET gate(s) and drawing that through a resistor from the 325V DC supply causes excessive power dissipation.
+ +The general idea is shown below, and the start-up supply only needs to provide enough energy for two or three switching cycles. After that, the auxiliary supply takes over and provides the power needed for normal operation. It's not uncommon to include a voltage divider (R2 & R3 below) to sense the main supply voltage, so the power supply will shut down if the input voltage is too high or too low. This protects the SMPS from excessive current due to a low input voltage, or damage if the voltage is too high.
+ +In the above, the start-up current is provided by R1, and once the SMPS is operating D1 provides the supply voltage via the tertiary winding on transformer T1. R1 is selected to be able to supply enough current for the controller to start, but cannot provide enough for normal operation. This is done to minimise the resistor's dissipation. The incoming voltage is sensed by the controller after the voltage divider formed by R2 and R3. This enables the controller to turn the supply off if the voltage is too low to allow normal operation, or is too high, placing the switching MOSFET at risk.
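A quick sketch shows why the resistive start-up supply can't be used for normal operation; the current figures are assumptions for illustration, not from the text:

```python
# Start-up resistor dissipation from a 325 V DC bus.
v_bus = 325.0

# Sized only for start-up: ~0.5 mA is enough to charge the controller's
# supply capacitor before switching begins.
r1 = 650e3                       # ohms (assumed)
p_startup = v_bus ** 2 / r1      # ~0.16 W -- easily tolerated

# If the controller's full running current came through a resistor instead:
i_run = 20e-3                    # A (assumed typical controller draw)
p_running = v_bus * i_run        # 6.5 W -- why the tertiary winding exists
```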
+ +R6 is used to sense the current in the switching MOSFET (Q1). Current sensing ensures that the supply can protect itself in case of a shorted output, and it usually monitors each switching cycle so that any fault that causes the MOSFET current to exceed the design maximum will cause the supply to shut down immediately. Shut-down may be 'latched', meaning that the supply will require a power cycle (hard reboot), or it may retry at preset intervals until the fault is cleared. The latter is more common.
+ +The snubber circuit (C3, R4 & D2) is designed to limit the amplitude of voltage spikes when Q1 turns off, and is a lossy circuit. Any power consumed by the snubber is wasted, so it's important to optimise the transformer design to ensure that the wasted power is as low as possible. There are many variations for snubber circuits, and sometimes they can even be dispensed with. This is circuit dependent, and 'snubberless' designs are not common. There are even a few very clever (and patented) methods of using the snubber to provide auxiliary power.
+ +Many SMPS are designed for low power applications, often where they will enter a standby mode where power consumption has to be as low as possible. Many countries have requirements that standby or no load operation should result in the supply drawing no more than 0.5W and sometimes less. One way to achieve this is to use what's called 'skip-cycle' mode, where the supply maintains its output voltage by operating the main switching device at a very low repetition rate. Rather than running at 50kHz or more all the time, a couple of cycles at 'normal' speed may be enough to keep the output voltage above a preset minimum for somewhere between a few milliseconds to perhaps a second or more. During the off state, very little power is drawn, thus keeping the average consumption well below the maximum allowed.
+ +Not to be confused with inrush current limiting, most dedicated SMPS controllers include a 'soft-start' feature, where they start operation after power-on with a very low duty cycle. The duty cycle is increased steadily until normal operation is reached. This ensures that magnetics and other circuitry have time to stabilise before the supply reaches full power, and also helps to minimise inrush current because everything doesn't happen at once. First the inrush limiters bring the voltage up to normal, then the soft start circuits increase power output. While the description takes some time to read, the whole process may only take about 200ms from the moment that power is applied, and is usually barely noticeable to the user.
+ + +One of the main goals of any SMPS is to ensure the highest possible efficiency. One of the things that always places a limit is the humble rectifier diode, as there is a voltage across the PN junction that is dependent on the basic junction characteristics. A normal high speed diode will have a forward voltage of 0.65V at low current, but there is an inevitable resistive component as well. That means that at rated current, the forward voltage might be 1V or more. If the forward current is 10A, dissipation is 10W. Two diodes used as the output rectifier for a 10A supply will each pass 10A for around 50% of the time, so the loss just due to the diodes is 10W continuous.
+ +Schottky diodes are better, but are only suitable for low voltages, typically limited to around 50V reverse breakdown (although there are exceptions - up to 250V). This is due to the internal construction of these diodes. The forward conduction voltage can be as low as 150mV, but the resistive component still exists. A diode such as the MBRAF440 is rated for 40V, 4A, and has a forward voltage of 0.485V at 4A and 25°C junction temperature. Dissipation is therefore less than 2W for each diode, but for 10A the best we can hope for is around 5W total dissipation.
+ +You may have noticed that none of the example circuits use a bridge rectifier at the transformer output. Even Schottky diodes will have excessive losses when used for low-voltage high-current applications, and a bridge simply doubles the loss. It's easier (and cheaper) to use a centre-tapped transformer. This is in marked contrast to conventional mains frequency transformers, where the centre tapped arrangement is inferior due to poor winding utilisation and much greater losses due to many turns of wire. Also, it's uncommon to try to get a high current 3.3V or 5V DC output (for example) directly from a normal mains tranny, but this is a very common use for switchmode supplies.
+ +Synchronous rectifiers generally use MOSFETs. All MOSFETs have an internal diode, but that's effectively shorted out when the MOSFET is turned on. By synchronising the MOSFET drive signal, it can act as a very low-loss rectifier. The internal diode is fairly ordinary, but when the MOSFET is turned on there is only the RDS(on) figure (on resistance between drain and source) to contend with. For example, an IRF540N has an on resistance of 44mΩ, so at 10A the voltage drop will be 0.44V - not zero, but lower than any diode. This limits the dissipation to 4.4W, and two in parallel reduces that further to 2.2W - better than any conventional or Schottky diode can ever manage. There are many other MOSFETs with even lower on resistance, for example IRFZ44E, IRFZ46N with 23mΩ and 20mΩ respectively. The On-Semi BXL4004 is even lower - 3.9mΩ.
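The loss figures quoted for the three rectifier options compare as follows (all numbers from the text):

```python
# Rectifier dissipation at 10 A output current.
i_load = 10.0

p_fast_diode = 1.0 * i_load                  # ~1 V drop -> 10 W
p_schottky = 0.5 * i_load                    # ~0.5 V drop -> 5 W

r_on = 0.044                                 # IRF540N RDS(on), 44 milliohms
p_sync = i_load ** 2 * r_on                  # 4.4 W, single MOSFET
p_sync_x2 = 2 * (i_load / 2) ** 2 * r_on     # 2.2 W, two in parallel
```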
+ +Of course there is a down side. The MOSFET(s) need a control signal, and it has to be carefully managed to ensure that the MOSFET is never turned on when its internal body diode would not be conducting. This adds parts to the design, but they all operate at low power so there's very little efficiency penalty. The synchronous rectifier MOSFETs are Q3 and Q4, and each is turned on when the diode would normally be conducting (i.e. when the voltage across the MOSFET is reversed). The whole process is rather non-intuitive, but it works and is recommended in many papers on the subject. It's not obvious, but MOSFETs conduct with either polarity between drain and source, and that's how they can be used in this application. With optimum devices, their voltage drop is far less than any diode, and rectifier losses can be (almost) eliminated. Instead of losing a volt or more across diodes, the loss can be limited to well under 100mV with the right MOSFETs.
+ +In the same way that switching devices used with push-pull, half and full-bridge converters must allow a dead band, the same applies with synchronous rectifiers. The above drawing is highly simplified and is not intended to refer to any specific controller. In some cases, the rectifier MOSFET gates may be driven by small pulse transformers, or can be driven directly by the transformer output for some designs (see Figure 24). Synchronous rectifiers are not only used as shown above - they can also be added to forward converter SMPS, or to buck, boost, buck-boost, SEPIC or Cuk non-isolated DC-DC converters. It should also be possible to use the scheme with a flyback SMPS, but it's probable that there would be little to gain.
+ + +With any switchmode supply, losses are minimised by ensuring that the MOSFETs or IGBTs are switched on and off as quickly as possible. The same applies to BJTs of course, but they aren't used in many SMPS any more because they are too slow - especially to turn off. Both MOSFETs and IGBTs use an insulated gate, and there is capacitance between the gate and the remainder of the device. This capacitance can be quite large (several hundred picofarads or more), and the gate drive must be capable of charging and discharging the capacitance in the shortest possible time.
+ +Many commercial gate driver ICs exist that can deliver 2A or more for a short time, with the sole purpose of ensuring fast charge and discharge of the gate and any stray capacitance. One is shown in the next section, but there are hundreds of different types, including specialised 'high side' drivers. These are designed to interface with normal 12V signals, but provide a method of driving MOSFET gates that may be at 400V or more above the controller's operating voltages. There are also high-side drivers that just allow the use of an N-Channel MOSFET in circuits such as those shown in Figures 5 and 7. Rather than allowing a big voltage differential, these provide enough voltage to turn on the MOSFET, above the supply rail. For example, the supply voltage might be 12V, and the high-side driver supplies a gate voltage of 24V by one means or another. There are countless different designs using a wide variety of techniques.
+ +One common system is to use a 'charge pump', a circuit that acts as a voltage doubler (often with the required capacitor integrated within the IC). This allows the gate drive voltage to exceed the supply voltage, so an N-Channel MOSFET can be used where one would otherwise have to use a P-Channel device. The availability of the P-Channel types is nowhere near as great as for N-Channel, and N-Channel devices typically have around one third the on-resistance of a P-Channel part of the same size and cost.
+ +A conceptual circuit for a charge pump is shown above. When Q2 turns on, the end of C2 is connected to the common bus, and it charges to the supply voltage via D1 (less a diode voltage drop of course). Now Q2 turns off and Q1 turns on. C2 is charged to a bit under 12V, and D2 conducts and passes the charge to C3 which will be charged up to around 22V. This really is only a conceptual circuit - the MOSFET gate drive needs to incorporate a dead time to prevent shoot-through current. There are many ICs that have all the circuitry needed, and in some cases that even includes C2, and there is nothing else needed (the MAX1614 is a case in point). When more current is needed (for a proper SMPS for example), the IC is likely only to contain the oscillator and MOSFETs, and the diodes, pump capacitor (C2) and output cap will be external.
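The 'around 22V' figure follows from the diode drops; a sketch assuming 0.7 V per silicon diode:

```python
# Charge-pump output voltage for the conceptual doubler described above.
v_supply = 12.0
v_diode = 0.7     # assumed drop per silicon diode

# While Q2 is on, C2 charges via D1 to a bit under the supply voltage.
v_c2 = v_supply - v_diode          # 11.3 V

# When Q1 turns on, C2 is lifted by the full supply; D2 drops again
# as the charge is passed to C3.
v_out = v_supply + v_c2 - v_diode  # 22.6 V -- "around 22 V" in the text
```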
+ +Another common system is called bootstrapping - essentially it works in exactly the same way as described for the charge pump. This arrangement would be used to turn on the MOSFET(s) in a half or full-bridge SMPS as shown above. In both these cases, N-Channel MOSFET(s) that connect to the incoming supply need a gate voltage that's around 12V more than the supply itself. When Q2 turns on, C3 is charged to Vcc (e.g. 12V), and when Q1 turns on (Q2 is now off) the high-side driver inside U2 is powered by the charge stored in C3. The voltage at 'Vboot' will be around 337V, 12V more than the voltage at the source terminal when the MOSFET Q1 is on, and the gate will be driven to the full bootstrapped voltage.
+ +How much drive current is needed for a typical MOSFET? An IRF540N was simulated using a 12V gate drive with several rise and fall times, and the peak current was the same for rise and fall. A 100ns risetime demanded a peak current of 700mA, rising to 1A with 50ns and 1.2A at 25ns. The IRF540N is claimed to have a gate capacitance of almost 2nF, but there's also capacitance between the drain and gate (the 'Miller' capacitance). MOSFETs are usually rated for the total gate charge, measured in Coulombs (symbol C) - the charge transferred by a constant current of 1A flowing for 1 second. The total gate charge for the IRF540N is 71nC. This total gate charge must be overcome every time the MOSFET is turned on or off, and the current needed depends on the switching speed - higher speed means more current.
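The drive-current requirement follows from I = Q/t using the 71 nC figure. Note this gives the simple average; the simulated peak currents quoted above differ at fast edges because the gate-charge curve is non-linear:

```python
# Average gate current to move charge Q in switching time t: I = Q / t.
q_gate = 71e-9    # C, IRF540N total gate charge (per the text)

def avg_gate_current(q_coulombs, t_seconds):
    return q_coulombs / t_seconds

i_100ns = avg_gate_current(q_gate, 100e-9)   # ~0.71 A
i_25ns = avg_gate_current(q_gate, 25e-9)     # ~2.84 A
```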
+ +The above drawing shows a common gate drive technique, using a pair of bipolar transistors. This arrangement can deliver more than enough gate current with the right transistors, and the delay circuit introduces some dead time. When the output from the controller goes high, C2 has to charge via R1, introducing a delay (it only needs to be around 0.1µs, i.e. 100ns). When the controller's output goes low again, C2 is discharged via D1, minimising the delay. It's not really a delay, just a simple filter, and the turn-on time of the MOSFET is increased slightly. The two transistors provide significant current gain, minimising the load on the controller.
+ +With the values shown, the MOSFET turn-on is delayed by about 130ns, and gate voltage is removed almost immediately. The turn-off time depends on the MOSFET. This circuit can be seen in many SMPS, because it's simple and cheap to implement. However, if a suitable controller IC is used it shouldn't be needed, because there will be provision for dead time and a high current MOSFET gate drive within the IC itself. Most dedicated ICs use MOSFET drive (similar to the complementary MOSFETs shown in Figure 20).
+ +There are many gate drivers available, including simple low-side types, dedicated high-side versions, and others that combine the two. Some have limitations as to the peak current they can source or sink, and require low gate-charge MOSFETs. Some Class-D amplifier ICs require the use of special MOSFETs for exactly this reason. It's worth noting that a Class-D amplifier is simply a specialised SMPS that has been designed specifically to handle audio frequencies rather than provide a DC voltage.
+ +A Class-D amplifier can easily be 'tricked' into thinking it's supposed to be a voltage regulator, but it won't provide optimum performance in that role. Overall though, the primary difference between a Class-D amp and an SMPS is that the Class-D amp can both sink and source current, and provide an output that can be positive or negative. SMPS (like most power supplies) are designed to source current of one polarity only, and they can't absorb current that may be provided by the source. This is a topic unto itself.
+ + +A complete SMPS using many of the principles explained above is adapted from a TI user's guide for an evaluation PCB. The circuit is a 48V to 3.3V 10A forward converter, and uses instantaneous peak current detection for the switching MOSFET, and a self-powered synchronous rectifier. I looked at a great many different circuits, but this one seems to show most of the topics covered above. The main controller part number in the original TI schematic is wrong, which isn't helpful! Note that a separate 12V supply is required for the controller IC.
+ +The main switching MOSFET is Q1, and it's driven by U1 - a dedicated gate driver as discussed above. The TPS2829DBV is now obsolete, but it provides 2A gate drive and 25ns rise and fall times. The main controller (UCC35705D) provides the PWM switching waveform, instantaneous MOSFET current monitoring and full regulation via the opto-coupler.
+ +Voltage is monitored by U4, a TL431. This is a programmable shunt regulator, and will turn on the LED in the opto-coupler to just that level needed to stabilise the output voltage at 3.3 volts. The components around the TL431 are to ensure feedback loop stability. SMPS feedback is not straightforward, and it's always necessary to provide networks to obtain the time constants needed to keep the loop stable. Direct feedback without compensation networks would cause the entire supply to be unstable, typically with an output that oscillates around the desired value.
+ +The feedback loop must remain stable as loads are connected and disconnected, and must also remain stable with any load from zero to the maximum allowed. It's very common for SMPS to have complex and convoluted compensation networks, and the circuit shown is a perfect example. C16, D5 and R23 provide a soft-start that's intended to prevent output voltage overshoot at start-up. Overshoot at start-up is normal with any choke input filter - in this case L1 and C17, C18, C19, and the time constants involved make feedback loop stabilisation all the more difficult. Three different value capacitors are used to ensure low impedance over a wide frequency range. Switchmode supplies need bypassing that's effective up to at least 10MHz because of the very fast switching time.
+ +The MOSFETs (Q2 and Q3) used as rectifiers are designed specifically for this application, and are driven directly from the transformer secondary. No additional circuitry is used, other than snubber circuits in parallel with each MOSFET. The overall efficiency is claimed to be 85%, and that could not possibly be achieved without the MOSFET rectifiers.
+ + +At present, this is the only article covering switchmode supplies in any detail, although there is some more info in the Lamps & Energy section of the ESP site. Depending on demand and my workload, I may add another article with some experimental circuits that readers can play with to become better acquainted with SMPS techniques.
+ +There can be no doubt that switchmode supplies will be used in more and more products over time. There are now very few linear supplies used in common household products, and items such as TV sets, set-top boxes, DVD (and other) players, home theatre receivers, microwave ovens, induction cooktops and air conditioners all use switchmode supplies. Linear supplies are still the most common for DIY, simply because SMPS are unsuited for inexperienced constructors, and are a technology that is really only economical when made in large numbers. For hobbyists, the difficulties of getting the controllers, MOSFETs and (most difficult of all) the magnetics mean that a home-build is not really feasible except for a few diehard experimenters. Most of the parts needed are SMD, and there is zero tolerance for errors. Making an SMPS without a PCB is extremely difficult, especially if SMD parts are used.
+ +A home built linear supply will just blow the fuse if a mistake is made, or possibly there will be no output. With most SMPS designs, many opportunities for errors exist, most of which will result in instantaneous destruction of switching MOSFETs and other parts. They are completely unforgiving of assembly errors, and since many include under-voltage protection, you can't gradually increase the voltage with a Variac to make sure that everything is ok. What happens instead is nothing ... until the voltage exceeds the preset under-voltage threshold. The supply then attempts to run - if there's an error all the protection circuitry in the world is unlikely to help.
+ +It's also important to understand that most of the 'interesting' circuitry operates with a direct connection to the mains, and poking around and trying to take measurements of an operating (or misbehaving) supply is extremely dangerous. You can use an isolation transformer of course, but that only makes the situation a little bit better - there will still be high voltages (up to 420V DC) with very high instantaneous current capability. If you happen to connect yourself across that sort of voltage, you may not survive.
+ +This isn't to say that home building of SMPS isn't possible - it can certainly be done. Naturally enough, most DIY builders won't be able to test for EMI, and it may transpire that their pride and joy wipes out radio or TV reception for themselves and many neighbours as well. If that happens, there aren't many choices other than to try putting it into a metal enclosure, and if that doesn't work it becomes a door-stop.
+ +Design is not trivial at any level, despite web sites that can create a design to your specifications in seconds! Like all electronics it's an interesting endeavour that will give great satisfaction when the project works. The converse is also true, and if it doesn't work, it may not even be possible to fix it. Compare this to a linear supply, where almost everything operates at relatively safe voltage levels, and even gross errors will just cause a fuse to blow (or a capacitor to explode!). Twenty years later, it will still be easy to fix it if something goes wrong. PCBs are not necessary for simple supplies, and everything can be replaced and/ or a substitute part found.
+ +The half-life of many SMPS parts is very short, and you may find that the controller you used is no longer available in as little as a couple of years after you built the supply. Other parts may also become obsolete very quickly, and since a PCB is essential it may not be possible to use a substitute part because it won't fit the PCB. Commercial products have similar issues, and it's uncommon for suppliers to repair SMD PCBs - they are replaced, and when the required board is no longer available the product can't be fixed at all. This is very common now, and cannot be expected to improve. A perfect example is the circuit shown in Figure 20 - one of the main ICs that the circuit relies upon is now obsolete (although there does appear to be a compatible replacement).
+ +Copyright Notice. This material, including but not limited to all text and diagrams, is the intellectual property of Rod Elliott, and is Copyright © 2015. Reproduction or re-publication by any means whatsoever, whether electronic, mechanical or electro-mechanical, is strictly prohibited under International Copyright laws. The author (Rod Elliott) grants the reader the right to use this information for personal use only, and further allows that one (1) copy may be made for reference. Commercial use in whole or in part is prohibited without express written authorisation from Rod Elliott.
Elliott Sound Products - Flyback SMPS
After (or before) reading this, I recommend that you also read Dangerous Or Safe? - Plug-Packs (aka 'Wall Warts') Examined, which covers the hazards in some detail. It also explains why buying from sellers that distribute products directly from Asian manufacturers is a really bad idea. If you build your own, it's likely to be just as dangerous as many of the examples shown in that article, and quite possibly even more so.
+ +Once you start looking at the details, the idea of DIY becomes less attractive. One of the biggest hurdles is the transformer. There is absolutely no doubt that you can wind the transformer, as the number of turns is generally low. You typically only need up to around 150 turns for the primary, and even fewer turns for the bias and secondary windings.
+ +The bias (aka auxiliary) winding supplies the switchmode IC with power (there's a startup process that provides just enough to get the switching circuit to function). The secondary winding provides isolated DC to the device being powered, after rectification and filtering. The secondary winding must be isolated from the mains to a very high standard. Should the insulation fail, 'bad things' will happen.
WARNING: The following circuits are connected to the mains and must never be tested without extreme care. An isolation transformer is essential if you intend to take measurements while the supply is operating. All circuitry must be considered to operate at the full mains potential, and must be treated accordingly. The DC output should be earthed via the mains safety earth to provide an additional safety 'barrier'. Do not work on the power supply while power is applied, as death or serious injury may result.
Under no circumstances should anyone who is not experienced with mains voltages attempt construction or examination of a switchmode power supply, as even a small error can be very dangerous. Great care is needed - always! By continuing, you accept all risk and hold ESP harmless for any death or injury suffered.
Just as there is no doubt that anyone can wind the transformer, there's no doubt that very few people will have the equipment needed to test it for electrical safety. When you buy a small SMPS, provided it has approvals for your country, it will have passed all necessary tests for electrical safety and EMC (electromagnetic compliance/ compatibility). These tests require equipment that very few hobbyists will possess.
+ +If you make an error with the insulation of the transformer, such as too little creepage or clearance distance between windings or the wrong type of insulation material, you can easily have an electrical breakdown that places the user at risk of electric shock or electrocution. As the person who built the circuit, you will be directly responsible for any injury or death, and you may be prosecuted (assuming that the injured person is not you).
+ +There are countless 'projects' on the Net telling you how to build your own off-line (mains powered) switchmode power supply (SMPS/ PSU). Many of these are rated for 1A or so, with voltages ranging from 3.3V to 12V. As a hobbyist and 'creator', the idea is (at least initially) appealing, as much for me as anyone else. The simplest SMPS is the flyback type, and many ICs are available that are designed specifically for this application.
Beware of YouTube (and other) videos that show you how to build a flyback supply. All that I saw neglect nearly everything that's important. While it's certainly possible to build your own SMPS, be aware that it's not advisable. Build one for testing and experimentation by all means, but using it in place of a commercial approved SMPS is ill-advised.
Without adequate testing (with equipment you almost certainly don't have), you have no idea if your insulation is acceptable, and nor do you know that it will remain acceptable for the expected life of the product. Most on-line circuits never cover this with sufficient clarity to ensure that the builder gets it right, so you may have constructed a 'time-bomb' that fails catastrophically or lethally several years after it's been built.
+ +Most small SMPS don't use vacuum impregnation for the transformer, but that's one way to ensure that it remains safe for years to come - assuming it was safe beforehand. Few hobbyists have the equipment for this procedure, so the constructor can never be certain that the transformer will not eventually short between live (hazardous voltage) and the secondary.
+ +In general, you need a transformer with a rated dielectric strength of at least 2.5kV RMS (preferably 3kV RMS), as this provides the safety barrier between mains and the end user. Despite a fairly well-equipped workshop, I can't test anything at that voltage, as I don't have anything that can provide it with any degree of safety. If you buy a suitable transformer, the datasheet will include the isolation specifications.
+ +Then there's the cost. You need a PCB, the switching IC, a transformer (either ready-made or DIY), capacitors, bridge rectifier, at least one high-speed diode, an opto-isolator and zener diode (or variable voltage reference) and input/ output terminals. The IC is the easiest, but the device you use then determines the specifications for the transformer, and finding compatible parts isn't always easy. You can buy SMPS transformers that will (hopefully) be safe, but you still need to know exactly what to look for. There are several transformers that look ideal, but they don't have a high enough isolation voltage to be useful. Not all flyback SMPS are used with a mains input, and some are designed (as DC-DC converters) to provide isolation between different parts of internal circuitry.
+ +You can buy a small (12V, 1A) SMPS as a plug-pack for around AU$18 or so (some as low as AU$13 in bulk), and if purchased from a reputable supplier it will be rated as Class-II (double insulated) and have full approvals for your country. It will have been tested for electrical safety and EMC, so you get something that contains all the required safety and filtering parts that are essential for safe (and legal) operation. If you need an internal PSU in a chassis, it's an easy matter to split the case and remove the PCB, which can then be installed in a small ABS plastic 'utility' box and installed.
+ +If you decide to build your own, you'll still spend about the same (probably more), but you won't know if it will pass any of the tests that are usually mandatory. What you do get is experience with the design and construction of the PSU, along with the fun of doing so. At the end of the exercise, you (hopefully) get a power supply that will work, but it's one that you probably shouldn't actually use. If it's installed inside Class-I equipment (protected by a mains earth/ ground 3-core mains cable) it might be 'safe', but if your insulation fails expect a spectacular fuse failure as a result.
+ +Naturally, you can buy the required test equipment to test for electrical safety. The cheapest I've seen is 'only' AU$1,500, but most are considerably more expensive. Testing for EMC will set you back at least AU$10k (but probably a great deal more).
+ +A 'home-made' PSU can never be rated for Class-II (double-insulated), because it's virtually impossible to run the necessary safety tests without certified test equipment designed for the job. Not one 'DIY' SMPS article I saw includes this important point, and I shudder to think how many home-made power supplies are out there waiting to kill someone.
+ + + +The flyback topology is popular, because it's the lowest cost option for low to medium power. Up to around 150W is possible, but other circuits perform much better at higher power levels. One thing that the reader may (or may not) have noticed is the polarity indicators on the transformer windings. The dot traditionally indicates the start of the winding, and flyback transformers always have the primary and secondary (or secondaries) operating with inverted windings. The basic principle has existed for as long as 'electronics' as we know it. It's also the underlying principle of the spark coil in a (petrol) car engine.
+ +Considerations for use as a power supply are many. Cost is almost always among them, but technical issues are regulation, transient performance, ripple and filtering, EMI generation, efficiency at specific load points and across a load range. One also needs to consider size, weight, BoM complexity, stability, temperature, and performance despite component tolerances. Other factors include isolated vs. non-isolated designs, which are defined by the application.
+ +The flyback principle is the same as that for a relay or other electromagnetic coil. Many people will have experienced the high-voltage 'spike' generated when a relay coil is disconnected. The peak voltage can rise to hundreds of volts and is quite capable of destroying the switching transistor. The standard fix is a diode in parallel with the coil. A flyback circuit just adds another winding, and delivers the flyback pulse current to the load, rather than 'wasting' it in a diode. For mains use, flyback circuits are (almost) invariably galvanically isolated, having no conducting material (wire or other conductor) between the input (mains) and the output (user accessible voltage). Instead, energy is transferred magnetically, and feedback usually involves an optoisolator.
+ +When the power switch operates, current flows through the transformer primary via the switching MOSFET. This is either internal to the IC for low power, or external for anything above around 12W or so. The origin of the flyback circuit (as we know it) dates from the 1940s, with the introduction of television. A flyback circuit was used to generate the CRT horizontal sweep (when the switch was 'on'), the 'flyback' or retrace sweep (switch off), and also to generate the EHT voltages for electron-beam acceleration and focus. However, the principles were known well before the introduction of TV.
+ +A flyback transformer is not a transformer in the traditional sense. It's really two or more coupled inductors. With a 'true' transformer, current flows in the primary and secondary at the same time, with the two being in-phase (assuming proper connection). In a flyback transformer, no current flows in the secondary while current builds up in the primary. When the switching device turns 'off', current is induced into the secondary winding. This is not 'transformer action' in the accepted sense.
+ +Most flyback converters are operated in discontinuous conduction mode (DCM), as this minimises the size of the magnetic component (the transformer/ coupled inductors) and makes the job of the secondary rectifier a little easier, as there is no current flow when it turns off. In the explanations that follow, DCM is assumed in all cases. The alternative is CCM (continuous conduction mode), where the DC component in the primary never falls to zero. CCM is not covered here, but it may be used to minimise RF interference or to minimise the size of filter components.
+ +The two states for the switch are tON ('on' time) and tOFF ('off' time). When the switch closes, current builds in the transformer's primary, storing energy in the magnetic field. The switch 'on' time must be long enough to store the energy needed, but short enough to ensure that the core doesn't saturate, as this will cause high current and switching MOSFET failure. Many ICs have current monitoring to limit the peak current to a safe value. For example, the VIPer22A limits the peak primary current to 700mA.
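The energy stored each cycle sets an upper bound on the power a DCM flyback can deliver, since E = ½·L·I² is transferred to the secondary once per switching period. A minimal sketch of the arithmetic, using approximate values from the simulated circuit described below (1.5mH primary, 545mA peak, 60kHz) - illustrative figures only, not from any datasheet:

```python
# Energy stored in the primary each cycle (E = 0.5 * L * I^2) bounds the
# power a DCM flyback can deliver. Illustrative values taken from the
# simulated circuit discussed in the text.
L_PRI = 1.5e-3   # primary inductance, henries
I_PK = 0.545     # peak primary current, amps
F_SW = 60e3      # switching frequency, hertz

energy_per_cycle = 0.5 * L_PRI * I_PK ** 2   # joules
max_power = energy_per_cycle * F_SW          # watts, before losses

print(f"Energy per cycle: {energy_per_cycle * 1e6:.0f} uJ")   # ~223 uJ
print(f"Throughput limit: {max_power:.1f} W")                 # ~13.4 W
```

With the simulated circuit delivering roughly 10W to its load, this bound implies an efficiency in the high 70s (percent), which is plausible for a small flyback supply.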
+ +The following circuit was used for simulations. CX2 is an X-Class (275V RMS) mains capacitor, and it works in combination with LEMI to suppress electromagnetic (RF) interference. All of the examples that are shown further below use the same principle. There are very few apparent variations with flyback converters, regardless of the IC used or the power level. The control mode is PWM, where the 'on' time of the switch is varied. At low output power the switch will only be 'on' for a very short time, and the 'on' time is increased as the load current increases to maintain the desired output voltage. While the example assumes a fixed frequency, many flyback control ICs use variable/ modulated switching frequencies.
+ +In its simplest form, a flyback supply consists of a pulse generator, a switch and a transformer. Because we want DC at the output, there's a rectifier diode and a filter/ storage capacitor on the secondary of the transformer. Current flows in the primary circuit when the switch (MOSFET) turns 'on', and current flows in the secondary when the MOSFET turns 'off'. The pulse generator will run at a minimum of 25kHz, with around 60kHz or more being more common. The peak primary current is determined by the 'on' time and the inductance of the transformer primary.
+ +The transformer's magnetic core is usually ferrite, and it must include an air-gap. Inclusion of the air-gap means that more primary turns are needed for a given inductance, due to reduced core permeability. Technically, the stored energy that's released to the secondary is not contained in the core, but in the gap. To avoid introducing a large amount of leakage inductance, the winding must fully enclose the gap (for 'E' type cores, the gap is on the centre leg).
+ +Early flyback converters used bipolar junction transistors (BJTs), or even valves (vacuum tubes) in early TV sets, but these were too slow for efficient operation at high frequencies. Switching losses are generally significantly higher with a BJT, so MOSFETs are almost universally the switch of choice.
+ +The secondary diode's reverse voltage rating must be higher than you think. The diode typically needs a voltage rating of at least 5 times the output voltage under load, but it can be a great deal more with no load. Even for a 5V output you typically need a minimum of 36V (a 40V diode is just sufficient). I measured a small 5V SMPS (similar to that shown in Fig 7.1), and with no load the diode's reverse voltage was 42V peak. Many designs use Schottky diodes because they have a lower forward voltage and hence a lower power loss, but others use 'normal' high-speed diodes. You cannot use a standard diode such as 1N4004/ 1N5404 etc., as their turn-off is too slow and they will overheat and fail. The RC snubber across the output diode is usually omitted, but may be required to suppress RF interference.
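The reverse voltage can be estimated rather than guessed: while the MOSFET is 'on', the diode sees the output voltage plus the input voltage divided by the turns ratio, plus the leakage-inductance 'spike'. A sketch using the simulated circuit's figures - the 6.5:1 turns ratio and 30% ringing margin are inferences for illustration, not values stated in the text:

```python
# Estimating the secondary diode's reverse voltage. While the MOSFET is
# 'on', the diode sees Vout plus Vin divided by the turns ratio, plus
# the leakage-inductance ringing spike.
V_IN = 325.0        # rectified 230V mains, volts DC
V_OUT = 22.0        # DC output of the simulated circuit, volts
N_RATIO = 6.5       # primary:secondary turns ratio (inferred, assumption)
RING_MARGIN = 1.3   # ~30% allowance for the ringing 'spike' (assumption)

v_reverse = V_OUT + V_IN / N_RATIO
print(f"Ideal reverse voltage: {v_reverse:.0f} V")                # 72 V
print(f"With ringing margin:   {v_reverse * RING_MARGIN:.0f} V")  # ~94 V
```

The result is consistent with the 'over 90V' peak inverse voltage reported for the simulation, and shows why even a 5V output needs a diode rated far above the output voltage.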
+ +The RCD snubber circuit shown is intended to minimise the voltage spike generated by the transformer's leakage inductance. This is covered in detail in the Transformers, Part 2 article (the link takes you directly to the section that covers leakage inductance). Leakage inductance causes damped ringing when the switch turns off, as seen in the waveforms shown below. Leakage inductance is shown as Lleak, and it's a lossy (Rdamp) parasitic inductance caused by imperfect containment of the magnetic field within the core. The value shown is an example, but note that Lleak is not a separate component - it's part of the transformer and unavoidable in 'real life'. Ideally, leakage inductance will be as low as possible.
+ +When the switch turns 'off', a high voltage would normally appear across the primary, but instead the energy is transferred to the secondary, and via D1 to the filter cap and load. In 'real' (as opposed to 'imaginary') SMPS, the pulse generator is controlled via feedback, so the pulse-width (MOSFET 'on' time) is just enough to supply the load current. In many cases, the lowest possible pulse width will still be too much with very low (or no) load current, so the controller will inhibit pulses for several cycles. This is commonly known as 'skip-cycle' operation. The output voltage ripple is usually higher than normal in this mode.
+ +Figure 1.2 shows the primary waveforms with annotations. The red trace is voltage, and the incoming DC is nominally 325V (rectified and smoothed 230V RMS). When the MOSFET turns on, the voltage falls to zero and current (green trace) starts to increase. The 'on' time is about 2.53µs, during which time the current rises to about 545mA. When the switch turns off, the voltage across the primary winding increases to 472V as the magnetic field collapses. The stored energy is dissipated via the secondary diode, as it charges the output filter capacitor and supplies the load current. Once the primary's stored energy is depleted the circuit is 'idle', and it will 'ring' at a frequency determined by the primary inductance and stray capacitance. A scope capture of the voltage from a real flyback SMPS is almost identical to that simulated (see below).
+ +The parameter shown as 'VOR' is sometimes referred to as 'reflected voltage', and it's the output voltage across the secondary, reflected back through the transformer and multiplied by the turns ratio. It's not my intention to go into detail about transformer design, but suffice to say that this is not a matter of guesswork, but is a highly refined process. IC manufacturers often provide dedicated software to assist with the design, or sometimes spreadsheets. The transformer is the most critical part of the design, and it's essential to get it right to enable maximum efficiency and low no-load losses.
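The reflected voltage is easy to estimate once the turns ratio is known: it's the secondary voltage (output plus diode drop) multiplied by the turns ratio, and it adds to the input voltage at the MOSFET drain during the 'off' time. A sketch using the figures quoted above - the turns ratio and diode drop are assumptions chosen to match the described waveforms:

```python
# 'Reflected voltage' (VOR): the secondary voltage (output plus diode
# drop) multiplied by the turns ratio. During tOFF it adds to the input
# voltage at the MOSFET drain.
V_IN = 325.0     # rectified mains, volts DC
V_OUT = 22.0     # DC output, volts
V_DIODE = 0.6    # assumed rectifier forward drop, volts
N_RATIO = 6.5    # primary:secondary turns ratio (assumption)

vor = (V_OUT + V_DIODE) * N_RATIO
v_drain = V_IN + vor    # drain voltage while the field collapses
print(f"VOR: {vor:.0f} V, drain voltage: {v_drain:.0f} V")   # ~147 V and ~472 V
```

Note how the 472V drain voltage matches the peak described for Fig 1.2, and why the MOSFET's voltage rating must allow for VOR plus the leakage-inductance spike on top of the rectified mains.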
+ +Calculating the peak primary current is not easy, but it's a function of time, inductance, applied voltage and circuit resistance. Most transformer design procedures will cover this in detail. I don't propose to go into this here, because as already noted I don't consider a DIY flyback SMPS to be a viable proposition. However, I will offer the following ...
ΔI/Δt = V / L

Where ΔI/Δt (aka dI/dt) is the change of current over time (amps per second)
V is voltage (volts)
L is inductance (henries)
If this is calculated for 325V and 1.5mH, the answer is a rather large 217kA/s (yes, that's kilo-amps). Since we have an 'on' time of 2.53µs, we simply divide by one million (10⁶) to get the current per microsecond, and multiply by 2.53 (µs). The final answer is 548mA peak, which agrees quite well with the simulation (the error is less than 1%). Because the resistance is so low (5Ω) it can be ignored for a current of less than ~500mA, but you may need to adjust the voltage to compensate if it's much higher. Remember that in normal use, the SMPS will be powered from the mains, which can vary by up to ±10% (207 to 253V RMS for nominal 230V mains).
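The arithmetic can be checked directly. A minimal sketch of the calculation (note that 2.53µs is the 'on' time that reproduces the quoted 548mA from the 217kA/s ramp rate; circuit resistance is ignored, as the text notes is reasonable here):

```python
# Current ramp rate in an inductor: dI/dt = V / L, and peak current is
# just that rate multiplied by the 'on' time (resistance ignored).
V_IN = 325.0     # volts across the primary while the switch is 'on'
L_PRI = 1.5e-3   # primary inductance, henries

ramp = V_IN / L_PRI   # amps per second
print(f"Ramp rate: {ramp / 1e3:.0f} kA/s")   # 217 kA/s

def i_peak(t_on_us):
    """Peak primary current (amps) for an 'on' time in microseconds."""
    return ramp * t_on_us * 1e-6

print(f"Peak at 2.53 us: {i_peak(2.53) * 1e3:.0f} mA")   # 548 mA
```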
+ +The peak current cannot be increased beyond that which causes the maximum recommended flux density for the core being used, as it will saturate. Once a transformer core is fully saturated, it effectively ceases to exist, and the current is limited only by the series resistance. Because there is a net DC in the primary waveform, transformers used for flyback SMPS always have an air-gap. This has to be calculated too, and it reduces the inductance (compared to the same number of turns with no air-gap). In most flyback designs, the duty-cycle (ratio of 'on' to 'off' time) is almost always less than 50%. The flux density in a flyback transformer increases with higher output current (and therefore a longer on-time). This is the opposite of a linear (mains frequency) transformer, where the flux density is reduced as output current increases. They are different in nearly all respects, and cannot be compared!
+ +The highest flux density will be at maximum output load and minimum input voltage, because the switch 'on' time is at its maximum. If you look at the specifications for wide-range (small) switchmode supplies, efficiency is generally higher with 230V AC input than with 120V input. The output voltage (without feedback) is directly proportional to the input voltage for the same on-off ratio. The feedback circuit changes the ratio in 'real time' to ensure the output voltage is stable.
+ +The secondary voltage and current are shown in Fig 1.3, and you can see that the peak reverse voltage is fairly high. For the simulated circuit shown in Fig 1.1, the DC output voltage is about 22V, and the diode's peak inverse voltage is over 90V (including the ringing 'spike'), both with a 47Ω load resistor (468mA DC output). The peak diode current is 3.5A, over seven times the average DC. It's easy to be caught out by the behaviour of the secondary circuit, because most initial assumptions will be wrong! Unless you've taken serious measurements you'll be completely unaware of the reality. I tested a 5V 1A SMPS (after repair because the output rectifier had failed), and found that the diode was subjected to a reverse voltage of 42V with 230V mains. That's more than eight times the output voltage.
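The 'over seven times' figure follows directly from the transformer action: in DCM the peak secondary current is the primary peak scaled by the turns ratio, while the average output current is far lower because the diode only conducts for part of each cycle. A sketch with the simulation's figures (the 6.5:1 ratio is an inference, not stated in the article):

```python
# Peak secondary (diode) current in a DCM flyback: the primary peak
# current scaled up by the turns ratio.
I_PK_PRI = 0.545   # primary peak current, amps
N_RATIO = 6.5      # primary:secondary turns ratio (assumption)
I_DC_OUT = 0.468   # average DC output current, amps (47 ohm load)

i_pk_sec = I_PK_PRI * N_RATIO
print(f"Peak diode current: {i_pk_sec:.1f} A")              # ~3.5 A
print(f"Peak/average ratio: {i_pk_sec / I_DC_OUT:.1f}:1")   # ~7.6:1
```

This is why the diode's peak (repetitive) current rating matters far more than its average rating, and why the output capacitor's ripple current rating is often the limiting factor in small flyback designs.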
+ +As for waveforms, the final one is a scope capture from a 24V, 4A flyback SMPS, operating with a 600mA load. The peak amplitude can be seen to be about 450V, and the MOSFET 'on' time is 2.5µs. Power is delivered to the load for 6.5µs, after which the transformer 'rings' for 6 cycles (no appreciable power is consumed, as evidenced by the slow decay of the ringing waveform). The switching frequency shown on the scope capture is correct, at 45.56kHz. In common with many SMPS ICs, the switching is frequency modulated. This is done to help ensure that the product will pass EMC tests.
+ +The standard method for capturing the RF energy is a 'quasi-peak' measurement, and when the frequency is changed that results in a lower overall level of RF emissions. A quasi-peak detector generates a higher voltage output when the event occurs more frequently, so by modulating the switching frequency, the repetition rate (at any particular frequency) is reduced. I do not intend to even try to cover the test methodology, but I will point out that there are two different measurements. Conducted emissions are those passed back into the mains wiring via the power lead, and radiated emissions are those detected with a specially designed receiver, and are radiated into free space. The Y1 (safety) capacitor bridges the input to the output, and without it few SMPS will pass radiated emissions tests.
+ +Note the short damped burst of RF at the peak voltage. I measured this, and it's at 5-6MHz, and is caused primarily by leakage inductance. The damped ringing between cycles is at 455kHz. There's also ringing at the output (at around 22MHz), and although it looks ugly it's not audible (yes, I connected the output to a speaker via a capacitor). However, in an audio application, SMPS noise can cause intermodulation products, although getting worthwhile info on that can be challenging.
+ +The following examples are selected to demonstrate some of the newer (some are not-so-new) flyback controller ICs. These are not a specific endorsement of the ICs featured, but are a reasonable representation of the devices (or device 'families') that are available. It would not be sensible to even try to cover all examples, as there are countless ICs from many different makers. Most require relatively few external parts, but the mains input EMI (electromagnetic interference) filter, bridge rectifier and main filter caps are common to all. Many higher output ICs include active PFC (power factor correction) functions as well, and the LT3798 (Linear Technology) and HVLED007 (ST Microelectronics) are two examples (the LT3798 is not shown here). Active PFC is becoming more important as energy usage increases worldwide - see Part 3 - Active Power Factor Correction.
+ +I've included five examples below, all of which are adapted from the applicable datasheet or application note. This gives you an idea of the range of devices used, but doesn't even scratch the surface - there are literally countless others, but they all operate in a similar fashion. Flyback has been the topology of choice for many, many years, particularly for low-medium power. This isn't expected to change any time soon. All examples are shown with voltage feedback, but a combination of voltage and current feedback is often used for LED lighting. The voltage feedback prevents the no-load voltage from destroying the output filter cap, and current feedback limits the current into a series string of LEDs.
+ +An FFT (fast Fourier transform) of the voltage waveform shown in Fig 1.2 shows the magnitude of the harmonics, extending to 16MHz. These harmonics continue up to at least 30MHz while still having 'significant' amplitude. This is the primary source of EMI, and to prevent radiation the PCB has to be very well designed. The switching device has to be close to the primary winding terminal, and in some cases it may even be necessary to ground the transformer core, and/or add an external shield. It's also essential to prevent conduction back into the mains, because the household mains wiring can make an excellent antenna, causing problems for other equipment. Both conducted and radiated emissions are tested when a supply is submitted for approvals testing.
+ +There is a frequency peak at every multiple of the switching frequency, so for the example shown (60kHz switching) there's a peak at every multiple of that. The second harmonic is at 120kHz, the third at 180kHz, continuing through to at least 30MHz, with smaller peaks well beyond that. The harmonics include even and odd-order, because the switching waveform is asymmetrical. Additional frequencies are also generated due to ringing, in both voltage and current waveforms.
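The presence of both even and odd harmonics can be shown from the Fourier series of a rectangular pulse: the magnitude of harmonic n is proportional to |sin(n·π·d)|/n, where d is the duty cycle. Only at exactly d = 0.5 do the even terms vanish. A short sketch with an illustrative duty cycle (a few microseconds 'on' out of a ~16.7µs period at 60kHz):

```python
import math

# Relative harmonic levels of a rectangular switching waveform with duty
# cycle d: |c_n| is proportional to |sin(n * pi * d)| / n. With d != 0.5
# both even and odd harmonics are non-zero, so emissions appear at every
# multiple of the switching frequency.
F_SW = 60e3   # switching frequency, Hz
DUTY = 0.15   # MOSFET 'on' ratio (illustrative assumption)

for n in range(1, 6):
    mag = abs(math.sin(n * math.pi * DUTY)) / n
    print(f"Harmonic {n} ({n * F_SW / 1e3:.0f} kHz): relative level {mag:.3f}")
```

Every harmonic from 60kHz upward has a non-zero level, and the ringing described earlier adds further energy at frequencies unrelated to the switching frequency.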
+ +In the examples below, input EMI filters have been included (mostly) as described in the datasheets. I have not included a MOV, although these are sometimes recommended. The MOV (if included) provides some protection against mains 'transients' that may damage the supply. In some cases, a TVS diode may be specified instead. In either case, the transient protection device must be located after the fuse/ fusible resistor as both types can fail short-circuit.
+ + +There are many common ICs used for small SMPS. One of these is the VIPer22 series, by ST Microelectronics. These integrate the oscillator, feedback regulation and high-voltage power switch in a single IC, and minimal external parts are needed for a basic supply. Parts that must be included are an input EMI filter, bridge rectifier, filter capacitor and a reference with an optocoupler. The reference can be a zener diode if regulation isn't particularly critical.
+ +The transformer will be a 'special', designed for a primary inductance of about 2.25mH, with low leakage inductance (no more than 22µH). The turns ratio to the secondary winding will be about 1:0.127 and 1:0.67 for the auxiliary winding. The maximum VDD voltage for the VIPer22 is 50V, and this must not be exceeded. The isolation barrier is created by the transformer and optocoupler, and is bridged by the Y-Class capacitor CY1 (4.7nF). No other capacitor type is permitted in this role. C1 must be an X-Class mains rated capacitor, and C2, C3 are rated at 400V. C4 (100pF) is a 1kV (minimum) ceramic.
+ +Note: There should be a resistor in parallel with the mains input to allow C1 to discharge when the PSU is unplugged from the mains outlet. This eliminates the possibility of the user receiving an electric shock if s/he touches the pins. Normally, it will be at least two high-value resistors in series to ensure that the resistor's voltage rating isn't exceeded. A total value of around 500-600k would normally be used. There's an IC called CAPZero (by Power Integrations) that's designed to connect the discharge resistors only when the AC voltage disappears, minimising wasted power. It may be less than 100mW, but even that may be considered 'excessive' in some jurisdictions. A discharge resistor should also be used with the next example.
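+ +The discharge time is simple RC arithmetic. The sketch below assumes a 330nF X-cap and a 560k total bleeder resistance (both illustrative values, not from any specific design):

```python
import math

C = 330e-9        # X-class cap across the mains, farads (assumed)
R = 560e3         # total bleeder resistance, ohms (e.g. two ~280k in series)
V_peak = 325.0    # worst case: unplugged at the 230V mains peak
V_safe = 60.0     # a commonly used 'safe to touch' DC threshold

tau = R * C                                # RC time constant, seconds
t_safe = tau * math.log(V_peak / V_safe)   # from V(t) = V_peak * exp(-t/tau)
print(f"tau = {tau*1e3:.0f} ms, pins safe in about {t_safe:.2f} s")

# Standing dissipation while plugged in - the loss that CAPZero avoids:
P = 230.0 ** 2 / R
print(f"bleeder dissipation ~{P*1e3:.0f} mW")
```

With these values the pins fall below 60V in roughly a third of a second, and the bleeder wastes a little under 100mW continuously, which is why 'zero-loss' discharge ICs exist.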
+ + +Another complete SMPS on a chip is the TinySwitch family from Power Integrations. These are very capable, and for low-power applications such as standby circuits, the auxiliary winding on the transformer can be omitted. This works up to around 1-2 watts output, but for any more the auxiliary winding is still needed.
+ +The transformer requirements are similar to those for the VIPer22. The application note doesn't provide any values, which isn't very helpful. Like the VIPer22, this IC minimises the external parts needed, but the general requirements are the same for any SMPS - full isolation from the mains voltage is essential, as is adequate RF filtering to ensure EMC compliance.
+ + +A higher power SMPS (40W for this example) is the 6-pin TO-220 FSDM0565RB from Fairchild (now OnSemi). Like the previous two, these are very capable, and I obtained a couple to repair the power supply of a laser printer (it turned out that the IC wasn't faulty after all, but the printer remained dead). There's the now familiar auxiliary winding on the transformer, and another secondary to provide the two output voltages. Only the 5V supply is regulated, and this is a common approach for multi-output SMPS.
+ +I've reproduced the drawing with the original part designators. Mostly, it's much the same as the other circuits, except the transformer will be significantly larger to accommodate the additional power. The SMPS IC would be attached to a heatsink, and it's a 'full-pack' design (fully encapsulated, including the reverse side) so an insulating washer isn't required, only thermal 'grease'. The output diodes also require a heatsink, as they will dissipate more than 2W each at full power. Overall there's a low parts-count for a 40W SMPS, which can be used for up to 60W in an 'open-frame' design (exposed heatsink) or 70W if it's only used with 230V mains.
+ +This example shows something quite different. Instead of using an optocoupler, the InnoSwitch3 ICs use an internal magnetic coupling system (a magnetically coupled set of coils, effectively a small transformer) to provide the regulation feedback. It also features a synchronous rectifier (Q1), a MOSFET used as an 'ideal' diode, which reduces rectification losses. A synchronous rectifier is not recommended for the 12V output.
+ +The version shown is only relatively low-power (13W) and it has a low parts count (according to the datasheet). Some may disagree, as it uses more parts than the Fig 4.1 version which also has higher power. However, it is designed to have the highest possible efficiency, and the datasheet claims that R4 can be adjusted to give the lowest no-load power consumption (only 5mW no-load at 230V AC input). The synchronous rectifier gate drive is triggered via R5, which senses when secondary current is available and turns on Q1 via the secondary-side controller.
+ +I've included this example because it shows just how important IC manufacturers (and industry in general) consider efficiency and no-load power to be. Once, no-one thought anything of a SMPS that drew 1W or so when idle, but government regulation and the cost to users of so-called 'phantom' power have demanded better performance. Achieving good results is now something of a 'contest', with each IC manufacturer trying to out-do its competitors for a slice of an ever-growing market.
+ + +In Section 1 I mentioned ICs with active PFC, with the one shown from ST Microelectronics. Based on the model number, it appears to be intended for LED lighting applications, where active PFC has become the de-facto standard. Light fittings are used in large numbers, and including power-factor correction is becoming a requirement for many lighting products. Note the very low value of the input capacitor. Where the other designs shown use between 10 to 100µF, in the following circuit it's only 120nF.
+ +The datasheet doesn't include any values, and those shown were obtained using ST's on-line design software. The software works out almost everything for you, including the transformer. It provides the required parameters that need to be used for the design. The software even calculates the expected efficiency with all losses accounted for. I've not specified the diodes, but they all must be high-speed types (other than the input bridge rectifier).
+
+Transformer Details
+
+    Parameter                             Value
+    Primary Inductance (Lp)               700 µH
+    Leakage Inductance (Lplk)             14 µH
+    Primary Saturation Current (Isat)     2.01 A
+    Primary RMS Current                   472 mA
+    Secondary 1 Turns Ratio (Np:Ns1)      10:1
+    Secondary 1 RMS Current               4.13 A
+    Auxiliary Turns Ratio (Np:Na)         8:1
+    Auxiliary RMS Current                 11 mA
+    Secondary 1 Voltage (Vsec1)           12.7 V
+    Auxiliary Voltage (Vaux)              15.7 V
This is enough information to allow one to calculate the transformer windings. These values were also obtained from the design software. The active PFC ensures that the current drawn from the mains is as close to sinusoidal as practicable, whereas all the other supplies shown do not include PFC, and draw a very unfriendly (to the grid) spike waveform. To understand how this ruins the power factor, see Part 3 - Active Power Factor Correction.
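+ +As a sanity check on the table's figures, the winding turns can be estimated from the primary inductance and the core's AL value (inductance per turn squared). The AL figure below is an assumption for illustration; the real value comes from the datasheet of the chosen core and gap:

```python
import math

A_L = 160e-9     # core constant, H per turn-squared (assumed; core-dependent)
L_p = 700e-6     # primary inductance from the design table, H

N_p = math.sqrt(L_p / A_L)   # turns = sqrt(L / AL)
N_s = N_p / 10               # Np:Ns1 = 10:1 from the table
N_a = N_p / 8                # Np:Na  = 8:1 from the table

print(round(N_p), round(N_s), round(N_a))   # primary, secondary, auxiliary turns
```

Note also that the quoted 14µH leakage is exactly 2% of the 700µH primary inductance, which is about as good as a production flyback transformer normally gets.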
+ +With active PFC, the feedback control loop has to be slow - note the high value of Cs used to stabilise the TL431. As a result, the output capacitance has to be much greater than with the other circuits shown. This means that for acceptable output voltage stability, the main output filter capacitance has to be very large, in this case 5 x 7,500µF (7.5mF) caps in parallel. This is an inevitable compromise when a single IC is used for PFC and power conversion.
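+ +As a rough illustration of why so much capacitance is needed, the 100Hz ripple across the bulk caps can be estimated. The load current below is an assumption based on the table's secondary RMS figure, and the formula is only an approximation:

```python
import math

I_out = 4.0            # output current, A (assumed, near the table's 4.13 A RMS)
f_ripple = 100.0       # twice the 50 Hz line frequency
C_out = 5 * 7500e-6    # five 7,500 uF caps in parallel, farads

# With single-stage PFC the input power pulsates at 100 Hz, so the capacitor
# current swings roughly +/- I_out; the peak-peak ripple is then approximately:
dv_pp = 2 * I_out / (2 * math.pi * f_ripple * C_out)
print(f"~{dv_pp*1e3:.0f} mV peak-peak on the output rail")
```

Even with 37.5mF of capacitance the ripple is a few hundred millivolts; with the 100µF or so used in the non-PFC circuits it would be measured in tens of volts, which is why the single-stage PFC approach demands such large caps.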
+ + +I don't know how many ICs are available for flyback SMPS, but there are a great many, by many different manufacturers. Attempting to research them all would be a massive task, and it's one I don't intend to undertake. Some are very similar across different vendors, others (in particular, those that have been in production for some time) may require much more circuit complexity. There are DIP and SMD versions of the same IC in many cases, with the DIP version almost always capable of higher output power due to a larger package and bigger pins to aid heat removal.
+ +There are also many higher power versions, often in a modified TO-220 package with five leads, or sometimes six. The range is bewildering, but many devices you come across are Chinese and there may be zero information available. Some will be copies of other devices, but it's almost impossible to guess which one. The TOPSwitch family of ICs from Power Integrations includes a 3-terminal TO-220 version that incorporates all power and control functions into a single IC (however, it's not recommended for new designs). I've not included one of these, but there's plenty of information on-line.
+ +Several flyback designs use the auxiliary winding for voltage sensing, eliminating the need for an optocoupler to provide feedback. This requires careful transformer design to ensure close coupling of the windings so that output voltage regulation isn't adversely affected. This may be referred to as 'PSR' - primary-side regulation. An example of this type of controller is the TI UCC28632, which uses an external MOSFET. There are so many different options for ICs, regulation system, output power (etc.) that it's hard to choose unless you have a particular preference for one reason or another.
+ +The number of ICs you can get will always be limited to those available from your preferred supplier(s). I generally don't recommend on-line 'auction' sites for buying ICs, as counterfeits are rife. If you're trying to repair an existing SMPS you may find that the IC is no longer available, or is proprietary. In some cases the part number will have been removed, or the only datasheet is in Chinese.
+ +The one shown above comes from a datasheet that is in Chinese, and it's one of the supplies that prompted the article on dangerous power supplies. There is no input filter, and the circuit is about as 'bare bones' as you can get. The one I have that uses the DK1203 IC omits both the input and output filters, but it has a somewhat more refined feedback network. What you see may well be what you get if you buy a cheap, unapproved SMPS on-line, and it may (or may not) have an improved regulation circuit. Interestingly, there appears to be no requirement for an auxiliary supply.
+ +You can be certain that as shown, the SMPS definitely won't have approval from any of the appropriate agencies, and it made it to #1 on my 'dangerous' list. The details are shown in the reference [ 7 ].
+ +Transformer design is well developed, and there are guides and design programs to assist with the process. Most are very detailed, and cover the magnetic, electrical and mechanical processes needed to produce a transformer that's safe and (hopefully) easy to wind. There's a lot of information about insulation and material ratings, choice of wire size and mounting arrangements. The insulation test voltage depends on the device class, either Class-I (basic insulation plus power earth/ ground) or Class-II (double/ reinforced insulated). Stand-alone supplies (plug-pack types) can only be Class-II, and must be insulated to very high standards.
+ +The transformer's core loss may be greater than expected because of the rectangular switching waveform. If the core loss is too high, the transformer will get hot. The same will happen if the wire size isn't sufficient for the current. At high frequencies, the skin-effect will cause large-diameter wire to have a higher effective resistance than at DC, and it's common (in high quality transformers) to use multiple strands of thin (insulated) wires rather than a single larger diameter conductor. This complicates the winding process.
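+ +The skin depth in copper is easily calculated, and shows why multi-strand wire is needed at switching frequencies. This is a sketch using the standard formula; at 60kHz, wire much thicker than about 0.5mm diameter (twice the skin depth) gains little from the extra copper:

```python
import math

# Skin depth: delta = sqrt(rho / (pi * f * mu)), with mu for copper ~= mu0.
RHO_CU = 1.68e-8            # copper resistivity, ohm-metres
MU0 = 4e-7 * math.pi        # permeability of free space, H/m

def skin_depth_mm(f_hz: float) -> float:
    """Skin depth in copper, in millimetres, at frequency f_hz."""
    return math.sqrt(RHO_CU / (math.pi * f_hz * MU0)) * 1e3

for f in (50.0, 60e3, 1e6):
    print(f"{f/1e3:g} kHz: {skin_depth_mm(f):.3f} mm")
```

At 50Hz the skin depth is over 9mm (so it's irrelevant for mains transformers), but at 60kHz it's only about 0.27mm, hence the multiple thin strands.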
+ +Care is needed with the transformer terminations, to ensure that acceptable separation between primary and secondary is maintained within the transformer. Expecting the wire's insulation to withstand the required test voltage (which is 4.5kV RMS or more) is not realistic. All lead-in/ lead-out wires need to be kept separated, and have additional insulation if required. The recommendations for the insulated wire ('magnet wire') vary, with triple-insulated wire recommended for maximum safety. The insulation tape also has to be chosen carefully. Most commercial transformers use Mylar (polyester), but the adhesive must be suitable for the job. Some adhesives may interact with the wire's insulation, although this is (probably) unlikely with modern insulated wire.
+ +Winding geometry dictates the inter-winding capacitance and leakage inductance, both of which should be as low as possible. The choice of insulation material depends on the expected maximum temperature. Excessive capacitance and/or leakage inductance causes high-voltage switching spikes and ringing, increasing EMI (electromagnetic interference). It's also important to minimise voltage stress between the ends of a winding where it covers two or more layers on the bobbin. The enamel insulation on 'magnet wire' cannot withstand excessive voltage, and you may also get a corona discharge, which will degrade the insulation.
+ +The critical voltage for a corona discharge is around 3kV/mm, so if two wires are 0.1mm apart, the voltage needed is only 300V. This is something that isn't covered in detail anywhere other than in documents protected by a 'pay wall', so I've not been able to access them. In any insulation system, corona discharges can occur at well below the breakdown voltage. These lead to the deterioration and eventual destruction of nearly all insulating materials. Vacuum impregnation is one way to minimise the likelihood of corona (and other) discharges. The transformer's insulation is the most important isolation barrier in the power supply, so it has to be right.
+ +In all cases, creepage and clearance distances must be maintained for the appropriate level of protection. In a PSU that's not enclosed you also have to consider the pollution degree, as any dust or other material that collects on the transformer, PCB or other components may become conductive with high humidity. There's a great deal to consider, most of which takes a lot of research to find.
+ + +For any project you may be contemplating, a far safer and more reliable solution is recommended. This involves using a commercial (and fully approved) SMPS and removing the PCB from the original 'plug-pack' housing. It's then installed inside a small 'jiffy box' or equivalent to ensure that live parts are not accessible when installed in the equipment chassis.
+ +Fig 8.1/2 are my preferred methods, and I've used these supplies in a few of my own projects. The PCB for Fig 8.1 was 'liberated' from a commercial (and approved) plug-pack supply, and it was mounted onto a piece of acrylic. The second supply was a small 'in-line' type, with a 2-pin IEC mains input socket and a flying lead for the DC output. Both examples can be installed in a small plastic 'utility' box, which protects against accidental contact. The total cost will generally be lower than the cost of building one, and its safety is as assured as is possible for this class of device. The Fig 8.2 supply is higher power, but it will still fit inside a readily available utility box.
+ +I've also recommended this approach in several projects, as it's a lower cost (plus smaller and more efficient) alternative to a 'traditional' linear supply. However, a linear supply will have a much longer life (50 years and more is not uncommon), where the SMPS has an indeterminate life - it will work until it doesn't. I have switchmode supplies that are well over 10 years old and still work fine, but I've also had a couple that lasted for less than two years (one much less).
+ +One thing that's obvious on both is the input filter, necessary to ensure that EMC requirements are met. These two are well made, and appear to have a generous safety barrier between hazardous and 'safe' voltages. While I can't run tests for isolation (other than using my 1kV insulation test meter), I am confident that both supplies are compliant with all regulations. Both had all of the required safety certifications printed on the original (discarded) enclosures.
+ +The 12V, 1A SMPS above looks like it was designed for the box it's in. The box is 85 x 38mm, and about 24mm deep. The PCB fits inside perfectly, and it has separated 'ports' for the incoming mains, and a single outlet for the DC wires. This can be installed in a chassis to provide a safe 12V supply for anything that needs it. The supply itself is from a well known reputable supplier, and was liberated from its original plug-pack enclosure. This is as safe as you're likely to find anywhere, and it's small enough to fit into any chassis I'm likely to use it in.
+ +Another alternative is the Hi-Link HLK-PMxx - a small modular supply measuring only 34 × 20.2 × 15mm high. These are 94-265V AC input (50/ 60Hz) and are available with an output voltage of 3.3, 5, 12 and 24V, with a maximum power of 3W. I've not used one of these, but they are low-cost (some are under AU$7.00 depending on seller) and smaller than any other option. They don't have any safety agency approvals though, which is cause for some concern. The only sources I've found are eBay or Amazon. The datasheets claim compliance with major safety regulations, but the products have no printed compliance markings (as required in most jurisdictions).
+ + +A switchmode supply provides many exciting ways to cause frustration, angst, poor performance and (of course) smoke. The referenced document [ 6 ] has all the details, but a quick summary is worthwhile.
+ +If the IC has controllable soft-start circuitry, it has to be programmed properly for the transformer used. If the setting is wrong you will experience problems. The exact nature of the issues you may face can't be determined beforehand, as they have too many dependencies. Some of the possible failure modes may not show up initially, or may be affected by the load.
+ +Fault protection circuits can be troublesome if the PCB layout isn't ideal, because even tiny amounts of stray inductance and/or capacitance due to PCB traces can cause havoc. The clamp shown in each circuit (a resistor and capacitor in parallel, fed by a diode, commonly called an RCD clamp) reduces efficiency, but it's required due to the inevitable leakage inductance in the transformer. However, attempting to reduce the leakage inductance will add considerable time and cost, and expecting to get below 2% of the primary inductance is probably unrealistic. Note that some circuits specify a TVS (transient voltage suppressor) diode in place of the parallel resistor and capacitor.
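+ +Sizing the clamp is straightforward arithmetic. The sketch below uses commonly published app-note formulas; all of the numbers (leakage, peak current, clamp and reflected voltages) are assumptions for illustration, not values from any specific design:

```python
L_lk = 14e-6     # leakage inductance, H (assumed, ~2% of a 700 uH primary)
I_pk = 2.0       # peak primary current, A (assumed)
f_sw = 60e3      # switching frequency, Hz
V_or = 132.0     # reflected output voltage, V (assumed, 10:1 ratio)
V_cl = 250.0     # chosen clamp voltage; must stay below the MOSFET rating

# The leakage energy (0.5*L*I^2) is dumped into the clamp every cycle; the
# Vcl/(Vcl-Vor) factor accounts for current still flowing while the clamp
# diode conducts.
P_cl = 0.5 * L_lk * I_pk ** 2 * f_sw * V_cl / (V_cl - V_or)
R_cl = V_cl ** 2 / P_cl                # resistor that burns that power at Vcl
C_cl = V_cl / (10.0 * R_cl * f_sw)     # for roughly 10 V ripple on the clamp cap

print(f"P = {P_cl:.1f} W, R = {R_cl/1e3:.0f}k, C = {C_cl*1e9:.0f} nF")
```

Even with only 2% leakage the clamp dissipates several watts in this example, which is exactly the efficiency loss referred to above, and why a lower clamp voltage (closer to the reflected voltage) makes things worse, not better.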
+ +There may also be issues with audible noise, which can be created by the transformer or filter inductors. Vacuum impregnation will keep most of these parts quiet, but adds cost. Also, beware of 'High-K' ceramic capacitors, which can act as tiny piezo 'speakers' if the AC voltage across them is too great. Larger capacitors are more likely to make noise, either by themselves or by flexing the PCB.
+ +Not using a large enough input filter capacitor can lead to ripple breakthrough at maximum output current, and the same will happen as it degrades over its life. The former is easy - use a bigger cap, but the only way to minimise degradation is to ensure minimal losses throughout the system to keep the temperature rise as low as possible. Use 105°C caps whenever possible. Reducing the temperature of electronic components will generally double their life for every 10°C temperature reduction. Also remember that aluminium electrolytic capacitors are notoriously unreliable when used at their maximum rated temperature, ripple current and/or voltage. Using an input capacitor that's too big may cause problems due to inrush current.
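+ +The '10°C per doubling' rule is easy to quantify. The sketch below assumes a hypothetical 2,000 hour, 105°C rated part (both figures are illustrative; use the real ratings from the capacitor's datasheet):

```python
def cap_life_hours(rated_hours: float, t_rated: float, t_actual: float) -> float:
    """Rule-of-thumb electrolytic life: doubles for every 10 degC below rating."""
    return rated_hours * 2 ** ((t_rated - t_actual) / 10)

# A hypothetical 2,000 h / 105 degC part at various working temperatures:
for t in (105, 85, 65, 45):
    print(f"{t} degC: ~{cap_life_hours(2000, 105, t):,.0f} h")
```

At 65°C the same part is good for around 32,000 hours (roughly 3.7 years of continuous operation), which is why keeping the internal temperature down matters far more than the capacitor's brand.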
+ +Optocouplers are usually considered (by the uninitiated) to be ultra-reliable, but this isn't always true. The LED is subject to 'lumen depreciation' over its life, and the feedback circuit must be able to provide enough LED current to maintain regulation for the life of the product. If the regulation circuit can't supply enough current, the output voltage will rise, either causing an over-voltage shutdown or damaging connected equipment. The optocoupler's CTR (current transfer ratio) is also temperature dependent, and it falls with increasing temperature.
+ +When you buy an approved SMPS, most of the issues mentioned above will have been solved already, at least to a degree that is 'satisfactory'. Expecting perfection is unrealistic, but most of the approved SMPS I've used are 'good enough'. I've had failures (haven't we all?), but most of those I have in my collection have been ok. This mainly excludes cheap 'no-name' types with no approvals, although (surprisingly) some are not too bad.
+ +If you try to build your own, you'll have to do a '1-off' design, debug the circuitry (assuming you can get/ wind a transformer to an acceptable standard), perform perhaps several PCB re-designs and source all of the parts needed. If you imagine for an instant that you'll save money you're mistaken. You will get experience of course, but you're working with live mains and limited test equipment compared to that used by professional SMPS designers. Since you can't get the required approvals (safety and EMC), you'll find that you can't sell it lawfully, and it may even be an offence to give it away!
+ + +There are several DIY flyback SMPS that use 1/2 wave rectifiers, presumably to save a couple of cents on the total cost. This is a really bad idea, and isn't recommended for any power level greater than perhaps 0.25W (that's only 50mA at 5V, or 21mA at 12V). Mostly, such low power levels aren't useful other than for a dedicated standby power supply that only has to provide a few milliamps to keep the equipment 'alive' (e.g. to power an IR receiver so the equipment can be turned on via a remote control). Once turned on, the low-power supply isn't needed. Even then, using a bridge rectifier is always a better idea (I really dislike 1/2 wave rectifiers in anything).
+ +Something that should be considered is audible noise. At low power, most of the ICs available operate in 'skip-cycle' mode, where the switching shuts down for several cycles and the output voltage is usually somewhat unstable. This may or may not be a problem for the equipment, but one hopes that the supply doesn't make audible noise. A vacuum impregnated transformer usually ensures no (or minimal) audible noise, and some otherwise very ordinary transformers are completely silent. You won't know until you test it for yourself.
+ +You may have noticed that there are more similarities than differences in the circuits shown. This is inevitable, because the flyback topology doesn't really lend itself to many significant variations. The essential parts will always be similar, but that doesn't mean that you can use (for example) any IC with any transformer. There are requirements for operating frequency, peak primary current and rated power level. While it can seem that the design process is 'trivial', that's not the case at all. Stabilising the feedback loop can be very challenging in some cases.
+ +Much as I'd like to be able to produce a project for a flyback AC/DC converter, I won't do so for many reasons. The first of these is (of course) safety, since making a transformer that's guaranteed to be safe isn't possible because it's home-made. Not everyone can ensure that everything is done perfectly, and I can't even test a prototype to the required standards. Even getting the correct core and bobbin can be challenging, as availability is somewhat variable, depending on your supplier. Because of electrical safety concerns, the circuit would need a PCB, and that's something I'm unwilling to undertake because it promotes the idea of a DIY version.
+ +The alternative to a DIY transformer is to specify a ready-made transformer, but if the constructor can't get it, then the project is over before it even starts. Even ICs can be difficult, as not all suppliers have the same product range, and component shortages make that worse. There are also input and output filters, which require specific parts to perform as intended. I can't do EMC testing, and EMI depends on the construction, so there's no way of knowing if the circuit will kill AM radio reception for you or your neighbours, or interfere with other radio-frequency devices.
+ +Should you decide that you really want to build your own flyback SMPS, you must have 'test points' to allow you to measure as much as you can. That means being able to connect an oscilloscope to all of the important circuit nodes. The supply will almost certainly not be safe to use (especially with no protective earth/ ground), but you will be able to observe and measure its behaviour. During your testing the supply may fail, and unless you get everything right it may even blow up when power is first applied. Rather than being powered from the mains, I suggest that you have an isolated power supply that can deliver up to 150V DC or so at no more than 100mA, so you can run tests safely. You can make adjustments to your design to suit the lower voltage.
+ +The only 'safe' way to work on any mains powered supply is to use an isolation transformer. That means there is no common (neutral) conductor, so contact with one mains lead won't kill you. Contact with both mains leads may be lethal, so the 'protection' provided is totally reliant on your vigilance. Should you get an electric shock, the (IMO mandatory) safety switch won't work, and it could be the last electric shock you ever receive!
+ +While this article looks a bit like a collection of projects, it's not! Ideally, it should be looked at as education, but with no intention to construct any of the circuits shown. These are adapted from manufacturer's datasheets, using the values suggested. I have (deliberately) not included voltage ratings for capacitors or power ratings for resistors, because I really don't want anyone to think that building a flyback SMPS is a good idea.
+ +There are now several approaches to obtaining very low no-load power consumption, but you probably won't be in a position to make use of the ICs used because they are somewhat specialised. A reasonably good SMPS should have no more than ~150mW no-load power draw from the mains, and even this is difficult to achieve. Getting this down to 5mW is possible, using dedicated ICs available from some suppliers. Unless you get everything just right, your supply will likely draw 500mW or more with no load. The SMPS shown in Fig. 8.3 draws 90mW with no load, which is a good result.
+ +After (or before) reading this, I recommend that you also read Reference #7, which covers the hazards in some detail. It also explains why buying from sellers that distribute cheap products directly from Asian manufacturers is a really bad idea.
+ +